99% of Organizations Expose Sensitive Data to AI Tools, Report Shows
Fri | May 23, 2025 | 9:19 AM PDT

As organizations race to adopt artificial intelligence tools to boost productivity, automate tasks, and gain competitive insights, many are unknowingly exposing their most sensitive data in the process. From generative AI models embedded in everyday apps to custom-built internal tools, the rapid spread of AI is outpacing the ability of security teams to maintain proper controls.

The explosion of AI-powered capabilities has introduced new "shadow AI" tools and services deployed without formal approval or oversight. Unlike traditional shadow IT, shadow AI can process, store, and even redistribute data unpredictably, raising serious compliance and security concerns.

Now, a new report from data security firm Varonis quantifies the scale of the problem, and the numbers are sobering. According to the State of Data Security Report: Quantifying AI's Impact on Data Risk, 99% of organizations analyzed had sensitive data exposed to AI tools due to misconfigurations, permissive access controls, and a lack of visibility across cloud environments.

Shadow AI and a widening visibility gap

The risk posed by shadow AI is a consistent theme across industry research. Varonis found that 98% of organizations had unverified applications with AI capabilities running across their environments.

"Shadow AI has introduced greater risk with security teams progressively seeking ways to lock down their data," said Nicole Carignan, SVP of Security and AI Strategy at Darktrace. "In addition to managing in-house AI tools, security teams now face an upsurge in external tools with embedded AI features—and that's before we even account for shadow AI."

Carignan stressed the importance of AI asset discovery to identify and monitor the presence of AI tools across the enterprise. "CIOs and CISOs must dig deep into AI security solutions—asking comprehensive questions about data access and visibility," she added.
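
To make the discovery step Carignan describes concrete, here is a minimal sketch of one way to surface shadow AI from egress proxy logs: match outbound destinations against a list of known GenAI API domains. The CSV column names and the domain list are illustrative assumptions, not any vendor's method; a production tool would cover far more services, log sources, and evasion cases.

```python
import csv
from collections import defaultdict

# Illustrative, deliberately incomplete list of well-known GenAI API domains.
AI_SERVICE_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def find_shadow_ai(proxy_log_path: str) -> dict:
    """Scan an egress proxy log (assumed CSV columns: timestamp, host, user,
    dest_domain) and map each AI domain to the internal host/user pairs calling it."""
    hits = defaultdict(set)
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            dest = row["dest_domain"].strip().lower()
            # Match the domain itself or any of its subdomains.
            if any(dest == d or dest.endswith("." + d) for d in AI_SERVICE_DOMAINS):
                hits[dest].add((row["host"], row["user"]))
    return hits

if __name__ == "__main__":
    for domain, callers in sorted(find_shadow_ai("egress_proxy.csv").items()):
        print(f"{domain}: {len(callers)} unique host/user pairs")
```

Even a crude inventory like this gives security teams the starting point Carignan calls for: a list of which hosts and users are actually reaching AI services, so access and visibility questions can be asked about real traffic rather than guesses.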

A personnel and prioritization crisis

Even as awareness of AI-related risks increases, many organizations feel overwhelmed by the volume and pace of change.

"There's a lot that needs to be done, and organizations are struggling to prioritize," said Satyam Sinha, CEO of Acuvity, which focuses on runtime GenAI security. "Personnel seems to be a key inhibitor, and this pain will only grow."

Sinha advocates for AI-native security tools rather than retrofitted solutions, arguing that they can scale with enterprise workloads and give limited cybersecurity teams a "multiplier effect." He also sees a significant opportunity for companies to close their AI knowledge gap through training and certifications tailored to cybersecurity personnel.

The mobile threat: AI outside the perimeter

The problem becomes even more complex in the mobile ecosystem, where traditional network protections often fall short.

"Shadow AI isn't limited to desktops or sanctioned enterprise apps," warned Krishna Vishnubhotla, VP of Product Strategy at Zimperium. "Unvetted mobile apps with embedded AI components create blind spots for security teams and can easily process or leak sensitive data."

Vishnubhotla highlighted the unique risks of mobile environments, where AI-powered apps may operate undetected, often bypassing outdated security policies. "Behavioral analysis powered by AI can help agencies identify unauthorized AI apps and enforce policies in real time," he said.
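
Vishnubhotla's point about behavioral analysis can be illustrated with a toy baseline check: flag hosts whose volume of calls to AI endpoints suddenly spikes relative to their own history. This is a sketch under strong assumptions (simple daily counts, a z-score threshold); commercial mobile-security products model far richer on-device signals.

```python
import statistics

def flag_anomalous_hosts(daily_counts: dict[str, list[int]], z_threshold: float = 3.0) -> list[str]:
    """Flag hosts whose latest daily count of AI-endpoint calls deviates
    sharply from their own historical baseline (illustrative only)."""
    flagged = []
    for host, counts in daily_counts.items():
        history, latest = counts[:-1], counts[-1]
        if len(history) < 2:
            continue  # not enough history to build a baseline
        mean = statistics.mean(history)
        stdev = statistics.stdev(history) or 1.0  # avoid divide-by-zero on flat baselines
        if (latest - mean) / stdev > z_threshold:
            flagged.append(host)
    return flagged

if __name__ == "__main__":
    counts = {"laptop-042": [2, 3, 1, 2, 40], "laptop-007": [5, 4, 6, 5, 5]}
    print(flag_anomalous_hosts(counts))  # -> ['laptop-042']
```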

What's next for CISOs?

The report and the expert commentary underscore a growing imperative for CISOs and IT leaders: close the AI visibility gap before compliance violations or breaches occur.

Recommendations include:

  • Deploying AI-native security tools designed for cloud and hybrid environments

  • Implementing AI asset discovery and real-time monitoring

  • Enforcing policy controls on AI use, especially for mobile environments (a minimal screening sketch follows this list)

  • Educating employees on the risks of shadow AI and the importance of vetted tools
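
As a concrete example of the policy-controls bullet above, here is a minimal sketch of a pre-send screen that flags prompts containing obviously sensitive strings before they reach an unsanctioned AI tool. The patterns and the blocking flow are illustrative assumptions; real DLP engines use far broader and more reliable detection.

```python
import re

# Hypothetical patterns for data that should never reach an unsanctioned AI tool.
SENSITIVE_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in an outbound AI prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    findings = screen_prompt("Summarize this config: AKIA1234567890ABCDEF ...")
    if findings:
        print("Blocked: prompt contains", ", ".join(findings))
    else:
        print("Prompt allowed")
```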

As regulatory scrutiny around AI intensifies globally, failure to address these gaps could lead to costly breaches, loss of intellectual property, and severe fines.

"AI is not inherently the enemy," said Carignan. "But unchecked AI use, combined with poor data governance, absolutely is."
