The cybersecurity world is waking up to a sobering truth: while AI deployment races ahead, AI security crawls behind. SandboxAQ's newly released 2025 AI Security Benchmark Report reveals a wide and dangerous gap between enterprises' enthusiasm for artificial intelligence and their readiness to secure it.
"This isn't just a solution gap, it's a conceptual one," said Marc Manzano, General Manager of the Cybersecurity Group at SandboxAQ. "AI is radically changing the cybersecurity paradigm at an unprecedented speed."
AI is already here and security isn't, according to key findings in the report:
- 79% of organizations are using AI in production environments.
- Only 6% have deployed comprehensive, AI-native security protections across IT and AI systems.
- Just 10% have a dedicated AI security team.
- Only 28% have conducted a full AI-specific security risk assessment.
Traditional security teams are being asked to defend machine-speed, logic-bending systems with tooling built for rule-based, human-led environments—and it's not working.
The report spotlights the emerging threat posed by non-human identities (NHIs)—autonomous AI agents, APIs, machine accounts, and services operating independently with access to sensitive systems and cryptographic credentials. Most organizations lack visibility, governance, or access control over these entities, undermining Zero Trust principles and cryptographic hygiene.
"These systems often operate without human oversight… exchanging credentials and accessing sensitive resources," the report warns.
This aligns with findings in the companion Cybersecurity Market Report by SandboxAQ, which forecasts that AI-native threats targeting NHIs will drive demand for post-quantum cryptography, AI identity governance, and automated remediation tools in 2025 and beyond.
Despite today's risk gaps, 85% of organizations plan to increase AI security budgets in the next 12–24 months. Top areas of focus include:
- Protecting AI training data and inference pipelines
- Securing NHIs and embedded ML systems
- Deploying automated incident response tailored for AI-driven infrastructure
"Most organizations aren't measuring AI security in any meaningful way because the foundations just aren't there yet. In the report, fewer than 30% of security leaders said they've assessed the risk of their AI deployments. Only 10% said they had a dedicated AI security team, and just 6% said they've implemented any kind of AI security controls. That tells me there's still a fundamental and critical gap between AI adoption and security readiness," Manzano said.
"What we're starting to see from more mature teams is a shift away from trying to retrofit legacy controls," Manzano added. "Instead, these teams are taking first steps toward evaluating risks that actually reflect how AI systems behave in production and the blast radius that such systems can have if breached or went rogue. That includes implementing observability and monitoring capabilities of non-human identities and cryptographic assets leveraged by AI workflows."
Manzano continued, "In practice, what we are seeing is that some large organizations are actually downgrading the data security policies they have put in place over the past decade so that they can enable AI use cases that require large amounts of information to function. This phenomenon is not isolated, and it deeply troubles me. We cybersecurity professionals have a pivotal responsibility and need to step up and build cybersecurity solutions that can keep pace with fast AI adoption. This is just getting started, and I believe we are still in time to catch up, but we don't have time to lose."
There are some basic implications for practitioners, including:
- Treating AI as a new attack surface. Threat modeling should include inference APIs, training pipelines, and LLM agents.
- Mapping and monitoring NHIs. Every AI agent, API service, and machine credential must be inventoried, governed, and monitored like a privileged identity (a minimal inventory sketch follows this list).
- Auditing AI systems independently. Use AI-specific threat assessments, not just generic pen tests or vulnerability scans.
- Planning for quantum-resilient security. The market report underscores an urgent need to replace legacy cryptography in anticipation of future AI-accelerated quantum attacks.
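To make the NHI bullet concrete, here is a minimal inventory sketch in Python, assuming an AWS estate and the boto3 SDK; the 90-day staleness threshold and the report format are illustrative assumptions, not recommendations from either report.

```python
"""Minimal sketch: inventory machine credentials as if they were
privileged identities. Assumes AWS and boto3; thresholds and output
are illustrative only."""
from datetime import datetime, timezone

import boto3

STALE_DAYS = 90  # hypothetical rotation threshold, not from the report

iam = boto3.client("iam")
now = datetime.now(timezone.utc)

# Walk every IAM user and report the age and last use of each access key,
# applying the same hygiene expected of human privileged accounts.
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])
        for key in keys["AccessKeyMetadata"]:
            age_days = (now - key["CreateDate"]).days
            last = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
            last_used = last["AccessKeyLastUsed"].get("LastUsedDate")
            flag = "STALE" if age_days > STALE_DAYS else "ok"
            print(f"{user['UserName']}: key {key['AccessKeyId']} "
                  f"age={age_days}d last_used={last_used} [{flag}]")
```

The same key-age and last-used discipline applies to any credential store; AWS IAM is simply a common starting point.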
RELATED: Entro Security has released its H1 2025 NHI & Secrets Risk Report, revealing a sharp rise in unmanaged machine identities and widespread credential exposure across modern enterprise environments.
The report shows that non-human identities continue to outpace human accounts, with the NHI-to-human ratio growing more than 56% in a single year, from an average of 92:1 to 144:1. As the number of NHIs skyrockets, driven by AI agents, automation, and CI/CD pipelines, so does the blast radius of leaked secrets, many of which turn up in places security teams aren't even scanning.
Highlights from the Entro Security report:
- 44% growth in NHIs YoY. Entro Labs attributes this growth to the adoption of agentic AI and automation-first development practices.
- Nearly half of all exposed secrets are found outside of code, in workflows, messaging app channels, and other collaboration tools like Confluence.
- The most exposed secret type is the Slack token. Slack bots are often wired into security systems, alerting tools, and internal workflows, making their tokens easy to generate and just as easy to expose.
- 7.5% of NHIs live 5–10 years, with some exceeding a decade. These identities often outlive their intended function and their human owners.
- One in 20 AWS machine identities carries full-admin privileges, making these accounts critical risk multipliers (a minimal audit sketch follows this list).
- 8.7% of NHIs are overprivileged and idle, meaning they have access and permissions to services and actions that they rarely or never interact with.
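The full-admin finding is easy to spot-check. Below is a hedged sketch, assuming AWS and the boto3 SDK, that flags IAM roles with the AWS-managed AdministratorAccess policy attached; a real audit would also inspect inline and customer-managed policies, plus IAM users and instance profiles.

```python
"""Sketch: flag machine identities carrying full-admin privileges.
Assumes AWS and boto3; checks only the managed AdministratorAccess
policy, so inline and customer-managed policies still need review."""
import boto3

# AWS-managed policy granting full administrative access.
ADMIN_ARN = "arn:aws:iam::aws:policy/AdministratorAccess"

iam = boto3.client("iam")

admin_roles = []
for page in iam.get_paginator("list_roles").paginate():
    for role in page["Roles"]:
        attached = iam.list_attached_role_policies(RoleName=role["RoleName"])
        if any(p["PolicyArn"] == ADMIN_ARN for p in attached["AttachedPolicies"]):
            admin_roles.append(role["RoleName"])

print(f"{len(admin_roles)} role(s) with AdministratorAccess attached:")
for name in admin_roles:
    print(f"  - {name}")
```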
"Non-Human Identities aren't a new or emerging risk—they've been at the center of some of the most high-profile breaches in recent memory. From SolarWinds to CodeCov to CircleCI, attackers have repeatedly exploited poorly managed service accounts, tokens, and secrets to gain deep, undetected access," said Shane Barney, CISO at Keeper Security. "That's what makes this report so frustrating. Despite years of clear warnings and real-world consequences, many organizations still lack basic visibility and control over their non-human credentials. It's not that the risk is misunderstood, it's that it's being deprioritized. This should be a wake-up call.
"Protecting NHIs starts with applying the same controls used for human users, including managing access through least privilege, automating credential rotation, and auditing usage regularly. Secrets management tools and Privileged Access Management (PAM) platforms are critically important tools to achieve this—providing centralized control, automatic rotation, and fine-grained access policies to prevent credentials from being lost, misused, or exposed."
"There’s a reason why cyber insurance underwriters ask about the number of service accounts, and the scope of permissions for them. They're usually very long-lived, poorly monitored, and often have excessive permissions," said Trey Ford, CISO at Bugcrowd. "There is a 'did-it-work' bias for technical work; the fastest and easiest way to set up an account when troubleshooting or operating under time pressure is an account with overly broad permissions. Best practices dictate following up to narrow to appropriate permissions once troubleshooting or initial setup is complete, and this is regularly missed."
Ford added, "Similar to the age-old problem of hard-coded secrets (passwords/passkeys), NHI secrets are hard to manage. Centralizing their inventory, rotation, and monitoring sounds like a great idea, but it's harder to implement, hence all of the research and innovation in this space."
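Ford's inventory problem starts with detection, and Entro's finding that nearly half of exposed secrets live outside code suggests scanning more than repositories. The sketch below is a deliberately small illustration: two well-known token patterns (AWS access key IDs and Slack tokens) swept across arbitrary files such as CI workflows or wiki exports. Production scanners ship far larger signature sets plus entropy analysis.

```python
"""Sketch: scan exported workflow and wiki files for secret-shaped
strings. The two patterns are illustrative, not exhaustive."""
import re
from pathlib import Path

PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "slack_token": re.compile(r"\bxox[baprs]-[0-9A-Za-z-]{10,}\b"),
}

def scan(root: str) -> None:
    # Sweep non-code artifacts too: CI workflows, chat exports, wiki dumps.
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                # Truncate the match so the scan never re-exposes a secret.
                print(f"{path}: possible {name}: {match.group()[:12]}...")

scan(".")  # e.g. point at a Confluence export or a CI workflow directory
```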