The BreachLock 2025 Penetration Testing Intelligence Report provides a rare, data-driven view into how attackers are exploiting modern enterprise environments, distilled from more than 4,200 global penetration tests conducted across industries in the past year. Anchored in real-world threat intelligence and mapped to MITRE ATT&CK and OWASP Top 10 frameworks, the report offers both a macro-level risk picture and sector-specific insights to inform proactive defense strategies.
"Cybersecurity is no longer about reacting to yesterday's threats. It's about preparing for tomorrow's," notes Seemant Sehgal, Founder & CEO of BreachLock.
BreachLock's analysis confirms that attack surfaces are growing faster than traditional security controls can adapt. The emergence of agentic AI and automated "vibe coding" is accelerating software deployment cycles—often without corresponding security hardening.
Across all engagements, 45% of findings were rated Critical or High severity, with top issues including:
-
Broken access control – Present in 32% of high-severity findings, enabling unauthorized access and privilege escalation
-
Security misconfiguration – Found in 52% of tested systems, particularly in cloud and hybrid environments
-
Injection attacks – SQL, NoSQL, and command injections remain prevalent in legacy apps and poorly validated APIs.
The median time to achieve lateral movement during red team simulations was just 2.5 hours, underscoring the need for rapid detection and response capabilities.
The report details sector-specific threat patterns:
-
Technology & SaaS providers – Saw a 400% YoY spike in critical API vulnerabilities, often linked to poor access control and insecure multi-tenant logic.
-
Banking & financial services – Regulatory frameworks like NIS2 and DORA are driving more frequent and risk-based pentesting, with broken access control, injection flaws, and security misconfigurations as leading issues.
-
Retail & consumer goods – 68% of APIs tested had misconfigured authorizations or excessive data exposure. Ransomware incidents affected 45% of organizations in this sector.
-
Healthcare – Broken access control (22%), security misconfiguration (17%), and cryptographic failures (14%) dominate, often tied to legacy systems and insecure IoT medical devices.
-
Energy & utilities – Medium and high-severity OT/IT convergence risks persist, with legacy SCADA and ICS components a frequent target.
In a first for the series, the 2025 report includes large language model (LLM) security testing results, identifying key risks such as:
-
Prompt injection attacks
-
Data leakage
-
Model poisoning
-
Weak API authentication
-
Insecure logging
"While LLMs are getting better at generating syntactically correct and functional code, their ability to produce secure code has not shown meaningful improvement over time. A recent report reveals that, in 45% of cases, AI-generated code introduces a known OWASP Top 10 vulnerability. These issues persist even as newer and larger LLMs are made available," said Mike McGuire, Senior Security Solutions Manager at Black Duck. "To secure AI-generated code, teams need to adopt a security-first development approach that is both proactive and reactive. Application security without compromise is essential."
McQuire continued, "Automated AppSec testing should be integrated to run early and often, including SAST, SCA, and DAST scans to evaluate AI-generated code before it merges into production branches. Where possible, teams should use coding assistants that are trained or tuned for secure coding practices and avoid relying solely on generic AI models that lack security awareness. Developers should be trained on secure prompting; prompts can be phrased to guide models toward safer implementations (i.e., 'use parameterized queries' or 'sanitize user input')."
These risks map closely to the OWASP Top 10, illustrating that traditional web security vulnerabilities have direct analogs in AI systems.
The top exploited techniques in 2025 included:
-
Exploit public-facing applications (T1190) – 15%
-
OS credential dumping (T1003) – 12%
-
Command and scripting interpreter (T1059) – 11%
-
Account discovery (T1087) – 11%
-
Valid accounts (T1078) – 10%
This alignment allows defenders to prioritize mitigations based on adversary behaviors most likely to target their environments.
BreachLock emphasizes several strategic imperatives for 2025:
-
Adopt continuous security validation – Move beyond periodic pentests to ongoing, intelligence-driven assessments.
-
Prioritize API & identity security – Enforce strict access controls, secure session management, and dedicated API testing.
-
Integrate security into DevOps – Embed secure coding, threat modeling, and automated checks early in the CI/CD process.
-
Leverage MITRE ATT&CK mapping – Use technique-based prioritization to focus remediation on the most likely exploitation paths.
-
Proactively test LLM deployments – Treat AI systems as first-class security assets requiring dedicated testing.
The 2025 BreachLock report paints a picture of attackers innovating faster than many organizations can respond. The combination of automation, AI exploitation, and persistent weaknesses in access control and cloud configurations demands a shift toward continuous, adversary-informed security.
As BreachLock concludes, "With the right offensive strategy, security teams can take back the advantage and stay ahead of evolving threats."
This report struck a nerve with the vendor community, with several experts offering their commentary.
Diana Kelley, Chief Information Security Officer at Noma Security, said:
-
"AI systems, and especially agentic tools, are fragile to certain kinds of manipulation because their behaviors and outputs can be drastically altered by malicious or poorly formed prompts. AI interprets prompts as executable commands, so a single malformed prompt can reasonably result in wiped systems. Robust AI security and agentic AI governance has never been more critical, ensuring systems are not harmed due to AI agent system access."
-
"AI agents bridge the gap between LLMs, tools, and system actions. Agents can execute commands, often autonomously, or instruct tools to perform actions. If an attacker can influence the agent via malicious AI prompt, they have the ability to direct the system to perform destructive operations at scale with a much bigger blast radius than a traditional AI application."
Roslyn Rissler, Senior Cybersecurity Strategist at Menlo Security, said:
-
"Any of the risks posed by the use of GenAI in general are much greater in regulated environments such as finance or healthcare, because the majority of data in these organizations is personal, sensitive, and/or proprietary. Not only could sensitive information be shared back with training models, but the very fact of the information leaving the enterprise in any form could be considered a regulatory breach."
-
"One of the biggest issues with GenAI lies in the fact that information may be copied and pasted into the tools. This may bypass traditional security tools, as it is contained in the browser. DLP tools and firewalls may miss such traffic, and endpoint monitors may not track such input either. An additional complication is that users may simply choose to use unmanaged or personal devices to make use of the benefits of GenAI."
-
"There are steps that can be taken easily to remedy these issues quickly with a browser-centric approach. First, the enterprise should determine sanctioned, enterprise/business-level GenAI tools and educate users of the reasons for the choice. Attempts to browse to another GenAI tool should be met with a redirect to the preferred tool, while the attempt should be logged. Finally, treat GenAI like any other sensitive application, with copy/paste or upload/download restrictions, character input limits, and watermarking."
James Maude, Field CTO at BeyondTrust, said:
-
"An important defense is to reduce standing privileges in the environment so that in the event an identity is compromised, the 'blast radius' is limited. This is especially important in the age of identity attacks and hybrid environments, where one compromised identity can open up paths to privileged access on dozens of systems on-prem and in the cloud that organizations weren’t aware of."
-
"Organizations need to look beyond siloed views of obviously privileged identities in individual systems and take a holistic view of the combinations of privileges, entitlements, and roles that could be exploited by an attacker to elevation privilege, move laterally and inflict damage. The identity security debt accumulated by many organizations represents a far great risk than any other area, as it only takes the attacker to log in using the right identity and all is lost because of the paths to privilege that abound in their environment."
-
"Understanding and reducing your identity attack surface should be at to forefront of every organization thinking when it comes to cyber defense in 2025."
Nicole Carignan, Senior Vice President, Security & AI Strategy, and Field CISO at Darktrace, said:
-
"Vibe coding appears to offer two primary use cases: initial brainstorming of structure and applications, and coding assistance for more junior or novice developers. It allows someone to articulate a rough idea of what they want a program to do, and then AI helps generate a code framework to get started. The key concern is that this code will not be secure by design."
-
"Vibe coding enables non-expert professionals to develop and prototype, however, the code it produces will not inherently be secure and could interject vulnerabilities into systems. While it's gaining popularity, the security community is seeing an increasing volume of vulnerability disclosures. At the same time, threat actors are beginning to use agents—or agentic systems—to identify vulnerabilities in applications that could be exploited as points of ingress."
-
"Vibe coding won't replace software development, but it will change it, and it will require organizations to think carefully about who is validating the code and how it's being secured before deployment."
Randolph Barr, CISO at Cequence Security, said:
-
"What's particularly concerning today is the role of generative AI in democratizing exploitation. Attackers with little technical experience can now use AI to identify exposed systems, craft malicious API requests, and launch targeted attacks, significantly accelerating the threat window."
-
"Specialized API security solutions can detect and block anomalous API activity in real time, provide endpoint-level risk scoring, and stop automated scanning and payload delivery. These capabilities are increasingly critical as more attack paths originate through APIs rather than traditional network services."
Amit Zimerman, Co-Founder & Chief Product Officer at Oasis Security, said:
-
"AI addresses manpower challenges by automating tasks that traditionally require skilled personnel. In offensive cybersecurity, processes like vulnerability assessment, penetration testing, and red teaming can be scaled up through AI without necessitating a proportional increase in human resources. AI can simulate attacks, analyze responses, and uncover vulnerabilities at speeds that far exceed human capabilities, allowing teams to operate more efficiently.