Report: AI Is Rewriting Software Faster Than It Can Be Secured
By Cam Sivesind
Wed | Dec 17, 2025 | 7:28 AM PST

Artificial intelligence has officially become the engine of modern software development. From AI coding assistants to open source AI/ML models embedded deep in applications, development velocity has accelerated dramatically. But according to new research from Black Duck, security, governance, and compliance practices are not keeping pace — and the resulting gap is quickly becoming one of the most dangerous fault lines in enterprise cybersecurity.

Black Duck’s latest report, Navigating Software Supply Chain Risk in a Rapid-Release World, reveals a stark disconnect: 95% of organizations now rely on AI tools to generate code, yet only 24% apply comprehensive IP, license, security, and quality evaluations to that AI-generated code.

For cybersecurity leaders, this finding underscores a sobering reality: the software supply chain has become the new attack surface—and AI is expanding it faster than traditional AppSec programs were ever designed to handle.

The report shows near-universal adoption of AI in development workflows. Engineering teams are leveraging AI coding assistants, proprietary AI/ML components, and open source models at scale. Nearly two-thirds of organizations report using proprietary AI/ML elements, 57% rely on AI coding assistants, and 49% incorporate open source AI/ML models into the software they build.

Despite this widespread adoption, evaluation practices remain inconsistent:

  • 76% check AI-generated code for security risks

  • 56% evaluate code quality

  • 54% assess IP or licensing risk

  • Only 24% perform all four checks—security, quality, IP, and licensing

This leaves organizations exposed to hidden licensing violations, protected IP contamination, insecure code patterns, and embedded secrets that can cascade across the supply chain.
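Embedded secrets, at least, are comparatively easy to screen for mechanically. The Python sketch below is a minimal illustration of the kind of pattern-based check a team might run over AI-generated changes before merge; the patterns and the command-line interface are illustrative assumptions, and purpose-built secret scanners cover far more cases.

```python
import re
import sys
from pathlib import Path

# Illustrative patterns only; real secret scanners ship far larger rule sets.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Generic API key assignment": re.compile(
        r"(?i)\b(api[_-]?key|secret|token)\s*[=:]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_file(path: Path) -> list[str]:
    """Return human-readable hits for any secret pattern found in the file."""
    hits: list[str] = []
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return hits
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            line_no = text.count("\n", 0, match.start()) + 1
            hits.append(f"{path}:{line_no}: possible {name}")
    return hits

if __name__ == "__main__":
    # Scan every file passed on the command line (e.g., a changed-files list from CI).
    findings = [hit for arg in sys.argv[1:] for hit in scan_file(Path(arg))]
    for hit in findings:
        print(hit)
    sys.exit(1 if findings else 0)
```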

Ironically, confidence is high. Ninety-five percent of respondents say they are at least moderately confident in their ability to secure AI-generated code, with 77% saying they are very or extremely confident.

But the data suggests that confidence is often misplaced. The report references external research indicating that nearly half of AI-generated code snippets contain insecure patterns that could be exploited, reinforcing that traditional AppSec tooling and review processes are not tuned for AI-driven risks.

Supply chain attacks are already common

This gap between velocity and security is not theoretical. Sixty-five percent of organizations experienced a software supply chain attack in the past year, with the most common attack types including:

  • Malicious dependencies (30%)

  • Unpatched vulnerabilities (28%)

  • Zero-day vulnerabilities (27%)

  • Malware injected into build pipelines (14%)

Nearly 40% of affected organizations experienced multiple types of supply chain attacks, highlighting how one weakness can quickly lead to others.

The report makes clear that AI fundamentally changes the scale and complexity of software risk. AI tools can introduce:

  • Undocumented dependencies

  • Licensing ambiguity

  • Protected IP without attribution

  • Rapid code changes that outpace manual review
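The first item on that list, undocumented dependencies, is also the most mechanical to catch: compare what the generated code actually imports against what the project declares. The Python sketch below is a minimal illustration of that comparison; it assumes a requirements.txt-style manifest and deliberately simplifies the mapping between import names and distribution names (for example, yaml versus PyYAML), which a real check would need to resolve properly.

```python
import ast
import re
import sys
from pathlib import Path

def imported_modules(source_dir: str) -> set[str]:
    """Collect top-level module names imported anywhere under source_dir."""
    modules: set[str] = set()
    for path in Path(source_dir).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(errors="ignore"))
        except SyntaxError:
            continue  # Skip files the parser cannot handle.
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                modules.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
                modules.add(node.module.split(".")[0])
    return modules

def declared_names(requirements_file: str) -> set[str]:
    """Naively read distribution names from a requirements.txt-style file."""
    names: set[str] = set()
    for line in Path(requirements_file).read_text().splitlines():
        line = line.split("#")[0].strip()
        if line:
            # Keep only the name before any version specifier or extras marker.
            names.add(re.split(r"[<>=!~\[;\s]", line, maxsplit=1)[0].lower().replace("-", "_"))
    return names

if __name__ == "__main__":
    source_dir, requirements_file = sys.argv[1], sys.argv[2]
    stdlib = set(sys.stdlib_module_names)  # Available on Python 3.10+.
    declared = declared_names(requirements_file)
    # Simplification: assumes the import name matches the distribution name.
    for module in sorted(imported_modules(source_dir)):
        if module not in stdlib and module.lower().replace("-", "_") not in declared:
            print(f"Imported but not declared: {module}")
```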

While 98% of organizations use automated AppSec tools, many struggle with effectiveness. Common challenges include:

  • High false-positive rates (37%)

  • Poor coverage of transitive dependencies (33%)

  • Difficulty prioritizing findings by exploitability or business impact (32%)

These limitations make it difficult for security teams to keep up as release cycles compress and AI accelerates code production.
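Prioritization, at least, lends itself to straightforward automation once exploitability data is available. The Python sketch below illustrates one way to rank findings by blending severity, an exploitability probability such as a pre-fetched EPSS score, and business impact; the findings format and the weights are assumptions for illustration rather than any vendor's scoring model.

```python
import json
import sys

# Assumed finding shape (illustrative): each record carries a CVSS base score,
# a pre-fetched EPSS exploitability probability (0-1), and a flag for whether
# the affected service is business-critical.
SEVERITY_WEIGHT = 0.4
EXPLOITABILITY_WEIGHT = 0.4
IMPACT_WEIGHT = 0.2

def priority(finding: dict) -> float:
    """Blend severity, exploitability, and business impact into one score."""
    severity = finding.get("cvss", 0.0) / 10.0   # Normalize CVSS to 0-1.
    exploitability = finding.get("epss", 0.0)    # Already a 0-1 probability.
    impact = 1.0 if finding.get("business_critical") else 0.3
    return (SEVERITY_WEIGHT * severity
            + EXPLOITABILITY_WEIGHT * exploitability
            + IMPACT_WEIGHT * impact)

if __name__ == "__main__":
    # Expects a JSON array of findings exported from whatever scanner is in use.
    with open(sys.argv[1]) as f:
        findings = json.load(f)
    for finding in sorted(findings, key=priority, reverse=True)[:20]:
        print(f"{priority(finding):.2f}  {finding.get('id', 'unknown')}  {finding.get('title', '')}")
```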

SBOMs and continuous monitoring: proven but underused

Transparency is one of the report’s most consistent themes. Organizations that generate, validate, and operationalize Software Bills of Materials (SBOMs) consistently outperform their peers:

  • 51% always validate supplier SBOMs

  • Those organizations are 15 percentage points more likely to report being prepared to evaluate third-party software

  • 59% of them remediate critical vulnerabilities within one day, compared to 45% overall

Yet, SBOM maturity remains uneven. Only 38% produce SBOMs for all software, and many generate them infrequently, limiting their usefulness in real-time risk response.
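Getting value from SBOMs does not have to start with heavy tooling. As a minimal illustration, the Python sketch below checks a CycloneDX JSON SBOM for the fields downstream processes depend on (component name, version, package URL, and license data) and fails if anything is missing; the file path handling is an assumption, and real validation would also cover schema conformance, signatures, and supplier metadata.

```python
import json
import sys

REQUIRED_FIELDS = ("name", "version", "purl")

def check_sbom(path: str) -> list[str]:
    """Flag CycloneDX components missing the fields downstream tooling relies on."""
    with open(path) as f:
        sbom = json.load(f)
    problems: list[str] = []
    if sbom.get("bomFormat") != "CycloneDX":
        problems.append("Not a CycloneDX document (bomFormat mismatch).")
    for component in sbom.get("components", []):
        label = component.get("name", "<unnamed>")
        for field in REQUIRED_FIELDS:
            if not component.get(field):
                problems.append(f"{label}: missing {field}")
        if not component.get("licenses"):
            problems.append(f"{label}: no license information")
    return problems

if __name__ == "__main__":
    issues = check_sbom(sys.argv[1])
    for issue in issues:
        print(issue)
    # A non-zero exit lets a CI job treat an incomplete supplier SBOM as a failure.
    sys.exit(1 if issues else 0)
```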

Implications for organizations of all types

For CISOs and security leaders:

  • AI governance must become a core security control, not an innovation afterthought

  • AI-generated code should be treated like third-party software, subject to the same scrutiny

  • SBOMs, continuous monitoring, and automated remediation are no longer optional

For development and DevSecOps teams:

  • Speed without visibility creates systemic risk

  • Security tooling must integrate directly into CI/CD pipelines to keep up with AI-driven velocity

  • Dependency governance is now a business enabler, not just a compliance checkbox
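To make the CI/CD point concrete, the Python sketch below shows one way a pipeline step could enforce a pass/fail policy over scanner output; the report format and thresholds are assumptions for illustration, and the same pattern applies to whatever scanner a team actually runs.

```python
import json
import sys

# Illustrative policy: no critical findings, and at most five high-severity ones.
MAX_ALLOWED = {"critical": 0, "high": 5}

def gate(report_path: str) -> int:
    """Return a process exit code based on severity counts in a scan report."""
    with open(report_path) as f:
        report = json.load(f)
    counts: dict[str, int] = {}
    for finding in report.get("findings", []):
        severity = finding.get("severity", "unknown").lower()
        counts[severity] = counts.get(severity, 0) + 1
    violations = [
        f"{severity}: {counts.get(severity, 0)} found, {limit} allowed"
        for severity, limit in MAX_ALLOWED.items()
        if counts.get(severity, 0) > limit
    ]
    for violation in violations:
        print(f"GATE FAILURE -> {violation}")
    return 1 if violations else 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```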

For boards and executives:

  • 42% of respondents say software supply chain risk is already a board-level issue, tied directly to revenue protection, customer trust, and regulatory exposure

  • AI-driven risk expands legal, regulatory, and reputational exposure far beyond traditional breach scenarios

As the report concludes, organizations that integrate secure SDLC practices, SBOM validation, automated monitoring, and AI governance will define the next generation of resilient enterprises.

AI is not slowing down. Release cycles will only get faster. The question for cybersecurity leaders is no longer whether AI is reshaping the software supply chain—it’s whether security programs can evolve quickly enough to keep up.

Right now, the data suggests most organizations are still running behind.

We asked a few cybersecurity vendor SMEs for their takes.

Saumitra Das, Vice President of Engineering at Qualys, said:

"By 2030, 95% of code is expected to be AI-generated. Even now, in 2025, it is reported to be around 30% at large enterprises and close to 90-95% at small startups. The key word to keep in mind is “generated”. This is more code being generated than humans can reasonably even review for correctness, functionality, readability and security issues. As a result, we now have code review companies coming up that use AI models to review code, because humans cannot scale. Due to the sheer volume of code being generated and the lack of people who reasonably understand it, we will need new architectures for dealing with the kind of issues discussed in the report.

  1. We need to use AI models that are diverse in their training datasets to review the generated code

  2. We need automation via for example MCP that can take any code being compiled and send it to vendor A for security reviews, understand the findings, and use vendor B to automate the patching of the issues found. Even if we find issues with large generated codebases we will need agentic workflows to fix them with minimal human intervention.

  3. QA will need to evolve to better test various scenarios with AI-generated harnesses and test cases.

  4. It’s harder to understand if the AI-generated code violates a license. A model could have learnt coding practices or libraries from a repository with license A and used that “knowledge” to generate code that now taints a user's codebase with that license, without them realizing. We will need better guarantees from AI model providers on what code they have used to train their models. This is similar to how image generation models must avoid generating copyrighted characters."
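Das's second point describes an orchestration pattern rather than a specific product. The Python sketch below is a structural illustration only: every function name is a hypothetical placeholder for a vendor integration (for example, one exposed as an MCP tool), and the loop simply shows the review-then-patch-then-re-review shape he describes.

```python
# Structural sketch only: the vendor functions are hypothetical placeholders,
# not real APIs, and would be wired to actual review/remediation integrations.

def review_with_vendor_a(diff: str) -> list[dict]:
    """Submit generated code to a review service and return its findings."""
    raise NotImplementedError("wire to the review vendor's MCP tool or API")

def patch_with_vendor_b(diff: str, finding: dict) -> str:
    """Ask a remediation service to propose a fix for one finding."""
    raise NotImplementedError("wire to the remediation vendor's MCP tool or API")

def review_and_patch(diff: str, max_rounds: int = 3) -> str:
    """Loop: review, patch, re-review, until clean or the round budget is spent."""
    for _ in range(max_rounds):
        findings = review_with_vendor_a(diff)
        if not findings:
            return diff  # Clean: hand off for (spot-check) human approval.
        for finding in findings:
            diff = patch_with_vendor_b(diff, finding)
    return diff  # Still-unresolved findings should be escalated to a human reviewer.
```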

Jason Soroko, Senior Fellow at Sectigo, said:

"Organizations should assume that AI-generated code expands their software supply chain risk, not just their development speed. Black Duck’s survey shows 95% of organizations already use AI tools to generate code, however, only 24% apply comprehensive IP, license, security, and quality evaluations to that output. This leaves large blind spots in provenance, obligations, and exploitable flaws. AI can also amplify dependency sprawl and introduce opaque third-party components that traditional AppSec programs were not built to inventory or govern at rapid-release cadence. The result is a widening gap where shipping gets easier while accountability and assurance get harder, and the downstream cost shows up as security exposure, compliance friction, and slower incident response when something breaks."

Soroko continued, "Security teams can close the gap by treating AI output like third-party software and enforcing the same controls by default inside the developer workflow. Start with dependency management because organizations that track and manage open source dependencies well report far higher preparedness. Then harden the pipeline with automatic continuous monitoring to accelerate remediation, since teams with automation fix critical vulnerabilities within a day much more often, and much more quickly. Make SBOM validation non-optional for suppliers because teams that always validate supplier SBOMs report stronger third-party readiness at 63% and faster one-day remediation at 59%, then raise compliance maturity by implementing multiple controls, since three or more controls lift one-day remediation to over 50%. Put these requirements into CI with clear pass/fail gates, codified policy, and audit-ready evidence so security becomes repeatable at AI speed instead of negotiated release by release."
