By Cam Sivesind
Thu | Aug 28, 2025 | 5:36 AM PDT

The embedded software landscape is undergoing a seismic shift. According to Black Duck's newly released State of Embedded Software Quality and Safety 2025 report, based on a survey of 785 development and security professionals worldwide, the convergence of artificial intelligence (AI) and supply chain transparency is redefining how companies develop, deploy, and secure software.

The report finds AI adoption to be nearly universal across embedded software teams:

  • 89.3% of organizations now use AI-powered coding assistants.

  • 96.1% are integrating open-source AI models into their products.

Yet, governance lags behind. More than 21% of organizations admit they lack confidence in their ability to prevent AI from introducing security vulnerabilities. An additional 18% report incidents of "Shadow AI"—where developers bypass company policy to use unapproved AI tools, creating unmanaged risk vectors.

This governance gap is precisely what Black Duck CEO Jason Schmitt warns against: "The old software world is gone, giving way to a new set of truths being defined by AI. To navigate the changes, technical leaders should carry out rigorous validation on AI assistants. Managers should establish formal AI governance policies and invest in training for emerging technologies. Security professionals should update their threat models to include AI-specific risks and leverage SBOMs [Software Bill of Materials] as a strategic asset for risk management to achieve true scale application security."

For cybersecurity professionals, this means that AI-specific threat modeling—from poisoned training data to prompt-injection attacks—must become standard practice in embedded systems security reviews.

"SBOM adoption is growing in the vendor community but is still somewhat in its infancy. A concerning fact is that not all SBOMs are alike. Many SBOMs are still not as robust or comprehensive as many would like, nor are they being dynamically updated," said Craig Spiezle, Managing Director of Agelight Digital Trust & Security Strategies. "It is somewhat a chicken and an egg situation; SBOMs need to mature, and at the same time enterprises need to up their game and require them for all software, client, server, and the cloud. Last week's call for comments from U.S. CISA underscores the need for more transparent data and improved taxonomy throughout the ecosystem."

SBOMs: from compliance to commercial necessity

Another clear trend is the evolution of SBOMs. Once viewed as a regulatory burden, SBOMs are now a market-driven requirement. In fact, 70.8% of organizations produce SBOMs, with customer and partner demands (39.4%) overtaking regulatory pressure (31.5%) as the primary driver.

This signals a fundamental market shift: supply chain transparency is no longer optional. For industries like automotive, healthcare, and manufacturing, where embedded systems are pervasive, SBOMs are increasingly a contractual expectation, not just a compliance checkbox.
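
To make that transparency concrete, the short sketch below shows how little code is needed to enumerate what an SBOM discloses about each component: name, version, supplier, and licenses. The file name is hypothetical, and the fields follow the public CycloneDX schema; this is an illustration, not a prescribed tool.

    # Illustrative only: list component metadata from a CycloneDX-style SBOM.
    # "firmware-sbom.json" is a hypothetical example path.
    import json

    with open("firmware-sbom.json") as f:
        sbom = json.load(f)

    for component in sbom.get("components", []):
        name = component.get("name", "unknown")
        version = component.get("version", "unknown")
        supplier = component.get("supplier", {}).get("name", "unknown")
        licenses = [entry.get("license", {}).get("id", "unknown")
                    for entry in component.get("licenses", [])]
        print(f"{name} {version} | supplier: {supplier} | licenses: {licenses}")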

The report also highlights the changing role of embedded developers. More than 80% of companies are adopting memory-safe languages, with Python overtaking C++ in some contexts.

For security leaders, this means development teams are retooling skillsets, and application security programs must adapt testing methodologies for newer languages and frameworks. Legacy vulnerabilities tied to memory-unsafe languages (buffer overflows, memory corruption) may decline, but fresh attack surfaces emerge in scripting ecosystems and AI-driven codebases.

"Without SBOMs, we're flying blind. With them, we're finally turning the lights on in the supply chain cockpit," said Helen Oakley, Director of Secure Software Supply Chains and Secure Development at SAP. "AI coding assistants are like interns with rocket fuel. They accelerate everything, including errors, if you don't set boundaries."

Perhaps the most revealing insight is the disconnect between executives and developers:

  • 86% of CTOs and directors rated their projects as successful.

  • Only 56% of hands-on developers agreed.

This perception gap introduces systemic business risk. Overconfidence at the leadership level could delay investments in governance and security tooling, while engineers remain acutely aware of unresolved risks. Cybersecurity professionals should view this gap as a warning sign that cultural alignment and risk communication must improve across organizations.

What this means for cybersecurity professionals

For CISOs and security engineers operating in sectors reliant on embedded software—from automotive manufacturing to medical devices—the implications are clear.

  • AI threat models are non-negotiable: Traditional vulnerability scanning alone is insufficient. Teams must extend risk modeling to cover AI-specific exploits, adversarial inputs, and misuse of AI-powered coding assistants.

  • SBOMs are a strategic security asset: Moving beyond compliance, SBOMs should be used proactively to assess supplier integrity, detect license risks, and continuously monitor for vulnerable dependencies (a minimal monitoring sketch follows this list).

  • Governance must catch up to adoption: Shadow AI highlights the risk of "policy lag." Organizations must implement formal AI governance, enforce policies on approved tools, and monitor usage across the development lifecycle.

  • Close the leadership–practitioner gap: Security professionals must facilitate transparent conversations between developers and leadership, ensuring executive optimism does not overshadow unresolved vulnerabilities.
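
On the SBOM point above, continuous monitoring can start as a simple loop that feeds each component into a public vulnerability service. The sketch below is a minimal illustration using the OSV.dev query API; the component list is hypothetical and would normally be read directly from the SBOM produced by the build pipeline.

    # Illustrative only: check SBOM components against the public OSV.dev API.
    # The components below are hypothetical stand-ins for real SBOM entries.
    import requests

    components = [
        {"name": "jinja2", "version": "2.11.2", "ecosystem": "PyPI"},
        {"name": "openssl", "version": "1.1.1k", "ecosystem": "Debian"},
    ]

    for c in components:
        resp = requests.post(
            "https://api.osv.dev/v1/query",
            json={"version": c["version"],
                  "package": {"name": c["name"], "ecosystem": c["ecosystem"]}},
            timeout=30,
        )
        resp.raise_for_status()
        vuln_ids = [v["id"] for v in resp.json().get("vulns", [])]
        print(f"{c['name']} {c['version']}: {len(vuln_ids)} known advisories {vuln_ids}")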

Black Duck's 2025 report signals that embedded software security is at a crossroads. AI is now a fixture in development pipelines, SBOMs are reshaping supply chain expectations, and governance is struggling to keep pace.

Organizations that adapt—by embedding AI governance, leveraging SBOMs strategically, and aligning leadership with practitioner realities—will be better positioned to innovate securely and compete in this rapidly changing landscape.

As Schmitt concludes, the path forward requires rigorous validation, formal governance, updated threat models, and a recognition that "the old software world is gone."

We asked SMEs from cybersecurity vendors for their perspective on the Black Duck research.

Diana Kelley, CISO at Noma Security, said:

  • "AI systems, and especially agentic tools, are fragile to certain kinds of manipulation because their behaviors and outputs can be drastically altered by malicious or poorly formed prompts. AI interprets prompts as executable commands, so a single malformed prompt can reasonably result in wiped systems. Robust AI security and agentic AI governance has never been more critical, ensuring systems are not harmed due to AI agent system access."

  • "AI agents bridge the gap between LLMs, tools, and system actions. Agents can execute commands, often autonomously, or instruct tools to perform actions. If an attacker can influence the agent via malicious AI prompt, they have the ability to direct the system to perform destructive operations at scale with a much bigger blast radius than a traditional AI application."

Nicole Carignan, Senior Vice President, Security & AI Strategy, and Field CISO, at Darktrace, said:

  • "Before organizations can think meaningfully about AI governance, they need to lay the groundwork with strong data science principles. That means understanding how data is sourced, structured, classified, and secured—because AI systems are only as reliable as the data they’re built on. Solid data foundations are essential to ensuring accuracy, accountability, and safety throughout the AI lifecycle."

  • "For organizations adopting third-party AI tools, it's also critical to recognize that this introduces a shared security responsibility model—much like what we’ve seen with cloud adoption. When visibility into vendor infrastructure, data handling, or model behavior is limited, organizations must proactively mitigate those risks. That includes putting robust guardrails in place, defining access boundaries, and applying security controls that account for external dependencies."

  • "As organizations increasingly embed AI tools and agentic systems into their workflows, they must develop governance structures that can keep pace with the complexity and continued innovation of these technologies. But there is no one-size-fits-all approach. Each organization must tailor its AI policies based on its unique risk profile, use cases, and regulatory requirements. That’s why executive leadership for AI governance is essential, whether the organization is building AI internally or adopting external solutions."

  • "Effective AI governance requires deep cross-functional collaboration. Security, privacy, legal, HR, compliance, data, and product leaders each bring vital perspectives. Together, they must shape policies that prioritize ethics, data privacy, and safety—while still enabling innovation. In the absence of mature regulatory frameworks, industry collaboration is equally critical. Sharing successful governance models and operational insights will help raise the bar for secure AI adoption across sectors."

  • "The integration of AI into core business operations also has implications for the workforce. Security practitioners—and teams in legal, compliance, and risk—must upskill in AI technologies and data governance. Understanding system architectures, communication pathways, and agent behaviors will be essential to managing risk. As these systems evolve, so must governance strategies. Static policies won’t be enough, AI governance must be dynamic, real-time, and embedded from the start. Organizations that treat governance and security as strategic enablers will be best positioned to harness the full potential of AI safely and responsibly."

Guy Feinberg, Growth Product Manager at Oasis Security, said:

  • "AI agents, like human employees, can be manipulated. Just as attackers use social engineering to trick people, they can prompt AI agents into taking malicious actions. The real risk isn't AI itself but the fact that organizations don't manage these non-human identities (NHIs) with the same security controls as human users."

  • "Manipulation is inevitable. Just as we can't prevent attackers from tricking people, we can't stop them from manipulating AI agents. The key is limiting what these agents can do without oversight. AI agents need identity governance. They must be managed like human identities, with least privilege access, monitoring, and clear policies to prevent abuse. Security teams need visibility. If these NHIs were properly governed, security teams could detect and block unauthorized actions before they escalate into a breach."

  • "Organizations should:

    • Treat AI agents like human users. Assign them only the permissions they need and continuously monitor their activity.

    • Implement strong identity governance. Track which systems and data AI agents can access, and revoke unnecessary privileges.

    • Assume AI will be manipulated. Build security controls that detect and prevent unauthorized actions, just as you would with phishing-resistant authentication for humans."

  • "The bottom line is that you can't stop attackers from manipulating AI, just like you can't stop them from phishing employees. The solution is better governance and security for all identities—human and non-human alike."

Mayuresh Dani, Security Research Manager at Qualys Threat Research Unit, said:

  • "In recent times, government mandates are forcing vendors to create and share SBOMs with their customers. Organizations should request for SBOMs from their vendors. This is the easiest approach. There are other approaches where the firmware is dumped and actively probed for, but this may lead to a breach of agreements. Such activities can also be carried out in conjunction with a vendor's approval."

  • "Organizations should maintain and audit the existence of exposed ports by their network devices. These should then be mapped to the installed software based on the vendor provided SBOM. These are the highest priority since they will be publicly exposed. Secondly, OS updates should be preceded by reading the change logs that signifies the software's being updated, removed."

  • "Note that SBOMs will bring visibility into which components are being used in a project. This can definitely help in a post compromise scenario where triaging for affected systems is necessary. However, more scrutiny is needed when dealing with open-source projects. Steps like detecting the use and vetting open-source project code should be made mandatory. Also, there should be a verification mechanism for everyone who contributes to open-source projects."

  • "Security leaders can harden their defenses against software supply chain attacks by investing in visibility and risk assessment across their complex software environment, including SBOM risk assessment and Software Composition Analysis (SCA). Part of the risk assessment should include accounting for upcoming EoS software so they can upgrade or replace it proactively."

Satyam Sinha, CEO and Co-founder at Acuvity, said:

  • "There has been a great deal of information and mindfulness about the risks and threats with regards to AI provided over the past year. In addition, there are abundant regulations brought in by various governments. In our discussions with customers, it is evident that they are overwhelmed on how to prioritize and tackle the issues—there's a lot that needs to be done. At the face of it, personnel seems to be a key inhibitor, however, this pain will only grow. GenAI has helped in multiple industries from customer support to writing code. Workflows that could not be automated are being handled by AI agents. We have to consider the use of GenAI native security products and techniques which will help achieve a multiplier effect on the personnel."

  • "The field of AI has seen massive leaps over the last two years, but it is evolving with new developments nearly every day. The gap in confidence and understanding of AI creates a massive opportunity for AI native security products to be created which can ease this gap. In addition, enterprises must consider approaches to bridge this gap with specialized learning programs or certifications to aid their cybersecurity teams. GenAI has helped in multiple industries from customer support to writing code. Workflows that could not be automated are being handled by AI agents."

  • "Moving forward, we must consider the use of GenAI-native security products and techniques which will help achieve a multiplier effect on the personnel. This is the only way to solve this problem."
