The year 2025 has become a turning point for artificial intelligence and cybersecurity. Three new industry reports—AvePoint's State of AI in 2025: Go Beyond the Hype, Fortinet's Cybersecurity Skills Gap Report 2025, and Black Duck's Global DevSecOps Report—collectively paint a portrait of transformation and turbulence across the enterprise.
Each offers a distinct lens on how AI is reshaping operations, skills, and security. Together, they underscore one central truth: AI's value will be realized only through governance, skills development, and integrated security practices.
AvePoint's State of AI in 2025 reveals a sobering contradiction: while nearly all enterprises are racing to deploy AI, more than 75% have already experienced AI-related security breaches, and rollout delays of up to 12 months are now common due to data quality and protection issues.
Key findings include:
68.5% cite data security and 68.7% cite AI inaccuracy as primary obstacles.
32.5% identify hallucinations as the most extreme threat from generative AI.
70.7% of enterprise data is more than five years old, impairing AI training data quality.
While 90% of organizations claim to have effective information management, only 30% have implemented data classification systems, creating what AvePoint calls the "AI Governance Paradox"—confidence without control.
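To make that paradox concrete, data classification at its simplest means programmatically tagging content by sensitivity before AI systems ever touch it. The sketch below is a minimal, illustrative first pass (the patterns and labels are ours, not AvePoint's methodology); real classifiers layer entity recognition, checksum validation, and human review on top of this.

```python
import re

# Illustrative patterns and labels only; production classifiers add entity
# recognition, checksum validation, and context rules on top of regexes.
PATTERNS = {
    "PII:email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PII:ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(text: str) -> set[str]:
    """Return the sensitivity labels detected in a document, or PUBLIC."""
    labels = {label for label, rx in PATTERNS.items() if rx.search(text)}
    return labels or {"PUBLIC"}

if __name__ == "__main__":
    doc = "Contact jane.doe@example.com, SSN 123-45-6789."
    print(sorted(classify(doc)))  # ['PII:email', 'PII:ssn']
```

Even a crude pass like this moves an organization from "confidence without control" toward labels that downstream AI pipelines can actually enforce.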
As Dana Simberkoff, Chief Risk, Privacy, and Information Security Officer at AvePoint, puts it: "Organizations treat AI governance as a checkbox exercise rather than an operational imperative. The gap between having policies and implementing them effectively is where most security incidents occur."
Takeaway: Enterprises are caught between enthusiasm and execution. AI adoption without governance is accelerating risk, and information management—not just model tuning—will define who wins in AI maturity.
Vendor SME comments on the AvePoint research:
Diana Kelley, CISO at Noma Security, said:
"AI risks have rapidly moved from a watch list item to a front-line security concern, especially when it comes to data security and misuse. To manage this emerging threat landscape, security teams need a mature, continuous security approach, which includes blue team programs, starting with a full inventory of all AI systems, including agentic components as a baseline for governance and risk management."
"As vulnerabilities increase, the adoption of an AI Bill of Materials (AIBOM) is the foundation for effective supply chain security and AI vulnerability management. Robust red team and pre-deployment testing remain vital, as do runtime monitoring and logging, which round out the approach by providing the visibility to detect, and in some cases even block, attacks during use."
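Kelley's AIBOM recommendation is easiest to picture as a machine-readable inventory record per model. The sketch below is hypothetical, loosely inspired by CycloneDX's ML-BOM concept; every field name and value is illustrative rather than part of any formal schema.

```python
import json

# Hypothetical, minimal AIBOM entry. Field names are our illustration,
# loosely inspired by CycloneDX's ML-BOM idea, not a formal standard.
aibom_entry = {
    "type": "machine-learning-model",
    "name": "support-ticket-classifier",      # illustrative system name
    "version": "2.3.1",
    "supplier": "internal-ml-team",
    "base_model": "bert-base-uncased",        # upstream dependency to track
    "training_data": ["s3://corp-data/tickets-2024"],  # data provenance
    "agentic_components": ["ticket-router-agent"],     # per Kelley's point
    "known_risks": ["prompt-injection", "data-leakage"],
}

print(json.dumps(aibom_entry, indent=2))
```

The value is less in the format than in the discipline: once every model, dataset, and agentic component has an entry, vulnerability management and red teaming have a defined scope to work against.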
Nicole Carignan, Senior Vice President, Security & AI Strategy, and Field CISO at Darktrace, said:
"Before organizations can think meaningfully about AI governance, they need to lay the groundwork with strong data science principles. That means understanding how data is sourced, structured, classified, and secured—because AI systems are only as reliable as the data they’re built on. Solid data foundations are essential to ensuring accuracy, accountability, and safety throughout the AI lifecycle."
"As organizations increasingly embed AI tools and agentic systems into their workflows, they must develop governance structures that can keep pace with the complexity and continued innovation of these technologies. However, there is no one-size-fits-all approach. Each organization must tailor its AI policies based on its unique risk profile, use cases, and regulatory requirements. That's why executive leadership for AI governance is essential, whether the organization is building AI internally or adopting external solutions."
"Effective AI governance requires deep cross-functional collaboration. Security, privacy, legal, HR, compliance, data, and product leaders each bring vital perspectives. Together, they must shape policies that prioritize ethics, data privacy, and safety—while still enabling innovation. In the absence of mature regulatory frameworks, industry collaboration is equally critical. Sharing successful governance models and operational insights will help raise the bar for secure AI adoption across sectors."
"The integration of AI into core business operations also has implications for the workforce. Security practitioners—and teams in legal, compliance, and risk—must upskill in AI technologies and data governance. Understanding system architectures, communication pathways, and agent behaviors will be essential to managing risk. As these systems evolve, so must governance strategies. Static policies won’t be enough, AI governance must be dynamic, real-time, and embedded from the start."
"Organizations that treat governance and security as strategic enablers will be best positioned to harness the full potential of AI safely and responsibly."
Fortinet's 2025 Cybersecurity Skills Gap Report shifts the focus to people. While the AI era promises efficiency, it's also widening the talent chasm. The report finds that 82% of organizations struggle to fill cybersecurity roles, while nearly 80% report that AI adoption is changing the skills they now need.
Notable findings include:
The shortage of skilled cybersecurity professionals persists, with 56% of organizations experiencing more breaches due to staffing gaps.
AI literacy is now ranked as one of the top three skills gaps across cybersecurity teams.
More than 70% of respondents said that AI is both a "force multiplier" and a "force disruptor," with many teams unprepared to validate AI-driven outputs.
Fortinet warns that AI doesn't replace expertise—it reshapes it. Routine monitoring and analysis are being automated, but the need for strategic, risk-based thinking is rising. The company advocates for cross-training between cybersecurity and data science disciplines to close what it calls the "AI comprehension gap."
Takeaway: AI isn't closing the skills gap—it's redefining it. Organizations must now cultivate professionals who can not only respond to incidents but also understand, validate, and secure AI systems themselves.
Vendor SME comments on the Fortinet research:
Shane Barney, CISO at Keeper Security, said:
"Fortinet's report makes it clear that the cybersecurity skills gap has become a business risk, not just a technical one. Eighty-six percent of organizations experienced a breach last year, and more than half cited a lack of security expertise as a contributing factor. With nearly every company adopting AI to strengthen defenses, the absence of in-house skills to manage these tools safely is widening the gap between technology and readiness."
"Cybersecurity training must be ongoing, not occasional. AI streamlines detection and efficiency, but it still relies on human oversight and sound governance to operate securely. Security teams need the skills to interpret data, validate AI-driven insights and act with precision and accountability."
"The organizations best prepared to withstand today's threats are those that align skilled people, advanced technology, and a culture of accountability. When teams are empowered to make informed decisions and supported by intelligent, well-governed systems, access remains tightly controlled, visibility stays comprehensive and real-time, and responses are swift and coordinated. That balance of human expertise and technological capability turns cybersecurity from a reactive function into a true driver of resilience."
Black Duck's Global DevSecOps Report 2025 exposes AI's double-edged nature within development and open-source ecosystems. Based on a survey of more than 1,000 software and security professionals, the study found that 97% of organizations use open-source AI models, but 57% admit AI introduces new security risks, even as 63% believe it's improving code quality.
Additional key findings:
71% of security alerts are false positives, creating noise and alert fatigue.
81% of professionals say security testing slows down development cycles.
45.6% of companies still rely on manual processes for security testing integration.
10.7% admit to using AI coding assistants without authorization, a growing "shadow AI" issue.
88.8% of respondents express confidence in managing AI-related risks—despite evidence their tools and processes are underdeveloped.
Black Duck calls this the "AI confidence paradox." As the report states: "AI is seen as both a powerful way to improve security and a significant source of scalable risk. Figuring out how to navigate the dual nature of AI is the central strategic challenge of every security leader today."
Takeaway: Speed and security remain at odds. For CISOs, the lesson is clear: AI in DevSecOps must be governed with the same rigor as open source code—integrated, validated, and continuously monitored.
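What "integrated, validated, and continuously monitored" can look like in practice is moving manual checks into the pipeline itself. The sketch below is illustrative, assuming a Python codebase and two widely used open-source scanners (Bandit for static analysis, pip-audit for dependency vulnerabilities); the tool choices and paths are examples, not the report's prescription.

```python
import subprocess
import sys

# Illustrative CI gate: tool choices and paths are examples, not a mandate.
CHECKS = [
    ["bandit", "-r", "src/"],                 # static analysis of our code
    ["pip-audit", "-r", "requirements.txt"],  # known-vulnerable dependencies
]

def main() -> int:
    failed = False
    for cmd in CHECKS:
        print(f"running: {' '.join(cmd)}")
        # Both scanners exit nonzero when they find issues; treat that
        # as a build failure instead of a report someone reads later.
        if subprocess.run(cmd).returncode != 0:
            failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main())
```

A gate like this replaces a manual review step with an enforced one, which is the shift the 45.6% still relying on manual integration have yet to make.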
Vendor SME comments on the Black Duck research:
Casey Ellis, Founder at Bugcrowd, said:
"The operational reality is that organizations are misaligning defenses, including those driven by AI, with where attackers are actually succeeding. While AI tools are often focused on modern threats like cloud services and web apps, older hardware and network systems are often neglected and left vulnerable. These legacy systems, which were never designed to handle today's evolving threat landscape, and often overlooked from both an IT governance and cybersecurity solution standpoint, have accumulated years of technical debt and misconfigurations which makes them easy targets."
"The proliferation of AI-powered vulnerability discovery tools, as well as the growth of AI-assisted code generation, means that a fresh, vulnerable attack surface is being created at an increasing rate, and the tooling to find and exploit this attack surface is doing so more effectively. All of this nets out to higher throughput into the SOC, which necessitates a shift in thinking around the economics of processing SOC alerts."
"AI is already accelerating the creation of attack surface and the ease of discovery and exploitation of certain classes of vulnerability. It's reasonable to assume that these two things will net to an increase in SOC alerts and the need for a shift in strategy to deal with it. I expect to see risk-based prioritization take center stage on the defender side, and there are a lot of ways that AI can help to scale this approach."
"AI will automate mundane tasks, allowing analysts to focus on complex, high-value work like threat hunting and strategic defense. The role of SOC analysts will shift toward managing AI systems, interpreting their outputs, and addressing the nuanced, creative challenges that machines can't handle. Jobs won't disappear, they'll adapt. The key is ensuring that SOC professionals are prepared for this shift through ongoing education, training, and tooling."
James Maude, Field CTO at BeyondTrust, said:
"While the goal of DevSecOps has always been about balancing security and productivity, this report highlights that shipping fast without mature security is still the default for many. This is a common challenge for many organizations, as the saying goes, 'Good, Fast, Cheap—pick two.'"
"For many organizations, getting a handle on the security debt of application developments is often focused purely on securing the codebase. However, this is only part of the picture. While it vital to gain visibility into the code and software lifecycle, you also need to look at the bigger picture of the identities, accounts, and secrets that enable the CI/CD toolchain to run. In order to really close the security gap, it is important to also get a handle on identity and secrets management."
"In the race to the cloud, and then into AI, lots of shortcuts have been taken to make pipelines work and ship code faster. This is why it is important to get a handle on the sprawl of identities, privileges, and access to ensure that your identity attack surface is under control and not a ticking time bomb. In some cases, credentials were hardcoded into scripts, accounts over privileged or abandoned, and misconfigurations introduced that allow any user to control and disrupt your ability to ship code."
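Maude's hardcoded-credential warning maps to a simple remediation pattern: secrets enter pipeline scripts only at runtime, injected by whatever secrets manager the pipeline uses, and the script refuses to run without them. A minimal sketch, with an illustrative variable name:

```python
import os
import sys

# Anti-pattern (what Maude describes): a long-lived secret baked into the
# script, which then leaks via version-control history and build logs.
# DEPLOY_TOKEN = "hardcoded-token-example"   # never do this

def get_deploy_token() -> str:
    """Read the token injected at runtime by the pipeline's secrets manager."""
    token = os.environ.get("DEPLOY_TOKEN")  # variable name is illustrative
    if not token:
        sys.exit("DEPLOY_TOKEN not set; refusing to run without credentials")
    return token
```

The point is less the mechanism than the lifecycle: secrets that live outside the repository can be rotated, scoped, and revoked, while hardcoded ones cannot.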
Across all three reports, a clear pattern emerges: AI adoption is universal, but governance, human capability, and integration maturity are not.
Here are the core themes from each report, and where each vendor's perspective differs.
Core issues
For AvePoint, it's governance and data integrity.
For Fortinet, it's workforce readiness and AI literacy.
For Black Duck, it's DevSecOps tool sprawl and shadow AI.
Primary risks
AvePoint: AI hallucinations and data leaks
Fortinet: Misaligned skills in AI-driven environments
Black Duck: False positives, manual workflows
Opportunities
AvePoint: Building AI governance into data workflows
Fortinet: Upskilling and cross-disciplinary education
Black Duck: Embedding security seamlessly into dev pipelines
Key statistics
AvePoint: 75% faced AI-related breaches.
Fortinet: 82% struggle to fill cybersecurity roles.
Black Duck: 97% use open-source AI models.
For cybersecurity teams and enterprise leaders, the implications are profound:
AI maturity requires data maturity. Without classification, lineage, and governance, AI's output is unreliable—and exploitable.
Security talent is evolving. The next-generation professional must blend technical acumen with AI literacy and policy fluency.
DevSecOps needs re-engineering. Integrating AI safely means rethinking toolchains, reducing noise, and embedding risk management into the code workflow itself.
Shadow AI must be surfaced. From development teams to enterprise employees, unsanctioned AI usage is a governance blind spot (a minimal detection sketch follows this list).
Confidence ≠ readiness. Across all reports, leaders overestimate their ability to manage AI risk, signaling an urgent need for continuous validation and policy enforcement.
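As noted above, surfacing shadow AI can start simply: scan the codebase for references to known AI API endpoints. The sketch below is illustrative; the host list and default path are assumptions, and a real program would also examine egress logs, browser telemetry, and dependency manifests.

```python
import pathlib

# Illustrative host list; extend with whatever services your policy covers.
AI_API_HOSTS = (
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
)

def find_shadow_ai(root: str = "src") -> list[tuple[str, int]]:
    """Flag source lines that reference known AI API endpoints."""
    hits = []
    for path in pathlib.Path(root).rglob("*.py"):
        lines = path.read_text(errors="ignore").splitlines()
        for lineno, line in enumerate(lines, 1):
            if any(host in line for host in AI_API_HOSTS):
                hits.append((str(path), lineno))
    return hits

if __name__ == "__main__":
    for path, lineno in find_shadow_ai():
        print(f"possible unsanctioned AI usage: {path}:{lineno}")
```

Crude as it is, a scan like this turns an invisible governance gap into a concrete inventory that policy can act on.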
The 2025 research from AvePoint, Fortinet, and Black Duck collectively signals a maturing but precarious landscape. AI is everywhere—accelerating development, improving productivity, and redefining cybersecurity—but it's also introducing new dimensions of risk.
Cyber leaders must now evolve from AI adopters to AI governors. The next frontier in cybersecurity isn't stopping AI—it's securing how we use it.