SecureWorld News

2025 Risk Reality Check: Cybersecurity at a Crossroads

Written by Cam Sivesind | Mon | Oct 27, 2025 | 1:46 PM Z

The newly released Riskonnect 2025 New Generation of Risk Report paints a clear picture of progress and peril in enterprise risk management. While risk teams are improving in some areas—such as scenario planning, geopolitical awareness, and AI adoption—the data show that organizations remain dangerously underprepared for the next generation of threats reshaping the business landscape.

For cybersecurity professionals, the report's implications are profound: political instability, AI-driven automation, and third-party exposure are converging to redefine what "resilience" means in practice.

Political risk has surged into the top three corporate threats for 2025, with 97% of risk leaders reporting some level of impact and 40% calling it significant or severe.

Nearly four in 10 organizations have stalled hiring or technology investments due to political uncertainty.

Yet only 17% feel "very prepared" to manage and recover from political disruptions—despite many of these policy shifts being anticipated well in advance.

The concern isn't just economic. As the company's press release highlights, 62% of risk leaders believe trade wars and long-term restrictive trade policies will trigger cyberattacks—specifically state-sponsored incidents targeting intellectual property and supply chains.

"Geopolitical volatility creates conditions ripe for cyberattacks," the report warns, emphasizing that nation-state threats exploit digital vulnerabilities at third parties when oversight wanes.

AI: The double-edged sword of risk and opportunity

The rise of agentic AI—autonomous systems capable of executing tasks with minimal human input—marks a major turning point. Nearly 60% of companies are exploring such technologies, but 55% admit they haven't assessed the risks.

[RELATED: The Impact of AI on Cybersecurity: Navigating AI-Enhanced Threats and AI-Enabled Defenses]

Riskonnect CEO Jim Wetekamp explains: "Agentic AI is a new and critical category of enterprise risk. The autonomous execution of tasks brings tremendous efficiency gains—but also the potential for runaway processes or cascading failures if not managed properly. Companies need accountability and continuous oversight designed for autonomous, adaptive systems."

The gaps are glaring:

  • Only 12% of organizations feel "very prepared" for AI and AI governance risks.

  • 42% lack an employee-use AI policy.

  • 72% have no GenAI policy for partners or suppliers.

  • 75% lack a dedicated AI risk plan.

  • Just 15% allocate budget to mitigate AI-related risks.

For cybersecurity leaders, this means shadow AI—employees using unauthorized chatbots or automation tools—has become the new insider threat. Nearly 90% of workers in one cited study use AI tools at work without IT approval, creating an unseen web of ungoverned data flows and potential leak points.
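To make that detection problem concrete, here is a minimal sketch of surfacing shadow AI traffic from web proxy logs. The log format, the AI tool domains, and the approved-tool list are hypothetical placeholders; a real deployment would draw on the organization's own egress telemetry and sanctioned-tool inventory.

```python
# Illustrative sketch: flag unsanctioned AI tool traffic in web proxy logs.
# The CSV format, domain list, and approved list are hypothetical examples.
import csv
from collections import Counter

AI_TOOL_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}
APPROVED_DOMAINS = {"copilot.example.com"}  # tools formally sanctioned by IT

def shadow_ai_report(proxy_log_path: str) -> Counter:
    """Count per-user requests to AI domains not on the approved list."""
    hits: Counter = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: user, dest_host
            host = row["dest_host"].lower()
            if host in AI_TOOL_DOMAINS and host not in APPROVED_DOMAINS:
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in shadow_ai_report("proxy.csv").most_common(10):
        print(f"{user} -> {host}: {count} requests")
```

Even a crude report like this turns an invisible insider risk into a ranked list a security team can act on.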

Third-party and 'nth-party' risks: Still the Achilles’ heel

Eighty-five percent of companies say they have a business continuity plan for vendor-related outages—but the fine print reveals that most can only assess their Tier 1 suppliers. Just 8% have visibility into their suppliers' suppliers.

This blind spot remains even after major disruptions like the MOVEit and CrowdStrike incidents prompted widespread policy reviews. Riskonnect calls this "an incomplete picture of digital supply chain risk."

For CISOs and risk officers, this is more than a data governance issue—it's a resilience liability. Attackers are increasingly leveraging small, peripheral vendors to compromise larger ecosystems. Without nth-party insight, enterprises risk being blindsided by exposures hidden deep in the supply web.
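One practical way to start closing that gap is to treat the vendor ecosystem as a graph and walk it past Tier 1. The sketch below is purely illustrative (the vendor names and relationships are invented), but it shows how a breadth-first traversal assigns each supplier a tier and exposes shared nth-party dependencies.

```python
# Illustrative sketch: walk a vendor dependency graph beyond Tier 1.
# The graph is invented; real data would come from vendor questionnaires,
# SBOMs, or a third-party risk platform.
from collections import deque

SUPPLY_GRAPH = {  # vendor -> that vendor's own suppliers (hypothetical)
    "acme-corp": ["payroll-saas", "cdn-provider"],
    "payroll-saas": ["cloud-host", "sms-gateway"],
    "cdn-provider": ["cloud-host"],
    "sms-gateway": ["cloud-host"],
    "cloud-host": [],
}

def nth_party_tiers(root: str) -> dict[str, int]:
    """Breadth-first walk assigning each supplier its tier relative to root."""
    tiers = {root: 0}
    queue = deque([root])
    while queue:
        vendor = queue.popleft()
        for supplier in SUPPLY_GRAPH.get(vendor, []):
            if supplier not in tiers:  # skip suppliers already tiered
                tiers[supplier] = tiers[vendor] + 1
                queue.append(supplier)
    return tiers

if __name__ == "__main__":
    for vendor, tier in sorted(nth_party_tiers("acme-corp").items(),
                               key=lambda kv: kv[1]):
        print(f"Tier {tier}: {vendor}")
```

Note how the hypothetical "cloud-host" sits beneath several Tier 1 vendors: exactly the kind of concentration risk that stays invisible when assessments stop at Tier 1.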

AI-powered risk management: progress with a caveat

The good news: risk teams are adopting AI to manage risk itself. Seventy percent of companies are now using or planning to use AI in their risk programs, up from 62% last year.

Top applications include:

  • Risk assessment (34%)

  • Forecasting (28%)

  • Scenario simulations (28%)

And while 61% have simulated worst-case scenarios, 39% still haven't conducted this critical exercise. AI-driven modeling allows teams to test multiple complex "what-if" scenarios at scale, identifying vulnerabilities before disruption hits—a discipline cybersecurity teams can easily extend into attack simulation and adversarial resilience planning.
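As a simple illustration of scaled "what-if" modeling, the sketch below runs a Monte Carlo simulation over a handful of disruption scenarios. The probabilities and loss ranges are invented for demonstration; a real program would calibrate them against incident history and threat intelligence.

```python
# Illustrative sketch: Monte Carlo "what-if" loss simulation.
# Scenario probabilities and impact ranges are invented for demonstration.
import random

# Each scenario: (name, annual probability, (min_loss, max_loss) in USD)
SCENARIOS = [
    ("ransomware",               0.15, (500_000, 5_000_000)),
    ("tier1-vendor-outage",      0.25, (100_000, 1_500_000)),
    ("geopolitical-trade-shock", 0.10, (250_000, 3_000_000)),
]

def simulate_year(rng: random.Random) -> float:
    """Sample one simulated year; each scenario independently may occur."""
    loss = 0.0
    for _name, prob, (lo, hi) in SCENARIOS:
        if rng.random() < prob:
            loss += rng.uniform(lo, hi)
    return loss

def annual_loss_percentiles(trials: int = 100_000, seed: int = 7):
    rng = random.Random(seed)
    losses = sorted(simulate_year(rng) for _ in range(trials))
    return losses[trials // 2], losses[int(0.95 * trials)]

if __name__ == "__main__":
    median, tail = annual_loss_percentiles()
    print(f"Median simulated annual loss: ${median:,.0f}")
    print(f"95th-percentile planning figure: ${tail:,.0f}")
```

The 95th-percentile figure is precisely the worst-case planning number that the 39% who have never run such an exercise are missing.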

The risk function evolves—but budgets don't

Risk management is finally earning a seat at the table: 60% of organizations now have a Chief Risk Officer, up from 52% last year.

Yet budgets are stagnant; only 28% report any increase in technology spending for risk management.

The paradox is clear: risk expectations are rising faster than resources, forcing CISOs and risk leaders to innovate under constraint. That reality makes AI-driven automation and cross-functional risk integration essential for scaling defenses without ballooning cost.

What it means for cybersecurity leaders
  1. Connect geopolitical risk with cyber posture – Trade tensions, sanctions, and state-sponsored campaigns are linked. Integrate political risk forecasting into cyber threat intelligence.

  2. Establish AI governance now – Define policies, ownership, and response procedures before AI-induced incidents become unmanageable (see the policy-as-code sketch after this list).

  3. Map your digital supply web – Move beyond Tier 1 visibility to model dependencies and exposures across the full vendor ecosystem.

  4. Embed resilience exercises – Simulate not only ransomware or DDoS events but also AI system failures, supplier breaches, and geopolitical disruptions.

  5. Elevate risk management to a business enabler – Translate cyber posture into business risk language; boards are listening more closely than ever.
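On point 2, AI governance becomes enforceable when policy is expressed as code rather than prose. The sketch below is a minimal illustration; the tool names, data classifications, and rules are hypothetical, and a real control would sit in a gateway or DLP layer.

```python
# Illustrative "policy as code" sketch for employee AI use.
# Tool names, classifications, and rules are hypothetical.
from dataclasses import dataclass

APPROVED_TOOLS = {"internal-copilot", "vendor-assistant"}
BLOCKED_CLASSIFICATIONS = {"pii", "source-code", "trade-secret"}

@dataclass
class AIUseRequest:
    user: str
    tool: str                 # AI tool the employee wants to use
    data_classification: str  # classification of the data being submitted

def evaluate(request: AIUseRequest) -> tuple[bool, str]:
    """Return (allowed, reason) for an AI-use request."""
    if request.tool not in APPROVED_TOOLS:
        return False, f"{request.tool} is not a sanctioned tool"
    if request.data_classification in BLOCKED_CLASSIFICATIONS:
        return False, f"{request.data_classification} data may not leave the boundary"
    return True, "permitted under current AI-use policy"

if __name__ == "__main__":
    allowed, reason = evaluate(AIUseRequest("jdoe", "chatgpt", "pii"))
    print(f"allowed={allowed}: {reason}")
```

The point is less the specific rules than the pattern: a policy that exists as an evaluable function can be logged, tested, and audited, which is what closes the gap between having a policy and enforcing one.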

Riskonnect's report makes one truth unavoidable: risk management maturity doesn't equal preparedness. Political shocks, agentic AI, and global supply chains are fusing into a volatile new threat matrix, and the organizations that thrive will be those that treat risk as a strategic discipline, not a compliance checkbox.

As Wetekamp summarized, "The new generation of risk isn't defined just by the threats themselves, but by how quickly companies can adapt, act, and recover when they strike."

We asked some security vendor SMEs for their thoughts on the report.

Dana Simberkoff, Chief Risk, Privacy and Information Security Officer at AvePoint, said:

  • "The gap between having policies and implementing them effectively is where most security incidents occur. This challenge becomes exponentially more critical as organizations move toward agentic AI systems that can act independently and make decisions without human oversight. Basic security measures cannot keep pace with the complexity and sprawl of AI-generated data, leaving organizations vulnerable unless they evolve their governance models to handle autonomous AI agents."

  • Here are additional stats from a recent AvePoint research report on AI that tie in nicely with these findings:

    • The report revealed a striking disconnect between AI ambitions and execution: while organizations race to deploy AI at scale, more than 75% experienced AI-related security breaches, and security concerns are forcing deployment delays of up to 12 months.

      • Inaccurate AI output (68.7%) and data security concerns (68.5%) top the list of factors for why organizations are slowing the rollout of generative AI assistants.

      • Among organizations claiming the highest information management effectiveness (52.4%), 77.2% still experienced data security incidents, revealing that perceived readiness doesn't translate to actual protection.

    • Despite the challenges, organizations are responding with targeted investments in foundational infrastructure:

      • 64.4% are increasing investment in AI governance tools.

      • 54.5% are boosting data security tool investments.

      • 99.5% are implementing AI literacy interventions, with role-based training proving most effective (79.4% rate it as highly impactful).

      • 73.9% use both quantitative and qualitative feedback methods to assess AI program effectiveness.

Randolph Barr, CISO at Cequence Security, said:

  • "We're quickly seeing AI evolve from simple automation to deeply personalized, context-aware assistance—and it's heading toward an Agentic AI future where tasks are orchestrated across domains with negligeable human input."

  • "In the rush to bring AI to market quickly, engineering and product teams often cut corners to meet aggressive launch timelines. When that happens, basic security controls get skipped, and those shortcuts make their way into production. So, while organizations are absolutely starting to think about model protections, prompt injection, data leakage, and anomaly detection, those efforts mean little if you haven’t locked down identity, access, and configuration at a foundational level. Security needs to be part of the development lifecycle from day one."

Nicole Carignan, SVP, Security & AI Strategy, and Field CISO at Darktrace, said:

  • "As organizations increasingly embed AI tools and agentic systems into their workflows, they must develop governance structures that can keep pace with the complexity and continued innovation of these technologies. However, there is no one-size-fits-all approach. Each organization must tailor its AI policies based on its unique risk profile, use cases, and regulatory requirements. That's why executive leadership for AI governance is essential, whether the organization is building AI internally or adopting external solutions."

John Watters, CEO and Managing Partner at iCOUNTER, said:

  • "Traditional security approaches of updating defenses to combat general threat tactics are no longer sufficient to protect sensitive information and systems. To effectively defend against AI-driven rapid developments in targeted attacks, organizations need more than mere actionable intelligence—they need AI-powered analysis of attack innovations and insights into their own specific weaknesses which can be exploited by external parties."

Diana Kelley, CISO at Noma Security, said:

  • "AI is quickly being woven into the fabric of all business operations and workflows. With AI everywhere, workers with skills that enable effective use of AI will be well positioned to help companies make the most of the AI revolution."

  • "With the rise of agentic AI, autonomous systems capable of acting on their own, red team testing becomes essential to identify emergent behaviors and security vulnerabilities before deployment. Once in production, runtime security and compliance monitoring help detect drift, abuse, or policy violations in real time. These controls work together to ensure that both organizations and regulators have continuous assurance that AI operates safely and ethically."

Marc Maiffret, CTO at BeyondTrust, said:

  • "As Agentic AI systems begin to autonomously interact with infrastructure, make decisions, and even provision access themselves, the hidden risks posed by unmanaged secrets and non-human identities (NHIs) become exponentially more dangerous. These identity infrastructure issues aren’t just misconfigurations, they’re invitations. Our recent Identity Security Risk Assessment data shows that many organizations lack the complete story when it comes to their identity attack surface. For many, overlooked hygiene issues silently open the door to attackers. With the rise of Agentic AI, the stakes have never been higher, especially as most organizations lack visibility into how compromised accounts can be leveraged to seize control of application secrets, which often carry elevated privileges."

Chad Cragle, CISO at Deepwatch, said:

  • "Agentic AI is already slipping into business operations, often without leaders noticing, and that's where the real risk begins. These systems don't just follow instructions; they make their own decisions based on context, which means that bad inputs, flawed datasets, or excessive permissions can quickly spiral into data leaks, compliance violations, or system outages. Companies that skip vendor risk assessments are essentially giving autonomy to software they don't fully understand."

  • "The smart move is to get visibility now and know where AI is running, what data it handles, and who is responsible for its actions. Treat it like any other high-impact asset, with strong access controls, detailed audit trails, and human oversight for critical decisions. Contracts with vendors and partners should reflect this, and incident response plans should account for AI that acts unpredictably or is manipulated. Organizations that establish governance and transparency early will not only reduce risk but also build trust and resilience, while others scramble to keep up."