A joint study from Cybersecurity at MIT Sloan (CAMS) and Safe Security examined 2,811 ransomware incidents and found that a staggering 80.83%, or 2,272 attacks, were powered by artificial intelligence. This statistic isn't theoretical; it's based on comprehensive, real-world data collected during 2023–2024.
The Rethinking the Cybersecurity Arms Race working paper paints a vivid picture of how AI is transforming attack methods. Adversaries are no longer relying on manual orchestration. Instead, they are deploying agentic AI systems that can autonomously execute and adapt ransomware campaigns—from reconnaissance through to extortion.
These AI-driven threats exhibit advanced capabilities, including:
Targeted file selection: AI identifies and encrypts only high-value data, improving efficiency and impact—seen notably in ransomware like CL0P.
Adaptive kill chain execution: Groups such as LockBit, RansomHub, Akira, and ALPHV/BlackCat showed dynamic, AI-assisted orchestration throughout the attack stages. Among the 2,811 recorded incidents:
2,272 (80.83%) were AI-enabled.
LockBit led with 815 incidents, followed by RansomHub (548), Akira (314), and ALPHV (189).
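The headline share can be sanity-checked with quick arithmetic on the study's own figures; this minimal sketch uses only the incident counts quoted above:

```python
# Figures quoted from the CAMS / Safe Security study.
total_incidents = 2811
ai_enabled = 2272

share = ai_enabled / total_incidents * 100
print(f"{share:.2f}% of incidents were AI-enabled")  # → 80.83%

# Incidents attributed to the four most active groups.
by_group = {"LockBit": 815, "RansomHub": 548, "Akira": 314, "ALPHV": 189}
print(sum(by_group.values()), "incidents from the top four groups")  # → 1866
```

Note that 2,272 of 2,811 rounds to exactly the 80.83% the study reports, which is why the 2,811 total is used throughout.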
"AI-powered cybersecurity tools alone will not suffice," the study's authors write. "A proactive, multi-layered approach—integrating human oversight, governance frameworks, AI-driven threat simulations, and real-time intelligence sharing—is critical."
With ransomware campaigns increasingly AI-driven, the threat landscape is accelerating—and so must defensive strategies.
1. Automation must be the defense baseline: Manual patching and manual hygiene aren't sufficient. Defensive automation—self-patching, continuous attack surface monitoring, zero-trust architectures—must be foundational.
2. Adopt deceptive and autonomous defense systems: Real-time, intelligent defenses like SOAR-enabled moving target defenses and deception tools help level the playing field.
3. Executive-level situational awareness: Security leaders must leverage real-time AI-powered insights to understand threat dynamics and guide risk-informed decisions.
4. Reframe security as an AI arms race: Michael Siegel of CAMS underlines an urgent reality: "Can we crack the asymmetric warfare nature of cybersecurity? Attackers benefit from single points of failure, while defenders must protect all."
The study recommends several defensive strategies and tactics, including:
Deploying self-healing code, continuous monitoring, and zero-trust enforcement.
Using deception tools, analytic SOAR platforms, and autonomous threat adjustments.
Building dashboards with real-time risk scoring, impact forecasting, and prioritization.
Employing AI-led red teaming and threat simulations to anticipate attack vectors.
Sharing AI-driven threat intelligence and attack patterns across sectors.
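To make the risk-scoring dashboard idea concrete, here is a minimal, hypothetical sketch. The asset names, weights, and scoring formula are invented for illustration and are not taken from the study; a real platform would feed these values from live telemetry and threat intelligence.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    exposure: float      # 0..1, how reachable the asset is from outside
    criticality: float   # 0..1, business impact if compromised
    threat_level: float  # 0..1, e.g. fed by live threat intelligence

def risk_score(asset: Asset) -> float:
    """Toy score: product of exposure, criticality, and threat level, scaled to 0-100."""
    return round(asset.exposure * asset.criticality * asset.threat_level * 100, 1)

# Hypothetical inventory; a real dashboard would refresh these values continuously.
inventory = [
    Asset("payroll-db", exposure=0.3, criticality=0.9, threat_level=0.8),
    Asset("public-web", exposure=0.9, criticality=0.5, threat_level=0.7),
    Asset("build-server", exposure=0.2, criticality=0.6, threat_level=0.4),
]

# Prioritize remediation by descending risk.
for asset in sorted(inventory, key=risk_score, reverse=True):
    print(f"{asset.name}: {risk_score(asset)}")
```

The point of the sketch is the prioritization step: once every asset carries a continuously updated score, the ordering itself becomes the executive-level view the study describes.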
The MIT Sloan and Safe Security study shifts the narrative: AI is no longer a future threat—it is today's norm in ransomware attacks. At more than 80%, AI's dominance in cybercriminal operations is a call to action.
"The autonomous nature of things has caused there to be a reexamination of the way in which we defend ourselves and the way in which we have to look at both old- and new-style attacks," Siegel said.
"For cybersecurity, there are tremendous opportunities for things to go wrong," Siegel continued. "Protecting in this new environment that is moving at light speed is challenging, but we can learn from our previous work. Many researchers and products are already addressing management, prevention, detection, response, and resilience issues."
One example of that work: Siegel and colleagues from MIT Sloan are investigating the role that generative AI is playing in both attacks on and the defense of industrial control systems.