SecureWorld News

FBI: AI-Enabled Fraud Topped $893M in 2025—Real Toll Likely Far Higher

Written by Drew Todd | Thu | Apr 9, 2026 | 4:50 PM Z

The FBI's Internet Crime Complaint Center (IC3) has released its latest annual report, marking the first time in the center's 25-year history that it has devoted a dedicated section to artificial intelligence as a cybercrime tool. The milestone reflects how rapidly the technology has shifted from an emerging concern to a mainstream instrument of fraud.

The broader context is stark: total cybercrime losses reported to IC3 crossed $20 billion for the first time in 2025, reaching $20.877 billion across more than 1 million complaints, itself a record for a single year.

The $893M figure is a floor, not a ceiling

IC3 logged 22,364 complaints with an AI-related descriptor in 2025, representing $893 million in adjusted losses. But the report draws an important distinction that security leaders should internalize: the AI attribution reflects only what victims reported and recognized. Actual AI involvement across fraud schemes is far broader.

The starkest illustration of this gap comes from investment fraud. Complaints in which victims specifically noted an AI nexus generated $632 million in losses. But total investment fraud losses in 2025 hit $8.648 billion—meaning AI was officially attributed to less than 8% of that category. The FBI's own analysis suggests many victims simply had no way to detect that synthetic content, generated personas, or AI-assisted scripts were used to manipulate them.
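The attribution gap can be checked directly against the report's own numbers; a minimal sketch using the two loss figures quoted above:

```python
# AI-attributed vs. total investment fraud losses, per the 2025 IC3 report
ai_attributed_losses = 632_000_000       # losses where victims noted an AI nexus
total_investment_losses = 8_648_000_000  # all investment fraud losses in 2025

share = ai_attributed_losses / total_investment_losses
print(f"AI officially attributed to {share:.1%} of investment fraud losses")
# prints "AI officially attributed to 7.3% of investment fraud losses"
```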

"AI-enabled synthetic content is becoming increasingly difficult to detect and easier to make, which allows criminal actors to potentially conduct successful fraud schemes against individuals, businesses, and financial institutions," the report states.

Investment fraud: AI at industrial scale

The investment fraud picture in 2025 reflects AI's role as an industrial scaler for social engineering. Criminals deployed AI chat tools to generate thousands of personalized victim conversations simultaneously—each one appearing distinct, building trust across weeks or months before the eventual theft.

Investment clubs became a key delivery mechanism. Fraudsters used AI-generated videos and audio to impersonate celebrities, CEOs, and financial figures, creating fake endorsements that were often distributed via social media or staged video calls. These productions were professional enough to deceive victims who would have recognized a low-quality fake.

Cryptocurrency investment fraud—commonly known as "pig butchering"—accounted for $7.228 billion in losses across 61,559 complaints, a 48% increase in complaint volume from 2024. These scams, largely run by organized criminal enterprises in Southeast Asia using trafficked labor, now rely on AI to accelerate the trust-building phase and increase the volume of simultaneous operations.

Business email compromise: voice cloning enters the kill chain

Business email compromise (BEC) remains one of the most financially damaging crime types tracked by IC3, generating $3.046 billion in losses in 2025. Within that category, AI is increasingly embedded in the attack chain.

Chat-generation tools allow attackers to rapidly produce executive-impersonation emails with the tone, vocabulary, and contextual detail of a specific organization's leadership. The FBI report highlights that voice cloning is now being layered into these attacks, used to place follow-up calls that appear to come from a CFO or CEO, reinforcing written wire transfer instructions.

In 2025, businesses reported more than $30 million in losses specifically attributed to BEC scams with a confirmed AI component. Given the attribution gap noted elsewhere in the report, that number should be treated as a conservative baseline.

Confidence and romance scams: synthetic personas at scale

Confidence and romance scams with a confirmed AI nexus generated $19 million in reported losses in 2025, but the mechanics documented in the IC3 report point to broader infiltration of the category.

Criminals are using AI chat generators to produce profiles and conversation scripts that make synthetic relationships more believable and sustainable over longer periods. A related and particularly concerning subcategory is the "distress scam": voice-cloning technology mimics the voice of a family member in apparent crisis, prompting victims to wire money immediately. These calls are increasingly difficult to distinguish from a real emergency.

Distress scams generated more than $5 million in losses in 2025, and the FBI notes that the tactic is evolving—expanding beyond grandparent-targeting schemes to impersonate a wider range of family members and friends in various emergency scenarios.

Employment fraud: deepfake interviews as network access vectors

AI-enabled employment fraud represents a threat category that sits at the intersection of individual financial crime and enterprise network security. The FBI documented widespread use of voice spoofing and video deepfakes during online job interviews in 2025, with victims reporting losses of approximately $13 million.

The enterprise dimension is significant: the IC3 report notes that financial loss is often not the primary objective in these cases. Instead, the goal appears to be gaining access to corporate networks under the cover of legitimate remote employment. An attacker who passes a deepfake interview and is provisioned with credentials and internal access represents a persistent, authorized threat inside the perimeter.

This pattern connects directly to the FBI's ongoing warnings about North Korean IT worker infiltration schemes, documented separately in the report, in which state-sponsored actors placed remote workers inside U.S. companies to exfiltrate data and generate revenue for weapons programs.

[RELATED: North Korean IT Workers Expand Global Reach and Tactics]

What security teams should take from this

The IC3's decision to formally break out AI as a tracked fraud descriptor for the first time is itself a signal. It acknowledges that AI has evolved from an emerging threat to a defined, measurable component of the cybercrime ecosystem.

Several operational implications stand out for defenders.

  • The attribution gap is a detection problem. If victims can't identify AI involvement, detection controls aren't surfacing it either. Voice biometric verification, deepfake detection tooling, and out-of-band confirmation workflows for high-value wire requests deserve renewed attention.

  • BEC defenses need to account for audio, not just email. Voice cloning as a BEC layer means that a callback to a "known" number or a voice that sounds right is no longer a reliable verification signal.

  • Remote hiring processes are an attack surface. Organizations should treat the interview and onboarding process as a security boundary—particularly for positions that carry privileged access or handle sensitive data.

  • The 60+ demographic is a significant target and, for enterprise security teams, represents a risk vector through employees' families. Distress scams and tech-support fraud targeting older Americans generated $7.748 billion in losses in 2025—a 59% increase from 2024.
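The out-of-band confirmation workflow mentioned in the first bullet can be sketched as a simple policy gate. The threshold, channel names, and helper types below are illustrative assumptions, not anything prescribed by the FBI report:

```python
# Hypothetical sketch: require confirmation on a second, independent channel
# before releasing a high-value wire transfer. Threshold and channel names
# are assumptions for illustration.
from dataclasses import dataclass, field

HIGH_VALUE_THRESHOLD = 25_000  # assumed policy threshold, in dollars

@dataclass
class WireRequest:
    amount: float
    requested_via: str                     # channel the request arrived on
    confirmed_via: set[str] = field(default_factory=set)

def release_allowed(req: WireRequest) -> bool:
    """Allow release only if a high-value request was confirmed on at least
    one channel other than the one it arrived on. A callback to a number
    supplied in the original message does not count as independent."""
    if req.amount < HIGH_VALUE_THRESHOLD:
        return True
    independent_confirmations = req.confirmed_via - {req.requested_via}
    return bool(independent_confirmations)

# A $90k request "confirmed" only on the same email thread fails the gate:
print(release_allowed(WireRequest(90_000, "email", {"email"})))        # False
# The same request confirmed in person as well passes:
print(release_allowed(WireRequest(90_000, "email", {"email", "in_person"})))  # True
```

The key design point, given voice cloning, is that the confirming channel must be sourced independently (a directory number, an in-person check), never taken from the request itself.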

The FBI launched several initiatives in response to the broader fraud picture in 2025. Operation Level Up, focused on cryptocurrency investment fraud, notified 3,780 victims last year—78% of whom were unaware they were being scammed at the time of contact—and prevented an estimated $225.8 million in losses. A new Scam Center Strike Force targeting Southeast Asian criminal enterprises responsible for large-scale pig butchering operations is pursuing both prosecutorial and sanctions-based disruption.

The 2025 Internet Crime Report is available here.

Follow SecureWorld News for more cybersecurity news.