How AI Deepfakes Are Fueling Synthetic Identity Fraud in Enterprises
By David Balaban
Sun | Mar 1, 2026 | 7:21 AM PST

Enterprise fraud has always followed the path of least resistance. What's changed is the attacker's toolkit. Generative AI can now produce believable voices, faces, and "perfectly normal" video calls on demand, while synthetic identities can be assembled like Lego bricks: one real data fragment here, one fabricated detail there, all wrapped in a clean online footprint.

Put the two together and you get a form of deception that doesn't just trick people; it slips into workflows, approval chains, and onboarding systems that were built for a different era.

What this means for security teams

Deepfakes don't replace classic social engineering; they upgrade it. A spoofed email from "the CFO" is one thing. A live video call where the CFO appears, speaks naturally, and pressures a fast decision is something else entirely. In many organizations, "I saw them" and "I heard them" still carry weight. Attackers know that, and they lean into it.

The risk grows when fraud is routed through processes that already look legitimate. A synthetic vendor can be onboarded, receive payments, and build transaction history. Later, a deepfake call arrives to approve a "routine" change: new bank details, a rush payment, or an exception to normal procedure. Each step can pass a basic reasonableness test, especially when teams are busy and the request lands with the right mix of authority and urgency.

This is where governance and verification collide. KYB compliance exists to validate business entities and reduce exposure to shell companies and questionable counterparties. The problem is that convincing-looking paperwork and polished digital presence are easier to manufacture than ever. If verification leans too heavily on static documents and surface-level checks, attackers can create something that looks compliant while being entirely fraudulent underneath.

The takeaway is uncomfortable but necessary: familiar identity signals, such as voice, face, or official-looking documents, can no longer be treated as standalone proof.

Incidents and trends that show where this is headed

Recent cases highlight how deepfakes are being used as a force multiplier, especially in finance.

One widely reported incident in Hong Kong involved a deepfake-driven video conference in which a finance employee was persuaded to transfer roughly $25 million. The detail that stands out isn't just the money; it's the method. The attackers used a meeting format that feels normal in modern enterprises: multiple participants, a legitimate-looking context, and a sense of internal routine. That's precisely why it worked.

In the U.S., deepfake videos featuring well-known public figures, including Elon Musk, have been used to promote fraudulent investment schemes. These campaigns often target individuals, but the mechanics translate directly to enterprise environments: manufactured authority, rapid trust-building, and a narrative designed to override skepticism.

The scale is not hypothetical. A Deloitte poll in 2024 found that one in four organizations had experienced at least one deepfake incident aimed at financial or accounting data. That number matters because it points to repeatable targeting of processes that move money, change payee details, or expose sensitive reporting information.

Several patterns are showing up across these events:

  • Real-time is replacing prerecorded. Live calls and interactive conversations reduce the "this looks edited" suspicion.

  • Multi-channel reinforcement is common. A compromised email thread plus a deepfake call is far more persuasive than either alone.

  • Synthetic identities are being played long. Some fraudsters establish vendors or "employees," build credibility, and cash out later when the environment is primed.

This is not random opportunism. It's a workflow attack.

Why synthetic identities work so well 

Synthetic identity fraud succeeds because it exploits a gap between "looks valid" and "is real." Attackers blend authentic data (a real address, a legitimate registration number, a compromised tax identifier) with fabricated elements (a generated headshot, a made-up executive, a curated work history) until the result passes routine checks.

It's also durable. A stolen identity may be locked down once the victim notices. A synthetic identity has no victim to raise the alarm, no baseline to compare against, and no obvious trigger in the early stages. It can be nurtured through vendor onboarding, small invoices, a normal communication cadence, and gradual trust-building. By the time it's used for major fraud, it feels established rather than newly created.

Deepfakes make the synthetic identity feel human. If a "vendor contact" can appear on camera and answer questions without sounding scripted, suspicion drops. People are reluctant to challenge what appears to be a direct face-to-face interaction, especially when it comes from someone framed as senior, busy, and decisive.

From the attacker's perspective, that combination is efficient: it delivers scalable identity creation, believable interaction, and fewer points of immediate detection.

Regulation and compliance are catching up, albeit slowly

Regulators and industry frameworks already emphasize controls around onboarding, due diligence, and transaction monitoring. Those expectations aren't going away. If anything, deepfakes and synthetic identities make the case for stronger, more demonstrable governance.

The challenge is that many compliance programs were built around documentation and formal attestations, both of which can now be simulated convincingly. That pushes organizations toward a more evidence-based approach: continuous monitoring, cross-validation of identity claims, stronger change-control for payment details, and transaction scrutiny that accounts for social-engineering signals.

For enterprises, this becomes shared territory between Compliance and Security. Fraud awareness and prevention can't sit solely in finance, and identity assurance can't be treated as a one-time check at onboarding.

The human factor still decides the outcome 

The best deepfake doesn't win because it's technically perfect. It wins because it lands at the right time, on the right person, with the right pressure.

Hierarchy is an attacker's friend. People hesitate to slow down a "senior" request. Teams evaluated on speed and responsiveness tend to treat verification as friction. Remote work adds another complication: video calls feel normal, and "I saw them on camera" can be mistaken for certainty.

Security awareness needs a reality check. Many programs still revolve around suspicious links and odd email phrasing. Deepfake fraud is often more subtle: a plausible request, delivered through a familiar channel, supported by social context. Employees need clear permission to pause, verify, and escalate, even if the request appears to come from the top.

Defense strategies 

No single control neutralizes this threat. The goal is to make fraud hard to execute and easy to interrupt.

  • Use out-of-band verification for money movement. Confirm high-risk requests through a separate, pre-approved channel (not a reply to the same email thread).

  • Lock down payee change procedures. Bank detail updates should trigger enhanced verification and, for higher-risk vendors, a short waiting period.

  • Require multi-person approval for large transfers. Two sets of eyes reduce the odds that urgency overrides judgment.

  • Harden vendor onboarding with ongoing checks. Treat KYB compliance as continuous, not a one-time document review.

  • Add "challenge steps" for executive requests. Pre-agreed internal verification phrases, call-back rules, or secure approvals help validate real-time requests.

  • Monitor for lookalike domains and thread hijacking. Many deepfake incidents are reinforced by business email compromise or domain spoofing.

  • Prepare an incident playbook for deepfakes. Include rapid internal notification, payment recall workflows, and media/forensics triage.

  • Run tabletop exercises that involve finance. If finance teams aren't practicing fraud interruption, controls can fail under pressure.
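To make the lookalike-domain control above concrete, here is a minimal sketch of the idea: flag inbound sender domains that are a small edit distance away from trusted domains but not an exact match. The trusted-domain list and the distance threshold are hypothetical placeholders; a production tool would also handle homoglyphs, subdomains, and newly registered domains.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Hypothetical allowlist of domains your organization actually uses.
TRUSTED_DOMAINS = {"example-corp.com", "example-bank.com"}

def flag_lookalike(sender_domain: str, max_distance: int = 2) -> bool:
    """True if the domain is *near* a trusted domain without being one.

    An exact match is legitimate mail; a near-miss (e.g. one swapped
    character) is the classic lookalike-domain pattern used to reinforce
    deepfake calls with a matching email thread.
    """
    if sender_domain in TRUSTED_DOMAINS:
        return False
    return any(levenshtein(sender_domain, d) <= max_distance
               for d in TRUSTED_DOMAINS)

print(flag_lookalike("examp1e-corp.com"))  # one character swapped: True
print(flag_lookalike("example-corp.com"))  # exact trusted domain: False
```

A check like this is cheap to run on mail gateway logs or new-domain registration feeds, and a hit is exactly the kind of signal that should pause a payment approval rather than block it silently.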

These measures work best when Security, Finance, and Compliance treat deepfake-driven fraud as shared operational risk. That alignment is often the difference between "we had controls" and "we stopped it."

The bottom line

Deepfakes and synthetic identities are pushing enterprise fraud into a more convincing, process-aware phase. Attackers are no longer limited to stealing credentials or sending generic lures. They can manufacture authority, build believable entities, and pressure employees in real time.

Enterprises don't need panic; they need modernization. Treat audiovisual proof as a weak signal, tighten the rails around financial workflows, reinforce KYB compliance beyond paperwork, and normalize verification as a professional habit. Organizations that adapt now will be the ones that keep trust intact when the next executive call isn't an executive at all. 
