Tue | Feb 13, 2024 | 4:27 AM PST

As artificial intelligence continues advancing at a rapid pace, criminals are increasingly using AI capabilities to carry out sophisticated scams and attacks. Technologies that synthesize realistic fake media, known as deepfakes, are among the newest tools being deployed to enable fraud.

A finance clerk at a Hong Kong branch of a large multinational corporation recently fell victim to an elaborate scam that used deepfake technology to impersonate senior executives and steal more than $25 million, according to reports.

The scam began with the employee receiving a phishing message purportedly from the company's chief financial officer requesting an urgent, confidential transaction. The clerk was initially skeptical, but those doubts were eased after joining a video conference call in which deepfakes impersonated both the CFO and other senior managers familiar to the clerk.

Police investigations revealed that the deepfakes likely relied on publicly available company videos and audio to digitally recreate the likenesses and voices of executives. Because the fakes did not engage the clerk directly beyond an introduction, they appeared more genuine and authoritative. Over multiple transactions, the criminals received roughly $25 million in transfers to Hong Kong bank accounts before the company discovered and reported the fraud.

This complex scam represents the first known case of using customized deepfakes to mimic an entire group meeting to manipulate staff. Authorities described it as a "new deception tactic" showing sophisticated technological capabilities.

Nick France, the Chief Technology Officer at Sectigo, discussed this new tactic with SecureWorld:

"With deepfake technology, we can no longer trust what we see and hear remotely. Perfectly-written phishing emails, audio messages with the correct tone, and now even fully fake video can be created easily and used to socially engineer into companies and steal money or valuable data and intellectual property. Employees may still assume today that live audio or video cannot be faked, and act on requests they are given seemingly by colleagues or leaders without question, as we have seen in this recent case.

Security teams should see this as another threat to their organizations and update their practices and training accordingly. Following best practices for cybersecurity, adhere to the principles of "least privilege" so that employees only have access to the accounts and systems they need to perform their roles. Confirm payments and access to critical data with additional confirmations—even if you know the face on the screen."

Security experts suggest countermeasures will be needed, such as digital authentication of meeting attendees. For now, the incident serves as a wake-up call about the potential damage from AI-enhanced fraud. Companies globally have been warned to remain vigilant about verifying identities, even in online meetings that appear legitimate.
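To make the idea of digitally authenticating meeting attendees concrete, one possible approach is a cryptographic challenge-response check: the meeting host sends each participant a fresh random nonce, and the participant's trusted device signs it with a secret provisioned out of band. A deepfaked video feed can mimic a face and voice, but it cannot produce a valid signature without the secret. The following is a minimal illustrative sketch (not a description of any product mentioned in this article), assuming hypothetical pre-shared HMAC secrets per attendee:

```python
import hmac
import hashlib
import secrets

# Hypothetical pre-shared secrets, provisioned out of band
# (e.g., during employee onboarding) -- purely for illustration.
ATTENDEE_SECRETS = {
    "cfo@example.com": b"cfo-preshared-secret",
    "clerk@example.com": b"clerk-preshared-secret",
}

def issue_challenge() -> bytes:
    """Host generates a fresh random nonce for each verification attempt."""
    return secrets.token_bytes(32)

def sign_challenge(secret: bytes, challenge: bytes) -> str:
    """Attendee's trusted device signs the nonce with its pre-shared secret."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify_attendee(attendee_id: str, challenge: bytes, response: str) -> bool:
    """Host checks the response; an impersonator on the video feed cannot
    forge it without access to the attendee's secret."""
    secret = ATTENDEE_SECRETS.get(attendee_id)
    if secret is None:
        return False
    expected = hmac.new(secret, challenge, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()
response = sign_challenge(ATTENDEE_SECRETS["cfo@example.com"], challenge)
print(verify_attendee("cfo@example.com", challenge, response))        # True
print(verify_attendee("cfo@example.com", issue_challenge(), response))  # False: stale nonce
```

In practice a deployment would use public-key signatures or hardware tokens rather than shared secrets, but the principle is the same: tie identity to something a synthesized video cannot reproduce.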

As video conferencing becomes routine in business, the cloning of meetings via realistic deepfakes poses a growing threat. The Hong Kong scam is unlikely to be the last financially motivated attack to exploit synthesized media. Without better safeguards, more voice and image forgery cons targeting enterprises seem inevitable.

In the past, fraud often relied on simple social engineering to trick victims. Today's perpetrators employ machine learning, harvested personal data, natural language processing, and other AI techniques to create intricately personalized ruses. The combination of computing power and psychological manipulation is producing a new era of highly deceptive cybercrime.

Follow SecureWorld News for more stories related to cybersecurity.