By Cam Sivesind
Wed | Jul 26, 2023 | 11:56 AM PDT

A new potential cybercrime tool called "FraudGPT" appears to be an AI bot used exclusively for offensive purposes, such as crafting spear phishing emails, creating cracking tools, carding, and other nefarious activities.

The Netenrich threat research team shared its discovery of the tool, which is currently being sold as a service on various Dark Web marketplaces and the Telegram platform for $200 per month or $1,700 per year. The threat actor behind the fraud tool created a Telegram Channel just over a month ago, on June 23, 2023.

According to the Netenrich blog post: "The threat actor claimed to be a verified vendor on various Underground Dark Web marketplaces, such as EMPIRE, WHM, TORREZ, WORLD, ALPHABAY, and VERSUS. As all these marketplaces are exit scammed frequently, it can be assumed that the threat actor had decided to start a Telegram Channel to offer his services seamlessly, without the issues of Dark Web marketplace exit scams."

The new research details initial findings surrounding FraudGPT to help analysts investigate any potential exposure of their systems. Using the tool, a threat actor can draft, with a high level of confidence, an email that entices recipients to click on a supplied malicious link. This craftiness would play a vital role in business email compromise (BEC) phishing campaigns against organizations.

Some of the features of FraudGPT include its ability to:

•    Write malicious code
•    Create undetectable malware
•    Find non-VBV bins
•    Create phishing pages
•    Create hacking tools
•    Find groups, sites, markets
•    Write scam pages/letters
•    Find leaks, vulnerabilities
•    Learn to code/hack
•    Find cardable sites
•    Escrow available 24/7
•    3,000+ confirmed sales/reviews

Here are some comments from cybersecurity vendor experts:

Pyry Åvist, Co-founder and CTO at Hoxhunt:

"While ChatGPT works even for cybercriminals just smart enough to rub two brain cells together, the new FraudGPT offers added convenience, no ethical guardrails, and provides hand-holding throughout the phishing campaign creation process. This lowers barrier of entry to cybercrime and increases the probability of democratization of sophisticated phishing attacks. It's the cybercrime economy's version of next-gen product development for the phishing kit model. Phishing kits, which including email texts and malicious site templates, are cheap and work pretty well despite their telltale signs of poor grammar and graphics. The next level that FraudGPT offers is, instead of phishing templates, criminals can craft tailored attacks as per targeted specifications. That is certainly concerning, but it's something that ChatGPT will also do, and probably do better.

The real story, and the one that isn't being talked about enough, is in the automation of a multi-step attack. For instance, we've seen attacks using chatbots for successful BECs, where the malicious actors often must interact with the victim to obtain credentials or bypass MFA. You can also leverage chatbots with deepfake technology to have a convincing conversation with a human voice and face. These models could do highly sophisticated attack campaigns at scale, and make malware and BEC even more of a problem.

The good news about FraudGPT is that good behavior change training has a protective effect even against generative AI attacks. We have tested this repeatedly with hundreds of thousands of email users, and it's remained true that trained users don't fall even for sophisticated, well-crafted attacks generated by AI."

[RELATED: Research Examines WormGPT, an AI Cybercrime Tool Used in BEC Attacks]

Timothy Morris, Chief Security Advisor at Tanium:

"This could be another exit scam. Criminals know how to market, so any latest techno-buzz-worthy will create the excitement and provoke other criminals to give up their coins to join in. Enterprises should continue to do what they should already be doing. Threat hunting and monitoring using IOCs that this latest actor may use. Strong security controls (email, web, endpoint), MFA and least privilege, and user training.

So, what does FraudGPT allow attackers to do that they couldn't do before? Unlike ChatGPT or any other mainstream LLM, FraudGPT lets would-be miscreants operate without guardrails, meaning the abuse filters aren't there, so almost anything is fair game since misuse isn't being checked for. To do 'evil' in normal GPT tools, you must learn how to do 'prompt jailbreaks' and master DAN (do anything now). That skill is not required with criminal-focused GPTs like FraudGPT or WormGPT.

Basically, the attacks are the same; FraudGPT is just another way to do them better and faster. BEC is one example. Whatever is being done to defend against BEC now (like training, email security, and payment authorization controls) will still be the same. Expect the BEC content to be better, i.e., it may be more convincing and have correct grammar."
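Morris's advice on hunting with IOCs can be put into practice even with simple tooling. As a rough, hypothetical sketch only (the indicator values and log file name below are illustrative placeholders, not published FraudGPT indicators), a Python script along these lines could sweep proxy or mail gateway logs for known-bad domains and file hashes:

import re
from pathlib import Path

# Placeholder IOC lists; in practice, populate these from your threat intelligence feed.
SUSPICIOUS_DOMAINS = {"bad-lure.example", "fraud-kit.example"}
SUSPICIOUS_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}  # e.g., MD5s of known payloads

LOG_FILE = Path("proxy.log")  # assumed format: one event per line, URLs in plain text

URL_PATTERN = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def scan_log(path):
    """Return log lines that mention a known-bad domain or file hash."""
    hits = []
    for line in path.read_text(errors="ignore").splitlines():
        domains = {d.lower() for d in URL_PATTERN.findall(line)}
        if domains & SUSPICIOUS_DOMAINS or any(h in line.lower() for h in SUSPICIOUS_HASHES):
            hits.append(line)
    return hits

if __name__ == "__main__":
    for hit in scan_log(LOG_FILE):
        print("Possible IOC match:", hit)

A sweep like this is no substitute for a proper SIEM or EDR workflow, but it illustrates the kind of routine monitoring the advice points to.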

Åvist had this to add on security awareness training and the efficacy of human social engineers versus AI tools:

"In this study we performed, a phishing prompt was created and our human social engineers and ChatGPT had one afternoon to craft a phishing email based on that prompt. Four simulation pairs—four human and four AI—were then sent to 53,127 email users in over 100 countries in the Hoxhunt network. Users received the phishing simulations in their inboxes as they'd normally receive any legitimate or malicious email, as per the Hoxhunt phishing training workflow. The results of our experiment indicated human social engineers still significantly outperformed AI in terms of inducing clicks on malicious links. 

But perhaps the most important takeaway, given the emergence of blackhat GPT models, is that good security awareness, phishing, and behavior change training work. Users with more experience in a security awareness and behavior change program displayed significant protection against both human- and AI-generated phishing emails. Failure rates dropped from over 14% among less-trained users to 2-4% among experienced users.

Having training in place that's dynamic enough to keep pace with the constantly changing attack landscape will continue to protect organizations against data breaches. Users who are actively engaged in training are less likely to click on a simulated phish, regardless of its human or robotic origins."
