Today, SlashNext published original threat findings on a unique generative AI module, modeled on ChatGPT, that cybercriminals are leveraging for nefarious purposes.
These research findings have widespread implications for the security community: threat actors are not only manipulating generative AI platforms for malicious purposes but also building entirely new platforms on the same technology, specifically designed to do their bidding.
In its latest research, SlashNext—a provider of multi-channel phishing and human hacking solutions—delves into the emerging use of generative AI, including OpenAI's ChatGPT, and the cybercrime tool WormGPT, in Business Email Compromise (BEC) attacks. Highlighting real cases from cybercrime forums, the research explores the mechanics of these attacks, the inherent risks posed by AI-driven phishing emails, and the unique advantages of generative AI in facilitating such attacks.
The SlashNext team collaborated with Daniel Kelley, a reformed black hat computer hacker who is researching the latest threats and tactics employed by cybercriminals. Delving into cybercrime forums, Kelley and SlashNext uncovered discussion threads wherein threat actors were:
- Freely sharing with one another tips for how to leverage ChatGPT to refine emails that can be used in phishing or BEC attacks;
- Promoting "jailbreaks" for interfaces like ChatGPT, referring to specialized prompts and inputs designed to manipulate such interfaces into generating output that might disclose sensitive information, produce inappropriate content, or execute harmful code;
- Promoting a custom module similar to ChatGPT, presented as a black hat alternative to ChatGPT but without any ethical boundaries or limitations.
Here is commentary from cybersecurity experts at vendor solution companies:
Timothy Morris, Chief Security Advisor at Tanium:
"Using generative AI to create better BEC, phishing, and spear phishing emails was inevitable. Not only are the emails more convincing, with correct grammar, but the ability to create them almost effortlessly has lowered the barrier to entry for any would-be criminal. It also increases the pool of potential victims, since language is no longer an obstacle.
The report is further evidence that 'jailbreaking,' prompt injection, and model poisoning are big risks that must be better learned and understood. Research has focused on this area a great deal in the last several months. Offensive security researchers have been 'attacking' generative AI tools to find vulnerabilities. OpenAI has been paying out bug bounties to learn about as many vulnerabilities as quickly as possible. Next month, Black Hat and DEF CON will have many talks about generative AI. There is one specifically by security researcher Adrian Woo that will show 'research into hiding c2 implants in machine learning models and performing supply chain attacks.' As these bugs or vulnerabilities are learned, the generative AI companies—like OpenAI, Google, and Microsoft—will fix them, but it will take time.
That leads to the next point. As the more public GPT tools are tuned to better protect themselves against unethical use, the bad guys will create their own. These evil counterparts will not have those ethical boundaries to contend with, as the report illustrates with WormGPT."
Mike Parkin, Senior Technical Engineer at Vulcan Cyber:
"The original 'scare' over ChatGPT was over its ability to lower the bar on writing malicious code, which was largely overblown. I have held that the real threat from conversational AI lies in social engineering, where sophisticated phishing campaigns become much easier to generate. With a little data scraping and some dedicated AI training, it would be possible to automate much, if not all, of the process, enabling threat actors to phish at scale. While the example in the report was not especially sophisticated and looked like myriad similar BEC emails, it still shows that threat actors are doing exactly what I predicted they would do months ago.
It is no surprise that cybercriminal groups have gone this route. Conversational AI like ChatGPT and its kin are good at sounding like a real person. That makes it a lot easier for a criminal operator who might have English as their second or third language to write convincing hooks. Creating a phishing email is almost the exact opposite of creating malicious code, in that a good social engineering hook will strive for clarity rather than obscurity."
Claude Mandy, Chief Evangelist, Data Security, at Symmetry Systems:
"Security professionals already struggle to secure data and are continually challenged in understanding where personal data is being stored, let alone how it is used. The increased consumerization of AI will make it even harder to control the flow of data into AI tools, representing an even bigger hurdle in proving that organizations are using customer data ethically and with consideration of their privacy rights and needs.
Amid the hype around ChatGPT, it is inevitable that many organizations will ignore data protection and privacy best practices, resulting in potentially devastating consequences for others."
Mika Aalto, Co-Founder and CEO at Hoxhunt:
"Without question, AI and Large Language Models, like ChatGPT, have tremendous potential to be used for good in cybersecurity; however, they can, and will, be used by cybercriminals.
ChatGPT allows criminals to launch perfectly worded phishing campaigns at scale, and while that removes a key indicator of a phishing attack—bad grammar—other indicators are readily observable to the trained eye. For now, the misuse of ChatGPT for BEC, phishing, and smishing attacks will likely be focused on improving the capabilities of existing cybercriminals more than activating new legions of attackers. Cybercrime is a multi-billion-dollar organized criminal industry, and ChatGPT is going to be used to help smart criminals get smarter and dumb criminals get more effective with their phishing attacks.
Effective, existing security awareness and behavior change programs protect against AI-augmented phishing attacks. Within your holistic cybersecurity strategy, be sure to focus on your people and their email behavior, because that is what our adversaries are doing with their new AI tools. Embed security as a shared responsibility throughout the organization with ongoing training that enables users to spot suspicious messages and rewards them for reporting threats, until human threat detection becomes a habit. If a message conveys great urgency, promising a reward you might miss or consequences for failing to act, be immediately cautious. Urgency is a key emotion that social engineers prey upon to induce action."