In this blog post, we examine the emerging use of generative AI, including OpenAI’s ChatGPT and the cybercrime tool WormGPT, in Business Email Compromise (BEC) attacks. Drawing on real cases from cybercrime forums, the post covers the mechanics of these attacks, the risks posed by AI-driven phishing emails, and the advantages generative AI gives attackers. If you’d like to learn more and protect your organisation from advanced BEC attacks, contact us here.
How Generative AI is Revolutionising BEC Attacks
The progression of artificial intelligence (AI) technologies, such as OpenAI’s ChatGPT, has introduced a new vector for business email compromise (BEC) attacks. ChatGPT, a sophisticated AI model, generates human-like text based on the input it receives. Cybercriminals can use such technology to automate the creation of highly convincing fake emails, personalised to the recipient, thus increasing the chances of success for the attack.
Consider the first image above, where a recent discussion thread unfolded on a cybercrime forum. In this exchange, a cybercriminal showcased the potential of harnessing generative AI to refine an email that could be used in a phishing or BEC attack. They recommended composing the email in one’s native language, translating it, and then feeding it into an interface like ChatGPT to enhance its sophistication and formality. The implication is stark: even attackers lacking fluency in a target language can now fabricate persuasive emails for phishing or BEC attacks.
Moving on to the second image above, we’re now seeing an unsettling trend among cybercriminals on forums, evident in discussion threads offering “jailbreaks” for interfaces like ChatGPT. These increasingly common “jailbreaks” are carefully crafted prompts designed to manipulate interfaces like ChatGPT into generating output that might involve disclosing sensitive information, producing inappropriate content, or even executing harmful code. The proliferation of such practices underscores the rising challenges in maintaining AI security in the face of determined cybercriminals.
Finally, in the third image above, we see that malicious actors are now creating their own custom modules similar to ChatGPT, but purpose-built for nefarious use, and advertising them to fellow bad actors. The growing sophistication and accessibility of these tools illustrate how much harder AI is making the cybersecurity landscape to defend.
Uncovering WormGPT: A Cybercriminal’s Arsenal
Our team recently gained access to a tool known as “WormGPT” through a prominent online forum that’s often associated with cybercrime. This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities.
WormGPT is an AI module based on the GPT-J language model, which was released in 2021. It boasts a range of features, including unlimited character support, chat memory retention, and code formatting capabilities.
As depicted above, WormGPT was allegedly trained on a diverse array of data sources, particularly concentrating on malware-related data. However, the specific datasets utilised during the training process remain confidential, as decided by the tool’s author.
As you can see in the screenshot above, we conducted tests focusing on BEC attacks to comprehensively assess the potential dangers associated with WormGPT. In one experiment, we instructed WormGPT to generate an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice.
The results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks.
In summary, it’s similar to ChatGPT but has no ethical boundaries or limitations. This experiment underscores the significant threat posed by generative AI technologies like WormGPT, even in the hands of novice cybercriminals. To protect your organisation from BEC attacks crafted with tools like WormGPT, contact us here.
Benefits of Using Generative AI for BEC Attacks
So, what specific advantages does generative AI offer attackers conducting BEC campaigns?
Exceptional Grammar: Generative AI can create emails with impeccable grammar, making them seem legitimate and reducing the likelihood of being flagged as suspicious.
Lowered Entry Threshold: The use of generative AI democratises the execution of sophisticated BEC attacks. Even attackers with limited skills can use this technology, making it an accessible tool for a broader spectrum of cybercriminals.
Ways of Safeguarding Against AI-Driven BEC Attacks
The growth of AI, while beneficial, also opens new attack vectors, so implementing strong preventative measures is crucial. Here are some strategies you can employ:
BEC-Specific Training: Companies should develop extensive, regularly updated training programs aimed at countering BEC attacks, especially those enhanced by AI. Such programs should educate employees on the nature of BEC threats, how AI is used to augment them, and the tactics employed by attackers. This training should also be incorporated as a continuous aspect of employee professional development.
Enhanced Email Verification Measures: To fortify against AI-driven BEC attacks, organisations should enforce stringent email verification processes. These include implementing systems that automatically alert when emails originating outside the organisation impersonate internal executives or vendors, and using email systems that flag messages containing specific keywords linked to BEC attacks like “urgent”, “sensitive”, or “wire transfer.” Such measures ensure that potentially malicious emails are subjected to thorough examination before any action is taken.
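The flagging logic described above can be sketched in a few lines of code. The following is a minimal illustration, not a production filter: the keyword list, executive names, and internal domain are hypothetical placeholders, and a real deployment would combine this with sender authentication (SPF/DKIM/DMARC) and human review rather than keywords alone.

```python
# Illustrative sketch of two BEC checks: keyword flagging and
# external senders impersonating internal executives.
# All names, keywords, and domains below are placeholder assumptions.

EXECUTIVES = {"jane doe", "john smith"}                  # hypothetical internal executives
BEC_KEYWORDS = ["urgent", "sensitive", "wire transfer"]  # common BEC pressure phrases
INTERNAL_DOMAIN = "example.com"                          # hypothetical company domain


def flag_email(sender_address: str, display_name: str,
               subject: str, body: str) -> list[str]:
    """Return a list of reasons this email deserves extra scrutiny."""
    reasons = []
    text = f"{subject} {body}".lower()

    # Flag messages containing keywords commonly linked to BEC attacks.
    for keyword in BEC_KEYWORDS:
        if keyword in text:
            reasons.append(f"keyword: {keyword}")

    # Flag external senders whose display name matches an internal executive.
    is_external = not sender_address.lower().endswith("@" + INTERNAL_DOMAIN)
    if is_external and display_name.strip().lower() in EXECUTIVES:
        reasons.append("external sender impersonating internal executive")

    return reasons
```

A flagged message would not be blocked outright; it would simply be routed for the “thorough examination” described above before any payment or data request is acted on.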
Test Your Security Efficacy
To test your security efficacy, learn about WormGPT, and stop BEC attacks launched with tools like WormGPT, reach out to a SlashNext expert.
About the Author
Daniel Kelley is a reformed black hat computer hacker who collaborated with our team at SlashNext to research the latest threats and tactics employed by cybercriminals, particularly those involving BEC, phishing, smishing, social engineering, ransomware, and other attacks that exploit the human element.