Cybercriminals have increasingly turned to generative AI tools such as ChatGPT to craft sophisticated phishing messages and business email compromise (BEC) attacks [2]. These AI-assisted attacks have fueled rapid growth in phishing, with threat actors able to modify malware code and generate numerous variations of social engineering lures [2].
The launch of ChatGPT has coincided with a significant rise in malicious phishing emails, making it easier for attackers to launch targeted spear-phishing campaigns at scale [2]. Researchers have even discovered malicious chatbots such as WormGPT and FraudGPT, built specifically for BEC attacks and fraud [2]. There is also growing concern about AI “jailbreaks,” in which attackers manipulate AI chatbots into bypassing their safeguards, enabling the theft of personal data and other damaging intrusions [2]. AI-generated phishing messages can be highly convincing, mimicking the styles of trusted sources such as government agencies and financial services providers [2]. By analyzing a target’s past writings and publicly available information, cybercriminals can make their emails appear authentic and trustworthy, often referencing specific details to deceive their targets [2].

Despite these findings, research by Sophos indicates that cybercriminals on dark-web forums have expressed reservations about generative AI tools. While there were some posts related to AI, the majority focused on compromised ChatGPT accounts and ways to bypass language-model protections [1]. Some cybercriminals claimed to use ChatGPT derivatives for cyber-attacks, but many others were skeptical and suspected scams [1]. Attempts to create malware using language models were often rudimentary and met with skepticism [1]. There were also discussions about AI’s negative effects on society [1]. Overall, cybercriminals appear to have mixed reactions, leaning more toward skepticism than enthusiasm about using generative AI for attacks [1].
The use of generative AI tools by cybercriminals has contributed to a marked increase in both the volume and sophistication of phishing attacks. At the same time, cybercriminals themselves harbor doubts about these tools: while some have attempted to use generative AI for attacks, many remain skeptical and suspect scams [1]. Researchers and cybersecurity professionals must continue to monitor and mitigate the risks posed by AI-powered attacks, while also exploring ways to strengthen the security of language models and protect against AI “jailbreaks.” The future implications of generative AI in cybercrime remain uncertain, but it is clear that vigilance and proactive measures are necessary to combat this evolving threat.