A research project conducted by Chief People Hacker Stephanie “Snow” Carruthers at IBM X-Force compared the effectiveness of AI-generated phishing emails with that of emails crafted by human social engineers. The study aimed to assess the potential of AI in phishing attacks and to highlight the need for organizations to strengthen their cybersecurity measures.

Description

The study took place at a global healthcare company in Canada [1] [6], with two other organizations declining to participate over concerns about how effective the phishing emails might be [6]. The research found that AI models such as ChatGPT [1] were able to generate highly convincing phishing emails in just five minutes using five simple prompts [5], a significant time saving compared to the roughly 16 hours it took the human social engineers to craft similar emails.

Although the AI-generated emails had a slightly lower click rate than the human-crafted ones, employees also reported them as suspicious more frequently [1] [6]. Factors such as emotional intelligence, personalization [3], and concise subject lines were found to make employees more likely to click on phishing emails [3].

The study recommends that businesses verify suspicious emails through direct communication [4], revamp training modules [4], and strengthen identity and access management systems to improve their defenses against evolving threats. Security leaders have expressed concern that generative AI could enable more sophisticated email attacks [2], as its rapid growth and adoption continue to make headlines in the cybersecurity field.

Conclusion

This research highlights the potential of AI in phishing attacks and underscores the need for organizations to enhance their cybersecurity measures. By understanding the factors that lead employees to click on phishing emails [3], businesses can take proactive steps to mitigate the risks. Verifying suspicious emails through direct communication [4], revamping training modules [4], and strengthening identity and access management systems are the recommended strategies for hardening defenses against evolving threats [4].

As generative AI continues to advance, concerns remain about its ability to produce more sophisticated email attacks. It is crucial for organizations to stay vigilant and adapt their cybersecurity strategies accordingly. Phishing remains the most common infection vector for cybersecurity incidents [6], making it essential for businesses to prioritize robust defenses against this threat.

References

[1] https://cyber.vumetric.com/security-news/2023/10/24/generative-ai-can-write-phishing-emails-but-humans-are-better-at-it-ibm-x-force-finds/
[2] https://www.csoonline.com/article/656698/generative-ai-phishing-fears-realized-as-model-develops-highly-convincing-emails-in-5-minutes.html
[3] https://securityboulevard.com/2023/10/ibm-chatgpt-generated-can-write-convincing-phishing-emails/
[4] https://siliconangle.com/2023/10/24/ibm-study-indicates-near-parity-human-ai-phishing-attempts/
[5] https://dnyuz.com/2023/10/24/ibm-x-force-pits-chatgpt-against-humans-whos-better-at-phishing/
[6] https://www.mrpumpkinstechblog.com/therepublic/generative-ai-can-write-phishing-emails-but-humans-are-better-at-it-ibm-x-force-finds/