Introduction

Since January 2024, OpenAI has disrupted more than 20 cybercriminal operations that misused its generative AI technologies for malicious purposes. These operations, largely run by foreign actors engaged in political influence and election interference, underscore growing concern over AI's role in spreading misinformation and conducting cyberattacks.

Description

Since January 2024, OpenAI has disrupted over 20 operations run by cybercriminals [6], in which foreign actors misused its generative AI technologies for malicious activities, particularly political influence and election interference. These efforts included debugging malware, generating deceptive content [6], and analyzing social media [2], against a backdrop of rising election-related misinformation [3], including a 900% year-over-year increase in deepfakes [6]. The US Department of Homeland Security has warned that Russia [6], Iran [4] [6], and China are leveraging AI tools to spread misinformation ahead of the upcoming US presidential election [6].

Among the identified threat actors was a suspected China-based adversary named SweetSpecter, which attempted a spear-phishing campaign against OpenAI employees by posing as a ChatGPT user [5]. The phishing emails carried an attachment designed to deploy "Windows malware known as SugarGh0st RAT," which would have allowed the adversary to gain control over compromised machines; however, OpenAI's spam filter successfully blocked these attempts [5].

Additionally, OpenAI monitored a group called CyberAv3ngers, suspected of links to the Iranian Islamic Revolutionary Guard Corps, which used ChatGPT for vulnerability research and scripting advice related to attacks on critical infrastructure, including water utilities in the US, Ireland [5], and Israel [2] [3] [5] [6]. The Iranian group STORM-0817 was observed developing and debugging Android malware and creating an Instagram scraper targeting journalists critical of the Iranian government. Iranian-linked actors were also observed mass-generating social media comments and articles on divisive topics [4], including the Gaza conflict and Venezuelan politics [4]. Furthermore, an Iranian influence operation named Storm-2035 created politically charged content for social media [1], blending it with lifestyle topics to appear more authentic [1]. Despite this activity [1] [3] [5] [6], the threat actors saw limited success [1], with low audience engagement reported across campaigns [1], including long-form articles and social media comments about the US election [3].

OpenAI emphasized that while threat actors are evolving and experimenting with its models [2], there has been no significant advancement in their ability to create new malware or build viral audiences [2]. The company noted that the techniques employed by these actors were largely achievable with publicly available resources, suggesting that their reliance on AI could make their operations more vulnerable to disruption [5]. OpenAI cited an election interference case that was quickly silenced due to adversaries’ over-reliance on AI tools [5].

Moreover, OpenAI moved swiftly against activity generating social media content about elections in the US, Rwanda [2] [3] [4] [6], India [2] [3] [6], and the European Union [2], banning accounts and addressing some covert operations within 24 hours [3]. An Israeli commercial company named STOIC (also known as Zero Zeno) was involved in generating social media comments about Indian elections [2], as previously reported by Meta and OpenAI [2]. None of these networks achieved viral engagement or sustained audiences [2].

OpenAI acknowledged that it cannot combat AI threats alone and stressed the need for collaboration to build defenses against state-linked cyber actors and covert influence operations [5]. The report called for continued investment in detection and investigation capabilities across the Internet to strengthen the information ecosystem [5]. OpenAI also noted that its models are likely to evolve [5], suggesting that future advancements could enable the reverse engineering and analysis of malicious attachments used in phishing campaigns [5]. The company has enhanced its threat detection capabilities [4], developing AI-powered tools that significantly reduce the time required for analytical tasks [4], as foreign actors increasingly use AI to tailor synthetic content [4].

Conclusion

OpenAI’s proactive measures have highlighted the potential vulnerabilities and threats posed by the misuse of AI technologies in cyber operations. While the company has successfully mitigated several threats, the evolving landscape of AI-driven cybercrime necessitates ongoing vigilance and collaboration. Future advancements in AI could further enhance the ability to detect and counteract malicious activities, underscoring the importance of continued investment in cybersecurity and information integrity.

References

[1] https://www.techtarget.com/searchSecurity/news/366613512/OpenAI-details-how-threat-actors-are-abusing-ChatGPT
[2] https://thehackernews.com/2024/10/openai-blocks-20-global-malicious.html
[3] https://www.cnbc.com/2024/10/09/openai-says-more-cyber-actors-using-its-platform-to-disrupt-elections.html
[4] https://cyberscoop.com/openai-threat-report-foreign-influence-generative-ai/
[5] https://arstechnica.com/tech-policy/2024/10/using-chatgpt-to-make-fake-social-media-posts-backfires-on-bad-actors/
[6] https://www.inkl.com/news/openai-report-reveals-over-20-ai-driven-election-disruption-attempts-this-year