Foreign adversaries such as China, Russia, and Iran pose a significant threat to US elections through various means, including the use of generative AI tools [1] [2] [3].


These nations have a history of disrupting US elections through espionage [2] [3], hack-and-leak campaigns [2], and the malicious use of generative AI. In the 2024 US election cycle, they are expected to target election infrastructure and political campaign assets [2], creating and spreading disinformation to undermine public confidence [2]. Even unsophisticated actors can now interfere with elections using AI, as demonstrated when a local magician produced fake robocalls impersonating a candidate.

Influence operations by these adversaries employ tactics such as phony social media profiles, typosquatting [1], manufactured evidence of cybersecurity incidents [1], and voice cloning [1]. Recent Treasury Department sanctions on Russian companies highlighted the use of fake news websites that impersonate legitimate European news outlets. Russian efforts have focused on undermining US support for Ukraine [1], while Chinese operatives have used fake social media personas to gather information on US domestic issues and the political themes dividing voters. Generative AI lowers the barrier for these threat actors to create convincing misleading content, further eroding public confidence [1].


Countering these threats and protecting democracy in the digital age requires increased awareness, stronger cybersecurity measures, and collaboration among stakeholders [3]. Because foreign interference through generative AI tools can significantly affect US elections, proactive measures are needed to safeguard the integrity of the electoral process. As the technology continues to evolve, it is crucial to stay vigilant and adapt defensive strategies to emerging threats.