The Federal Communications Commission (FCC) has taken action against the potential risks of AI-generated deepfake robocalls targeting US elections. The move aims to prevent scams and stop voters from being misled ahead of the upcoming US presidential election [1].
Recent events have prompted the FCC to unanimously approve a measure making AI robocalls illegal, specifically targeting those made with AI voice-cloning tools or “deepfakes.” This decision comes in response to an investigation that linked two Texas companies, Life Corp and Lingo Telecom [1], to robocalls that used AI to mimic President Joe Biden’s voice and targeted up to 25,000 New Hampshire voters [2].

FCC Chairwoman Jessica Rosenworcel has emphasized the need to protect vulnerable individuals from bad actors who use AI-generated voices to extort, imitate celebrities [1], and misinform voters [1]. The new ruling classifies AI-generated voices in robocalls as “artificial” and subject to the same standards as other automated calls [1]. Violators can face fines of over $23,000 per call [1], and the FCC now has the power to fine companies that use AI voices in their robocalls and block the service providers that carry them [1]. Recipients of these robocalls can also file lawsuits against the responsible companies [1].

By making the use of voice-cloning technology in robocalls fundamentally illegal, the FCC's ruling will make it easier to charge the operators behind these frauds [2]. The move is part of the FCC's broader effort to crack down on robocallers interfering in elections and to protect the integrity of the US electoral process.

The rise of “coordinated inauthentic behavior” networks [3], fueled by AI-generated content and fake news outlets [3], poses a threat to public trust in the electoral process [3]. Additionally, AI is being used to craft more believable and targeted phishing campaigns [3], which could be turned toward election-related attacks [3]. The adoption of AI lowers the barrier to entry for launching such attacks [3], increasing the volume of attempts to infiltrate election campaigns or impersonate candidates [3]. The rapid evolution of AI technology makes the risk substantial [3], and microtargeting with AI-generated content could proliferate on social media platforms [3], leading to increased polarization and the spread of “bespoke realities” among US citizens [3].
The FCC's action against AI-generated deepfake robocalls is a crucial step toward protecting the integrity of US elections. By making these robocalls illegal and imposing significant fines, the FCC aims to deter bad actors from using AI voices to deceive and manipulate voters. However, the rapid advance of AI technology poses ongoing challenges, as it enables ever more sophisticated and targeted attacks. Mitigating these risks will require continued vigilance and regulation to preserve the trustworthiness of the electoral process.