Introduction

In recent years, numerous states have enacted legislation to address the misuse of artificial intelligence, particularly concerning the creation and distribution of deepfake content. These laws aim to protect individuals from intimidation, harassment, and unauthorized use of their likenesses, with a focus on sexually explicit and politically misleading content.

Description

New laws in 17 states prohibit the use of artificial intelligence to intimidate, bully, threaten, or harass individuals through electronic communications [2]. These states are California, Connecticut [1], Florida [3], Hawaii, Illinois, Louisiana, Massachusetts, Mississippi, New Jersey, New York [3], North Carolina, Oklahoma, Rhode Island, Texas, Utah, Washington, and Wyoming [2]. Some laws specifically target AI-generated deepfakes, while others address broader issues of manipulated media that misrepresents individuals without their consent, with a particular focus on sexually explicit content [2]. States such as Georgia, Hawaii, New York [3], and Virginia have expanded existing revenge porn laws to cover deepfakes, while Florida, Minnesota [1], and Texas have introduced new statutes addressing sexually explicit deepfakes [2].

In New York, laws protect individuals from the nonconsensual distribution of deepfake content that harms their emotional, financial, or physical welfare [3]. New York Civil Rights Law § 52-C and New York Penal Law § 245.15 (2024) prohibit the nonconsensual distribution of sexually explicit images, including those created or altered through digitization, and allow individuals to pursue private legal action [3]. Florida’s Section 836.13 criminalizes the willful and malicious promotion of altered sexual depictions of identifiable persons without consent, classifying the offense as a third-degree felony [3]. Florida law also expands the definition of child pornography to include digitally altered images of minors engaged in sexual conduct, allowing civil actions that can result in significant monetary damages and attorney’s fees [3].

Certain states, including Minnesota and Texas, have enacted laws prohibiting the distribution of deepfakes that could harm a political candidate’s reputation or mislead voters [2]. States such as California, Michigan, and Washington allow exemptions or affirmative defenses for deepfakes in political advertisements that include a disclaimer [2]. The requirements for malicious intent and the resulting liabilities vary by state: some impose criminal penalties, while others, such as California, Florida [3], Illinois, and Minnesota [1], provide only civil or injunctive relief [2].

States are also updating their “right of publicity” laws to protect individuals’ names, images, and likenesses from unauthorized commercial use [2]. New York revised its right of publicity statute in 2021 to extend protections to deceased celebrities and performers [2]. Tennessee passed the Ensuring Likeness Voice and Image Security Act (ELVIS Act), which safeguards against unauthorized use of a person’s voice or its simulation [2].

As of the end of 2024, at least 50 deepfake-related bills had been enacted [2]. Notable examples include Alabama criminalizing the creation of private images without consent, California allowing individuals to report digital identity theft involving deepfakes, and Florida requiring disclaimers on political advertisements that use deepfakes [2]. Iowa prohibits the sexual exploitation of minors through misleading images or videos, while Louisiana criminalizes the dissemination of AI-created images [2]. South Dakota expands the definition of child pornography to include AI-generated content, Tennessee protects personal rights in names and likenesses, and Utah updates the definition of counterfeit intimate images to include AI-generated content [2].

In Connecticut, Senate Bill 1440 has been introduced to criminalize the dissemination of AI-generated sexual images without the depicted individual’s consent, addressing the misuse of artificial intelligence in creating non-consensual intimate images [1]. This effort parallels initiatives in other states, such as Minnesota’s proposed ban on “nudification” technology and California’s recent laws prohibiting the distribution of AI-generated sexual images [1]. The Connecticut bill, however, does not impose liability on social media companies for the spread of such images [1]. Separately, a broader proposal by Sen. James Maroney seeks to regulate AI use across various sectors and address algorithmic discrimination, although it currently lacks support from the governor’s administration [1].

At the federal level, no law specifically bans deepfake images, but the Digital Millennium Copyright Act (DMCA) is used to combat deepfake pornography, and federal child pornography laws may apply when the images involve minors [3]. Individuals can also recover damages under 15 U.S.C. § 6851 for the unauthorized dissemination of nude images [3]. Proposed federal legislation includes the DEFIANCE Act, which would enable victims to sue creators of deepfakes made without consent, and the Kids Online Safety Act (KOSA), which would impose a duty of care on tech platforms with respect to minors [3]. The DEEPFAKES Accountability Act of 2023 would mandate digital watermarking of deepfake content and criminalize the failure to identify malicious deepfakes [3].

Since 2019, state legislatures have enacted at least 50 new laws addressing the harms of AI-generated deepfakes [2]. In 2025 alone, legislators in 18 states introduced 38 bills related to AI-generated imagery, reflecting active state-level responses to the challenges posed by AI deepfakes amid a stalled Congress [2]. Mitigating the risks of deepfake pornography requires proactive legal planning, adherence to evolving regulatory frameworks, and pursuit of compensation avenues for victims [3]. Early intervention and legal counsel are crucial, particularly given the significant distress that altered images cause within communities, especially for women and girls [3]. Ongoing legislative discussions reflect the complexity of these emerging legal challenges related to AI technology [1].

Conclusion

The legislative measures taken by various states underscore the growing recognition of the potential harms posed by AI-generated deepfakes. These laws aim to protect individuals’ rights and reputations, address the misuse of technology, and provide avenues for legal recourse. As technology continues to evolve, ongoing legislative efforts will be essential to mitigate the risks associated with deepfake content and ensure the protection of individuals’ privacy and dignity.

References

[1] https://ctmirror.org/2025/03/10/ct-ai-intimate-images-bill/
[2] https://www.transparencycoalition.ai/news/deepfake-how-state-lawmakers-are-acting-to-stop-deepfakes
[3] https://www.buzko.legal/content-eng/an-overview-of-deepfake-laws-in-select-us-jurisdictions