Introduction
The Take It Down Act, passed by the House of Representatives with overwhelming bipartisan support, represents a significant federal effort to combat the spread of non-consensual intimate images (NCII), including AI-generated deepfake pornography [1] [2] [6]. The legislation aims to protect individuals, particularly young teens, from the harmful effects of such content and provides a legal framework for victims to reclaim their dignity [7].
Description
On April 28, 2025, the House of Representatives passed the Take It Down Act by a 402-2 vote, following earlier approval in the Senate [3] [9]. The legislation, introduced by Senators Ted Cruz and Amy Klobuchar and supported by First Lady Melania Trump, criminalizes the use of a person’s likeness to create and distribute non-consensual intimate images (NCII), including both real and AI-generated deepfake pornography [2] [4]. The Act addresses the increasing prevalence of AI-generated explicit content that can harm individuals, particularly young teens [7], and aims to give victims a means to reclaim their dignity, especially in light of the experiences of teenagers harmed by the circulation of fake nude images.
The legislation mandates that technology platforms, including social media applications, remove such content within 48 hours of a valid request from a victim [2] [7]. Importantly, it clarifies that a victim’s consent to the creation of an image does not extend to its publication, and platforms are required to make reasonable efforts to eliminate copies of reported images, ensuring a swift response to victims’ requests [5]. The Act applies to “covered platforms,” defined as websites and applications that serve the public with user-generated content, but does not extend to Internet service providers or email platforms [1]. Violators face mandatory restitution and criminal penalties, with prison sentences of up to two years for offenses involving adults and up to three years for those involving minors [1]. The Federal Trade Commission is designated to enforce these provisions, ensuring compliance while aligning with First Amendment standards [5]. The Act also stipulates that computer-generated NCII falls under the law when a “reasonable person” would find it indistinguishable from an authentic image [5].
While many states have already enacted laws against non-consensual intimate imagery, the Take It Down Act represents a significant federal effort to regulate internet companies in this area [9]. Critics, including free speech advocates and digital rights organizations, have raised concerns that the bill’s broad language could lead to the censorship of legitimate content, such as legal pornography and LGBTQ material [1] [4] [6]. The takedown provision covers a wider range of content than the specific definitions of NCII outlined in the legislation, pressuring platforms to actively monitor even encrypted speech to mitigate liability risks [6]. Some lawmakers warn that the provisions could be misused, leading to unintended consequences, including the possibility of partisan abuse [4].
The urgency of such legal protections was underscored by the case of a Texas teenager victimized by a fake nude image circulated by a classmate, which highlighted the need for accountability for these abuses [8].
At the state level, lawmakers are also working on measures to safeguard against AI-generated content [9]. For instance, California’s Senate Bill 11 seeks to incorporate AI-manipulated images and videos into existing right of publicity laws and criminal impersonation statutes, and would require technology providers to inform consumers about their potential legal liabilities when using such technology [9].
Since 2019, numerous states have introduced legislation targeting the harms posed by AI-generated deepfakes, with 38 bills introduced in 18 states in 2025 alone [9]. These legislative efforts reflect growing concerns about the exploitation, identity theft, and misinformation risks associated with AI deepfakes, underscoring the need for comprehensive legal frameworks to address these challenges [9].
Conclusion
The Take It Down Act marks a pivotal step in addressing the challenges posed by non-consensual intimate images and AI-generated deepfakes. By establishing a federal framework, the Act seeks to protect individuals from exploitation and provide a means for victims to seek justice. However, the legislation also raises concerns about potential overreach and the balance between protecting individuals and preserving free speech. As states continue to develop their own measures, the ongoing dialogue around these issues will be crucial in shaping effective and balanced legal responses.
References
[1] https://petapixel.com/2025/04/29/house-overwhelmingly-passes-take-it-down-act-aimed-at-deepfakes/
[2] https://www.cbsnews.com/news/house-take-it-down-act-vote-deepfake-pornography-victims/
[3] https://partners.time.com/7277746/ai-deepfakes-take-it-down-act-2025/
[4] https://cyberscoop.com/take-it-down-act-passes-house-first-amendment-encryption/
[5] https://www.commerce.senate.gov/2025/4/take-it-down-act-passes-the-house-heads-to-president-trump-s-desk
[6] https://www.syracuse.com/us-news/2025/04/congress-passes-take-it-down-act-targeting-revenge-porn-and-ai-deepfakes.html
[7] https://www.usatoday.com/story/news/politics/2025/04/28/congress-deepfake-bill-melania-trump/83229060007/
[8] https://www.nbcnews.com/politics/congress/house-passes-bipartisan-bill-combat-explicit-deepfakes-trump-rcna203316
[9] https://www.transparencycoalition.ai/news/congress-passes-take-it-down-act-with-strong-bipartisan-support