Introduction

The TAKE IT DOWN Act, passed by Congress on April 29, 2025 [1] [3] [5], represents a significant legislative effort to address the challenges posed by non-consensual intimate imagery (NCII), including AI-generated deepfakes [4] [5]. This bipartisan legislation criminalizes the publication and distribution of such content, filling a critical gap in existing laws and providing new protections for victims.

Description

On April 29, 2025 [5], Congress passed the TAKE IT DOWN Act [5], formally known as the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act [3]. This bipartisan legislation [1], which received overwhelming support with a 409-2 vote in the House following unanimous consent in the Senate, criminalizes the knowing publication and distribution of non-consensual intimate imagery (NCII) [2], including AI-generated deepfakes [4] [5]. It establishes a new federal offense for distributing materially altered media depicting identifiable individuals without their consent [4], addressing a critical gap in existing laws [2]. The act defines “digital forgery” as any intimate visual depiction created through software [3], machine learning [3], artificial intelligence [1] [3], or other technological means that is indistinguishable from an authentic depiction [3].

The legislation was inspired by the experiences of two teenagers who faced significant challenges in having their deepfake images removed from social media platforms. It aims to empower victims and hold online platforms accountable while safeguarding lawful speech [1]. The act mandates that websites and social media platforms remove reported deepfake pornography within 48 hours of receiving notice from victims, with oversight from the Federal Trade Commission (FTC) for non-compliance [4]. This requirement creates an exception to the protections offered by Section 230 of the Communications Decency Act, which may complicate compliance for smaller websites lacking robust content moderation systems. Although some reports describe a right for victims to sue creators [2], distributors [2] [4], and noncompliant platforms for damages [2], the act itself does not provide a private right of action, so victims must rely on prosecutors or the FTC for enforcement [4].
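The 48-hour removal window described above is, in practice, a compliance deadline that platforms must track per notice. As a purely illustrative sketch (the class, field names, and workflow here are assumptions for illustration, not anything specified by the statute), a platform's takedown queue might model each valid notice with its statutory deadline like this:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical illustration of the act's 48-hour removal window,
# modeled as a simple per-notice deadline tracker. All names and
# structure here are assumptions, not statutory requirements.
REMOVAL_WINDOW = timedelta(hours=48)

@dataclass
class TakedownRequest:
    content_id: str
    received_at: datetime  # when the platform received a valid notice (UTC)

    @property
    def deadline(self) -> datetime:
        # Removal must occur within 48 hours of receiving the notice.
        return self.received_at + REMOVAL_WINDOW

    def is_overdue(self, now: datetime) -> bool:
        # True once the statutory window has elapsed without removal.
        return now > self.deadline

req = TakedownRequest("img-123", datetime(2025, 6, 1, 9, 0, tzinfo=timezone.utc))
print(req.deadline.isoformat())  # 2025-06-03T09:00:00+00:00
print(req.is_overdue(datetime(2025, 6, 3, 10, 0, tzinfo=timezone.utc)))  # True
```

Using timezone-aware timestamps avoids ambiguity about when a notice was received, which matters when a fixed statutory clock is running.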

Noncompliance with these notice and takedown obligations may constitute a violation of the Federal Trade Commission Act [3], particularly its prohibition on unfair or deceptive acts or practices [3]. The act establishes a “reasonable person” test for determining what qualifies as NCII [5], with penalties of up to five years in prison for knowingly publishing such material [5], and harsher treatment where the victims are minors [2]. Critics have raised concerns about the act’s broad scope [4], particularly the undefined term “materially altered,” which could lead to over-removal of lawful content [4]. In addition, the absence of an appeal mechanism for users facing takedown decisions raises due process and viewpoint discrimination concerns [4].

The TAKE IT DOWN Act represents the first substantial regulation of AI-generated content in the US and has garnered support from over 120 companies and organizations, including major tech firms like Meta and Google [1]. Advocates emphasize the urgent need for legal protections against the misuse of artificial intelligence in creating deepfake pornography [1], which poses significant threats to the privacy and security of individuals [1], particularly women and children [1]. Supportive legal experts contend that these deepfakes fall outside First Amendment protections because of their severe invasion of privacy and personal dignity [2], reinforcing the law’s robustness against potential free-speech challenges [2].

In conjunction with the act [5], an Executive Order was signed to advance AI education for American youth [5], aiming to foster interest and expertise in AI and to maintain the US’s global technological leadership [5]. This order establishes a White House Task Force on AI Education [5], which will include various cabinet members and focus on enhancing AI training for educators and promoting apprenticeships in the field [5]. The act represents a significant legislative step in regulating AI and its societal impacts [4], and potential future bills may address content provenance and user consent [4]. Together, these measures reflect a growing consensus on the need to address digital harms, shaped by public outrage over high-profile cases of synthetic imagery [4].

Conclusion

The TAKE IT DOWN Act marks a pivotal moment in the regulation of AI-generated content, setting a precedent for future legislative efforts. By criminalizing the distribution of non-consensual intimate imagery and holding platforms accountable, the act aims to protect individuals’ privacy and dignity. Its passage underscores the urgent need for comprehensive legal frameworks to address the evolving challenges posed by artificial intelligence, while also highlighting the importance of fostering AI education to ensure responsible technological advancement.

References

[1] https://problemsolverscaucus.house.gov/media/press-releases/house-passes-problem-solvers-caucus-endorsed-take-it-down-act
[2] https://thearchivist24.com/2025/05/01/house-approves-take-it-down-act-to-combat-deepfake-revenge-imagery/
[3] https://www.govtrack.us/congress/bills/119/s146/text
[4] https://legalnotlegal.com/take-it-down-act-legal-analysis/
[5] https://www.jdsupra.com/legalnews/congress-passes-ai-deepfake-law-trump-8440170/