Introduction
The “Take It Down” Act is a landmark federal law aimed at combating the creation and distribution of nonconsensual sexually explicit images, including AI-generated deepfake pornography [2] [4]. The legislation addresses the growing problem of revenge pornography and provides a legal framework to protect individuals from the unauthorized use of their likeness in misleading digital content.
Description
The “Take It Down” Act [1] [2], a new federal law [2] [4] [5] [7], criminalizes the creation and distribution of sexually explicit images without the consent of the depicted individual, including AI-generated deepfake pornography [2] [4]. The legislation addresses the rising problem of revenge pornography by specifically targeting the distribution of nonconsensual intimate images. It is significant as the first federal law to explicitly confront the risks posed by misleading images, videos, or audio created without consent [3], risks that particularly affect students and educators [2] [4].
The Act provides a nationwide remedy for victims struggling to remove such content online, allowing them to seek recourse against both the publishers and the online platforms hosting the material [2]. It mandates that social media platforms remove nonconsensual content within 48 hours of receiving notice from victims or their representatives [7]. The law criminalizes the publication, or threat of publication, of sexually explicit images [1], including actual images of children in nude and exploitative situations [6], filling a critical gap in existing law and responding to the growing problem of deepfake harassment.
The law defines “authentic intimate visual depictions” and “digital forgeries,” drawing specific distinctions between adults and minors [4]. It prohibits the knowing publication of both authentic images and digital forgeries intended to cause psychological, financial, or reputational harm to identifiable individuals [2] [4]. For adults, actionable digital forgeries must have been published without consent [2] [4]; for minors, the same criteria apply to forgeries as to authentic depictions [2].
Exceptions to liability include lawful investigations, medical education, scientific purposes, and disclosures by the depicted individual [2] [4]. Violations carry significant prison sentences: up to two years for content involving adults and up to three years for content involving minors [4]. The Act also criminalizes threats related to these depictions [2]. Its enforcement scheme raises questions about Section 230 protections, particularly regarding the Federal Trade Commission’s role and the liability of platforms for hosting or removing nonconsensual intimate imagery [5].
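As a rough illustration of how these criteria fit together, the sketch below encodes the summary above as a single decision function. It is a hypothetical model, not the statute’s actual elements or defenses, and every type and field name is invented for illustration.

```python
from dataclasses import dataclass
from enum import Enum, auto

class DepictionKind(Enum):
    AUTHENTIC = auto()        # an "authentic intimate visual depiction"
    DIGITAL_FORGERY = auto()  # e.g., an AI-generated deepfake

@dataclass
class Publication:
    kind: DepictionKind
    subject_is_minor: bool
    with_consent: bool        # whether the depicted individual consented
    intent_to_harm: bool      # psychological, financial, or reputational
    covered_exception: bool   # lawful investigation, medical education, etc.

def max_prison_term_years(p: Publication) -> int:
    """Return 0 if the publication appears non-actionable under the summary
    above, otherwise the maximum prison term. Illustrative only: the
    statute's real elements are more detailed than this sketch."""
    if p.covered_exception:
        return 0
    # For adults, publication is actionable only if it occurred without
    # consent; for minors, the same criteria apply to forgeries as to
    # authentic depictions, and consent does not legitimize publication.
    if not p.subject_is_minor and p.with_consent:
        return 0
    if not p.intent_to_harm:
        return 0
    return 3 if p.subject_is_minor else 2
```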
A critical provision requires covered platforms, by May 19, 2026, to establish a clear and accessible process through which victims can report such content and request its removal. Compliance will be overseen by the FTC, which will enforce the notice-and-removal requirements [5]. Platforms are granted immunity for good-faith removals, even if the content is later found to be lawful [5]. However, critics warn that the notice-and-removal process could be misused in ways that infringe on free speech and produce unintended consequences, including challenges to end-to-end encryption, and that the law lacks necessary safeguards for free expression, user privacy, and due process [6] [7].
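To make the compliance timeline concrete, here is a minimal sketch of how a platform might track the 48-hour removal window for incoming reports. It is a hypothetical model under the assumptions noted in the comments; none of the names come from the Act itself or from any real platform’s API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Statutory window: removal within 48 hours of receiving a valid notice.
REMOVAL_WINDOW = timedelta(hours=48)

@dataclass
class TakedownRequest:
    content_id: str
    reporter: str  # the victim or an authorized representative
    received_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    @property
    def deadline(self) -> datetime:
        return self.received_at + REMOVAL_WINDOW

    def is_overdue(self, now: datetime | None = None) -> bool:
        return (now or datetime.now(timezone.utc)) > self.deadline

# Example: process pending requests in order of earliest deadline first.
queue = [
    TakedownRequest(content_id="post-123", reporter="rep@example.com"),
    TakedownRequest(content_id="post-456", reporter="victim@example.com"),
]
for req in sorted(queue, key=lambda r: r.deadline):
    print(req.content_id, "remove by", req.deadline.isoformat())
```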
Because the notice-and-removal requirements will not take effect for up to a year, the law’s effectiveness against revenge pornography and AI-generated deepfakes remains to be seen [5], as does the FTC’s capacity to monitor compliance, especially in light of recent budget cuts [3]. Advocacy by affected youth and their families contributed significantly to the law’s passage [7], reflecting a strong bipartisan consensus on the urgency of addressing nonconsensual deepfake pornography [1]. Schools are encouraged to take proactive measures to comply with the law and protect their communities, as the Act’s provisions extend to faculty and parents facing threats of nonconsensual explicit content [2]. The law reflects a broader recognition of deepfakes as a serious societal threat and has drawn support from major technology companies [1].
Conclusion
The “Take It Down” Act represents a significant step forward in addressing the challenges posed by nonconsensual explicit content and deepfake technology. By establishing clear legal consequences and mandating swift action from online platforms, the law aims to protect individuals from the psychological, financial, and reputational harm such content causes [2]. Its success, however, will depend on effective enforcement and on striking a balance between protecting victims and safeguarding free speech and privacy rights. The Act underscores the importance of continued vigilance and adaptation in the face of evolving digital threats.
References
[1] https://www.azoai.com/news/20250526/4-Essential-Insights-From-A-Deepfake-Expert-On-The-Take-It-Down-Act.aspx
[2] https://www.jdsupra.com/legalnews/new-federal-ai-deepfake-law-takes-2360005/
[3] https://www.lexology.com/library/detail.aspx?g=d8baef3e-a620-4ffc-bdaa-d6fbfb7087c4
[4] https://www.fisherphillips.com/en/news-insights/new-federal-ai-deepfake-law-takes-effect.html
[5] https://natlawreview.com/article/take-it-down-act-signed-law-offering-tools-fight-non-consensual-intimate-images-and
[6] https://www.nbcchicago.com/news/national-international/take-it-down-law-websites-remove-sexually-explicit-images/3750748/
[7] https://www.jdsupra.com/legalnews/the-bipartisan-take-it-down-act-is-now-4052124/