Introduction

The United Kingdom is introducing new legislation to combat the alarming rise in online child sexual abuse (CSA), with a particular focus on the misuse of artificial intelligence (AI) to generate child sexual abuse material (CSAM). The legislation aims to tackle the growing prevalence of AI-generated abuse material and to strengthen enforcement capabilities so that children are better protected from exploitation.

Description

A new offence is being introduced in the UK to address the alarming rise in online child sexual abuse (CSA), with reported imagery having increased by 830% since 2014. The legislation criminalizes the adaptation, possession, supply, or offering of child sexual abuse image generators [1] [2] [4] [6] [7], with penalties of up to five years in prison [3] [4] [6] [7], making it illegal to create, possess, or distribute AI models specifically designed to generate child sexual abuse material (CSAM). It also targets “paedophile manuals” that instruct offenders on using AI for child exploitation, an offence carrying a potential three-year prison sentence [4]. The definition of CSAM aligns with existing legislation, encompassing photographs, pseudo-photographs, and prohibited images [6], and extends to non-consensual sexually explicit deepfake images [1].

The legislation also amends the existing offence relating to “paedophile manuals” to cover pseudo-photographs and prohibited images [6], ensuring a consistent approach to both real and AI-generated abuse imagery [6]. A further new offence criminalizes internet activities intended to facilitate child sexual abuse [6], such as maintaining CSA sites and providing grooming advice, punishable by up to ten years in prison [4].

To enhance enforcement capabilities [6], a new criminal misuse team will be established to investigate a range of crime and security issues [3], with a particular focus on the use of AI to create CSAM [3]. This initiative forms part of the broader Crime and Policing Bill [4], reflecting a commitment to tackling child sexual abuse as technologies evolve and AI-generated content becomes more prevalent [6]. The Labour Party has also committed to introducing binding regulations for companies developing advanced AI models to ensure their safe development and use [5], including a prohibition on the creation of sexually explicit deepfakes [5].

The bill also addresses the challenges posed by digital devices [6], which often store CSAM behind passcodes or biometric locks [6]. Border Force officers are granted the authority to compel individuals suspected of posing a sexual risk to children to unlock their devices for inspection [4], with non-compliance punishable by up to three years in prison [4]. The Child Abuse Image Database (CAID) has been developed to help identify known CSAM [6], and rapid scanning technology is now operational at UK borders to detect such material [6].

AI models are increasingly being exploited by offenders to generate photorealistic CSAM [6], which can depict real children and normalize child sexual abuse [6]. While creating, possessing, or distributing AI-generated CSAM is already illegal [1] [2] [3] [4] [5] [6] [7], fine-tuning AI models for this purpose is not currently prohibited [6]. Guides on generating AI CSAM have been discovered on dark web forums [6], where offenders share knowledge and techniques and often refer to creators of AI imagery as “artists,” indicating a troubling normalization of this behavior [7]. Although the volume of AI-generated images remains lower than that of non-AI images, it is rising rapidly, with over 20,000 AI-generated images reported in a single month on one dark web forum [7].

The legislation aims to combat the networking and commercialization of child sexual abuse material [6], which is facilitated by online forums that allow offenders to share techniques, evade detection, and monetize their activities [6]. The National Crime Agency has identified a significant number of individuals with a sexual interest in children [6], many of whom operate online and perceive AI-generated CSAM as a victimless crime, failing to recognize the harm it causes: such images can be derived from real, innocent photographs of children [7].

The bill emphasizes that the new offence will not criminalize AI developers but will instead target those who exploit AI technology to create CSAM [6]. Establishing intent to facilitate child sexual exploitation is crucial for a conviction under the new offence [6], and service providers whose platforms are used for such activities without their knowledge will not be held criminally responsible [6]. Additionally, there are calls for a comprehensive legal framework with civil law provisions [5], allowing victims to have harmful material removed from the internet without the need for a criminal conviction [5].

Generative AI tools and platforms will also fall under the UK Online Safety Act if they enable user interaction, such as the sharing of generative AI chatbots [2]. Ofcom has outlined requirements for these platforms, including conducting risk assessments, implementing measures to mitigate those risks, and allowing users to report illegal content and material harmful to children [2]. Further measures are anticipated to bolster victim support and to promote awareness and prevention efforts [4].

Conclusion

The introduction of this legislation marks a significant step in addressing the challenges posed by AI in the realm of child sexual abuse. By criminalizing the tools used to create AI-generated CSAM and strengthening enforcement measures, the UK aims to protect children from exploitation and to adapt to an evolving technological landscape. The bill also underscores the need for a comprehensive legal framework to support victims and to ensure the safe development and use of AI technologies.

References

[1] https://ineqe.com/2025/02/18/ai-generated-image-exploitation-school-guide/
[2] https://www.iwf.org.uk/news-media/blogs/global-leaders-and-ai-developers-can-act-now-to-prioritise-child-safety/
[3] https://www.holyrood.com/news/view,uk-government-unveils-new-plans-to-tackle-aipowered-child-abuse
[4] https://cyber-helpline.squarespace.com/helpline-blog/2025/2/24/uk-cracks-down-on-ai-generated-child-abuse-content
[5] https://publicpolicyexchange.co.uk/event.php?eventUID=OG30-PPE
[6] https://www.gov.uk/government/publications/crime-and-policing-bill-2025-factsheets/crime-and-policing-bill-child-sexual-abuse-material-factsheet
[7] https://theconversation.com/our-research-on-dark-web-forums-reveals-the-growing-threat-of-ai-generated-child-abuse-images-249067