UK Technology Firms and Child Protection Officials to Test AI's Capability to Generate Exploitation Images
Tech firms and child safety organizations will be granted authority to evaluate whether artificial intelligence systems can produce child abuse material under new UK laws.
Substantial Increase in AI-Generated Illegal Material
The announcement coincided with revelations from a protection watchdog showing that cases of AI-generated CSAM have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.
Updated Legal Structure
Under the amendments, the government will permit designated AI developers and child safety organizations to examine AI systems – the foundational models behind chatbots and image generators – to ensure they have adequate safeguards to prevent them from creating depictions of child sexual abuse.
"This is ultimately about preventing exploitation before it happens," stated Kanishka Narayan, adding: "Specialists, under strict protocols, can now detect risk in AI models promptly."
Tackling Regulatory Obstacles
The changes have been introduced because producing and possessing CSAM is against the law, meaning that AI developers and others could not generate such images as part of an evaluation regime. Previously, authorities could not act until AI-generated CSAM had been published online.
This legislation aims to avert that problem by enabling approved testers to halt the production of those images at source.
Legal Structure
The amendments are being introduced as revisions to the crime and policing bill, which also establishes a prohibition on possessing, producing or distributing AI models designed to create child sexual abuse material.
Real-World Impact
This week, the official toured the London headquarters of a children's helpline and listened in on a simulated call to advisers featuring a report of AI-based exploitation. The interaction depicted a teenager seeking help after being blackmailed with a sexualised AI-generated image of themselves.
"When I learn about young people experiencing extortion online, it is a source of intense anger for me and of rightful concern among parents," he stated.
Alarming Data
A leading online safety organization reported that cases of AI-generated exploitation material – where a single case may be a webpage containing multiple images – had risen sharply so far this year.
Reports of the most severe category of material – the most serious form of abuse – increased from 2,621 to 3,086 visual files.
- Girls were predominantly targeted, accounting for 94% of prohibited AI images in 2025
- Depictions of newborns to two-year-olds rose from five in 2024 to 92 in 2025
Industry Reaction
The law change could "constitute a vital step to ensure AI products are secure before they are launched," commented the head of the internet monitoring foundation.
"AI tools have made it possible for survivors to be targeted all over again with just a few clicks, giving criminals the ability to create potentially limitless amounts of sophisticated, photorealistic child sexual abuse material," she added. "Material which further exploits victims' suffering, and makes young people, especially girls, less safe online and offline."
Support Interaction Data
Childline also released data from counselling sessions in which AI was mentioned. AI-related harms raised in those sessions include:
- Employing AI to evaluate body size, physique and looks
- Chatbots dissuading children from talking to safe adults about abuse
- Being bullied online with AI-generated content
- Online blackmail using AI-manipulated pictures
Between April and September this year, Childline delivered 367 counselling interactions where AI, chatbots and related terms were mentioned, four times as many as in the same period last year.
Half of the AI references in the 2025 sessions related to mental health and wellbeing, including using AI chatbots for support and AI therapy apps.