British Technology Companies and Child Safety Agencies to Test AI's Capability to Generate Exploitation Images
Technology companies and child protection organisations will be granted permission to assess whether AI tools can generate child exploitation images under new UK legislation.
Significant Increase in AI-Generated Illegal Material
The announcement came as a child protection monitoring body released findings showing that reports of AI-generated child sexual abuse material (CSAM) have risen dramatically in the past twelve months, from 199 in 2024 to 426 in 2025.
Updated Regulatory Structure
Under the amendments, the government will allow approved AI developers and child protection groups to inspect AI systems – the foundational technology for conversational AI and visual AI tools – and ensure they have adequate safeguards to stop them from producing images of child sexual abuse.
"Fundamentally about stopping exploitation before it happens," stated Kanishka Narayan, adding: "Experts, under strict protocols, can now identify the danger in AI systems early."
Tackling Regulatory Obstacles
The changes address a legal barrier: because it is illegal to produce or possess CSAM, AI developers and other parties have been unable to generate such images as part of a testing process. Until now, officials had to wait until AI-generated CSAM was published online before addressing it.
The law is designed to prevent that problem by enabling experts to halt the creation of such images at source.
Legal Structure
The government is introducing the changes as amendments to the crime and policing bill, which will also prohibit possessing, creating or distributing AI systems designed to produce exploitative content.
Real-World Impact
The minister recently visited the London base of Childline and listened to a simulated call to counsellors involving a report of AI-based abuse. The call portrayed an adolescent seeking help after being blackmailed with a sexualised AI-generated image of themselves.
"When I hear about young people experiencing blackmail online, it is a source of extreme anger in me and rightful anger amongst families," he said.
Alarming Data
A leading online safety foundation said that reports of AI-generated exploitation content – each report can cover a webpage containing numerous files – have risen sharply so far this year.
- Category A material – the gravest form of abuse – increased from 2,621 visual files to 3,086
- Female children were overwhelmingly victimised, accounting for 94% of illegal AI depictions in 2025
- Portrayals of infants and toddlers increased from five in 2024 to 92 in 2025
Sector Response
The law change could "represent a vital step to ensure AI products are secure before they are launched," commented the head of the online safety organisation.
"AI tools have made it so survivors can be targeted all over again with just a simple actions, providing criminals the ability to make possibly limitless amounts of advanced, photorealistic exploitative content," she continued. "Content which additionally commodifies survivors' suffering, and makes children, especially girls, more vulnerable both online and offline."
Counselling Interaction Data
Childline also released data on counselling sessions in which AI was mentioned. AI-related risks raised in the sessions include:
- Using AI to assess body size and appearance
- AI assistants discouraging young people from talking to trusted adults about abuse
- Being bullied online with AI-generated material
- Digital blackmail using AI-manipulated pictures
Between April and September this year, Childline delivered 367 counselling sessions in which AI, conversational AI and related topics were mentioned, four times as many as in the same period last year.
Half of the AI mentions in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy apps.