British Technology Companies and Child Safety Agencies to Test AI's Ability to Generate Exploitation Images
Tech firms and child safety organizations will receive permission to evaluate whether artificial intelligence tools can produce child abuse images under new British legislation.
Significant Increase in AI-Generated Harmful Material
The announcement came as a protection monitoring body revealed that cases of AI-generated CSAM have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
Updated Legal Framework
Under the amendments, the government will permit approved AI companies and child protection organizations to examine AI models – the foundational systems for conversational AI and visual AI tools – and verify they have sufficient safeguards to stop them from creating depictions of child exploitation.
"Fundamentally, this is about stopping abuse before it occurs," declared Kanishka Narayan, adding: "Experts, under rigorous conditions, can now identify the risk in AI systems early."
Tackling Legal Obstacles
The changes address a legal obstacle: because creating and possessing CSAM is against the law, AI developers and others could not generate such images even as part of a testing process. Previously, authorities had to wait until AI-generated CSAM appeared online before acting on it.
This legislation is designed to avert that problem by helping to stop the creation of such material at its source.
Legislative Vehicle
The government is introducing the changes as amendments to criminal justice legislation, which will also prohibit possessing, creating or distributing AI systems designed to generate exploitative content.
Real-World Consequences
This week, the official visited the London base of a children's helpline and listened to a simulated call to counsellors featuring a report of AI-based abuse. The interaction portrayed an adolescent seeking help after being blackmailed with a sexualised deepfake of themselves created using AI.
"When I learn about children facing extortion online, it stirs intense frustration in me and justified anger amongst parents," he stated.
Alarming Statistics
A leading internet monitoring foundation reported that instances of AI-generated abuse content – such as online pages that may include multiple images – had significantly increased so far this year.
Instances of the most severe material – the most serious form of abuse – increased from 2,621 items to 3,086.
- Girls were predominantly targeted, accounting for 94% of illegal AI depictions in 2025
- Portrayals of newborns to toddlers increased from five in 2024 to 92 in 2025
Industry Response
The law change could "represent a vital step to ensure AI products are secure before they are launched," stated the chief executive of the internet monitoring organization.
"Artificial intelligence systems have made it possible for survivors to be victimised all over again with just a few clicks, giving criminals the ability to create potentially limitless quantities of sophisticated, lifelike child sexual abuse material," she continued. "Content which further commodifies victims' suffering, and renders young people, particularly girls, more vulnerable both online and offline."
Counseling Session Information
The children's helpline also published details of support sessions where AI was mentioned. AI-related risks discussed in the conversations included:
- Using AI to rate weight, body shape and appearance
- Chatbots discouraging children from consulting safe guardians about abuse
- Facing harassment online with AI-generated content
- Online extortion using AI-faked pictures
Between April and September this year, Childline conducted 367 counselling interactions in which AI, conversational AI and related terms were mentioned, significantly more than in the same period last year.
Fifty percent of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.