British Technology Companies and Child Protection Officials to Test AI's Capability to Create Abuse Content
Technology companies and child safety agencies will be granted authority to evaluate whether artificial intelligence systems can produce child abuse images under recently introduced British laws.
Substantial Rise in AI-Generated Illegal Content
The declaration coincided with findings from a protection monitoring body showing that cases of AI-generated CSAM have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
Updated Regulatory Framework
Under the changes, the government will permit designated AI companies and child protection organizations to inspect AI models – the foundational technology for chatbots and visual AI tools – and verify they have sufficient safeguards to stop them from creating images of child exploitation.
"This is fundamentally about stopping abuse before it occurs," declared the minister for AI and online safety, adding: "Experts, under rigorous protocols, can now detect risks in AI models early."
Tackling Legal Obstacles
The changes have been introduced because it is against the law to produce and possess CSAM, meaning that AI creators and others cannot generate such content as part of an evaluation process. Previously, authorities had to wait until AI-generated CSAM was published online before dealing with it.
This law is aimed at averting that issue by enabling designated experts to stop the production of those images at source.
Legislative Framework
The amendments are being introduced by the government as modifications to the criminal justice legislation, which is also implementing a prohibition on possessing, creating or distributing AI models designed to generate exploitative content.
Practical Consequences
This week, the minister visited the London base of Childline and listened to a simulated call to counsellors featuring an account of AI-based abuse. The call depicted a teenager seeking help after being blackmailed with a sexualised AI-generated image of themselves.
"When I learn about children facing extortion online, it fills me with extreme anger and gives parents justified concern," he stated.
Alarming Data
A leading internet monitoring organization stated that cases of AI-generated exploitation content – such as webpages that may contain multiple images – had significantly increased so far this year.
Cases of the most severe category of content – depicting the most serious forms of abuse – increased from 2,621 images or videos to 3,086.
- Female children were predominantly targeted, making up 94% of prohibited AI images in 2025
- Portrayals of infants and toddlers rose from five in 2024 to 92 in 2025
Sector Reaction
The law change could "constitute a crucial step to ensure AI tools are secure before they are launched," stated the head of the internet monitoring foundation.
"Artificial intelligence systems have made it possible for victims to be targeted repeatedly with just a few clicks, giving offenders the ability to create potentially limitless amounts of sophisticated, photorealistic exploitative content," she continued. "Content which additionally exploits victims' trauma, and makes young people, particularly female children, less safe both online and offline."
Counseling Session Information
The children's helpline also published details of support interactions in which AI was mentioned. AI-related harms discussed in the conversations include:
- Using AI to evaluate weight, body and appearance
- AI assistants dissuading young people from consulting trusted guardians about harm
- Being bullied online with AI-generated content
- Online extortion using AI-faked images
Between April and September this year, Childline delivered 367 support sessions in which AI, chatbots and related topics were mentioned, significantly more than in the equivalent period last year.
Fifty percent of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.