UK Technology Firms and Child Protection Agencies to Test AI's Ability to Create Abuse Content

Technology companies and child safety agencies will be granted permission to evaluate whether AI tools can generate child exploitation material under recently introduced UK legislation.

Significant Rise in AI-Generated Illegal Content

The announcement came alongside findings from a child protection monitoring body showing that cases of AI-generated child sexual abuse material have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

New Regulatory Structure

Under the amendments, the government will permit approved AI developers and child safety organizations to inspect AI models – the underlying systems that power conversational AI and image generators – and ensure they have sufficient safeguards to prevent them from creating images of child sexual abuse.

The measures are "ultimately about stopping exploitation before it occurs," said the minister for AI and online safety, adding: "Specialists, under rigorous protocols, can now detect the danger in AI systems early."

Addressing Legal Obstacles

The changes have been introduced because creating and possessing CSAM is illegal, meaning that AI developers and other parties could not generate such images as part of a testing process. Previously, officials had to wait until AI-generated CSAM was uploaded online before taking action against it.

The legislation aims to prevent that problem by helping to halt the production of such images at source.

Legal Structure

The authorities are introducing the amendments as revisions to criminal justice legislation, which also implements a prohibition on possessing, creating or distributing AI systems designed to generate child sexual abuse material.

Practical Consequences

Recently, the minister visited the London headquarters of Childline and listened to a simulated call to counsellors involving a report of AI-based exploitation. The call depicted an adolescent seeking help after being blackmailed with an explicit deepfake of himself, created using AI.

"When I learn about children experiencing extortion online, it causes extreme anger in me and justified anger amongst parents," he stated.

Alarming Data

A prominent internet monitoring organization reported that cases of AI-generated abuse content – where a single case can be a webpage containing numerous images – had more than doubled so far this year.

Instances of the most severe content – the gravest category of abuse – rose from 2,621 images or videos to 3,086.

  • Girls were overwhelmingly victimized, accounting for 94% of prohibited AI images in 2025
  • Depictions of infants to two-year-olds increased from five in 2024 to 92 in 2025

Industry Reaction

The legislative amendment could "represent a crucial step to ensure AI products are secure before they are launched," stated the chief executive of the online safety organization.

"AI tools have made it so survivors can be targeted all over again with just a few clicks, giving offenders the ability to create potentially limitless amounts of sophisticated, photorealistic exploitative content," she continued. "Content which further exploits victims' trauma, and makes young people, especially girls, more vulnerable both online and offline."

Support Session Information

The children's helpline also released details of support sessions in which AI was mentioned. AI-related harms discussed in these sessions include:

  • Employing AI to evaluate body size, physique and looks
  • AI assistants dissuading young people from consulting safe guardians about abuse
  • Being bullied online with AI-generated material
  • Digital extortion using AI-faked images

Between April and September this year, the helpline delivered 367 support sessions in which AI, conversational AI and related terms were mentioned – significantly more than in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using AI assistants for support and AI therapy applications.
