British Technology Firms and Child Protection Agencies to Examine AI's Ability to Create Abuse Content

Technology companies and child safety agencies will be granted permission to assess whether artificial intelligence tools can generate child exploitation images under new UK legislation.

Significant Increase in AI-Generated Illegal Material

The announcement came as findings from a safety watchdog showed that cases of AI-generated child sexual abuse material have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.

Updated Legal Framework

Under the changes, the authorities will permit approved AI companies and child safety organizations to inspect AI systems – the underlying technology behind conversational and visual AI tools – and verify they have adequate safeguards to stop them from producing images of child exploitation.

The changes are "fundamentally about stopping abuse before it occurs," declared the minister for AI and online safety, adding: "Specialists, under strict conditions, can now detect the risk in AI models promptly."

Tackling Legal Obstacles

The changes have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and other parties cannot create such content even as part of an evaluation process. Until now, officials had to wait until AI-generated CSAM was published online before dealing with it.

The law is designed to prevent that problem by enabling the creation of such material to be halted at its origin.

Legislative Framework

The amendments are being introduced by the government as modifications to criminal justice legislation, which also implements a ban on possessing, creating or sharing AI models designed to generate exploitative content.

Practical Impact

This week, the official toured the London headquarters of Childline and heard a simulated call to counsellors involving an account of AI-based abuse. The call portrayed a teenager seeking help after being blackmailed with an explicit AI-generated deepfake of himself.

"When I learn about children experiencing blackmail online, it stirs extreme frustration in me and justified concern amongst families," he stated.

Concerning Statistics

A leading internet monitoring foundation stated that cases of AI-generated exploitation content – each case potentially a webpage containing multiple images – had significantly increased so far this year.

Instances of the most severe content – the most serious form of exploitation – increased from 2,621 visual files to 3,086.

  • Female children were predominantly targeted, making up 94% of prohibited AI images in 2025
  • Depictions of newborns to two-year-olds rose from 5 in 2024 to 92 in 2025

Industry Reaction

The law change could "represent a crucial step to ensure AI tools are secure before they are released," stated the chief executive of the online safety organization.

"AI tools have made it so survivors can be targeted repeatedly with just a few simple actions, giving criminals the capability to make potentially limitless quantities of advanced, photorealistic child sexual abuse material," she continued. "Material which additionally commodifies victims' trauma, and renders children, especially female children, less safe both online and offline."

Support Session Information

The children's helpline also published details of counselling sessions in which AI was mentioned. AI-related risks discussed in the sessions include:

  • Employing AI to rate weight, physique and appearance
  • Chatbots discouraging young people from talking to safe guardians about abuse
  • Facing harassment online with AI-generated content
  • Online blackmail using AI-faked images

Between April and September this year, Childline delivered 367 support sessions in which AI, conversational AI and related topics were discussed – four times as many as in the equivalent period last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapeutic applications.

Kelly Frazier