UK Technology Companies and Child Safety Officials to Examine AI's Ability to Generate Abuse Content

Tech firms and child safety organizations will be permitted to test whether artificial intelligence systems can generate child abuse images under newly introduced British laws.

Substantial Increase in AI-Generated Illegal Content

The announcement came as a safety monitoring body revealed that reports of AI-generated child sexual abuse material have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Legal Framework

Under the changes, the government will allow approved AI companies and child safety organizations to examine AI models – the underlying systems behind conversational AI and image-generation tools – and verify they have adequate safeguards to prevent them from producing images of child exploitation.

"This is ultimately about preventing exploitation before it happens," said the minister for AI and online safety, adding: "Under rigorous protocols, specialists can now identify risks in AI systems early."

Addressing Legal Obstacles

The amendments were introduced because it is illegal to produce and possess CSAM, meaning that AI developers and others could not generate such images as part of an evaluation regime. Until now, officials had to wait until AI-generated CSAM appeared online before taking action.

This legislation is designed to avert that problem by enabling authorities to halt the production of such images at their source.

Legislative Framework

The government is introducing the changes as amendments to criminal justice legislation, which also implements a prohibition on possessing, producing or distributing AI systems designed to generate child sexual abuse material.

Practical Impact

Recently, the minister visited the London headquarters of a children's helpline and listened to a simulated call to advisers featuring a report of AI-based exploitation. The call portrayed an adolescent seeking help after being blackmailed with an explicit AI-generated image of themselves.

"When I hear about children being blackmailed online, it is a source of extreme frustration to me, and of rightful anger amongst parents," he said.

Concerning Data

A prominent online safety foundation said that instances of AI-generated exploitation content – such as web pages that may contain multiple images – had more than doubled so far this year.

Cases of category A material – the most severe form of exploitation imagery – increased from 2,621 images or videos to 3,086.

  • Female children were predominantly victimized, accounting for 94% of prohibited AI depictions in 2025
  • Depictions of infants to toddlers increased from five in 2024 to 92 in 2025

Sector Response

The law change could "represent a crucial step in ensuring AI tools are safe before they are launched," said the head of the internet monitoring foundation.

"Artificial intelligence systems have made it possible for victims to be targeted repeatedly with just a few clicks, giving offenders the capability to produce potentially limitless amounts of sophisticated, lifelike exploitative content," she continued. "Content which further commodifies victims' suffering and renders children, particularly girls, less safe both online and offline."

Counseling Interaction Information

The children's helpline also published details of support sessions in which AI was mentioned. AI-related harms discussed in these conversations include:

  • Using AI to evaluate body size, physique and appearance
  • Chatbots dissuading young people from talking to trusted adults about harm
  • Facing harassment online with AI-generated material
  • Digital blackmail using AI-manipulated pictures

Between April and September this year, Childline conducted 367 counselling sessions in which AI, chatbots and associated terms were discussed, significantly more than in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.
