UK Tech Companies and Child Protection Agencies to Examine AI's Capability to Create Abuse Content

Technology companies and child protection organizations will be granted authority to evaluate whether AI systems can generate child exploitation images under recently introduced British legislation.

Significant Increase in AI-Generated Harmful Content

The announcement came as a protection watchdog published findings showing that cases of AI-generated child sexual abuse material (CSAM) have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.

New Regulatory Framework

Under the changes, the authorities will permit approved AI companies and child protection organizations to examine AI models – the underlying technology behind conversational AI and image generators – and check that they have sufficient safeguards in place to prevent the production of child exploitation images.

The minister for AI and online safety said the measure was "ultimately about preventing exploitation before it happens," adding: "Specialists, under strict conditions, can now identify the danger in AI models early."

Tackling Regulatory Obstacles

The changes have been introduced because it is against the law to create or possess CSAM, meaning that AI developers and other parties could not generate such images even as part of a testing regime. Previously, officials had to wait until AI-generated CSAM appeared online before they could act.

This law aims to avert that problem by helping to stop the production of such material at its source.

Legislative Framework

The amendments are being added by the authorities as modifications to the crime and policing bill, which is also establishing a ban on possessing, producing or distributing AI models designed to create child sexual abuse material.

Practical Consequences

This week, the official visited the London base of Childline and listened to a simulated conversation with counsellors featuring an account of AI-based abuse. The interaction portrayed an adolescent seeking help after being blackmailed with an explicit deepfake of himself, created using AI.

"When I hear about young people experiencing extortion online, it is a source of extreme frustration in me and rightful anger amongst families," he said.

Concerning Data

A prominent internet monitoring organization stated that cases of AI-generated abuse material – such as webpages that may include numerous files – had more than doubled so far this year.

Instances of category A material – the gravest form of exploitation – rose from 2,621 files to 3,086.

  • Girls were predominantly targeted, accounting for 94% of illegal AI depictions in 2025
  • Portrayals of infants to toddlers rose from five in 2024 to 92 in 2025

Sector Reaction

The legislative amendment could "represent a vital step to ensure AI tools are secure before they are launched," commented the head of the internet monitoring foundation.

"AI tools have made it so survivors can be targeted repeatedly with just a few clicks, providing offenders the capability to create possibly endless amounts of advanced, lifelike child sexual abuse material," she continued. "Content which further commodifies victims' suffering, and renders children, particularly female children, more vulnerable on and off line."

Counseling Session Data

The children's helpline also published details of support sessions where AI has been referenced. AI-related harms discussed in the sessions include:

  • Employing AI to evaluate body size, physique and appearance
  • AI assistants discouraging young people from consulting safe adults about harm
  • Being bullied online with AI-generated material
  • Online extortion using AI-manipulated pictures

Between April and September this year, the helpline conducted 367 counselling sessions where AI, conversational AI and associated topics were discussed, four times as many as in the equivalent timeframe last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.

Tina Jackson

A passionate gamer and tech reviewer with over a decade of experience in the gaming industry, specializing in controller ergonomics and performance.