Technology companies and child protection organizations will be granted authority to evaluate whether AI systems can generate child exploitation images under recently introduced British legislation.
The announcement came as a child protection watchdog reported that cases of AI-generated child sexual abuse material have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.
Under the changes, the authorities will permit approved AI companies and child protection organizations to examine AI models – the underlying technology behind conversational AI and image generators – and to verify that they have sufficient safeguards against producing images of child exploitation.
"Ultimately, this is about preventing exploitation before it happens," said the minister for AI and online safety, adding: "Specialists, under strict conditions, can now identify the danger in AI models early."
The changes have been introduced because it is against the law to create and possess CSAM, meaning that AI developers and other parties cannot generate such images even as part of a testing regime. Previously, officials could not act until AI-generated CSAM had already been published online.
The law is designed to avert that problem by helping to stop the production of such material at its source.
The authorities are introducing the changes as amendments to the crime and policing bill, which also establishes a ban on possessing, producing or distributing AI models designed to create child sexual abuse material.
This week, the official visited the London base of Childline and listened to a simulated call with counsellors featuring an account of AI-based abuse. The interaction portrayed an adolescent seeking help after being blackmailed with an explicit deepfake of himself, constructed using AI.
"When I hear about young people experiencing extortion online, it is a source of extreme frustration for me and of rightful anger amongst families," he said.
A prominent internet monitoring organization stated that cases of AI-generated abuse material – such as webpages that may include numerous files – had more than doubled so far this year.
Cases of category A material – the gravest form of exploitation – rose from 2,621 visual files to 3,086.
The legislative amendment could "represent a vital step to ensure AI tools are secure before they are launched," commented the head of the internet monitoring foundation.
"AI tools have made it so survivors can be targeted repeatedly with just a few clicks, giving offenders the capability to create potentially endless amounts of sophisticated, lifelike child sexual abuse material," she continued. "Content which further commodifies victims' suffering, and renders children, particularly girls, more vulnerable both online and offline."
The children's helpline also published details of counselling sessions in which AI was referenced, along with the AI-related harms discussed in those sessions.
Between April and September this year, the helpline conducted 367 counselling sessions where AI, conversational AI and associated topics were discussed, four times as many as in the equivalent timeframe last year.
Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and of AI therapy apps.
News
Tina Jackson