UK Technology Firms and Child Protection Officials to Examine AI's Ability to Create Abuse Images

Tech firms and child protection agencies will be granted permission to evaluate whether artificial intelligence systems can generate child exploitation images under recently introduced British laws.

Significant Increase in AI-Generated Harmful Material

The announcement came as a protection watchdog revealed that reports of AI-generated CSAM have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.

Updated Legal Framework

Under the changes, the government will allow designated AI companies and child protection organizations to inspect AI systems – the underlying systems for chatbots and visual AI tools – and verify they have adequate protective measures to stop them from producing images of child sexual abuse.

"This is fundamentally about preventing abuse before it happens," stated the minister for AI and online safety, adding: "Specialists, under rigorous conditions, can now identify the danger in AI models early."

Addressing Regulatory Obstacles

The amendments have been introduced because producing and possessing CSAM is against the law, meaning that AI creators and other parties have been unable to create such content even as part of an evaluation regime. Until now, officials had to wait until AI-generated CSAM was published online before dealing with it.

This law is designed to avert that problem by enabling authorities to stop the production of such material at source.

Legislative Framework

The changes are being introduced by the authorities as modifications to the crime and policing bill, which is also implementing a ban on possessing, creating or sharing AI systems designed to generate exploitative content.

Practical Consequences

This week, the minister visited the London base of Childline and listened to a mock-up call to counsellors involving a report of AI-based abuse. The call portrayed a teenager requesting help after being blackmailed with a sexualised AI-generated image of themselves.

"When I hear about children facing blackmail online, it fills me with intense frustration, and parents with rightful concern," he stated.

Concerning Data

A leading online safety foundation reported that instances of AI-generated abuse material – such as webpages that may contain numerous images – had significantly increased so far this year.

Instances of the most severe material – the gravest form of exploitation – increased from 2,621 visual files to 3,086.

  • Female children were predominantly victimized, making up 94% of prohibited AI images in 2025
  • Portrayals of newborns to two-year-olds increased from five in 2024 to 92 in 2025

Sector Response

The legislative amendment could "represent a vital step to ensure AI products are secure before they are launched," commented the chief executive of the online safety foundation.

"AI tools have made it possible for victims to be targeted repeatedly with just a few simple actions, giving offenders the ability to create potentially endless quantities of sophisticated, lifelike exploitative content," she added. "Content which further commodifies victims' trauma, and makes children, especially girls, more vulnerable both on and offline."

Counseling Session Information

Childline also published details of counselling sessions in which AI was mentioned. AI-related risks discussed in the conversations include:

  • Using AI to assess body size and appearance
  • AI assistants dissuading children from talking to trusted guardians about harm
  • Facing harassment online with AI-generated content
  • Online blackmail using AI-manipulated pictures

Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and related terms were mentioned, four times as many as in the equivalent period last year.

Fifty percent of the references of AI in the 2025 sessions were connected with mental health and wellness, including utilizing chatbots for assistance and AI therapeutic applications.

Rita Mahoney