UK Tech Firms and Child Safety Officials to Examine AI's Ability to Generate Abuse Images

Technology companies and child safety organizations will be granted authority to evaluate whether artificial intelligence tools can generate child abuse material under new UK legislation.

Substantial Increase in AI-Generated Illegal Material

The announcement coincided with figures from a safety watchdog showing that reports of AI-generated child sexual abuse material have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.

New Regulatory Framework

Under the amendments, the government will allow approved AI companies and child safety organizations to inspect AI models – the underlying technology behind conversational and image-generation tools – and verify that they have sufficient safeguards to prevent them from creating images of child sexual abuse.

"Ultimately, this is about preventing exploitation before it happens," said the minister for AI and online safety, adding: "Experts, under rigorous conditions, can now identify the danger in AI models early."

Tackling Regulatory Obstacles

The amendments were needed because producing and possessing CSAM is illegal, meaning AI developers and others could not generate such content as part of an evaluation regime. Previously, officials had to wait until AI-generated CSAM had been published online before they could act against it.

The legislation aims to avert that problem by helping to stop the production of such images at source.

Legal Structure

The government is introducing the amendments to criminal justice legislation, which also implements a ban on possessing, producing or distributing AI systems designed to create child sexual abuse material.

Practical Impact

This week, the minister visited Childline's London headquarters and listened to a simulated call to counsellors involving a report of AI-based exploitation. The scenario depicted a teenager seeking help after being blackmailed with an explicit deepfake of themselves created using AI.

"When I hear about children facing extortion online, it is a source of extreme anger for me, and of rightful anger among parents," he said.

Concerning Data

A leading online safety organization reported that cases of AI-generated abuse material – such as webpages that can contain multiple images – had more than doubled so far this year.

Instances of category A material – the gravest form of exploitation – increased from 2,621 images or videos to 3,086.

  • Female children were predominantly victimized, accounting for 94% of prohibited AI images in 2025
  • Portrayals of infants to two-year-olds rose from five in 2024 to 92 in 2025

Sector Response

The law change could "represent a crucial step to ensure AI tools are safe before they are launched," commented the chief executive of the online safety foundation.

"Artificial intelligence systems have made it so victims can be targeted all over again with just a few clicks, giving criminals the capability to make potentially limitless amounts of advanced, photorealistic exploitative content," she continued. "Content which further exploits victims' trauma, and renders children, especially female children, less safe both online and offline."

Support Session Data

The children's helpline also released data on counselling sessions in which AI was mentioned. AI-related harms raised in those conversations include:

  • Employing AI to evaluate body size, physique and appearance
  • Chatbots discouraging children from consulting trusted guardians about abuse
  • Facing harassment online with AI-generated content
  • Digital blackmail using AI-faked pictures

Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and related terms were mentioned, significantly more than in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.

Holly Barton

A passionate writer and tech enthusiast sharing insights on innovation and self-improvement.