British Technology Firms and Child Safety Officials to Test AI's Ability to Generate Abuse Images

Technology companies and child safety agencies will be granted authority to evaluate whether AI tools can produce child exploitation material under new British laws.

Significant Rise in AI-Generated Illegal Content

The announcement came as findings from a protection monitoring body showed that reports of AI-generated CSAM have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Legal Structure

Under the changes, the government will allow designated AI developers and child protection groups to inspect AI models – the underlying technology for chatbots and image generators – and verify they have adequate safeguards to stop them creating depictions of child sexual abuse.

"This is ultimately about stopping exploitation before it happens," stated the minister for AI and online safety, adding: "Experts, under rigorous protocols, can now detect the risk in AI systems promptly."

Addressing Regulatory Obstacles

The amendments address a legal obstacle: because it is illegal to produce and possess CSAM, AI developers and others could not attempt to generate such content as part of a testing regime. Until now, authorities had to wait until AI-generated CSAM was uploaded online before addressing it. This legislation is designed to prevent that problem by enabling experts to stop the production of such material at its origin.

Legal Framework

The government is adding the amendments as revisions to the criminal justice legislation, which also implements a prohibition on owning, producing or distributing AI systems designed to create child sexual abuse material.

Practical Consequences

This week, the minister toured the London base of a children's helpline and listened to a simulated call to counsellors involving an account of AI-based abuse.
The call portrayed a teenager seeking help after being extorted with an explicit deepfake of themselves, created using AI. "When I learn about children experiencing extortion online, it causes intense anger in me and rightful anger amongst parents," he stated.

Concerning Statistics

A leading online safety foundation reported that cases of AI-generated abuse material – such as online pages that may include numerous files – had significantly increased so far this year:

- Instances of category A content – the gravest form of abuse – increased from 2,621 visual files to 3,086.
- Female children were overwhelmingly targeted, making up 94% of prohibited AI images in 2025.
- Depictions of newborns to two-year-olds increased from five in 2024 to 92 in 2025.

Sector Response

The law change could "constitute a crucial step to ensure AI products are secure before they are released," stated the chief executive of the internet monitoring organization.

"AI tools have made it possible for victims to be victimised repeatedly with just a few simple actions, giving criminals the ability to create potentially limitless quantities of advanced, lifelike child sexual abuse material," she continued. "Content which further exploits survivors' trauma, and renders children, especially girls, less safe on and offline."

Support Session Data

The children's helpline also released details of counselling interactions where AI was mentioned. AI-related risks discussed in the conversations include:

- Using AI to evaluate weight, body and appearance
- AI assistants dissuading young people from talking to trusted adults about harm
- Being bullied online with AI-generated content
- Digital extortion using AI-faked pictures

Between April and September this year, Childline conducted 367 support sessions where AI, conversational AI and related topics were mentioned, four times as many as in the same period last year.
Fifty percent of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.