AI-generated child abuse webpages surge 400%, alarming watchdog https://t.co/2Olanq9jgK https://t.co/GWxHfbOF0t
Reports indicate a massive surge in AI-generated CSAM across the internet https://t.co/1NS1Ms2afd
There has been a major increase in child sexual abuse videos made with AI. Exclusive data shared with Channel 4 News shows that 1,300 videos have been found globally so far in 2025, compared with only 2 in the same period last year. Warning: this report contains distressing content https://t.co/5GuG0iWC5b
The UK-based Internet Watch Foundation said reports of webpages hosting child sexual abuse material generated with artificial-intelligence tools rose 400% in the first six months of 2025. The group verified 210 URLs containing such content, up from 42 a year earlier, and identified 1,286 AI-made videos, compared with just two in the same period of 2024. About 78% of the footage fell into Category A, the most severe classification under UK law. Analysts warned that advances in generative-AI video models are making the material almost indistinguishable from real footage and allowing offenders to repurpose images of actual children.

The National Center for Missing & Exploited Children said it received 485,000 reports related to AI-generated abuse imagery in the first half of 2025, far exceeding the 67,000 cases recorded in all of 2024.

Law-enforcement agencies are beginning to respond. Europol said a coordinated investigation earlier this year led to 25 arrests and the identification of 250 suspects across 19 countries for distributing AI-generated abuse content. In the private sector, companies such as Amazon and OpenAI have reported taking down hundreds of thousands of abusive AI items from their platforms.

Regulators are tightening rules as the technology spreads. In February, the UK became the first country to outlaw the creation, possession or distribution of AI models designed to generate child sexual abuse material, with penalties of up to five years in prison. Watchdog groups are urging other jurisdictions to adopt similar controls and to require AI developers to build safeguards that prevent their systems from being fine-tuned for illegal content.