
The Internet Watch Foundation (IWF) reported a 14% increase in AI-generated child sexual abuse material (CSAM) online last year, with a significant rise in videos. The majority of these videos were classified as category A, the most severe type of content under UK law. The IWF is working with tech companies and child protection agencies to ensure AI tools are designed with safety as a priority and cannot produce CSAM.
The IWF identified 8,029 AI-made images and videos of realistic CSAM in 2025, including a 260-fold increase in videos compared with previous years. Of the 3,443 videos, 65% were classified as category A. Child safety experts warn that the technology is being used to create increasingly violent content.
The IWF's chief executive, Kerry Smith, said advances in technology should never come at the expense of a child's safety and wellbeing. In response, the government has announced a ban on possessing, creating or distributing AI models designed to generate CSAM.
Alongside the ban, the IWF is working with designated AI companies and child safety organisations to examine generative artificial intelligence models and ensure they have safeguards that prevent them from producing CSAM. Child safety experts say this combination of legislation and industry collaboration is a significant step towards ensuring AI tools are designed with safety as a priority.
Q: What is AI-generated CSAM?
A: AI-generated CSAM refers to images and videos of child sexual abuse material that are created using artificial intelligence algorithms.
Q: What is the Internet Watch Foundation?
A: The IWF is a safety watchdog that identifies and reports AI-generated CSAM online. It works with tech companies and child protection agencies to ensure AI tools are designed with safety as a priority.
Q: What is the government doing about AI-generated CSAM?
A: The government has announced a ban on possessing, creating or distributing AI models designed to generate CSAM, and is working with tech companies and child protection agencies on safeguards.
Source: The Guardian