UNICEF calls for criminalization of AI content depicting child sex abuse
Published by Global Banking and Finance Review
Posted on February 4, 2026
2 min read · Last updated: February 4, 2026
UNICEF urges global action to criminalize AI-generated child abuse content, highlighting deepfake risks and calling for enhanced digital safety measures.
By Jasper Ward
Feb 4 (Reuters) - The United Nations children's agency UNICEF on Wednesday called on countries to criminalize the creation of AI-generated child sexual abuse content, saying it was alarmed by reports of a rise in the number of AI-generated images sexualizing children.
The agency also urged developers to implement safety-by-design approaches and guardrails to prevent misuse of AI models. It said digital companies should prevent the circulation of these images by strengthening content moderation and investing in detection technologies.
"The harm from deepfake abuse is real and urgent. Children cannot wait for the law to catch up," UNICEF said in a statement. Deepfakes are AI-generated images, videos, and audio that convincingly impersonate real people.
UNICEF also raised concerns about what it called the "nudification" of children, using AI to strip or alter clothing in photos to create fabricated nude or sexualized images.
At least 1.2 million children across 11 countries disclosed having their images manipulated into sexually explicit deepfakes in the past year, according to UNICEF.
Britain said on Saturday it plans to make it illegal to use AI tools to create child sexual abuse images, making it the first country to do so.
Concerns have increased in recent years about the use of AI to generate child abuse content, particularly around xAI's Grok chatbot, owned by Elon Musk, which has come under scrutiny for producing sexualized images of women and minors.
A Reuters investigation found the chatbot continued to produce these images even when users explicitly warned the subjects had not consented.
xAI said on January 14 it had restricted image editing for Grok AI users and blocked users, based on their location, from generating images of people in revealing clothing in "jurisdictions where it's illegal." It did not identify the countries. It had earlier limited Grok's image generation and editing features to paying subscribers.
(Reporting by Jasper Ward in Washington; editing by Michelle Nichols and Rod Nickel)
AI-generated content refers to any content created using artificial intelligence technologies, including images, text, and video, and is used in applications ranging from marketing to entertainment.
Deepfakes are synthetic media created using AI that can manipulate images, videos, or audio to convincingly impersonate real people, often raising ethical and security concerns.
Child sexual abuse content refers to any material that depicts or promotes sexual exploitation or abuse of children, which is illegal and harmful.
Content moderation is the process of monitoring and managing user-generated content on platforms to ensure compliance with community guidelines and legal standards.
Safety-by-design is an approach that integrates safety considerations into the design and development of products and technologies to minimize risks and protect users.