Italy regulator probes DeepSeek over false information risks
Published by Global Banking & Finance Review®
Posted on June 16, 2025
1 min read · Last updated: January 23, 2026
Italy's AGCM is investigating DeepSeek over allegedly inadequate warnings to users that its AI may generate false information.
ROME (Reuters) - Italy's antitrust watchdog AGCM said on Monday it had opened an investigation into Chinese artificial intelligence startup DeepSeek for allegedly failing to warn users that it may produce false information.
DeepSeek did not immediately respond to an emailed request for comment.
The Italian regulator, which also polices consumer rights, said in a statement DeepSeek did not give users "sufficiently clear, immediate and intelligible" warnings about the risk of so-called "hallucinations" in its AI-produced content.
It described these as "situations in which, in response to a given input entered by a user, the AI model generates one or more outputs containing inaccurate, misleading or invented information."
In February, another Italian watchdog, the data protection authority, ordered DeepSeek to block access to its chatbot after the company failed to address the authority's concerns about its privacy policy.
(Reporting by Alvise Armellini and Elvira Pollina, editing by Gavin Jones)