Italy fines OpenAI 15 million euros over privacy rules breach
Published by Global Banking & Finance Review®
Posted on December 20, 2024
1 min read · Last updated: January 27, 2026

Italy's privacy watchdog fined OpenAI €15M for using personal data without proper legal basis. The Garante highlights transparency issues in ChatGPT's data handling.
MILAN (Reuters) - Italy's privacy watchdog said on Friday it fined ChatGPT maker OpenAI 15 million euros ($15.58 million) after closing an investigation into the use of personal data by the generative artificial intelligence application.
The authority, known as Garante, is one of the European Union's most proactive regulators in assessing AI platform compliance with the bloc's data privacy regime.
The Garante said it found OpenAI processed users' personal data "to train ChatGPT without having an adequate legal basis and violated the principle of transparency and the related information obligations towards users".
OpenAI had no immediate comment on Friday. It has previously said it believes its practices are aligned with the European Union's privacy laws.
Last year the Italian watchdog briefly banned the use of ChatGPT in Italy over alleged breaches of EU privacy rules.
The service was reactivated after Microsoft-backed OpenAI addressed issues concerning, among other things, the right of users to refuse consent for the use of personal data to train algorithms.
($1 = 0.9626 euros)
(Reporting by Alessia Pe and Elvira Pollina; Editing by Alvise Armellini and Frances Kerry)