Published by Global Banking and Finance Review
Posted on January 26, 2026
3 min read · Last updated: January 26, 2026
The EU is investigating Musk's X for potential rule breaches by Grok AI, focusing on illegal content dissemination under the Digital Services Act.
By Foo Yun Chee and Sam Tabahriti
BRUSSELS, Jan 26 (Reuters) - Elon Musk's X faces investigation by the European Union into whether it disseminates illegal content, following public outcry over the spread of manipulated sexualised images by its Grok artificial-intelligence chatbot.
The European Commission, the 27-nation bloc's executive arm, said on Monday that it would investigate whether social media platform X protected consumers by properly assessing and mitigating risks related to Grok's functionalities.
Its probe comes two weeks after British media regulator Ofcom launched its own investigation over concerns Grok was creating sexually intimate deepfake images, and after Indonesia, the Philippines and Malaysia temporarily blocked the chatbot.
The Commission said earlier this month that the AI-generated images of undressed women and children being shared on X were unlawful and appalling, joining condemnation across the world.
"Non-consensual sexual deepfakes of women and children are a violent, unacceptable form of degradation," EU tech chief Henna Virkkunen said in a statement.
DEEPFAKE IMAGES ALARMED REGULATORS GLOBALLY
X referred to a statement issued on January 14 in which it said owner xAI had restricted image editing for Grok AI users and blocked users, based on their location, from generating images of people in revealing clothing in "jurisdictions where it's illegal". It did not identify the countries.
The Philippines and Malaysia restored access to Grok after xAI said it had installed extra safety measures.
The Commission's move under the EU Digital Services Act, which requires Big Tech to do more to tackle illegal and harmful online content, came after xAI's Grok produced sexualised images of women and minors that alarmed global regulators.
Companies risk fines of as much as 6% of their global annual turnover for DSA breaches.
Although the changes made by xAI were welcome, they do not resolve all the issues and systemic risks, a senior official at the EU executive told reporters on Monday. The Commission believed X did not carry out an ad hoc assessment when it rolled out Grok's functionalities in Europe, the official added.
EU PROBE RISKS IRRITATING TRUMP
The investigation risks antagonising the administration of President Donald Trump as an EU crackdown on Big Tech has triggered criticism and even the threat of U.S. tariffs.
"With this investigation, we will determine whether X has met its legal obligations under the DSA, or whether it treated rights of European citizens - including those of women and children - as collateral damage of its service," Virkkunen said.
European lawmaker Regina Doherty said the case exposed wider weaknesses in how AI technologies are regulated and enforced.
"The AI Act must remain a living piece of legislation. If gaps in enforcement or oversight become clear, then it is our responsibility to address them. EU laws must be enforceable in real time when serious harms occur," she said.
EU regulators also extended an investigation into X opened in December 2023 to establish whether it has properly assessed and mitigated all systemic risks related to its so-called recommender systems, including the impact of its recently announced switch to a Grok-based system.
They said X, which was hit with a 150 million euro fine in December for breaching its transparency obligations under the DSA, may face interim measures in the absence of meaningful adjustments to its service.
(Additional reporting by Sam Tabahriti; Writing by Richard Lough; Editing by William James and Alexander Smith)
The Digital Services Act (DSA) is a regulation in the EU that aims to create a safer digital space by holding online platforms accountable for illegal content and ensuring user protection.
Artificial intelligence (AI) refers to the simulation of human intelligence in machines programmed to think and learn like humans, often used in applications like chatbots and data analysis.