X Agrees to UK Regulatory Crackdown on Hate Speech and Militant Content
By Sam Tabahriti
UK Regulatory Actions and X's Commitments
LONDON, May 15 (Reuters) - Elon Musk's X has agreed to strengthen protection for UK users against illegal hate speech and terrorist content, Britain's media regulator said on Friday, following months of regulatory pressure.
Under the agreement, Ofcom said the social media platform will review suspected illegal hate and terrorism-related posts within 24 hours on average, and assess at least 85% within 48 hours.
X - which regularly says it enforces bans on terrorist groups and hateful content - did not immediately respond to a request for comment.
Focus on Hate Speech After Antisemitic Attacks
Measures to Restrict Terrorist Content
The platform has also promised to restrict access in Britain to accounts operated by or on behalf of organisations banned under UK terrorism laws, and will submit quarterly performance data to Ofcom over the next year, the regulator added.
Engagement with External Experts
X will also engage external experts to improve its reporting systems, after civil society groups raised concerns that flagged content was not always acknowledged or acted on, Ofcom said.
Statements from Ofcom
"We have evidence that terrorist content and illegal hate speech is persisting on some of the largest social media sites," Oliver Griffiths, Ofcom's online safety group director, said.
"This is of particular importance in the UK following a number of recent hate-motivated crimes suffered by the country's Jewish community."
Recent Antisemitic Incidents in the UK
Britain has seen a string of attacks on Jewish people and Jewish sites, including the stabbing of two men in north London last month in what police are treating as a terrorist incident.
Reactions from Advocacy Groups
Imran Ahmed, chief executive of the Center for Countering Digital Hate, said the commitments followed "sustained campaigning" after last year's attack on Heaton Park Synagogue in northern England.
Danny Stone, chief executive of the Antisemitism Policy Trust, said the commitments were "a good start" but that X was still "failing in so many regards" to tackle racism.
Global Regulatory Pressure and Ongoing Investigations
International Scrutiny of X
Regulators in the European Union, Australia and Singapore have also pressed the platform over illegal or militant content, and the European Commission has opened a formal probe into whether X is failing to curb hate speech.
AI Tools and Content Moderation Concerns
The new commitments in Britain follow increased scrutiny of X's platform and artificial intelligence tools.
In February, Reuters reported that Musk's Grok chatbot generated sexualised images, in many cases even when users warned that the subjects had not consented.
Ongoing Ofcom Investigation
Ofcom said its own investigation into X, including into its systems for tackling illegal content and issues related to Grok, remains ongoing.
(Reporting by Sam Tabahriti; Editing by Andrew Heavens)