AI models with systemic risks given pointers on how to comply with EU AI rules
Published by Global Banking and Finance Review
Posted on July 18, 2025
2 min read · Last updated: January 22, 2026
The EU provides guidelines for AI models with systemic risks to comply with the AI Act, impacting major tech companies with potential fines for non-compliance.
By Foo Yun Chee
BRUSSELS (Reuters) - The European Commission set out guidelines on Friday to help AI models it has determined pose systemic risks, and which therefore face tougher obligations to mitigate potential threats, comply with the European Union's artificial intelligence regulation (AI Act).
The move aims to counter criticism from some companies about the AI Act and its regulatory burden, while providing more clarity to businesses, which face fines ranging from 7.5 million euros ($8.7 million) or 1.5% of turnover to 35 million euros or 7% of global turnover for violations.
The AI Act, which became law last year, will apply from Aug. 2 to AI models with systemic risks and foundation models such as those made by Google, OpenAI, Meta Platforms, Anthropic and Mistral. Companies have until Aug. 2 next year to comply with the legislation.
The Commission defines AI models with systemic risk as those with very advanced computing capabilities that could have a significant impact on public health, safety, fundamental rights or society.
The first group of models will have to carry out model evaluations, assess and mitigate risks, conduct adversarial testing, report serious incidents to the Commission and ensure adequate cybersecurity protection against theft and misuse.
General-purpose AI (GPAI) or foundation models will be subject to transparency requirements such as drawing up technical documentation, adopting copyright policies and providing detailed summaries about the content used for algorithm training.
"With today's guidelines, the Commission supports the smooth and effective application of the AI Act," EU tech chief Henna Virkkunen said in a statement.
($1 = 0.8597 euros)
(Reporting by Foo Yun Chee;Editing by Elaine Hardcastle)
