EU lays out guidelines on misuse of AI by employers, websites and police
Published by Global Banking and Finance Review
Posted on February 4, 2025
2 min read · Last updated: January 26, 2026

The EU has introduced guidelines to prevent the misuse of AI by employers, websites and police, aiming to ensure ethical AI practices across Europe.
By Foo Yun Chee
BRUSSELS (Reuters) - Employers will be banned from using artificial intelligence to track their staff's emotions and websites will not be allowed to use it to trick users into spending money under EU AI guidelines announced on Tuesday.
The guidelines from the European Commission come as companies grapple with the complexity and cost of complying with the world's first legislation on the use of the technology.
The Artificial Intelligence Act, binding since last year, will be fully applicable on Aug. 2, 2026, with certain provisions kicking in earlier, such as the ban on certain practices from Feb. 2 this year.
"The ambition is to provide legal certainty for those who provide or deploy the artificial intelligence systems on the European market, also for the market surveillance authorities. The guidelines are not legally binding," a Commission official told reporters.
Prohibited practices include AI-enabled dark patterns embedded in services designed to manipulate users into making substantial financial commitments, and AI-enabled applications which exploit users based on their age, disability or socio-economic situation.
AI-enabled social scoring by social welfare agencies and other public and private bodies using unrelated personal data such as origin and race is banned, while police are not allowed to predict individuals' criminal behaviour solely on the basis of their biometric data if this has not been verified.
Employers cannot use webcams and voice recognition systems to track employees' emotions, while mobile CCTV cameras equipped with AI-based facial recognition technologies for law enforcement purposes are prohibited, with limited exceptions and stringent safeguards.
EU countries have until Aug. 2 to designate market surveillance authorities to enforce the AI rules. Breaches can cost companies fines ranging from 1.5% to 7% of their total global revenue.
The EU AI Act is more comprehensive than the United States' light-touch voluntary compliance approach while China's approach aims to maintain social stability and state control.
(Reporting by Foo Yun Chee; Editing by Alison Williams)

