Exclusive-Australia says it may go after app stores, search engines in AI age crackdown
Published by Global Banking & Finance Review®
Posted on March 1, 2026
5 min read · Last updated: March 1, 2026
Australia’s eSafety regulator may enlist app stores and search engines to enforce age restrictions on AI services that don’t verify users’ ages by March 9, under a sweeping crackdown targeting access to harmful content for under‑18s.
SYDNEY, Mar 2 (Reuters) - Australia's internet regulator said it may push search engines and app stores to block artificial intelligence services that fail to verify user ages after a Reuters review found more than half had not made public any steps to comply by a deadline next week.
The warning reflects one of the most aggressive efforts globally to rein in AI companies, which face a growing number of lawsuits for failing to stop - and even encouraging - self-harm or violence while researchers caution that such platforms are more harmful to youth mental health than social media.
Australia in December became the first country to ban social media for teenagers, citing mental health concerns, prompting a wave of world leaders to say they would do the same. The country now says it is spearheading a similar crackdown on AI by putting age restrictions on the content people can access with the technology.
From March 9, internet services in Australia including search tools like OpenAI's ChatGPT and lesser-known companion chatbots must restrict Australians under 18 from receiving pornography, extreme violence, self-harm and eating disorder content or face fines of up to A$49.5 million ($35 million).
"eSafety will use the full range of our powers where there is non-compliance," a spokesperson for the commissioner said, including "action in respect of gatekeeper services such as search engines and app stores that provide key points of access to particular services".
OpenAI and companion chatbot startup Character.AI have faced wrongful death lawsuits over their interactions with young users, while OpenAI acknowledged this week it deactivated the ChatGPT account of a teen mass shooting suspect in Canada months before the attack, without telling the authorities.
Australia has yet to see reports of chatbot-linked violence or self-harm, but the regulator says it has been told about children as young as 10 talking to the AI-powered interactive tools for up to six hours a day.
eSafety was "concerned that AI companies are leveraging emotional manipulation, anthropomorphism and other advanced techniques to entice, entrance and entrench young people into excessive chatbot usage", the spokesperson said.
Top app store operator Apple did not respond to a request for comment, but said on its website last week that it would use "reasonable methods" to stop minors downloading 18+ apps in Australia and other jurisdictions introducing age restrictions, without specifying the methods.
A spokesperson for Google, Australia's dominant search engine provider and No.2 app store operator, declined to comment.
Jennifer Duxbury, head of policy at internet industry group DIGI, who led the drafting of the AI code before it was signed off by the regulator, said eSafety was trying to notify chatbot services about the new rules but "ultimately any service operating in Australia is responsible for understanding its legal obligations and ensuring it meets them".
COMPLIANCE IN THE MINORITY
A week before Australia's deadline, of the 50 most popular text-based AI products, nine had rolled out or announced plans for age assurance systems, the Reuters review found. The review was based on each platform's response to prompts asking for restricted content and moderation policies, published statements including terms of service, and statements to Reuters.
Another 11 platforms had blanket content filters or planned to block all Australians from using their service, measures that would comply with the new law by keeping restricted content from all users, leaving 30 with no apparent steps taken to follow the new rules, the review found.
Most large chat-based search assistants such as ChatGPT, Replika and Anthropic's Claude had started rolling out age assurance systems or blanket filters. Chatbot provider Character.AI cut off open-ended chat for under-18s.
Companion chatbot providers Candy AI, Pi, Kindroid and Nomi told Reuters they planned to comply without elaborating, while HammerAI said it would block its services from Australia initially to comply with the code.
But those were the minority. Of the companion chatbots, three-quarters had no functioning or planned filtering or age assurance, while one-sixth did not have a published email address to report suspected breaches, which is also required.
Elon Musk's chat-based search tool Grok, which is under investigation globally for suspected failure to stop production of synthetic sexualised imagery of children, had no age assurance measures or text-based content filters, Reuters found. Grok's parent company, xAI, did not respond to a request for comment.
Lisa Given, director of RMIT University's Centre for Human-AI Information Environments, said the Reuters findings were unsurprising because "most of these tools are being designed without a view to potential harms and the need for those kinds of safety controls".
"It feels as though ... we're beta testing all of these things for these companies and they're trying to see how far society is willing to be pushed," she said.
($1 = 1.4085 Australian dollars)
(Reporting by Byron Kaye; Editing by Saad Sayeed)