Australia Is Taking Its Teen Social Media Ban to the Next Level: AI Chatbots
Hugo Heimendinger
In December, Australia’s internet regulator introduced a ban on social apps for kids under 16, and now the country is expanding that approach with age-verification requirements for AI apps.
The 2025 decision made Australia the first country to ban kids and younger teenagers from using social media. The move came as part of a national drive to protect the mental health of its youngsters, following growing concern around the globe about the impact of social media on underage users.
Now, Reuters reports that Australian regulators will soon begin applying similar policies to generative AI services. As of March 9, platforms like OpenAI’s ChatGPT and Google’s Gemini will be required to meet a set of requirements intended to prevent users under the age of 18 from accessing harmful material, including pornography and content that contains extreme violence or promotes eating disorders or self-harm. Companies that fail to do so could face fines of up to $49.5 million AUD (~$35 million USD).
The new rules also attempt to address concerns about excessive chatbot use among young people, driven by fears that emotionally manipulative AI features could encourage children to become dependent on the chatbots.
“eSafety will use the full range of our powers where there is non-compliance,” a spokesperson for the country’s online safety commissioner said, including “action in respect of gatekeeper services such as search engines and app stores that provide key points of access to particular services”.
As noted by Reuters, while Australia has yet to deal with reports of chatbot-linked violence or self-harm, the eSafety commissioner has received reports of children as young as 10 spending as much as six hours per day in conversation with AI-powered chatbots.
The commissioner became “concerned that AI companies are leveraging emotional manipulation, anthropomorphism and other advanced techniques to entice, entrance and entrench young people into excessive chatbot usage”, the spokesperson said.
The new rules may also require search engines and app stores, like those run by Apple and Google, to block access to AI services that fail to verify users’ ages. Australia’s internet regulator is making the move after a Reuters review revealed that over 50 percent of AI services have taken no action to comply ahead of a deadline next week.
While Apple declined to comment when contacted by Reuters, the company said on its website last week that it would use “reasonable methods” to prevent minors from downloading 18+ apps in Australia and other jurisdictions with age restrictions. The Cupertino company did not specify what methods it would use to accomplish this.
A spokesperson for Google also declined to comment.
It should be noted that Apple has been rolling out new age-related controls across its device platforms since early last year to help apps comply with age-restriction laws around the globe. Still, adopting these APIs to meet local requirements remains the ultimate responsibility of the developers themselves.
Reuters research shows that compliance is still limited, with the lion’s share of the 50 most popular text-based AI tools taking no visible steps toward implementing age verification or filtering content for young users ahead of the deadline.
Eleven platforms either planned to put blanket content filters in place or to simply block all Australians from using their service, which would satisfy the new law by keeping restricted content from all users. The remaining 30 companies have taken no visible steps to follow the new rules.
