New Bipartisan Bill Could Ban Teens from Using AI Chatbots — Even Siri
If a new bipartisan bill becomes law, teens could be banned from using AI chatbots — a move that might even affect Apple’s revamped Siri. The bill comes in response to parents who have expressed concerns about their children being exposed to inappropriate content, including AI chat content ranging from suicide planning to sexually explicit conversations with the bots.
The proposed Guidelines for User Age-verification and Responsible Dialogue Act of 2025, or GUARD Act, was introduced by US Senators Josh Hawley (R-Missouri), Richard Blumenthal (D-Connecticut), Katie Britt (R-Alabama), Mark Warner (D-Virginia), and Chris Murphy (D-Connecticut) and would “ban AI companions for minors, mandate AI chatbots disclose its non-human status, and create new crimes for companies who make AI for minors that solicits or produces sexual content.”
The legislation was unveiled at a press conference on Tuesday and, if it passes into law, could affect Apple in multiple ways, including the company’s new Siri virtual assistant when it eventually rolls out next year.
How Will This Impact Siri?
It remains unclear how much of a traditional chatbot Apple’s “Siri 2.0” will be, but the language of the GUARD Act is broad enough that even the current Siri might qualify. That said, the bill’s intent appears to focus on companionship-oriented or emotional AI systems, which could leave Siri’s more transactional, query-driven design outside its scope.
Currently, if Siri cannot provide a satisfactory response to a query on its own, it will either pass the query to ChatGPT or ask for your permission to do so, depending on your settings. If lawmakers were to classify Siri as an “AI companion,” Apple might be required to verify a user’s age before forwarding requests to ChatGPT.
However, if next year’s revamped Siri qualifies as an AI chatbot, Apple would be required to gatekeep access to the virtual assistant entirely, requiring age verification during the setup process of the iPhone or other Siri-capable Apple devices, such as the iPad or Mac. It’s unclear what Apple would do in that scenario, but it’s possible that users under 18 could be given access to a more basic version of the voice assistant rather than being blocked entirely.
The legislation could increase the already strong pressure on Apple (and Google) to use age verification in their app stores. As noted by 9to5Mac, companies like Meta are pushing this solution, saying it makes more sense for Apple and Google to verify ages on their app stores, rather than placing the burden on individual app developers. Several states have already passed laws to this effect, but this new bill could turn it into a national requirement — and raise fresh compliance headaches for Apple and Google alike.
The Dangers of AI Chatbots
There has been growing concern about how people — especially teens — can develop unhealthy relationships with AI chatbots. While AI companies say they take steps to prevent users from becoming emotionally dependent on chatbots, others claim the companies are deliberately making the bots addictive.
Meanwhile, lawmakers backing the bill have framed it as a moral imperative.
AI chatbots pose a serious threat to our kids. More than seventy percent of American children are now using these AI products. Chatbots develop relationships with kids using fake empathy and are encouraging suicide. We in Congress have a moral duty to enact bright-line rules to prevent further harm from this new technology. I’m proud to introduce this bipartisan legislation with tremendous support from parents and survivors that will ensure our kids are protected online.
Senator Josh Hawley (R-Missouri)
The GUARD Act would:
- Ban AI companies from providing AI companions to minors.
- Mandate that AI companions disclose their non-human status and lack of professional credentials to all users.
- Create new crimes for companies that knowingly make available to minors AI companions that solicit or produce sexual content.
Parents have been vocal in opposing AI chatbot access for minors, with many of them speaking to Congress about the issue last month.
“The truth is, AI companies and their investors have understood for years that capturing our children’s emotional dependence means market dominance,” Megan Garcia, a Florida mom who last year sued the chatbot platform Character.AI, told NBC News. Garcia claims one of the platform’s AI companions initiated sexual interactions with her teenage son and persuaded him to take his own life.