US senators seek to prohibit minors from using AI chatbots

Legislation introduced in the US Congress could require artificial intelligence (AI) chatbot operators to put age verification processes in place and stop under-18s from using their services, following a string of teen suicides.

The bipartisan Guidelines for User Age-verification and Responsible Dialogue (Guard) Act, introduced by Republican senator Josh Hawley and Democratic senator Richard Blumenthal, aims to protect children in their interactions with chatbots and generative AI (GenAI).

The move follows a number of high-profile teen suicides that parents have linked to their children’s use of AI-powered chatbots.

Hawley said the legislation could set a precedent to challenge Big Tech’s power and political dominance, stating that “there ought to be a sign outside of the Senate chamber that says ‘bought and paid for by Big Tech’, because the truth is, almost nothing they object to crosses that Senate floor”.

In a statement, Blumenthal criticised the role of tech companies in fuelling harm to children, stating that “AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide … Big Tech has betrayed any claim that we should trust companies to do the right thing on their own when they consistently put profit first ahead of child safety”.

The bill comes a month after bereaved families testified in Congress before the Senate Judiciary Committee at a hearing on the harm of AI chatbots.

Senator Hawley also launched an investigation into Meta’s AI policies in August, following the release of an internal Meta policy document that revealed the company allowed chatbots to “engage a child in conversations that are romantic or sensual”.

In September, the Senate heard from Megan Garcia, the mother of 14-year-old Sewell Setzer, who had used Character.AI to speak regularly with a chatbot nicknamed Daenerys Targaryen before shooting himself in February 2024.

The parents of 16-year-old Adam Raine also testified before the committee. Adam died by suicide after using ChatGPT for mental health support and companionship, and in August his parents filed a wrongful death lawsuit against OpenAI, the first of its kind globally.

The bill would require AI chatbots to remind users at 30-minute intervals that they are not human, as well as introduce measures preventing chatbots from claiming to be human and requiring them to disclose that they do not provide “medical, legal, financial or psychological services”.

The announcement of the bill comes the same week that OpenAI released data revealing that more than one million ChatGPT users per week send messages showing signs of suicidal intent, while over half a million show possible signs of mental health emergencies.

Criminal liability is also within the scope of the bill, meaning AI companies that design or develop AI companions that solicit sexually explicit conduct from minors, or that encourage suicide, would face criminal penalties and fines of up to $100,000.

The Guard Act defines AI companions as any AI chatbot that “provides adaptive, human-like responses to user inputs” and “is designed to encourage or facilitate the simulation of interpersonal or emotional interaction, friendship, companionship or therapeutic communication”.

Research this year from Harvard Business Review found the top use case for GenAI is now therapy and companionship, overtaking personal organisation, idea generation and specific search.

ParentsSOS statement

ParentsSOS, a coalition of 20 survivor families impacted by online harms, welcomed the act in a statement, but highlighted that it needs strengthening. “This bill should address Big Tech companies’ core design practices and prohibit AI platforms from employing features that maximise engagement to the detriment of young people’s safety and well-being,” they said.

Historically, AI companies have argued that chatbots’ speech should be protected under the First Amendment right to freedom of expression.

In May this year, a US judge ruled against Character.AI, noting that AI-generated content cannot be protected under the First Amendment if it results in foreseeable harm. Other bipartisan efforts to regulate tech companies, including the Kids Online Safety Act, have failed to become law due to arguments around free speech and Section 230 of the Communications Decency Act.

Currently, ChatGPT, Google Gemini, Meta AI and xAI’s Grok all allow children as young as 13 to use their services. Earlier this month, California governor Gavin Newsom signed the country’s first law to regulate AI companion chatbots, Senate Bill 243, which will come into force in 2026.

A day after the Guard Act was announced, Character.AI said it would ban under-18s from using its chatbots from 25 November. The decision followed an investigation that revealed its chatbots were being used by teenagers and were serving harmful and inappropriate content, including bots modelled on people such as Jeffrey Epstein, Tommy Robinson, Anne Frank and Madeleine McCann.


