OpenAI is going to try to predict the ages of its users in order to protect them better, as stories of AI-induced harm to children mount.
The company, which runs the popular ChatGPT AI, is working on what it calls a long-term system to determine whether users are over 18. If it can’t verify that a user is an adult, they will eventually get a different chat experience, CEO Sam Altman warned.
“The way ChatGPT responds to a 15-year-old should look different than the way it responds to an adult,” Altman said in a blog post on the issue.
Citing “principles in conflict,” Altman explained in a supporting blog post how the company is struggling with competing values: allowing people the freedom to use the product as they wish, while also protecting teens (the system isn’t supposed to be used by those under 13). Privacy is another value the company holds dear, Altman said.
OpenAI is prioritizing teen safety over its other values. Two things ChatGPT shouldn’t do with teens, but can do with adults, are engaging in flirtatious talk and discussing suicide, even in a hypothetical creative writing context.
Altman commented:
“The model by default should not provide instructions about how to commit suicide, but if an adult user is asking for help writing a fictional story that depicts a suicide, the model should help with that request.”
It will also try to contact a teen user’s parents if it looks like the child is considering taking their own life, and possibly even the authorities if the child seems likely to harm themselves imminently.
The move comes as lawsuits mount against the company from parents of teens who took their own lives after using the system. Late last month, the parents of 16-year-old Adam Raine sued the company after ChatGPT allegedly advised him on suicide techniques and offered to write the first draft of his suicide note.
The company hasn’t gone into detail about how it will try to predict user age, other than looking at “how people use ChatGPT.” You can be sure some wily teens will do their best to game the system. Altman says that if the system can’t predict with confidence that a user is an adult, it will drop them into teen-oriented chat sessions.
Altman also signaled that ID authentication might be coming to some ChatGPT users. “In some cases or countries we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff,” he said.
While OpenAI works on the age prediction system, Altman recommends parental controls for families with teen users. Available by the end of the month, these controls will let parents link their teens’ accounts with their own, guide how ChatGPT responds to them, and disable certain features, including memory and chat history. Parents will also be able to set blackout hours and receive alerts if their teen seems to be in distress.
This is a laudable step, but the problems extend beyond teens alone. As Altman says, this is a “new and powerful technology,” and it’s affecting adults in unexpected ways too. This summer, the New York Times reported that a Toronto man, Allen Brooks, fell into a delusional spiral after beginning a simple conversation with ChatGPT.
There are plenty more such stories. How, exactly, does the company plan to protect those people?