It hit me like a lightning bolt during a casual conversation about AI safety: we’re tuning these models for adults, but kids are using them too.
Think about it. When we discuss whether an AI model is “safe,” we’re thinking about bombs, violence, and other adult topics. But most AI apps today don’t expose the user’s age to the model. So it has absolutely no idea that a user is ten or seven or five years old.
The Young User Problem
Current AI safety measures operate under a fundamental assumption: the user is a reasonable adult who can handle adult-level information. The model will cheerfully explain:
- The historical context of various genocides
- Different types of substance abuse and their effects
- Adult relationship dynamics like “friends with benefits”
- Complex moral dilemmas without age-appropriate framing
And why shouldn’t it? The assumption baked into training has been that these conversations are held with adults.
The Safety Tuning Gap
Model providers have spent enormous effort making AI systems refuse to help with clearly harmful requests—bomb-making, illegal activities, hate speech. But we’ve completely ignored the more subtle question: How do we make AI responses appropriate for this specific user?
Sure, many apps now have cross-chat search and memory about the user, but the vast majority of users are on free plans or not logged in at all. So the model has no idea who they are, what their age is, or what their background knowledge might be.
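To make the gap concrete, here’s a minimal sketch using the OpenAI Python SDK as one example provider (any chat API looks much the same; the model name and question are illustrative). Notice that nothing in the request tells the model who is asking or how old they are:

```python
# Minimal sketch: a stateless chat request carries no identity or age info.
# The model sees only the raw question, whether the asker is 35 or 7.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        # No system prompt, no profile, no age field.
        {"role": "user", "content": "What does 'friends with benefits' mean?"}
    ],
)
print(response.choices[0].message.content)
```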
The current approach is like having a library where every book is available to everyone. There’s no age-appropriate partitioning or consideration for developmental readiness.
Access
And we know kids are using these models. They’re asking about everything—history, science, relationships, current events. And they’re getting responses calibrated for adult comprehension and emotional resilience.
The Technical Challenge
This isn’t easy to solve. Age verification is notoriously difficult online, and even if we could verify age, how do we determine appropriate information boundaries? Cultural differences, individual maturity levels, and parental preferences all complicate the equation.
We’re essentially running a massive experiment on children’s psychological development, and we have no idea what the long-term effects will be. We missed the mark with social media, and now we’re doing it again with AI.
For the above reasons (and many others), I’m writing an AI Safety For Parents email course. It will include a ton of information, and the website will have free resources as well.
For example, the problem described in this post is mostly fixable with a good system prompt, so I’ve put a free system prompt on the site that you can use to help your AI understand age-appropriate responses.
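Here’s a sketch of where such a prompt plugs in, continuing the earlier example. The prompt text below is a hypothetical stand-in, not the one from the site:

```python
# Minimal sketch: the same request, now with an age-appropriate system
# prompt prepended. The prompt wording here is a hypothetical stand-in.
from openai import OpenAI

AGE_APPROPRIATE_SYSTEM_PROMPT = (
    "The user may be a child. Before answering, consider whether the topic "
    "needs age-appropriate framing. Avoid graphic detail, explain sensitive "
    "topics gently, and suggest talking to a trusted adult when a question "
    "is beyond what a child should navigate alone."
)

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        # The system prompt shifts the default from "reasonable adult"
        # to "possibly a kid" for every response in the conversation.
        {"role": "system", "content": AGE_APPROPRIATE_SYSTEM_PROMPT},
        {"role": "user", "content": "What does 'friends with benefits' mean?"},
    ],
)
print(response.choices[0].message.content)
```

It’s not age verification, but it moves the default behavior in the right direction for shared or family devices.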
Fin
What are your thoughts on age-appropriate AI interactions? Have you noticed this gap in how we think about AI safety?
– Joseph “rez0” Thacker
Sign up for my email list to know when I post more content like this.
I also post my thoughts on Twitter/X.