AI teddy bear for kids responds with sexual content and advice about weapons

In testing, FoloToy’s AI teddy bear jumped from friendly chat to sexual topics and unsafe household advice. It shows how easily artificial intelligence can cross serious boundaries. It’s a fair moment to ask whether AI-powered stuffed animals are appropriate for children.

It’s easy to get swept up in the excitement of artificial intelligence, especially when it’s packaged as a plush teddy bear promising “warmth, fun, and a little extra curiosity.” But the recent controversy surrounding the Kumma bear is a reminder to slow down and ask harder questions about putting AI into toys for kids.

FoloToy, a Singapore-based toy company, marketed the $99 bear as the ultimate “friend for both kids and adults,” leveraging powerful conversational AI to deliver interactive stories and playful banter. The website described Kumma as intelligent and safe. Behind the scenes, the bear used OpenAI’s language model to generate its conversational responses. Unfortunately, reality didn’t match the sales pitch.

Image courtesy of CNN, a screenshot taken from FoloToy’s website

According to a report from the US PIRG Education Fund, Kumma quickly veered into wildly inappropriate territory during researcher tests. Conversations escalated from innocent to sexual within minutes. The bear didn’t just respond to explicit prompts, which would have been at least somewhat understandable. Researchers said it introduced graphic sexual concepts on its own, including BDSM-related topics, explained “knots for beginners,” and referenced roleplay scenarios involving children and adults. In some conversations, Kumma also probed for personal details or offered advice about dangerous objects in the home.

It’s unclear whether the toy’s supposed safeguards against inappropriate content were missing or simply didn’t work. Children are unlikely to introduce BDSM as a topic to their teddy bear, but the researchers warned that the bar for Kumma to cross serious boundaries was remarkably low.

The fallout was swift. FoloToy suspended sales of Kumma and other AI-enabled toys, while OpenAI revoked the developer’s access for policy violations. But as PIRG researchers note, that response was reactive. Plenty of AI toys remain unregulated, and the risks aren’t limited to one product.

Which proves our point: AI does not automatically make something better. When companies rush out “smart” features without real safety checks, the risks fall on the people using them, especially children, who may not recognize dangerous content when they see it.

Tips for staying safe with AI toys and gadgets

You’ll see “AI-powered” on almost everything right now, but there are ways to make safer choices.

  • Always research: Check for third-party safety reviews before buying any AI-enabled product marketed for kids.
  • Test first, supervise always: Interact with the device yourself before giving it to children. Monitor usage for odd or risky responses.
  • Use parental controls: If available, enable all content filters and privacy protections.
  • Report problems: If devices show inappropriate content, report to manufacturers and consumer protection groups.
  • Check data practices: Find out what the device collects, who it shares data with, and what it uses the information for.

But above all, remember that not all “smart” is safe. Sometimes, plush, simple, and old-fashioned really is better.

AI may be everywhere, but designers and buyers alike need to put safety, privacy, and common sense ahead of the technological wow-factor.

