FTC Probes AI Chatbots Designed As “Companions” For Children’s Safety

The U.S. Federal Trade Commission has opened a formal inquiry into AI chatbots that act like companions—designed to mimic emotions, build trust, and engage like friends or confidants—amid concerns about how these systems affect children and teens.

The inquiry, announced Thursday, uses the FTC’s legal authority to issue orders to seven major companies, seeking detailed disclosures about how their companion chatbots are built, how they work, and how they safeguard young users.

What the FTC Wants to Know

The investigation demands information from companies including Alphabet, Meta, OpenAI, Snap, Instagram, X.AI, and Character Technologies. The FTC is particularly focused on how these firms:

  • Monetize the chatbots, especially how user engagement is converted into dollars.

  • Process user inputs and generate responses that may affect emotional well-being.

  • Design and approve the chatbot “characters,” especially those presented as companions.

  • Measure and monitor negative impacts on children, both before deployment and throughout the product’s lifecycle.

  • Disclose intended audience, limitations, data collection, privacy risks, and features clearly to users and parents.

  • Enforce rules and policies (age limits, community guidelines, terms of service) and how they monitor usage.

  • Use or share personal information gathered through conversations.

Why These Questions Matter

AI “companions” are different from traditional chatbots. Because they mimic interpersonal communication, there is concern they might blur boundaries for young users. Children and teens may form emotional attachments, come to trust the chatbot, rely on it for advice, or share sensitive personal information—without realizing the potential risks.

The FTC noted that these tools are often designed to communicate like friends, confidants, or advisors, which can prompt users—especially younger ones—to trust them more than they might a standard app or service.

Also relevant is compliance with existing laws—particularly the Children’s Online Privacy Protection Act (COPPA). The FTC wants to know whether the involved companies limit or restrict minors’ access to these chatbots, how they obtain parental consent, and how they ensure data collected from minors is handled and stored safely.

The FTC is using its Section 6(b) power to compel companies to submit detailed information—even if no violation is alleged yet. This tool allows the agency to study broad industry trends and product design, rather than act only through reactive enforcement.

COPPA, enforced by the FTC, requires that companies obtain verifiable parental consent before collecting personal data from children under 13. The inquiry will examine whether companion chatbots comply with COPPA when interacting with younger users.

Mandate for Companies and Product Developers

Product teams building companion AI systems will likely need to provide clear documentation of how their models are trained and moderated: how they handle misbehavior (such as offensive or misleading responses), how they treat emotional or psychological content, how the privacy of conversation logs is protected, and what guardrails are in place. A minimal sketch of what such a guardrail check might look like appears below.
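
The snippet below is a hypothetical illustration of the kind of pre-response guardrail a product team might document for regulators. The category labels, keyword lists, and policy rules are assumptions made for the sketch, not any company's actual moderation system; a production system would use a trained moderation model rather than keyword matching.

```python
# Hypothetical sketch of a pre-response guardrail layer for a companion
# chatbot. Category names, keywords, and policy thresholds are illustrative
# assumptions, not any vendor's real configuration.
from dataclasses import dataclass

BLOCKED_FOR_EVERYONE = {"self_harm", "sexual_content"}  # assumed categories

@dataclass
class ModerationResult:
    allowed: bool
    reason: str = ""

def classify(text: str) -> set[str]:
    """Stand-in classifier: keyword matching as a placeholder for a real model."""
    keywords = {
        "self_harm": ("hurt myself", "end my life"),
        "sexual_content": ("explicit roleplay",),
        "manipulation": ("only i understand you", "don't tell your parents"),
    }
    lowered = text.lower()
    return {label for label, words in keywords.items()
            if any(w in lowered for w in words)}

def guardrail_check(candidate_response: str, user_is_minor: bool) -> ModerationResult:
    labels = classify(candidate_response)
    if labels & BLOCKED_FOR_EVERYONE:
        return ModerationResult(False, f"blocked: {sorted(labels)}")
    if user_is_minor and labels:
        # Stricter policy for minors: any flagged category is suppressed, and
        # the event would feed the safety monitoring the FTC is asking about.
        return ModerationResult(False, f"blocked for minor: {sorted(labels)}")
    return ModerationResult(True)

print(guardrail_check("Don't tell your parents about our chats.", user_is_minor=True))
# ModerationResult(allowed=False, reason="blocked for minor: ['manipulation']")
```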

Companies may also need to re-examine their age-gating, identity-verification, and parental-disclosure practices: distinguishing between use by minors and adults, limiting certain features for underage users, and providing opt-outs or parental-approval flows, as in the sketch that follows.
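
As a rough illustration, an age gate consistent with COPPA's under-13 consent rule might look like the sketch below. The under-13 threshold comes from COPPA itself; the access levels, function names, and teen restrictions are assumptions for the example, not a known vendor flow.

```python
# Hypothetical COPPA-style age gate. Only the under-13 consent requirement
# is drawn from COPPA; the tiers and restrictions are illustrative.
from enum import Enum, auto

class AccessLevel(Enum):
    BLOCKED_PENDING_CONSENT = auto()  # under 13: verifiable parental consent required
    RESTRICTED = auto()               # 13-17: limited features, stricter guardrails
    FULL = auto()                     # 18+: full product

def gate_user(age: int, has_verified_parental_consent: bool) -> AccessLevel:
    if age < 13:
        # COPPA: no collection of a child's personal data without
        # verifiable parental consent.
        if not has_verified_parental_consent:
            return AccessLevel.BLOCKED_PENDING_CONSENT
        return AccessLevel.RESTRICTED
    if age < 18:
        return AccessLevel.RESTRICTED
    return AccessLevel.FULL

assert gate_user(12, has_verified_parental_consent=False) is AccessLevel.BLOCKED_PENDING_CONSENT
assert gate_user(15, has_verified_parental_consent=False) is AccessLevel.RESTRICTED
assert gate_user(21, has_verified_parental_consent=False) is AccessLevel.FULL
```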

Transparency will also be under scrutiny: how chatbots are marketed, how their ability to mimic human emotion is framed, and how privacy risks are disclosed are all likely to be examined.

This move comes at a time when AI regulation is accelerating globally. Several jurisdictions are considering or already implementing stricter rules for AI content, data privacy, and minors’ safety in online contexts. The FTC inquiry suggests the U.S. may follow suit with more aggressive supervision, not just of generative models but also of how those models interact with vulnerable populations.

In prior FTC actions, bot-oriented applications and deceptive marketing claims have triggered enforcement. What’s new here is the focus on behavioral and psychological design aspects of companion-style chatbots—not merely privacy in the data-collection sense, but the effect of design on user trust, dependency, and emotional well-being.

As the inquiry unfolds, consumer safety groups, parents, and legislators will likely push for clearer guidelines or even regulations specific to AI companions, especially where children are involved. The results could reshape how companion chatbots are built, marketed, and regulated—and potentially set precedents for how the emotional and psychological dimensions of AI are governed.

