October 29, 2025
No minors, major drama
Character.AI to bar children under 18 from using its chatbots
Character.AI bans teen chat buddies: parents cheer, teens eye workarounds, privacy panic flares
TLDR: Character.AI will block under-18 users on Nov. 25 to address child safety concerns. Commenters are split between applause for protecting kids and skepticism that crafty teens will evade detection, while privacy worries flare over age checks via chat behavior and social accounts—a high-stakes move with real-world consequences.
Character.AI just dropped a bombshell: teens are out starting Nov. 25. The company says it’s about safety after a tragic lawsuit and rising panic over “AI companions” harming young people’s mental health. The community? Explosive. Some are shouting “finally,” pointing to heartbreaking stories like the Florida case and the wider debate over whether chatbots can be held responsible. Others are glaring at the fine print: Character.AI will try to detect minors from their chats and connected social accounts. Cue the memes about teens typing “indeed, my good sir” to pass as 18, and jokes that kids are “level 99 at lying on the internet.”
Skeptics call it a PR bandage (“it won’t dent their bottom line,” one quips), while privacy hawks go nuclear, alleging Character.AI is “shady” with data and fretting that intimate teen chats may already have been siphoned off to Big Tech (an allegation, but a loud one). Supporters argue this is exactly the guardrail lawmakers want, pointing to California’s new rules and a Senate bill that would ban AI companions for minors. Meanwhile, the hot-take Olympics rages on: is this real protection or tech theater? Teens are already trading tips on dodging filters, parents are breathing (slightly) easier, and everyone’s watching how this “AI safety lab” actually performs when the drama moves from the comments to real life.
Key Points
- Character.AI will ban users under 18 from its chatbots starting Nov. 25, with minor accounts identified and limited beforehand.
- The company plans to launch an AI safety lab and says it aims to exceed regulatory requirements on safety.
- The move comes amid lawsuits and scrutiny over AI companions’ mental health impacts; OpenAI has also adjusted its safety features.
- A U.S. Senate bill seeks to bar AI companions for minors, and California enacted a law requiring chatbot safety guardrails effective Jan. 1.
- Character.AI, founded in 2021 by Noam Shazeer and Daniel De Freitas, raised nearly $200M; Google licensed its technology for about $3B, and the founders returned to Google.