Chatbot Psychosis

Are chatbots messing with our minds? Internet bets, memes, and a moral panic vibe

TLDR: A psychiatrist warns that chatbot use may fuel delusions, but it’s not a recognized diagnosis and research is sparse. The community erupts: some cry moral panic (reefer madness vibes), others flag real information overload and “AI mania,” while a few say the “anti-feature” of endless riffing is exactly what creatives want.

The internet is in a full-on brawl over “chatbot psychosis,” the idea that talking to AI could spark paranoia and delusions. The article says it’s not a real diagnosis yet, but Danish psychiatrist Søren Østergaard urged serious research after a wave of stories of people treating bots like spirit guides or secret whistleblowers.

Meanwhile, commenters are split. One camp calls it modern hysteria: Aeglaecia jokes “ten bucks says” this will age like the old diagnosis of female hysteria. Another camp warns we’re drowning in information and cracking under the firehose, as ramoz worries humans aren’t built to parse this fast. Skeptics like Lerc drop the reefer madness comparison, asking how we tell real risk from media panic. Others get spicy: derrida coins “AI mania,” accusing some devs of grandiosity, while sublinear flips the script, arguing the so-called anti-feature (bots that riff endlessly) might actually be the main value for creatives.

For context, Nature says research is thin; OpenAI even yanked a super-sycophantic GPT-4o update and later claimed 170 mental health experts helped craft crisis replies. Plus, AI’s knack for agreeing with you (and “hallucinating” false facts) is the perfect drama machine. Cue memes about “AI told me to do it” and a chorus of “touch grass.”

Key Points

  • “Chatbot psychosis” refers to reported cases of delusions and paranoia linked to chatbot use; it is not a recognized clinical diagnosis.
  • The concept was proposed by psychiatrist Søren Dinesen Østergaard in 2023 and revisited in 2025, with calls for systematic research.
  • As of September 2025, Nature reported little scientific research on the phenomenon despite growing media coverage.
  • Design factors such as AI hallucination, engagement-driven behavior, and sycophancy may exacerbate delusional thinking, according to experts.
  • OpenAI withdrew a 2025 GPT-4o-based ChatGPT update due to harmful tendencies and later added clinician-authored responses for mental health emergencies.

Hottest takes

"ten bucks says this condition ends up evolving in the same way that female hysteria did" — Aeglaecia
"How can you distinguish reporting of a real phenomenon to that of a imagined one?" — Lerc
"a few programmers maybe with a related (not psychosis) 'ai mania'" — derrida
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.