February 13, 2026
Poetry or panic? Exit stage left
AI safety leader says 'world is in peril' and quits to study poetry
Doom, millions, and a micro farm: internet splits over the AI safety guy going full poet
TLDR: Anthropic’s AI safety lead quit with a “world is in peril” letter to study poetry in the UK. Comments are split among calling it a cash‑out doom exit, a vague melodrama, and a mental‑health red flag, while an ad war with OpenAI fuels wider distrust of Big AI’s priorities.
An AI safety lead at Anthropic just quit with a dramatic “world is in peril” letter, saying he’ll study poetry and “become invisible” in the UK—and the internet went feral. The letter itself is there for the curious, but the real show is the comments, which read like a group chat during the apocalypse.
One camp is all cynicism and pitchforks: “cash out, plant kale, watch it burn.” Another is worried this sounds like burnout or a mental health spiral—“too many people are fraying after staring into the AI abyss.” And then there are the skeptics, dismissing the whole note as vague, flowery vibes that don’t raise any alarms, pointing to earlier debates on Hacker News. Meanwhile, meme lords are workshopping lines like “apocalypse now, sonnets later,” and “exit pursued by couplets.”
The timing adds spice: an OpenAI researcher also quit over ChatGPT ads, Anthropic dunked on those ads in a commercial, and Sam Altman’s long clapback got roasted. With Anthropic’s past author settlement still fresh, commenters are asking if “safety” is a brand or a belief. The plot twist—he’s going to write poems—has the crowd split between panic, eye-rolls, and a surprising number of micro-farm jokes.
Key Points
- Mrinank Sharma resigned from Anthropic, citing concerns about AI, bioweapons, and broader global crises.
- Sharma led an AI safeguards team and described contributions on AI behavior, bioterrorism risk mitigation, and human impacts of AI assistants.
- Anthropic positions itself as a safety-focused public benefit corporation and has published safety reports, including on its tech being used in cyberattacks.
- In 2025, Anthropic agreed to pay $1.5bn to settle a class action by authors alleging use of their work to train AI models.
- Debate over ads in chatbots intensified: OpenAI began running ads in ChatGPT, Anthropic criticized the move in commercials, and OpenAI’s Sam Altman publicly responded.