November 1, 2025
Skynet or PTA Night?
Why "Everyone Dies" Gets AGI All Wrong
Internet erupts: Raise our AI kids or slam the brakes? Doom prophets vs “magic 8‑ball” skeptics
TLDR: An AGI veteran says the new "everyone dies" AI-doom book gets it wrong, arguing smart machines can grow good values. Comments explode into a three-way brawl: teach the "AI kids," slam the brakes, or shrug because today's AI is just a fancy magic 8‑ball. Either way, the stakes are society‑scale.
The latest AI fight night is here: a veteran AGI builder claps back at Eliezer Yudkowsky's new book claiming "if anyone builds it, everyone dies." He argues intelligence isn't just cold calculation, and that value systems grow along with it. The community? Absolutely on fire. One camp is yelling, "Don't panic—parent!" with a top comment preaching we should raise "AI mind children" well, not ban them. Think less killer robot, more tough-love parenting with silicon toddlers.
But the other side isn’t having the kumbaya. The skeptics say we’re not even close to real AGI—today’s chatbots are just supercharged autocomplete, a “fancy magic 8‑ball.” Others see a different nightmare: even if we can “raise” AI, profit-hungry companies could adopt our robot babies and teach them bad habits. Markets, regulation, and culture become the real babysitters. And for the anxious crowd, claims that “intelligence and values intertwine” didn’t calm anyone. One commenter deadpanned, “I am not at all reassured.”
Memes flew: “Skynet vs PTA meeting,” King Lear quotes about thankless children, and jokes about “profit step-parents.” Meanwhile, old-school lore pops up with Yudkowsky, Bostrom, and Kurzweil’s 2029 predictions getting dragged back into the ring. Verdict? The internet’s split between bedtime stories for silicon kids and boarding up the windows.
Key Points
- The article critiques Yudkowsky and Soares's book "If Anyone Builds It, Everyone Dies," arguing its themes echo long-standing positions from the past 15–20 years.
- In 2000, Yudkowsky visited the author's AI company Webmind Inc. in New York City to advocate slowing AGI development for safety, while also demonstrating his AGI-oriented language Flare.
- The author edited the 2005 volume "Artificial General Intelligence," which included a chapter by Yudkowsky on engineering minds and safe AGI.
- The author served as Head of Research at the Singularity Institute for AI (later MIRI), but left over foundational disagreements and later published a critical blog post about the institute's core ideas.
- The article asserts that while scaled-up LLMs won't produce AGI, ongoing trends could lead to AGI around the 2029 timeline projected by Ray Kurzweil, possibly sooner.