January 14, 2026
Skynet vs Spreadsheet
LLMs are a 400-year-long confidence trick
From grandma’s calculator to Skynet hype: Is AI a hustle or a helper? Comment war ignites
TLDR: The piece says AI companies exploit centuries of trust in machines to sell chatbots with fear and hype. Commenters clash: some call doomsday talk a marketing ploy, others insist LLMs are genuinely smart and useful, while critics say “AI safety” ignores real-world harms—making this debate unavoidably personal and practical.
The article claims today’s chatbots are the endgame of a 400-year confidence trick: we’ve trusted machines since the first calculators, and now LLM makers stoke fear and wonder with talk of “P(Doom)”—the probability of AI apocalypse—to sell us more chatbots. The comments immediately turned into a soap opera. One camp calls the doom talk marketing theater; another says denying the smarts of these systems is pure delusion.
mossTechnician blasted “AI safety” groups for chasing sci‑fi nightmares instead of real harms like pollution or abuse content. ltbarcly3 clapped back: LLMs are intelligent in a genuinely new way—ten years ago this would have looked like magic. schnitzelstoat shrugged at killer-robot fears but warned that the economic shock is real, with chatbots slashing grunt work (as long as a human babysits to catch hallucinations). baq tried to end the philosophy fight: stop asking whether they’re “intelligent”; ask whether they’re useful. Then leogao arrived with the “actually” energy, accusing the author of lumping incompatible camps—the sellers and the doomers—into one big conspiracy.
Memes flew: “Skynet vs spreadsheets,” “three-card monte with autocomplete,” and “from tax collector’s calculator to doom charts.” Verdict from the peanut gallery: it’s either a slick hustle, or the most useful sidekick you’ve ever had—depending on which subreddit you live in.
Key Points
- Early mechanical calculators by Wilhelm Schickard (1623) and Blaise Pascal (circa 1640s) aimed to reduce arithmetic errors and labor.
- The article frames confidence scams as relying on emotional manipulation via positive promises or fear of negative consequences.
- It argues that centuries of using calculators and automation built societal trust in machine accuracy and shaped decision-making norms.
- OpenAI’s approach to GPT‑3 included not releasing the trained model publicly, citing concerns about malicious applications.
- Public discourse around LLMs includes “P(Doom)”—a probabilistic framing of catastrophic risk—used as part of fear-focused messaging.