April 21, 2026
Doom essay, geo rage, font cage match
Bullshit About Bullshit Machines [pdf]
AI Doom Essay Drops, HN Erupts: Philosophy Fistfight, UK Block Fury, and a Font War
TLDR: Aphyr’s new PDF blasts modern AI as unreliable “bullshit machines,” and the comments exploded into a three-way brawl over consciousness, UK geoblocking, and typesetting. It matters because the tech world isn’t just debating AI’s risks; it’s also fighting over who even gets to read about them.
Kyle Kingsbury (aka aphyr) just dropped a sprawling, gloomy PDF rant branding modern AI systems “bullshit machines” and warning about unreliable chatbots, chaotic systems, spammy internet pollution, and even robot-fueled scams. But the real show was the comments: the Hacker News crowd (a tech forum) turned it into a three-ring circus of philosophy, censorship anger, and nerd sniping.
One camp dove straight into the deep end: “what is conscious even,” asked one commenter, dragging out the classic Chinese Room thought experiment. Another camp got mad about access: UK readers saw a block page blaming the UK Online Safety Act, and the thread lit up with “Is this the free world?” sarcasm and demands to stop front‑paging links that some countries can’t open. One user even called for mandatory archive links, which sparked a mini moderation debate inside the bigger AI debate. Meanwhile, the most wholesome corner of the crowd asked the only question that truly matters: “what was this typeset in—pandoc?” Because no internet fight is complete without a font war.
Amid the drama, a helpful soul dropped a roundup of past chapter discussions. Verdict from the peanut gallery? AI is scary, access is messy, and typography is forever, with enough snark to fuel the next 10 think pieces.
Key Points
- The essay is a multi-section critique and exploration of AI—especially LLMs—authored by Kyle Kingsbury.
- The author questions the ethics of making deep learning cheaper, citing risks like increased spam and propaganda (from a 2019 hyperscaler talk).
- The introduction outlines themes: ambiguous definitions of AI, unreliable model narration, uneven capabilities (the “jagged edge”), and uncertainty about true progress.
- Sections address system dynamics (chaos, verification problems), cultural shifts, information-ecology issues (spam, web pollution, consensus collapse), and safety/security concerns.
- Work-related impacts include automation ironies, labor shocks, capital consolidation, consideration of UBI, and new AI-adjacent roles.