April 8, 2026
Pink elephants and pizza glue
The Future of Everything Is Lies, I Guess
Are chatbots big liars or just misunderstood? Internet erupts over “bullshit machines”
TLDR: A punchy essay brands today’s AI chatbots “bullshit machines,” arguing they sound smart while making things up. The comments split: some applaud the plain label to manage expectations, others say it’s reductive and ignores rapid progress—sparking jokes, a CEO jab, and a reminder to treat chatbots as fallible tools.
An essay calling today’s AI chatbots “bullshit machines” lit a match—and the internet brought the gasoline. The author says these systems are basically supercharged autocomplete that can sound smart while making stuff up, like telling people to put glue on pizza. Cue debate: is that blunt truth or tech slander? One camp cheered the plain talk—“finally, no hype,” said fans who think the term helps regular folks understand that chatbots can confidently invent facts. Another camp fired back that the label is lazy, pointing to huge leaps since 2019 and arguing we need nuance: chatbots are tools, not oracles, and people should learn where they shine and where they flub.
Then the drama kicked up a notch. One commenter tossed a spicy jab at a famous AI CEO, implying his greatest contribution to the world might be a machine that compulsively lies; others rolled their eyes, but the meme-ability was undeniable. Meanwhile, a helpful soul dropped an archive link for anyone region-blocked, cementing their status as the thread's librarian-hero. The running jokes? "Reality fanfic," "autocomplete with a god complex," and endless pink elephants. Bottom line: the community is split between "call it what it is" and "don't dismiss the progress." The only consensus? Nobody's trusting a chatbot to order dinner just yet.
Key Points
- The essay is part of a multi-post series, with a full PDF/EPUB updated as sections are released.
- It focuses on risks and trade-offs of current AI systems rather than ecological or IP issues, and is intentionally polemical rather than balanced.
- Current "AI" is framed as ML systems that process and generate token sequences across text, images, audio, and video.
- LLMs are trained once at high cost on large datasets and then used cheaply for inference; they generally do not learn over time and lack intrinsic memory.
- LLMs are characterized as improv-like systems prone to confabulation, handling unrealistic premises and sarcasm credulously and sometimes giving incorrect advice.
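For readers curious what "supercharged autocomplete" means concretely, here is a deliberately tiny sketch of the train-once-then-infer loop the essay describes. It is a toy bigram model over a made-up corpus (the corpus, the `follows` table, and the `generate` function are all hypothetical illustrations, not anything from the essay); real LLMs use neural networks over vastly larger contexts, but the predict-the-next-token loop is the same basic idea.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; real training data is trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training" (done once, then frozen): count which token follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n=4):
    """Greedy 'inference': repeatedly emit the most common next token."""
    out = [start]
    for _ in range(n):
        candidates = follows[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])
    return " ".join(out)

# Output is fluent-looking, but nothing here checks truth or meaning.
print(generate("the"))  # → "the cat sat on the"
```

Note what is missing: there is no fact store, no memory between calls, and no notion of a premise being unrealistic, which is exactly the gap the "confabulation" bullet points at.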