May 4, 2026
Fake news, but make it math
Hallucination Is Inevitable: An Innate Limitation of Large Language Models
Experts say AI will always make stuff up — and the comments went feral
TLDR: A new paper argues that chatbots can never fully avoid making up false answers, no matter how much they improve. Commenters split between “that’s the whole gimmick,” “just teach them to say ‘I don’t know,’” and the extremely online take that humans do this too.
The paper’s big claim is a real mood-killer for anyone hoping chatbots will one day become flawless truth machines: the authors argue that making things up isn’t a bug you can fully remove — it’s baked in if you use these systems as general-purpose problem solvers. In plain English, even if companies keep improving them, there may never be a magical update that stops wrong answers forever. That instantly set off the community’s favorite pastime: arguing about what this means for the entire AI hype train.
One camp read it as a giant reality check. The sharpest hot take came from one commenter who basically said calling hallucination a “limitation” is too generous because inventing plausible-sounding nonsense is the whole product. Ouch. Another commenter pushed back with the obvious loophole: what if the bot just says “I don’t know” more often? That turned the thread into a mini courtroom drama over whether caution counts as intelligence, or whether you’ve just built a very confident shrug machine. Meanwhile, one user dryly noticed the paper had been revised more than a year after it first appeared, which added a little academic side-eye to the spectacle.
And then came the dark comedy: a commenter pointed out that humans also hallucinate — we just call it being delusional. That joke landed because it cuts to the heart of the panic: if people already get things wrong all the time, are we demanding perfection from machines we can’t manage ourselves? The crowd’s verdict: AI may be useful, but trust it blindly and you’re the punchline. Read the paper if you want the formal proof behind the chaos.
Key Points
- The paper argues that hallucination in large language models cannot be completely eliminated.
- It formalizes hallucination as inconsistency between a computable LLM and a computable ground-truth function.
- Using results from learning theory, the authors claim that LLMs cannot learn all computable functions, so every such model must be wrong somewhere (see the toy sketch after this list).
- The paper extends the argument from the formal setting to real-world LLMs, concluding that hallucinations are therefore inevitable in practice as well.
- It also discusses hallucination-prone tasks under time-complexity constraints, reports empirical validation, and examines implications for mitigation and safe deployment.
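For readers who want a feel for the argument without wading through the paper, here is a minimal toy sketch of the diagonalization idea in Python. Everything in it is a hypothetical illustration, not the paper's actual construction: the three toy models stand in for the enumeration of all computable LLMs, and `diagonal_ground_truth` plays the ground-truth function built to disagree with model i on input i.

```python
# Toy diagonalization sketch: if every candidate "LLM" can be listed as a
# computable function, a ground truth can always be defined that each one
# gets wrong on at least one input.
from typing import Callable, List

# Hypothetical stand-ins for the enumeration of all computable models;
# each maps a question id to an answer.
models: List[Callable[[int], int]] = [
    lambda q: 0,      # model 0 always answers 0
    lambda q: q,      # model 1 echoes the question id
    lambda q: q * q,  # model 2 squares it
]

def diagonal_ground_truth(q: int) -> int:
    """Ground truth constructed to disagree with model q on question q."""
    return models[q](q) + 1  # differs from model q's answer by construction

for i, model in enumerate(models):
    answer, truth = model(i), diagonal_ground_truth(i)
    assert answer != truth  # each model hallucinates on its diagonal input
    print(f"model {i}: answered {answer}, truth is {truth} -> hallucination")
```

Note the trick only guarantees each model is wrong somewhere, not everywhere, which is exactly the shape of the paper's claim: hallucination can be reduced but never eliminated across all inputs.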