March 18, 2026
Trust me, bro—the bot said so
Ask HN: How do you deal with people who trust LLMs?
Is chatbot faith just the new fake‑news habit? HN can’t agree
TLDR: An Ask HN post argues that "reputable" is subjective and asks whether AI summaries beat hand-picking human sources. Commenters split between "same old gullibility, new wrapper" and "dangerous if unverified," with jokes, zingers, and calls to teach fact-checking as AI answers become an everyday habit.
Hacker News lit up over a post arguing that "reputable sources" are mostly a habit—and that everything, from newspapers to AI chatbots, carries bias. The author wonders whether picking your poison via Google links to NYTimes, Fox News, or Wikipedia beats letting Gemini mash it all into an "average answer," and suggests we simply label sources as human vs. AI. The crowd? Divided and loud.
One camp shrugs: this is nothing new—just the same people who believed sketchy headlines now believing shiny bot paragraphs. Another camp worries about stubborn bot zealots: the folks who double down on an LLM (large language model) answer even when shown real evidence. Then came the zinger everyone screenshotted: there are "two kinds of fools"—those who blindly trust the first "reputable" article, and those who blindly trust the first LLM with a "reputable" citation. Ouch.
Pragmatists say to accept the chaos and teach verification instead. One commenter even demonstrates how sycophantic ChatGPT can be to friends, to break the spell. And the comic relief? Someone quipped they were "asking ChatGPT what this post is about," turning the thread into a meta meme. Bonus spice: whispers about future explicitly biased AIs like Grok and DeepSeek had commenters clutching their pearls. Verdict: truth wars, but make it internet-fun.
Key Points
- The article argues all human-created information systems inherently carry biases shaped by their creators and contexts.
- It claims labels like "objective truth" and "reputable" often reflect habituated trust and agreeability rather than neutrality.
- It contrasts Google Search's user-driven source selection with Gemini's summarized, "average" answers drawn from multiple sources.
- It notes future AI systems may be explicitly biased in training, citing Grok and DeepSeek as examples.
- It proposes reframing sources as "human" vs. "AI" to evaluate whether one is superior or the two are complementary.