March 8, 2026
Feeling Lucky or Feeling Lazy?
I'm Not Consulting an LLM
Chatbots kill curiosity? Commenters light the torches
TLDR: An essay warns chatbots give smooth, plausible answers that short‑circuit real learning, comparing them to always hitting “I’m Feeling Lucky.” The comments explode: some accuse the author of gatekeeping and say experts use these tools daily, others treat bots as study buddies, and a few claim they actually curb misinformation.
The author’s essay says using LLMs—“large language models,” aka chatbots like ChatGPT—is like hitting Google’s “I’m Feeling Lucky” button every time and skipping the messy, brain‑building journey. He argues the bots give smooth, plausible answers, not proven ones, and that this “frictionless” vibe can dull your instincts. It’s a romantic defense of wandering the web’s back alleys, not just arriving at an answer.
The comments? A street fight. One user basically rode in on a meme horse shouting “muh expertise”, calling the piece gatekeeping and flexing that top-tier legends (think math and code superstars) already use these tools—so if you can’t, that’s a you problem. Another countered with nuance: LLMs as a study buddy—perfect for unpacking dense philosophy and archaic wording, even if it’s not flawless. A third hot take claimed we’d have less misinformation if people just asked a bot before posting. Not everyone was entertained—one reader wanted a deeper piece, while another veered off‑topic to applaud the author’s wholesome “daily virtues” diary. Meme watch: “I’m Feeling Lucky” got twisted into “I’m Feeling Lazy,” and Crichton’s “wet streets cause rain” became “wet takes cause flame wars.” The thread’s vibe: Are chatbots brain rot—or just power tools misused by humans?
Key Points
- The author argues LLM use for information-seeking can short-circuit the exploratory process that builds intellectual judgment.
- A thought experiment compares perfect first-hit answers to Google’s “I’m Feeling Lucky,” claiming it optimizes for answers, not learning.
- LLMs are described as producing plausible answers rather than guaranteed-correct (or openly contested) ones, which can mask uncertainty.
- The author reports that GPT fails to provide expert-level responses in domains where the author has expertise.
- An exception is made for repetitive, easily automatable tasks, which the author would rather streamline directly (e.g., via Emacs macros).