Outsourcing Thinking

Are we letting bots do our brainwork? The internet can't agree

TLDR: A detailed blog post cautions that handing thinking over to chatbots can erode attention, judgment, and genuine care. Commenters split between mocking the warnings as dated and arguing that the real danger is losing the “boring” practice that builds intuition, while some cheerfully outsource tough mental work anyway.

A long, chewy blog post warns that “outsourcing thinking” to chatbots (the AI tools that write for you) can chip away at skills we actually need, especially in areas like real‑world judgment, caring communication, and anything too important to mess up. It pushes back on the catchy idea from Masley’s post that “thinking leads to more thinking,” arguing the picture is messier. The comments? Pure chaos. One reader admits that using AI makes them more impatient and quicker to skim than ever, like turning your brain into TikTok mode. Another calls the “don’t use AI for this” list quaint, pointing to an Atlantic piece about film students who can’t even sit through films anymore. Responsible tech use? The crowd is not convinced.

The spiciest take: it’s not about how much thinking we do, but which thinking we stop doing. Cutting out “boring” work might also cut out the judgment and intuition that come with it. Meanwhile, a manager bluntly says, “I hire developers to do the thinking I don’t want to,” sparking jokes about outsourcing your breakup texts to a bot. One dreamer pitches “8 billion people doing distributed verification,” which commenters translate to “8 billion fact-checkers and still wrong.” Verdict: half the crowd fears we’re scrolling our way out of attention, the other half shrugs and says let the bots sweat while humans think about “other things”—whatever those are.

Key Points

  • The article examines whether and when using LLMs can erode cognitive skills, emphasizing that effects depend on the type of use.
  • It engages Andy Masley’s “The lump of cognition fallacy,” which argues thinking begets more thinking, challenging fears about outsourcing cognition.
  • The author agrees on five cases where outsourcing thinking is harmful: when it short-circuits building tacit knowledge, when the thinking itself expresses care, when the experience of thinking is valuable, when faking it would be deceptive, and when the task is high-stakes and the AI can’t be trusted.
  • The author contends many real-world activities fall into these harmful categories, more than Masley suggests.
  • The discussion begins with personal communication and writing under the “deceptive to fake” category, using a dating-app example to illustrate authenticity concerns.

Hottest takes

“This list of things not to use AI for is so quaint” — camgunz
“They are hired to do the kind of thinking I’d rather not do.” — jemiluv8
“It’s not about how much thinking we do, but which thinking we stop doing” — preston-kwei
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.