There's yet another study about how bad AI is for our brains

AI gives quick wins, then fries your grit — commenters cry brain-fry, lawsuits, and Rawdog Thursdays

TLDR: A new study says AI gives short-term help but drains motivation, with performance crashing when the bot is removed. Commenters blast mandates, predict lawsuits and safety rules, and joke about “Rawdog Thursdays,” while others note it isn’t peer-reviewed and that hint-only use could keep brains in shape.

Another day, another doom-scroll study: researchers say AI boosts your performance at first, then quietly drains your willpower. In tests with a specialized chatbot built on OpenAI’s GPT-5, scores jumped while the bot was on, then fell off a cliff when access was cut. People didn’t just get answers wrong; they stopped trying. The effect repeated across math and reading tasks, with the authors warning of a “boiling frog” effect for our brains. Caveat: the paper is not yet peer-reviewed, and using AI for hints (not full answers) seemed to protect resilience. The preprint is on arXiv, with coverage at Futurism.

The comments? A full-on reality show. One dev, austin-cheney, says forced AI use will breed “worthless” coders who can’t tell good code from bad—comparing it to past hype cycles like jQuery and React. Another, aqme28, says AI feels like working with a team of “junior employees” and wonders if this is the same spell that makes bosses “stupid and crazy.” The mood gets darker with bayarearefugee calling users “captives,” and gjsman-1000 predicting the first big lawsuit over mandated AI could trigger insurance and OSHA-style safety rules. Meanwhile, the meme of the day: desecratedbody’s “Rawdog Thursdays,” a weekly no-AI cleanse to rebuild grit.

The drama splits between “ban the bots” and “use them wisely.” Even skeptics admit the hint-only strategy might be the compromise. But the bigger question buzzing through the thread: are we outsourcing persistence, and maybe creativity, to chatbots?

Key Points

  • A US-UK study finds AI assistance improves immediate performance but reduces persistence and independent capability once AI is removed.
  • In an experiment with 350 Americans solving fraction problems, those who used a GPT-5-based chatbot performed worse once AI access was removed mid-test, and were more likely to give up.
  • A larger replication with 670 participants and a reading comprehension experiment showed similar declines after AI access was cut.
  • The authors liken the effect to a “boiling frog,” suggesting sustained AI use may erode motivation and long-term learning.
  • The study is not yet peer-reviewed; using AI for hints/clarifications mitigated negative effects compared to relying on it for direct answers.

Hottest takes

"turning their audience into captives who end up increasingly dependent" — bayarearefugee
"All fun and games until the first time someone successfully sues an employer who mandated it and wins a mental health claim" — gjsman-1000
"implement 'Rawdog Thursdays' as I call it, in which you write code without the assistance of AI" — desecratedbody
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.