March 15, 2026
Who’s burning out—us or the bots?
LLMs can be exhausting
Dev brain vs robot brain: is it skill, hype, or too many tabs?
TL;DR: A developer says AI coding tools can be draining when prompts get sloppy and feedback takes ages, urging clearer goals and faster tests. The comments explode: some call it a “skill issue,” others blame human attention limits, and a loud chorus says the hype—not the tech—is what’s truly exhausting.
The author of this post admits they crawl into bed wrecked after marathon sessions with AI coding assistants, blaming tired brains, slow “slot machine” feedback loops, and bloated AI “memory.” Their fix? Step away when joy dies, write crystal‑clear prompts, and aim for tests that finish in under five minutes. Cue the comments turning into group therapy.
One camp yelled “skill issue.” User chalupa‑supreme argued the burnout comes from poorly guiding the bot and not setting test cases—translation: treat the AI like a junior coworker who needs clear instructions. Another camp called out human limits: cglan sighed that AI coding is “so much more exhausting than manual coding,” arguing our brains just can’t track everything these tools churn out. Then came the chaos crew: simonw’s “YOLO mode” (letting bots act without asking permission) means juggling 2–3 AI agents at once—aka constant context‑switching hell—while veryfancy swears the sweet spot is exactly those 2–3 sessions, but no more.
And of course, the spicy take: anthonySs says the real drain is the hype circus, comparing AI’s fandom to crypto—cool tech, exhausting noise. Memes flew about “human context windows” (aka short‑term memory) overflowing and 10‑minute “slot pulls.” Verdict: the bots aren’t the only ones hallucinating; our attention spans are too.
Key Points
- Author attributes unproductive LLM sessions primarily to personal fatigue and slow feedback loops rather than model degradation.
- Interrupting LLM outputs and providing midstream “steering” lead to worse results, especially with tools like Claude Code and Codex.
- Large-file parsing tasks cause slow iterations and heavy context usage, approaching compaction and reducing model effectiveness.
- A recommended “happy path” includes pausing when prompt quality declines, ensuring clear end-states, and avoiding half-formed prompts.
- Adopting a TDD-like approach (reproducing specific failure cases with strict time limits) helps achieve sub-5-minute feedback loops.
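The last point can be sketched concretely. This is a minimal, hypothetical illustration (the `parse` function, its failure case, and the helper names are invented, not from the post): reproduce one specific failure as a tiny check, and time each run against the sub-5-minute budget.

```python
import time
from typing import Callable

# The sub-5-minute feedback budget the post recommends.
FEEDBACK_BUDGET_SECONDS = 300

def timed_check(repro: Callable[[], bool],
                budget: float = FEEDBACK_BUDGET_SECONDS) -> tuple[bool, bool]:
    """Run one failure-reproduction check; return (passed, within_budget)."""
    start = time.monotonic()
    passed = repro()
    elapsed = time.monotonic() - start
    return passed, elapsed <= budget

# Hypothetical failure case: the parser should reject empty input
# with a ValueError instead of silently returning garbage.
def parse(s: str) -> list[str]:
    if not s:
        raise ValueError("empty input")
    return s.split(",")

def repro_empty_input() -> bool:
    try:
        parse("")
    except ValueError:
        return True   # expected failure mode reproduced
    return False      # regression: empty input no longer rejected
```

Calling `timed_check(repro_empty_input)` reports both whether the repro passed and whether the iteration fit the budget; keeping each check this narrow is what makes the loop fast enough to steer an AI assistant with.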