February 17, 2026
When Docs Attack
Evaluating AGENTS.md: are they helpful for coding agents?
Study says AGENTS.md can backfire; devs split between 'less is more' and 'give it a manual'
TLDR: A new study finds that adding AGENTS.md context files often hurts AI coding bots’ success and raises costs. Developers are split: some only add minimal fixes after a failure, others want clear manuals and smarter, shorter docs—turning a “help file” into the week’s hottest debate on what actually helps.
Plot twist: a new study says those AGENTS.md “help pages” that tell AI coding bots how to work in your repo might actually make them worse. Researchers tested popular AI assistants on standard bug-fix challenges like SWE-bench and on real projects with human-written instructions. The kicker? With context files, success dropped while costs jumped by more than 20%—and the bots wandered more, poking at tests and files like tourists in a maze. Their verdict: keep instructions minimal, or you’re just giving the robot homework.
Cue the comments. medler is stunned by the numbers, while two camps immediately form. Team Minimalist, led by eknkc and echoed by pamelafox, swears by “add it only after failure.” Pamela even runs a mini-lab: add info, revert changes, re-run, measure improvement—science hat on, drama off. On the other side, amluto wants a simple user manual in the file: how to build, how to run tests, and, yes, how to deal with “the incredible crappiness” of one pesky sandbox. pajtai calls for better design—“progressive disclosure” so the bot only reads what it needs—and wonders how newer models would fare.
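Pamela’s “add it only after failure” routine is easy to turn into a loop. Here is a hypothetical sketch of that workflow—`run_agent_on_task`, the task strings, and the pass/fail simulation are all stand-ins for illustration, not a real agent API:

```python
# Hypothetical sketch of the "add it only after failure" loop:
# run the agent on each task; only when a task fails, try one targeted
# hint, re-run, and keep the hint only if it flipped the outcome.

from dataclasses import dataclass

@dataclass
class Result:
    task: str
    passed: bool
    cost_usd: float

def run_agent_on_task(task: str, context: str) -> Result:
    # Stand-in for invoking a real coding agent on one benchmark task.
    # Simulated rule: the task passes if a hint names its subsystem.
    passed = task.split(":")[0] in context
    return Result(task, passed, 0.10 + 0.02 * len(context.split()))

def iterate_context(tasks, context=""):
    """Grow the context file one earned hint at a time."""
    kept_hints = []
    for task in tasks:
        before = run_agent_on_task(task, context)
        if before.passed:
            continue                      # no failure, no new docs
        hint = f"{task.split(':')[0]}: see docs for this subsystem"
        after = run_agent_on_task(task, context + "\n" + hint)
        if after.passed:                  # keep only hints that fixed a failure
            context += "\n" + hint
            kept_hints.append(hint)
    return kept_hints

hints = iterate_context(["build: fix Makefile", "tests: flaky timeout"])
print(len(hints))  # every kept hint earned its place by flipping a failure
```

The point of the design is that each line in the context file is backed by a measured before/after improvement, which is exactly the discipline the study’s results argue for.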
The memes write themselves: “When docs attack,” “AI intern given a phone book,” and “less text, more tasks.” The community’s vibe is clear: either give the bot a short, sharp checklist or a smarter map that hides the side quests. Because if this study’s right, the wrong AGENTS.md turns helpful hints into speed bumps—and the bill goes up while the bot gets lost.
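The “smarter map that hides the side quests” idea can also be sketched. In this hypothetical progressive-disclosure setup—section names, bodies, and the keyword-matching rule are all illustrative assumptions—the agent always sees a one-line index, and a detailed section is loaded only when the task mentions its topic:

```python
# Hypothetical sketch of progressive disclosure for repo docs: the agent
# always gets a short index, and full sections are disclosed only when
# the current task text mentions their topic.

SECTIONS = {
    "build": "Run `make all`; artifacts land in ./dist.",
    "tests": "Run `make test`; integration tests need the sandbox up.",
    "deploy": "Never deploy from a feature branch.",
}

def index() -> str:
    # The short map the agent always sees: topics only, no details.
    return "Available docs: " + ", ".join(SECTIONS)

def context_for(task: str) -> str:
    # Disclose a section only if its topic appears in the task text,
    # so unrelated docs never inflate the prompt or the bill.
    relevant = [body for topic, body in SECTIONS.items()
                if topic in task.lower()]
    return "\n".join([index()] + relevant)

ctx = context_for("Fix the flaky tests in CI")
print(ctx)  # index plus the tests section; build and deploy stay hidden
```

If the study is right that longer context files cost more and help less, this kind of on-demand loading is one way to keep the map without paying for the side quests.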
Key Points
- The study evaluates coding agents using repository-specific context files (AGENTS.md) across multiple agents and LLMs.
- Two settings were tested: SWE-bench tasks with LLM-generated context files and real issues from repos with developer-committed context files.
- Context files generally reduced task success rates compared to no repository context.
- Including context files increased inference cost by over 20%.
- Context files encouraged broader exploration behaviors, and the authors recommend keeping human-written files to minimal, essential requirements.