May 10, 2026
Prompt soap opera hits code review
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
A fancy AI code checker lands — and commenters instantly ask if we’ve all lost the plot
TLDR: adamsreview is a new tool that uses several AI passes to inspect and even fix code changes automatically. Hacker News commenters weren’t just skeptical — they openly mocked the growing “AI reviewing AI” loop, with many asking whether this is smart automation or pure tech absurdity.
A new Hacker News demo called adamsreview promises to turn code checking into a full-blown AI assembly line: multiple AI helpers inspect a code change from different angles, another pass double-checks the findings, and an automated fix loop can even try repairs and undo bad ones before saving anything. The creator says it catches more real problems than several built-in and paid alternatives — though, to be fair, the evidence is the classic internet disclaimer: “anecdotal, n=me.”
But the real fireworks were in the comments, where readers reacted less like impressed shoppers and more like people watching someone build a Rube Goldberg machine to butter toast. One commenter squinted at the process and called it a “fair bit of ceremony,” while another dropped the line of the thread: “Holy vibe coding batman” — basically accusing the project of being a giant pile of prompts stacked on top of prompts. The biggest existential crisis came from a user who joked, “I pay Claude, to use Claude, to write instructions for Claude, to review code from Claude,” which instantly became the unofficial theme of the discussion.
That’s where the split emerged: is this a clever way to make AI safer and more useful, or a sign that software people are now fighting machine-made complexity with even more machine-made complexity? Another commenter said AI agents reviewing AI-written code felt “dirty,” especially for serious production work, and pitched a human-in-the-loop alternative instead. So yes, the tool is about better code reviews — but the comment section made it about something juicier: have developers started outsourcing so much judgment to AI that the whole thing is becoming absurd?
Key Points
- The article introduces adamsreview as a Claude Code plugin for multi-stage pull request review and automated fixing.
- The plugin provides six commands covering review, Codex-based review, adding external findings, walkthrough, automated fixes, and manual promotion of findings.
- The main review flow uses parallel sub-agent lenses, deduplication, validation passes, and an optional Opus cross-cutting pass.
- The automated fix loop re-reviews generated fixes with Opus, reverts regressions, and commits only the surviving changes.
- The post recommends a workflow of review first, optionally adding external findings, then optionally running walkthrough before applying fixes.
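The fix loop described above (apply a fix, re-review it, revert regressions, commit only survivors) can be sketched in a few lines. This is a toy illustration, not adamsreview's actual code: the `Workspace`, `apply_fix`, and `re_review` names are hypothetical stand-ins for a git worktree, the fix-generating agent, and the Opus re-review pass.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    file: str
    issue: str

@dataclass
class Workspace:
    """Toy stand-in for a git worktree: supports snapshot and revert."""
    files: dict = field(default_factory=dict)

    def snapshot(self):
        return dict(self.files)

    def revert(self, snap):
        self.files = dict(snap)

def fix_loop(ws, findings, apply_fix, re_review):
    """Apply each candidate fix, re-review it, and revert regressions.

    apply_fix(ws, finding) mutates the workspace in place;
    re_review(ws) returns a list of new findings (non-empty means the
    fix introduced a regression). Only fixes that survive re-review are
    returned, so the caller can commit just the surviving changes.
    """
    survivors = []
    for finding in findings:
        snap = ws.snapshot()
        apply_fix(ws, finding)
        if re_review(ws):          # regression detected: undo this fix
            ws.revert(snap)
        else:
            survivors.append(finding)
    return survivors
```

A usage sketch with two findings, where the second "fix" makes things worse and gets rolled back:

```python
ws = Workspace(files={"a.py": "bug1", "b.py": "bug2"})
findings = [Finding("a.py", "bug1"), Finding("b.py", "bug2")]

def apply_fix(ws, f):
    # Pretend the agent fixes a.py correctly but breaks b.py.
    ws.files[f.file] = "fixed" if f.file == "a.py" else "worse"

def re_review(ws):
    return [name for name, body in ws.files.items() if body == "worse"]

survivors = fix_loop(ws, findings, apply_fix, re_review)
# Only the a.py fix survives; b.py is reverted to its original state.
```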