December 30, 2025
Vibe coders meet the review wall
We don't need more non-programmers contributing code they can't explain
LLVM to AI coders: bring a brain, not just a bot — commenters clap and clap back
TLDR: LLVM says AI-written code is fine only if a human fully reviews it, takes responsibility, and labels it—no unattended bots or auto-review spam. Commenters applaud the sanity check while roasting “vibe coders,” warning reviewers are outnumbered and tired, and joking that some will still ask a bot to explain their bot’s code.
LLVM just dropped a simple, spicy rule: use AI tools if you want, but a human must own the work. No more ghost patches or bots spraying reviews. Label big AI help (think “Assisted-by”) and don’t say “the chatbot did it.” Even the GitHub @claude agent is benched without a human in charge. For the non-nerds: LLVM is the plumbing behind tons of software, and its volunteer gatekeepers are drowning in shaky, bot-flavored code.
The crowd reaction? Loud. One top comment sighed that “the code writers increased exponentially overnight,” while the number of reviewers didn’t—and might even be shrinking due to layoffs. Another called it “depressing this has to be spelled out,” blasting “vibe coders” who paste AI output and make maintainers babysit. There were cheers too—“Good policy”—and eye-rolls at people who would ask an AI to answer review questions about AI-written code. The meme of the day: a conveyor belt of bot code colliding with a tiny team of tired humans.

The drama isn’t whether AI is allowed (it is), but whether contributors take responsibility. In the words of one exasperated commenter: stop sending stuff you don’t understand and expecting someone else to clean it up. The internet, for once, mostly agreed.
Key Points
- LLVM’s updated draft AI policy requires a human-in-the-loop for all contributions using AI tools.
- Contributors must review and understand any LLM-generated content, remain fully accountable, and be able to answer review questions.
- Substantial tool use must be transparently labeled (e.g., an “Assisted-by:” commit trailer).
- Autonomous agents acting without human approval (e.g., a GitHub “@claude” agent) and automated review bots posting comments are banned; opt-in human-reviewed tools are allowed.
- The policy applies to code, RFCs, issues/security reports, and PR comments, aiming to protect maintainer time while enabling LLM-assisted productivity.
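For the curious, here is a minimal sketch of what the labeling rule could look like in practice: a git commit carrying an “Assisted-by:” trailer, which git treats like any other trailer (e.g., “Signed-off-by:”). The tool name and commit message below are placeholders, not values the policy mandates.

```shell
# Sketch: labeling substantial AI assistance with a commit trailer.
# The repo path, commit message, and tool name are illustrative only.
mkdir -p /tmp/assisted-by-demo && cd /tmp/assisted-by-demo
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
echo "int x;" > patch.c
git add patch.c
# The second -m adds a separate paragraph, so git parses it as a trailer.
git commit -q -m "Fix hypothetical lifetime bug" \
  -m "Assisted-by: some LLM coding tool (placeholder)"
# Trailers can then be queried mechanically:
git log -1 --format='%(trailers:key=Assisted-by)'
```

Because trailers are machine-readable, maintainers can later grep history for AI-assisted changes without relying on memory or PR descriptions.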