GitHub Browser Plugin for AI Contribution Blame in Pull Requests

New GitHub tool snitches on bot-written code, and devs are brawling

TLDR: A new GitHub add‑on uses git‑ai to flag AI‑written lines in pull requests and keep the original prompts as receipts. Commenters split three ways: just judge the code, give bots their own accounts, or record AI credit in the commits themselves. Underneath it all, ownership, trust, and accountability in open‑source code now really matter.

GitHub drama alert: a new add‑on aims to tell you exactly which lines in your pull request were typed by a robot. The git‑ai backend tracks AI contributions line‑by‑line and preserves the original prompts, while a browser plugin from rbbydotdev splashes those receipts right on your PR. The promise: maintainers get transparency and maybe even a “percentage‑of‑AI” score to gut‑check quality.

Cue the comment cage match. One camp, led by verdverm, says just give bots their own accounts and use regular blame tools; no new gadget needed. The minimalist crowd (nilespotter) shrugs: why not simply judge the code? Meanwhile, the standards nerds (shayief) want this baked into the commit itself, like a "Co-authored-by" trailer, not hidden in side notes. Ethics alarm bells ring as operator-name drops the classic line: a computer can't be accountable, so who really owns the changes, whether written by AI, a human, or "typed up by a monkey"?
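The "bake it into the commit" camp is pointing at a convention GitHub already renders natively: the Co-authored-by trailer in a commit message body. A minimal sketch of that approach; the repo, file name, and bot identity below are all made up for illustration:

```shell
# Set up a throwaway repo (placeholder identity, not a real project)
git init -q demo && cd demo
git config user.email "dev@example.com"
git config user.name "Example Dev"
echo "retry()" > fetcher.txt && git add fetcher.txt

# Credit the AI assistant via a Co-authored-by trailer in the commit body;
# GitHub parses this trailer and shows the co-author on the commit.
git commit -q -m "Add retry logic to fetcher" \
  -m "Co-authored-by: example-ai-bot <ai-bot@example.com>"

# Show the commit body, which carries the trailer
git log -1 --format=%b
```

The appeal of this proposal is that the attribution travels with the commit itself through clones, forks, and mirrors, with no extra tooling on the reader's side.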

There’s extra spice over the post getting re‑surfaced on the front page—is this hype or helpful?—while others point to projects like Zig banning AI contributions entirely. Supporters argue this is perfect for low‑stakes tooling or prototypes; skeptics call it a “bot narc” that gamifies blame. Verdict: transparency vs vibes, and nobody’s backing down.

Key Points

  • The article presents a workflow for displaying AI-authorship annotations in GitHub pull requests using git-ai and a companion browser plugin.
  • git-ai tracks per-line AI-generated code, the model used, and prompts, storing metadata in git notes that persist through merges, rebases, and other operations.
  • The tool aims to provide vendor-agnostic, end-to-end attribution from code generation to merged PR without slowing development (implemented in Rust, tested on Chromium).
  • The author proposes using an “AI percentage” per PR as contextual guidance to inform policies and reviewer trust, not as a strict rule.
  • AI-generated code may be suitable for isolated or less critical areas (e.g., tooling, private betas, proofs of concept) when its provenance is traceable.
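The article doesn't spell out git-ai's actual note format, but the git notes mechanism it builds on is standard git. A hypothetical sketch of attaching per-line attribution metadata under a dedicated notes ref; the ref name and JSON schema here are invented for illustration only:

```shell
# Throwaway repo with a placeholder identity
git init -q demo2 && cd demo2
git config user.email "dev@example.com"
git config user.name "Example Dev"
echo "fn main() {}" > main.rs && git add main.rs
git commit -q -m "Add entry point"

# Attach attribution metadata to the commit under a dedicated notes ref.
# (Ref name "ai-attribution" and the JSON fields are made up; git-ai's
# real schema may differ.)
git notes --ref=ai-attribution add -m \
  '{"lines": [1], "model": "example-model", "prompt": "write a main function"}'

# Read the metadata back for the commit
git notes --ref=ai-attribution show HEAD
```

Because notes live in their own ref, they can be pushed and fetched alongside the code without touching the commits themselves, which is what lets this kind of metadata survive merges and rebases.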

Hottest takes

"give them their own account id / email so we can use standard git blame tools" — verdverm
"Why not just look at the code and see if it's good or not?" — nilespotter
"A computer can never be held accountable, therefore a computer must never make a management decision." — operator-name
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.