March 3, 2026

YOLO merge meets Heartbleed vibes

When AI Writes the Software, Who Verifies It?

Coders split: ‘robot reviewers’ vs ‘disaster incoming’

TLDR: AI is cranking out massive amounts of code, but formal checks haven’t kept up. Commenters are split between calls for stricter human verification, warnings of looming disasters and “workslop,” and bold predictions that robot reviewers will soon pass compliance, raising the stakes for security and trust in critical systems.

AI is rewriting the world’s code: a startup just nabbed $125M to overhaul defense software, Big Tech says a quarter of new code is machine-made, and Anthropic cranked out a 100,000‑line C compiler in two weeks. Impressive, yes—but who’s checking the homework? Even insiders admit reviews are getting lax.

The comments lit up. rademaker waved the flag for mathematician-style rigor: verification must grow as fast as generation. righthand dropped the meme of the day, calling AI coding “an intoxicating copyright-abuse slot machine” and lamenting devs who don’t read their own commits. acedTrex warned we’ll only slow down after “very painful, high‑profile failures.” Meanwhile, oakpond argued human code review matters more than ever, and foolfoolz predicted corporate checklists will soon accept “AI code review” as compliant.

Cue the gallows humor: Karpathy’s “Accept All” became the community’s YOLO merge punchline, while old scars like Heartbleed got name‑checked as a reminder that one bug can wreck millions. HBR’s “workslop” made the rounds: fancy-looking output that someone else must fix. The vibe? Awe at the speed, fear of the fallout. Until formal rules define what “correct” means, commenters think we’re racing supercars without seatbelts, and the guardrail is your bank, hospital, and power grid.

Key Points

  • Major firms report 25–30% of new code is AI-generated, and Microsoft’s CTO predicts 95% by 2030.
  • AWS used AI to modernize 40 million lines of COBOL for Toyota; Code Metal raised $125M to rewrite defense software with AI.
  • Anthropic built a 100,000-line C compiler in two weeks for under $20,000 that boots Linux and compiles multiple major systems.
  • The article states nearly half of AI-generated code fails basic security tests, and larger models aren’t markedly more secure.
  • The piece argues traditional review/testing are inadequate at AI scale, calling for formal specifications to mitigate systemic risk and supply-chain threats.

Hottest takes

“It’s such an intoxicating copyright-abuse slot machine” — righthand
“its going to take a few very painful and high profile failures” — acedTrex
“there will be a point soon when an ai code review meets your compliance requirements” — foolfoolz
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.