March 15, 2026
Humans vs. Bots: Round 3
"How I write software with LLMs"
Dev claims “I don’t read most of the code” and folks lose it
TLDR: An engineer says chatbots now write most of his code with fewer mistakes, while he focuses on the big-picture design. The comments explode into believers vs. skeptics, with some demanding receipts, others fearing humans will be sidelined next, and a side plot about the post getting mysteriously buried.
A maker dropped a spicy confession in this post: large language models (think supercharged chatbots) now write most of his software, and he swears the results have fewer bugs than his own code. He says humans should focus on the “blueprints” (the architecture) while the bot handles the hammering, and even flaunts a security‑minded assistant called “Stavrobot.”
Cue the comment chaos. Fans like christofosho are taking notes and asking why he didn’t split his mega‑bot into mini‑bots for different jobs—like a kitchen crew instead of one chef. The skeptics came armed: plastic041 calls out the paradox of being “intimately familiar” with systems when you “have never even read most of their code,” punctuating it with a mic‑drop “Because obviously, you can’t.” Meanwhile, silisili delivers the existential gut‑punch: if the bot soon does the blueprints too, “then what is our use?”
Pragmatists like jumploops are vibes‑checking both sides, saying they have the bot write every plan, open questions included, into markdown files—aka receipts—so you don’t lose the plot. And adding to the drama, indigodaddy claims the post hit the front page and then got buried. Algorithm shenanigans or just Tuesday? Either way, the mood swings between “AI intern, human manager” memes and doom‑scrolling about being automated out of a job. The only thing everyone agrees on: this is getting real, fast.
Key Points
- The author details a workflow for writing software with LLMs and includes an annotated coding session.
- They report significantly lower defect rates since models like Codex 5.2 and Opus 4.6, sustaining projects with tens of thousands of lines of code.
- Human oversight has shifted from line-level review to architecture-level review, though coding expertise is still required.
- Outcomes depend on domain familiarity: stable in known areas (e.g., backend), deteriorating in unfamiliar ones (e.g., mobile).
- A major project, Stavrobot, is a security-focused LLM personal assistant positioned as an alternative to OpenClaw.