Some uncomfortable truths about AI coding agents

Dev bans AI from production code — commenters yell “evolve!” “fear!” and “show data”

TLDR: An engineer says AI code-writing bots shouldn’t touch production software, citing skill loss, hacks, and licensing risks. Comments explode: some demand evidence, others say lost skills don’t matter if bots work, and security pros warn about real attack risks — a hot debate as big companies push AI into coding.

A longtime AI-watcher just dropped a spicy take: he says AI coding agents — bots that write code with chat-style AI — are powerful but should never touch his professional, real-world apps. He cites four fears: skills will fade, cheap bot labor warps incentives, “prompt injection” hacks, and copyright messes. Meanwhile, headlines from big names like Notion, Spotify, and Stripe cheer on robot coders — and the crowd went feral.

One camp says the author’s doom is overblown. abletonlive shrugs off “skill atrophy,” arguing if the tool makes a skill unnecessary, that’s progress. Another faction demands receipts: polotics calls the post “your usual vague talking points,” begging for real data, not vibes. Security folks barge in with a thunderclap: dijksterhuis warns the only sure fix to prompt injection is ditching machine learning entirely — not exactly a small ask. And the pragmatists? ineedasername thinks the only real issue raised was the obvious one: don’t let your bot blindly roam the web.

Humor flew fast: memes about devs becoming “AI babysitters,” jokes about “promoted to code reviewer, pay not included,” and nitpicks about calling this “uncomfortable truths” when it’s more “common sense with drama.” Bottom line: the post lit a match in a room full of robot interns — and everyone brought popcorn.

Key Points

  • The author argues LLM-based AI coding agents should not be used to generate production code in his professional work.
  • He acknowledges AI coding agents are powerful and LLMs have uses, but cautions against trusting LLM outputs.
  • He cites four issues behind his stance: skill atrophy, artificially low cost, prompt injection risks, and copyright/licensing concerns.
  • The article notes industry interest and adoption of agentic coding by companies like Notion, Spotify, and Stripe.
  • The author begins detailing “skill atrophy,” describing a shift from coding to supervising agents as a risk for software engineering practice.

Hottest takes

"If LLMs are so good that you no longer have use for the skill, why do we care about skill atrophy?" — abletonlive
"your usual vague talking points." — polotics
"yeah there is only one surefire 100% fix for “prompt injection”: use deterministic solutions ie not machine learning." — dijksterhuis
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.