March 16, 2026
Specs, tests, or Rust—choose your fighter
Where does engineering go? Retreat findings and insights [pdf]
If AI writes the code, who keeps it honest? HN splits between specs, tools, and “just use Rust”
TLDR: A private retreat says quality work will shift from typing code to nailing plans, tests, and risk checks as AI takes over more of the keyboard work. Commenters split: some applaud the roadmap, others demand real tools, and one loud camp chants "just use Rust," underscoring how unsettled—and urgent—this transition feels.
A hush‑hush industry retreat dropped a spicy [PDF] mapping the future of software work, and the comment section lit up. The big line everyone kept repeating: if AI handles the code, where does the real engineering go? Some readers cheered the report’s answer—move the discipline upstream to clear plans, tests, and risk checks—while others demanded receipts.
On one side, boosters like bmurphy1976 waved it back onto center stage, arguing the Thoughtworks pedigree gives it weight. On another, skeptics like NeutralForest grumbled, “cool ideas, but show us tools we can actually use.” And then there’s the chaos corner: echelon turned the thread into a stadium chant—“Rust! Rust! Rust!”—claiming the safety‑obsessed language could make bad code basically unthinkable.
The doc itself throws big ideas at the wall: split traditional code review into four jobs (mentoring newbies, keeping style consistent, catching bugs, building trust), and deal with "cognitive debt"—when systems change faster than people can understand them. Commenter zer00eyz rang the doomsday bell on that point: if software mutates beyond human comprehension, how do teams remember anything?
Consensus? None—and that’s the point. The community vibe swings from “promote this now” to “call me when there’s a playbook,” with a loud Rust chorus in the background. Urgent questions, zero easy answers, maximum drama.
Key Points
- A multi-day retreat of senior engineers synthesized cross-cutting themes on how AI is reshaping software engineering, under the Chatham House Rule.
- Engineering rigor is shifting from code to specifications, tests, constraints, and risk management as AI generates more code.
- Code review is being unbundled into mentorship, consistency, correctness, and trust, each requiring new processes or tools.
- Security for AI agents is underdeveloped, with risks including account takeover through basic email access.
- New patterns include a supervisory "middle loop," rising cognitive debt, agent-aware architectures, revived use of knowledge graphs/semantic layers, evolving roles, and prerequisites for self-healing systems.