One agent isn't enough

Fans want many bots, skeptics want a plan — the community is split

TLDR: Redmond argues you should run multiple AI coders in parallel and let a “synthesizer” choose the winner. Commenters split between loving the crowd approach and demanding repeatable workflows, with cost and consistency turning the thread into a showdown over brute force versus better setup.

Ben Redmond’s new post says one AI “agent” is too random: run several at once and let a synthesizer pick the best, like a talent show for robots. Think rolling five dice instead of one. His Part 1 framed this as steering the response “probability” — now he’s doubling down with parallel runs to find the real peak. Cue the comments: some cheer the idea as crowd coding (“more brains, fewer duds”), while others clutch their wallets and ask if this is just burning tokens to brute-force your way to a win.
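The dice analogy is easy to check numerically. This is a minimal sketch (not from the post) where one die roll stands in for one agent run of varying quality, and "best of five" stands in for picking the winner from five parallel runs:

```python
import random

random.seed(0)  # fixed seed so the demo is reproducible

def roll():
    """One die roll stands in for one agent run of varying quality."""
    return random.randint(1, 6)

TRIALS = 10_000

# Average quality of a single run vs. the best of five parallel runs.
mean_single = sum(roll() for _ in range(TRIALS)) / TRIALS
mean_best5 = sum(max(roll() for _ in range(5)) for _ in range(TRIALS)) / TRIALS

print(f"one roll:     {mean_single:.2f}")  # ~3.5 on average
print(f"best of five: {mean_best5:.2f}")   # ~5.4 on average
```

Same dice, better expected outcome: sampling five times and keeping the max shifts the average from about 3.5 to about 5.4, which is the whole pitch, and also the whole cost complaint, since you paid for five rolls.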

The hottest pushback comes from repeatability purists. User yawnxyz drops the mic: if you want consistent outcomes, shouldn’t you build a repeatable workflow instead of spawning five mini-bots and hoping? Skeptics call it “A/B testing for code” and want guardrails, not gambles. Meanwhile, meme lords compare it to hiring five interns and one manager to pick their best homework, and D&D fans love the dice-roll analogy. There’s real drama: efficiency vs. exploration, smart setup vs. shotgun sampling. Supporters say parallel agents escape “good-enough” ruts; critics say great prompts and tools should make one agent reliable. And everyone’s asking: who’s paying for all those tokens — and is the synth just a very expensive boss?

Key Points

  • LLM stochasticity creates variance, so single-agent runs may yield suboptimal solutions.
  • Context engineering improves average response quality but does not solve the exploration problem.
  • Parallel agents provide multiple independent samples, exploring diverse solution peaks instead of settling on a single local optimum.
  • Validation through repetition and clean contexts helps identify reliable local maxima; a synthesizer then selects the best result.
  • The author uses Claude Code with an orchestrator pattern to support subagents and parallel convergence workflows.
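The fan-out-then-synthesize shape of the workflow above can be sketched in a few lines. This is an illustrative skeleton, not Redmond's actual setup: `run_agent` and `score` are hypothetical stand-ins for a Claude Code subagent call and a synthesizer/judge model.

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str, seed: int) -> str:
    # Hypothetical stand-in: a real orchestrator would launch an LLM
    # subagent here with a clean, independent context per run.
    return f"solution-{seed} for {task!r}"

def score(candidate: str) -> float:
    # Placeholder scoring: a real synthesizer would judge candidate
    # quality (tests passing, rubric score, model-as-judge, etc.).
    return float(len(candidate))

def parallel_converge(task: str, n: int = 5) -> str:
    # Fan out n independent agent runs in parallel...
    with ThreadPoolExecutor(max_workers=n) as pool:
        candidates = list(pool.map(lambda s: run_agent(task, s), range(n)))
    # ...then let the synthesizer pick the best sample.
    return max(candidates, key=score)

print(parallel_converge("fix the flaky test"))
```

The design point the commenters are arguing over lives in `score`: if you can write a reliable scoring function, the skeptics say, you could fold it into a repeatable single-agent workflow instead of paying for n runs.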

Hottest takes

"isn't it better for them to eventually build a repeatable workflow?" — yawnxyz
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.