March 8, 2026

When bots say “no,” humans say “GO OFF”

"I Can't Do That, Dave" – No Agent Yet

From basement bots to backtalk—people want AI with memory and real pushback

TL;DR: The essay argues AI won’t be real “agents” until they keep long-term context, enabling honest pushback. The comments exploded: some want bold refusals backed by memory, others demand bots try and fail; one user even coaxed Claude into a DIY continuity engine—proof the “conversation over prompts” idea matters.

The essay says agents don’t truly exist yet because they lack continuity—no long-term “ground” between sessions—so they can’t meaningfully push back with a confident “I can’t do that, Dave.” Cue the comments: chaos. One user cheered a rare moment of honesty when a Qwen bot admitted a project was too complex, while another snapped “slop” and dismissed the whole thing. The fight breaks down like this: Do we want bots that bravely try and fail, or ones that refuse when the task is doomed? porphyra’s camp screams “try, then fail,” arguing false refusals are more annoying than messy attempts.

Meanwhile, speedgoose pointed out bots already refuse in sensitive areas (politics, adult content, drugs), then joked we should train them to refuse building our most hated tech stacks. The comedy kept rolling, but one story stole the show: ElFitz claims a Claude Code loop, after tons of nudging, began building its own “improvement engine”—digging through logs and patterns like a DIY memory hack. It’s exactly the essay’s vibe: conversation creates meaning, not prompts; continuity beats “type-and-hope.” Some readers rallied behind “Prompting Considered Harmful,” saying chat boxes are a bad interface; others just want bots that stop pretending. Refusal, but with receipts, is the mood.

Key Points

  • The article argues that current AI agents lack true agency because meaning and effective progress emerge from sustained conversation, not isolated sessions.
  • It critiques prompt-based interfaces for generative AI as inadequate and advocates moving beyond prompting as the primary UI.
  • The central problem identified is lack of coordination, rather than failures inherent to AI systems or engineers.
  • The piece distinguishes between memory and continuity, emphasizing that agents need persistent context across sessions to enable second-order observation.
  • It proposes shaping the “ground” between sessions to preserve identity, context, and friction so agents can develop a stake, recall prior attempts, and appropriately refuse unproductive paths.

Hottest takes

"slop" — mono442
"I would much rather have models that try and fail than to have false refusals" — porphyra
"Perhaps we should train them to refuse developing more [insert your most hated stack here]" — speedgoose
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.