Ask HN: Why is my Claude experience so bad? What am I doing wrong?

Community clapback: no one-prompt magic—make a plan, use bigger models, and test the results

TLDR: Users say Claude isn’t bad—you’re expecting magic. The thread insists on detailed plans, iterative testing, and the right model for the job, with builders sharing specs and prompts to prove careful guidance works; the real fight is over “autopilot” fantasies versus treating AI like a smart autocomplete that needs direction.

The internet asked, “Why is my Claude experience so bad?” and Hacker News answered with a collective eye-roll and a masterclass in tough love. Commenters say the real problem isn’t the bot—it’s the myth of one-prompt miracles. The crowd’s mood: stop vibe-coding, start planning. One builder, Leftium, came armed with receipts—linking a working transcription app and its detailed spec plus a (sadly truncated) chat log proving that weeks of back-and-forth beats a single “make me an app” wish.

The spiciest takes? A chorus of “AI isn’t an intern—it’s autocomplete, not autopilot.” Others piled on with “size matters,” starting a mini model war: some swear different AIs shine at different jobs (“Gemini for pretty screens, anyone?”), while veterans insist the real win is process—write a spec, break the work into steps, and make the bot check its own output. One commenter even linked a how-to video and a company example where engineers built login tools with Claude and published every prompt, like a public recipe book.
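For anyone who wants to turn that advice into something runnable, here is a minimal sketch of the spec-first, phase-by-phase loop the commenters describe: plan a phase, implement it, then have the model review its own output. It assumes the Anthropic Python SDK (pip install anthropic) and an ANTHROPIC_API_KEY in the environment; the model ID, spec file, phase names, and prompts are illustrative placeholders, not anything from the thread.

```python
# Sketch only: spec-first, phased workflow with a self-review pass.
# Assumes the Anthropic Python SDK; model ID, spec file, and prompts are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-sonnet-4-20250514"  # example model ID; use whichever larger model fits the job

SPEC = open("spec.md").read()  # the detailed spec, co-written and reviewed before any code
PHASES = [
    "Phase 1: data model and storage",
    "Phase 2: core transcription pipeline",
    "Phase 3: UI and error handling",
]

def ask(task: str) -> str:
    """One request/response round; returns the text of the first content block."""
    msg = client.messages.create(
        model=MODEL,
        max_tokens=4096,
        system="You are implementing the attached spec. Work only on the task you are given.",
        messages=[{"role": "user", "content": f"SPEC:\n{SPEC}\n\nTASK:\n{task}"}],
    )
    return msg.content[0].text

for phase in PHASES:
    plan = ask(f"Write a detailed implementation plan for: {phase}")
    # In the real workflow a human reviews and iterates on the plan here.
    code = ask(f"Implement this plan:\n{plan}")
    # "Make the bot check its own output": a separate review pass, not blind trust.
    review = ask(f"Review this implementation against the spec and list bugs or gaps:\n{code}")
    print(f"--- {phase} ---\n{review}\n")
```

The calls themselves are the easy part; the thread's point is that the human review and iteration between them is where the quality comes from.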

Between jokes about “vibes ≠ code” and mild roasting of vague prompts, the thread’s verdict is loud and clear: if you want magic, bring patience, details, and tests—or be ready for chatbot gobbledygook.

Key Points

  • The article says a one-shot request is unlikely to produce a complete tool with Claude unless the prompt is a very detailed specification.
  • An example project, Rift Transcription, was developed with Claude over more than 150 chat sessions.
  • The author co-wrote a spec with Claude, broke work into phases, reviewed detailed plans, and iteratively updated them before implementation.
  • Claude was used not only for planning and implementation but also to fix bugs post-implementation.
  • The article links to a video on common AI mistakes and cites Cloudflare’s published OAuth prompts as a real-world reference model.

Hottest takes

"It takes many months to figure this out, much longer than learning a new programming language." — verdverm
"If you expect it to _do_ things for you - you're setting yourself up for failure." — aristofun
"Potentially 'vibe-coding'..." — Leftium