Show HN: Why write code if the LLM can just do the thing? (web app experiment)

It works, kinda — slow, pricey, and the comments are on fire

TLDR: One dev built a working contact manager with zero app code, letting an AI handle everything via three tools. The crowd is split: builders want persistent, reusable tools to fix speed and cost, while skeptics say it’s wasteful, insecure, and maybe impossible—raising big questions about how software gets made.

Hacker News threw popcorn at the screen today as a dev dropped “nokode,” a wild web app where an AI runs the whole show—no app code, just three tools—and it somehow… worked. Yes, forms submitted and data stuck around. But it was painfully slow (half a minute per click), weirdly expensive, and the design had goldfish memory. Cue the comment riot.

Optimists cheered the chaos-as-a-feature: one user pitched, “let the AI create persistent tools” to speed it up, while another said they’d already tried letting the AI write and reuse tools, warning that database changes are the real boss fight. Skeptics came in hot: a purist argued it’s “wasteful” to use a guessing machine for precise tasks, and a doomer declared, “it can’t, and may never be able to.” Security hawks waved red flags: if every request asks the AI what to do, what could possibly go wrong? (Answer: probably everything.)

The memes wrote themselves: “No code, just vibes,” “move fast and break endpoints,” and “dial‑up for robots.” The builder claims the capability is real and the bottlenecks are just speed, cost, and consistency—fixable over time. The Hacker News thread can’t decide if this is the future of software or the funniest demo reel of 2025.

Key Points

  • An LLM-driven web server (“nokode”) handles all request logic using three tools: database (SQLite/SQL), webResponse, and updateMemory (a rough sketch of this dispatch loop follows the list).
  • The system implements a CRUD contact manager, inferring responses from HTTP paths (HTML for pages, JSON for API endpoints) and applying user feedback persisted in Markdown.
  • Performance was poor: 30–60 seconds per request, $0.01–$0.05 per call, with 75–85% of time spent reasoning; hallucinated SQL caused 500 errors and UI consistency issues.
  • Despite drawbacks, the app functioned: forms submitted, data persisted across restarts, APIs returned valid JSON, and the LLM produced schemas, safe parameterized SQL, REST-style APIs, Bootstrap layouts, validation, and error handling.
  • The article concludes the capability exists, but the key obstacles are speed, cost, design-memory consistency, and reliability; the author expects these to improve over time.
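
To make the Key Points concrete, here is a minimal sketch of what a dispatch loop like this could look like. It is not the author’s code: `call_llm` is a hypothetical stand-in for whatever model API nokode actually uses, and the only details borrowed from the article are the three tool names, the SQLite database, and the Markdown memory file.

```python
# Minimal sketch of an "nokode"-style server: every HTTP request is handed to
# an LLM along with three tools, and whatever tool calls come back are executed.
# call_llm is a hypothetical placeholder, not a real provider API.

import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

DB = sqlite3.connect("contacts.db", check_same_thread=False)
MEMORY_PATH = "memory.md"  # user design feedback persisted as Markdown


def call_llm(context: list[dict]) -> dict:
    """Hypothetical model call. Expected to return one tool invocation, e.g.
    {"tool": "database", "sql": "SELECT * FROM contacts", "params": []}
    {"tool": "updateMemory", "text": "User prefers a dark Bootstrap navbar"}
    {"tool": "webResponse", "status": 200, "contentType": "text/html", "body": "..."}
    """
    raise NotImplementedError("wire this to your LLM provider")


class NokodeHandler(BaseHTTPRequestHandler):
    def handle_request(self, body: str = "") -> None:
        try:
            memory = open(MEMORY_PATH).read()
        except FileNotFoundError:
            memory = ""
        # The whole "app" is this conversation: request plus memory in, tool calls out.
        context = [{"role": "user", "request": {
            "method": self.command, "path": self.path, "body": body, "memory": memory,
        }}]
        while True:
            call = call_llm(context)
            if call["tool"] == "database":
                # Parameterized SQL, as the article says the LLM produced.
                rows = DB.execute(call["sql"], call.get("params", [])).fetchall()
                DB.commit()
                context.append({"role": "tool", "name": "database", "rows": rows})
            elif call["tool"] == "updateMemory":
                with open(MEMORY_PATH, "a") as f:
                    f.write(call["text"] + "\n")
                context.append({"role": "tool", "name": "updateMemory", "ok": True})
            elif call["tool"] == "webResponse":
                payload = call["body"].encode()
                self.send_response(call.get("status", 200))
                self.send_header("Content-Type", call.get("contentType", "text/html"))
                self.send_header("Content-Length", str(len(payload)))
                self.end_headers()
                self.wfile.write(payload)
                return

    def do_GET(self):
        self.handle_request()

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        self.handle_request(self.rfile.read(length).decode())


if __name__ == "__main__":
    HTTPServer(("localhost", 8000), NokodeHandler).serve_forever()
```

Everything a conventional framework would hard-code (routing, queries, templates) is re-derived by the model on every request, which is exactly why each click costs tens of seconds and a few cents, and why commenters keep suggesting the model should generate persistent tools instead.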

Hottest takes

"Maybe next step is have the llm create persistent tools" — th3o6a1d
"Because it can't, and may never be able to" — bigstrat2003
"Generating code will always be more performant and reliable than this" — brokensegue
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.