Stop using natural language interfaces

Chatbots are slow? Dev pushes popups while commenters spark a GUI vs AI brawl

TLDR: The author argues chat-only interfaces are slow and showcases [popup-mcp](url), a tool that lets AI build quick popups to speed decisions. Comments split: some cheer the “LLM-first OS” vision, others say local chatbots are already fast, and critics call the mashup worse than both, a debate over how future AI tools should be built.

An opinionated dev declared: stop relying on pure chat with AI—it's slow—and dropped popup-mcp, a tool that lets the AI spin up on-the-fly popups with checkboxes, sliders, and drop-downs to speed things up. The pitch: natural language (plain text chat) has lag, but structured buttons are fast. Think Minority Report meets survey wizard. Fans went wide-eyed: kami23 called it “an LLM-first OS” and loved how the UI anticipates choices and lowers brain strain, plus the built-in “Other” option that lets users correct the bot’s assumptions.

Then the pushback hit. rurban rolled in with a flex: “There is no latency, because the inference is done locally”, arguing chatbots are already faster and easier than hunting through menus. legostormtroopr dunked even harder: mixing GUIs with chat “is worse than both”, since the hybrid gives up the GUI’s predictability and learnability without gaining anything over chat’s flexibility. SoftTalker dropped the one-liner: “Author should take his own advice.” Cue memes: “GUI vs AI cage match,” “bank teller speedrun,” and jokes about choose-your-own-adventure interview trees peeking into the bot’s “mind.” The thread turned into a lively tug-of-war: do we want faster clicks or smarter chats? The crowd split between futuristic swipes and old-school precision, and the one point of agreement was that latency decides who wins the next UI round.

Key Points

  • LLM inference is slow (often tens of seconds), making natural language interactions high-latency compared to GUIs.
  • Popup-mcp is a local tool built on MCP (the Model Context Protocol) that lets LLMs generate on-the-fly popups with structured GUI elements.
  • The tool supports conditional visibility rules, enabling context-specific follow-up questions and branching dialogues.
  • Multiselects and drop-downs automatically include an 'Other' option that reveals a text box for user input.
  • Popup-mcp communicates over stdio and must run on the same machine as the LLM client, aiming to combine GUI speed with chat’s semantic flexibility (see the sketch after this list).
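
For the curious, here is a minimal sketch of what a popup-serving MCP tool could look like, written against the official `mcp` Python SDK. The tool name `show_popup`, its parameters, and the tkinter rendering are illustrative assumptions rather than popup-mcp’s actual API; only the stdio transport, the same-machine requirement, and the “Other” escape hatch come from the project’s description above.

```python
# Hypothetical sketch of a popup-rendering MCP server, using the official
# `mcp` Python SDK. The tool name, parameters, and tkinter UI are
# illustrative assumptions; popup-mcp's real schema may differ.
import tkinter as tk

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("popup-demo")


@mcp.tool()
def show_popup(title: str, options: list[str], allow_other: bool = True) -> list[str]:
    """Render a native checkbox popup and return the user's selections.

    Because the transport below is stdio, the question must be a real GUI
    window rather than a console prompt: stdin/stdout belong to the MCP
    protocol, which is also why the server has to run on the same machine
    as the user. (Caveat: tkinter generally wants the main thread; a
    production server would hand rendering off to a dedicated GUI thread.)
    """
    root = tk.Tk()
    root.title(title)

    vars_by_option: dict[str, tk.BooleanVar] = {}
    for option in options:
        var = tk.BooleanVar(value=False)
        tk.Checkbutton(root, text=option, variable=var).pack(anchor="w")
        vars_by_option[option] = var

    # The "Other" affordance the thread praised: a free-text field that
    # lets the user correct the model's assumptions.
    other_entry = None
    if allow_other:
        tk.Label(root, text="Other:").pack(anchor="w")
        other_entry = tk.Entry(root)
        other_entry.pack(fill="x")

    selections: list[str] = []

    def submit() -> None:
        selections.extend(o for o, v in vars_by_option.items() if v.get())
        if other_entry is not None and other_entry.get().strip():
            selections.append("other: " + other_entry.get().strip())
        root.destroy()

    tk.Button(root, text="OK", command=submit).pack()
    root.mainloop()  # Blocks until the user submits, then returns choices.
    return selections


if __name__ == "__main__":
    mcp.run()  # stdio transport by default, matching popup-mcp's setup.
```

The conditional visibility rules from the list above would layer on top of a sketch like this: each widget carries a predicate over its siblings’ values, re-evaluated on every change, which is what enables branching follow-up questions without a new round-trip to the model.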

Hottest takes

"Love this, this is what I have been envisioning as a LLM first OS!" — kami23
"There is no latency, because the inference is done locally" — rurban
"this is actually worse that both GUIs and LLMs combined" — legostormtroopr
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.