Show HN: Mcp2cli – One CLI for every API, 96-99% fewer tokens than native MCP

Cheaper AI chats or “generated slop”? Devs split over the one-command tool

TLDR: A new tool turns APIs into simple commands and claims to cut AI chat "token" costs by up to 99%. Commenters are split: some love the cheaper, simpler setup; others demand proof it performs just as well, slam the AI-written docs, and ask how command discovery is supposed to work.

Meet mcp2cli, the "one command to run any API" pitch that claims 96–99% fewer tokens (the "words" your AI has to read) on every chat turn. Translation: cheaper, faster conversations, achieved by skipping the giant tool-schema blocks normally injected into the model's context. There's even a quirky "TOON" output mode that compresses data for models, and the CLI is produced at runtime with no code-generation step. Sounds slick, right?
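To see why skipping repeated schema injection could plausibly land in that 96–99% range, here is a back-of-the-envelope sketch. Every number below is an assumption chosen for illustration, not a measurement from mcp2cli:

```python
# Illustrative token arithmetic for the savings claim.
# All constants are assumed values, not benchmarks.
SCHEMA_TOKENS_PER_TOOL = 300  # a typical JSON tool schema
TOOLS = 40                    # tools exposed by an MCP server
HELP_TOKENS = 250             # one compact CLI help blurb instead
TURNS = 20                    # turns in a conversation

# Native MCP: every tool schema rides along in context each turn.
native = SCHEMA_TOKENS_PER_TOOL * TOOLS * TURNS

# CLI approach: only the short help text repeats each turn.
cli = HELP_TOKENS * TURNS

savings = 1 - cli / native
print(f"assumed savings: {savings:.0%}")  # ~98% with these numbers
```

Change the assumed constants and the ratio moves, but the shape of the argument holds: per-turn overhead scales with one help string instead of the full tool catalog.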

The comments lit up. Fans cheered the simplicity and mash-up potential. One early supporter crowed, "Someone had to do it," picturing smart assistants composing commands like LEGO. Another linked Anthropic's post about context bloat and showed off their own DeepWiki tool, dw2md. The vibe: finally, a practical fix to the "my AI spends all its allowance reading manuals" problem.

But the pushback is spicy. Skeptics demanded proof that saving tokens doesn’t break performance—“cheaper” means nothing if the AI gets dumber. The sharpest jab accused the write‑up of being “obviously generated slop,” vowing to skip the project entirely until a human pens the docs. Meanwhile, pragmatists asked the big question: How does an AI discover new commands if they aren’t prelisted?

Memes flew—“one CLI to rule them all,” “Saturday morning TOON savings,” and “Bash better than me.” Verdict: the idea’s hot, but the crowd wants benchmarks, human‑sounding docs, and a clear story on command discovery before they crown a new king.

Key Points

  • mcp2cli converts any MCP server or OpenAPI spec into a runtime-generated CLI without code generation.
  • The tool claims 96–99% token savings versus native MCP by avoiding repeated tool schema injection in LLM contexts.
  • Supports MCP over HTTP/SSE and stdio, with features for listing tools, invoking commands, authentication, and environment variables.
  • OpenAPI mode enables listing commands and calling endpoints from JSON/YAML specs, with base URL overrides, auth headers, and stdin JSON bodies.
  • Provides output controls (pretty, raw, pipe-friendly) and a TOON encoding mode for LLM efficiency, plus configurable caching with 1-hour default TTL.
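The idea behind TOON-style encoding is that uniform arrays of records waste tokens repeating the same keys in every object. A simplified, illustrative encoder for flat, uniform records (this is a sketch of the idea, not the real TOON format or mcp2cli's implementation):

```python
import json

def toonish(key, rows):
    """Simplified TOON-style encoding for a uniform list of flat
    dicts: one header naming the fields once, then one compact
    comma-separated line per record. Illustration only."""
    fields = list(rows[0])
    lines = [f"{key}[{len(rows)}]{{{','.join(fields)}}}:"]
    for r in rows:
        lines.append("  " + ",".join(str(r[f]) for f in fields))
    return "\n".join(lines)

rows = [{"id": i, "name": f"user{i}", "active": True} for i in range(50)]
as_json = json.dumps({"users": rows})
as_toon = toonish("users", rows)
# Dropping the repeated keys shrinks the payload substantially.
print(f"JSON: {len(as_json)} chars, TOON-ish: {len(as_toon)} chars")
```

The saving grows with the number of rows, since the field names are paid for once instead of per record.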

Hottest takes

"Tokens saved should not be your north star metric" — stephantul
"obviously generated slop" — liminal-dev
"Someone had to do it. mcp in bash would make them composable" — philipp-gayret
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.