November 6, 2025
Terminal drama: speed vs privacy
Show HN: qqqa – a fast, stateless LLM-powered assistant for your shell
Fast shell AI drops; fans cheer speed, veterans shrug "already got one"
TLDR: qqqa brings a fast, stateless AI helper to the command line with “qq” for questions and “qa” for safe single actions. Commenters like the speed but argue about privacy and mostly say they already use other tools like Claude, GitHub Copilot, or Simon Willison’s llm.
qqqa landed like a tiny AI sidekick for your terminal: type qq for a quick answer, or qa for a one-and-done action with confirmation. It’s stateless (no chat memory), loves speed (defaults to Groq’s fast model), and even has a “no-emoji” mode for the serious crowd.
The comments instantly turned into a tool showdown. One user waved the llm flag and quipped, “I personally use ‘claude -p’,” while another flexed GitHub Copilot CLI: “I just do ghcs <question> and it gives me a command.” A third said they’re sticking with “opencode run,” complete with custom agents. Translation: the crowd loves the idea—just not enough to switch teams.
Then came privacy and capability questions. “Does it support multiple tool calls?” asked one skeptic, before dropping the spicy line: “Why is there a flag to not upload my terminal history and why is that the default?” Meanwhile, fans praised the Unix-y “do one thing well” vibe and the speed-first Groq profile, while cracking jokes that the “no-fun” flag is a personality test.
Verdict from the peanut gallery: neat, fast, safe—but expect a tug-of-war between speed chasers, privacy hawks, and die-hard loyalists to their existing shell helpers.
Key Points
- qqqa is a stateless CLI LLM assistant with two binaries: qq for single questions and qa for one-step, tool-assisted actions.
- The tool emphasizes safety: qq is read-only; qa can read/write files or execute one command per run with user confirmation and safety checks.
- Profiles for Groq and OpenAI are included; Groq with openai/gpt-oss-20b is the recommended default for faster, cheaper inference (~1000 tokens/sec).
- Installation is via prebuilt archives for macOS (Intel/Apple Silicon) and Linux (x86_64/ARM64); first run creates a config at ~/.qq/config.json.
- Configuration supports OpenAI-compatible providers, environment variables for API keys, streaming output, ANSI-color formatting, and runtime overrides for profile/model.
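The qq/qa split described above might look like this at the prompt. These invocations are a sketch based on the tool's description; the exact prompts shown are illustrative, not taken from the project's docs:

```shell
# qq: read-only, one question in, one answer out, no chat memory
qq "find all files over 100MB in this directory"

# qa: proposes a single tool-assisted action, then waits for confirmation
qa "compress the logs/ directory into logs.tar.gz"
```

The one-command-per-run design is what lets qa keep its confirmation step simple: there is exactly one proposed action to accept or reject.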
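Given that first run creates ~/.qq/config.json and profiles target OpenAI-compatible providers, the file plausibly looks something like the sketch below. All field names here (default_profile, profiles, base_url, api_key_env, stream) and the gpt-4o-mini model choice are assumptions for illustration; the actual schema may differ:

```json
{
  "default_profile": "groq",
  "profiles": {
    "groq": {
      "base_url": "https://api.groq.com/openai/v1",
      "model": "openai/gpt-oss-20b",
      "api_key_env": "GROQ_API_KEY",
      "stream": true
    },
    "openai": {
      "base_url": "https://api.openai.com/v1",
      "model": "gpt-4o-mini",
      "api_key_env": "OPENAI_API_KEY",
      "stream": true
    }
  }
}
```

Reading keys from environment variables rather than storing them in the file is the usual pattern for OpenAI-compatible clients, and matches the article's note that API keys come from the environment.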