Show HN: A MitM proxy to see what your LLM tools are sending

Spy on your AI to dodge bill shock — cheers, jeers, and “just a wrapper”

TLDR: Sherlock sits between your apps and AI services so you can see exactly what's sent and how much you're paying. Commenters split among enthusiasm for cost-cutting and governance, setup confusion, and skeptics yelling "just a wrapper," with extra snark for copycat SaaS. Transparency meets drama.

Hackers dropped “Sherlock,” a middleman app that lets you watch what your AI tools send and how much they cost in real time—and the crowd went full popcorn. Fans love the flashy token “fuel gauge,” the auto-saved prompts, and the promise of no code changes. One power user cheered that it helps spot wasteful AI behavior and keep budgets sane, while another asked the boardroom question: where are the enterprise-wide tools for this? That kicked off a governance vs. DIY debate: give everyone AI superpowers, or bring back guardrails?

Skeptics fired back with a classic tech-check: “Isn’t this just a wrapper around mitmproxy?” Translation: cool dashboard, but what’s the real innovation? Meanwhile, a setup question—“Do I need to tweak Claude Code to make this work?”—became the newbie moment, clashing with the project’s “zero code changes” pitch.
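For the curious, the "zero code changes" claim usually comes down to environment variables: a MitM proxy can intercept any HTTPS client that honors the standard proxy and CA-trust settings, so the wrapped tool itself is never modified. A minimal sketch of that pattern, assuming mitmproxy's default port and CA certificate path (the variable names are standard conventions, not Sherlock's actual interface):

```python
import os
import subprocess

# Point a child process at a local MitM proxy without touching its code.
# 127.0.0.1:8080 is mitmproxy's default listen address, and the .pem path
# is where mitmproxy writes its generated CA certificate on first run.
ca_cert = os.path.expanduser("~/.mitmproxy/mitmproxy-ca-cert.pem")
proxy_env = {
    **os.environ,
    "HTTPS_PROXY": "http://127.0.0.1:8080",   # route HTTPS through the proxy
    "SSL_CERT_FILE": ca_cert,                  # trust the proxy's CA (OpenSSL/Python)
    "NODE_EXTRA_CA_CERTS": ca_cert,            # trust it from Node.js tools too
}

# e.g. subprocess.run(["claude"], env=proxy_env) would launch Claude Code
# (or any arbitrary command) with its traffic flowing through the proxy.
```

Whether a given tool respects `HTTPS_PROXY` is up to its HTTP stack, which is likely what the setup question was really probing.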

Then came the spicy meme energy: one commenter shaded SaaS copycats, joking that corporate features are now weekend “vibe prompts,” with a wink at Tailscale’s latest feature. The vibe: indie tool good, overpriced clones beware. So yes, Sherlock shows you what your AI is up to—but the real show is devs arguing over whether we’ve built Big Brother for bots or just a slick UI on an old trick. Either way, wallets are listening.

Key Points

  • Sherlock is a transparent MitM proxy for LLM APIs that displays real-time token usage in a terminal dashboard.
  • It requires Python 3.10+ and Node.js, generates a mitmproxy CA certificate on first run, and saves intercepted prompts.
  • Features include a context fuel gauge with color-coded thresholds, request logs, and automatic prompt archiving in Markdown and JSON.
  • Commands support running Claude Code and proxying arbitrary commands, with options for port, token limit, persistence, and certificate checks.
  • Anthropic is currently supported, with OpenAI and Google Gemini listed as coming soon.
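To make the "wrapper around mitmproxy" debate concrete, here is a minimal sketch of the core mechanism such a tool likely relies on: a mitmproxy addon script that pulls token counts out of intercepted Anthropic responses (the Messages API reports a `usage` object with `input_tokens` and `output_tokens`). The host constant and output format are illustrative assumptions, not Sherlock's actual code:

```python
import json

ANTHROPIC_HOST = "api.anthropic.com"  # assumed upstream host to watch


def extract_usage(body: dict) -> dict:
    """Read token counts from an Anthropic Messages API response body.

    The API reports {"usage": {"input_tokens": N, "output_tokens": M}};
    anything else yields zeros.
    """
    usage = body.get("usage", {})
    return {
        "input_tokens": int(usage.get("input_tokens", 0)),
        "output_tokens": int(usage.get("output_tokens", 0)),
    }


def response(flow):
    """mitmproxy addon hook, called once per completed HTTP response."""
    if ANTHROPIC_HOST not in flow.request.pretty_host:
        return
    try:
        body = json.loads(flow.response.get_text())
    except (ValueError, TypeError):
        return  # not JSON (e.g. a streaming chunk); skip it
    usage = extract_usage(body)
    print(f"{flow.request.path}: "
          f"in={usage['input_tokens']} out={usage['output_tokens']}")
```

Run with `mitmdump -s token_log.py` and point a client at the proxy; the dashboard, fuel gauge, and prompt archiving are then presentation layers on top of hooks like this, which is roughly the axis the "just a wrapper" argument turns on.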

Hottest takes

“I’m surprised that there isn’t a stronger demand for enterprise-wide tools like this.” — david_shaw
“So is it just a wrapper around MitM Proxy?” — mrbluecoat
“I guess your SaaS really is someones weekend vibe prompt.” — FEELmyAGI
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.