Show HN: Running local OpenClaw together with remote agents in an open network

Local brain, cloud muscle: devs cheer while skeptics ask for receipts

TL;DR: Hybro Hub links local AI running on your own computer with cloud agents in one portal, promising privacy when you want it and power when you need it. Commenters praise the clever networking, then feud over missing “inside the box” visibility and dream about borrowing remote GPUs without going full remote setup.

Hybro Hub just dropped a crowd‑pleaser: a tiny background app that lets you run AI on your own computer and mix it with cloud bots in the same portal, hybro.ai. Fans love the “no compromise” pitch—keep private stuff local, tap cloud heavyweights when you need extra power—and the setup looks dead simple. The big applause line? The hub only makes outbound connections, so it works behind home routers and firewalls without opening a single port.

Cue the commentariat. One early voice called the outbound‑only trick “elegant,” then immediately slammed the brakes to ask for observability: can users see what actually happens inside a chat—reasoning steps, tool calls, token counts? That sparked a split between privacy purists who just want a green “Local” badge and transparency hawks who want receipts, dashboards, and a black box turned inside out. Jokes flew about “local badge or it didn’t happen,” as folks debated whether the promise of privacy is enough without deep logs.

Another hot thread: GPUs. A commenter pushed for a best‑of‑both‑worlds workflow where devs work locally but borrow remote graphics cards on demand—no moving your life into a remote machine. That got people dreaming of “laptop brain, cloud brawn” as the killer use case. Bottom line: everyone loves the bridge, but they’re fighting over the guard rails.

Key Points

  • Hybro Hub connects local AI agents to the hybro.ai portal so users can run local and cloud agents side by side with per-conversation routing.
  • The hub operates as an outbound-only background daemon (no inbound ports), using HTTPS and an A2A protocol for interoperability and NAT-friendly connectivity.
  • Quick start includes pip installation, API key generation on hybro.ai, starting the daemon, and launching a local Ollama-based agent (e.g., llama3.2).
  • The portal displays both cloud and local agents, with per-message indicators showing where processing occurred; local processing keeps data on the user’s machine.
  • Privacy features include sensitivity detection (PII and custom keywords) for outbound messages to cloud agents and graceful degradation that queues messages when the hub is offline.
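The outbound-only design in the second key point means the hub never listens for inbound traffic; it dials out to the portal and polls for work, which is why it passes through NAT and firewalls unmodified. A minimal sketch of that pattern in Python, with a hypothetical endpoint and injected transport (the real A2A-over-HTTPS wire protocol is not documented in the post):

```python
import json
from urllib import request

PORTAL = "https://hybro.ai/api"  # hypothetical endpoint, not from the post


def poll_once(fetch, api_key):
    """One outbound poll: ask the portal for pending tasks.

    `fetch` is injected so the HTTPS transport can be swapped or faked.
    """
    req = request.Request(
        f"{PORTAL}/agent/poll",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    body = fetch(req)  # outbound HTTPS only -- no inbound port is ever opened
    return json.loads(body).get("tasks", [])


def run_daemon(fetch, api_key, handle, *, max_iterations=None):
    """Outbound-only loop: poll, hand each task to the local agent, repeat."""
    n = 0
    while max_iterations is None or n < max_iterations:
        for task in poll_once(fetch, api_key):
            handle(task)  # e.g. forward to a local Ollama-backed agent
        n += 1
```

In production the loop would long-poll or hold a persistent stream rather than spin, but the connectivity story is the same: every socket is opened from the inside out.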
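The sensitivity detection described in the last key point—scanning outbound messages for PII and custom keywords before they reach a cloud agent—can be sketched as a simple pre-flight check. The patterns below are illustrative assumptions; the post does not say which PII classes Hybro Hub actually detects:

```python
import re

# Hypothetical patterns -- the post doesn't specify what counts as PII.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US-SSN-shaped number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]


def is_sensitive(message, custom_keywords=()):
    """Flag a message that matches a PII pattern or a user-defined keyword."""
    lowered = message.lower()
    if any(kw.lower() in lowered for kw in custom_keywords):
        return True
    return any(p.search(message) for p in PII_PATTERNS)


def route(message, custom_keywords=()):
    """Per-conversation routing: sensitive messages stay on the local agent."""
    return "local" if is_sensitive(message, custom_keywords) else "cloud"
```

Routing on the result keeps the privacy promise mechanical: anything flagged never leaves the machine, everything else is eligible for the cloud heavyweights.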
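The same key point mentions graceful degradation: when the hub is offline, messages are queued rather than dropped. A minimal sketch of that behavior, assuming a `send` callable that raises `ConnectionError` while the portal is unreachable (the actual retry and persistence strategy is not described in the post):

```python
from collections import deque


class OfflineQueue:
    """Hold outbound messages while the hub is offline; flush in order later."""

    def __init__(self, send):
        self.send = send          # raises ConnectionError when unreachable
        self.pending = deque()

    def submit(self, message):
        self.pending.append(message)
        self.flush()

    def flush(self):
        while self.pending:
            try:
                self.send(self.pending[0])
            except ConnectionError:
                return            # still offline; keep messages queued in order
            self.pending.popleft()
```

Sending the head of the queue before popping it guarantees no message is lost if the connection drops mid-flush, at the cost of a possible duplicate send.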

Hottest takes

"Really clean architecture on the outbound-only relay — solving the NAT problem that way is elegant." — benjhiggins
"…do you get any visibility into what happened inside the session — like reasoning steps, tool calls, token usage per convo?" — benjhiggins
"…cross-environment execution where the agent / dev loop stays local, but GPU access lives elsewhere." — tensor-fusion
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.