Streaming compression beats framed compression

Robot dev’s 80% data win sparks “but packets?” fight

TL;DR: A robot team shared one compressor across messages and slashed data use by about 80%. Commenters loved the speed but argued over packet loss risks, memory costs, and whether this trick is old news—splitting into camps of “smart hack,” “obvious,” and “use dictionaries instead.”

A robot team claims a juicy 80% bandwidth cut by switching from compressing each message separately to streaming compression—basically letting the compressor remember what came before, like how modern video reuses frames instead of starting fresh every time. It’s done over WebSockets using Zstandard, and the dev even built a Rust crate to bring streaming to gRPC-web and server-sent events. Nerdy? Yes. Effective? Also yes.
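The core trick is easy to demonstrate. The sketch below uses Python's stdlib zlib as a stand-in for zstd (same principle, different codec): one long-lived compressor is shared across all messages, and a sync flush after each message emits a chunk the receiver can decode immediately while the history window survives, so repetitive telemetry compresses against everything sent before. The message contents are made up for illustration.

```python
import zlib

# Hypothetical repetitive robot telemetry messages.
messages = [b'{"joint_angles": [0.10, 0.20, 0.30], "status": "ok"}'] * 20

# Framed compression: every message gets a fresh context, so no
# cross-message redundancy is exploited.
framed_bytes = sum(len(zlib.compress(m)) for m in messages)

# Streaming compression: one shared compressor for the whole connection.
# Z_SYNC_FLUSH produces a complete, immediately decodable chunk per
# message while keeping the history window alive.
comp = zlib.compressobj()
chunks = [comp.compress(m) + comp.flush(zlib.Z_SYNC_FLUSH) for m in messages]
streamed_bytes = sum(len(c) for c in chunks)

# The receiver mirrors this with one long-lived decompressor.
decomp = zlib.decompressobj()
decoded = [decomp.decompress(c) for c in chunks]

print(f"framed: {framed_bytes} B, streamed: {streamed_bytes} B")
```

This also makes the commenters' worries concrete: the shared `comp`/`decomp` state is exactly the per-connection memory duskwuff flags, and it is why chunks must arrive intact and in order, as lambdaloop asks—WebSockets run over TCP, which guarantees both.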

But the comments? Absolute mayhem. One camp cheers the clever hack; another clutches pearls over reliability. lambdaloop wants to know what happens when packets go missing or arrive out of order—does the shared memory “brain” melt? duskwuff rolls in like the party pooper, warning that keeping a single context alive costs memory and could get pricey at high settings. Meanwhile masklinn yawns, basically saying this is obvious if you’ve ever compared .zip to .tgz (tar + gzip), which is the internet equivalent of “welcome to 2003.” efitz drops an old-school flex about screaming-fast compression from IBM tape days, and vlovich suggests using a tuned dictionary so you keep message independence. Cue memes of “interframe vs MJPEG,” robot whisperer jokes, and a mini flamewar over whether this is genius engineering or just common sense dressed up as a blog post. The vibe: thrilled, skeptical, and extremely online.

Key Points

  • Standard WebSocket framed compression compresses each message independently, benefiting larger messages but limiting cross-message context.
  • Sharing a single zstd encoder/decoder context across messages and flushing per message enables effective streaming compression over WebSockets.
  • In the described robot control workload (~10×100KB flexbuffer messages/sec over Wi‑Fi), streaming compression reduced bandwidth about 80% beyond per-message zstd.
  • Initial zstd dictionary compression was abandoned due to overhead; the streaming approach effectively builds an on-the-fly dictionary as the stream progresses.
  • The author built a Rust crate to provide streaming compression for HTTP responses (including gRPC‑web and SSE), citing limitations in gRPC and tower‑http for streaming.
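The dictionary alternative vlovich raises can be sketched the same way: prime each independent compressor with a pre-agreed dictionary so messages stay self-contained but still benefit from shared redundancy. Again this uses stdlib zlib rather than zstd, and the sample dictionary and message are invented for illustration.

```python
import zlib

# A pre-shared dictionary (here just a representative sample message)
# that both sides agree on ahead of time.
sample = b'{"joint_angles": [0.10, 0.20, 0.30], "status": "ok"}'
msg = b'{"joint_angles": [0.11, 0.21, 0.31], "status": "ok"}'

# Without a dictionary: a short message barely compresses.
plain = zlib.compress(msg)

# With a dictionary: each message uses its own compressor (full
# independence), but the dictionary supplies the cross-message context.
comp = zlib.compressobj(zdict=sample)
with_dict = comp.compress(msg) + comp.flush()

decomp = zlib.decompressobj(zdict=sample)
roundtrip = decomp.decompress(with_dict)

print(f"plain: {len(plain)} B, with dict: {len(with_dict)} B")
```

The trade-off versus streaming: a lost or reordered message here costs nothing, but the dictionary is fixed, whereas a streaming context adapts to whatever the robot actually sends.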

Hottest takes

  • "Does streaming compression work if some packets are lost or arrive in a different order?" — lambdaloop
  • "This may have a nontrivial memory cost" — duskwuff
  • "Surely that is obvious to anyone who has compared zip and tgz?" — masklinn
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.