February 24, 2026

Break your calls to make them stronger

Show HN: Chaos Monkey but for Audio Video Testing (WebRTC and UDP)

Chaos tool floods video calls; Cruise vet flexes, SIP crowd asks for their turn

TL;DR: A new tool simulates 1,500+ fake video callers and injects network chaos to test call reliability. One commenter says Cruise built the same thing, warning that data-channel metrics can look clean while video lags; another asks for a SIP version. The takeaway: devs want tougher tests and broader protocol support to keep calls from freezing when it matters.

Meet AV Chaos Monkey, the stress-test buddy your video app didn’t know it needed. It spins up a bot army of fake callers (1,500+ of them), then unleashes network tantrums—dropped packets, jittery delays, fuzzy video—to see if your WebRTC setup survives. Translation for non-nerds: it deliberately messes with your calls so your app doesn’t mess up in real life.
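To make those "network tantrums" concrete, here is a toy sketch of what a network degrader does to a packet stream. This is not the tool's actual code; the function and parameter names are invented for illustration:

```python
import random

def degrade(packets, loss_rate=0.05, max_jitter_ms=80, seed=42):
    """Toy network degrader: drop a fraction of packets, delay the rest.

    `packets` is a list of (timestamp_ms, payload) tuples. All names and
    parameters here are illustrative, not the tool's real API.
    """
    rng = random.Random(seed)
    out = []
    for ts, payload in packets:
        if rng.random() < loss_rate:
            continue  # simulate packet loss
        jitter = rng.uniform(0, max_jitter_ms)  # simulate variable delay
        out.append((ts + jitter, payload))
    # jitter can reorder packets, just like a real lossy network
    out.sort(key=lambda p: p[0])
    return out

stream = [(i * 20, f"frame-{i}") for i in range(10)]  # one packet per 20 ms
degraded = degrade(stream)
print(f"sent {len(stream)}, received {len(degraded)}")
```

Feed your receive pipeline the degraded stream instead of the clean one and you find out quickly whether your jitter buffer and loss concealment actually earn their keep.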

The comments came in hot. One engineer, joshribakoff, dropped a flex: they did this at Cruise, and learned a brutal truth—when the internet hiccups, video lag goes off the rails while data looks fine. That sparked a reality check: are teams trusting “clean” dashboards while their faces freeze on screen? Another voice, agentifysh, chimed in with a simple demand: SIP (old-school phone tech) support when? Because phone folks want chaos too!

Fans loved that it reuses real audio/video frames to keep things lean and realistic, with a plug-and-play stack: TURN servers to punch through firewalls, autoscaling on Kubernetes, and shiny graphs so you can watch the carnage in real time. The vibe: gleeful “break it to make it better,” a little industry déjà vu, and a big dare—who’s brave enough to throw this at their platform and watch what breaks?
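The post doesn't document the control plane's REST API, but driving a stack like this usually looks something like the hypothetical client below, which builds (but doesn't send) a request to start a test run. The endpoint, port, and JSON fields are all invented for illustration:

```python
import json
import urllib.request

# Hypothetical control-plane address; the real API is not documented
# in the post.
BASE = "http://localhost:8080"

def build_run_request(participants: int, loss_pct: float, jitter_ms: int):
    """Build (but don't send) a request to start a chaos test run.

    Endpoint path and payload fields are invented for illustration.
    """
    body = json.dumps({
        "participants": participants,
        "network": {"packet_loss_pct": loss_pct, "jitter_ms": jitter_ms},
    }).encode()
    return urllib.request.Request(
        f"{BASE}/runs",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_run_request(participants=1500, loss_pct=5.0, jitter_ms=80)
print(req.method, req.full_url)
```

The point of a REST lifecycle API is exactly this kind of scriptability: a CI job can spin up 1,500 callers, crank the packet loss, and tear everything down without a human clicking anything.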

Key Points

  • AV Chaos Monkey simulates over 1,500 WebRTC participants with H.264/Opus streams to stress-test video conferencing systems.
  • Media frames are cached and shared across participants with zero-copy, reducing CPU usage by ~90% versus per-participant encoding.
  • A control plane provides REST-based lifecycle management, a Spike Scheduler, and a Network Degrader to apply packet loss, jitter, bitrate reduction, frame drops, and bandwidth limits.
  • Kubernetes deployment auto-partitions participants across pods, allocates ports deterministically, scales via StatefulSet/HPA, and uses a UDP relay chain to work around TCP-only port-forwarding.
  • WebRTC infrastructure includes Coturn clusters, optional webrtc-connector, and optional observability stack with Prometheus/Grafana exposing detailed test metrics.
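The zero-copy claim in the second bullet boils down to: encode each frame once, then let every simulated participant share a reference to the same bytes instead of re-encoding per participant. A minimal Python sketch of that idea (the names and the fake "encoder" are illustrative, not the tool's internals):

```python
from functools import lru_cache

ENCODE_CALLS = 0  # track how many real encodes happen

@lru_cache(maxsize=128)
def encode_frame(frame_id: int) -> bytes:
    """Stand-in for an expensive H.264 encode; cached per unique frame."""
    global ENCODE_CALLS
    ENCODE_CALLS += 1
    return f"h264-frame-{frame_id}".encode()

def send_to_participants(frame_id: int, n_participants: int) -> list[bytes]:
    # Every participant gets a reference to the *same* bytes object
    # (zero-copy), so CPU cost is one encode no matter the fan-out.
    payload = encode_frame(frame_id)
    return [payload for _ in range(n_participants)]

sends = send_to_participants(frame_id=0, n_participants=1500)
print(ENCODE_CALLS)                       # 1 encode for 1,500 participants
print(all(s is sends[0] for s in sends))  # True: shared object, no copies
```

With per-participant encoding the cost scales with the participant count; with a shared cache it scales with the number of unique frames, which is where a ~90% CPU reduction at high fan-out becomes plausible.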

Hottest takes

"did this exact same thing at Cruise" — joshribakoff
"video latency doesn’t correlate with data channels" — joshribakoff
"is there something like this but for SIP?" — agentifysh
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.