Why Your Load Balancer Still Sends Traffic to Dead Backends

Dev drama over slow health checks, ‘sacrificed requests,’ and clever safety tricks

TL;DR: Load balancers can keep sending requests to dead servers because health checks are slow or inconsistent. Commenters feud over passive checks that let one request fail, big-brain “reverse” pipes, and practical circuit breakers—everyone wants fewer spinners and faster failovers without breaking everything.

A spicy thread ignited over why load balancers—those traffic cops that decide which server gets your click—still send your request to a dead backend that just spins and dies. The article lays out two worlds: server-side (a central traffic cop like HAProxy or NGINX) and client-side (each app picks a server itself). Both rely on “health checks” to spot sick servers, but those checks can be slow or wrong, especially with delays from DNS (the internet’s phone book). Cue the comments: AuthAuth wants to know why a real user must be the sacrifice—“why not retry immediately?” Meanwhile, dotwaffle pitches a galaxy-brain “reverse” HTTP pipe so servers can proactively signal “I’m done, drain me,” like hitting the big red stop button.
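
The rise/fall health-check logic behind all this drama fits in a few lines. Here's a minimal sketch, assuming a hypothetical server-side balancer model in Python—this is not HAProxy's or NGINX's actual implementation, and the names and defaults are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Backend:
    """One backend as tracked by a (hypothetical) server-side load balancer."""
    name: str
    healthy: bool = True
    consecutive_failures: int = 0
    consecutive_successes: int = 0

def apply_probe_result(b: Backend, ok: bool, rise: int = 2, fall: int = 3) -> None:
    """Update health using rise/fall thresholds so one flaky probe doesn't flap the backend."""
    if ok:
        b.consecutive_successes += 1
        b.consecutive_failures = 0
        if not b.healthy and b.consecutive_successes >= rise:
            b.healthy = True   # back in rotation
    else:
        b.consecutive_failures += 1
        b.consecutive_successes = 0
        if b.healthy and b.consecutive_failures >= fall:
            b.healthy = False  # removed from rotation for all clients
```

The anti-flapping thresholds are exactly what creates the detection lag the thread complains about: with probes every 5 seconds and `fall=3`, a backend that dies right after passing a probe keeps receiving traffic for up to ~15 seconds.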

The drama heats up with a classic well-actually: dastbe argues both models can push health checks out-of-band, meaning smarter systems feed fresh status to the traffic cop without clogging the highway. Then umairnadeem123 steals the show with a practical trick: combine passive checks with “circuit breakers” (think a bouncer who throttles a misbehaving server) to get sub-second detection without false alarms. Jokes abound about “sacrifices to the timeout gods” and the eternal meme: It’s always DNS. The vibe? Frustrated users, feuding engineers, and a pile of clever hacks to stop the spinner apocalypse.
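
The passive-checks-plus-circuit-breaker combo umairnadeem123 describes can be sketched as a small state machine: count real-request failures, trip open after a threshold, then let a single trial request through after a cooldown. This is an illustrative Python sketch, not the commenter's actual code; the thresholds, names, and cooldown are made up:

```python
import time

class CircuitBreaker:
    """Per-backend breaker fed by passive health signals (real request outcomes)."""
    CLOSED, OPEN, HALF_OPEN = "closed", "open", "half_open"

    def __init__(self, failure_threshold=3, reset_timeout=1.0, clock=time.monotonic):
        self.state = self.CLOSED
        self.failures = 0
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.clock = clock          # injectable for testing
        self.opened_at = 0.0

    def allow_request(self) -> bool:
        if self.state == self.OPEN:
            if self.clock() - self.opened_at >= self.reset_timeout:
                self.state = self.HALF_OPEN  # let one trial request probe the backend
                return True
            return False                     # fail fast: no spinner for the user
        return True

    def record_success(self) -> None:
        self.failures = 0
        self.state = self.CLOSED

    def record_failure(self) -> None:
        self.failures += 1
        if self.state == self.HALF_OPEN or self.failures >= self.failure_threshold:
            self.state = self.OPEN
            self.opened_at = self.clock()
```

Because the breaker reacts to live traffic instead of waiting for the next scheduled probe, detection can be sub-second—at the cost of a few sacrificed requests tripping it.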

Key Points

  • The article contrasts server-side and client-side load balancing and how each model handles health checks.
  • Server-side load balancers centrally probe backends and apply thresholds (interval, timeout, rise/fall) to avoid flapping but introduce detection latency (e.g., up to 15 seconds).
  • Once a server-side load balancer marks an instance unhealthy, it immediately removes it from rotation for all clients.
  • Client-side load balancing distributes health checks across clients, often using service registries and DNS for instance discovery.
  • Active client-side probing increases probe traffic and can create inconsistent views across clients, especially during instance degradation.
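
The detection-latency figure above is just probe arithmetic. A back-of-envelope helper, assuming the worst case where the backend dies immediately after passing a probe and the balancer needs `fall` consecutive failed probes to eject it:

```python
def worst_case_detection_seconds(interval_s: float, fall: int) -> float:
    """Rough upper bound on how long a dead backend keeps receiving traffic:
    it can die just after a successful probe, then must fail `fall`
    consecutive probes spaced `interval_s` apart (probe timeouts add more)."""
    return fall * interval_s

# e.g. a 5s probe interval with fall=3 leaves up to 15s of traffic
# going to a dead backend
print(worst_case_detection_seconds(5, 3))
```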

Hottest takes

"why one real request must fail?" — AuthAuth
"a standardised 'reverse' HTTP connection" — dotwaffle
"combine passive health checks with circuit breaker state machines" — umairnadeem123
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.