April 6, 2026
Fast claims, hotter comments
Linux extreme-performance HTTP/1.1 load generator
Fastest‑ever boast sparks “prove it” brawl, timing doubts, and an “I did it first” flex
TL;DR: A new Linux load-testing tool claims record speed and ultra-precise timing, with a slick UI and JSON output. The crowd rallies for hard benchmarks and questions how async batching measures latency, while a veteran flexes a pre-io_uring alternative—turning a product launch into a proof-or-it’s-puff showdown.
Meet “Glass Cannon,” a new open‑source tool that blasts websites with traffic to see how they hold up under load. It claims to be the fastest tester yet, built on io_uring, a Linux interface that lets a program hand the kernel big batches of I/O work instead of one request at a time. It promises microsecond‑accurate timing, a flashy terminal dashboard, and script‑friendly JSON output. It’s even the official load generator for Http Arena and lives on GitHub.
But the community isn’t buying hype without receipts. The top vibe: “Benchmarks or it didn’t happen.” User Veserv calls out the “extreme” label with no side‑by‑side charts, stirring a dogpile of “show the numbers” demands. Meanwhile, a thoughtful curveball lands: bawolff wonders if batching lots of requests at once could mess up the timing data—if it’s all asynchronous, when exactly does the clock start? The tool touts “exact” percentiles, but commenters want proof, not poetry.
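bawolff’s question is easy to see in miniature. The sketch below is not gcannon’s actual code—it just simulates two ways a batched load generator could start the clock: once for the whole batch, or once per request as each one is prepared. The function and parameter names are illustrative.

```python
# Illustrative sketch: why "when does the clock start?" matters when
# requests are submitted in batches. Not gcannon's implementation.
import time

def measure_batch(batch_size, per_request_clock):
    """Simulate a batch submission; return per-request latencies in microseconds."""
    batch_start = time.monotonic_ns()   # one clock for the whole batch
    starts = []
    for _ in range(batch_size):
        # Option A: every request inherits the batch timestamp.
        # Option B: each request gets its own timestamp as it is prepared.
        starts.append(time.monotonic_ns() if per_request_clock else batch_start)
        time.sleep(0.001)               # stand-in for prep/queueing delay
    done = time.monotonic_ns()          # stand-in for completion time
    return [(done - s) / 1000 for s in starts]

# With a single batch clock, later requests are charged for the queueing
# delay of the requests ahead of them, inflating reported latency.
print(measure_batch(4, per_request_clock=False))
print(measure_batch(4, per_request_clock=True))
```

Under a single batch clock every request reports the same (worst-case) latency; per-request timestamps shrink that for requests prepared later in the batch—which is exactly the ambiguity commenters want the tool to pin down.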
Then comes the plot twist: a veteran rolls in with a humble‑brag. User 0x000xca0xfe drops a link to their older, simpler tool that did something similar before io_uring even existed, sparking a mini‑meme war of “new hotness vs old reliable.” Jokes fly about the name—“Glass Cannon” sounds perfect for a project that hits hard but shatters under scrutiny. TL;DR: cool features, spicy claims, and a comment section demanding charts, not charm.
Key Points
- Glass Cannon (gcannon) is a Linux io_uring–based HTTP/1.1 and WebSocket load generator and the official tool for Http Arena.
- Installation requires Linux 6.1+, gcc, and liburing-dev 2.5+, with build steps via apt, git clone, and make.
- The tool claims extreme throughput using batched async I/O and provides microsecond-precision, per-request latency histograms using CLOCK_MONOTONIC.
- A TUI mode offers live progress, req/s graphs, color-coded latency percentiles, adaptive histograms, run history, and per-template latency (optional).
- JSON mode outputs machine-readable results (including WebSocket-specific fields), and the CLI supports only http://, with options for concurrency, threads, duration, pipelining, and reconnect behavior.
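Since the tool reports latency as bucketed histograms with percentile summaries, here is a minimal sketch of how a percentile can be read back out of such a histogram. The bucket layout and numbers are invented for illustration and do not reflect gcannon’s JSON format.

```python
# Illustrative sketch: extracting a percentile from a bucketed latency
# histogram. Bucket bounds and counts are made up, not gcannon's output.

def percentile_from_histogram(buckets, pct):
    """buckets: list of (upper_bound_us, count), in ascending order.
    Returns the upper bound of the bucket holding the pct-th percentile."""
    total = sum(count for _, count in buckets)
    target = total * pct / 100.0
    running = 0
    for upper_us, count in buckets:
        running += count
        if running >= target:
            return upper_us
    return buckets[-1][0]

hist = [(100, 50), (250, 30), (500, 15), (1000, 4), (5000, 1)]
print(percentile_from_histogram(hist, 50))   # -> 100
print(percentile_from_histogram(hist, 99))   # -> 1000
```

Note the resolution trade-off commenters poke at: a histogram percentile is only as “exact” as its bucket width, so microsecond-precision claims hinge on how the buckets are sized.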