April 18, 2026

Equal or epsilon? Pick your fighter

It's OK to compare floating-points for equality

Dev says “just use ==” and the comments go nuclear over float math

TLDR: A veteran dev says exact comparisons are often fine and that blanket “epsilon” tolerances can backfire. The comments erupt into bit hacks, a $100 Rust bounty, and maintainability worries, as readers argue over the safest, simplest way to compare numbers without breaking real-world apps.

The author dropped a spicy take: stop slapping a tiny “epsilon” tolerance on every number check and, yes, sometimes just use ==. He argues floating-point numbers (the way computers store decimals) are deterministic, not random, and blanket tolerance rules cause more bugs than they fix. Cue the comment cage match.
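The determinism claim is easy to see in a few lines. This is a minimal Rust sketch (Rust picked because the thread's bounty concerns a Rust macro), not code from the article: the same operations always produce the same bits, and exactly representable values compare exactly with `==`.

```rust
fn main() {
    // Sums of powers of two are exactly representable, so == is reliable here.
    assert!(0.5_f64 + 0.25 == 0.75);

    // Determinism: repeating the same operations yields bit-identical results.
    let x = 0.1_f64 + 0.2;
    let y = 0.1_f64 + 0.2;
    assert!(x == y);

    // But 0.1 and 0.2 have no exact binary form, so their sum is not 0.3.
    assert!(x != 0.3);
    println!("ok");
}
```

The point is not that `==` always works, but that float results are reproducible, so inequality comes from representation, not randomness.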

On one side, the bit-hackers: mizmar swaggered in with a magnitude-proof trick — “bit-cast to integer, strip a few least significant bits” — and 4pkjai backed it up with a real-world use case: verifying whether text lines up exactly between two PDFs. On another flank, jph turned the thread into a mini-hackathon, tossing a $100 bounty for anyone who can improve a Rust macro that checks whether two numbers are “close enough.” Nothing like cash to fuel a standards war.
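mizmar's trick can be sketched roughly like this (my own illustration, not the commenter's actual code; it assumes positive, finite inputs and ignores signs, NaNs, and the +0/-0 pair, and two values straddling a mask boundary can still compare unequal even when only 1 ULP apart):

```rust
/// Compare two positive, finite f32s after clearing the `bits` least
/// significant mantissa bits, so values within a few ULPs of each other
/// usually land in the same bucket regardless of magnitude.
fn approx_eq_bits(a: f32, b: f32, bits: u32) -> bool {
    let mask = !0u32 << bits; // zero out the low `bits` bits
    (a.to_bits() & mask) == (b.to_bits() & mask)
}

fn main() {
    let a = 1.0_f32;
    let b = f32::from_bits(a.to_bits() + 3); // 3 ULPs above 1.0
    assert!(approx_eq_bits(a, b, 4));        // close in ULPs: "equal"
    assert!(!approx_eq_bits(1.0, 2.0, 4));   // far apart: not equal
    println!("ok");
}
```

Because consecutive float bit patterns correspond to consecutive representable values, this tolerance scales with magnitude, which is exactly why it works for checks like the PDF text-alignment case.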

Then came the maintainers. AshamedCaptain warned that tossing epsilon out means future code changes will break comparisons and grid checks. Translation: today’s “clean” equals is tomorrow’s pager-duty nightmare. Meanwhile, demorro confessed they misunderstood what epsilon even is — learning that language-provided epsilon usually only applies near 1.0, not across all sizes. Oops.
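demorro's realization is worth spelling out: `f32::EPSILON` is the gap between 1.0 and the next representable float (about 1.19e-7), so used as a fixed absolute tolerance it is far too tight for large values and absurdly loose for tiny ones. A sketch of both failure modes:

```rust
fn main() {
    // Too tight at large magnitudes: even adjacent floats differ
    // by more than EPSILON once you get far from 1.0.
    let big = 1_000_000.0_f32;
    let next_big = f32::from_bits(big.to_bits() + 1); // 1 ULP above `big`
    assert!(next_big - big > f32::EPSILON);

    // Too loose at small magnitudes: a 100% relative error passes.
    let tiny = 1e-10_f32;
    assert!((tiny - 2.0 * tiny).abs() < f32::EPSILON);
    println!("ok");
}
```

This is why "just compare against epsilon" is not a magnitude-independent notion of closeness.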

Memes flew: “My ex was ‘approximately equal,’” “epsilon is my comfort number,” and “choose your fighter: == vs ε.” Whether you’re Team Exact or Team Tolerance, the thread turned a math lecture into a soap opera — with links to floating-point basics for anyone scrambling to keep up.

Key Points

  • The author argues that exact equality comparisons for floating-point numbers are often acceptable and that blanket epsilon checks are usually suboptimal.
  • Floating-point arithmetic is deterministic and standardized; each operation returns the closest representable value to the true result with defined rounding.
  • Inexactness arises from finite representation, not randomness; predictable error bounds exist.
  • Some algebraic laws (e.g., associativity) do not always hold in floating-point arithmetic; a 32-bit example shows how regrouping a sum changes its result.
  • The article previews case studies (grid-based movement, ray–box intersection) to illustrate better alternatives to epsilon-based comparisons.
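The associativity point above can be reproduced in a few lines. This is a generic 32-bit illustration, not necessarily the article's exact numbers: because each operation rounds its result, grouping matters.

```rust
fn main() {
    // Rounding happens after every operation, so grouping changes the sum.
    let (a, b, c) = (1.0e8_f32, -1.0e8_f32, 1.0_f32);
    let left = (a + b) + c;  // (1e8 - 1e8) + 1 = 1.0
    let right = a + (b + c); // -1e8 + 1 rounds back to -1e8, so the sum is 0.0
    assert!(left != right);
    println!("left = {left}, right = {right}");
}
```

The 1.0 vanishes in `b + c` because the gap between adjacent f32 values near 1e8 is 8, so adding 1 cannot move the result to a different representable value.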

Hottest takes

"bit-cast to integer, strip few least significant bits" — mizmar
"I have this floating-point problem at scale and will donate $100" — jph
"Anything else is basically a nightmare" — AshamedCaptain
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.