May 9, 2026
No fsync, no fear?
Removing fsync from our local storage engine
They ditched a famous safety step, and the comments instantly split into cheers and fact-checks
TL;DR: The team says it sped up its local storage system by avoiding the usual disk-sync step, but only in a tightly controlled setup on solid-state drives. Commenters were split between applause, nostalgia, and nitpicky skepticism, with one debate zeroing in on whether the article overstated what actually gets safely saved.
A storage-engine team just posted the kind of engineering confession that makes infrastructure nerds clutch their pearls: they found a way to keep writes safe after a crash without calling the usual disk-flush step every time. The payoff is big — in their benchmark, the new setup pushed roughly 65% more writes per second than a more traditional path — but the real fireworks were in the comments, where readers reacted like they’d just watched someone remove the brakes from a car and then calmly explain the aerodynamics.
The loudest reaction was a mix of “finally!” and “okay, but only under very specific conditions.” The author jumped in quickly to pour cold water on any wild interpretations, stressing that this is not a universal replacement for the old method. It only works because they control the whole setup: solid-state drives only, tightly limited behavior, and a custom recovery plan. That didn’t stop the crowd from having fun. One commenter basically celebrated the whole thing as proof that file handling is cursed and the real villain is the old syncing interface itself. Another laughed that the industry has gone “almost full-circle” back to giant systems that seize control of the whole disk like it’s 1999.
And then came the classic comment-thread twist: a sharp-eyed reader called out what looked like a contradiction about whether saving a file also saves its folder entry, turning the thread into a mini courtroom drama. So yes, the article was about speed — but the comments were about trust, edge cases, and whether skipping a famous safety ritual is genius or just extremely well-explained chaos.
Key Points
- The article describes a single-node SSD-only KV storage engine that avoids calling fsync on PUT and DELETE by using preallocated files, pre-zeroed extents, O_DIRECT writes, and an SSD-aware journal.
- The design is explicitly limited by a narrower durability contract than POSIX semantics and depends on the engine controlling allocation, journaling, and recovery.
- The article compares common durability approaches in MinIO, RocksDB, etcd, Postgres, and Kafka to show how fsync is typically used or avoided.
- It argues that fsync is expensive because it can trigger both data and metadata persistence work, and its tail latency varies with journal state, concurrent I/O, and SSD garbage collection.
- In a 4 KB random-write benchmark on AWS i8g.2xlarge local NVMe, the custom engine reportedly reached 190,985 objects per second versus 116,041 for ext4 + O_DIRECT + fsync.