Native ZFS VDEV for Object Storage (OpenZFS Summit)

ZFS hits the cloud: 3.7GB/s sparks cheers, side‑eye, and bucket memes

TLDR: A new ZFS add-on streams data directly from cloud buckets at up to 3.7 GB/s by skipping a slow layer. The crowd loved the speed but argued over real-world proof, asking why ZFS belongs here and demanding random I/O tests before declaring it a storage game-changer.

At the OpenZFS Summit, the team behind MayaNAS/MayaScale dropped the mic: objbacker.io pipes ZFS straight to cloud storage like Amazon S3, Google Cloud, and Azure—no slow FUSE layer—and claims up to 3.7 GB/s reads by striping across multiple buckets. Translation for non‑nerds: they’re making your files fly from cheap cloud storage while keeping the fast stuff on local SSDs. Fans rushed in—“ZFS stays relevant!”—but the comment section quickly split into hype vs. hard‑truthers.

One camp cheered the hybrid design: metadata on speedy local drives, big files streaming from the cloud. The other camp rolled their eyes: “Nice sequential numbers, but where’s the random I/O?” In plain speak: great for big videos, unclear for messy, tiny reads. Skeptics also asked why use ZFS at all for buckets, given that its claim to fame is safety features like checksums and encryption. Others wanted receipts on prior work, linking to the earlier ZFS-on-S3 demo (video). And a side thread spiraled into “compare this to ZeroFS’s network block devices” (link), because of course it did.

Meanwhile, memes flew: “RAID‑0 of buckets,” “Striping buckets like a sushi chef,” and “Cloud NAS but make it fashion.” Verdict from the crowd: bold idea, spicy numbers, incomplete proof for real‑world chaos—more tests, fewer vibes.

Key Points

  • Zettalane introduced objbacker.io, a native ZFS VDEV for object storage that bypasses FUSE, presented at OpenZFS Developer Summit 2025 in Portland.
  • The solution tiers metadata and small blocks on local NVMe and streams large blocks (1MB+) from S3/GCS/Azure, using ZFS's special vdev (allocation class) architecture.
  • objbacker.io uses a character device (/dev/zfs_objbacker) to connect ZFS directly to a Go daemon that calls the native cloud SDKs (see the sketch after this list).
  • Benchmarks on AWS c5n.9xlarge reported up to 3.7 GB/s read throughput by striping across six S3 buckets; FIO tests used 1MB aligned I/O.
  • MayaScale, an NVMe-oF block storage component, targets sub-millisecond latency with Active-Active HA; GCP tiers with supporting metrics were also presented.
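
For the technically curious, here's a rough sketch of what striped, block-sized object reads from a Go daemon could look like. This is a minimal illustration, not objbacker.io's actual code: the bucket names, the one-object-per-block key layout, and the round-robin striping scheme are assumptions invented for the example. The only details grounded in the talk are that the daemon is written in Go, uses the native AWS SDK rather than FUSE, and issues 1MB-aligned reads spread across multiple buckets.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"sync"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// Assumed layout (not from the talk): each 1 MiB ZFS block maps to one object,
// and consecutive blocks are striped round-robin across N buckets.
const blockSize = 1 << 20 // 1 MiB, matching the 1MB aligned I/O in the benchmarks

// objectKey is a hypothetical naming scheme for block objects.
func objectKey(blockIndex int64) string {
	return fmt.Sprintf("vdev/block-%012d", blockIndex)
}

// readBlock fetches one block-sized object from the bucket chosen by
// round-robin striping (blockIndex mod number of buckets).
func readBlock(ctx context.Context, client *s3.Client, buckets []string, blockIndex int64) ([]byte, error) {
	bucket := buckets[blockIndex%int64(len(buckets))]
	out, err := client.GetObject(ctx, &s3.GetObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(objectKey(blockIndex)),
	})
	if err != nil {
		return nil, err
	}
	defer out.Body.Close()
	return io.ReadAll(out.Body)
}

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)

	// Hypothetical bucket names; the talk striped reads across six buckets.
	buckets := []string{
		"objbacker-stripe-0", "objbacker-stripe-1", "objbacker-stripe-2",
		"objbacker-stripe-3", "objbacker-stripe-4", "objbacker-stripe-5",
	}

	// Fetch 64 consecutive 1 MiB blocks concurrently; parallel GETs across
	// buckets are what make multi-GB/s sequential throughput plausible.
	var wg sync.WaitGroup
	blocks := make([][]byte, 64)
	for i := range blocks {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			data, err := readBlock(ctx, client, buckets, int64(i))
			if err != nil {
				log.Printf("block %d: %v", i, err)
				return
			}
			blocks[i] = data
		}(i)
	}
	wg.Wait()
	log.Printf("fetched %d blocks of up to %d bytes each", len(blocks), blockSize)
}
```

The concurrency is the point: a single S3 GET tops out well below an instance's network bandwidth, so multi-GB/s sequential reads have to come from many requests in flight at once, fanned out across buckets and connections.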

Hottest takes

“That’s brilliant!” — doktor2u
“FS metrics without random IO benchmark are near meaningless” — PunchyHamster
“Why would I use zfs for this?” — glemion43
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.