Healthchecks.io Now Uses Self-Hosted Object Storage

Self-hosted flex or overkill? Btrfs PTSD vs 'just use a folder' crowd

TL;DR: Healthchecks.io now stores ping request bodies on its own server through an S3-compatible setup, dodging per-request fees and third-party slowdowns. The crowd is split between “smart future-proofing” and “just use a fast SSD and folders,” with extra drama over Btrfs fears and applause for Versity as a newly discovered option.

Healthchecks.io just pulled a power move: ditching managed storage and rolling its own. Translation for normal humans: the service that watches your cron jobs by listening for pings now saves request data on its own server, speaking the S3 protocol to a Versity S3 Gateway on top of the Btrfs file system. Why? AWS charges per request and raises privacy headaches, and two European providers (OVHcloud, then UpCloud) slowed to a crawl. With 14 million tiny files (about 119GB total) and ~30 uploads a second, they wanted speed and control, and this data isn’t as critical as the main database anyway.

But the comment section? Absolute chaos. One user basically screamed, “Btrfs gives me PTSD,” reigniting the age-old file system trauma. Another asked the big, messy question: if it’s all on one machine, why use the S3 interface at all? Just write files straight to disk! The “folder gang” says a single SSD and plain files would be faster and simpler, with one commenter claiming you could get “100x throughput” that way. Meanwhile, others are excited to learn about Versity as a fresh alternative to MinIO and Garage—“didn’t know this existed!” energy.
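
For the curious, the “folder gang” approach is easy to sketch. This is a hedged illustration of what the commenters are proposing, not anything Healthchecks.io actually runs: objects are sharded across subdirectories by key hash (so 14 million files don’t pile into one folder) and written atomically via a temp file plus rename. The function names and the two-level sharding scheme are made up for the example.

```python
import hashlib
import os
import tempfile

def object_path(root: str, key: str) -> str:
    """Map an object key to a sharded path like root/ab/cd/<key>, spreading
    millions of files across up to 65,536 subdirectories. (Assumes keys
    contain no path separators; a real store would sanitize them.)"""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return os.path.join(root, digest[:2], digest[2:4], key)

def put(root: str, key: str, body: bytes) -> str:
    """Write an object atomically: temp file in the target dir, then rename."""
    path = object_path(root, key)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path))
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(body)
        os.replace(tmp, path)  # atomic rename on POSIX filesystems
    except BaseException:
        os.unlink(tmp)  # clean up the temp file if anything failed
        raise
    return path

def get(root: str, key: str) -> bytes:
    with open(object_path(root, key), "rb") as f:
        return f.read()
```

Whether this beats an S3 gateway by “100x” is exactly what the thread is arguing about; what you give up is the API surface (listing, auth, multipart) that an S3-compatible layer provides for free.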

There’s humor too: memes about “self-hosted swagger” vs “just use a folder” pragmatists, plus a side-quest asking what tool made that benchmark screenshot. It’s part tech upgrade, part therapy session, part popcorn-worthy debate—and the crowd is loving it. Check the source at Healthchecks.io for the nitty-gritty.

Key Points

  • Healthchecks.io migrated from managed to self-hosted object storage, using Versity S3 Gateway backed by a Btrfs filesystem.
  • The service stores up to 100kB of POST request bodies: tiny payloads in PostgreSQL, larger ones via an S3-compatible store.
  • AWS S3 was rejected due to per-request costs and CLOUD Act implications; OVHcloud and then UpCloud were used but suffered performance issues.
  • Current scale (April 2026): ~14M objects totaling 119GB, average 8KB, ~30 uploads/sec average with spikes to 150, and constant churn.
  • Clustered self-hosted options (MinIO, SeaweedFS, Garage) were tested but deemed operationally complex; a simpler single-system approach was chosen.
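
The two-tier split in the key points above (tiny payloads in PostgreSQL, larger ones in the object store) boils down to a size threshold. Here is a minimal sketch of that routing; the 100 kB cap comes from the post, but the 1 kB inline cutoff is an assumption, since the exact inline-vs-object threshold isn’t stated.

```python
# Hypothetical routing of ping request bodies between PostgreSQL and an
# S3-compatible object store. The 100 kB cap is from the post; the 1 kB
# inline cutoff is illustrative, not Healthchecks.io's actual value.

MAX_BODY = 100 * 1024      # request bodies are capped at 100 kB
INLINE_CUTOFF = 1 * 1024   # assumed: at or below this, keep it in Postgres

def route_body(body: bytes) -> tuple[str, bytes]:
    """Return (destination, possibly-truncated body)."""
    body = body[:MAX_BODY]
    if len(body) <= INLINE_CUTOFF:
        return ("postgres", body)  # tiny payload: store inline in the row
    return ("s3", body)            # larger payload: S3-compatible store
```

The appeal of a scheme like this is that the hot path (most pings carry little or no body) never touches object storage at all.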

Hottest takes

"everytime I see btrfs I get PTSD" — _joel
"why does it even need the S3 API? Could just be plain IO" — tobilg
"100x the throughput vertically scaling on one machine" — lsb
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.