January 9, 2026

When SSDs nap, networks sprint

Cloudspecs: Cloud Hardware Evolution Through the Looking Glass

Network rockets 10x while SSDs stall; is it time to go back on‑prem?

TLDR: Study says cloud networks got 10x cheaper, but local SSD value barely moved since 2016. Commenters blame “Nitro NVMe” design and float going back to on‑prem servers, arguing storage‑heavy apps might run cheaper outside the cloud.

Forget the usual "everything gets faster" story. A new CIDR'26 paper says cloud progress is lopsided: networks are sprinting while CPUs, memory, and especially SSDs are dragging. Commenters went full popcorn mode over the shocker: NVMe SSDs haven't improved in value since 2016, with the old i3 still the best deal. User mad44 demanded answers on the "NVME SSDs pricing anomaly," and donavanm dropped insider lore: not all NVMe is equal. The beloved i3 uses direct‑attached drives, while newer "Nitro NVMe" shows up as embedded cards that emulate the NVMe interface. Translation for non‑nerds: newer cloud SSDs might add layers that slow the vibe. Meanwhile, Graviton chips helped CPU value only modestly, and AI‑driven memory prices damped gains, even as network bang‑for‑buck exploded 10x.

Then came the plot twist: dweekly raised cloud repatriation (moving some tasks back to owned servers) because "IOPS per dollar" (how many reads/writes you get for your money) looks better on‑prem than in the cloud. Cue memes: "Moore's Law took a nap," "NVMe is the new printer," and "Just stream from S3 and chill," as folks point to AWS's blazing Nitro networks making remote storage tempting. The crowd is split: one camp says AWS is nudging everyone to read directly from S3; the other swears you can still win with local caching if you pick the right instance family. Bottom line: the uniform upgrade fairy is gone; specialization rules now.
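The repatriation argument boils down to simple arithmetic. Here is a back‑of‑the‑envelope sketch of the "IOPS per dollar" comparison; every number in it is an illustrative placeholder, not a figure from the paper or the thread.

```python
# Back-of-the-envelope IOPS-per-dollar comparison.
# All figures below are hypothetical placeholders for illustration only.

def iops_per_dollar(iops: float, monthly_cost_usd: float) -> float:
    """Read/write operations per second you get for each dollar per month."""
    return iops / monthly_cost_usd

# Hypothetical cloud instance with local NVMe: 400k IOPS at $800/month.
cloud = iops_per_dollar(400_000, 800)

# Hypothetical on-prem box: 1M IOPS, $12k of hardware amortized over
# 36 months, plus $100/month for power and rack space.
onprem = iops_per_dollar(1_000_000, 12_000 / 36 + 100)

print(f"cloud:   {cloud:,.0f} IOPS/$ per month")
print(f"on-prem: {onprem:,.0f} IOPS/$ per month")
print(f"on-prem advantage: {onprem / cloud:.1f}x")
```

With these made‑up inputs the on‑prem side wins handily; whether it does in real life depends entirely on the actual instance pricing, hardware amortization schedule, and ops overhead you plug in.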

Key Points

  • CPU cost-performance in AWS improved ~3x over 2015–2025 as measured via SPECint, dropping to ~2x without Graviton; core counts rose to 448 (u7in), but in-memory workloads saw only 2x–2.5x gains, constrained by memory latency.
  • DRAM capacity per dollar largely flatlined; memory-optimized x instances (2016) offered ~3.3x more GiB-hours/$ than compute-optimized peers; absolute bandwidth rose ~5x (DDR3→DDR5), but cost-normalized gains were only ~2x amid recent DDR5 price spikes.
  • Network bandwidth per dollar improved 10x, and absolute speeds increased from 10 Gbit/s to 600 Gbit/s; gains centered on network-optimized instances like c5n powered by AWS Nitro cards, with generic instances seeing little change.
  • AWS NVMe SSD performance has stagnated since 2016; the i3 family still leads I/O performance per dollar by nearly 2x; SSD capacity has stalled since 2019, diverging from on-prem NVMe, which doubled twice with PCIe 4/5.
  • The paper suggests a shift toward disaggregated storage: with fast networks and stagnant local NVMe, remote storage (e.g., S3) may be preferable, and specialization (Graviton, Nitro, accelerators) is driving performance gains.
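The ratios above (3x CPU, ~2x memory, 10x network) are all cost-normalized gains: performance per dollar of a new generation divided by performance per dollar of an old one. A minimal sketch of that arithmetic, with hypothetical prices (the paper's exact methodology and pricing data are not reproduced here):

```python
# Cost-normalized improvement: how much more performance per dollar a
# newer instance generation delivers versus an older one.
# All prices and perf numbers below are hypothetical, for illustration.

def perf_per_dollar(perf: float, hourly_price_usd: float) -> float:
    return perf / hourly_price_usd

def cost_normalized_gain(old_perf: float, old_price: float,
                         new_perf: float, new_price: float) -> float:
    """Ratio of new perf/$ to old perf/$; values > 1 mean better value."""
    return perf_per_dollar(new_perf, new_price) / perf_per_dollar(old_perf, old_price)

# Network: 10 Gbit/s at a hypothetical $0.50/h vs 600 Gbit/s at $3.00/h.
net_gain = cost_normalized_gain(10, 0.50, 600, 3.00)   # 10.0x, matching the paper's headline

# SSD: identical IOPS in 2016 and 2025, but a slightly higher hourly price.
ssd_gain = cost_normalized_gain(3_000_000, 1.00, 3_000_000, 1.10)  # < 1: value regressed
```

The SSD case shows why "no improvement" can actually mean "getting worse": if raw performance is flat while prices creep up, the cost-normalized ratio drops below 1.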

Hottest takes

"NVME SSDs pricing anomaly?" — mad44
"Article should probably explicitly call out the difference between directly attached nvme storage (good ol i3) and “nitro nvme”" — donavanm
"Economics that didn't really pencil out half a decade ago may be worth revisiting" — dweekly
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.