CES 2026: Taking the Lids Off AMD's Venice and MI400 SoCs

256 cores, mega cache, and a thicc rack — commenters go feral

TLDR: AMD revealed a 256‑core server chip (Venice) and a beefy MI400 AI accelerator, due this year. Commenters cheered giant cache numbers and mocked the “thicc” rack, while skeptics fretted about cooling and power—big stakes for data centers as faster, denser hardware sparks AMD‑vs‑NVIDIA drama.

CES just handed the internet a chaos sandwich, and AMD served the fillings: Venice (a server chip with up to 256 cores) and MI400 (an AI accelerator stuffed with high‑speed memory). The moment AMD showed real silicon, the crowd went full caps. One camp screamed “Stunning!” while another squinted at the heat maps asking, “Water‑cool or watch it melt?”

Venice’s big twist? Two “traffic manager” chips (IO dies) and advanced packaging that squeezes more parts closer together—translation: more power, more speed, more drama. The rumor that Venice‑X could slap on extra “V‑Cache” (think: bonus memory glued on top) sparked the loudest gasp: up to 3 gigabytes of ultra‑fast cache across the chip.

MI400 rolled in with 12 stacks of HBM4 (super‑fast memory towers) and extra chips for plug‑in lanes like PCIe (the card highway) and UALink (a server interconnect)—AKA, big bandwidth energy.

Then it got petty in the comments: one user tossed in an [NVIDIA] Blackwell reference and ignited a brand war, while another declared the new double‑wide rack “thicc and chic.” TL;DR vibes: jaw‑dropping core counts, cache flexing, and a cooling panic meme storm—will this reshape data centers, or just melt them?

Key Points

  • AMD revealed the first public silicon for its Venice server CPUs and MI400 datacenter accelerators at CES 2026, following earlier spec disclosures in June 2025.
  • Venice appears to adopt advanced CCD-to-IO packaging and two IO dies, with 8 CCDs × 32 cores enabling up to 256 cores per package.
  • Each Venice CCD is estimated at ~165 mm² on N2; assuming 4 MB L3 per core, each CCD would have ~128 MB L3, with a per-core core-plus-cache area of ~5 mm² (vs ~5.34 mm² for Zen 5 on N3).
  • MI400 integrates 12 HBM4 stacks and twelve 2 nm/3 nm compute and IO dies, with two ~747 mm² base dies and two ~220 mm² off-package IO dies; likely eight compute dies (four per base die).
  • AMD announced MI440X (joining MI430X and MI455X) for 8-way UBB systems and Venice-X (likely V-Cache); both Venice and MI400 are due to launch later this year.
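The core and cache counts above reduce to simple arithmetic. A quick sketch, using only the article's own estimates (the 4 MB-of-L3-per-core figure is an assumption in the source, and any stacked V-Cache on Venice-X would sit on top of the base total):

```python
# Back-of-envelope check of the Venice figures quoted in the Key Points.
# All inputs are the article's estimates, not confirmed AMD specifications.
CCDS_PER_PACKAGE = 8   # estimated CCD count per Venice package
CORES_PER_CCD = 32     # cores per CCD
L3_PER_CORE_MB = 4     # assumed L3 per core

cores_per_package = CCDS_PER_PACKAGE * CORES_PER_CCD        # up to 256 cores
l3_per_ccd_mb = CORES_PER_CCD * L3_PER_CORE_MB              # ~128 MB L3 per CCD
l3_base_total_mb = CCDS_PER_PACKAGE * l3_per_ccd_mb         # ~1024 MB base L3

print(cores_per_package, l3_per_ccd_mb, l3_base_total_mb)   # 256 128 1024
```

The ~1 GB base figure also makes the rumored Venice-X number plausible: reaching the ~3 GB mentioned above would require stacked V-Cache adding roughly twice the base L3 on top.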

Hottest takes

“Good lord!” — erulabs
“How is this sort of package cooled? … water cooling” — cogman10
“Blackwell 100+200 compression spin lock documentation” — rballpug
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.