April 18, 2026

Billion-dollar beef, extra spicy

Two $20B Moves: OpenAI and Nvidia in a 'Reasoning Battle'

Two $20B moves: OpenAI goes on a chip spree, Nvidia buys a rival—fans fight over who makes AI faster

TLDR: Nvidia bought one chipmaker while OpenAI committed $20B to another, fueling a showdown over who can make AI answer faster. Commenters split between hype (finally, cheaper, faster chatbots) and eye‑rolls at corporate theater, with memes about “two $20Bs” and worries about Nvidia’s grip on the market.

Twin $20B shocks just dropped and the internet acted like it was watching a tech soap: Nvidia quietly bought chip upstart Groq, while OpenAI said it'll buy over $20B of chips from Cerebras (which promptly filed for a $35B IPO). Translation for non-nerds: one side bought the store, the other emptied the shelves. Cue drama.

Comment sections lit up with “Team Nvidia vs. Team OpenAI” energy. Fans claim this is a fight over inference—the part where AI answers you in real time—versus training, which is teaching the AI in the first place. Simple version: training is a one‑time boot camp; inference is the never‑ending customer service line. As that line gets longer, the money shifts to chips that answer faster, not just learn harder.

Some called it power chess to weaken Nvidia’s grip; others said it’s just marketing pyrotechnics and the same old chips-in-a-trenchcoat. The snark was strong: BOGO memes (“Buy One $20B, Get One $20B”), and jokes about “HBM” (fancy memory) standing for “Hot Bottleneck Moments.” One top‑liked groan set the tone—“LLM-generated slop”—as readers begged for an ELI5 (“explain like I’m five”) version. Meanwhile, optimists cheered a future where faster replies mean cheaper, better AI for everyone. Pessimists? They’re betting this is just a very expensive game of musical chairs where you still wait for your chatbot to load.

Key Points

  • Nvidia acquired AI chip company Groq for $20 billion in December 2025.
  • On April 17, 2026, OpenAI announced plans to purchase over $20 billion in chips from Cerebras; Cerebras filed for a NASDAQ IPO the same day targeting a $35 billion valuation.
  • Market research (Deloitte, CES 2026) indicates inference reached ~50% of AI compute spending in 2025 and is expected to reach two-thirds in 2026.
  • Lenovo CEO Yang Yuanqing said at CES that spending will flip from 80% training/20% inference to 20% training/80% inference.
  • The article argues Nvidia’s training-optimized GPUs (H100/H200) face inference bottlenecks due to memory bandwidth and HBM latency, as observed in OpenAI’s Codex optimization efforts.
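For the curious, the memory-bandwidth bottleneck in the last point can be sketched with back-of-envelope math. A minimal illustration (the model size, precision, and bandwidth figures below are hypothetical examples, not numbers from the article): during autoregressive decoding, each generated token requires streaming the model's weights from memory, so single-stream decode speed is roughly capped by memory bandwidth divided by model size.

```python
# Back-of-envelope sketch: why inference speed tends to be
# memory-bandwidth-bound rather than compute-bound.
# All numbers below are illustrative assumptions, not article data.

def max_tokens_per_sec(params_billions: float,
                       bytes_per_param: float,
                       bandwidth_gb_s: float) -> float:
    """Rough upper bound on decode speed for one sequence (batch size 1):
    every token read requires streaming all weights once from memory."""
    model_size_gb = params_billions * bytes_per_param  # weights in GB
    return bandwidth_gb_s / model_size_gb

# Hypothetical 70B-parameter model in fp16 (2 bytes per parameter)
# on a GPU with ~3,350 GB/s of HBM bandwidth:
print(round(max_tokens_per_sec(70, 2, 3350), 1))  # ~23.9 tokens/sec
```

Under these assumed numbers, halving the bytes per parameter (say, via 8-bit quantization) roughly doubles the ceiling, which is why inference-focused chips chase bandwidth and on-chip memory rather than raw FLOPs.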

Hottest takes

"LLM-generated slop. Could be written as two paragraphs" — jaen
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.