November 25, 2025
Scale bros vs Research dads
Ilya Sutskever: We're moving from the age of scaling to the age of research
No more 'GPU go brrr' — brains over budgets. But where are the profits?
TLDR: Ilya Sutskever says the era of buying more computers is ending and the hard research era is here. Commenters cheered and jeered: some say scaling hit a wall and profits lag, others want receipts on revenue and moats, while a few meme that another AI winter might be coming.
Ilya Sutskever told host Dwarkesh Patel we’re leaving the “age of scaling” (just buying more computers) and entering an “age of research.” The crowd heard it as a vibe shift: no more magic from endless GPUs, now it’s brains and fresh ideas. andy_ppp asked if this means scaling is less effective; scotty79 translated it as “the free lunch is over” and predicted yet another AI winter. Others snarked that Sutskever dodged the money talk — gizmodo59 noted how easily leaders “secure billions” while hand‑waving revenue and moats.
The episode’s nerdy bits (alignment, generalization, multi‑agent “self‑play”) were overshadowed by gossip: how did Dwarkesh pull A‑list guests without prior fame? SilverElfin demanded the growth playbook, while oytis joked that “ages keep flying by,” memeing the industry’s constant rebrands. Viewers also side‑eyed the sponsor parade — Gemini 3 brag reels, a Labelbox transcription flex, and Sardine fighting fraud — calling it the age of ads.
Supporters cheered the “research era” as the path to real breakthroughs and safer superintelligence; skeptics asked, “Where’s the economic impact?” after years of “smart” demos with thin payoffs. The debate boils down to this: are we leveling up, or admitting scaling hit a wall? Watch the chat on YouTube.
Key Points
- The interview frames a shift from the age of scaling to the age of research in AI, as discussed by Ilya Sutskever.
- Sutskever notes a mismatch between strong eval results and the limited, uneven real-world economic impact of current models.
- SSI’s approach includes learning from deployment, a focus on alignment, and methods like self-play and multi-agent training.
- Topics include model jaggedness, emotions/value functions, what is actually being scaled, and human vs. model generalization.
- The post provides watch/listen links and sponsor mentions for Gemini 3, Labelbox, and Sardine.