Sutskever and LeCun: Scaling LLMs Won't Yield More Useful Results

AI’s top voices say “bigger isn’t better” — the comments explode

TLDR: Ilya Sutskever and Yann LeCun say making chatbots bigger isn’t enough—new ideas are needed. Commenters split between “ship and let the market tune it” and “we’ve hit data and compute walls,” with extra drama over lumping LeCun with Ilya and worries that “back to research” scares investors.

Two AI heavyweights just tag-teamed the hot take of the year: Ilya Sutskever says the age of scaling is ending, while Yann LeCun argues chatbots aren’t the future at all. Cue instant comment chaos. One user fired the opening shot (“Why is Yann Lecun in same article as Ilya?”) and from there the thread split into camps. The market-maximalists shouted: keep shipping and let real users tune these models into shape, calling Sutskever’s new venture a science project for the lab, not the street. The limit-callers clapped back: we’re running out of clean data and cheap chips, so bigger models won’t magically get smarter.

There was serious finance drama too: skeptics warned that “go back to research” is a nightmare pitch to investors, even as these AI giants keep printing revenue. Others said the vibe has turned: “social contagion” is pushing a fast mood swing from “GPU go brr” to “new ideas or bust.” LeCun stans waved the flag for systems that understand the world, not just word salad, while pragmatists asked the money question: where does the next 1,000× of compute and data even come from? Meanwhile, memes flew about scraping the internet’s pantry bare and replacing more tokens with more theory. The only consensus? The plot just thickened.

Key Points

  • Ilya Sutskever says AI is shifting from an era of scaling to a renewed era of research.
  • He outlines a timeline: 2012–2020, research experimentation; 2020–2025, scaling driven by scaling laws; from 2025 onward, research again, now backed by large compute.
  • Scaling is reaching limits: high-quality data is finite, and simply making models bigger yields diminishing, unpredictable returns (a toy sketch of that diminishing-returns curve follows this list).
  • Sutskever highlights gaps between benchmark performance and real-world reliability, opacity of pre-training, and weak generalization as core issues.
  • Yann LeCun argues LLMs are not the future and advocates world models and architectures like JEPA; Sutskever’s SSI (Safe Superintelligence) is framed as betting on new training recipes.
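
To make the “diminishing returns” point concrete, here is a toy sketch assuming a Chinchilla-style power law for loss versus training compute. The constants E, a, and alpha below are hypothetical, picked only to show the shape of the curve; they are not from Sutskever, LeCun, or any published fit.

```python
# Toy sketch: diminishing returns under a power-law scaling curve.
# Assumes loss(C) = E + a * C**(-alpha), Chinchilla-style in form;
# E, a, alpha are hypothetical constants chosen for illustration.

E, a, alpha = 1.7, 8.0, 0.05  # irreducible loss, scale factor, exponent

def loss(compute_flops: float) -> float:
    """Predicted pre-training loss at a given compute budget (FLOPs)."""
    return E + a * compute_flops ** -alpha

prev = None
for flops in [1e21, 1e22, 1e23, 1e24, 1e25]:
    cur = loss(flops)
    note = "" if prev is None else f"  (gain from 10x compute: {prev - cur:.3f})"
    print(f"{flops:.0e} FLOPs -> loss {cur:.3f}{note}")
    prev = cur
```

Under any curve of this shape, each 10× of compute buys a smaller absolute drop in loss than the last, which is the intuition behind both the “we’ve hit walls” camp and gdiamos’s “next 1000x flops” question below.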

Hottest takes

“Why is Yann Lecun in same article as Ilya?” — junkaccount
“deploying it in public and iterating over it until optimal convergence dictated by users in a free market” — rishabhaiover
“Where does the next 1000x flops come from?” — gdiamos
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.