A trillion dollars (potentially) wasted on gen-AI

Ilya hits the brakes; the internet fights over whether AI money just went poof

TLDR: Ilya Sutskever says making models bigger is hitting limits and calls for new methods. Commenters split: some yell “scale harder,” others say the spending isn’t wasted even if human‑level AI never arrives, and memes mock billionaire cash burns and investors who collect fees either way. Suddenly the fight over AI money feels very real.

Tech internet spilled its tea when Ilya Sutskever, deep‑learning legend and OpenAI co‑founder, said the “just add more chips and data” playbook is flattening out. He even flirted with new hybrid techniques and admitted today’s chatbots generalize worse than people do. Translation: bigger isn’t automatically better, and the hype train may need new tracks.

Comments exploded. One camp, led by naveen99, insists scaling is king and tells Ilya to “enjoy his billions.” Another camp says the money isn’t wasted even if “AGI” (sci‑fi human‑level smarts) never shows up: ComplexSystems argues spending on useful tools is fine either way. roenxi adds that nobody agrees on what “AGI” even means, while the “LLM is over” crowd cheers Sutskever’s caution.

Then came the drama: bbor accuses the write‑up of twisting cautious optimism into “LLMs sucked,” mensetmanusman drops class‑war snark (“the 0.01%” burning cash), and memes fly — popcorn gifs, “GPUs go brrr,” and “AGI = Almost Getting Investors.” Investors get side‑eyed for loving scale because fees flow either way. Old warnings resurface too, from researchers who’ve long pushed neurosymbolic mashups and built‑in constraints over brute force. For some, Sutskever’s pivot feels like a plot twist; for others, it’s the “told you so” of the decade.

Key Points

  • Ilya Sutskever says scaling LLMs with more compute and data is showing diminishing returns and that new techniques are needed.
  • Sutskever argues LLMs generalize significantly worse than humans and supports exploring neurosymbolic methods and innate inductive constraints.
  • The article claims these points align with prior critiques predicting scaling limits and persistent issues like hallucinations and reasoning failures.
  • Supporting references include work by Kambhampati, Bender, an Apple reasoning paper, and a study questioning chain-of-thought reasoning in LLMs.
  • The piece asserts venture capital incentives favor continued investment in scaling despite potential limits, citing Phil Libin’s perspective.

Hottest takes

"pretty much the only thing that matters is scaling." — naveen99
"I can’t figure out what people mean when they say “AGI” any more" — roenxi
"I’m glad the 0.01% have something to burn their money on." — mensetmanusman

Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.