The Bitter Lesson Has No Utility Function

HN explodes: Fast AI, no GPS — and everyone’s fighting

TLDR: The author says AI’s “just add compute” mindset skips the crucial step of choosing clear goals and deciding when extra computation is worth it. Commenters split between “compute wins anyway” and “get a compass,” with bonus drama over the author co‑writing with an AI to make the point about humans setting objectives.

Hacker News lit up after an essay argued that AI’s “Bitter Lesson” — the idea that more compute and general methods win — forgot one tiny thing: a goal. The author says decision theory (think: math for choosing what to do when time and money are limited) isn’t competing with image recognition or chatbots; it’s the navigation system that tells your fast car where to go. Cue chaos.

One camp roared back that the Bitter Lesson still predicts winners. As one commenter put it, LLMs — the chatbots everyone’s funding — will dominate while Bayesian decision tools remain niche until the hype plateaus. Others clapped back that without a clear objective, “more compute” risks becoming “more expensive noise,” pointing to real-world questions like “Is the next API call worth it?”

The thread’s spicy subplot? The author admitted co‑writing with Claude, quipping that the human brought the utility function (the goal) and the machine brought the compute. Half the crowd loved the symbolism; the other half demanded clearer writing and rolled their eyes. A veteran chimed in that old-school symbolic AI struggled with “reasoning with uncertainty,” while many readers seemed to confuse that with decision theory — exactly the author’s point about lost institutional memory. Jokes flew about fast cars with no GPS, “MOAR compute” not buying a compass, and whether benchmarks beat bank accounts. Drama level: premium.

Key Points

  • The essay argues decision theory addresses action under uncertainty and resource allocation, not perception tasks, and thus does not compete with deep learning.
  • A Bayesian agent reportedly outperformed a LangChain agent by weighing whether each additional tool query justified its cost, illustrating the expected value of information.
  • The author critiques a binary framing inspired by the Bitter Lesson (hand-crafted knowledge vs. general methods plus compute) as a category error for decision theory.
  • The Bitter Lesson is said to lack a utility function, guidance on allocating finite compute, and explicit treatment of values/objectives.
  • The piece warns that failing to teach and apply decision-theoretic methods erodes institutional memory, forcing each generation to rediscover Bayesian and operations-research (OR) ideas from scratch.
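The "is the next API call worth it?" question has a textbook decision-theoretic form. As a minimal sketch (not the essay's actual code; the function names, payoffs, and the binary-hypothesis setup are illustrative assumptions), an agent can compare the expected value of information (EVI) from one more noisy tool query against that query's cost:

```python
# Hypothetical sketch of "is the next API call worth it?" using the
# expected value of information. All names and numbers are illustrative.

def expected_value_of_information(p_h, payoff_right, payoff_wrong, accuracy):
    """EVI of one noisy binary query about hypothesis H.

    p_h          -- prior probability that H is true
    payoff_right -- utility of acting on the correct hypothesis
    payoff_wrong -- utility of acting on the wrong one
    accuracy     -- P(query says "H" | H) = P(query says "not H" | not H)
    """
    def act_value(p):
        # Act on whichever hypothesis is more probable under belief p.
        p_best = max(p, 1 - p)
        return p_best * payoff_right + (1 - p_best) * payoff_wrong

    # Expected utility of acting right now, on the prior alone.
    v_now = act_value(p_h)

    # Marginal probability the query answers "H".
    p_yes = accuracy * p_h + (1 - accuracy) * (1 - p_h)
    # Posterior beliefs after each possible answer (Bayes' rule).
    post_yes = accuracy * p_h / p_yes
    post_no = (1 - accuracy) * p_h / (1 - p_yes)

    # Expected utility of acting after seeing the answer.
    v_after = p_yes * act_value(post_yes) + (1 - p_yes) * act_value(post_no)
    return v_after - v_now

def worth_querying(p_h, payoff_right, payoff_wrong, accuracy, query_cost):
    """True when one more tool call is expected to pay for itself."""
    evi = expected_value_of_information(p_h, payoff_right, payoff_wrong, accuracy)
    return evi > query_cost
```

Under a 50/50 prior a fairly accurate query is worth a modest fee, but when the agent is already near-certain the same query buys almost nothing; that asymmetry is the whole point the essay's critics of "more compute" were making.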

Hottest takes

"The bitter lesson has no utility function, but it has a predicting power." — ordu
"the human brought the utility function, the machine brought the compute." — archermarks
"the worst problem it had was 'reasoning with uncertainty'" — PaulHoule
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.