Because It Doesn't Have To

Why the internet and AI win by being allowed to mess up

TLDR: The post argues that the internet and AI are powerful partly because they’re allowed to fail, retry, and guess instead of being perfect. Commenters split fast: some called it a deep truth about learning and growth, while others said the AI comparison was flimsy — with one savage joke stealing the show.

A surprisingly spicy idea lit up the comments on Because It Doesn't Have To: maybe the reason the internet works so well is that it was never built to be perfect in the first place — and maybe modern artificial intelligence works the same way. The post argues that both systems get stronger by being allowed to fail, retry, and make educated guesses instead of needing every answer to be flawless on the first shot. In plain English: they’re useful because they’re not perfectionists.

But the community split sharply on whether that comparison is brilliant or a stretch. One camp loved the big-picture vibe, saying progress often comes from room to experiment, stumble, and improve. One commenter compared it to kids learning through play, doctors practicing before the real thing, and entrepreneurs surviving messy early mistakes. Another went even broader, saying the best parts of the internet and AI come from what grows naturally instead of being handcrafted — and added the deliciously dramatic jab: “People hate that for some reason.”

The skeptics, though, were not buying the analogy without a fight. One popular reaction basically said, sure, the networking part is solid, but the AI comparison feels like a hand-wave dressed as philosophy. And then came the joke that probably won the thread: someone quipped that this theory also explains certain coworkers — they work so well because they don’t have to. Brutal. Smart. Extremely comment-section behavior.

Key Points

  • The article argues that the internet performs well partly because the IP layer and lower layers do not guarantee delivery.
  • The article says this lack of guaranteed success reduces protocol complexity while still enabling powerful behavior.
  • TCP is presented as a protocol that pursues delivery through retransmissions, while still being able to report failure rather than guarantee success.
  • The article extends this framework to machine learning, arguing that models work well because they are allowed to express uncertainty rather than forced certainty.
  • The article cites softmax-based probability distributions in neural networks as a mechanism that lets models handle complex problems by assigning nonzero probability to multiple outputs.
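The TCP point above boils down to a simple pattern: try, retry a bounded number of times, and surface failure to the caller instead of promising success. Here is a minimal sketch of that idea in Python — `send_with_retries`, `max_attempts`, and the flaky sender are illustrative names invented for this example, not anything from the article or from a real TCP stack:

```python
import random

def send_with_retries(send, max_attempts=3):
    """Best-effort delivery in the spirit the article describes:
    retry a few times, then report failure instead of guaranteeing
    success. `send` is any callable returning True on success.
    This is an illustrative sketch, not a transport implementation.
    """
    for _attempt in range(max_attempts):
        if send():
            return True   # delivered
    return False          # give up and let the caller know

# Simulated lossy link: each attempt succeeds half the time.
random.seed(0)
flaky = lambda: random.random() < 0.5
print(send_with_retries(flaky))
```

The key design choice mirrors the article's thesis: because failure is a legal, reportable outcome, the function stays tiny — no heroics are needed to make success unconditional.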
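The softmax point can likewise be made concrete. The sketch below (plain-stdlib Python, with made-up logit values for three hypothetical candidate outputs) shows how softmax turns raw scores into a probability distribution in which every output keeps a nonzero probability — the "allowed to express uncertainty" mechanism the article cites:

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution.

    Subtracting the max before exponentiating is a standard
    numerical-stability trick; it does not change the result.
    """
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate outputs.
probs = softmax([2.0, 1.0, 0.1])
print(probs)
# Note: even the lowest-scoring output gets probability > 0,
# so the model hedges rather than committing to one answer.
```

Because the exponential never reaches zero, no candidate is ever ruled out entirely — which is exactly why a softmax head can "assign nonzero probability to multiple outputs" as the key point states.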

Hottest takes

"the ML perspective is such a loose analogy" — chermi
"They work so well because they don't have to" — booleandilemma
"People hate that for some reason" — simianwords
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.