LLMs could be, but shouldn't be, compilers

Coders split: “AI as compiler?” Fans dream, skeptics scream

TLDR: A think piece argues AI chatbots shouldn't replace compilers: messy human instructions and unpredictable outputs make them hard to trust. The comments erupt: skeptics say "not deterministic, not a compiler," others pitch "anything-to-anything" workflows, and pragmatists remind everyone that AI has limits. The fight shapes how much power we hand these tools in future coding.

Today’s hot thread asks: should AI chatbots be the thing that turns ideas into working apps? The original post argues no—even a “perfect” model isn’t a real compiler, because describing what you want in plain English is messy and we humans are, well, lazy. That lit the fuse.

The loudest chorus: LLMs are unpredictable, so stop calling them compilers. One commenter barked that if outputs change every run, you can’t trust them with the keys to your app. Jokes flew: “Press F5 to compile? More like press F5 to pray,” and “Would you let a slot machine build your airplane?” Others mocked the “English is the new programming language” dream with lines like, “Cool, now argue with your compiler about tone.”
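The determinism complaint fits in a few lines of Python. This is a toy sketch, not anyone's real pipeline: `toy_compiler` (a hash, standing in for a real compiler) always maps the same source to the same output, while `toy_llm` (invented here, standing in for sampled generation) can answer differently every run.

```python
import hashlib
import random

def toy_compiler(source: str) -> str:
    """Deterministic: the same source always yields the same 'binary'."""
    return hashlib.sha256(source.encode()).hexdigest()

def toy_llm(prompt: str) -> str:
    """Nondeterministic: sampling means repeated runs can disagree."""
    variants = [prompt.upper(), prompt.lower(), prompt.title()]
    return random.choice(variants)

source = "Print Hello"
assert toy_compiler(source) == toy_compiler(source)   # always holds
outputs = {toy_llm(source) for _ in range(100)}       # almost surely > 1 variant
```

The compiler property is exactly what you build trust on: rerun the build, get the same artifact. Sampling gives you a distribution instead, which is the commenters' whole objection.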

But a crafty camp reframed it: don’t think “English-to-code,” think “anything-to-anything.” One fan says AI shines when you add stepping stones—sketch → pseudo-code → code—treating the model like a flexible translator, not a magic compiler. Meanwhile, a sober realist reminded everyone that models aren’t infinite geniuses; they have limits, just like humans and tools.
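The stepping-stone idea is just staged translation with human checkpoints. A minimal sketch, with invented stub stages where a real workflow would call a model at each hop:

```python
# Hypothetical stub stages: a real pipeline would ask a model at each hop.
def sketch_to_pseudocode(sketch: str) -> str:
    return f"FOR each part IN ({sketch}): EXPAND it"

def pseudocode_to_code(pseudo: str) -> str:
    return "for part in parts:\n    expand(part)"

def run_pipeline(start: str, stages):
    """Run each translation stage, keeping every intermediate for review."""
    steps = [start]
    for stage in stages:
        steps.append(stage(steps[-1]))
    return steps  # a human can inspect and correct each stepping stone

steps = run_pipeline("a sketch", [sketch_to_pseudocode, pseudocode_to_code])
```

The point of keeping every intermediate is the point of the reframe: each stepping stone is a place to catch drift before it compounds, which a one-shot "English in, app out" compiler fantasy never gives you.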

Thread MVP for relatable chaos: the dev who ditched ultra-strict Haskell for friendly Python, because thinking while coding beats perfect blueprints. The room split between speed demons and spec nerds, and the memes wrote themselves.

Key Points

  • The article questions whether LLMs could function as compilers and argues they should not.
  • LLMs’ hallucinations and lack of deterministic guarantees undermine their use as reliable abstraction layers.
  • Even granting a non-hallucinating LLM, the author argues that specifying systems is inherently hard and demands precise control trade-offs.
  • Higher-level languages reduce mental complexity by providing constructs absent at the instruction-set level and compiling them away.
  • Useful abstractions require giving up some control with predictable guarantees; without this, an abstraction layer is not effective.
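The last two points are easy to see in Python itself. A list comprehension is a construct with no single instruction-set analogue; the compiler lowers it into loads, jumps, and iterator calls, and in exchange for giving up control over that machinery you get a predictable guarantee about the result (the trade the article says LLMs don't offer):

```python
import dis

def squares(n):
    # High-level construct: the comprehension is "compiled away"
    # into bytecode-level loops and jumps.
    return [i * i for i in range(n)]

print(squares(4))   # predictable guarantee: always [0, 1, 4, 9]
dis.dis(squares)    # peek at what the high-level syntax compiled into
```

Running `dis.dis` shows the iterator setup and jump instructions the comprehension hides, which is exactly what "reducing mental complexity" means here.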

Hottest takes

"LLMs are not deterministic, so they are not compilers" — codingdave
"anything-to-anything compiler" — mvr123456
"They are and will stubbornly persist in being finite" — jerf
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.