May 3, 2026
Prompt and Circumstance
LLMs Are Not a Higher Level of Abstraction
Coders erupt as ‘AI is the next coding step’ gets roasted from every angle
TLDR: The post argues AI code tools aren’t a clean new step in programming because they can produce unpredictable extras, not just the thing you asked for. Commenters split hard between “that’s exactly the danger” and “you’re misunderstanding how these systems work,” with side helpings of sarcasm and career-change jokes.
A spicy blog post by Lelanthran just kicked off the kind of comment-section cage match the internet lives for. The author’s big claim is simple: large language models—the chatty AI tools writing code on command—are not the next clean step up from old-school coding. Why? Because older tools reliably turn the same input into the same result, while AI can give you what you wanted... plus surprise extras you absolutely did not ask for. The nightmare example? You request a harmless to-do app and accidentally get a side order of security disaster.
That set the comments on fire. One camp boiled it down to a bumper sticker: “LLMs are probabilistic, not deterministic.” In plain English: they’re guess-machines, not exact machines. Another crowd said, hold on, this whole argument is too neat—regular programming tools aren’t magically identical every time across every setup either, so the author is overselling the contrast. And then came the real drama: a blunt commenter declared that the AI super-fans simply don’t care if the machine is sloppy, because they’re happy to outsource the thinking.
But defenders of AI weren’t backing down. One shot back that the model itself can be consistent under the same conditions, accusing critics of mixing up built-in randomness with the system’s actual behavior. And the funniest drive-by of the thread? A commenter mocked the whole “levels of abstraction” debate by saying the true ladder ends with quitting tech entirely to open a gastropub or horse-shoeing service. Honestly, in this comments section, that may have been the least controversial take.
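That defender’s point—sampling randomness is a dial you turn, not an inherent property of the system—can be sketched with Python’s seeded RNG. Everything here is a toy stand-in (`VOCAB`, `sample_next_token`, and the weights are invented for illustration, not real decoding code):

```python
import random

# A hypothetical four-token vocabulary and a made-up next-token distribution.
VOCAB = ["return", "print", "import", "eval"]
weights = [0.5, 0.3, 0.15, 0.05]

def sample_next_token(weights, rng):
    """One decoding step: draw a token from a weighted distribution."""
    return rng.choices(VOCAB, weights=weights, k=1)[0]

# Greedy decoding (always take the highest-weight token) removes the
# randomness entirely: same input -> same token, every time.
greedy = VOCAB[max(range(len(weights)), key=weights.__getitem__)]
assert greedy == "return"

# Even actual sampling is reproducible if you fix the seed.
a = sample_next_token(weights, random.Random(42))
b = sample_next_token(weights, random.Random(42))
assert a == b
```

Whether that makes the overall system “deterministic” in any useful sense—given changing model versions, context windows, and serving infrastructure—is exactly what the thread was fighting about.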
Key Points
- The article argues that LLMs are incorrectly described as the next programming abstraction layer after languages such as assembly, C, and Python.
- The post says earlier abstraction layers can be modeled as a function where a given input produces a specific output artifact: f(x) -> y.
- The article claims LLMs instead produce probabilistic outputs, expressed as f(x) -> P(y), rather than a single determined artifact.
- The author expands this claim by arguing LLM output may include the requested result along with additional unintended artifacts that are not explicitly requested.
- A hypothetical TODO web app example is used to argue that tests may verify the desired output while failing to detect unsafe or unwanted side effects in generated code.
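The f(x) -> y versus f(x) -> P(y) contrast from the key points can be made concrete with a toy sketch. The functions `compile_like` and `llm_like` below are hypothetical stand-ins for illustration only—neither is real tooling:

```python
import random

def compile_like(source: str) -> str:
    """Deterministic tool, f(x) -> y: same input, same artifact."""
    return source.upper()  # stand-in for a compiler's fixed transformation

def llm_like(prompt: str, rng: random.Random) -> str:
    """Probabilistic tool, f(x) -> P(y): the artifact is sampled."""
    candidates = [
        prompt.upper(),                       # the artifact you asked for
        prompt.upper() + " + EXTRA_FEATURE",  # plus an unrequested extra
    ]
    return rng.choice(candidates)

# The deterministic tool always yields the same artifact:
assert compile_like("todo app") == compile_like("todo app")

# The probabilistic tool yields a draw from a distribution, so repeated
# runs can produce different artifacts:
outputs = {llm_like("todo app", random.Random(seed)) for seed in range(10)}
print(outputs)
```

This is also why the TODO-app example bites: a test asserting that the requested artifact is present passes for every member of `outputs`, including the one carrying the extra you never asked for.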