April 20, 2026
Comments in retrograde
Epicycles All the Way Down
AI adds “epicycles” while commenters go orbital: outdated, overhyped, or actually winning
TL;DR: The essay argues today’s AI stacks clever patches (“epicycles”), so it’s pattern-matching rather than deep understanding. Commenters clash: some mock the take as outdated, others tout reports of LLMs cracking hard math, pragmatists push simple guardrails, and historians dispute the analogy. Proof the AI fight isn’t over.
An essay likens today’s chatbots to ancient astronomers stacking epicycles: lots of clever patches, not true understanding. The author’s vibe: LLMs (chatbots) feel smart but mostly match patterns, so their failures look like market flash crashes, not robot uprisings. Cue the comments section going supernova.
One camp calls it stale. “Needs a ‘[November 2025]’ title,” snarks throwaway210426, implying the take is already old news. The hype crew fires back: user ogogmad claims LLMs just produced multiple solutions to an Erdős problem—the kind of math puzzle humans chased for years—arguing the “LLMs can’t reason” narrative is cracked. Meanwhile, the pragmatists show up with duct tape: OutOfHere says the scary “nonsense physics formula” example is fixable—“just ban random decimals and enforce elegant equations.” Translation: better guardrails, fewer faceplants.
Then the history nerds crash the party. edo_cat insists the epicycle analogy is misused—“Copernicus added epicycles,” not the medievals—and the thread turns into a mini history bee. Memes fly: “epicycles-on-epicycles DLC,” “Skynet? More like Stocknet,” and that time‑traveler “November 2025” jab. The mood? A three‑way cage match: doomer essays vs. AI victory laps vs. pedants with receipts. Bring popcorn and a protractor.
Key Points
- The author’s outcome-only strategy in poker underperformed compared to combining heuristics with explicit calculations.
- The essay argues LLMs often function as overfit pattern-matchers despite appearing to understand.
- It likens incremental LLM fixes to adding “epicycles,” improving performance without changing core generative mechanisms.
- Training must constrain hypothesis space and rely on inductive biases to select true generative processes among many possible generators.
- Citing Gold’s theorem, the essay claims positive-only examples can hinder identifying true programs; LLMs may fit data without capturing intended generative principles.
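The “overfit pattern-matcher” point has a classic toy illustration (a sketch of the idea, not an example from the essay): a degree-4 polynomial reproduces the first five powers of two exactly, yet predicts 31 rather than 32 for the next term, because fitting every observation is not the same as identifying the generative process.

```python
import numpy as np

# True generative process: powers of two.
n = np.arange(5)             # observed inputs: 0..4
y = 2.0 ** n                 # observed outputs: 1, 2, 4, 8, 16

# "Epicycle" model: a degree-4 polynomial has one coefficient per
# data point, so it can thread every observation exactly.
fit = np.poly1d(np.polyfit(n, y, deg=4))

# Zero error on everything it has seen...
train_error = max(abs(fit(k) - 2.0 ** k) for k in range(5))

# ...but the unique degree-4 interpolant of 1, 2, 4, 8, 16 is
# sum_k C(n, k) for k <= 4, which gives 31 at n=5, not 32.
next_pred = fit(5)

print(f"max training error: {train_error:.6f}")  # ~0: fits all observed data
print(f"prediction at n=5:  {next_pred:.1f}")    # 31.0, while the truth is 32
```

The model is not wrong about any data it saw; it is wrong about the process that generated the data, which is the essay’s distinction between patching a fit and understanding.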