Artificial Computation

Is there anything computers can’t do? Commenters slam “artificial” hype

TLDR: A thinker proposes “artificial computation,” a philosophical offshoot that rejects the normal rules of computing. Commenters fire back with the halting problem and call it “compu-fiction,” sparking a split between theory fans and pragmatists. It matters because it challenges what “computing” means, and the crowd wants proof, not poetry.

A high-brow essay claims we still don’t know what computers can’t do, then drops a philosopher’s bomb: “artificial computation,” a kind of thinking that supposedly steps outside normal computing rules. The comments? Absolute chaos. One reader immediately invoked the famous halting problem, arguing we do know things computers can’t decide, like whether an arbitrary program will ever stop. Confusion turned to heckling as folks asked, “What even is this?” and demanded plain-English examples instead of cosmic vibes.
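For readers who want the halting-problem objection in something a laptop can actually run: the classic diagonal argument says that for *any* claimed halting oracle, you can build a program the oracle gets wrong. This is a minimal sketch, not from the thread; `make_paradox` and `always_no` are illustrative names.

```python
def make_paradox(halts):
    """Given any claimed halting oracle halts(f) -> bool,
    build a zero-argument program that the oracle misjudges."""
    def paradox():
        if halts(paradox):
            while True:   # oracle said we halt, so loop forever
                pass
        # oracle said we loop, so halt immediately
    return paradox

# A concrete (wrong) oracle that claims nothing halts:
always_no = lambda f: False
p = make_paradox(always_no)
p()  # returns immediately, so the oracle was wrong about p
```

The symmetric case is the one you can’t run: an oracle that answers `True` for its own paradox program sends it into the infinite loop, again contradicting the answer. Either way the oracle fails, which is why no general `halts` can exist.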

Fans of theory tried to calm the crowd, saying it’s like discovering an algorithm (think Shor’s algorithm) before the hardware exists—interesting, even if unbuildable today. Skeptics clapped back: if “artificial computation” neuters commands and divorces ideas from actions, isn’t that just… not computing? Meanwhile, memes flew: “Schrödinger’s Laptop” (both computing and not computing until you open it), “Non-Deterministic Coffee Machine” (press button, get philosophy), and “compu-fiction” became the dunk of the day. Some defended the author as channeling Alan Turing with a fresh spin; others saw word salad served with pretension fries. Verdict from the crowd: fascinating provocation, but please translate it to something a laptop—or a human—can run.

Key Points

  • The article reframes Alan Turing’s definition of computation as the “Principle of Sufficient Computation.”
  • It outlines mainstream computing’s traits: executable commands, idea–action linkage, practical omniscience, and mimesis-based judgment.
  • It identifies varieties of computation alongside mainstream forms: digital, analog, dialectical, and artificial computation.
  • Artificial computation, credited to François Laruelle, is said to have been discovered even though artificial computers do not yet exist, much as Shor’s algorithm was discovered before hardware could run it.
  • Artificial computation is defined by axioms that withdraw from execution and idea–action linkage, assert radically finite knowledge, and adopt a non-Aristotelian technology of immanence.

Hottest takes

“We do not yet know what a computer can’t do… what is this author talking about?” — kazinator
“It can’t even tell if another program stops—basic stuff” — kazinator
“‘Computer fiction’ sounds like philosophy fanfic with math vibes” — anon404
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.