March 31, 2026
404s and serfs: AI feud goes medieval
Closed Source AI = Neofeudalism
Vanishing post, ‘AI slop’ apology, and the ‘you’re a serf’ jab spark a feud
TLDR: A viral post slammed closed AI as modern feudalism, vanished into a 404, then returned with an "AI slop" mea culpa. Commenters split between hailing open AI as freedom and warning of bias and safety risks, all while memeing the vanishing act. The stakes: whoever controls smarter-than-us tech shapes everyone's future.
The internet did what it does best: turned a fiery anti–closed AI rant into a spectacle. The author’s post—arguing that locking AI behind corporate walls is “neofeudalism” and calling buyers “serfs”—briefly vanished into a 404, then reappeared with a self-own: the first draft was “AI slop,” now rewritten “slop-free.” Cue chaos. One user deadpanned, “Internet never forgets 404s,” while another wondered if this was a George Hotz moment, linking directly to geohot. By the time someone yelled “It’s up,” the drama had already hatched its memes.
The strongest mood? Rebellion vs. realism. The pro-open crowd cheered the “no kings, no gatekeepers” message: if intelligence is power, no one company should own it. The fear crowd fired back: open models can be biased or manipulated, with one commenter claiming some open systems “recite CCP propaganda.” Others name-checked safer hopes like Mistral and “gpt-oss,” but worried they’re being outpaced. Meanwhile, the “you’re a serf if you buy an API” line sent the thread into orbit—half laughing, half seething.
In plain speak: the post asks whether a few secretive labs should control tomorrow’s super-smart tools. The community split instantly—between freedom-for-all and careful-custodian camps—while everyone bonded over the 404 speedrun and the unforgettable phrase “AI slop.” It’s not just a tech debate; it’s a vibe war over who gets to hold the future.
Key Points
- The author replaced an earlier AI-generated version of the post with a rewritten, non-AI version.
- The article argues that closed-source AI controlled by a few labs concentrates compute, talent, deployment power, and political legitimacy.
- It argues the real AI safety question is whether safe AI is possible at all, not whether a small group should control access in safety's name.
- Eliezer Yudkowsky’s position (“If Anyone Builds It, Everyone Dies”) is used to frame the risk extremes for advanced AI.
- The piece contends that if AI can be safe, access should be broad; open-source AI is framed as anti-feudal, and closed API models are criticized for centralizing power.