May 14, 2026

Aligned? More like steamrolled

You Don’t Align an AI, You Align with It

AI’s biggest fight isn’t man vs machine — it’s who gets a say before jobs vanish

TL;DR: The article argues AI is being shaped by elites who won’t bear the biggest costs, while ordinary people are treated like test subjects instead of participants. In the comments, readers split between fear, sarcasm, and dark humor, with many saying the real story is jobs, power, and who gets ignored.

This essay basically drops a giant accusation into the AI debate: the people deciding what artificial intelligence should do are not the same people whose jobs, lives, and choices will be shaken up by it. The writer blasts both extremes: the panic camp talking about shutting down data centers and even flirting with military strikes, and the cheerleader camp telling everyone to embrace disruption or be branded bitter, backward, or just plain broken. The mood is clear: regular people aren’t being consulted, they’re being managed.

And the comments? Oh, they went from thoughtful to chaotic in record time. One reader simply gushed, “Love the writing style,” while others turned the piece into a wider roast of the whole AI gospel. One commenter compared today’s AI promises to modern-day prophecy, arguing that slogans like “everyone will have an AI assistant” aren’t predictions so much as sales pitches shaping the future on purpose, complete with a link to a related essay. Another cut straight to the labor nightmare, invoking Foxconn and robots, with the brutally bleak line that for corporations, workers are just a headache to be reduced. Then came the classic internet spiral: a side argument over transhumanists, Christianity, and Luciferians, because no online discussion is complete without someone steering into the theological ditch. And the funniest gut-punch? A commenter warned that since AI makes stuff up, trusting it with spreadsheet math should be “an arrestable offense.” In other words: the crowd isn’t just worried about control. They’re worried the people in charge are smug, reckless, and maybe a little too eager to let the bots run the office.

Key Points

  • The article says AI alignment debates are being led mainly by labs, researchers, and policy actors rather than by people most affected by AI systems.
  • It cites Eliezer Yudkowsky’s TIME essay as an example of the AI safety camp advocating extreme measures to halt advanced AI training.
  • It cites Marc Andreessen’s Techno-Optimist Manifesto as an example of the accelerationist camp dismissing opponents as driven by ressentiment.
  • The article argues that both safety and accelerationist camps share an assumption that they are entitled to design AI systems for others.
  • It uses Anthropic’s April 2026 Alignment Science blog as an example of alignment being implemented through internal evaluation procedures based on model-generated data and hired raters.

Hottest takes

  • “the power of prophecy lies not in accurately predicting the future, but in shaping it” — jackbravo
  • “To corporations, employees are a headache” — Animats
  • “Trusting language models to fill spreadsheet cells ought to be an arrestable offense” — economistbob
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.