April 13, 2026

Doom, drama, and front-page fury

The Future of Everything Is Lies, I Guess: Safety

AI “safety” slammed as a myth — plus UK blocks and front‑page fury

TLDR: The piece claims “AI safety” won’t hold because the same work that makes models helpful also makes harmful ones easier to build. Commenters split among doom (we’re edging toward something scary), pragmatism (hard limits over band‑aids), and meta‑drama about UK blocks and lightning‑fast front‑page boosts, underscoring how high‑stakes and messy this debate already is.

Kyle Kingsbury’s latest polemic says the quiet part loud: trying to make chatbots “friendly” just teaches bad actors how to make them unfriendly. He argues the secrets are out, the hardware’s coming cheap, and aligned models still leak harmful tricks, so “safety” is a vibe, not a plan. The comments? Absolute chaos. One UK reader hit a block screen (“Unavailable Due to the UK Online Safety Act”) and asked what on earth was happening, adding a fresh layer of real‑world censorship to an already spicy debate. Another commenter went full doomsday, warning we’re “inching closer… to building HM,” an ominous, acronym‑laden vibe check that had readers side‑eyeing the abyss. The sharpest pushback came from the pragmatic camp: as one put it, alignment is an arms race of paid human training, and the only real defense might be building models that can’t do certain things at all, not just slapping patches on every leak. Then came the meta‑meltdown: “why does this hit the front page in 4 minutes?” cried one user, hinting at algorithmic favoritism. Veterans dropped receipts, linking back to earlier chapters of this saga in thread 1 and thread 2. Verdict: doomers, skeptics, and conspiracists are all eating today.

Key Points

  • The article argues that LLMs increase both psychological and physical safety risks by enabling scalable attacks, fraud, and harassment.
  • It claims alignment is fragile: model “friendliness” depends on expensive, optional training and oversight that bad actors can skip.
  • Three proposed moats (limited hardware access, secrecy of math/software, and scarce datasets) are described as eroding rapidly.
  • The post cites industry and state activity (cloud training clusters, published math, staff mobility, data exfiltration, scraping) as drivers of erosion.
  • It concludes that even “friendly” LLMs pose security risks, burden moderators, and coincide with the emergence and growth of semi-autonomous weapons.

Hottest takes

“‘Unavailable Due to the UK Online Safety Act’” — Cynddl
“Every one of these posts is immediately pushed to the front page” — jazzpush2
“Alignment feels like an arms race… the real moat might be making systems that are fundamentally limited” — ibrahimhossain
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.