Ask HN: Have top AI research institutions just given up on the idea of safety?

Readers say it’s all PR while Anthropic spars with the Pentagon over ‘don’t kill humans’

TLDR: Hacker News is debating whether AI labs have quietly shifted from "safety-first" to "PR-first" to win government deals. Most commenters see profit winning over principles, while a few point to Anthropic's Pentagon standoff and "don't kill humans" rules—raising big questions as AI spreads and funding pressures mount.

On Hacker News, a simple question lit the fuse: have big AI labs basically tapped out on “safety”? The crowd came in hot. One user called the whole thing a branding update, saying companies stopped bragging about guardrails to win government contracts, while the models still feel just as preachy to everyday users. Translation: the rules didn’t vanish, the ads did.

Then the cynics showed up with gasoline. The mood swung to “profit over people,” with posters insisting “safety” was always PR meant to dodge bad press and regulators. Another popular take: labs won’t sacrifice any competitive edge—even if they admit their systems could make life worse—because growth is the only metric that counts. The OP’s casino analogy got memed hard: “House funds the rehab, but keeps your tab open.”

Not everyone's doomposting, though. A calmer camp argued that labs haven't abandoned safety; they're just under massive pressure as AI goes mainstream and fundraising never stops. Cue the drama: Anthropic's reported pushback against the Pentagon became the receipt, with commenters quipping that "don't kill humans" is at least a line in the sand, if not the whole beach. Call it safety theater or PR rails—the comment section isn't buying kumbaya.

Key Points

  • The post questions whether leading AI research institutions are meaningfully prioritizing safety.
  • It notes that labs have safety teams and that staff appear serious about their work.
  • It raises the possibility that safety investments may be token or primarily for optics.
  • A casino–gambling-addiction analogy is used to highlight potential conflicts of interest.
  • The author requests insider perspectives to assess the true commitment to safety versus stated values.

Hottest takes

"It's a branding update, nothing more" — akersten
"Safety was never a genuine concern" — CivBase
"I don't think they've given up on the idea, but as AI becomes increasingly mainstream, the labs will be under immense pressure to hold the line" — nkohari
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.