February 12, 2026

Immortality or It’s-Over-tality?

New Nick Bostrom Paper: Optimal Timing for Superintelligence [pdf]

Bostrom says ‘race to super-smart AI, then pause’ — comments split between immortality dreams and extinction fears

TLDR: Bostrom argues we should quickly build superintelligent AI, then pause, because the health and longevity payoff could outweigh big risks. Commenters erupt over a “97% death vs 3% immortality” vibe, with dreamers cheering and skeptics blasting the assumptions—making this a high-stakes ethics brawl.

Nick Bostrom just dropped a working paper arguing that superintelligent AI is less Russian roulette and more risky surgery: you might die on the table, but doing nothing means you will die anyway. He even suggests the optimal plan might be to sprint to build it, then tap the brakes briefly, because the upside could be huge: curing all diseases, stopping aging, and extending life like crazy. It’s a direct rebuttal to the doomers behind “If Anyone Builds It, Everyone Dies.” Read the paper here.

The comments went full showdown. One camp calls it cosmic casino math, with ed summarizing the vibe as a wild trade: a “97% chance we all die” for a “3% shot at 1,400-year lifespans.” Meanwhile, timfsu says it’s not obvious AGI brings either extinction or immortality, but still prefers Bostrom’s framing to pure doomerism. Critics pile on: jibal blasts Bostrom for “logical fallacies,” while neom argues the paper dodges the crucial question of what the AI can actually do. rf15 throws cold water on the whole thing: if AI runs the show, who’s even buying anything?

Memes and jokes flew: “swift to harbor, slow to berth” became “race to port, then coffee break.” People debated whether they’d pull the lever for a 3% immortality loot box, or keep humanity off the casino floor. Bottom line: dreamers vs doomers, with the brake pedal being the hottest accessory in AI.

Key Points

  • Bostrom’s working paper models the optimal timing for developing superintelligence from a person‑affecting perspective, setting aside simulation hypotheses.
  • Models include safety progress, temporal discounting, quality‑of‑life differentials, and concave QALY utilities; prioritarian weighting pushes toward shorter timelines.
  • Findings often favor moving quickly to attain AGI capability, followed by a brief pause before full deployment to integrate safety—“swift to harbor, slow to berth.”
  • The paper argues that even relatively high catastrophe probabilities can be acceptable given potential benefits, though outcomes depend on parameter choices (a toy version of this calculation appears after this list).
  • Bostrom contrasts his analysis with calls for an indefinite global ban, citing Yudkowsky and Soares, and emphasizes potential medical and longevity gains from superintelligence.
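
Purely to illustrate the trade the commenters are fighting about, here is a back-of-the-envelope sketch in Python. It is not Bostrom’s model: every number in it (the 97% catastrophe probability from ed’s summary, a 40-year baseline lifespan, the 1,400-year payoff, the discount rates, the square-root utility) is an assumption picked for illustration, and the helper names are made up.

# Toy expected-value sketch inspired by the "97% vs 3%" framing in the comments.
# NOT Bostrom's model: every parameter below is made up for illustration.

import math

def discounted_qalys(years, quality=1.0, discount_rate=0.02):
    """Quality-adjusted life years with exponential time discounting."""
    return sum(quality * math.exp(-discount_rate * t) for t in range(int(years)))

def utility(qalys, concave=True):
    """Concave utility: each additional QALY counts for a little less."""
    return math.sqrt(qalys) if concave else qalys

p_catastrophe = 0.97   # pessimistic figure quoted in the thread
baseline_years = 40    # assumed remaining lifespan without superintelligence
utopia_years = 1400    # the lifespan commenters joked about

for discount_rate in (0.0, 0.02, 0.05):
    for concave in (False, True):
        status_quo = utility(discounted_qalys(baseline_years, 0.8, discount_rate), concave)
        # Expected utility of the gamble; the catastrophe branch contributes zero QALYs.
        gamble = (1 - p_catastrophe) * utility(
            discounted_qalys(utopia_years, 1.0, discount_rate), concave)
        verdict = "build" if gamble > status_quo else "don't build"
        print(f"discount={discount_rate:.2f} concave={concave}: "
              f"status quo {status_quo:7.1f} vs gamble {gamble:7.1f} -> {verdict}")

With these made-up numbers, only the no-discounting, linear-utility row favors building; adding time discounting or concave QALY utility flips the verdict, which is the “outcomes depend on parameter choices” caveat in practice.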

Hottest takes

"It’s not at all obvious why the arrival of AGI leads to human extinction" — timfsu
"we should accept a 97% chance of superintelligence killing everyone" — ed
"bunch of logical fallacies and unexamined assumptions" — jibal
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.