Based on its own charter, OpenAI should surrender the race

Commenters dare OpenAI to honor its “self‑sacrifice” — others bet the charter vanishes

TLDR: OpenAI’s charter says it should stop competing and help a safer rival if that rival nears human‑level AI, but Sam Altman’s fast‑shrinking timelines and rivals topping model leaderboards have triggered a brawl over whether that promise will ever be honored. Commenters are split between “keep your word” and “shareholders won’t allow it,” while also debating what even counts as AGI or “safety.”

OpenAI’s own 2018 charter literally says it will stop competing and help a safer rival if that rival is close to building AGI (artificial general intelligence). Now, with Sam Altman’s timelines yo‑yoing from “2033” to “maybe 2025” and even “we basically have AGI,” the internet is asking: So… are you stepping aside? Meanwhile, rival models like Anthropic’s Claude sit atop the Arena leaderboard, fanning the drama.

The top vibe in the comments? Pure cynicism. One user cracked, “I’ll eat my hat after I sell you a bridge,” picturing OpenAI telling billion‑dollar backers, “We lost, we’re helping Google now.” Another predicted OpenAI will just quietly scrub the page: “This will never happen.” A more serious chorus says the idealism died the moment the company went all‑in on commercialization—“the impotence of naive idealism vs. economic incentives.”

But not everyone is doom‑posting. Some argue the clause only triggers for a “value‑aligned” project genuinely near AGI, and a leaderboard isn’t proof. Others point to a dramatic claim from a former OpenAI staffer citing resignations over surveillance and lethal autonomous uses, arguing the stakes are bigger than PR. The memes? “AGI whooshed by” and “press F to honor the charter.” The split? Between “honor your promise” and “lol, capitalism.”

Key Points

  • OpenAI’s 2018 charter includes a clause committing it to stop competing with and start assisting a value‑aligned, safety‑conscious project that gets close to building AGI, with the charter’s stated trigger being a better‑than‑even chance of success within the next two years.
  • The charter remains publicly available at https://openai.com/charter/ and is presented as current policy.
  • The article compiles Sam Altman’s AGI timeline statements (2013–2026), showing a shift from later dates toward 2025 and claims that AGI may already have been achieved, with a noted later clarification.
  • The author infers that the median of Altman’s AGI predictions made since 2025 is roughly two years out, and frames current efforts as a race toward ASI (artificial superintelligence).
  • A snapshot of the arena.ai leaderboard shows the top spots held by Claude Opus variants, followed by Gemini, Grok, and OpenAI’s GPT‑5.x models, illustrating the current competitive standings.

Hottest takes

"I’ll eat my hat after I sell you a bridge" — bilekas
"The impotence of naive idealism in the face of economic incentives" — rishabhaiover
"OpenAI will quietly remove their charter" — choult
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.