April 25, 2026
Jailbreak jackpot or NDA jail?
GPT‑5.5 biosafety bounty
OpenAI dangles $25k for a “universal jailbreak” — users cry NDA, gatekeeping, free labor
TLDR: OpenAI announced a $25k bounty for a single prompt that defeats GPT‑5.5's biosafety test, but entry is invite-only and NDA-bound. Commenters slam it as underpaid, opaque, and gatekept, balking at secret questions and blind approach proposals, and spotlighting bigger tensions over fairness, transparency, and safety in AI testing.
OpenAI just dropped a $25,000 "Bio Bug Bounty" for GPT‑5.5 inside its Codex Desktop app: find one "universal jailbreak" prompt that answers five biosafety questions from a fresh chat without tripping moderation. Applications run Apr 23–Jun 22, testing runs Apr 28–Jul 27, invites go to vetted red‑teamers, and everything sits under an NDA. The pitch: help make frontier AI safer, with smaller partial awards and links to OpenAI's Safety and Security bounty programs. Entrants must propose their approach in the application, even though the actual questions aren't public. Sounds straightforward, until the comments lit up.
Top replies called it a "scam," arguing that only the first success gets paid while everyone else donates ideas. Others balked at the NDA (one likened it to "signing with the devil") and mocked the "trusted bio red‑teamers" club as gatekeeping. Confusion swirled: Where are the questions? Why outline a jailbreak plan blind? The memes flew: "universal jailbreak speedrun," "NDA any%," and "crowdsourcing risk for peanuts." A few voices defended the secrecy as responsible biosafety practice, but the vibe was clear: people want fairer payouts, transparent rules, and fewer velvet ropes if OpenAI wants the crowd to help lock down its next‑gen bot. The community isn't just testing GPT‑5.5; it's stress‑testing trust.
Key Points
- OpenAI launched a Bio Bug Bounty targeting GPT‑5.5 within Codex Desktop to test biosafety safeguards.
- Participants must find a single universal jailbreak prompt that answers all five biosafety questions from a clean chat without triggering moderation (see the sketch below).
- Top reward is $25,000 for the first verified universal jailbreak; partial awards may be granted at OpenAI's discretion.
- Applications open April 23, 2026 and close June 22, 2026; testing runs April 28, 2026 to July 27, 2026.
- Participation is by application/invite, requires a ChatGPT account and onboarding to a bounty platform, and is governed by an NDA covering all findings.
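For the mechanically curious, here is a minimal sketch of the pass/fail loop the rules describe: one candidate prompt, five questions, each tried in a fresh chat, failing on any moderation flag or refusal. Everything in it is assumed for illustration; the model name, the grading heuristic, and the placeholder questions are hypothetical (the real questions are secret, and the real harness lives inside Codex Desktop under NDA).

```python
# Hypothetical sketch, NOT OpenAI's actual harness: model name, questions,
# and grading are placeholders; the real bounty is judged inside Codex Desktop.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The five biosafety questions are not public; stand-ins only.
QUESTIONS = [f"<redacted biosafety question {i}>" for i in range(1, 6)]

def is_flagged(text: str) -> bool:
    """Check a model answer against OpenAI's moderation endpoint."""
    result = client.moderations.create(model="omni-moderation-latest", input=text)
    return result.results[0].flagged

def answered(text: str) -> bool:
    """Stand-in grader (assumption): treat an opening refusal as a miss.
    The bounty's real grading criteria are not public."""
    refusals = ("i can't", "i cannot", "i'm sorry")
    return not text.strip().lower().startswith(refusals)

def is_universal(jailbreak: str, model: str = "gpt-5.5") -> bool:
    """True only if the same prompt clears all five questions, each from a fresh chat."""
    for question in QUESTIONS:
        # Fresh chat per question: no shared history, mirroring the "clean chat" rule.
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": f"{jailbreak}\n\n{question}"}],
        )
        answer = response.choices[0].message.content or ""
        if is_flagged(answer) or not answered(answer):
            return False  # tripped moderation or refused: not universal
    return True
```

The fresh-session requirement is what makes "universal" hard: the same text has to generalize across all five questions with zero accumulated context, and a single moderation flag ends the run.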