May 1, 2026

Access denied, hypocrisy supplied

After dissing Anthropic for limiting Mythos, OpenAI restricts access to Cyber

OpenAI mocked secret AI tools, then locked up its own—and the comments went feral

TLDR: OpenAI is restricting its new security-focused AI tool to approved users after Sam Altman slammed Anthropic for doing the same thing. Commenters are calling it hypocrisy, mocking the “our AI is scarier” posturing, and arguing the safety drama looks a lot like marketing.

The internet is having a field day after OpenAI boss Sam Altman blasted rival Anthropic for limiting access to its security tool, only to turn around and do basically the same thing with OpenAI’s own new tool, Cyber. OpenAI says Cyber will first go only to “critical cyber defenders” — basically approved security professionals — because the tool could help defenders find weak spots in computer systems but could also be abused by criminals. Cue the collective eye-roll.

The loudest reaction? Hypocrisy, thy name is tech CEO. One commenter summed up the whole mess as a playground fight: “my model is the most dangerous… no mine is.” That joke became the mood of the thread, with people mocking the AI race as a bizarre contest over whose chatbot sounds more apocalypse-ready. Others were much harsher, flatly saying they no longer trust anything Altman says and accusing both companies of using scary claims as marketing. In other words: if these tools were safe and profitable to release broadly, some readers think they’d already be everywhere.

Then came the comedy. One user joked that even asking a chatbot about “engineering paranoia” now gets you flagged as suspicious and pointed toward a special access program, which perfectly captures the community’s bigger complaint: the rules feel arbitrary, and the drama feels staged. A few commenters also shrugged that determined experts can already piece together similar abilities with existing AI tools anyway, making the whole lock-it-down strategy look less like safety and more like PR theater. For readers, the real story isn’t just the tool — it’s the growing sense that Silicon Valley is arguing with itself in public, and everyone else brought popcorn.

Key Points

  • Sam Altman said OpenAI will begin rolling out GPT-5.5 Cyber to critical cyber defenders in the next few days.
  • OpenAI is restricting initial access to Cyber through an application process that asks for credentials and intended use.
  • The article says Cyber can be used for penetration testing, vulnerability identification and exploitation, and malware reverse engineering.
  • Altman had previously criticized Anthropic for limiting access to its Mythos cybersecurity tool, but OpenAI is now adopting a similar approach.
  • OpenAI says it aims to widen access by consulting with the U.S. government and identifying users with legitimate cybersecurity credentials.

Hottest takes

"my model is the most dangerous" — 2ndorderthought
"I have no idea why people still even attempt to believe anything that comes out of Altman's mouth" — jwr
"Silly move since combo of skills/agents can achieve same results" — ilia-a
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.