April 16, 2026

Counting Mythos, losing patience

Claude Opus 4.7 Model Card

Anthropic drops Opus 4.7 — fans say it’s a Mythos ad and the pricing math is chaos

TLDR: Anthropic launched Claude Opus 4.7 as its best widely available AI, still behind the limited-release Mythos. Commenters call the 272‑page report a Mythos promo, fret that cheaper models will undercut premium tiers, and bicker over whether new usage costs break subscription promises. A win clouded by marketing math.

Anthropic just rolled out Claude Opus 4.7, calling it their best widely available AI yet — but the community is side‑eyeing the 272‑page “model card” like it’s a glossy brochure for the limited‑release Mythos. One user literally counted the mentions, posting a terminal flex that turned into a meme: “331 ‘Mythos’ hits vs 809 ‘Opus.’” Cue the popcorn.
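For the curious, the tally trick is trivial to reproduce: it's just a case-sensitive substring count over the report's text. A minimal sketch (the function name and the sample string below are illustrative, not the commenter's actual command or the real counts):

```python
def count_mentions(text: str, term: str) -> int:
    """Count case-sensitive occurrences of term in text."""
    return text.count(term)

# Toy stand-in for the model card's text; the real tally was
# reportedly 331 "Mythos" vs 809 "Opus".
sample = "Opus beats Opus 4.6; Mythos stays ahead of Opus."
print(count_mentions(sample, "Opus"))    # 3
print(count_mentions(sample, "Mythos"))  # 1
```

Note that `str.count` only matches exact, non-overlapping substrings, so "Mythos'" and "Mythos," both count but "mythos" would not.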

The facts: Opus 4.7 beats 4.6, stays behind Mythos, and adds safety upgrades — from resisting hacking tricks to doing better on election integrity. The UK’s AI Security Institute found it couldn’t finish their full cyber range (unlike Mythos), so Anthropic wrapped it in new safeguards. It hallucinates less, over‑refuses less, and oddly reports being happier than any prior model. Yes, an AI mood ring. Anthropic’s post has the receipts.

But the drama? Commenters say the whole thing reads like Mythos hype, while budget fans ask why the cheaper “Haiku” model isn’t getting love. One hot take warns the small, cheap AIs are “cannibalizing” the fancy ones. Another thread devolved into subscription math madness: if 4.7 burns roughly 1.35x the compute per request, is the pricey “20x” plan really “13x” now… or “27x” later? The vibe: great model, confusing marketing, and a community keeping score with spreadsheets and sarcasm.
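The plan arithmetic the thread is fighting over can be sketched in two directions (the 1.35x compute figure is the commenters' claim, not a verified number, and neither result exactly matches the quoted "13x"):

```python
# Assumed per-request compute increase for 4.7, per the thread.
COMPUTE_RATIO = 1.35

# Direction 1: same compute budget, pricier requests -> the "20x"
# quota effectively shrinks.
effective_quota = 20 / COMPUTE_RATIO   # ~14.8x

# Direction 2: same request quota, valued at the new compute rate ->
# the plan "grows" on paper.
repriced_quota = 20 * COMPUTE_RATIO    # 27x

print(round(effective_quota, 1))  # 14.8
print(round(repriced_quota, 1))   # 27.0
```

Dividing gives roughly 14.8x rather than the 13x some commenters cite, so their figure presumably rests on a different compute ratio or a different baseline.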

Key Points

  • Claude Opus 4.7 surpasses Opus 4.6 but remains less capable than the limited-release Claude Mythos Preview, making it Anthropic’s most capable general-access model.
  • Under Anthropic’s Responsible Scaling Policy, Opus 4.7 does not advance the capability frontier; catastrophic risks are assessed as low and chemical/biological risks unchanged from Opus 4.6.
  • External testing by the UK’s AI Security Institute found Opus 4.7 could not complete the full cyber range (unlike Mythos), though it exhibited lower-level potentially harmful cyber capabilities; new cybersecurity safeguards accompany release.
  • Opus 4.7 improves agentic safety (refusing malicious requests and resisting prompt injection) and alignment (lower hallucinations, low reward hacking), with some weaknesses (e.g., AI safety research refusals, overly detailed harm-reduction advice).
  • Capability evaluations show broad gains, especially in real-world professional and software engineering tasks, where Opus 4.7 leads among generally available models.

Hottest takes

"This reads more like an advertisement for Mythos" — koehr
"low end models are cannibalizing high end" — jmward01
"is a 20x plan now really a 13x plan or a 27x plan?" — aliljet
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.