Codex is switching to API-based usage pricing for all users

OpenAI turns on the meter: Business plans pay by use, $20 dream fades

TLDR: OpenAI shifted Codex for Business and new Enterprise to pay-as-you-use pricing measured by tiny text chunks, while older plans stay put for now. Commenters are split between “the free ride is over,” “this headline is misleading,” and “RIP $20 deal,” with IPO rumors and model-switch shopping starting.

OpenAI just flipped Codex (its code assistant) to usage-based pricing for Business and new Enterprise accounts, and the internet did what it does best: argue. The official line: credits now track “tokens” (think tiny pieces of text) like the API does, not per message. Different models cost different amounts, output is pricier than input, and “Fast mode” burns credits 2x faster. Average spend? OpenAI hints at ~$100–$200 per dev per month. Details live on the new rate card, with older plans still on the legacy rates for now.

But the comments stole the show. One camp declared the end of AI “happy hour,” with m-hodges warning the “subsidy era” is over. Another crowd shouted “clickbait!”—Skunkleton points out it’s not for everyone, only Business and new Enterprise, and it’s still credits—just counted by tokens now. Meanwhile, value-hunters are mourning the “$20/month Codex” as the best deal in AI, pouring one out and asking where to run next. Then came the tinfoil hats and popcorn: Rastonbury says unbundling screams “IPO soon,” while also side-eyeing Chinese rivals in a potential price war.

The vibe? Confusion, correction, and comedy—plus a lot of nervous jokes about every semicolon costing extra. Wallets tense. CFOs listening. Devs toggling off Fast mode with trembling hands.

Key Points

  • As of April 2, 2026, Codex pricing switches from per-message to API token-based credits for ChatGPT Business and new ChatGPT Enterprise plans.
  • Existing ChatGPT Plus, Pro, and Enterprise/Edu customers remain on the legacy rate card until migration in the coming weeks.
  • Credits are charged per 1M tokens and split into input, cached input, and output tokens, replacing average per-message estimates.
  • Model-specific rates are provided; for example, GPT-5.4 costs 62.50 credits per 1M input tokens and 375 credits per 1M output tokens.
  • Fast mode consumes 2x credits; code review uses GPT-5.3-Codex; average Codex cost is estimated at ~$100–$200 per developer per month.
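To make the arithmetic concrete, here is a back-of-envelope credit estimator using the GPT-5.4 rates quoted above (62.50 credits per 1M input tokens, 375 per 1M output tokens) and the 2x Fast-mode multiplier. This is an illustrative sketch, not OpenAI's API: the function name and rate table are made up, and the cached-input rate is omitted because it isn't listed here.

```python
# Illustrative only: rates from the GPT-5.4 example above; cached-input
# pricing is not modeled because the rate isn't given in this article.
GPT_5_4 = {"input": 62.50, "output": 375.0}  # credits per 1M tokens

def estimate_credits(input_tokens, output_tokens, rates=GPT_5_4, fast_mode=False):
    """Estimate credits for one request; Fast mode burns credits 2x faster."""
    credits = (input_tokens / 1_000_000) * rates["input"] \
            + (output_tokens / 1_000_000) * rates["output"]
    return credits * (2 if fast_mode else 1)

# A session with 200k input and 50k output tokens:
print(estimate_credits(200_000, 50_000))                   # ~31.25 credits
print(estimate_credits(200_000, 50_000, fast_mode=True))   # ~62.5 credits
```

Note how output tokens dominate the bill (375 vs 62.50 credits per million), which is why long generations, not long prompts, are the expensive part under this scheme.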

Hottest takes

"The days of subsidized access is rapidly coming to an end." — m-hodges
"IPO must be around the corner." — Rastonbury
"The title is misleading" — Skunkleton
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.