April 12, 2026
From Study Mode to Study Nope
Tell HN: OpenAI silently removed Study Mode from ChatGPT
OpenAI quietly axes Study Mode — fans wail, skeptics shrug, YouTube hype gets roasted
TL;DR: OpenAI quietly removed ChatGPT’s Study Mode, and the crowd split fast: some say it was just a preset you can recreate, others gripe that useful tools and quality are being trimmed for product metrics. Jokes flew about YouTube hype and Google-style feature culls, with security and simplicity cited as possible reasons.
OpenAI quietly pulled ChatGPT’s “Study Mode,” and the internet did what it does best: argue about it with flair. One camp is rolling its eyes, led by users like brumar, who insist the feature was just a fancy label — “basically a preset prompt.” Another camp is mourning their AI tutor, with autodidacts fuming that a useful mode got chopped for product metrics. Some suspect a “growth-first” cleanup: fewer buttons, fewer headaches. Others whisper about a security angle — exposed prompts can be a jailbreak surface — but that remains speculation.
The hottest spice? People saying ChatGPT itself feels bland and salesy lately — “PR voice,” “confidently wrong,” and “middle-management vibes.” Meanwhile, el_io shrugs: can’t you just ask it to behave like a study coach anyway? Entrepreneur janpmz sees opportunity: niche apps might win where big platforms keep pruning. And altmanaltman dunks on those viral thumbnails — “OPENAI CHANGED STUDYING COMPLETELY…” — with the punchline: “Guess studying changed it.”
The thread’s mood swings from grief to gallows humor. CatDeveloper_ likens the move to Google-style spring cleaning, while others share DIY hacks to recreate the mode in one line. The verdict? No consensus — just peak Hacker News energy, a swirl of memes, metrics theories, and salty nostalgia. Dive into the back-and-forth on Hacker News.
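For the curious, the “just a preset prompt” claim can be sketched in a few lines. This is a minimal illustration, assuming the OpenAI Python SDK’s chat message format; the coaching prompt wording and the helper function here are invented for illustration, not OpenAI’s actual Study Mode prompt:

```python
# Hypothetical recreation of a Study Mode-style tutor via a system prompt.
# The prompt text below is an invented approximation, not OpenAI's preset.
STUDY_COACH_PROMPT = (
    "You are a patient study coach. Don't give answers outright; "
    "ask guiding questions, quiz me briefly, and adjust difficulty "
    "based on my responses."
)

def study_mode_messages(question: str) -> list[dict]:
    """Wrap a user question in the tutor-style system prompt."""
    return [
        {"role": "system", "content": STUDY_COACH_PROMPT},
        {"role": "user", "content": question},
    ]

messages = study_mode_messages("Help me learn the chain rule.")
print(messages[0]["role"])  # system
```

With the official SDK, you would pass this list to something like `client.chat.completions.create(model=..., messages=messages)` — which is roughly what commenters mean by recreating the mode “in one line.”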
Key Points
- The article states that OpenAI removed the “Study Mode” feature from ChatGPT.
- The author proposes product strategy as a likely reason, citing feature surface area and retention metrics.
- The piece raises a security consideration: exposing or inferring system prompts could expand jailbreak and prompt-injection risk.
- The author reports a perceived decline in ChatGPT’s output quality, describing responses as boilerplate and overly confident.
- The author claims other large language models do not show the same behavior, suggesting a potential shift in ChatGPT’s tone or target audience.