May 8, 2026
Proofs, Panic, and a Bot
A recent experience with ChatGPT 5.5 Pro
Math fans are spiraling as AI starts doing PhD-level work before lunch
TLDR: A mathematician says ChatGPT 5.5 Pro produced serious research-level math shockingly fast, raising fears that even beginner-friendly open problems may no longer be safe from AI. In the comments, people swung between panic, skepticism, and sadness, arguing over whether this breaks education, speeds up the end of human-only research, or is still being overhyped.
The big headline here isn’t just that ChatGPT 5.5 Pro reportedly cranked out PhD-level math research in about an hour with barely any human help. It’s that the comment section immediately turned into a full-blown existential support group. The original post argues that AI is no longer just regurgitating old answers from papers — it may now be good enough to spot surprisingly simple solutions that human researchers missed. In plain English: problems once seen as great training ground for young mathematicians may now be too easy to assign if a chatbot can smash them first.
And wow, the community reaction swung from awe to panic to gallows humor. One commenter resurfaced a decade-old prediction from mathematician Tim Gowers that humans might stop doing research math in 100 years, then asked the killer question: has the clock just been moved way, way up? Another went straight for the education drama, bluntly asking whether undergraduate math testing is now basically broken. Others were less impressed and more suspicious, warning that nobody should trust a long AI proof without making the bot try to tear its own work apart first. That sparked the classic AI thread split: "this changes everything" versus "slow down and check the receipts."
Then came the emotional gut punch. One commenter quoted the post’s bleak line about how math may no longer offer “immortality” through lasting achievement and simply replied: “This made me a little sad.” Honestly? Same. The mood was part hype, part dread, part meme-worthy “humanity may be cooked” energy.
Key Points
- The article says ChatGPT 5.5 Pro produced what the author describes as PhD-level mathematical research in about an hour with minimal human mathematical input.
- It places that result in a broader trend of LLMs increasingly solving research-level mathematical problems, including some Erdős problems previously discussed publicly.
- The article notes that some earlier LLM solutions were less impressive because they relied on known literature or straightforward deductions from existing results.
- To test the model, the author chose open problems from Mel Nathanson’s paper on additive number theory, arguing that such papers often contain accessible unsolved questions.
- The article explains Nathanson’s sumset framework, in which the h-fold sumset hA of a finite set A of integers consists of all sums of h elements of A, and states that for h = 2 every size between the minimum and maximum possible size of the sumset can occur, while the general case remains unresolved.
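To make the h = 2 claim concrete, here is a small brute-force sketch (mine, not from the article; the helper name `sumset_size` is an assumption) checking that for sets of k = 4 integers, every sumset size between the minimum 2k − 1 = 7 and the maximum k(k + 1)/2 = 10 actually occurs:

```python
from itertools import combinations

def sumset_size(A, h=2):
    """Size of the h-fold sumset hA = {a1 + ... + ah : each ai in A}."""
    sums = {0}
    for _ in range(h):
        sums = {s + a for s in sums for a in A}
    return len(sums)

k = 4
lo, hi = 2 * k - 1, k * (k + 1) // 2   # 7 and 10 for k = 4

# Brute-force over small integer sets and record which sumset sizes appear.
achieved = {sumset_size(A) for A in combinations(range(20), k)}
print(sorted(achieved))  # every size from lo to hi occurs: [7, 8, 9, 10]
```

The endpoints are witnessed by an arithmetic progression such as {0, 1, 2, 3} (size 7, the minimum) and a Sidon set such as {0, 1, 3, 7} (size 10, the maximum, where all pairwise sums are distinct); the unresolved general question is whether this "no gaps" picture persists for all h.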