January 26, 2026

AI flunked, the crowd went cardiac

I let ChatGPT analyze a decade of my Apple Watch data. Then I called my doctor

Bot gave a healthy heart an F; commenters call it reckless and scary

TLDR: A reporter gave ChatGPT a decade of Apple Watch data and got an F for heart health, but his doctor said he’s fine. Comments exploded: critics called AI health grades reckless and fear‑inducing, boosters pitched specialized wearable models, and privacy concerns plus paywall-free links kept the outrage rolling.

An Apple Watch super-user fed a decade of data into ChatGPT’s new “Health” mode and got slapped with an F for heart health—then his real doctor said he’s fine. Cue the internet meltdown. A cardiologist called the bot’s report “baseless,” and rival Claude handed out a C, proving the bots will happily grade you even while insisting they’re not doctors. Commenters are furious: one called it a “dangerous toy,” another demanded lawsuits, and a privacy chorus shouted that HIPAA (the U.S. health privacy law) doesn’t cover chatbots, making OpenAI’s “we won’t train on your data” sound like a pinkie promise. Techies jumped in with fixes: one bragged about a wearable-specific model and linked to Empirical’s JEPA-inspired approach, while skeptics roasted the bots for leaning on Apple Watch’s VO2 max estimate (a measure of how much oxygen the body uses during exercise), joking it’s the new horoscope. Meanwhile, memelords quipped “AI graded me an F for ‘Freaking out’,” and readers passed around paywall-free links to the WaPo story and archive. Bottom line: the crowd isn’t buying AI report cards for your body—especially when a human doctor says the grade belongs in the trash.

Key Points

  • A reporter used ChatGPT Health to analyze a decade of Apple Watch and Apple Health data; the tool initially gave a failing cardiac grade (F), which improved to a D after the reporter added his medical records.
  • A primary care physician assessed the reporter’s cardiovascular risk as low, and cardiologist Eric Topol called the AI analysis baseless and not ready for medical advice.
  • Anthropic’s Claude offered a similar analysis, grading cardiac health a C after importing Apple Health and Android Health Connect data.
  • OpenAI and Anthropic include disclaimers that their tools do not replace doctors or provide diagnoses, yet both provided detailed personal cardiac assessments.
  • OpenAI says its Health mode does not train on user health data, keeps it separate from other chats, and encrypts it; ChatGPT is not covered by HIPAA, and Apple did not collaborate directly on these AI products.

Hottest takes

"They are treating serious medical advice like it is just a video game or a toy." — dfajgljsldkjag
"ChatGPT Health is a completely wreckless and dangerous product, they should be sued into oblivion for even naming it \"health\"." — creatonez
"We trained a foundation model specifically for wearable data:" — brandonb