OpenAI ends legal and medical advice on ChatGPT

OpenAI says ChatGPT isn’t your doctor or lawyer — commenters call it a CYA move

TLDR: OpenAI clarified that ChatGPT was never meant for personal diagnoses or legal counsel and that it has merely tightened the wording. Commenters are split: some say it's the same advice with louder warnings, while others share miracle finds and horror fails, raising the stakes as more people treat chatbots like trusted experts.

OpenAI says nothing’s really changed: ChatGPT was never meant to be your doctor or lawyer, and the latest policy update just spells out that “tailored” medical or legal advice needs a real pro involved. The model’s behavior? “Also not changed,” they told CTVNews.ca. But the internet heard: “So it’s the same… with bigger warnings.” Cue the comment drama.

The loudest chorus is the CYA crowd, with users saying the bot still dishes out health and law info — just with thicker disclaimers. One parent shared a goosebump story: after years of confusion, an older ChatGPT version guessed their child’s rare condition in a single back-and-forth. The comment section rallied behind the “Dr. GPT as second opinion” vibe.

Then the skeptics stormed in. A homebuilder said AI spit out dangerous nonsense like “fill your drainage pipes with sand” — a collective facepalm. Studies got weaponized: a University of Waterloo test found only 31% of GPT-4’s medical answers were fully correct, and only 34% were clear. A UBC study warned the bot can be so persuasive it bends real doctor visits — confident tone, shaky facts. Doctors in the chat nodded: patients now arrive convinced by a chatbot.

Meanwhile, the optimists cheered, "AI gets better daily," dubbing this the safer, clearer "read the label" era. Meme of the moment: Dr. Google with a glow-up, all stethoscope emoji and massive disclaimer energy. Team Trust-But-Verify vs. Team Do-Not-Diagnose is officially on.

Key Points

  • OpenAI says ChatGPT has never been a substitute for legal or medical advice and that model behavior has not changed.
  • An Oct. 29, 2025, policy update clarified that users cannot seek tailored advice requiring a professional license without the involvement of a licensed professional.
  • An earlier Jan. 29, 2025, policy update prohibited performing or facilitating activities that impact safety, wellbeing, or rights, including tailored legal, medical/health, or financial advice.
  • A University of Waterloo study found ChatGPT-4’s medical answers were entirely correct 31% of the time and clear 34% of the time.
  • A University of British Columbia study found ChatGPT’s persuasive tone can influence patient interactions and make inaccuracies harder to detect.

Hottest takes

"Still giving out medical and legal info, just bigger CYA disclaimers" — randycupertino
"It suggested my son’s rare issue and the right test in one back-and-forth" — cpfohl
"Told me to fill drainage pipes with sand before concrete… staggering misinformation" — bamboozled