January 19, 2026

Press 0 for a human? Not in 2025

Anthropic disabled my account right after payment: cancer patient's medical data trapped

Charged, banned, and ghosted — rage at bots, ‘sus’ skeptics, and Europe’s data law cavalry

TLDR: A user says Anthropic charged her and immediately locked her account, trapping years of medical notes. Comments erupt with pay-to-talk-to-a-human pleas, fire-them outrage, 'sus' skepticism, and EU data-law tips, spotlighting AI lock-in, broken support, and why data access matters when health is on the line.

An explosive post claims Anthropic charged a user $106.60 for its Max plan and then locked her out at the same moment — with years of cancer-care notes allegedly trapped inside. The community reaction? Pure chaos. The top vibe is fury at faceless support: one commenter dreams of a pay-to-speak-to-a-human button, while another wants heads to roll and mocks the idea of getting yet another automated reply. Others share war stories — like getting banned for asking about a Python finance library and being funneled into a dead-end Google Form — painting a bigger picture of support black holes.

But the thread isn’t just rage; it’s also a split. A skeptical chorus asks if the post is legit, pointing to the new account and AI-flavored formatting. Meanwhile, the pragmatic crowd goes full fix-it mode, shouting about Europe’s data law (GDPR) — which lets you demand your data — and telling the OP to contact a Data Protection Officer. Cue the meme: Europe’s data law enters the chat.

Through it all, the emotional core hits hard: a sick patient says an AI assistant helped her fight a broken system, only to be ghosted when it mattered most. The bigger drama? The internet wrestling with a scary question: if bots gatekeep your records, do you really own your life's data?

Key Points

  • The poster says Anthropic charged $106.60 for a "Max" subscription on Jan 16, 2025, and disabled the account at the same time.
  • They report that their Claude account contains 11 years of organized medical records and related documentation essential to ongoing care.
  • The user states the last prompt was about interpreting October bloodwork and claims no terms of service violations.
  • They request either account restoration or a complete export of their chat history.
  • They describe multiple appeals and escalations (support emails, official form, California AG, FTC, DMs to Anthropic leaders, Claude Discord) with only automated responses received.

Hottest takes

I wish companies had a pay $50 to speak to a human option if need be. — MattGaiser
Fire them. And then reply with automated bits. — nobodywillobsrv
this whole post seems a bit fishy to me. — euazOn
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.