An AI Vibe Coding Horror Story

Medical office’s DIY AI app exposed patients — comments go nuclear

TL;DR: A medical office reportedly built its own AI-powered patient app that left records exposed and shipped appointment audio to big US AI services. Commenters erupted: some demanded regulators crack down, others blamed AI hype culture, and a few argued even “pros” would fail—proof that patient privacy and AI shortcuts don’t mix.

The internet just met its new horror trope: vibe coding with medical files. A Swiss blogger says a friendly medical office skipped the boring, proven tools and whipped up a homemade patient app with an AI helper — then put it online. Result? Unencrypted patient records and recorded appointment audio sent to big US AI services without consent, and a polite, clearly AI-written “thanks, we fixed it” reply when he reported it. The community went feral.

The hottest camp wants consequences. One user basically said people only respond to pain, urging regulators and licensing boards to step in. Another tossed gasoline on the hype bonfire, roasting LinkedIn’s “AI will save everything” chorus and warning it’s only a matter of time before someone gets truly burned. There’s even a receipts squad preserving the post with archives, because the internet never forgets.

But not everyone’s screaming “jail.” A contrarian waved off panic, arguing a hired consultant could botch it just as badly — and that “security theater” (performative safety measures) only slows businesses down. Meanwhile, jokesters called it “move fast and break patient privacy,” and dubbed the whole fiasco “build-by-vibe healthcare.” Between fear of fines, fury at AI evangelists, and memes about “trust me bro” security, the comments turned this into a full-blown circus.

Key Points

  • A medical provider built and deployed a custom patient management app using an AI coding agent and imported real patient data.
  • The app exposed unencrypted patient data publicly; the author gained full read/write access within 30 minutes.
  • Data resided on a US server without a Data Processing Agreement, and appointment audio was sent to US-based AI services.
  • The system implemented access control only in client-side JavaScript; the managed database had no access controls or row-level security.
  • The incident likely violated Swiss data protection (nDSG) and professional secrecy obligations, according to the author.
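The client-side-only access control called out above can be sketched like this. A minimal illustration, not the actual app: all names and data here are hypothetical, but the pattern matches the report — the UI hides records behind a role check while the backing store answers any query it receives.

```python
# Hypothetical sketch of the reported flaw: the only "access control" is a
# role check in client code, while the database behind it has no checks at all.

PATIENT_RECORDS = {
    "P-001": {"name": "Jane Doe", "notes": "confidential"},
}

def db_query(patient_id):
    """Stand-in for the managed database: no access controls, no row-level security."""
    return PATIENT_RECORDS.get(patient_id)

def client_view(user_role, patient_id):
    """The app's UI-level gate -- the only check that exists anywhere."""
    if user_role != "staff":
        return None  # hides the data in the UI, nothing more
    return db_query(patient_id)

# The UI politely refuses an unauthorized role...
unauthorized = client_view("visitor", "P-001")

# ...but anyone who skips the client and hits the store directly gets the record.
leaked = db_query("P-001")
```

The standard remedy is to enforce authorization where the data lives — server-side checks or database row-level security — so that bypassing the client buys an attacker nothing.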

Hottest takes

"Some people only care about actual consequences." — direwolf20
"Lack of security theater is a good thing" — websap
"Every sales bozo ... screaming ... everything must be done with AI" — delis-thumbs-7e
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.