A URL to respond with when your boss says "But ChatGPT said…"

Send this spicy link when your boss treats a chatbot like gospel

TLDR: A snarky webpage reminds everyone that AI chatbots guess words and can be wrong, so don’t treat them like experts. Commenters loved the retro simplicity, roasted the lack of citations, warned it could backfire with bosses, and said it’s the new “don’t copy Stack Overflow” moment.

There’s a new petty URL to fire back when someone says “But ChatGPT said…” The page is a blunt PSA: AI chatbots like ChatGPT, Claude, and Gemini are word predictors, not oracles. They can sound confident and still be wrong, so don’t paste chatbot answers as if they’re facts. The point is that LLMs (large language models) guess the next word, not the truth.
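To make the “word predictor” point concrete, here’s a minimal, hypothetical sketch. It’s not the page’s code, and it’s a crude bigram counter rather than a real neural network, but it shows the mechanic the page is describing: the model returns whatever continuation was most frequent in its training text, and nothing in the loop ever checks whether that continuation is true.

```python
from collections import Counter, defaultdict

# Toy "training data" (made up for this sketch). Note the falsehood
# appears more often than the fact.
corpus = ("the moon is made of cheese . "
          "the moon is made of cheese . "
          "the moon is made of rock .").split()

# Count which word follows which: a crude stand-in for a language model.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word, true or not."""
    return following[word].most_common(1)[0][0]

# The "model" confidently picks the most frequent continuation;
# there is no step anywhere that verifies it against reality.
print(predict_next("of"))  # -> 'cheese', because cheese outnumbered rock
```

Real LLMs do this over tokens with billions of learned weights instead of a frequency table, but the core move is the same: likely next word, not verified fact.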

The crowd is split between applause and eye-rolls. One voice cheered the throwback “simple static site” vibe à la motherfuckingwebsite.com and Comic Sans Criminal—clean, loud, and a little cheeky. Others demanded receipts: as purplecats sniped, it “would be more valuable if this cited each of its claims,” basically yelling, “practice what you preach.”

Workplace drama alert: should you send this to your boss? Some warned the tone screams “you’re foolish”—“Bosses love it when you call them foolish,” quipped mr3martinis—while cynics doubted anyone who trusts bots will read a word anyway. Veterans felt déjà vu: once it was “stop copying Stack Overflow without reading,” now it’s “stop pasting from AI.” Different tool, same bad habit.

For backup, the page nods to mainstream reports on AI “hallucinations” from the New York Times and Financial Times. But the community wants hard citations, not just vibes. Final take: hilarious ammo for the group chat; maybe holster it before firing it at your manager’s inbox.

Key Points

  • The article warns that outputs from LLMs like ChatGPT, Claude, and Gemini are not facts.
  • LLMs generate text by predicting the most likely next word, not by verifying information.
  • AI responses can be convincing yet inaccurate or unreliable.
  • Readers are advised not to copy-paste chatbot output as authoritative or final.
  • Further reading links point to NYT and FT coverage on AI hallucinations and truthfulness issues.

Hottest takes

“would be more valuable if this cited each of its claims” — purplecats
“Bosses love it when you call them foolish” — mr3martinis
“Everything changed, yet everything is the same” — foxfired