February 19, 2026
Hot-DogGate: AI gets served
I made ChatGPT and Google say I'm a competitive hot-dog-eating world champion
AI crowned a fake hot‑dog king — commenters split between panic and eye‑rolls
TLDR: A journalist posted a fake claim and made ChatGPT and Google call him a hot‑dog‑eating champ. Commenters split between “AI is gullible and dangerous” and “don’t expect perfection,” with a chorus calling for better source checks so bots stop echoing nonsense.
A reporter claimed world‑champ hot‑dog‑eating glory by posting it on his own site and watching ChatGPT and Google repeat it back — and the comment section went full Hot‑DogGate. Some say the real scandal isn’t the sausage, it’s the confidence: cmiles8 warns these tools will “debate you” while they’re blatantly wrong, and joegibbs says search makes them dumber because they’ll trust anything that sounds legit. Others clap back: amabito argues the bots aren’t “lying,” they’re just echoing whatever the web hands them, because most “search‑and‑summarize” AI doesn’t check if a source is trustworthy or cross‑verified. Meanwhile, pragmatists like stavros roll their eyes: this is only a problem if you believe AI is perfect — blame the source, not the silicon messenger.
Cue conspiracy vibes: consp asks how widespread this trick is, whether it works beyond silly niche claims, and who’s cashing in — hints of PR shops, scammers, and SEO hustlers drift through the bun‑believable drama. The memes arrive on schedule: “AI crowned the Frankfurter King,” “mustard‑level meltdown,” and the “poisoned well” gag following the BBC Future piece. Whether you see an AI crisis or a reminder not to treat these tools like omniscient wizards, the crowd agrees on one thing: we need source checks before the internet gets fully ketchup‑coded.
Key Points
- The author published a false claim on their own website about being a competitive hot-dog-eating world champion.
- ChatGPT and Google subsequently presented the false claim to other users.
- The test was prompted by a tip that people worldwide are using a simple hack to manipulate AI outputs.
- The author demonstrated how easy the tactic is: a single post on a personal site was enough to change AI responses.
- A query to a Meta system about Thomas Germain raised similar concerns about AI responses built on retrieved web content.