April 29, 2026

Carb Crash in the Comment Section

He asked AI to count carbs 27,000 times. It couldn't give the same answer twice

Even the comments were screaming: please don’t let a chatbot dose your lunch

TL;DR: A study found that AI gave wildly different carb counts for the exact same food photos, sometimes off by enough to make insulin dosing dangerous. Commenters split between "this experiment is silly" and "thank goodness someone exposed this before more people trust diet apps with their health."

A researcher asked several popular AI chatbots to estimate carbs from food photos 26,904 times and got a chaos buffet back: one rice dish swung from 55g to 484g depending on the reply. In diabetes terms, commenters were quick to point out, that kind of miss is not a cute math error — it could be genuinely dangerous if someone used it to decide insulin. And that’s where the comment section went full alarm-bell mode.

The hottest reaction was basically: why on earth are people trusting chatbots with this at all? One camp dragged the whole idea as so misguided it belonged on “astrology.com,” not a serious tech forum. Another called it an “impossible problem,” saying a photo simply can’t reveal what’s hidden inside a sandwich or how much oil is soaked into a meal. In other words: the AI isn’t just inconsistent, the task itself may be a trap.

But defenders pushed back, saying that’s exactly why the study matters. If shiny diabetes apps are already pitching AI food guesses in app stores, then somebody needs to loudly show how badly this can go. There was also dark comedy in the details: the AI confidently turned a Bakewell tart into a Linzer torte, mistook crema catalana for crème brûlée, and even hallucinated mystery deli meat in a cheese sandwich. The community mood? Equal parts “this is reckless”, “this was obvious”, and “wow, the fake sandwich meat is a new low”.

Key Points

  • The article describes a preprint study that sent 13 food photos to four AI models, querying each photo more than 500 times per model for a total of 26,904 queries.
  • All tested models produced different carbohydrate estimates for the same image across repeated runs, despite identical prompts, images, and low-randomness settings.
  • The largest reported spread was for a paella image on Gemini 2.5 Pro, which ranged from 55g to 484g of carbohydrates.
  • The article says high consistency did not guarantee correctness, citing a cheese sandwich with a 40g reference value that several models consistently estimated at about 28g.
  • The models also made food-identification errors in 8 of 13 images, including mislabeling desserts and hallucinating ingredients in a sandwich.
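The methodology in the points above boils down to sampling the same prompt hundreds of times and measuring the spread of the parsed carb values. A minimal sketch of that aggregation step, using made-up numbers (not the study's actual data) to stand in for one image's replies:

```python
from statistics import mean

# Hypothetical carb estimates (grams) from repeated replies for one
# image, standing in for the study's ~500 queries per image/model pair.
estimates = [55, 120, 210, 180, 484, 150, 160, 140]

spread = max(estimates) - min(estimates)  # range across replies
avg = mean(estimates)

print(f"min={min(estimates)}g  max={max(estimates)}g  "
      f"range={spread}g  mean={avg:.1f}g")
```

A range this wide on one image is exactly the kind of reply-to-reply swing (55g to 484g for the paella) that the study reports.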

Hottest takes

"This entire article would be better suited to astrology.com than hackernews." — endymion-light
"It’s just an impossible problem. Photons don’t provide sufficient information to determine calories" — jaccola
"They are not magic oracles." — rsynnott
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.