February 15, 2026
Find the AI under the cup
I don't think AGI is imminent
Internet melts down: “AI is already here” vs “It still can’t find the marble”
TLDR: A writer argues human‑level AI isn’t close because current chatbots lack real‑world understanding, illustrated by a “shell game” metaphor. The comments erupted: some insist AI already does most office work, others demand memory and better tooling, and many are tired of the endless hype war; the stakes are jobs and expectations.
An essay arguing human‑level AI isn’t imminent because today’s chatbots lack a built‑in sense of things like objects, numbers, and cause‑and‑effect lit the comments on fire. The author’s point: language models learn from words, not real‑world instincts, so they stumble on basics like tracking a ball in a shell game. Simple idea, huge drama.
On one side, the believers: a chorus yelled “AGI is here,” with one user claiming 90% of office work can already be done by a chatbot if you add a little glue software to orchestrate tasks. Another cited a scrappy memory hack (“Sammy Jankis”) that layers recall on top of forgetful models, essentially claiming DIY intelligence has arrived, even if it’s “janky as all get out.”

On the other side, skeptics cheered the essay’s shell‑game point and its brain‑science framing, arguing that models still lack real‑world common sense. A pragmatist camp rolled their eyes at the whole spectacle: “this debate is a waste of time, just wait and see.” Meanwhile, a thoughtful tangent reminded everyone that our own brains evolved to hunt and flirt, not to do calculus, so maybe machines will excel at desk work before they truly “understand” the world.

Meme watch: “Find the AI under the cup” jokes were everywhere, alongside “office bot took my job (and my stapler)” quips.
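For readers wondering what a DIY “memory layer” like the one commenters described might look like, here is a minimal, purely illustrative Python sketch (not the “Sammy Jankis” hack itself; `ask_model` and every other name here are hypothetical stand‑ins): log every exchange, pull back the most relevant old snippets by naive keyword overlap, and prepend them to each new prompt so a stateless model appears to remember.

```python
# Illustrative sketch of a "memory layer" over a stateless chat model.
# `ask_model` is a placeholder, not a real API call.
from collections import Counter


def ask_model(prompt: str) -> str:
    """Stand-in for whatever chat-completion call you actually use."""
    return f"(model reply to {len(prompt)} chars of prompt)"


class MemoryLayer:
    def __init__(self, max_recalled: int = 3):
        self.log: list[str] = []          # every past user/assistant turn
        self.max_recalled = max_recalled  # how many old snippets to re-inject

    def _score(self, query: str, snippet: str) -> int:
        # Crude relevance: count of words shared between query and old snippet.
        q, s = Counter(query.lower().split()), Counter(snippet.lower().split())
        return sum((q & s).values())

    def ask(self, user_msg: str) -> str:
        # Rank past turns by overlap with the new message, keep the top few.
        recalled = sorted(self.log, key=lambda s: self._score(user_msg, s), reverse=True)
        context = "\n".join(recalled[: self.max_recalled])
        reply = ask_model(f"Previously:\n{context}\n\nUser: {user_msg}")
        # Append both sides of the exchange so future prompts can recall them.
        self.log += [f"User: {user_msg}", f"Assistant: {reply}"]
        return reply


if __name__ == "__main__":
    bot = MemoryLayer()
    print(bot.ask("Remember: the marble is under the middle cup."))
    print(bot.ask("Which cup is the marble under?"))
```

Janky, as promised: real versions typically swap the keyword overlap for embeddings and a vector store, but the shape of the trick is the same.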
Key Points
- The article argues human-level AI is not imminent due to fundamental limits of transformer-based LLMs.
- Human cognition relies on evolutionarily hardwired primitives (e.g., number sense, object permanence) that language presupposes and LLMs must infer from data.
- LLMs struggle with multi-digit arithmetic and simple logical generalization, reflecting a lack of innate compositional and symbolic machinery.
- Training on video may encourage object-permanence-like signals but may not yield the persistent object tracking needed for tasks like shell games.
- Recent vision progress relies heavily on synthetic data, which the article says leads to fragile learning of real-world physical and logical constraints.