January 13, 2026
AI: Hype vs Homework
Let's be honest, Generative AI isn't going all that well
Comments explode: ‘just a tool’ vs ‘10x gains,’ while critics note the article itself is little more than screenshots
TLDR: An editor claims generative AI is faltering, citing unreliable chatbots, limited impact, and a study saying it can do only 2.5% of jobs. Commenters clash: some call AI “just a tool,” while others brag about 10x coding speed and solo rewrites—raising real questions for policy and budgets.
The post drops a cold splash on the AI party, pointing to reports that chatbots, aka LLMs (large language models), still hallucinate, rely heavily on memorization, and add little measurable value, and arguing that scaling up won’t magically fix any of it. It even cites a study saying AI can do only about 2.5% of jobs. The author’s kicker: reshaping economies around this still-shaky tech is a mistake. But the real fireworks light up in the replies: daedrdev rolls their eyes at a piece that is “literally just 4 screenshots,” and thechao jokes that their kids wrote longer essays than this five-sentence doom note. Meanwhile, links point to scaling stalls and trust issues. Ouch.
Then the split: emp17344 backs the skepticism, calling AI just a tool, not a path to artificial general intelligence (AGI). On the other side, sghiassy swears these bots let them read code “10x faster,” and mattmaroon flexes that his cofounder is single-handedly rewriting pricey legacy code while he slashes design costs: “If this is not all that well I can’t wait until we get to mediocre!” The comment section morphs into a meme arena: screenshots vs success stories, “4th-grade essay” vs “10x gains.” Verdict? The tech may be messy, but the community is loud, split, and hilariously caffeinated.
Key Points
- The article compiles reports that LLMs remain unreliable.
- It claims LLM outputs rely substantially on memorization, with debate noted around this point.
- The post states generative AI has yet to show strong, quantifiable economic value.
- It cites the Remote Labor Index, reported by the Washington Post, which finds that AI could perform only about 2.5% of jobs.
- It argues that further model scaling is not resolving these issues and cautions against basing policy on expected improvements.