March 24, 2026
Rock, Paper, Outrage
Thoughts on LLMs – Psychological Complications
Writer calls AI a 'talking rock'—commenters clap back, meme hard, and fact-check
TLDR: The essay brands chatbots "talking rocks," sparking a brawl over whether they're minds, machines, or just math. Some commenters argue that training teaches models standards and that code is predictable; others blame chat-style interfaces; pragmatists say behavior is all that counts. How we label these systems shapes how we trust, use, and regulate AI.
An essay tried to rebrand AI chatbots as neither minds nor machines but a “trillion numbers in a trenchcoat” — even a “talking rock.” Cue fireworks. The community split fast: xg15 pushed back that training and fine‑tuning actually teach models a sense of “good vs bad,” while djoldman declared, “they are programs” with knobs for randomness, not mystical artifacts. chrisbrandow blamed the chat interface itself for making us treat bots like people, arguing we should ditch the cozy text box to stop the cutesy vibe.
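For readers who want the "knobs for randomness" point made concrete, here is a minimal sketch (not from the essay or the thread) of how a decoding step with a temperature knob and a seed might look. The logit values and function name are hypothetical; the point is djoldman's: the randomness is ordinary, controllable code, not anything mystical.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, seed=None):
    """Pick a next-token id from raw model scores (logits).

    temperature is the 'randomness knob': at 0 the choice collapses to
    the single highest-scoring token (fully deterministic); higher
    values flatten the distribution and make output more varied.
    """
    if temperature <= 0:
        # Greedy decoding: no randomness at all.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax over temperature-scaled logits (max-subtracted for stability).
    scaled = [score / temperature for score in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # A fixed seed makes even the 'random' branch exactly repeatable.
    rng = random.Random(seed)
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for a 3-token vocabulary
print(sample_next_token(logits, temperature=0))             # always token 0
print(sample_next_token(logits, temperature=1.0, seed=42))  # repeatable draw
```

Run it twice with the same seed and you get the same "random" output, which is the whole argument in miniature: tune the knobs and the behavior is as predictable as any other program.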
Then the pragmatists arrived. K0balt shrugged at the philosophy and said the only scoreboard is real-world behavior; once robots are common, it won't matter what's "real," only what acts like it. Not everyone was impressed with the essay: ej88 called out bias, saying the "can't count" claim is easily disproven, and the "talking rock" line became instant meme fuel. Commenters joked about being polite to toasters and posted "trenchcoat" doodles, but the core fight stayed serious: words matter. Label chatbots minds, and people trust them too much; call them rocks, and we ignore real risks. The thread's verdict? No verdict. Just spicy debate, solid fact-checks, and a whole lot of memes [link].
Key Points
- The article argues that common language leads to anthropomorphizing LLMs, causing misinterpretation of their behavior.
- It claims LLMs are neither traditional machines nor programs nor minds, but stochastic systems lacking concepts like truth or error.
- The author suggests adopting terminology from fiction, such as "artifact" or "entity," to better discuss LLMs without cognitive assumptions.
- It advises treating LLMs cautiously and not assuming they are benign, friendly, or honest.
- The article distinguishes LLMs from "AI" understood as reasoning intelligence, portraying LLMs as simulations of reasoning rather than true AI.