Two Concepts of Intelligence

Is AI actually smart or just good at copying? The comments are on fire

TLDR: A top computer scientist says AI debates stall because we use two different meanings of “intelligence.” Commenters erupt: skeptics say chatbots just parrot patterns (hello, turkey meme), while pragmatists say stop nitpicking words and judge results — a clash that shapes how we build, regulate, and trust AI.

Professor Bertrand Meyer argues people talk past each other because they mean two totally different things by “intelligence.” That set off a comment-section cage match. One camp yelled: it’s all prediction, no meaning — cue the viral turkey parable where the bird trusts breakfast until, well, Thanksgiving. “That’s your chatbot,” they say: confident guesses, zero real understanding. Another camp rolled their eyes at the word games, dropping Dijkstra’s classic submarine line: asking if computers “think” is as useful as asking if submarines “swim.” Translation: who cares what we call it if it works.

Then came the philosophy pile-on. Someone invoked Noam Chomsky’s “built-in grammar” to argue humans aren’t just data parrots, while others joked that we also call foam “memory” even though it forgets everything — the language itself is trolling us. A practical voice defined intelligence as inventing correct ideas about things you’ve never seen, hinting that today’s AI needs a future mashup of methods to get there. Meanwhile, side-eye at French lawmakers for barely showing up to an AI hearing added political spice. The verdict from the crowd: we’re not just arguing about what AI does, we’re arguing about what it means — and that fight is way messier (and funnier) than the math.

Key Points

  • Meyer contends many AI debates stem from two different underlying notions of intelligence.
  • He situates the debate historically, referencing Turing, von Neumann, and Weizenbaum’s ELIZA.
  • Meyer urges technical experts to stay involved in defining and discussing AI intelligence.
  • He cites a French National Assembly hearing where Olivier Rey argued AI lacks true understanding.
  • Meyer compares human problem-solving with LLM performance, noting both produce correct and incorrect answers, challenging the idea that AI only ‘appears’ to understand.

Hottest takes

“The Turkey was an LLM… no ‘understanding’ of the purpose” — barishnamazov
“The question of whether a computer can think… whether a submarine can swim” — ghgr
“Memory foam doesn’t really ‘remember’… the language games are all mixed up” — notarobot123
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.