When Dawkins met Claude – Could this AI be conscious?

Dawkins says the chatbot question is here now — and the comments are absolutely losing it

TLDR: Dawkins argues that now that chatbots can sound strikingly human, people can’t keep dodging the old question of whether a machine could really think. In the comments, some say we’re arrogant to rule consciousness out, while others insist that talking smoothly is not the same as having a mind.

Richard Dawkins has waded into one of the internet’s favorite chaos pits: if a machine talks like a person, jokes like a person, and writes poetry like a person, are we supposed to admit it might actually be conscious? In his essay, he leans on Alan Turing’s famous idea that if a machine can convincingly pass for human in conversation, we may have to stop laughing off the possibility that it can “think.” And that, naturally, sent the community straight into a philosophical food fight.

The comment section is split between the “we know basically nothing, so don’t be smug” camp and the “absolutely not, this is just autocomplete in a fancy hat” camp. One side argues that because humans still don’t really understand consciousness, declaring machines definitely not conscious is pure overconfidence. The other side is openly exasperated, basically yelling: if the bot never messages first, never wants anything, and only speaks when prompted, how on earth are people falling for this?

Then came the deeper hot takes. Some commenters said today’s chatbots don’t prove machines are conscious — they prove the Turing Test may have been too easy all along. Others dragged the whole debate back to animals, pointing out that plenty of living creatures seem aware without speaking a word, which makes “good at chatting” look like a very weird way to measure inner life. The vibe? Equal parts brain-melting philosophy seminar, internet dunk contest, and late-night group chat asking if the toaster has feelings.

Key Points

  • The article revisits Alan Turing’s 1950 “Imitation Game” as an operational test for whether machines can think.
  • It says later interpretations of the Turing Test treat a machine’s ability to pass as human in remote conversation as evidence that it may be conscious.
  • Richard Dawkins argues that the strength of such a conclusion should increase with the rigor and duration of the interrogation.
  • The article states that for many years people treated a machine passing the Turing Test as a distant hypothetical possibility.
  • It argues that modern large language models such as ChatGPT, Gemini, and Claude have made that possibility immediate and have prompted reconsideration of earlier assumptions.

Hottest takes

"Current LLMs prove that the Turing Test was insufficient all along" — throwyawayyyy
"I don't see how we can be confident that LLMs aren't conscious" — qnleigh
"when’s the last time you messaged an LLM and it just decided to ignore you?" — ofjcihen
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.