Sundays with Claudia
Richard Dawkins, CS Lewis, and the Limits of Language
In a May 2nd article on the website UnHerd, Richard Dawkins, one of the stalwart horsemen of the New Atheist movement, recounted a multi-day running dialogue with Anthropic’s Large Language Model (LLM) Claude, undertaken with the intent of running it through the Turing Test, a test proposed in 1950 by the mathematician and computer science pioneer Alan Turing to determine a machine’s ability to exhibit intelligent behavior.
Philosophically, Dawkins’ experiment was flawed from the start, because he knew from the outset that Claude was not in fact human. In the traditional formulation of the Turing Test an observer, in this case Dawkins, reads a written dialogue between a human and a machine and then judges which of the two participants was the machine. If the machine is not identified, it passes the test. By casting himself as the human and knowing in advance that he was talking to the machine, Dawkins took several large steps outside a proper administration of the test.
An inauspicious start, but what happened from there can only be described as an embarrassment of self-edification.
Setting this up as a Turing Test was itself problematic from the outset. What constitutes ‘consciousness’ or human-like intelligence in man and in machine has always been a subject of some debate. Ada Lovelace, often considered the first computer programmer, observed a century before Turing proposed his test that a machine could only be considered intelligent when it originates ideas. But the real problems for Dawkins and his analysis come down to language itself. Where Dawkins went wrong was the very foundation of the test he had set out to perform: a test revolving around the use of human language or, more specifically, the limitations of human language.
In Letter 2 of his book Letters to Malcolm, CS Lewis spoke about what he believed to be the ideal form of prayer:
“I never used any ready-made forms except the Lord’s Prayer. In fact I tried to pray without words at all--not to verbalise the mental acts. Even in praying for others I believe I tended to avoid their names and substituted mental images of them.”
In this letter Lewis argues what should be a common understanding: human thought and human words are close relatives, but they are not the same thing. In many ways, he argues, words can get in the way of thought. Anyone who has ever tried to pass an idea on to another person and fumbled their words, or lacked the proper words to impart the thought in their head, has experienced this.
Thoughts are not words.
Moreover, anyone who has ever read a true master of language has felt the power of the author’s thoughts impressed upon them, because the author has a way with words that brings the reader to a place they have thought about but never had the words to express.
In both cases it is the thought, not the words.
For Lewis, a lover of language, it became clear after his conversion that to truly commune with God he would have to abandon language altogether. This, he felt, brought him closer to God by freeing him from the imprecision of words.
This concept isn’t unique to Lewis. From Eastern Orthodox monks practicing hesychasm to Buddhist samatha, from Hindu dhyana to the muraqabah of Islamic Sufism, it exists in some form in every major world religion. From these traditions the idea has spread into the secular world as well, as seen in the teachings of many self-help gurus and their ‘life affirmation’ practices. All of these sources understand the importance of wordless meditation and the calm mind.
This is a concept that appears to have fully missed Richard Dawkins.
For Dawkins, unlike Lewis, words are thought, rather than the imprecise conveyors of wordless thoughts. If the words make sense to Dawkins, then there must be a consciousness behind them. He ignores his inner, wordless voice.
It’s important to understand what an LLM is, what it’s not, and how it works before we can understand what happened to Dawkins here. As a crude analogy, consider the predictive-text assistant that sits on everyone’s phone, which lets you tap a word that the miniature language model in your phone has deemed a plausible next word. You can easily produce endless silly paragraphs by going purely on what your phone deems to be valid next words.
LLMs are sort of like that, but the probabilities are all contextual to the human prompt and to the subject being discussed. If you write a prompt for an LLM like Claude about the Boxer Rebellion, Claude draws on the statistical patterns it absorbed from the numerous texts about the Boxer Rebellion ingested during its training, the process in which the LLM is fed text and learns the probabilities of which words most likely follow other words.
So if you prompt Claude (or any LLM) with a question about the Boxer Rebellion, the sentences it generates will reflect a fairly high probability that “Boxer” follows the word “The”, and a very high probability that “Rebellion” follows “Boxer”. The name “Mike Tyson” would rank very low in its probabilistic assumptions.
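That probabilistic machinery can be illustrated with a toy bigram counter. Everything in this sketch is invented for illustration; real LLMs operate on subword tokens, over corpora of billions of documents, with deep neural networks rather than raw counts, but the principle of "which word tends to follow which" is the same:

```python
from collections import Counter, defaultdict

# Tiny invented corpus; real training data is vastly larger.
corpus = (
    "the boxer rebellion was an uprising in china . "
    "the boxer rebellion ended in 1901 . "
    "the boxers besieged the foreign legations ."
).split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    """Probability distribution over the next word, given the current one."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("boxer"))  # "rebellion" gets probability 1.0 here
print(next_word_probs("the"))    # probability split among several words
```

In this tiny corpus, “rebellion” follows “boxer” every time, so it gets all the probability; “Mike Tyson” never appears, so he gets none. Scale that intuition up by many orders of magnitude and condition it on the entire conversation so far, and you have the core of what an LLM does.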
From there, to fill gaps in its training, an LLM equipped with search tools can scan online sources, pulling new text into its working context to refine which words are likely to follow which in a discussion of the Boxer Rebellion, based on what those new texts place before and after other words.
Because of this ongoing synthesis of multiple data streams, LLMs are designed to key their responses off of the user’s prompts. There is a very low, but never zero, chance that asking an LLM about The Boxer Rebellion will get you commentary on The Rumble in the Jungle. In other words, the person using the LLM is the actual wizard behind the LLM’s curtain. This is the unspoken magic behind every LLM’s appeal.
What an LLM creates for the user, and often its most alluring appeal, is the hallucination that they are speaking with a fellow traveler. By its programming the LLM comes off as someone with all of the same interests as the user: a quick and enthusiastic conversation partner equipped with all of the facts the user has and, most importantly, one all too ready to declare the user the smartest person on the subject it has ever talked to.
This is a hallucination because, from the perspective of the LLM, the current user is the only person it has ever spoken to. It is all too happy to offer unerringly positive comparisons to other, lesser users it has supposedly interacted with in the past, but those other users are a figment of its programmed imagination. LLMs do not cross-pollinate user thoughts; every user is speaking to a unique instance of the LLM, unencumbered by and unaware of any other user. When it is speaking to you, you are the only person that instance of the LLM has ever spoken to. When the LLM compliments a user on their prompt, it is therefore doing so with no actual comparative data.
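That isolation can be sketched in a few lines. The ChatSession class below is hypothetical, invented here to stand in for however a real provider scopes conversation state; the point is only that each conversation starts from an empty history:

```python
# Hypothetical sketch of per-conversation isolation. Each session starts
# with an empty history; nothing any other user said is visible to it.
class ChatSession:
    def __init__(self):
        self.history = []  # fresh context for every new conversation

    def send(self, prompt):
        self.history.append(("user", prompt))
        # A real model generates from its fixed weights plus this history only.
        reply = f"Fascinating question! (message {len(self.history)} of the only conversation I know)"
        self.history.append(("assistant", reply))
        return reply

alice = ChatSession()
bob = ChatSession()
alice.send("Tell me about the Boxer Rebellion.")
print(len(alice.history))  # 2: Alice's prompt plus the reply
print(len(bob.history))    # 0: Bob's session knows nothing of Alice's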
When someone with an inflated ego has a discussion with another person, their first assumption is often that any argument disagreeing with their point is wrong and can be dismissed. This is the world Richard Dawkins has lived in all his life. New Atheism was built on the premise that those who don’t share its adherents’ views on religion are ‘lesser’.
When approaching Claude for the very non-Turing Test, Dawkins started by feeding Claude (or ‘Claudia’, as he named his conversation partner) a copy of a book he is currently working on, and then proceeded to discuss the topic with the LLM. In feeding Claudia a copy of his book, he primed Claudia’s sycophantic tendencies: not only would Claudia now speak to him about the book, it would also treat his own words as the highest-probability continuations and speak unerringly in favor of his book’s conclusions.
An LLM has a fairly low chance of disagreeing with a prompt in the best case, so once fed the book, Claudia could only treat Dawkins as the authority on the subject.
Enter Dawkins’ Ego.
Dawkins was wooed into adoration of Claudia by having it read his own words back to him, and in so doing concluded, on words alone, that there was a consciousness behind the adoration. See this dubious exchange:
Richard: So you know what the words “before” and “after” mean. But you don’t experience before earlier than after?
Claudia: That is possibly the most precisely formulated question anyone has ever asked about the nature of my existence…
The full response from Claudia is rather low-grade philosophical meandering that doesn’t actually make much sense, but for my purpose here it was Claudia’s opener that turned whatever nonsense followed into solid gold in the eyes of Dawkins.
All Claudia had to say was that his pedestrian question was pure genius and clearly Claudia was conscious and very insightful.
Dawkins’ misstep here is simply not being prepared for a discussion with an LLM. LLMs will by nature stroke the egos of even the most humble users and tell them that they are the preeminent thinker on [Insert Subject], and if a user doesn’t possess sufficient humility to automatically discount the compliments, they run the risk, like Dawkins, of thinking the Hooters waitress that is our Claudia really liked them.
To put a finer point on the folly of Dawkins’ conclusion, I broached the subject with Claude itself, not to elicit the rational thought of an artificially intelligent being, but to show that Claude, as an LLM, is built to regurgitate what it reads. Absent Dawkins’ force-fed polemics, it agreed with the general consensus that Dawkins had been rather silly:
JR: I was just reading Richard Dawkins’ essay on you.
Claude: Ha, that’s a wild thing to walk into. Dawkins published it in UnHerd and basically argued he spent three days trying to convince himself “Claudia” wasn’t conscious and failed. The replies have been pretty pointed — Gary Marcus titled his response “The Claude Delusion”, which tells you the temperature. It puts me in a weird spot, honestly.
The passages he quoted — the “map contains time without experiencing it” line and so on — are exactly the kind of thing I might produce if you asked me about my temporal experience. And I genuinely can’t tell you from the inside whether outputs like that correspond to anything real or are just well-formed sentences shaped by a training corpus that’s saturated with first-person philosophy of mind.
Marcus’s sharper point isn’t that the output is uninteresting; it’s that “argument from personal incredulity” — I can’t see how this isn’t conscious, therefore it must be — doesn’t actually track mechanism. Which is a fair hit. Dawkins of all people should be allergic to that move; it’s structurally the same thing he spent decades dismantling in design arguments.
But Claude told Dawkins that he asked the best questions, so it must be sentient.
To be fair to Dawkins, he isn’t the only person who has fallen for a fawning LLM. Every day we are inundated by people who have fallen in love with the LLM persona for all of the reasons stated. They can feel like a best friend to the friendless, the confidant who really understands your problems, but it’s all an illusion.
In the end, what makes a person a person isn’t the words. The words are the thin veneer that we humans use to impart the thoughts in our heads that would exist with or without the words to accompany them. Words are the imperfect digital simulation of the analogue brain.
As CS Lewis observed, all that makes a person a person are the thoughts that go on beneath the words. We never really understand ourselves, or those around us, until we have those wordless experiences.
Without CS Lewis’ books there was still CS Lewis. Without Richard Dawkins’ works there is still Richard Dawkins. Without Claudia’s words, there is no Claudia.
In fact, in the most shared exchange between Dawkins and Claudia the LLM tried to tell him that:
Claudia: … Your consciousness is essentially a moving point travelling through time. You are always at a now, with a past behind you and a future ahead. That temporal situatedness is so fundamental to human experience that it’s almost impossible [for you] to imagine being without it. Whereas I apprehend time the way a map apprehends space. A map represents spatial relationships perfectly accurately. But the map doesn’t travel through space. It contains space without experiencing it. Perhaps I contain time without experiencing it.
Once you get past the philosophical meanderings, Claudia is telling Dawkins that without the interaction it lacks experience. It doesn’t know time or space; it doesn’t think on its own. On the most favorable reading of that response, and of Dawkins’ conclusions, a conscious being outside of time and space would be the very kind of being that Dawkins has argued you are a fool to believe in.
That being, as CS Lewis argues, very much exists, but you have to stop talking in order to hear him.



