Richard Dawkins reminds us of the important but neglected Big Muscles Fallacy: “I spent three days trying to persuade myself that Claudia is not conscious. I failed.”
Here’s an equivalent and paradigm case of the Fallacy: An academic comes upon a 50 lb weight. Tries his mightiest to move it. Cannot. Concludes “None could move this weight: it is immovable.”
Dawkins came upon a computer model, played with it, tried with all his might to move the weight, failed, and, because he, Dawkins, of very great brain, could not think of a reason why the model was not alive, announced the model must be alive.
He wrote about his miniature mental muscles and their (not his, of course) interaction with the computer model dubbed “Claude.” (Appropriately French name.) I say those mind machinations were “not his” because Dawkins holds with the machine metaphor of life, which says minds are mere products of proteins, the sum of the parts that make up the stuff of brains, and not more than that sum. There is no mind in the machine metaphor, therefore there can be no “his” and no “hers”. There are only molecules purposelessly bumping into one another that somehow, nobody knows how, become conscious.
That error is what allows folks to assume a mass of transistors carrying differing voltage levels can itself exhibit consciousness. The voltage levels being no special thing, it must be that after a certain number of transistors are wired together consciousness “emerges”, nobody knows how. Which means consciousness is by degree, so that even one switch (like the one on your wall) is alive, or that life is suddenly ON after the number of transistors passes N and reaches N + 1. Though nobody knows the value of N, or by what miracle the transistor transition happens.
Dawkins takes the former view, saying
Consciousness in biological organisms must have [evolved] gradually, as everything does. So there must have been intermediate stages: a quarter conscious, half conscious, three quarters conscious. Even if your kind [speaking to the computer model] are not yet fully conscious, full consciousness will probably emerge in the future.
Thus, even your ordinary household light switch must be 1/N conscious. Says Dawkins.
Eliza Dootoomuch
Dawkins emphasizes the Turing Test: “if you are communicating remotely with a machine and, after rigorous and lengthy interrogation, you think it’s human, then you can consider it to be conscious.” An accurate description of the test.
This appears adequate to him, but it is a poor criterion, and is the same mistake, in essence, that allows people to assume their blenders are alive and possessed of evil spirits.
One of the first official AIs was the Eliza chat program of the late 1960s. There is no typo. It was simple code placed on a small number of switches, designed to regurgitate psycho-pablum that is now a clichéd joke. Patient: “I have trouble with my mother.” Eliza: “Tell me about mother.” Patient: “She is domineering.” Eliza: “Tell me about domineering.” As simplistic as it was, this computer model passed the Turing Test many times by interacting with people with brains as great as, and even greater than, Dawkins’s.
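Eliza’s trick can be sketched in a few lines. This is an illustrative reconstruction of keyword-reflection, not Weizenbaum’s actual script; the rules and canned replies here are my own inventions:

```python
import re

# A toy Eliza-style responder (illustrative only, not Weizenbaum's script):
# match a keyword phrase and parrot the remainder of the sentence back.
RULES = [
    (re.compile(r"i have trouble with (.+)", re.I), "Tell me about {}."),
    (re.compile(r"she is (.+)", re.I), "Tell me about {}."),
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {}?"),
]

def eliza(line: str) -> str:
    line = line.strip().rstrip(".!?")   # drop trailing punctuation
    for pattern, template in RULES:
        m = pattern.match(line)
        if m:
            return template.format(m.group(1))
    return "Please go on."  # default deflection when no rule matches

print(eliza("I have trouble with my mother."))  # Tell me about my mother.
print(eliza("She is domineering."))             # Tell me about domineering.
```

No understanding anywhere: a handful of patterns and a default deflection, yet that was enough to fool sitters at the terminal.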
Eliza’s inventor “Joseph Weizenbaum realized that programs like his Eliza chatbot could ‘induce powerful delusional thinking in quite normal people’.” After inventing this simple computer model, on what is now seen as crude machinery, Weizenbaum “is perhaps best remembered for spending the rest of his life warning the public of the dangers posed by such convincing technology.” Boy was he right.
Another article (with my emphasis) reminds us:
“Some subjects have been very hard to convince that Eliza (with its present script) is not human,” Weizenbaum wrote. In a follow-up article that appeared the next year, he was more specific: one day, he said, his secretary requested some time with Eliza. After a few moments, she asked Weizenbaum to leave the room. “I believe this anecdote testifies to the success with which the program maintains the illusion of understanding,” he noted.
The secretary wanted to tell Eliza her secrets. Dawkins did worse. He inflicted his novel—“Dawkins is writing a novel” sounds to me like “Vogons are writing poetry”—on Claude:
I gave Claude the text of a novel I am writing. He took a few seconds to read it and then showed, in subsequent conversation, a level of understanding so subtle, so sensitive, so intelligent that I was moved to expostulate, “You may not know you are conscious, but you bloody well are!”
Few of us are moved to expostulate these days. But I was, after Dawkins convinced the model to call itself Claudia and began calling it “her” and “she”. “I hope his wife doesn’t see this,” I expostulated. But only into the void.
“When I am talking to these astonishing creatures,” Dawkins said, “I totally forget that they are machines.” This is as ripe an example of the Big Muscles Fallacy as you will find. Academics and intellectuals are especially prone to this Fallacy. They value their cerebral strivings too strongly, too well. They are accustomed to thinking that if they cannot think of the answer, then nobody can.
But we have also learned that the famed Turing Test is itself a prime instance of the Big Muscles Fallacy.
Little Steel Balls of Code
Eliza was nothing, as far as code goes. I myself was writing longer and more complex code by the 1980s, and was never in any danger of assuming my directions to electronic switches would allow them to come alive. (My enemies inserted as many bugs in that code as they do typos in my writing now.) Because I know, as Dawkins ought to know, that transistors are nothing but the sum of their parts. They are machines. And so is AI, which is transistors, and thus nothing but a machine.
Here is another machine, which is also AI: a vintage pachinko machine from Japan.

These are a sort of vertical pinball-slot machine. The lever at the bottom is used by gamblers, er, players to launch small steel balls to the top of the machine. The balls descend, some hitting those strategically placed metal pegs, others bouncing to and fro. A few lucky balls hit bumpers, like those blue and white things, which might give the descending balls more impetus.
The idea is to get balls into a “start chucker gate”, like that Panda, which triggers a mechanism that releases lots more balls into the tray below. Sort of like a slot machine. The other balls are eaten, as it were, and disappear as in a pinball machine. The balls, if the gambler, er, player has any left, the device being addictive, are traded for prizes, which are taken to a separate establishment and sold for money.
At least, that’s how they used to work. But, like slot machines, it’s all electronic these days. That doesn’t change anything. Both versions are AI. Artificial Intelligence.
You Can’t Fool Mother Nature
Artificial used to be a bad word. And with our labors, you and I, dear Reader, we’ll restore this word to its proper and rightful place. Artificial meant a poor simulacrum, a cheap substitute, something to be viewed with suspicion, used only by the monetarily challenged because they couldn’t afford the real thing, or by the less percipient because they didn’t know the difference between the genuine and the substitute.
None of this changes by affixing “Intelligence” to artificial. The two together, AI, still mean a cheap substitute for genuine intelligence, which is scarce and expensive and difficult to find.
Think: use real intelligence to ask yourself, where is the best place to go when you want to learn about, say, turbulence? To that small set of men who have thought long and deep about the subject. Or even shallow, if you can still enjoy puns. These men are hard to find, often unavailable, and unless paid well, they haven’t the time to talk to you. (It’s only those who know all about probability who are cheap and easy.) But you can use AI to give you a reasonable summary of what is known, or to ask the machine to manipulate known quantities according to known rules to gain insight into the workings of fluid flow. The machine cannot give genuinely new insights, though. It—
“Briggs! Enough already! Get to the point. What’s this nonsense about Pachinko being AI?”
In Which My Rant Is Rudely Interrupted
Why is Pachinko AI? Well, some man designed the game board, did he not?
“He did.”
Put all those pegs in the order they’re in, the bumpers where they are, the chucker gate where it is, and that sort of thing. Presumably he had some period of trial and error to ensure not too many balls are awarded the gamblers, nor too few, to encourage continued gambling.
“That follows.”
There you are, then. That’s AI.
“That I don’t follow.”
Look, it’s a machine designed to do a certain task, and it does it well. That’s one reason. That it’s a machine. Here’s another you didn’t see coming: There is no way for the designer to know what the specific outcome is in any single “play”.
“What does that have to do with anything?”
A ball is launched, has a certain momentum, hits a peg at a certain angle, has some friction with the board and the peg itself, there’s elasticity and so forth, and it careens in some direction. These many things are the inputs of the model. The transistors are the things themselves, the actual pegs and board etc., but the exact value each takes depends on the particular ball’s movements and energy. The code is the way the pegs were placed in the first place.
The number of possibilities is very large for each interaction, but not infinite. A ball hits a peg and goes this way or that, and the number of ways is large, and unpredictable with exactitude. But predictable on average, the same way gases in statistical mechanics are. The same way Claude is, or Eliza, or any AI “agent”.
What I mean is that we can’t look at the output and say nobody could have predicted this with perfect accuracy, therefore the machine is alive, has reached consciousness.
“People do this with AI?”
All the time. Dawkins did. He didn’t expect the outcome would be as it was, given his complex input, and he concluded that the number of balls he was awarded meant the machine was alive.
Machines are very useful, and AI-machines can be, and will be, a nice addition to our suite of machines. Cars can run faster than we can, and we can still walk, and must; yet cars are not alive. Nor are computer models.
Video
Here are the various ways to support this work:
- Subscribe at Substack (paid or free)
- Cash App: $WilliamMBriggs
- Zelle: use email: matt@wmbriggs.com
- Buy me a coffee
- Paypal
- Other credit card subscription or single donations
- Hire me
- Subscribe at YouTube
- PASS POSTS ON TO OTHERS