Here, in his own words, is the argument Yuval Noah Harari uses to justify his transhumanism.
1. Organisms are algorithms. Every animal — including Homo sapiens — is an assemblage of organic algorithms shaped by natural selection over millions of years of evolution.
2. Algorithmic calculations are not affected by the materials from which the calculator is built. Whether an abacus is made of wood, iron or plastic, two beads plus two beads equals four beads.
3. Hence, there is no reason to think that organic algorithms can do things that non-organic algorithms will never be able to replicate or surpass. As long as the calculations remain valid, what does it matter whether the algorithms are manifested in carbon or silicon?
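Harari's second premise, that calculation is indifferent to its substrate, can be illustrated in code. A minimal sketch (the representations below are illustrative choices, not anything Harari specifies): the same addition, "two beads plus two beads," carried out over three different substrates, giving the same answer each time.

```python
# The same calculation performed over three "materials":
# integers, unary tally strings, and lists of bead tokens.

def add_ints(a, b):
    # Substrate: machine integers.
    return a + b

def add_tallies(a, b):
    # Substrate: strings of tally marks; addition is concatenation.
    return a + b

def add_beads(a, b):
    # Substrate: lists of bead tokens; addition is concatenation.
    return a + b

print(add_ints(2, 2))                      # 4
print(len(add_tallies("||", "||")))        # 4
print(len(add_beads(["o", "o"], ["o", "o"])))  # 4
```

Wood, iron, plastic, or silicon: the result of the valid calculation is the same. The dispute, as the essay goes on to argue, is over whether calculation is all an organism does.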
He doesn’t just mean that machines will take many jobs men used to do, which is so obvious that no one argues against it. He means the algorithms themselves will become alive, or at least that governments “might soon grant” legal life to algorithms. That latter prediction is likely a good one, given lunacy is now the law of the land.
Thus our only interest is whether algorithms really will become alive. Harari has done us the service of laying his argument out cleanly. Let’s step through his premises.
1. Whatever caused the diversity of life we see—Harari believes in some form of “evolution”—it is false that organisms are algorithms. Organisms are living beings. Organisms are not artefacts: they are more, and much more, than the sum of their parts.
An automobile is a simple machine, one which can be understood through the working of each of its individual components. It can be taken apart, pieced back together, and the machine will function again. It is not alive. You cannot separate out the cells of a man, and the components of each cell, piece the whole back together, and have any hope the result will live.
Even knowing the names of the components, and the quantities of the relevant chemicals, does not tell you what the proper combination of them becomes.
And even if you don’t accept all that, and insist animals are machines, there has been demonstration after demonstration, hard proofs, proofs galore, that the intellects and wills of man are not algorithmic. See this about abacuses.
This being so—that we are not algorithms, and therefore cannot be replaced by them in all senses—Harari’s argument fails. The second premise is true, though; I argued as much myself in the link just given. Math done with wood is, as he says, the same as math done on semiconductors. Math could, we can imagine, even be encoded in proteins, though no one has yet figured out how.
But to grasp, comprehend, or intuit math takes an intellect. The abacus does not know that the positions of its beads indicate, say, 113. Neither does the silicon; nor, when and if it happens, will the proteins know. That includes the proteins held squeezed together by your ears. It is not molecules that hold understanding, it is you.
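The point about the beads can be made concrete. In the sketch below (a hypothetical encoding, not a model of any real abacus), the physical state is just counts of beads on rods; the number 113 exists only in the interpretation a reader imposes on those positions.

```python
# An abacus "stores" 113 only as bead positions. Here each rod holds a
# bead count (0-9), most significant rod first. The decoding rule is
# supplied by us, the interpreters; the rods themselves carry no meaning.

def decode(rods):
    value = 0
    for count in rods:
        value = value * 10 + count
    return value

rods = [1, 1, 3]      # the physical state: beads on three rods
print(decode(rods))   # 113, but only under our reading of the positions
```

Read the same rods right-to-left, or in base eight, and the "number" changes, though not a single bead has moved. The meaning lives in the intellect doing the reading.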
Harari has evidently fallen prey to the Calculation Fallacy. This says that because some things can be calculated, all things can. And since all things can be calculated, all we have to do is work out how to do the calculations and we have recreated the thing.
By calculated, I also mean quantified. Science, of course, runs on quantification. If it can’t be measured, some say, it isn’t science. Because of this, quantifications are often forced. Yet not all behaviors of man can be put to number. How happy are you on a scale from -142.8 to 1,198+1/3? I’ve used examples like that innumerable times—see what I mean? The number isn’t what’s important.
Only the crudest approximations can be made when quantifying behavior. The more complex the behavior, the cruder and less informative the number becomes. Simplistic scales for, say, arthritic pain work well enough, though they are inaccurate. Putting one number to intelligence may be likened to assigning one number to a motor vehicle’s quality, insisting that this single measure represents performance so well that the numbers can be compared across all vehicle types, conditions, ages, uses, etc.
And, of course, the comparison of intelligence to automobile quality is strained, at best. How much more difficult is it to quantify intelligence? (See here and here.) Yet that quantification, and calculation, must be done if one is to replace man with machines.
It is not going to happen, and cannot happen.
I often find those who do not believe this have had little contact with actual attempts at quantifying human behavior. Their view of “AI”—which is to say, of statistical modeling—is precious, and too much informed by hope.