# Our Intellects Are Not Computers: The Abacus As Brain Part I

Pictured above is an abacus. As shown, it is in the configuration for the number 68,170,000. It has other configurations which represent other numbers. This particular abacus is small, but bigger ones can be and have been built for the purpose of representing larger numbers.

This abacus is made of wood and not silicon and other metalloids, metals, and plastics. Yet it is a computer for all that. A simple operation—a program, if you will—carried out by manipulating the beads can add two numbers. Another can multiply. The motive force for shifting the beads is muscle and not electricity, though we could certainly, and with not much effort, make an electric abacus (I imagine it’s already been done).
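That bead-shifting "program" is easy to make concrete. Here is a minimal sketch, with the abacus simplified to one decimal digit per rod; the names and representation are illustrative assumptions, not any standard abacus notation:

```python
# A minimal model of the abacus-as-computer: the frame stores nothing
# but bead positions, and "adding" is only a rule for moving them.

def make_abacus(rods=10):
    # An abacus is just a list of bead positions, one digit per rod.
    return [0] * rods

def set_number(abacus, n):
    # Arrange the beads so that, to us, they represent n.
    for i in range(len(abacus)):
        abacus[i] = n % 10
        n //= 10

def read_number(abacus):
    # It is the reader who projects a number onto the positions.
    return sum(d * 10**i for i, d in enumerate(abacus))

def add(abacus, n):
    # The "program": shift beads rod by rod, carrying on overflow.
    carry = 0
    for i in range(len(abacus)):
        total = abacus[i] + n % 10 + carry
        abacus[i] = total % 10
        carry = total // 10
        n //= 10
```

Setting the beads to 68,170,000 and running the add rule with 12,345 leaves the rods reading 68,182,345; at no point does the list of positions contain anything but digits.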

All of us will agree that this abacus does not know it is representing the number X (for any X). The abacus has not learned how to represent anything. It is just a pile of wood into which we project meaning. It is we who say this arrangement of wood means X, and that another means Y. This cannot be controversial.

The “software” for bead manipulation is also not part of the abacus, but making it so is only a small engineering matter. There is nothing stopping us from making levers that move beads, as above with the electric abacus, but which still use muscle power, so that we can have one lever for “Add 2”, or whatever. Electric gear-and-lever adding machines did just that (my dad had one).

And we could also expand our abacus. All we have to do is to say that certain numbers, which are only configurations of beads, represent certain Latin letters (use which alphabet you prefer). Of course, these letters are not in our expanded abacus. They are still in our intellects. The abacus remains just a pile of wood that happens to be a certain way.
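The convention can itself be written down. A sketch, with an arbitrary numbering of the Latin alphabet (the particular mapping is an illustrative assumption, much as ASCII itself is a convention):

```python
# The beads only ever hold numbers; it is our convention that maps
# some of those numbers to letters. This numbering is an arbitrary
# choice for illustration, as any such code must be.

LETTERS = {i + 1: ch for i, ch in enumerate("ABCDEFGHIJKLMNOPQRSTUVWXYZ")}

def decode(bead_values):
    # Project letters onto a sequence of bead configurations.
    return "".join(LETTERS[v] for v in bead_values)
```

So `decode([8, 9])` returns "HI", but the H and the I are in the convention (and in us), not in the beads.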

Well, since we can extend the numbers to mean letters, we can also have the bead positions represent images, too. We could even have the abacus look like an image using colored beads. Just step back and squint a bit to see what image is there! Again, and as is obvious, the image isn’t in the abacus. It’s in our minds. We piece it together using the visual stimuli presented by the abacus, which, again, is just a pile of wood.

Continue on in this fashion, adding to the abacus so that it can, eventually, do “floating-point” calculations, which is to say, approximations to real numbers (“real” as in unobservable infinitely precise numerical entities that live on the continuum). We’re tired of hearing it, but those approximations are not in the abacus. They’re in our minds. Why? Because—can you guess?—the abacus is just a pile of wood.

A huge pile at that. It’s going to take a goodly number of beads to divide 3 into 1 and derive a reasonable finite approximation. But this is a thought experiment, so size is no limitation.
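The long division itself is just such a mechanical bead procedure. A sketch, hedged as before (the function name and representation are illustrative):

```python
# Long division as pure bead manipulation: each loop step is a shift
# and a subtraction, and the output is only ever a finite digit string,
# never the unobservable limit itself.

def approx_divide(numerator, denominator, digits):
    # First `digits` decimal digits of numerator/denominator
    # after the decimal point.
    result = []
    remainder = numerator % denominator
    for _ in range(digits):
        remainder *= 10
        result.append(remainder // denominator)
        remainder %= denominator
    return result
```

Dividing 3 into 1 this way yields digit after digit of 3; each further digit costs more beads, and no finite frame ever holds the limit.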

So here is the big question. At what point does the abacus become self aware, in the sense that it knows X as X? Here is a second related but dissimilar question: How many beads and sticks on which the beads slide are needed until the abacus becomes an adequate representation of a human brain? Well, that can be calculated. Doubtless we would deplete the local lumber yard, and, given its size, we might need to begin construction of our beast at some Lagrangian point. But never mind that: answer the first question.

It’s obvious. Never.

No matter what, our wooden brain simulation would still be nothing but a giant pile of wood. It would never become self aware in the sense of having an intellect that recognizes universals, such as numbers. To say that it would, to claim, that is, that intellect “emerges” when some crucial point is reached, is to invoke magic. It is to say a pile of wood to which one more bead is added becomes imbued with intellect and will, but that without that bead, it is just a pile of sticks.

And the thing that makes this happen is magic. Has to be magic. It can’t be physics, which hasn’t changed from before the crucial bead was added. Physics would have had to give the pile, with one additional bead, something more than it had.

Well, what if we moved the beads on the sticks faster? We asked questions like this before. Then, as long as we have taken care not to start a fire (no problem if we’re in space), the speed of bead movement is as nothing. Speed of calculation does not account for self awareness and intellect.

As we agreed originally, instead of wood, we can construct this abacus out of metalloids, metal, and plastic, and we can use electricity to move its internal states around. And we can do this fast, too. And we have: you’re reading this on the result.

But just like with the wood, at no point of size can this pile of silicon, metal, and plastic ever develop an intellect. As before, magic has to be invoked to say that the addition of one more logic gate turns the pile from a pile into a life which has an intellect and will.

Let’s face it. Strong AI, as it’s called, is an impossibility. More to come…Part II.

1. Franz Dullaart

But how do you account for human self awareness and intellect arising in a chemical soup?

2. Sheri

Franz Dullaart: Evolutionists don’t. They usually resort to gibberish when asked, or say something like “then the magic happens” from a Far Side cartoon years ago. It has never been explained, and asking always results in the evolutionist huffing off and name-calling, like all scientists who can’t fully explain their theories. There is no scientific explanation out there except “we’re sure it happened”, which, last time I checked, was not really science at all.

As for the abacus, that example fails because we can see the beads on an abacus. Computers, like electricity, are “magic” because no one really knows how either works. It’s easy to fool people when “magic” is involved. That’s also why there was never a takeover by propane lamps, but electrical appliances and lighting feature prominently in science fiction. In spite of what psychology teaches, human beings do not outgrow the “out of sight, out of mind” stage where a toddler thinks it’s magic when a toy “reappears” after Mom puts it behind her back with the child watching. No matter how many times Mom does it, the child, until a certain age—and apparently never in many, many cases—is always surprised by the reappearance of the toy and can’t tell you where the toy is. Adults are so very easily fooled by zeroes and ones and copper wires that make the screen light up.

This came in my email yesterday—I offer it as proof there are many fools out there who believe in magic:
Dear,
Want to see how this 130 year old technology can generate FREE energy out of thin air?
More important than that: want to know HOW you can get your hands on it…and WHY it will help you slash your power bill by up to 87%…even 100%?
It’s all about Thomas Edison’s HIDDEN GEM …and I will reveal to you exactly what it is…

I can provide the link for those of you interested in this HIDDEN GEM.

3. Franz Dullaart

So, it’s a form of magic?

4. Franz Dullaart

You chuck a lot of issues into the mix – none of which addresses my question.

5. Larry Geiger

An automaton is an automaton. It doesn’t matter how big it is or how fast it operates. It’s still an automaton. Essentially all modern computers are simple automatons.

6. Joy

Franz, that’s a good question.
Lee Phillips,
That would be an inane thing to say.
God is not a placeholder for things we don’t understand.
Unless, of course, you define him that way. In which case, he would be no better than Zeus.

7. DAV

This is an example of simplifying to the extreme then claiming it proves some point.

An abacus is more properly a computing tool and is on the same level as using fingers and toes or paper and pencil. Why not use a chalkboard instead? A thermostat is closer to a modern computer than the abacus because it computes a threshold without human intervention other than setting the threshold.

The modern computer has much higher functionality. You’ve made the argument before in re why it will never reach intelligence, but even that is an argument from ignorance. Since this is Part 1, I assume you are getting around to repeating it, so I’ll wait.

But just like with the wood, at no point of size can this pile of silicon, metal, and plastic ever develop an intellect.

Which will also be used (again) to account for the lack of intelligence in more complex computers. But Franz made a good point: the same argument can be used for a pile (or soup) of organic chemicals.

The problem is one needs to define what intelligence is first. Currently, some would have that definition so restricted even animals (which actually have living brains quite similar to ours) can’t achieve it. An example of going to the other extreme. Until there is an acceptable definition one can always argue using the No True Scotsman point of view.

They usually resort to gibberish when asked or say something like “then the magic happens”

Also can be said of “life”. Somewhere along the line it happened and no one knows how — not even YOS who is fond of gibberish.

8. Joy

A thermostat is still a man-made contraption.
So it is a machine with no intent of its own. It has intent which is the result of the original intention of the designer. Adding multiple interacting traps or triggers, so that a complex situation obtains and so that we can’t predict a response, is still the intent of the designer: to produce something which appears to have self awareness and will but is simply a machine that is unpredictable when required.

It seems that this is what the AI scientists are aiming at, the unpredictability of an outcome that can then be considered useful. i.e. an outcome can be put through another measure to test it for viability and then hey presto a thought is generated. I might be wrong but that’s what I can glean from what is going on. This situation still carries all of the intent and outcome of the designer.

It still cannot know it exists, or at least nobody would believe it if it said it did. It would be an extension of the designer. If there were many designers it would be like a piece of music where more than one person produced the thing.

Information machines are now being produced, so I’m informed, out of proteins, mimicking a cell.
Those working in the field, apparently, have ordained to say that there can be no output of such a machine that is not included in the original design. I could, given a bit of time, find the name of the person or organisation who said it.

Fingers are part of the body, I don’t see how they come into the argument.

This seems to show that separating the brain for consideration is a kind of simplification that leaves out information, similar to trying to separate the intellect for consideration. It’s flawed but necessary to have a conversation about it. Only nobody puts the body back, or the so-called intellect back together, afterwards. The conclusion is incomplete.

That’s the pickle these kinds of discussions are in. Add an entity which pretends to own the truth and it’s a discussion going nowhere.

9. DAV

A thermostat is still a man-made contraption.
So it is a machine with no intent of its own.

Being a machine does not mean it can’t have intent of its own. Arguing such because you have never encountered one is an example of argument from ignorance. At best, it’s improbable but you can’t say it is impossible.

Fingers are part of the body, I don’t see how they come into the argument.

They can be used as objects for computing no different than using a pile of stones. It’s irrelevant what they are attached to.

It seems that this is what the AI scientists are aiming at, the unpredictability of an outcome that can then be considered useful. … there can be no output of such a machine that is not included in the original design.

Perhaps currently but the term Artificial Intelligence has been usurped from its origin to mean things like data mining and classification. No one seriously considers a Naive Bayes classifier to be intelligent (or at least shouldn’t) but one could be a building block.

“Artificial Intelligence” originally meant computer models of proposed mechanisms for demonstration of feasibility, but that doesn’t prevent some future model from being complex enough to exhibit all (or most) of the characteristics of intelligence (assuming we can ever get beyond a vague and/or ad hoc description which can be applied).

10. This was one of your best posts ever. On a par with the SAMT series.

11. Ken

RE: “But just like with the wood, at no point of size can this pile of silicon, metal, and plastic ever develop an intellect. As before, magic has to be invoked to say that the addition of one more logic gate turns the pile from a pile into a life which has an intellect and will.
“Let’s face it. Strong AI, as it’s called, is an impossibility.”

THAT’S not what Elon Musk says — he’s afraid that Artificial Intelligence (AI) will take over humanity, or something (any search will turn up oodles of articles about that).

In response, Musk is championing ‘a brain-computer interface venture called Neuralink centered on creating devices that can be implanted in the human brain, with the eventual purpose of helping human beings merge with software and thereby keep pace with advancements in AI.’ (e.g. see: https://www.theverge.com/2017/3/27/15077864/elon-musk-neuralink-brain-computer-interface-ai-cyborgs)

In other words, to save humanity from AI run amok (e.g. by Schwarzenegger-style AI-self-aware Terminators exterminating mankind), Musk is endeavoring to save humanity by destroying it — by transforming humanity into the Borg (re Borg ref https://en.wikipedia.org/wiki/Borg_(Star_Trek)).

It’s a sad indicator: When our eccentric billionaire visionaries have to ape old sci-fi movie plots in their quest to develop The-Next-Big-Thing … it just shows how intellectually bankrupt our society has become …

Speaking of bankruptcy, the Motley Fool reports Musk’s SpaceX is barely turning a profit, when it’s not losing money (which it does of late), and others report Tesla finally recorded a profit…but on unsustainable sales of California Zero Emission Vehicle (ZEV) [environmental] credits. Those persistent imminent dalliances with bankruptcy in our [still] capitalistic system may explain Musk’s newfound fondness for transforming humanity into the Borg — a human/technology combination that enables a degree of communist collectivism even an LSD-fueled pairing of Marx & Lenin couldn’t have dreamed up.

12. Ray

“The “software” for bead manipulation is also not part of the abacus, but making it so that it is only a small engineering matter.”
I have an abacus and it uses bi-quinary arithmetic. I used to do digital design and designing digital logic or software to perform bi-quinary arithmetic would be a challenge. Computers typically use binary, octal or hexadecimal arithmetic because those are all powers of two.

13. bat8

> No matter what, our wooden brain simulation would still be nothing but a giant pile of wood. It would never become self aware in the sense of having an intellect that recognizes universals, such as numbers. To say that it would, to claim, that is, that intellect “emerges” when some crucial point is reached is to invoke magic. It is to say a pile of wood to which one more bead is added becomes imbued with intellect and will. But that without that bead, it is just a pile of sticks.
>
> And that that thing that makes this happen is magic. Has to be magic. It can’t be physics, which hasn’t changed from before the crucial bead is added. Physics had to give the pile something more than it had with one additional bead.

I think this argument is just the heap paradox. Is it invoking magic to say that there is a crucial grain of sand at which a group of grains of sand becomes a heap? If so, then does that prove that heaps of sand are impossible to make? I think that, if there has to be a threshold for heaps, there must also be a threshold for intelligence. Sorry for the weirdness of the analogy, but I think it holds. Consider this reformulation: If we started building the human body atom by atom somehow, would there be a crucial atom at which we achieve an intelligent being? Is it invoking magic to say that there is? If so, does that mean intelligent humans can’t exist?

I mean to use this analogy to say that, if applying your argument to other things such as heaps results in a paradox, then clearly it’s not a good argument and you haven’t proven anything. If it’s possible to build object X using elements of material Y, then it follows that there must be some crucial element of material Y that, when added to some built configuration* of material Y, yields object X. If it’s invoking magic to say that adding this last element of material Y will achieve you object X, and if that fact invalidates the possibility of adding it to get X, then it isn’t possible to build X using Y. Now, maybe you do think that it isn’t possible to build intelligence starting from wood and metal as materials, which is why I made that human body reformulation thing there.

* This configuration would almost be an object X, but missing one element of material Y. I’ll name it “configuration Z”, so you can refer to it easily in a reply if you want to.

14. Anon

Overheard on NJ Transit: “Is Elon Musk some kind of great visionary? Or is he just a bs artist?”

15. Well, the abacus would “know” if it could make reference to itself.

16. Sander van der Wal

Computers are not supposed to become intelligent because of more logic gates, but because of algorithms, some of which will run faster when there are more gates.

Secondly, they do not have to become self-aware, or to possess free will. Even though it is hard to see how an AI without free will will develop a desire to improve itself and make humans obsolete/irrelevant as a side effect, or intentionally.

17. DAV

Computers typically use binary, octal or hexadecimal arithmetic because those are all powers of two

They only use binary. Hex and octal are representations used by humans to shorten strings of digits. Not that hex and octal adders can’t be built, but why bother when the adder can be built by chaining simple configurations?
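The chaining in question can be sketched as a toy model (bits are taken least-significant first; the names here are illustrative, not hardware terminology):

```python
# The "simple configuration" being chained: a one-bit full adder.
# Octal and hex are only how we group the resulting bits, three
# or four at a time, for human convenience.

def full_adder(a, b, carry_in):
    # Two input bits plus carry in; returns (sum bit, carry out).
    total = a + b + carry_in
    return total % 2, total // 2

def ripple_add(x_bits, y_bits):
    # Chain the full adders; the carry out of each stage feeds the next.
    carry = 0
    out = []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]
```

For example, `ripple_add([1, 0, 1], [1, 1, 0])` is 5 + 3 with bits least-significant first, giving `[0, 0, 0, 1]`, i.e., 8.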

I used to do digital design and designing digital logic or software to perform bi-quinary arithmetic would be a challenge.

Not any more of a challenge than making one to do decimal arithmetic. There have been more than a few early computers which used decimal arithmetic. Here’s a partial list: https://en.wikipedia.org/wiki/Decimal_computer . There are also some which used bi-quinary: https://en.wikipedia.org/wiki/Bi-quinary_coded_decimal
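Bi-quinary is simple to state: each decimal digit splits into a “bi” part worth five and a “quinary” part worth zero to four, like a rod’s heaven and earth beads. A sketch (the function names are illustrative):

```python
# Bi-quinary coded decimal: digit = 5*bi + q, with bi in {0, 1} and
# q in {0..4} -- one "heaven" bead and up to four "earth" beads per rod.

def to_biquinary(n):
    # Encode each decimal digit of n as a (bi, quinary) pair,
    # least-significant digit first.
    pairs = []
    while True:
        d = n % 10
        pairs.append((d // 5, d % 5))
        n //= 10
        if n == 0:
            return pairs

def from_biquinary(pairs):
    # Read the pairs back as an ordinary integer.
    return sum((5 * bi + q) * 10**i for i, (bi, q) in enumerate(pairs))
```

So `to_biquinary(68)` gives `[(1, 3), (1, 1)]`: 8 is five-plus-three, 6 is five-plus-one.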

18. Akinchana Das

*The above post seems to have glitched and wiped out much of what I wrote, so I’m posting it again.

For a system S possessing non-zero knowledge and/or intelligence at any time t to be deemed self-cognitive, it must have possessed non-zero knowledge and/or intelligence for all T 0 implies the presence of a (stored) program and, hence, non-zero memory. Therefore, the only logical prior state at any given time to a present state of knowledge and intelligence would be option 4 – some amount of both knowledge and intelligence.

This and further evidences along these lines, also refutes the argument that the self/soul begins with the birth of the body (as presented in an earlier post by W. M. Briggs on this website), as the conscious self, if truly self-aware, cannot have a beginning point in the way that a computer does. The cognitive state must always have been preceded by cognition. In other words, if one can think, then there was never a time when one was not thinking, nor shall there be such a time in the future.

19. Joy

“Fingers…They can be used as objects for computing no different than using a pile of stones. It’s irrelevant what they are attached to.”
That’s throwing out information, isn’t it? When sampling human intelligence to describe how it works, you need the whole body. Fingers do some clever things, and they weren’t able to, to begin with. Same for everybody, to a lesser or greater extent. They are input and output, to use the computer analogy. That is all it is, though: analogy.
~~~~~~~~~~~
“‘Artificial Intelligence’ originally meant computer models of proposed mechanisms for demonstration of feasibility but that doesn’t prevent some future model from being complex enough to exhibit all (or most) of the characteristics of intelligence (assuming we can ever get beyond a vague and/or ad hoc description which can be applied).”
Yes, and that’s the puzzle. The original design, the original intent, is what would enable intellect to be properly copied. That’s the part which I say is impossible.
It’s maybe rather mean to say it can’t be done, because it stifles innovation. So people having a go is important. I wonder if they’ll think they’ve found it when they haven’t. Without faith science goes nowhere.
~~~~~~~~~~~
“Being a machine does not mean it can’t have intent of its own. Arguing such because you have never encountered one is an example of argument from ignorance.”
Not so, it’s an argument from incredulity!
The same argument posed by materialists against the existence of God. It’s reasonable.
However to say that some things are impossible is not ignorant. To be ignorant, the thing would exist but I’d be one of the unlucky ones who never saw one. To be ignorant of the unknowable or the future is to be ordinary.
I’m claiming that a machine is a thing which is performing a task on behalf of a human, a tool. If the task is to produce an unpredictable outcome, it is still executing its programme, its potential set and activated by the user. It isn’t thinking for its own purposes but processing, which is not the same thing as thinking, knowing, then deciding, for example, that it won’t bother today because it realises it doesn’t have to, like a naughty human or dog might.


20. Plantagenet

Funny. You try to explain the viability of a soul and patiently discourse on Aquinas, or Augustine, or Plato, or Leibniz. Perhaps a primer on logic and metaphysics. You are then patronized with some version of “No, you’re making it all too complex; it’s really quite simple to see where you are wrong”. Chances are at this point you will be subjected to (really bad) interpretations of Occam’s Razor and blahty blahty blah. However, put forward a simple analogy to show the logical failings of current, maybe all, AI speculation and it’s “Oh no no no, it’s really quite complex understanding where you’ve gone wrong.” At this point you realize you’re playing a shell game and leave to relax with a good whiskey…partial to Macallan myself.

21. How can humans make something that is more intelligent than themselves? I don’t think the human mind can make that leap. To be able to make a synthetic human as intelligent as its makers, humans have to be smarter than they are. Just trying to make this human intelligence will lock humans into an infinite loop. The quest will never end. It’s a waste of time and resources.

It does make good science fiction.

22. Richard Hill

WMB: Are you going to discuss “Whiteheadianism”?

23. Briggs

Two best criticisms are from bat8 (heap paradox) and Sander (algorithms). Sander’s will be answered later. The Sorites or heap paradox is solved easily, though. Before moving to the paradox, define heap. You will see there are lots and lots and lots of hidden and tacit premises in that definition (dependent on situation, material, etc.), which is why the heap paradox “feels” like a paradox. I emphasize hidden or tacit premises in Uncertainty. Anyway, here the paradox does not apply, because we never get to the “heap”. We are ever a pile of wood.

The other critiques are along the lines of “I don’t know how intellect and will can be found in a pile of wood, but I’d hate to think they could not be, therefore arguments which say they can’t be are gibberish.” These need no comment.

24. DAV

Gary,
Here is an article on the B5000 which plainly says octal arithmetic.

Sorry, but not really true. Octal only because of three-bit grouping — i.e., octal in conception only. The actual hardware used binary. From the document you provided:

The Word Mode uses octal number system, and information is handled one word at a time. A parallel binary adder is used for performing arithmetic operations in this mode.

Character mode arithmetic however used serial BCD arithmetic.

As for the IBM S/360, perhaps, but there is no advantage to using a 4-bit-at-a-time adder over a binary one. It only adds unnecessary complexity. A possible exception, though, for multiplication and division, where shifting would be done four bits at a time — a four times speed advantage. The easiest way to implement this would be table lookup, but I don’t have the particular details on the hardware.

The Burroughs B-250 used table lookup to perform BCD arithmetic IIRC but it’s been 50-some years. Found a photo of one from the CMU Athena group of which I was a member.
http://www.silogic.com/Athena/photos/Athena0014.jpg
http://www.silogic.com/Athena/Athena.html

This is all OT though.

25. DAV

That’s throwing out information isn’t it?

Perhaps but only the irrelevant information.

The original design, the original intent that would enable intellect to be properly copied.

The original work was confined to subsystems because it was recognized that an AI which mimics full human behavior was beyond the hardware capabilities at the time (and likely still is). It has also been recognized during early attempts at language processing that certain concepts have roots in shared experiences (such as having similar bodies) but that doesn’t mean an entity which doesn’t participate in this sharing can’t be intelligent — particularly since there is no firm definition of “intelligent” outside of a list of human characteristics.

I’m claiming that a machine is a thing which is performing a task on behalf of a human, a tool. If the task is to produce an unpredictable outcome which is executing it’s programme, it’s potential set and activated by the user. It isn’t thinking for it’s own purposes but processing, which is not the same thing as thinking, knowing, then deciding

Several things:
1) Why does it need to be unpredictable? Merely because we don’t understand the workings? Why is that necessary?

2) What is thinking and deciding and how are they not processes? How do you know that people aren’t also processing when they think and decide? What are people actually doing when they “think and decide”? How do they do it?

3) What exactly does it mean “to know”? I mean beyond the “knowing” of interrelationships. Like above, what are people actually doing when they “know’ something? How does it work in people?

26. “This is an example of simplifying to the extreme then claiming it proves some point.”

Whereas when you reduce the entire careful argument to a one-sentence simplification so you can dismiss it, this is something other than simplifying to an extreme and claiming it proves a point?

The process of analyzing a question to its basics is the root step of all rigorous thinking, either scientific or philosophical. That is what is done here.

If you insist on simplifying the argument, do so as follows: 1. A mechanical operation is not self aware. 2. No added mechanical operations, when linked to the first, grants self awareness. 3. Building a machine consists of linking mechanical operations together. 4. Therefore building a self aware machine is not possible.

27. DAV

2. No added mechanical operations, when linked to the first, grants self awareness.

You mean not one of which you are aware. Argument from ignorance. Taking “mechanical” loosely, it did occur at least once or else you yourself are not self aware.

28. Dodgy Geezer

Continuum fallacy. Have you never heard of Emergent Properties? Go and read Hofstadter…

29. Gary in Erko

Forget emergent properties. That’s a furphy in this case. All creatures have full consciousness, but cellular development along the evolutionary path needs to reach certain degrees of sophisticated interrelationships before that consciousness can be comprehended and utilised at the suitable level for that creature. It’s an innate property of being alive, but can function only at the ability level of the cellular “tools”. It’s not an emergent property of cells; it’s an innate and essential (ie. of the essence) property of life itself.

It’s at the other end of the piece of string. Consciousness didn’t arrive with later evolution. It was discovered by the later evolutionary creature; us. It had been there all the time in every stage of evolutionary life.

30. Ye Olde Statistician

1. Heaps are agglomerations that are not things; that is, any mereological set, such as {X|X=John Wright,Saturn}. We may call this the Johnsat (or perhaps the Wrightturn). It does not constitute a thing (substantia, ousia) because its parts are not internally related one to the other. So even two particles of sand constitute a heap, even if colloquial speech would not notice this. There is no qualitative difference between 2 grains and 200,000 grains.

2. Algorithm. I have seen Al Gore. He has no rhythm.

31. Dodgy Geezer

…How can humans make something that is more intelligent than themselves? I don’t think the human mind can make that leap….

I will tell you how if you tell me your definition of ‘intelligence’…

32. Larry Geiger

No Ray. It was a binary, digital computer:
Arithmetic and comparison operations are performed through the use of a parallel binary adder. Operands are formatted as 13 octal digit mantissas plus sign, with an exponent of two octal digits plus sign.

You are confusing memory organization and registers with actual processing. Binary, digital computer. Just like a Mac. Just like a PC. Just like computers built using gates and transistors.

33. Ken

RE: “But just like with the wood, at no point of size can this pile of silicon, metal, and plastic ever develop an intellect. As before, magic has to be invoked to say that the addition of one more logic gate turns the pile from a pile into a life which has an intellect and will.”

THE argument seems premised on false assumptions that are not stated (and almost certainly not, or only very poorly, supported):

First Principles–what causes intellect to form (or, in other words perhaps, what is necessary for sentience)?

The answer seems to be a LOT of neural connections, or in the case of a man-made computer, something very equivalent. So far, no computer remotely comes close to the human brain, so this remains untested. For now. (e.g. see http://bgr.com/2016/02/27/power-of-the-human-brain-vs-super-computer/)

On the other hand, we can study, and have studied, what happens when a functioning neural network associated with sentience/intellect is reduced: Brain injuries and stroke victims show exactly what happens when certain parts of the brain are destroyed, very consistent forms of intellectual deficiencies result from particular damage. We also know that if enough neural matter is destroyed, all evidence of intellect vanishes, even though the physical body may survive indefinitely if separately cared for. These are tangible cause-effect relationships repeatedly observed, in humans and other animals.

Note how certain people will try & invoke “soul” and support that by philosophical reasoning to argue about intellect (i.e., sentience) to deny via various mental gymnastics the physical evidence that consciousness/sentience/intellect are manifestations of compact, dense neural networks. The glaring omission for the soul argument is the explanation for what happens to someone’s soul as they progress, for example, thru multiple successive strokes manifesting in increasingly severe intellectual, and commonly moral, deficiencies — is God harvesting that person’s soul in increments to be reassembled later, elsewhere? And if so, why are some moral values degrading [again on predictable and repeatable patterns]?

34. Ye Olde Statistician

Intellect is not mere sentience. Sentience is simply the awareness of self as something distinct from the rest of sensory reality, and it is available to any organism with a common sense. (Each sensory signal arrives in the brain at a different instant, so there must be an internal sense that unifies all the different sensory inputs into a common “image”, and privileges these inputs over those.) Thus, this patch of redness, this smooth feel, this cool sensation, this crunchy snap, this sweet taste, etc., are all one thing external to us; viz., a red, cool, smooth, sweet, crunchy apple.

Aquinas believed that the seat of consciousness/sentience was in the brain, which most folks still believe.

Intelligence is, as the word interlegere implies, the ability to read between the lines; i.e., to grasp something that is not physically present. Thus, it is not simply cleverness in manipulating physical objects. One must be sentient prior to being intelligent. And I suppose intelligence is a prerequisite for an intellect; but these are not interchangeable concepts.

35. Jim S

One’s position on AI can pretty much be traced to one’s position on the free will vs. determinism debate. Materialists/determinists see free will as fundamentally violating both the law of conservation of energy and causation. The ideas surrounding the early framing of the AI “problem” took place within the context of psychological Behaviorism and the Analytical and Logical Positivist schools of philosophy.

Behaviorism dismissed any notion of “intentionality” as unscientific because it cannot be observed and measured. This variant of “positivism” has its roots in the ideas of Bentham, Huxley, Comte, Marx, and others. Turing’s test, similarly, specified that if the “behavior” of a computer program is such that a person cannot distinguish its “behavior” from that of another person, then it must be said to be “thinking”.

Analytical Philosophy and Logical Positivism tried (broadly speaking) to reduce thought to deductive reasoning within finitary, formal systems. The ideas of Boole, Peirce, Frege, Mach, Russell, Whitehead, Hilbert, etc. were instrumental in the development of computer programming, but equating “thought as behavior” and adopting the metaphor of “brain as a computer” was largely done for ideological reasons, not scientific ones.

With regard to the conservation of energy, animals (including Man) are not Cartesian automatons; they are causal agents. Thought is not just a “reflex arc”. Conscious deliberation and making choices does consume energy (so much so that if you are fortunate enough to live to 90, you will have slept 30 years), but the diversion of energy from one system to another does not mean that the total energy of the system is exceeded. A living organism is not a “closed” system. Entropy wins in the end, but it can be staved off for a while.

With regards to the violation of causation, some (such as Penrose) have appealed to the “indeterminism” of Quantum Mechanics, but this is just a failure to distinguish between epistemic indeterminism and ontological determinism.

36. Larry Geiger

“With regards to the violation of causation, some (such as Penrose) have appealed to the “indeterminism” of Quantum Mechanics, but this is just a failure to distinguish between epistemic indeterminism and ontological determinism.” Wow. That’s a mouth full.

37. Jim S

from Larry: “Wow. That’s a mouth full.”

To put it another way, “the map ain’t the territory” (or, more pertinent to this blog, the General Circulation Model ain’t the Climate). To believe so is to commit a reification fallacy.

38. Fr. John Rickert, FSSP

A question: Is there any implication or significance to the fact that the operations of the abacus are reversible but that certain acts of the intelligence are not? With the abacus, we can move the beads back and forth as much as we wish, or flip 0’s and 1’s in a computer, etc. But I would at least hope it’s the case that when we realize 2 and 2 make 4, we do not revert to any earlier state in which this was unknown. The knowledge sticks. Just wondering.

Excellent article and argument.

39. Joy

“That’s throwing out information, isn’t it?”
“Perhaps, but only the irrelevant information.”

Most would agree that the definition of ‘intelligence’ is not clear and has its own inherent problem: the thing being defined is being used to define itself. That’s a truly never-ending problem.

It does, however, seem to me to be the place to start, and if that’s not done satisfactorily then the basis of other conclusions will be questionable.

The AI quest has been misunderstood by spectators like myself, but I was, and secretly still am, under the impression that the aim is ultimately indeed to mimic a living intelligence, if not a human one. This might just be the media and/or sci-fi twits who want it to be the case in every given example of AI R&D.

Artificial is the given. ‘Intelligence’, like you say, is where the conflict lies.

A laptop is intelligent, colloquially speaking, but this would be a clearly dreadful example of actual intelligence. So what is the bar which makes that so unreachable? I say intellect is more than memory and processing, input and output, which is machine speak.

Piles and heaps are a distraction that only proves that the real world is not digital. It rather supports the argument against a fully artificially simulated intelligence. It is more analogy, and relies on the controversial lack of a clear definition.

That is the same problem as in the soul argument.

People know intelligence when they see it.
They might be fooled for a while, but real intellect and intelligence is not something that can be fully understood in all its dimensions, fully orbed! One would have to be outside of the system to have a chance.
~~~~~
“It has also been recognized during early attempts at language processing that certain concepts have roots in shared experiences (such as having similar bodies) but that doesn’t mean an entity which doesn’t participate in this sharing can’t be intelligent — particularly since there is no firm definition of “intelligent” outside of a list of human characteristics.”

How could something which does not exist be designed by copying the thing of which there is no example?

“Several things:
Why does it need to be unpredictable? Merely because we don’t understand the workings? Why is that necessary?”

It isn’t, except that it’s one feature of human nature: fallibility. Also, I thought the unpredictability was being used to show original thought, but you covered that.

2) What is thinking and deciding and how are they not processes? How do you know that people aren’t also processing when they think and decide? What are people actually doing when they “think and decide”? How do they do it?

Nobody really knows where the knowing part resides. That is the puzzle. Processes occur in the body, from a to b, for the consideration of the observer, and even those are more complex the closer one looks. It’s superficial to consider a process linear when dealing with the body; it’s just a guide. A process implies one thing, then the next, and so on, to render a result, that process being repeatable and predictable. Like a production line. A computer programme is a process; sneezing is a process, but only in a superficial way.

3) What exactly does it mean “to know”? I mean beyond the “knowing” of interrelationships. Like above, what are people actually doing when they “know” something? How does it work in people?”

THAT is an excellent question! That is what nobody has ever been able to answer and, I predict, never will. The reason is that the thing which you call ‘knowing’ is as certain to you as it is to anybody contemplating it (feeling it; knowing is a type of feeling), and it’s what makes you you.

Everybody knows what they are talking about and yet nobody can talk about it properly.

40. DAV

Artificial is the given.

Artificial means “not naturally occurring”, IOW “man-made”. “Artificial sweetener” doesn’t mean “not really” a sweetener; it means one not occurring in nature. It’s also not necessarily a copy, but more a functional equivalent.

There is a great temptation to compare the performance of an AI to a human. Mostly I suppose because we have no other examples when there is an insistence that only humans can think. To me that’s a bit like saying airplanes don’t fly because they don’t flap their wings like birds do. They only simulate flight.

A process is just a way of doing things. It’s not necessarily linear. I found it odd that you said an AI would “just” process.

Computer programs usually aren’t linear. In fact, most programs written today are so complex that they can’t be fully tested. We are already at the point where they can surprise us.

41. Fr. John Rickert

To Sander: Are the limits of algorithms the same as the limits of intelligence? If the answer is “yes,” then there should be an algorithm to prove that, correct? Yet we also know from Rice’s Theorem that algorithms do have limits. I think one will end up in a tangle by asserting the equality of intelligence and computability: I suspect that this very assertion of equality is undecidable.
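[Editor’s aside: the limits Fr. Rickert alludes to can be made concrete with the classic diagonal argument behind the halting problem, which Rice’s Theorem then generalizes to every non-trivial semantic property of programs. A toy Python sketch, assuming a hypothetical oracle `halts(program, arg)`:]

```python
# Sketch of the diagonal argument that algorithms have limits.
# Suppose some oracle `halts(program, arg)` correctly predicted whether
# program(arg) terminates. `make_contrarian` builds a program that
# defeats any such oracle by doing the opposite of its prediction.

def make_contrarian(halts):
    def contrarian(program):
        if halts(program, program):   # oracle says "it halts"...
            while True:               # ...so loop forever instead
                pass
        return "halted"               # oracle said "loops", so halt
    return contrarian

# Whatever a claimed oracle answers about contrarian(contrarian) is
# wrong, so no total, correct `halts` can exist.
```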

42. Sander van der Wal

@Fr. John Rickert

I don’t see a problem with that statement. If an intelligence cannot come up with an algorithm to compute some action then that intelligence is limited by that lack. If an intelligence can come up with an action without using an algorithm then the intelligence is not based on algorithms only.

Problem is that you can make algorithms out of lots of things, even guessing.

43. Fr. John Rickert

Sander — Well, I may not quite grasp your point, but to clarify mine, I’m not so much asking for an explicit program that shows the asserted equivalence, although that would be especially convincing; the question is whether such a program even exists at all.

I recommend the excellent “Introduction to Theory of Computation” by Anil Maheshwari and Michiel Smid, available for free.

44. Joy

This nearly foxed me:
“‘Artificial sweetener’ doesn’t mean ‘not really’ a sweetener. It means one not occurring in nature. It’s also not necessarily a copy but more a functional equivalent.”
It’s an engineer’s offering. A toy or a very useful and important tool. It’s a substitute for the real thing and there’s nothing like the real thing.

“There is a great temptation to compare the performance of an AI to a human. Mostly I suppose because we have no other examples when there is an insistence that only humans can think.”

“To me that’s a bit like saying airplanes don’t fly because they don’t flap their wings like birds do. They only simulate flight.”
Yes, but the statement misses out the information that we agree airplanes don’t fly naturally. Nature is left out of the wording. The aeroplane exists within nature.

“I found it odd that you said an AI would “just” process.” Why?
Because even if you say the programme is algorithmic (I think that’s the term?), does that mean it isn’t still just a process?
A process does not occur without a reason, can I put it that way? Processes don’t exist on their own.

45. Ray

Dav and Larry,
I was looking for information on the IBM and Burroughs arithmetic units and couldn’t find any, so I will just wing it. Dav pasted this:
“The Word Mode uses octal number system, and information is handled one word at a time. A parallel binary adder is used for performing arithmetic operations in this mode.”
Unfortunately, the document doesn’t say anything about the architecture of the parallel adder. Parallel adders use carry-lookahead logic to avoid waiting for a carry to propagate through all the adder stages. They are called, not surprisingly, carry-lookahead adders (CLAs).
The problem with the CLA is that the carry-lookahead logic rapidly increases in complexity with the word size. The Burroughs computer used 48-bit words (operands?), but they didn’t use a 48-bit parallel adder because of the complexity of the carry-lookahead logic. It would have been very costly to build a CLA for 48-bit words. To reduce the complexity and cost, the computer designer would design the parallel arithmetic unit on a bit-slice basis. Burroughs used sixteen 3-bit full adders to simplify the carry-lookahead logic. IBM did the same thing, only they used eight 4-bit full adders because the IBM machine used 32-bit words. The Wikipedia article shows a 4-bit CLA, and it can be cascaded into 8-, 16-, and 32-bit parallel adders without much additional complexity.
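[Editor’s aside: the generate/propagate scheme Ray describes, and the cost growth that forced the bit-slice compromise, can be sketched as follows. A toy Python illustration, not the actual Burroughs or IBM circuit:]

```python
# Toy 4-bit carry-lookahead adder (CLA). Every carry is written
# directly in terms of generate (g), propagate (p), and the carry-in,
# instead of rippling stage by stage.

def cla_add_4bit(a, b, c0=0):
    """Add two 4-bit numbers; return (4-bit sum, carry_out)."""
    g = [(a >> i) & (b >> i) & 1 for i in range(4)]    # generate:  a AND b
    p = [((a >> i) ^ (b >> i)) & 1 for i in range(4)]  # propagate: a XOR b
    # Fully expanded lookahead carries; note that each one needs one
    # more product term than the last. This is the growth in complexity
    # that made a flat 48-bit CLA impractical.
    c1 = g[0] | (p[0] & c0)
    c2 = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c0)
    c3 = (g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0])
          | (p[2] & p[1] & p[0] & c0))
    c4 = (g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1])
          | (p[3] & p[2] & p[1] & g[0])
          | (p[3] & p[2] & p[1] & p[0] & c0))
    carries = [c0, c1, c2, c3]
    s = sum((p[i] ^ carries[i]) << i for i in range(4))
    return s, c4
```

Chaining small slices like this, as Burroughs and IBM did, keeps each slice’s lookahead logic cheap while still avoiding a full-width ripple.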

46. Martin Dillon

I have a simple question that I don’t think has been addressed by Mr. Briggs or by the comments here. The question: is the lack of engineering know-how the only reason why we can’t construct a functioning human organism from raw materials? [Raw materials = atoms.] Let us suppose for the sake of argument that the answer to this question is yes. Given a plan and the appropriate raw materials, we could construct a person. We have to imagine here the possibility of appropriate biochemical processes as a major part of this plan. Does this shed any light on the question of AI? It seems unreasonable to me to suppose that if we could accomplish the construction of a person from raw materials, we could create a similar mechanism from digital equipment.

Could one reasonably argue that engineering could not produce a person?

47. Dodgy Geezer

…The question: is the lack of engineering know-how the only reason why we can’t construct a functioning human organism from raw materials?…

My understanding is no…and yes. I assume that you are talking about assembling from atomic particles rather than chemically cutting and splicing pre-existing genes to create a ‘new’ life-form out of existing life-forms, which is something we do on a routine basis today.

We understand how to manipulate individual atoms in lab conditions, and can make small assemblages to perform various functions at the atomic level. However, building a large assembly like a cell would be hard, because it is in a constant state of flux, and our assembly techniques only work with static atomic structures. We would also need to know the precise position and orientation of each atom in a cell at a given moment, together with the electrical potentials, which is data we have not gathered.

Nevertheless, I believe that we understand enough to attempt the building of cellular sub-modules – at least in principle. We would probably hit all sorts of engineering problems along the way, but solving these is what engineers are for.

If you wanted to assemble a complete human from scratch, the best way would almost certainly be to assemble a fertilised cell and let it grow in an artificial womb.

48. Akinchana Dasa

Martin Dillon,

A ‘person’ cannot be produced or destroyed, as a person is an eternally existent principle/entity unlike a computer, which starts at a particular point in time. I tried to give the reasons why/how this is so in an earlier post, but it would not upload correctly with much of the material deleted. I could try to explain again, if interested.

49. Milton Hathaway

“Complex systems can sometimes behave in ways that are entirely unpredictable. The Human brain for example, might be described in terms of cellular functions and neurochemical interactions. That description does not explain Human consciousness, a capacity that far exceeds neural functions. Consciousness is an emergent property.” – Data

“In other words, something that’s more than the sum of its parts.” – La Forge (TNG: “Emergence”)

For the non-fans:

https://en.wikipedia.org/wiki/Emergence

50. Ye Olde Statistician

It doesn’t have to be all that complex. Wholes often have properties their parts do not. The properties of an atom are not those of protons and electrons. Nor does an electron bound to a valence shell of an atom behave as a free electron. It behaves as a part-of-a-whole rather than as a Ding an sich.

51. Julius Evola

Wow, the AI types really came out of the woodwork!

52. I’m aware that the comments were made 3 years ago, but:

Bat 8 said:
||I think this argument is just the heap paradox. Is it invoking magic to say that there is a crucial grain of sand at which a group of grains of sand becomes a heap? If so, then does that prove that heaps of sand are impossible to make?||

I have no idea why this is a paradox. This is obviously just how we choose to define “pile”. How does it apply to computers and them being conscious? Are you claiming an abacus is very slightly conscious? Why would we think it’s any more conscious than a boulder?

Or are you saying an abacus is wholly non-conscious, and a certain degree of complexity is required before consciousness appears? How is such a consciousness entailed if everything the computer does is just the execution of instructions? Where does the consciousness come from?

Also your post seems to presuppose brains somehow produce consciousness. How do you know they do? It seems to me the notion that brains produce consciousness is equally as absurd as the notion that computation produces or just *is* consciousness.

53. Since the three fundamental logic gates (AND, OR, and NOT) can be created out of dominoes, it follows that any computer could be replicated by dominoes (granting a mechanism to pick them back up).

Someone who believes the human mind to be a mere computer is compelled to believe that with enough dominoes, you create a person. How many dominoes is that, exactly?
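[Editor’s aside: the commenter’s first premise is easy to illustrate. With AND, OR, and NOT alone one can compose a full adder, the building block of binary arithmetic, and by chaining such blocks, any arithmetic unit. A toy Python sketch; the domino mechanics are left to the imagination:]

```python
# Toy illustration: with only AND, OR, and NOT, any Boolean circuit can
# be assembled. Here a 1-bit full adder (the building block of binary
# arithmetic) uses nothing but the three primitives; XOR is derived
# from them, not assumed.

def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def XOR(a, b):
    # (a AND NOT b) OR (NOT a AND b), primitives only
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def full_adder(a, b, carry_in):
    """One-bit full adder: returns (sum_bit, carry_out)."""
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out
```

Counting the dominoes then reduces to counting gates, times however many dominoes one gate costs.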