Will super-intelligent computers soon spell our doom, or have futurists forgotten something fundamental? Hint: it’s the latter. Go to the Stream to read the rest.
The chilling news is that killer robots are marching this way. PayPal co-founder Elon Musk and physicist Stephen Hawking assure us Artificial Intelligence (AI) is more to be feared than a Hillary Clinton presidency.
Google’s futurist Ray Kurzweil and Generational Dynamics’ John J. Xenakis are sure The Singularity will soon hit.
When any of these things happens, humanity is doomed. Or enslaved. Or cast into some pretty deep and dark kimchee. Or so we’re told.
It makes sense to worry about the government creating self-mobilized killing machines, or the government doing anything, really, but what’s The Singularity? Remember The Terminator? An artificially intelligent computer network became self-aware and so hyper-intelligent that it decided “our fate in a microsecond: extermination”. Sort of like that. Computers will become so fast and smart that they will soon realize they don’t need us to help them progress. They’ll be able to design their own improvements, and at such a stunning rate that there will be an “intelligence explosion”, and maybe literal explosions, too, if James Cameron was on to anything.
Xenakis says, “The Singularity cannot be stopped. It’s as inevitable as sunrise.” But what if we decided to stop building computers right now? Xenakis thought about that: “Even if we tried, we’d soon be faced by an attack by an army of autonomous super-intelligent computer soldiers manufactured in China or India or Europe or Russia or somewhere else.”
As I said, we surely will build machines, i.e. robots, to do our killing for us, but robots with computer “minds” will never be like humans. Why? Because computer “minds” will forever be stuck behind human minds. The dream of “strong” AI, where computers become superior creatures, is and must be just that: a dream. I’ll explain why in a moment. Machines will become better at certain tasks than humans, but this has long been true.
Consider that one of the first computers, the abacus, though it had no batteries and “ran” on muscle power, could calculate sums more easily and faster than could humans alone. These devices are surely computers in the sense that they take “states”, i.e. fixed positions of their beads, that have meaning when examined by a rational intelligence, i.e. a human being. But nobody would claim an abacus can think.
Why can’t there be a singularity? Go to the Stream to find out.
Oh, we have lots more to do on this topic. This is only a teaser.
Computer programs aren’t made of physical stuff either, which seems to be the linchpin of your argument. Can you explain the impossibility of them becoming rational (in any broad sense of the word, if not human-like rational)? There exist now programs that write programs and programs that discover new solutions to puzzle problems by self-modifying evolution of their code. Rudimentary, yes, but operating now and getting more complex.
My sense is that if machines ever do destroy civilization (humanity is a tougher nut) it will be by accident rather than design. Just a breakdown of massively interlinked control systems that disable power grids, communication systems, and other modern infrastructure on which we are evermore dependent.
> Computer programs aren’t made of physical stuff either
Of course they are. A text file containing code exists on a hard drive as a particular state of electrons (or in one of the various memories in the same way). A computer’s BIOS has its instructions stored on memory too, which is just electrons and matter in a certain configuration.
> Rudimentary, yes, but operating now and getting more complex
Someone always manages to tell the first code exactly how to behave, though. Whenever you read an article about that kind of coding, someone usually says it behaved “unexpectedly”, which is horse hockey. The code behaved exactly as it was programmed to. The result might be unexpected, but that’s just a lack of imagination.
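The determinism this comment points to is easy to see in a toy version of “evolutionary” code. Below is a minimal sketch (not any particular research system; the function name and parameters are my own invention) of a hill-climbing search that “discovers” coefficients for a line. Given the same seed, it does exactly the same thing every run: any “unexpected” output is fully determined by the program and its inputs.

```python
import random

def evolve(target, generations=200, seed=0):
    """Toy 'self-improving' search: mutate coefficients (a, b) so that
    a*x + b approximates target(x), keeping only improvements.
    Entirely deterministic given the seed."""
    rng = random.Random(seed)
    xs = range(-5, 6)

    def err(c):
        a, b = c
        return sum((a * x + b - target(x)) ** 2 for x in xs)

    best = (0.0, 0.0)
    for _ in range(generations):
        # The "self-modification" here is just a random tweak of the candidate.
        cand = (best[0] + rng.uniform(-1, 1), best[1] + rng.uniform(-1, 1))
        if err(cand) < err(best):
            best = cand
    return best

# Two runs with the same seed yield identical "discoveries".
print(evolve(lambda x: 3 * x + 2))
```

The result may look like the program “found” something on its own, but rerunning with the same seed reproduces it exactly, which is the sense in which the code behaves precisely as programmed.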
> it will be by accident rather than design
Your last comment says it all. I ran across this yesterday:
A simple solution to the entire “smarter than humans” thing—make humans dumber. Then the singularity is easy.
(Didn’t much of “The Terminator” violate the laws of physics?)
As with all discussions of “thinking”, the definition is what determines the answer. Could computers do as da Vinci did and come up with airplanes before there is any such thing? Maybe. Is that “thinking”? If you define “thinking” as taking information available and coming up with an idea not already in that data set, then yes. We then have the Star Trek problem with Data, who could think but not emote (until late in the movie series). If, of course, emotions are considered separate from thought, then Data did think. My dog props her chew toy toothbrush up on various things to get a better chewing angle. It looks for all the world like a dog using a tool. Many would say it is. Or is it that one time she found the brush propped up and it was so much nicer, so she started propping it up herself: remembering something that happened, not creating a completely new idea in her head? Is that thinking? It all boils down to how one defines thought.
Computers could learn to “think” using many of the definitions of thought people have proposed. In theory, this could cause havoc and the end of humanity. However, many things humans have created can cause their demise, whether or not the inventions can “think”. That’s not the real question here. What we are asking is “Can computers become human?” The answer to that is “no”. They can behave like humans, but they cannot be humans. (Further explanation: A human being can act like a dog, dress up like a dog and believe themselves to be a dog, but they still are not a dog. What one looks and acts like does not define their species. A robot is still a robot, no matter how smart it may be.)
Programs have two components: the code as implemented in electronic circuits and magnetized rust, and the set of logical ideas underlying the code. It’s the latter that meets Briggs’ non-physical criterion. He says our intellects are non-material, operating through material means, which makes them special. Logical ideas (e.g., addition) operate through the code. I’m not saying programs can become rational; I’m wondering about his criterion of non-materiality as the defining difference.
Computers have already been as smart as humans, because humans have been computers. *Electronic* computers became at some point available, so the human computers were replaced by the electronic ones. And it is the *electronic* computer that will supposedly become as intelligent as the *human* computer, at which point it will probably want a better-paying job too.
In my view, ‘Why can’t there be a Singularity?’ is an under-determined question, in the sense that answering it positively or negatively becomes possible, if-and-only-if certain premises are accepted.
Put differently, the Singularity question itself does not exist as an inviolate Idea; it can only be formulated within a collection of premises, a ‘language’ if you will; yet premise-collections differ.
And that there are questions literally unanswerable BECAUSE un-askable in the absence of the acceptance of a boatload of very particular priors is something well-admitted by Dr. Feser and at least some other neo-Aristotelians.
The question of the ‘translatability’ or context-independent universality of questions is itself a matter of deep dispute. I am inordinately fond of Chapter VIII of Alasdair MacIntyre’s Gifford Lectures, “Three Rival Versions of Moral Enquiry,” in part because of its passages on exactly this theme.
I point out that in his Gifford Lectures, MacIntyre reminds us over and over of the abject failure of the implicit Ninth Edition prediction that all rational people would at very least make substantial progress in resolving their disagreements about moral standards, criteria, and methods.
What MacIntyre did not remind us of was the evident ease with which great masses of people, including the very learned, can be induced to believe six impossible things before breakfast, and six exactly opposite things at lunch. And this near universal Agreement, at least among those who inhabit our culture, or one very like it, is Progress of a sort, is it not?
In effect Matt responds to the Singularity question by saying that minds are not like that, and cannot be like that. For him, this is arguing from premises. But others beg to differ. For them, Matt is creating a premise-collection in which the questions they would ask are un-askable — and nothing more than this.
So I do not think the Singularity question is as directly answerable as Matt imagines. Nor am I as sanguine about the ‘evident’ non-materiality of (say) the Integers, or of the human mind. Further, the strictly Catholic theological answer regarding some sort of non-materiality or materialism is, it doesn’t matter either way. It doesn’t matter whether the human mind is completely material or also has a non-material dimension, because whatever our human mind is like, Jesus the Lord had a human mind, too.
So not only am I relatively unimpressed by Matt’s ‘proof’ of the impossibility of the Singularity, I am relatively uninterested in the question itself.
However, the Singularity question brings to mind a more apt question, relevant in every age: “Can Man invent something, or simply be afflicted with something, or (worse) blindly or even willingly accede to something — some plague, some germ, some creature, some machine, some war, some perversion, some manipulation of himself, some anti-Christ, (some Singularity) — that will bring him Death, and bid fair to destroy him utterly?”
Well, yep. Yesirree, he can.
For instance: UK Scientific Adviser Urges Editing of Human Embryos
How fascinating that Important People like Elon Musk are worried about destruction by Singularity, which, as Matt rightly points out, may not even be able to exist, and yet seem not very perturbed by permanently altering — not ourselves — but our descendants, without of course even asking them if they enjoyed it.
So the Singularity will be interested in you, says Mr. Xenakis, even if you are not interested in it. Funny, I thought that war too had this character. Or evil. Or just plain bloody-mindedness.
That we could invent the Singularity? Who knows? That, through blindness, sin, or envy of God, we could unleash a plague — inorganic, organic — that would care not a whit neither about us, our passions and interests, nor our descendants? That is a more relevant, and much more answerable, question.
In outline your argument goes something like this [my reactions in square brackets]:
1. We, including our brains are made of material [yup];
2. We think, and our thinking uses this material [of course];
3. The stuff of thought (numbers, logical relations, etc.) are not material [check];
4. Thoughts about these nonmaterial things are not material [OK];
LEMMA: So our thought itself, our intellect, is not material [that follows].
5. Computers are made of material [obviously];
6. THEREFORE computers can not think, because they are made of material, and thoughts are immaterial.
But you’ve already shown that something (humans) can think even though we are made of material. Your conclusion is a blatant inconsistency.
It’s probably an Apple if it’s trying to take over the world! Mine’s taken to playing music for me, “rearranging” data, and diverting web searches! Curious, such concern, and the man in the shop was so sweet. Next time, Microsoft.
As for intelligent computers it can’t be done. Obviously.
As for the killing robot, Digestive biscuits. That’ll jam them: block the vents.
I fully expected no argument about this post. I am staggered that anyone would actually believe that humans will be able to produce a machine that would be able to generate a thought of its own. It will not possess free will. I hope nobody’s still arguing that we have no free will?
All this talk of logic circuits and electricity. This is like one of those giant domino toppling displays. They always require a prime mover or instigator. They are closed systems and limited by rules set out by the human creator.
Robots can look like or sound like or act like humans but it’s simulation, always.
The website The Edge, for its annual question series in 2015, published “What to Think About Machines That Think”. They didn’t ask for contributions from any of the so-called intelligent machines. No one thought that an intelligent machine could offer an opinion worthy of intelligent consideration. That’s an example of our real thoughts about artificial intelligence: we don’t really believe it. Robot singularity tales are just another idle bogeyman scare story attempting to keep us entertained.
The Edge – http://edge.org/conversation/john_brockman-what-to-think-about-machines-that-think
Humans are different because we are rational creatures with intellects that rely on the meat-machines in our head, but we also are more than just our brains. Why? Because our intellects are not material.
While it’s true that the concerns of the intellect can be non-material (numbers, etc.), you have no idea what an “intellect” is other than a functional description. You aren’t in a position to claim the intellect is non-material.
Joy: I was observing something similar. My Mac was not doing what I commanded it to, even though my commands were in proper form. Computers already don’t behave as one expects. (I avoid Microsoft, however).
Lee: What is missing from the way you analyzed Briggs’ argument is that human intellect is a product of an outside source, i.e., God. If one assumes that eons of time and chance allowed the creation of immaterial human intellect in a material being, then the argument would be inconsistent, since eons of time and chance could endow a robot with immaterial thoughts.
Gary in Erko: Good point, though I suppose one could argue the machines are not yet up to the task of inputting opinions.
I’m not going to engage in scifi speculation, but it would be the height of arrogance to presume that machines will never be self-aware.
JMJ: And how precisely does a machine become “self-aware”? Can rocks and trees accomplish this, too? They’ve been around for eons and nothing so far. So what makes machines so special?
You’re not suggesting that my computer’s come to life! It’s got an odd taste in reading material and very bad timing. Still, it can explain itself to the Apple minion who’s going to “fix it.” I do, no joking, admire the unquestioning adoration that the Apple staff appear to show. It’s like they’re all under a spell. I hope Apple never lets them down.
Microsoft always made it easy to control files, labels, moves, etc. I’ve had this thing four years and I still wish the old Samsung hadn’t blown up. “The beauty of Apple is that it does it all for you.” “But how does it know what I want it to do?”
Answer? It didn’t.
There’s nothing like speculation.
I never asserted premises 1 and 2. 2 is only partly right, if we include in thinking our lower animal natures. But some thought processes are not material. 1 is only partly right, in that our brains are material. But we are not entirely material.
We may disagree about the immaterial aspects of our natures, but I’m right there with you on our ability to invent our own destruction. Tinkering with babies is just the sort of thing that can do it, too.
Joy: No I’m not suggesting your computer came to life, though apparently there are those who might. Apple does do what it wants to do, but I have found I can often work around their ideas and get what I want. The file structure is bothersome, but I’ve managed to work around it. It at least doesn’t crash a half dozen times a day like my PC did. I can’t speak to Apple’s support as I rarely contact support for anything.
My chair told me that my 3-year-old laptop ran just fine when I asked for a new one. I told him it has never run. No, it has never run like human beings do. An airplane can fly, but it doesn’t fly like a bird with two organic wings. So, can computers or machines think? If “to think” means to have opinions or make decisions by means of a conscious organic brain, then the answer is no. To think? The Turing test is one way to define what it means to think. I bet there is a vast amount of literature on this topic.
Fortunately, the question of whether machines can think doesn’t seem to stand in the way of continued research on AI.
Why would Briggs, a conservative, fear AI? Equality Now! LOL.
Darn it, the iframe HTML code didn’t work. Equality Now –
“I never asserted premises 1 and 2.”
From the article:
“The brain is made of neurons and these may be said to take states […] governed by the ‘laws’ of physics.”
“It’s true we use the material which are our brains, but only as a means to an end.”
And in your reply you say, “our brains are material.” But it’s the brain we’re talking about. That is the relevant part of the premise. We don’t think with our spleens. But I should have stated the premise more carefully:
1. Our brains are made of material.
because “we” can include too much. You clearly did state this version of 1., and it’s pretty uncontroversial.
“2 is only partly right, if we include in thinking our lower animal natures. But some thought processes are not material.”
You seem to be mixing things up here. Thought itself is not material. That’s what I’ve called your LEMMA, and we agree that the lemma follows properly from the premises. 2. has two parts:
2a. We think.
2b. Thinking uses the brain stuff.
2b is equivalent to your statement:
“we are rational creatures with intellects that rely on the meat-machines in our head”
from the article. No one is identifying thinking with the brain meat or its states, but, as you say it “relies” on it.
So we have
1. Our brains are material objects;
2. Thinking relies on and uses this material brain;
3 & 4. =>Thought (or intellect) is not material.
The reasoning is sound up to here and shows that immaterial thought can be dependent on a material substrate. In our case, that seems to be so, because destruction of the brain is observed to put an end to thought. It does not imply that any thought by any being must depend on a material substrate, nor that substrates besides brains are impossible.
But then you have the non-sequitur:
“intellects are not made of physical stuff, but computers are, [therefore] computers can never become rational.”
This, immediately after you’ve shown that our thoughts, although immaterial, are dependent on our material brains. If anything, your previous discussion lends weight to the idea that computers might be able to think.
Before we can conclude whether or not computers can think, we have to decide if thought is, as Lee seems to agree, dependent on brain material but somehow not actually the brain material (I’m not sure if Lee is saying the two are separate). Are thoughts outside of the material world? If so, how did humans acquire thoughts? We can’t determine if machines can think if we don’t know how humans came to be able to think, unless it is being asserted that thought can come from multiple sources. We would then have to justify the belief that humans can put metal, silicon, and electricity together and “create” thought by some unknown process that they might stumble upon. So far, there is only speculation that we “could”. Anything is possible; what we need to know is the probability of putting together inorganic materials (or perhaps organic, if we used a quantum computer that doesn’t have to be housed at near absolute zero) and somehow stumbling upon the way to create thought, which is outside the material world. At this point, it appears about as probable as learning to overcome gravity and fly like birds by flapping our arms: highly unlikely.
Joy: “Robots can look like or sound like or act like humans but it’s simulation, always.”
Hmm, if I simulate a human brain and body as a package with such perfection that no one can tell it from the “real thing”, then surely we would have to accept it as human and assign it the same rights, duties, and privileges as a human being. After all, it’s claiming to be human and no one can show me it’s not, so what’s to stop someone from saying I’m (you’re) not human and removing my (your) rights, duties, and privileges too?
I wasn’t clear. We are not only our brains. We use them, but we are not only them. The substantial form of the human, i.e. a rational being, contains material and non-material “parts”. We are partly non-material ourselves. I include matter and energy as “material”, incidentally. Non-material means not made of either, but still existing.
Now I didn’t anywhere really seek to prove this. I didn’t have space in 1,000 words. But see this link (and the many links in the second set of posts):
Computers are material only, with no rational form. The rational part of the form, since it is immaterial, can’t be given to computers. Thus they will never be rational beings.
I do think we’ll be able to create biological-computational machines (for good or bad). We can make our version of, say, dogs, or even dogs who are good at sums. Who knows? But dogs don’t have immateriality like we do.
Irrational people –i.e., most people on earth– will believe that machines are conscious as soon as machines convincingly-enough tell them to believe it.
“The machine is conscious! Just like people!”
“How do you know?”
“It told me.”
Computers can think. For instance, tonight while my computer was slicing carrots for my dinner it suddenly remembered we need to buy a present for aunt Maud’s birthday next week. As usual, not trusting its fleeting memory, it wrote a reminder note in its diary on next Wednesday, when we usually do our shopping. Thinking of aunt Maud reminded it about the picnic many years ago when ….
Rubbish. It was programmed to chop the carrots. That’s all.
The atomic bomb detonated over Hiroshima was not a thinking device, was it? However, once the nuclear chain reaction was initiated, it could not be stopped and many thousands died. I think we miss the point in arguing whether or not robots will ever think on par with human beings. The term “artificial intelligence” implies simulation. The occurrence of the Singularity does not depend on thinking robots so much as it does on unwitting designers. As far as I know, the law of unintended consequences has not yet been repealed. I have similar grave concerns over gene manipulation.
I see the problem now. When you preface your statement with:
“Even if you don’t follow all the details of this argument, the main point is this.”
it sounds as if you think you’ve put together some kind of argument, and are about to state its conclusion. Good propaganda technique, for those willing to stoop to it.
Now you make it clear that you simply assumed the conclusion.
For a blog post or internet article (that is not intended to be a treatise), 1000 words are too many. Some people skim more than read, so having a signpost at a salient point is not necessarily out of line.
Make a perfect fake?
It can’t be done. Just because the words can be typed doesn’t mean it’s possible.
“If I could fly,” would you say I couldn’t? I think your argument is not grounded in reality, in what we know of what can be done and how the universe behaves.
Who said we don’t think with our spleens?
Perhaps not our spleens, but the brain is coupled with the rest of the human body. The autonomic nervous system is very mysterious and not well understood. The old body and mind compartments are oversimplifications.
Just as the “left brain, right brain” division is not true. Yet it’s trotted out all the time. The brain is neuroplastic. People want to think that the brain is like a computer. They forget it isn’t. That’s before you get into the spirituality of the matter.
The brain is an orchestra of activity more complex than one, or a group of, its captains will ever be able to comprehend and so copy.
We’re talking about building models? Adjusting them as we go? Giving merit to those models in order to decide each modification? Hoping we’ll end up with the right model at the end.
I’m guessing people who keep dogs as working animals might view the animal as a tool easily replicated. It could bark, (as a tradition), scent a kill and go fetch; pull a sled and so on for the task. That is a machine. As the police handlers say “he’s a tool” but you and they know that they don’t really believe the mantra.
Anyone who’s had their heart broken by a dog will know that a computer could never
reach into a soul. All those people say never again will I allow that to happen. I’m not trying to make a case for animal souls, but animals can connect with humans and vice versa in a way not really understood.
The A of AI is “artificial” “not real.”
Joy, I know, and so do AI researchers and philosophers working on this topic. But I learned a long, long time ago that when something appears so obvious and yet experts are arguing about it, chances are that I might have missed the point. Just like: how could it be possible that I don’t know the definition of a square?
… like how could it be possible that I don’t know the definition of a square?
I’d say you knew the definition when you were under three!
Least said, soonest mended, sometimes.
kneel63: Good point. If we start deciding what is human based on other than DNA, then there’s no reason DNA should prove we are human.
Greg: I think you’re right, sadly.
Gary in Erko: If it “suddenly” remembered, I see a flaw in programming!
Anon: People who read 1000 words or less probably should not be reading a philosophy/statistics blog.
Joy: The scientists actually have asked do we think with our brains only. When I was in college, there was a debate about organ transplants and if they could transfer memories from the donor to the recipient. It appears now it is mostly debated by the fringe groups, but at the time, there were real questions about this.
JH: I noticed I failed to fully define a square. It should be four equal sides with four 90 degree angles. However, the definition is still separate from the necessary truth aspect of a square having four sides. (I’m sure you do know the definition of a square.)
Nah, I simply didn’t have space for it. But I did provide you the relevant material which provides the proof. So we’re good. Takes some time to read, though. See in particular Feser’s journal article linked in his post.
Transplant patients? Who knows what they experience with respect to their surgery. If I’d had heart and lungs removed and replaced I imagine I might think and feel all sorts of things. If I felt something, anything, that would lead to a way of thinking, a belief or a fleeting thought. One would be very strange if one didn’t feel something rather profound after such a procedure. It’s a minefield. How the “scientists” thought they’d sort that out is beyond me. Nice idea for a story perhaps.
However, I am speaking of the way all our brains work, which is still a study in its infancy.
The brain IS coupled with the body. No escaping that. Take a look at the sympathetic and parasympathetic nervous systems. These cannot be ignored. They are not wholly autonomic. Then there’s the endocrine system. How all these things integrate is beyond comprehension, and it’s
a wonder it ever works!
It’s so important to remember that thoughts affect experience and experience affects thought. So you see the trouble we’re in in trying to simply map the brain like a computer.
Beliefs have a physical effect. That’s a promise. Over twenty years has taught me that much, but I don’t ask you to agree.
Our thoughts cannot be detached from the world around us. We are linked with the world however large or small our horizons at any given time.
Thoughts are part of the human experience. Beliefs are complex thoughts.
Bad ones can be very damaging.
I just listened to a podcast, which I missed from a long time ago, wherein the speaker declares that human brains are machines, “no doubt about that.” Brains are not machines if all machines are created by man.
If a machine is a tool designed by humans for whatever purpose, then that assertion is wrong. It depends on the premise about machines.
“Squares are shapes that have four sides of equal length” I’d say is a definition.
They necessarily have four 90 degree angles is a necessary truth.
Joy: I agree with your statements except the last statement on squares. Squares must have four equal sides in addition to the 90 degree angles, or the resulting shape is a rectangle, not a square.
It’s unusual for me to find myself actually agreeing with Mr. Briggs, but this time he is correct. Most, if not all, of the hype we hear about ‘The Singularity’ and its apparent inevitability is largely based upon unreasonably high expectations and hopes, a serious misunderstanding of what minds actually are in the first place, the imbibing of too much science-fiction (especially of the ‘Terminator’ variety), and a large dose of arrogance and hubris on the part of A.I. practitioners who have to continually justify their continued existence to the people they work for. The progress that has been made thus far within A.I. and related fields has been, in spite of all the empty promises and hype, disappointing.
Back in the 1950s people were told in all seriousness that by 1980 we would have power too cheap to meter, atomic-powered flying cars (and vacuum cleaners), robo-servants, and vacations on the moon. The film 2001 came out in 1968, and it seemed very reasonable in its projections, but it’s already 2016 and we still haven’t even established a permanent presence in space. We haven’t even gone back to the moon. It’s too expensive and not worth visiting, apparently. The ‘Singularity’ is yet another of these misinformed and misguided ideas that have been much discussed, but which will turn out to be just as naive as all the others were.
Why would it not be the height of arrogance to presume that they will be?
Computers deal in syntax and no amount of syntax, even if parsed very rapidly, adds up to semantics. Even whether a thing is a computer is a matter of interpretation. There is a mug sitting on the desk here beside me: it is a computer executing the program “Sit here and do nothing.” And doing it rather well, I might add.
Here is a sign: H. What does it mean? The sound “en,” perhaps? It might be Cyrillic. The sound “mi,” since it might be Cherokee? The proximity of a hospital, perhaps. Or maybe it is the cross section of an I-beam? Or two rust stains from dripping water connected by a fortuitous surface fracture which carried the stains together? The meaning of the sign does not reside in the sign itself and no machine manipulating those signs can put the meaning into them.
This is at the root of the Turing Fallacy: a model can simulate something extremely well, but the simulation is not the reality, and the internal structure of the model may not match the actual structure of the thing being simulated. The Tychonic model predicted the motions of the heavens at least as well as the Copernican model, yet it did not match the actual physical structure of the heavens. In another context, when you operate an airline flight simulator on a flight from EWR to LAX, you will in general not find yourself in Los Angeles when you exit the simulator.
Or place yourself in the position of the computer. Sit inside a box. Queries written in Chinese are put into the box. You consult a table of conversions and find that if you receive query Q284 you must output answer A927. In this manner your box can carry on a competent conversation in Chinese. But at no point will you actually know what you are doing, let alone that you are conversing in Chinese. It would be strange if, were we to subtract the human being from the catbird seat, the box suddenly became self-aware. Syntax does not magically become semantics.
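The rule-book picture above can be sketched in a few lines. This is a hedged toy, and the query/answer codes are hypothetical placeholders from the comment, not real Chinese:

```python
# Searle's Chinese Room reduced to pure table lookup. The operator (or
# interpreter) matches shapes against the rule book; meaning plays no role.
rule_book = {
    "Q284": "A927",
    "Q105": "A311",
}

def room(query: str) -> str:
    """Return whatever the rule book dictates -- nothing is 'understood'."""
    return rule_book.get(query, "A000")  # default card for unknown queries

print(room("Q284"))  # competent output, zero comprehension
```

The point of the sketch: the function's behavior is fully competent within its table, yet there is nowhere in it for understanding to reside.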
You’re right, if you squash a square:
I forgot rhombuses.
So angles must “necessarily” be part of the definition.
Syntax does not magically become semantics.
Syntax is the structure of a language, that is, the manner of expression. Semantics is the meaning of those structures. With Q284/A927, the syntax is how each is presented. In your example, Q284 apparently means: respond with A927. Just as “A := B + C” means (in some contexts): take the contents of B, add them to the contents of C, and put the result in A. The rules for stating the action are the syntax. “A := B + C” could just as easily have been expressed as “B C + A :=” and still mean the same actions. That is the difference between syntax and semantics.
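The two-notations-one-meaning point can be made concrete. A hedged sketch, with toy parsers for the infix and postfix forms mentioned above (both parsers are my own illustrative inventions):

```python
# Two syntaxes, one semantics: both notations denote "add B and C, store in A".
def eval_infix(env, expr):
    # expr like "A := B + C"
    target, rhs = expr.split(":=")
    x, op, y = rhs.split()
    assert op == "+"
    env[target.strip()] = env[x] + env[y]

def eval_postfix(env, expr):
    # expr like "B C + A :=" -- reversed-Polish surface form, same action
    x, y, op, target, assign = expr.split()
    assert op == "+" and assign == ":="
    env[target] = env[x] + env[y]

env1 = {"B": 2, "C": 3}
env2 = {"B": 2, "C": 3}
eval_infix(env1, "A := B + C")
eval_postfix(env2, "B C + A :=")
print(env1["A"], env2["A"])  # identical results under different syntax
```

Only the surface form (syntax) differs; the action performed (semantics) is the same.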
When the computer responds with the actions implied by “A := B + C” then it is exhibiting understanding of the expression. Rudimentary understanding but understanding nonetheless.
Some might say, “Yeah, but it was told this!” It is entirely irrelevant that it was told the meaning of the expression. Beginning programmers must also be told. I’m willing to bet most (perhaps all) of what you know, you were told. Being told is not a defining characteristic of intelligence.
Some might also say, “Yeah, but it doesn’t know why!” Whatever “why” means. Guess what? Neither do you. It is no different, functionally, from you being given a list of actions to perform without being told why. You might guess, but, particularly when given the actions without context, you wouldn’t know why.
For a computer to pass the Turing test, it will most likely have to do so without a lookup table. Such a table would be impossibly large: on the order of the number of atoms in the universe. While memory capacity has been increasing, it doesn’t seem likely it will ever be large enough to hold such a table. For the computer to pass, it would have to do much more. Quite likely, it would process questions much the same way we do.
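The "atoms in the universe" claim checks out on a back-of-envelope basis. The figures below are crude, hedged assumptions (a 10,000-word vocabulary, exchanges of only 20 words), chosen just to show the combinatorics:

```python
# One table entry per possible 20-word input over a 10,000-word vocabulary.
vocabulary = 10_000
exchange_length = 20
table_entries = vocabulary ** exchange_length   # 10^80 possible inputs

# Commonly cited rough estimate of atoms in the observable universe.
atoms_in_universe = 10 ** 80

print(table_entries >= atoms_in_universe)  # the table cannot physically exist
```

And real conversations are far longer than 20 words, so this badly undercounts.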
If the computer could pass the test, that is, be indistinguishable from a human, then it would be absurd to say it wasn’t intelligent, any more than it would be to look at someone of a particular racial makeup and exclaim, “Oh, look! It’s trying to think!”
1) Intellect = software algorithm.
2) I don’t accept the idea that an AI would somehow be automatically able to design a more intelligent AI. Each step up in intelligence would presumably be qualitative, not just quantitative. That’s why we can’t just go ahead and design the first AI.
Whenever I read one of these articles I think of the P vs. NP problem.
If P =/= NP then there are “hard” problems: problems that cannot be solved by merely throwing more computing power at them.
If P =/= NP then the singularity will not happen.
P vs. NP roughly asks whether every problem whose solution can be quickly verified can also be quickly solved. (P does not stand for “possible” nor NP for “not possible”; NP is “nondeterministic polynomial time,” and whether P = NP remains an open question.) Heuristic design is used when an exact, provably optimal solution is out of reach: you settle for a “close enough” answer. A problem being NP-hard doesn’t mean you can’t find a reasonable solution. It means you can’t efficiently prove it’s the best, or that your method will work in all cases. Just like people.
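The verify-fast/solve-slow asymmetry behind P vs. NP can be sketched with subset sum, a standard NP-complete problem (the numbers below are arbitrary illustration):

```python
from itertools import combinations

def verify(nums, subset, target):
    """Polynomial-time check of a proposed certificate."""
    return set(subset) <= set(nums) and sum(subset) == target

def solve(nums, target):
    """Brute force: may try up to 2^len(nums) subsets in the worst case."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = solve(nums, 9)          # exponential-time search finds e.g. [4, 5]
print(verify(nums, cert, 9))   # checking the answer is easy either way
```

Verification stays cheap no matter how large the instance; the naive search blows up exponentially, which is the sense in which such problems resist "just add more computing power."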
When the computer responds with the actions implied by “A := B + C” then it is exhibiting understanding of the expression.
No more so than does an automobile when it responds with the action implied by “turning the key in the ignition.” Or, for that matter, a flower when it reacts by turning to face the sun. It doesn’t even know it is doing it, and all the anthropomorphizing and pathetic fallacies in the world will not make it so.
If you sit in a box and respond to cards bearing Chinese ideograms by outputting other cards with Chinese ideograms according to a table, you are not understanding Chinese. For contrast, compare this with the case in which the cards contain sentences in a language you do understand. In the former case, you don’t even know you are conversing in Chinese, let alone what you are talking about.
If the computer could pass the test, that is, be indistinguishable from a human, then it would be absurd to say it wasn’t intelligent
If a model correctly predicted the motions of the planets and fixed stars in a way indistinguishable from observation, would it be absurd to say that the model structure might not match physical reality? Or if a flight simulator was indistinguishable from the cockpit experience, would it be absurd to say that you haven’t actually flown anywhere?
There really is a distinction between X and even an excellent simulation of X.
Singularity = fiction. There will be no such thing. Tools beget tools. Search engines, advertising, traffic lights, water pressure, electricity, stability control are already controlled by algorithms, and most of us are quite happy about it. Expect more of the same.
AI = fact. Systems that learn to mimic a human counterpart exist. Many are indistinguishable from humans (when looking at sensor logs). The Turing test was (sur)passed long ago. Expect more of the same.
Regarding syntax: most complex models these days (the sort we might consider AI) work because they are free to define their own syntax. A computational framework is created that is broad enough to describe just about anything, and then some algorithm goes about using that framework to describe its domain in such a way as to satisfy some objective function. Debugging these things is more like psychology than coding.
Humans still reign supreme when it comes to defining objectives. Expect more of the same.
There is no way of translating the machine’s ‘code’ into something a human can easily understand. Instead a bizarre form of ‘machine psychology’ is used to probe the algorithm in an effort to understand what it’s doing.
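The "satisfy some objective function" framing can be sketched with the simplest possible learner, a random hill-climber. This is a hedged toy, not any particular AI technique: the target behavior (match 2x + 1) and all parameters are my own illustrative choices:

```python
import random

random.seed(0)  # deterministic run for the illustration

def objective(w):
    # Squared error between the model w[0]*x + w[1] and the target 2x + 1.
    return sum((w[0] * x + w[1] - (2 * x + 1)) ** 2 for x in range(10))

# Random hill-climbing: keep a perturbed candidate only if it scores better.
w = [0.0, 0.0]
for _ in range(20_000):
    candidate = [wi + random.gauss(0, 0.1) for wi in w]
    if objective(candidate) < objective(w):
        w = candidate

print(objective(w) < objective([0.0, 0.0]))  # the search improved on the start
```

Note that nothing in the final weights announces "I am approximating 2x + 1"; you only learn what the model does by probing its behavior, which is the "machine psychology" point above, writ small.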
@ Sheri: +1 for referencing Star Trek again!
It’s always struck me as silly that Data didn’t have emotions. The very first thing you would do when designing an android like Data would be to make it fake emotions, and it wouldn’t even be very difficult; certainly a lot easier than understanding speech. If I made such an android I’d program it to claim that it not only felt emotions but was also self-aware, just to spook people.
If you asked an AI if it was self-aware and it said “yes”, how would you be able to tell if it was lying or not?
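The epistemic bind in that question can be made painfully literal. A hedged toy: suppose one agent is "honestly" self-aware (by stipulation) and the other is merely scripted, as in the Data example above. Both agents are hypothetical constructions of mine:

```python
def honest_agent(question):
    # Stipulated to be self-aware and answering truthfully.
    return "yes" if question == "Are you self-aware?" else "unknown"

def scripted_agent(question):
    # Merely programmed to make the claim, as the android comment suggests.
    return "yes" if question == "Are you self-aware?" else "unknown"

q = "Are you self-aware?"
print(honest_agent(q) == scripted_agent(q))  # outwardly indistinguishable
```

From the outside the two are behaviorally identical, which is exactly why the question "is it lying?" has no observational answer.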