Stream: Should We Worry Artificial Neurons Can Now Compute Faster Than The Human Brain?
The report from last week’s Nature magazine is that “artificial neurons” can now “compute faster than the human brain.” We owe congratulations to the inventors of the mouth-twisting nanotextured magnetic Josephson junctions, which can zip along at over 100 gigahertz, a speed “several orders of magnitude faster than human neurons.”
This is some accomplishment. But it remains to be seen what kind.
Nature believes these artificial neurons can be used in “neuromorphic” hardware, which, it is said, will mimic the human nervous system. The inventors are hopeful their creation might soon be configured to reach “the level of complexity of the human brain.”
When that happens, here comes true artificial intelligence. Computerized minds that are human-like, or even advanced beyond them, but without the burden of fallible bodies. Or so they say.
But is it really speed or computational ability that differentiates humans from computers? The answer is no.
At the Sound of the Beep, It Will Be 1 PM
It was 1978. We were sitting in the back of geometry class and Brian brought over his new toy. A Texas Instruments hand-held electronic calculator.
Brian was the first to own one of these marvels. We weren’t surprised. Weeks earlier he caused waves of envy by sporting a digital watch. You pressed a button and it showed the time, glowing red. It beeped on every hour, lest you miss this momentous twenty-four-times-a-day event. By the end of the year digital watches were everywhere, serenading schoolrooms hourly—beep-beep-beep—because nobody could figure out how to shut the sound off.
The calculator was equally fancy. It could, for example, figure the cube root of 513,537,536,512 in a flash. (This is what passed for a teenage boy’s math joke.) Just try it by hand and see how long it takes you. A minute, at least, and probably longer.
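For anyone without a calculator handy, the arithmetic checks out; a few lines of Python confirm the number is an exact cube (the punchline is given away in the comments below):

```python
# Sanity check on the calculator joke: the cube root of 513,537,536,512
# is a whole number that spells a word when the calculator's
# seven-segment display is turned upside down.
n = 513_537_536_512
root = round(n ** (1 / 3))  # floating-point cube root, rounded to nearest integer
assert root ** 3 == n       # confirm n is an exact cube
print(root)
```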
Hurry Up and Calculate
Because it was fast, was that calculator alive, in the sense of possessing a mind? Was it aware it was computing numbers? Did it even understand what a number was? As crude as it was, it could calculate faster than any human. If mere calculation speed is the criterion for awareness, that calculator was more “woke” than we were.
Yet speed does not create awareness. By the time pocket calculators showed up, computers were already faster than people by more than thirty years. The “electronic brain” ENIAC was processing bits faster than any man by 1946. Adding machines based solely on levers, gears, and cogs were faster than men even before that. Why, the humble abacus, already thousands of years old and composed of nothing but some wooden beads on slides, was far faster than people. But nobody would […]
Fire up your calculators and click here to read the rest.
Article: “Reason, or calculating ability, is below the ability to grasp concepts, to understand, to know. These important and essential operations belong to our spiritual intellects, and as such are beyond mere computation. Our intellects are part of our spiritual nature; they are non-material, not made of stuff.”
If reason, or calculating ability, are non-material parts of a spiritual intellect then why does an individual’s reasoning ability manifest particular types of degradation, or loss, when particular sections of the brain’s neural network are damaged/destroyed (e.g. injury or stroke)?
Ample medical study shows a consistency: knowing the type of damage, the type and extent of intellectual deficits can be predicted with accuracy; and observing the type of deficits exhibited, the type, extent, and location of neural damage can be accurately predicted and then confirmed.
To assert a non-material spiritual location for something demonstrably associated with physical systems requires some alternative compelling evidence. To assert that the correlation/causation link is not quite fully proven does not therefore support the spiritual theory (which is based on what??).
Analogy: Discovery of blood circulation; essentially proven logically, long before capillaries were discovered [needed to prove it decisively] … and opposed by many on spiritual grounds (e.g. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3721262/ and http://www.ncbi.nlm.nih.gov/pubmed/15365594)
Modern technology shows no signs of manufacturing neural interconnections on a scale even remotely close to the human or other animal brain — but there is no basis for extrapolating current technology to conclude an artificial network will or will never achieve intellect/self-awareness.
If reason, or calculating ability, are non-material parts of a spiritual intellect then why does an individual’s reasoning ability manifest particular types of degradation, or loss, when particular sections of the brain’s neural network are damaged/destroyed?
If programs, or shows, are non-electronic parts of an entertainment then why does a TV’s presentation ability manifest particular types of degradation, or loss, when particular sections of the TV’s electronic network are damaged/destroyed?
Further, since every act of conception is accompanied by an act of imagination, it would be quite normal that damage to the organ of imagination should affect our perception of these concepts, especially by third parties. Because so many people confuse perception with conception (and hence imagination with intellect) it is no wonder that this objection seems to be a defeater when it merely misses the point.
(the power to form unified “images” out of different sensory inputs and memories) so that what you see and what you hear, etc, are perceived as the same thing.
I always laugh when someone says that before they die we’ll be able to upload ourselves into a computer thereby living forever. What hubris!
It’s safe to say we’re not even close.
Nice boob tube analogy.
I couldn’t resist.
I always laugh when someone says that before they die we’ll be able to upload ourselves into a computer thereby living forever.
I plan to have my picture taken and uploaded into an iPhone. Same effect.
The speed would certainly help if they can figure out a way to evolve intelligence by genetic programming. Starting simple with organisms that have few neurons, and working up from there. Seems an easier way than trying to figure out the algorithms running in a human mind, if that is what is actually happening there.
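The evolutionary loop this comment gestures at — selection plus mutation climbing toward a fitness target — can be sketched in a few lines. The toy “count the 1-bits” objective below is a stand-in assumption, not anything like evolving real neural wiring:

```python
# Minimal sketch of an evolutionary search: keep the fittest genomes,
# mutate copies of them, repeat. The "OneMax" fitness (count of 1-bits)
# is a toy objective standing in for something like neural wiring.
import random

random.seed(0)

def fitness(genome):
    return sum(genome)  # toy objective: number of 1-bits

def mutate(genome, rate=0.05):
    # Flip each bit independently with a small probability.
    return [bit ^ (random.random() < rate) for bit in genome]

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(100):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]  # keep the fittest third
    pop = [mutate(random.choice(survivors)) for _ in range(30)]

best = max(pop, key=fitness)
print(fitness(best))  # typically reaches or nears the maximum of 20
```

The same loop scales in principle, which is the commenter’s point: the search does the design work, so nobody has to reverse-engineer the final algorithm.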
8008? As I recall it was 58008918.
Starting simple with organisms that have few neurons, and working up from there.
That would never work with the “imagination vs. intellect” crowd. They have arranged their definitions so that organisms with few neurons can’t have an intellect. They fail to see that “perception” (let’s stick with visual perception for now) is by nature an abstraction.
For example, perceiving a straight line requires seeing a relationship between individual pixels. The line is a model and not something directly seen.
Another example, an animal doesn’t bounce off the walls of a room until it finds an exit. Instead it heads directly for an opening. That too requires a level of abstraction.
But being able to construct or apply that model doesn’t count for some reason.
In my view, intelligence is mostly measured by similarities. Finding differences is trivial and is only important when the similarities are overwhelming (facial recognition for example). The perception/conception distinction seems an exercise in difference hunting.
It’s unlikely that intellect resides in the neurons. It’s their configuration (inter-connectivity) determining how information is processed that counts. Speed is less important than information bandwidth. Brain injuries degrade intellectual processing because the information processing is interrupted.
“Should we worry . . .?” I’m still stuck at “Should I care?”
The other day I spent a few hours manually extracting data from a couple of years’ worth of automated testing reports from an ‘orphan’ process (one that was never considered important to connect to a database). I used my usual approach, concatenating the thousands of computer generated report files into a single multi-gigabyte file, then using a fairly capable text editor to key off of certain word sequences to extract the few dozen test points of interest from each test, and ‘cleaning’ the data adequately enough to dump it into a spreadsheet for some simple analysis.
This usually only takes a few minutes. But this time, the resulting data was a bit of a mess. The automated testing program had been making small mistakes all along, mostly in the form of repeated reporting of a single measurement. It was as if the program had an unpredictable stutter. It wasn’t too hard to recognize the relatively rare data stutters and fix them, but since there was so much data to look through, it took a few hours. A few very mind-numbing hours.
As I was cleaning the data stutters, I wondered what algorithm could be used if I had to do this again and decided to automate it. There was just enough variability in the data stutters that it wasn’t clear how to automate it. Also, I found myself relying on good/consistent adjacent data as a check on the location and extent of the data stutters, but automating the detection of this good/consistent adjacent data seemed equally challenging.
After the first hour, I decided that I had made a mistake, that the effort required had already exceeded the likely value. But I had already invested the hour, and it seemed like I must be getting close. I wondered who else I could get to finish cleaning the data, since I was actually supposed to be working on something else, and it would only take a minute to explain what to look for and how to fix it. The key here is “who”, as the task was just complicated enough to require a human.
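The stutter-cleaning step described above — collapsing accidental repeats of a measurement while trusting consistent neighbors — can be sketched for the simplest case. The function name and the exact repeat pattern are assumptions, since the commenter’s report format isn’t given:

```python
# Hypothetical sketch: collapse accidental consecutive repeats of a
# measurement line in a test report. Assumes a "stutter" is the same
# line emitted two or more times in a row, a guess at the format
# described, not the commenter's actual data.
def collapse_stutters(lines):
    cleaned = []
    for line in lines:
        # Keep a line only if it differs from the one just kept.
        if not cleaned or line != cleaned[-1]:
            cleaned.append(line)
    return cleaned

report = ["temp=21.4", "temp=21.4", "volt=5.01", "volt=5.01", "volt=5.01", "amp=0.35"]
print(collapse_stutters(report))  # prints ['temp=21.4', 'volt=5.01', 'amp=0.35']
```

As the commenter notes, the real stutters had just enough variability that exact matching would not suffice; a fuzzier comparison against the good adjacent data would be needed, which is exactly the part that resisted automation.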
Which brings me back to this blog posting “Should we worry . . .?” Right now, AI isn’t even a remotely useful tool for automating simple (for a human) mind-numbing tasks!
Come back when I have such an AI application at my routine disposal, and we’ll talk. But for now, am I afraid of AI? No, I don’t even care.
If code is involved, it’s not intelligence.
This is one of those subjects for which ‘always’ and ‘never’ apply.
For the artificial neurons to become a brain they need to have some kind of brain software running on them. Hence the idea to start with a very simple brain, where the software should be simple too.
the “imagination vs. intellect” crowd … have arranged their definitions so that organisms with few neurons can’t have an intellect.
Historically, it would be more accurate to say that the Late Moderns “have arranged their definitions so that organisms with few neurons can have a rudimentary intellect.” But this begs the question by assuming that neurons are relevant to intellection. If physical neurons are irrelevant then it doesn’t matter how many there are.
They fail to see that “perception” (let’s stick with visual perception for now) is by nature an abstraction.
If that were true, then they would not have made that very point a couple millennia ago. Even sensation is an abstraction. But as Aristotle once wrote: An animal knows what is food; but a human knows what food is. To put it another way, all examples of animal intelligence refer to physical sensation: for example, an opening in a wall can clearly be sensed as a physical thing. (Or more precisely, as a lacking in a physical thing.)
The Scientific Revolution held that animals were simply meat puppets. (Hence, the popularity of vivisections in the 17th and 18th cent. Animals could not experience pain, right?) But nowadays in the reaction against the Enlightenment we risk falling into the opposite extreme and interpreting all sorts of lower powers as instances of higher powers and so, like Aesop or Disney, “discovering” all sorts of human traits in them simply because they are analogous.
For the artificial neurons to become a brain it needs to have some kind of brain software running on it.
Software is just a way to change a general purpose machine into a specific one. You can convert a general purpose computer into a neuron by programming it to act like one. It’s just a way to effectively rewire without having to physically do so (unlike earlier computing machines where you literally rewired them). You can achieve the same functionality by building a machine that specifically acts as a neuron.
Neurons, even those found in a living brain, are relatively simple response devices. It’s the interconnections between them that are important. One can use software to make the interconnections because it is convenient, but it’s not a requirement. The brain seems capable of rewiring itself, so our artificial one would likely need to do so as well.
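The point that software “rewires” a general-purpose machine without physical rewiring can be illustrated with a minimal sketch of a threshold neuron (McCulloch–Pitts style). The weights and thresholds are arbitrary illustrations, not a claim about biological neurons:

```python
# Minimal threshold neuron: a weighted sum of inputs compared against
# a threshold. Changing the weights "rewires" the same code into a
# different logic function, with no physical changes at all.
def neuron(inputs, weights, threshold):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Wired one way, the neuron behaves as an AND gate...
print(neuron([1, 1], [1.0, 1.0], threshold=2.0))  # prints 1
print(neuron([1, 0], [1.0, 1.0], threshold=2.0))  # prints 0
# ...lower the threshold, and the identical code behaves as an OR gate.
print(neuron([0, 1], [1.0, 1.0], threshold=1.0))  # prints 1
```

This is the “simple response device” in the comment above; everything interesting comes from how many of them there are and how they are connected.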
But, yes, starting small with increasing complexity is the likely route. However there will always be objections from the “imagination vs. intellect” crowd who argue it’s the subject of thought vs. its process that defines intellect. See how YOS continues to argue about lesser beings recognizing food but humans knowing what it is. A focus on content of thought vs. process of thought — and all or nothing to boot. If it can’t do algebra, write music, …, and/or whatever it is that apparently only humans can do then it has no intellect.
If it can’t do algebra, write music, …, and/or whatever it is that apparently only humans can do then it has no intellect.
You have it backward. These things are evidence of intellection, not necessary conditions.
the “imagination vs. intellect” crowd
Like “inertia vs life,” it highlights a distinction largely lost on the mechanist crowd, which feels that process explains content, or that if you layer up enough syntax you will magically achieve semantics. Hence, all the poetical metaphors about computers, programming, substrate, and the like. Metaphors can be enlightening, but they should not be confused with equivalences.
Imagination just is the power to form images — by which is meant not merely visual images, but collated sensitive images. It is also called “perception” or “the inner senses.”
which feels that process explains content
On the contrary. The process IS intellect. The CONTENT merely suggests its level: Humans are smarter than cats; cats are smarter than worms; worms are smarter than bacteria; etc. The abstraction that you claim has been noticed since Aristotle is the intellect in action.
A cat jumping at a doorknob (and not the hinges) in an attempt to open the door is displaying more than a conjured image of an open door. It’s showing an understanding of what makes the door work. That’s intellect in action.
layer up enough syntax you will magically achieve semantics.
Agreed, it’s bizarre. Syntax embodies rules for expression within a language. It conveys little information other than what type of information each part (nouns, verbs, the true/false parts of an ‘if’, etc.) may have. Anyone who thinks otherwise has never written a compiler. Syntax is not a process. Parsing it is. Generating a sentence using it is. Not at all sure how anything after the HENCE follows from “syntax.”
The process IS intellect. The CONTENT merely suggests its level
But every process is directed toward an end, called its “product” or “output.” For example, the process of coating aluminum beverage can interiors is directed toward coated beverage cans. OTOH, the process of coating appliances is very different, even though both are called “coating.” (Electrostatic deposition of a flour-like material that is then fired and melted to the part vs. spray guns depositing liquid coatings into spinning cans that are then baked.) The process for accomplishing the first is very different from the process for accomplishing the latter.
IOW, a process intended for one purpose may differ radically from a process intended for another. This may be overlooked so long as discourse remains on a high level, where one speaks only generically of “processes.”
Intellect is from L. intellectus which is a noun-use of the past participle of a verb; viz. intelligere (“to understand, discern”) and hence means “an understanding or a discernment.” The Latins used it for the Greek term nous used by Aristotle for “mind, thought.” The term intelligere in turn comes from the assimilated form inter “between” + legere “to read” that is, “to read between [the lines].” Thus, to discern meanings that are not present in the actual written words. In the same manner, one reads constellations between the actual stars or physical theories between the actual data.
A cat jumping at a doorknob (and not the hinges) in an attempt to open the door is displaying more than a conjured image of an open door. It’s showing an understanding of what makes the door work.
Certainly, in a Disney cartoon it would. But Ockham warned us against multiplying entities, and anything that can be explained with fewer assumptions should not be over-explained with more. The imagination — by which we mean the common sense, construction of images, and memory — as well as the estimative power, is sufficient to explain the phenomenon.
For a very long time, Scientists denied imagination to brute animals and claimed that instinct was somewhat like what we now call “computer programming.” But now while some of them are denying these powers to human beings, others are trying to apply them to brute animals.
Wow. A pointless exposition of the etymology of intellect. Yes, it is a noun and it refers to a capability.
The point which you are avoiding is that its manifestation is how it goes about its business (the process) and not what it considers (the content), which is perhaps a measure of its strength. You have yet to explain why you think it’s the what that matters to the definition and is not merely evidence of level of capability.
The imagination — by which we mean the common sense, construction of images, and memory — as well as the estimative power, is sufficient to explain the phenomenon.
All of which I say are products of intellect. The cat is demonstrating a rudimentary understanding (a common sense understanding) at the very least of a relationship between the doorknob and its operation. It is you who is conjuring up multiple entities with the unimportant distinction between Imagination and SomethingElse based on subject of thought.
Wow. A pointless exposition of the etymology of intellect.
The correct term is not “pointless,” but “vain.”
All of which I say are products of intellect.
IOW, a word means what you want it to mean, and not what it has meant historically. By lumping all sorts of disparate meanings into it, you can eventually make it mean nothing at all.
I have some familiarity with process analysis. Since a process can be defined in complementary fashion as “a set of causes that work together to produce an effect” and “a series of operations that convert an input into an output,” may I boldly ask for either a cause-and-effect diagram and/or a process flow chart for “intellect.” (Understanding ahead of time that it is not so easy to make a noun into a process: processes tend to be verbs.)
While we are waiting, may I suggest the following: [diagram]
So you can’t or won’t explain.
Your diagram is interesting, though.
Tell us why the names of the boxes and their claimed interconnections explain or even illustrate anything and how you would go about validating it as an hypothesis. It’s not a bad guess, but how would you go about showing that it isn’t anything but a guess? You could just as easily have labelled the boxes Stuff, More Stuff, Other Stuff, etc. and achieved the same level of illumination. Yours just gives the appearance of understanding that you really don’t have.
I currently don’t have a mechanism to upload a diagram — as if you really care — so don’t hold your breath. I’m not proposing inner mechanisms. Frankly, trying to break down a process without any way to verify it is of questionable value.
IOW, a word means what you want it to mean
Er, no. I maintain (with the same level of evidence as you have) that the process involved with intellect are the same for all animals including humans. One does not need multiple facilities to explain why humans are better at cognition than animals.
Er, I meant to say: the process involved with intellect is the same for all animals including humans