Will super-intelligent computers soon spell our doom, or have futurists forgotten something fundamental? Hint: it’s the latter. Go to the Stream to read the rest.
The chilling news is that killer robots are marching this way. PayPal co-founder Elon Musk and physicist Stephen Hawking assure us Artificial Intelligence (AI) is more to be feared than a Hillary Clinton presidency.
Google’s futurist Ray Kurzweil and Generational Dynamics’s John J. Xenakis are sure The Singularity will soon hit.
When any of these things happens, humanity is doomed. Or enslaved. Or cast into some pretty deep and dark kimchee. Or so we’re told.
It makes sense to worry about the government creating self-mobilized killing machines, or the government doing anything, really, but what’s The Singularity? Remember The Terminator? An artificially intelligent computer network became self-aware and so hyper-intelligent that it decided “our fate in a microsecond: extermination”. Sort of like that. Computers will become so fast and smart that they will soon realize they don’t need us to help them progress. They’ll be able to design their own improvements at such a stunning rate that there will be an “intelligence explosion”, and maybe literal explosions, too, if James Cameron was on to anything.
Xenakis says, “The Singularity cannot be stopped. It’s as inevitable as sunrise.” But what if we decided to stop building computers right now? Xenakis thought about that: “Even if we tried, we’d soon be faced by an attack by an army of autonomous super-intelligent computer soldiers manufactured in China or India or Europe or Russia or somewhere else.”
As I said, we surely will build machines, i.e. robots, to do our killing for us, but robots with computer “minds” will never be like humans. Why? Because computer “minds” will forever be stuck behind human minds. The dream of “strong” AI, where computers become superior creatures, is and must be just that: a dream. I’ll explain why in a moment. Machines will become better at certain tasks than humans, but this has long been true.
Consider that one of the first computers, the abacus, though it had no batteries and “ran” on muscle power, could calculate sums more easily and faster than could humans alone. These devices are surely computers in the sense that they take “states”, i.e. fixed positions of their beads, that have meaning only when examined by a rational intelligence, i.e. a human being. But nobody would claim an abacus can think.
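To make the point concrete, here is a minimal sketch (my own illustration, not from the article) of a soroban-style abacus column layout: the device merely holds a fixed state, and the *meaning* of that state is supplied entirely by the human reading it.

```python
# Sketch: an abacus "computes" only in the sense that its bead positions
# encode a number a person can read. Assume (hypothetically) a soroban-style
# abacus: each column has one five-bead and up to four one-beads.

def read_abacus(columns):
    """Interpret bead positions as a base-10 integer.

    Each column is a (five_bead_down, ones_count) pair. The machine holds
    fixed states; the interpretation belongs to the rational reader.
    """
    value = 0
    for five_down, ones in columns:
        digit = (5 if five_down else 0) + ones
        value = value * 10 + digit
    return value

# Bead state [(True, 2), (False, 4)] reads as the number 74.
print(read_abacus([(True, 2), (False, 4)]))  # → 74
```

The beads never “know” they encode 74; absent a reader, there is only wood in positions.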
Why can’t there be a singularity? Go to the Stream to find out.
Oh, we have lots more to do on this topic. This is only a teaser.