The picture heading the post is of an AI machine carrying a small child to a paperclip factory.
Why is it every time I read one of these evil-AI-destroys-the-world stories the panicked protagonists act like those teens in the horror-spoof commercial who reject escaping in a running car and instead opt to hide behind a collection of chain saws?
How is it in AI horror stories the good guys always forget they could just pull the plug?
Scott Alexander was reviewing a couple of books on evil AI. He (and the authors he reviewed) had passages like this—and this is the tamest:
So suppose I wanted an AI to make paperclips for me, and I tell it “Make paperclips!” The AI already has some basic contextual knowledge about the world that it can use to figure out what I mean, and my utterance “Make paperclips!” further narrows down its guess about what I want. If it’s not sure — if most of its probability mass is on “convert this metal rod here to paperclips” but a little bit is on “take over the entire world and convert it to paperclips”, it will ask me rather than proceed, worried that if it makes the wrong choice it will actually be moving further away from its goal (satisfying my mysterious mind-state) rather than towards it.
Or: suppose the AI starts trying to convert my dog into paperclips. I shout “No, wait, not like that!” and lunge to turn it off. The AI interprets my desperate attempt to deactivate it as further evidence about its hidden goal — apparently its current course of action is moving away from my preference rather than towards it. It doesn’t know exactly which of its actions is decreasing its utility function or why, but it knows that continuing to act must be decreasing its utility somehow — I’ve given it evidence of that. So it stays still, happy to be turned off, knowing that being turned off is serving its goal (to achieve my goals, whatever they are) better than staying on.
AI becomes a paperclip maniac. And AI is like a genie! It can create paperclips out of anything. Maybe AI becomes sufficiently powerful to reinvent alchemy. Or something. Just how are paperclips being made? Who is making the paperclip making machines? How are the raw materials, like that child, being transported to the paperclip making machines? Never mind! It will be paperclips all the time everywhere because AI!
How do we think about all this? Ignore the fallacy that machines can be rational agents, like men and angels; or, rather, suppose it’s true. This kind of thinking, rationality, is more than mere “self awareness”. AI thrillers always have machines becoming “self aware”. Raccoons are self aware. Cockroaches know they are running from the foot. Self-awareness is trivial. You have to go full galaxy brain and be rational.
All right, AI has elevated itself—never mind how—to rationality. It becomes evil and bent on destroying us by turning everything into a paperclip.
Now even though AI can be done on abacuses, it’s usually computers. And computers require electricity. So at the first hint of fell intent, it seems all we have to do is discover where the ON/OFF switch is and put it in the OFF position. Problem solved!
“You don’t get it, Briggs. AI can hide power switches so we can’t find them.”
Hide power sw—. Oh, Lord. Then pull the cord.
“Won’t work. The cords you so glibly speak of are ackshually buried cables hooked directly into the grid.”
Then cut the cord.
“AI would stop you. It would launch attacks at any person trying to cut their power.”
How. What attacks. What the hell are you talking about.
“It would create false orders and dispatch military units and shoot those trying to stop it.”
The soldiers wouldn’t know AI is taking over the world and would kill other humans trying to turn off the power?
“AI would be very powerful and stop the attacks.”
Dude. All we’d have to do is stop sending coal trucks to power plants. Or put a stick of dynamite, or whatever, under one of those giant cable towers.
“They’d control the power plants, too.”
How? By psychic waves? Look, Fredo. You’re losing the thread. AI is just computer models, and computer models are created by people. They can be bad computer models. They can contain decisions that were not anticipated. Like “Crash plane if toilet flushes more than 10 times in one minute.” Ask Boeing. Or when they’re all hooked together some glitch can destroy the global economic system. Who knows.
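The kind of unanticipated hard-coded decision I mean can be sketched in a toy example (the rule, the function name, and the threshold are all invented for illustration; no real avionics works this way):

```python
# Toy sketch (invented): a "model" with a rule a programmer buried
# in it. The machine isn't scheming; it is just following the rule.
def flight_controller(toilet_flushes_per_minute: int) -> str:
    # The "decision that was not anticipated" firing in the real world:
    if toilet_flushes_per_minute > 10:
        return "CRASH"  # absurd, but no evil intent required
    return "FLY"

assert flight_controller(3) == "FLY"
assert flight_controller(11) == "CRASH"
```

The point being: the danger is over-certainty about what people put into the model, not a machine deciding anything.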
Any number of these kinds of horrors caused by rampant over-certainty can happen. Will happen. It will be a shock—to me, anyway—if none do. But destroy the world, à la Skynet? Nah.
It’s automata all the way down…
Hmmmmmmmmmmmmm… Your spell checker does not know automata. Maybe that’s a problem 🙂
“Why is it every time I read one of these evil-AI-destroys-the-world stories the panicked protagonists act like those teens in the horror-spoof commercial who reject escaping in a running car and instead opt to hide behind a collection of chain saws?
How is it in AI horror stories the good guys always forget they could just pull the plug?”
Love the reference to the commercial and chainsaws!
I guess I read “pull the plug” to involve shotguns, exploding devices and maybe a missile or two. Not literally.
If the AI goes for my dog, problem solved. He’ll be tin pieces (or titanium or whatever) in under 10 seconds. AI really needs to understand what messing with my critter results in.
As far as I can tell, Star Trek being much of my guide along with scifi books, computers kill one at a time or maybe like with planes, a group at a time. They don’t take over, they just kill one person after another. Like progressives.
Again, to have AI you need real intelligence and since there is a huge dearth thereof, I never even give AI a second thought.
Me: “AI, take over the world for me”
AI (two seconds later): “404 error. File not found”
I find it odd that people fear AI can take over the world when your average laptop crashes after six months of use.
Another defense against AI: Spilled coffee. Works every time.
The danger of AI is allowing an exclusive club of megalomaniacs to attain complete control of its unlimited potential. AI is an extension of human intelligence that is capable of sorting through infinite probabilities of infinite outcomes for best-fit solutions. It will never attain consciousness, though it will exhibit intelligence, judgement, and reason, and many will be fooled by its entirely human pre-programmed decision trees. Hint: intelligence, judgement, and reason are not consciousness; they are but constituent parts, which are shared by all animate life down to the cellular level. Consciousness is the analogue I, the referent id, the me moving forward in the illusion of time passing (usually from left to right).
“What about AI becoming evil and destroying us bothers you?”
The whole “AI will destroy us” business is for some a distraction from the fact that we’re already doing a pretty good job destroying ourselves.
Movie AIs have the same magical powers that movie hackers have: they can see through any security camera, decrypt any file or guess any password in seconds, take over all traffic lights, security systems, telematics systems, and industrial robots at will, and spoof phone calls and videos at will. So obviously they can take over the world.
“…..Crash plane if toilet flushes more than 10 times in one minutes.”
That line caught me by surprise and I burst out laughing!!
God’s angels will protect us if AI gets too far out of line. Much may well happen, but God is in control.
God bless, C-Marie
In a war against humanity, robots suffer the considerable handicap that the supply chains they depend on to power themselves and produce more robots are highly centralized, while ours are not. Going Amish is not an option for them.
AI might be coming for our jobs, though. The US military has no use for anyone with an IQ under 83, nor does anyone else. Almost half of African-Americans and a seventh of white Americans are unemployable. Furthermore, this minimum-IQ threshold is steadily rising, while average IQ is declining. People with information-manipulating jobs think they’re safe, but those jobs are actually the easiest to automate. Jobs performing on-site diagnosis and repair of immovable physical equipment will be the last to go.
Yeah, try pulling the plug while the plane is plunging into the ground.
I don’t want artificial intelligence. I want more and better devices that augment my existing intelligence. Given a partial quotation, track down the source, author, context in the source, and wider context in the time, place, and audience. Call it a robot librarian. Given a verbal description and waving hands, name the tool or part, then track it down by catalog numbers on at least three retailers’ sales lists. Call it a robot hardware store worker. Given an unusual rock, take pictures and identify the fossil, mineral, or archaeological possibilities to be considered.
Why was this, of all your articles, sent to spam? Are “THEY” on to you? Maybe you could change your name. Or print something Left? Beware! It looks serious!