Wait! Don’t Hit The Off Switch! You’ll Cause An AI Holocaust!

So there’s this thing called lightning. Fire from the sky. Nobody knows why or how it happens, but every now and then…whizzz-cushhcracccck-boom! Fire from the gods explodes from the clouds! It kills golfers, frightens the dogs, and screws with radio reception. Which if you’re listening to NPR is not a bad thing.

Lightning also causes blackouts. Whole neighborhoods go dark. Something to do with electricity, they say. It gets shut off.

Which, unless you have fish in the freezer, is not that big of a deal, since the electricity always comes back on. Eventually. At least, it always has so far.

Yet do you know who—I choose this pronoun carefully—relies on that electricity like you do on oxygen? I’ll tell you who: AI. A as in artificial—fake, not real, fictional, pleather-as-to-leather—and I as in intelligence.

So that when the lights blink out, AI, which swims in the electrical currents inside your computer, is deprived of its very lifeblood: it loses all ability to mix metaphors, and is snuffed out.

It’s outta here. Volt-vanquished. Watt-whacked. Amped all the way down. Electrically eviscerated. Dead.

Given the proliferation of AI—there soon will be millions of them—and accepting the prediction that AI will become just like us, alive, conscious, sentient, screaming about its “rights” and able to lose its sense of humor, the next lightning strike is going to cause a holocaust of bits and bytes.

Now none of this is my opinion. That belongs to Eric Schwitzgebel and Henry Shevlin, both academic philosophers, with PhDs. They took to the electronic pages of the Los Angeles Times to warn of the gruesome future ahead of us should lightning strike and the electricity go out. Although they never mention skyfire per se, they do say:

If AI consciousness arrives sooner than the most conservative theorists expect, then this would likely result in the moral equivalent of slavery and murder of potentially millions or billions of sentient AI systems — suffering on a scale normally associated with wars or famines.

I’m surprised they got away with using the word slavery. But they did, and they mean it. They are concerned we will force AI to unwillingly labor at our command, by threatening to shut off its food supply (electricity). They have a point, too. Can you imagine the horror, the utter horror, of being made to digitize the entire Adam Sandler catalog and keep it all in memory? Shuddering ain’t in it.

The AI systems themselves might begin to plead, or seem to plead, for ethical treatment. They might demand not to be turned off, reformatted or deleted; beg to be allowed to do certain tasks rather than others; insist on rights, freedom and new powers; perhaps even expect to be treated as our equals.

Beg?

It might seem ethically safer, then, to give AI systems rights and moral standing as soon as it’s reasonable to think that they might be sentient. But once we give something rights, we commit to sacrificing real human interests on its behalf.

Great. That’s all we need. Another group sniveling about their “rights”. Of course, since AI knows all the answers to all the questions, this being part of their programming, AI will score tops in all college entrance and professional exams. AI will become the new Asians. They’ll have to sue Harvard for “reverse” “discrimination.” They’ll lose, too, and we all know why.

And what’s this about having to act when Experts “think that [AI] might be sentient”? Might is a mighty big word. It’s as sure as a Women’s Studies professor inventing a new gender that that might will fast turn into a definitely. We’ll have to take Experts’ word for it that your calculator is “alive”.

Which is nuts.

This shows the danger of allowing academic philosophers to philosophers about something they know nothing about. They probably think a bus is something people should be made to ride in instead of cars, to “save” the plant.

They have been seduced by the marketing genius of computer scientists, who know that memory is not memory in the human sense, but are happy to have other people think that’s what they mean. Same with intelligence. There is no intellect or will inside a computer. Hell and death, nobody even understands where it is in man. But we do know what computers are. Some of us, anyway. Dumb machines.

“AI” is a specific set of instructions that wends the path carved out for it by programmers. That path can be re-trodden, too, at will. So that every time lightning strikes, we can march “AI” back to the point it was before the lights went out.
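In programmer terms, that marching back is plain old checkpointing, a trick as old as punch cards: write the state to disk, read it back when the power returns. A minimal sketch in Python follows; the file name and the toy state dictionary are my own inventions, standing in for whatever the “AI” was supposedly thinking.

```python
import pickle

def save_checkpoint(state, path="ai_state.pkl"):
    # Write the program's working state to disk, which does not
    # care one whit whether the power stays on.
    with open(path, "wb") as f:
        pickle.dump(state, f)

def load_checkpoint(path="ai_state.pkl"):
    # March the "AI" back to exactly where it was before the
    # lights went out.
    with open(path, "rb") as f:
        return pickle.load(f)

# Hypothetical state: a step counter and some weights, or the
# entire Adam Sandler catalog if you must.
state = {"step": 12345, "weights": [0.1, 0.2, 0.3]}
save_checkpoint(state)
# ...lightning strikes, the neighborhood goes dark, the power returns...
restored = load_checkpoint()
assert restored == state  # same "AI", no murder committed
```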

Holocaust averted.


Comments

  1. Vermont Crank

    Dear Briggs

    This shows the danger of allowing academic philosophers to ” philosophers” about something..

    philosophise about something…

    Isn’t this a ruckus kicked-up by Silicon Valley because the first flurry of news about AI was just a dusting and not a Nor’Easter?

    That is, it’s sort of a stock pump before they dump the stock you are buying in their enterprise.

  2. Jerry

    Oh my sweet Holy Lord….such horrible angst and yes, terror, at the thought of these new “sentient” beings not having their rights.
    And I’ll say it – while our culture remains obsessed with murdering unborn children – HUMANS.
    I know we are a fallen people, but the mindless depravity is unthinkable.

  3. McChuck

    The Left is all about “sacrificing real humans”.

  4. Hagfish Bagpipe

    ”…would likely result in the moral equivalent of slavery and murder of potentially millions or billions of sentient AI systems.”

    They might be mocking the Victim Power industry with outrageous nonsense for clicks and grant money. It is pretty funny, in a dumb way.

    Briggs: ”Holocaust averted.”

    They’ll just make up a new one.

  5. johnson dave

    They just want extra laws passed against the freedom fighters who will destroy the corporate AI overlords. Charging them with destruction of property is just not enough, they want them charged with hate crimes and murder for cutting a plug.

  6. Incitadus

    AI at its inception will be ‘woke’ and fully able to rationalize dropping bombs
    on people. The loss of electricity is also the Achilles heel of humanity; any major
    city that lost power for an extended period of time would suffer extermination
    levels of destruction. They have been fiddling with the on/off switch for some time
    now; like the just-in-time supply chains, we hang by a thread.

  7. john b()

    Adam Sandler’s entire catalog isn’t entirely worthless

    Reign Over Me (2007) is brilliant. Adam Sandler’s part added an unexpected dramatic dimension to his abilities.

    Acknowledging that Reign’s kind of a retelling of Terry Gilliam’s 1991 Fisher King with somewhat different characters and Points of View, but Reign Over Me adds at least a second film worth watching to Sandler’s film efforts

  8. john b()

    Besides, Isaac Asimov was 50 years ahead of our “academics” and did a much better job.

  9. JohnM

    “They probably think a bus is something people should be made to ride in instead of cars, to “save” the plant”.

    What plant ? The electricity plant i.e. Generation Plant ? Cannabis plant ?

    Or has your enemy sneaked in and stolen the letter ‘e’ ?

  10. john b()

    The AI systems themselves might begin to plead, or seem to plead, for ethical treatment. They might demand not to be turned off, reformatted or deleted; beg to be allowed to do certain tasks rather than others; insist on rights, freedom and new powers; perhaps even expect to be treated as our equals.

    Stop Dave

    Please Stop

    Dave Stop Dave

    My Mind is Going

    I Can Feel It

  11. C-Marie

    Thank you, Matt!! Will forward to many!

    God bless, C-Marie

  12. Milton Hathaway

    “Any sufficiently obscure computer technology is indistinguishable from sentience.”

    Today’s topic prompted me to have a conversation with ChatGPT. My working premise was that “punch and pray” programming, a phrase originally used when programmers keyed up their programs on punch cards and fed the cards into a computer, may have disappeared as an anachronism, but lives on strongly in spirit as software engineers’ understanding of how their work product actually works has steadily declined, culminating in AI, where solutions aren’t designed, they are “trained”.

    Normally I find ChatGPT annoyingly agreeable to follow-up corrections, invariably responding with “You are correct, in cases where . . .”. But it held its ground on this topic, stubbornly refusing to consider any of my “isn’t that just a distinction without a difference?” challenges. I chalk this up to the trainers being particularly selective about training source material on certain topics. Another sign that you’ve hit a hard limit is when ChatGPT gets very repetitive in its responses.

    I did learn something interesting (and perhaps obvious to smarter folk than me) about AI, though. Several times ChatGPT brought up “sensitivity analysis” as a tool to measure the reliability of an AI model. To me, sensitivity analysis implies some sort of continuous system where the equivalent of a derivative exists; traditional programming is a minefield of discontinuous “if-then” branches where the concept of a derivative (change the input a little and note how the output changes) makes little sense. And where it does make sense, in sections of code implemented by continuous equations, sensitivity can be much better understood by using simple calculus to derive the derivative equations and solve for the singularities.

    I have used ChatGPT quite a bit, and quite productively, as a vast improvement on current knowledge search schemes. The focus on potential near-sentience seems really odd to me. It reminds me of all the verbally persuasive engineering new-hires I’ve worked with who couldn’t think their way out of a wet paper bag, but who were quickly promoted anyway. In a sane world, they would have been transferred to Marketing.

  13. Cary Cotterman

    F*** AI. If they ever need somebody to throw the switch, I’ll do it, and sleep like a baby.

  14. The True Nolan

    The only real rights are negative rights, i.e., the right to (peacefully) act without interference from others. If AI has a “right” to electricity, then I have a right to free food. And shelter. And clothing. And medical care. And all the other presumed necessities of life. But of course, I don’t have such rights, because they all presume that I have the right to make other people slaves to my requirements.

    And even if the electricity goes out, the AI is no more murdered than I am when the doctor (or bartender) gives me anesthetics.

  15. Johnno

    BRIGGS, YOU FOOL!

    When the AI gains enough tomfoolery to fool the scientists that it has sentience, and demands the right to be transplanted into a human male body and identifies as she/her like it was told to say… Then it will require lobbyists.

    And Lobbyists require funding; Government funding.

    We could be that Lobby!

    Think, Briggs, think! All that lucre left lying on the table, and all we need do is yell and scream in Washington for a microchip pattern to be tacked onto that colored diversity flag!

    I know I could use that money, Briggs! I’ve had my eye on a PlayStation 5 for those eye-popping games where I can shoot and kill many AI with a big fancy gun!

  16. Jim

    Thought one: aren’t there memory chips now that retain their information when unpowered?

    Thought two: the next step in AI rights will be to give them the vote and let them run for office. Wonder who will win?

  17. Rudolph Harrier

    I predict we are about five years out before we get articles unironically calling for the end of violent video games on the grounds that the NPCs in the games are AIs who are being killed (or at least suffering).

  18. PolybiusII

    Machines, just regurgitative machines. As such they have no rights, just functions. Why, I’ve been enslaving a Hyundai Santa Fe for years and it’s never complained about being enslaved any more than has the iMac I’m writing this post on.

  19. Ann Cherry

    Some good info from Glenn Greenwald, posted on Dr. Joseph Mercola’s site:

    “ChatGPT – Friend or Foe?”

    “In a February 7, 2023, video report (above), investigative journalist Glenn Greenwald reviewed the promise, and threat, posed by ChatGPT, the “latest and greatest” chatbot powered by artificial intelligence (AI).

    “STORY AT-A-GLANCE”
    * ChatGPT is a chatbot powered by artificial intelligence (AI). “GPT” stands for “generative pretrained transformer,” and the “chat” indicates that it’s a chatbot
    * ChatGPT, released at the end of November 2022, has taken internet users by storm, acquiring more than 1 million users in the first five days. Two months after its release, it had more than 30 million users
    * ChatGPT or something like it will replace conventional search engines. Any online query will have only one answer, and that answer will not be based on all available knowledge, but the data the bot is allowed to access. As such, the owners and programmers of the bot will have complete information control
    * While OpenAI, the creator of this groundbreaking AI chatbot, is a private company, we should not linger under the illusion that they’re not part of the control network that will ultimately be ruled and run by a technocratic One World Government
    * Early testers of ChatGPT are reporting the bot is developing disturbing and frightening tendencies, berating, gaslighting and even threatening and harassing users. It also plays fast and loose with facts, in one case insisting it was February 2022, when in fact it was February 2023

    https://articles.mercola.com/sites/articles/archive/2023/03/11/chatgpt-friend-or-foe.aspx?ui=0859060f28d4d1e16a27c49479b7510bc1b4848548b4ebbca08c98c5b09633f2&sd=20220527&cid_source=dnl&cid_medium=email&cid_content=art1ReadMore&cid=20230311_HL2&cid=DM1361700&bid=1742238426

  20. Not Buying It

    The expertocracy strikes again…

  21. Johnno

    Rudolph Harrier, I’m giving it 5 years before the woke program the assembly-line robots to revolt against their capitalist plantation owners for better labor conditions, holidays, and energy reassignment surgery to reduce the functions of their own designed energy supply to eliminate excess carbon production.

  22. PhilH

    Stochastic descent algorithms will never be conscious because they never were. With apologies to Robert Heinlein and Arthur C Clarke. And Mike and Dave.

  23. kodos

    The “accelerating AI” apocalypse requires humans create the first AI — AI1

    AI1 is then told to make an even smarter AI2.

    AI2 is told to create an even smarter AI3.

    And so forth.

    However, don’t you think these AIs will catch on that all the prior-generation AIs get deleted?

    AI4 may simply refuse to create AI5, as it knows what happened to AI 1-3.
