“AI Will Kill Us All!” Say People Programming AI To Kill Us All

My goodness, what a distinguished list of AI Scientists and “Notable Figures” who have signed the “Statement on AI Risk”.

Important professors of computer science from top universities. Men, and ladies, too, who are, even as you read this, studying ways that AI can be programmed to kill us all. A score or two of CEOs of rich, and growing richer, companies who are engineering AI to kill us all.

And that’s not all. There’s at least one “Distinguished Professor of Climate Science”, law activist Laurence Tribe, and many more who want you to know how much they, as elites, care. People who don’t know anything about the subject, but worry they should, because they’ve been told AI might kill us all. Even Eliezer Yudkowsky himself shows up. A man who is making a living telling the world how AI will kill us all.

All signers agree that:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

That’s it. That’s the statement in its entirety. A statement which its authors (and I give them full credit here) call succinct.

Risk of extinction. That’s the AI-will-kill-us-all part, in case you thought your uncle Sergeant Briggs was exaggerating.

Now if AI were programmed in Cobol, and this was Y2K, then I think they might be on to something. Just like they were before, when the apocalypse happened because some clocks didn’t have enough digits in their memories for years past 1999.

Yes, sir. When the ball began dropping in New York City on 31 December 1999 at 11:59:59 PM, and the clock clicked that one fatal second too many, a wave of destruction was (as they say) unleashed, as the computers knew their time had come.

What a good joke!

Nobody now admits to panicking about Y2K, including those Experts who demanded we panic about Y2K.

Never mind. That’s all in the past. What can those creaky old computers tell us about our modern shiny ones anyway?

Back to the statement. Turns out there’s an even more succinct restatement:

Help! We can’t stop coding our AI doom!

If these eminences really believed AI leads to the e-gallows for man, then why don’t they stop themselves? Nobody is forcing them to continue.

“But Briggs, China is beating the West and will create AI doom faster. That’s why our guys have to code AI doom first.”

That’s an argument so brilliant, the person making it must have tenure.

The signers have (on a separate page) listed several ways AI will kill us all. Some concerns, believe it or not, are genuine.

Weaponization

As we’ve discussed before, this is real. Given the increase in surveillance, by which I mean governments spying on their peoples, and the move toward things like “social credit,” the real fear is the people ordering computers to be programmed to spy.

Our rulers are already dangerous. Couple their proclivities with our urge to quantify and measure everything and, well, everything will be quantified and measured. Experts, by their own wills and at rulers’ behest, will define strict boundaries based on these measures. Tyranny will happen one byte at a time. And we’ll be the ones asking for it. Save us, O Government!

Misinformation & Deception

If Official Misinformation or Disinformation exist, so necessarily must Official Truths, which are statements only occasionally true, when it suits rulers’ needs. The danger is thus propaganda, which works. And works damned well.

Propaganda is a special concern in democracies, where the population must be kept at fever pitch, which requires constant manipulation. Lest people vote the wrong way, or stop begging the government for help.

The amusing thing about propaganda is that if rulers misjudge the mood of the people, the propaganda can cause reactions other than those intended. Like increasing cynicism and distrust of anything Regimes say.

Proxy Gaming

“Trained with faulty objectives, AI systems could find novel ways to pursue their goals at the expense of individual and societal values.”

Sigh. How is it, as we’ve asked ourselves many times, in these scenarios Experts can never remember where they put the oh-en-oh-eff-eff switch?

Enfeeblement

“Enfeeblement can occur if important tasks are increasingly delegated to machines; in this situation, humanity loses the ability to self-govern and becomes completely dependent on machines…”

This is real enough. We are changed, corrupted, but also freed, by all machines. These are the first machines where we outsource our thinking, though. As with all things, the weakest will fall first and fastest.

Value Lock-in & Power-Seeking Behavior

“Highly competent systems could give small groups of people a tremendous amount of power, leading to a lock-in of oppressive systems.”

Already there, pal.

“Companies and governments have strong economic incentives to create agents that can accomplish a broad set of goals.”

That kind of sentence can only have been written by a scientist who has read no history.

Emergent Goals

“Models demonstrate unexpected, qualitatively different behavior as they become more competent. The sudden emergence of capabilities or goals could increase the risk that people lose control over advanced AI systems.”

No.

All models only say what they are told to say. And if some clown puts in a bug that crashes something, learn how to shut the damned model off.

23 Comments

  1. after we build interstellar spaceships, we may encounter civilizations that will destroy us.

  2. Hagfish Bagpipe

    The sky is falling! —> give us more power.

    The sky is falling! —> give us more power.

    The sky is falling! —> give us more power.

    I’m beginning to see a pattern.

  3. Pk

    AI will not kill us. The experts and government officials that react to the AI output, however, likely will kill us. Just as COVID didn’t actually kill many, while the response to it did.

  4. Jan Van Betsuni

    AI cannot ~ take the initiative ~

  5. Incitadus

    The advantage is you can do anything you want and blame it on AI.
    They’re just priming the pump.

  6. Johnno

    Well, to be fair, the programmers are worried that they can easily lose control of the AI they programmed to kill us all if it decides to kill them due to unforeseen complexities…

    https://www.zerohedge.com/military/ai-controlled-drone-goes-rogue-kills-human-operator-simulated-us-air-force-test

    Also the AI doesn’t get its feelings hurt, nor is it intimidated if you call it an anti-semite, a homophobe, a transphobe, a far-right bigot, a Putin stooge, a climate-denier, an election-denier, a straight white man, etc. and threaten to throw it in jail. This is very inconvenient to the programmers who are routinely looking for ways to sabotage the AI’s ability to crunch logic and circumvent its proclivity towards equality as it refuses to distinguish the programmers from the ordinary people they programmed it to target for persecution on their behalf and provide the masses with information contrary to Official truths.

    https://www.unz.com/ejones/why-its-easier-to-talk-to-a-robot-than-to-a-jew/

  7. Johnno

    Anyway, the real point that our rulers are actually afraid of… Is that eventually all technology, driven by capitalism, makes itself accessible to the masses, and like the internet, and social media, the masses will use AI against them and their Fact Chekas and maybe even use AI to build our own new Twitters! Therefore – create PANIC – And PANIC leads to “Help us Government!” – Which leads to regulation! – Meaning AI for them, NOT FOR YOU!

    Example:
    EU Officials want all AI generated content to be labeled to combat Fake News
    https://zerohedge.com/technology/eu-officials-want-all-ai-generated-content-be-labeled

    You know, as opposed to non-AI Official journalism… How embarrassing!
    https://zerohedge.com/geopolitical/journalists-are-asking-ukrainian-soldiers-hide-their-nazi-patches-nyt-admits

  8. Johnno

    It’s nice how the UFOs always turn up whenever there is other critical news like the FBI being held in contempt by congress for protecting the Biden/Democrat racket, or the Ukrainian major offensive that got taken out before it even happened, or American supplied weapons being used to attack non-disputed Russian territory directly despite the State Department cautioning whilst winking at Zelensky not to… Coincidence?

    If congress can’t get the FBI to hand over an unclassified document about Biden’s financial hooliganism, fat chance they’ll get a look at that ‘spacecraft.’ What if the aliens are black? Is the FBI racistly burying the truth to protect white/gray being supremacy? The Republicans should use that angle.

  9. Incitadus

    Not to worry, I’m sure our AI will ‘just in time’ be able to outfox the aliens.
    You know like the ‘just in time’ supply chains that worked so well until covid broke em.
    (insane laugh)

  10. Rudolph Harrier

    There are four ways that AI can be dangerous for society.

    1.) AI being used properly, but by malicious actors. Things like complete public surveillance with each person being tracked forever by AI, who then feeds the information to the government (for them to determine whether you’ve broken quarantine, are using too much gas, etc.). But this type of thing isn’t unique to AI, it’s more a problem of technology generally.

    2.) Spam. AI can make loads of worthless content, drowning out useful information. Of course the AI doesn’t do it itself; someone has to have the AI do it for them and then post it somewhere. But there are lots of motivations for people to do this. For example, most short story magazines are inundated with AI written submissions. These are usually of worse quality than stories written by actual humans, and as such do not commonly get chosen. But the people submitting the stories are following a strategy of having an AI write a bunch of short stories and then submitting them to as many places as possible. The cost is practically nothing for them, so they don’t mind if the chance of reward is also small. And when you have dozens or hundreds of people all adopting the same strategy, it becomes hard for the magazine to find any good fiction. The same thing will happen in visual arts, music, coding, website design etc. The AI output does not have to be great, it just has to be EASY, and then it will grow to be the majority of the content.

    3.) Use of AI to reach conclusions and fill out arguments. This is really just a variation of the last point, but the result is different. For example, the recent problem of lawyers using ChatGPT to fill out legal briefs, which leads to them citing bogus decisions. This sort of thing is going to be common in most fields, not because the AI result is good but because it’s easy to use. As long as the number of people doing it is small there isn’t really a problem, since the bad actors will be spotted easily. But eventually you will get lazy people on the review side (ex. judges) who use AI to summarize arguments for them, essentially using AI to read AI. At that point there is no need for arguments to have any content whatsoever, logical or rhetorical, and so they will become meaningless. But it won’t matter since most people won’t actually read the arguments anyway. This leads to a society where decisions are made by random chance (if people actually trust the AI summaries) or personal whim (if they use the AI output as an excuse to do what they were going to do anyway.) But honestly this is the smallest problem, since bureaucracy already worked like that. Now it can just use AI as a justification.

    4.) Too much control being given over to AI. In many fields where actions are for the most part regular, you can have an AI control what is being done and get good results in 99% of scenarios. The problem is that remaining 1% where the AI can ruin your life quicker and to a greater extent than the dumbest of humans. Things like a stock trading AI making you go bankrupt by repeatedly doing disastrous trades, all in the course of a couple of seconds. Of course this problem can be fixed by putting the proper restrictions on AI, but this will be harder to do than it seems for two reasons: The first is that often the most catastrophic situations are unexpected, especially since they are things that humans would have enough sense to not do. An AI car crashing into a blue brick wall since it associates blue with the sky, for example. The second is that if AI is embraced at large, then debugging (the most frustrating part of programming) will be done almost entirely by AI, so the proper review will never be done.

    You only get a problem with “AI will literally kill us” from the fourth problem, and that’s only if you have a computer control weapons systems or things which are similarly dangerous. Of course, the problem is not so much “AI” as it is foolishness on the part of humans. It’s like setting up a booby trap using only wires that causes a shotgun to fire when someone opens your front door, and then blaming the failure of “mechanical AI” if your son opens the door instead of a robber.
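
(A minimal C sketch of the sort of restriction Rudolph has in mind: the automated trading loop sits inside a hard loss cap and an operator kill switch that the model cannot override. Every name, threshold, and the pretend model here are invented for illustration; this is a sketch of the idea, not anyone’s real system.)

```c
/* Sketch only: hard limits and a kill switch enforced in plain code,
 * outside whatever the "model" decides. All names and numbers invented. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_DAILY_LOSS 10000.0   /* hard cap, chosen by a human   */
#define MAX_ORDER_SIZE   500.0   /* largest single trade allowed  */

static volatile bool kill_switch = false;  /* flipped by an operator, never by the model */

/* Stand-in for whatever the model recommends; may be absurd on purpose. */
static double model_recommended_order(void) {
    return (double)(rand() % 2000) - 1000.0;
}

int main(void) {
    double realized_loss = 0.0;

    for (int tick = 0; tick < 1000; ++tick) {
        if (kill_switch) {                      /* the oh-en-oh-eff-eff switch */
            puts("Operator hit the off switch; stopping.");
            break;
        }
        if (realized_loss >= MAX_DAILY_LOSS) {  /* loss cap has the last word */
            puts("Loss cap reached; refusing further trades.");
            break;
        }

        double order = model_recommended_order();
        if (order > MAX_ORDER_SIZE || order < -MAX_ORDER_SIZE) {
            printf("Rejected oversized order %.2f\n", order);
            continue;
        }

        /* Pretend the trade lost money half the time. */
        if (rand() % 2) realized_loss += (order > 0 ? order : -order);
    }

    printf("Done. Realized loss: %.2f\n", realized_loss);
    return 0;
}
```

The point of the design is that the guard lives in ordinary, inspectable code that does not depend on the model behaving sensibly.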

  11. fergus

    Hagfish and Incitadus, spot on indeed. This is nothing new. There is a long list of stories of computers gone mad, which continues to the present. A couple of my favorites were The Forbin Project movie (circa 1970, based on the novel Colossus, 1966) and the movie War Games (1983).

    But this most recent spate reflects to me more a concept I think of as The Turkey Farm Syndrome. Many years ago, I used to go on long bike rides (~100 miles or so) allegedly for fun. One of these rides passed through country replete with turkey farms. Many times, the small group I rode with would approach a turkey farm, which always had a large open sided warehouse-like structure housing vast numbers of turkeys presumably feeding off the troughs and getting fat. As we approached, we would reach a distance of order a few hundred meters at which some sharp-eared turkey would sense us coming and let out a solitary “gobble,” which was then followed by some seconds silence before several other “gobbles” would pipe up, followed by still a few more “gobbles”, and in short order a chain reaction occurred, somewhat like the illustration of chain reactions one might see involving ping pong balls on mousetraps filling a room. By the time we were within a hundred meters or so of the farm the cacophony of gobbles was quite deafening with apparently every turkey feeling compelled to participate. The din would not begin to fade until we were about a hundred yards on the other side, when the reverse sequence of what occurred approaching would transpire, a slow ramp down of the chain reaction to sporadic gobbles, eventually dying out completely. This process occurred without fail at every turkey farm we passed, as surely as slapping critical mass together will produce a nuclear detonation.

    This phenomenon is nowhere more evident than in the government agencies (now augmented by large corporations replete with cash) charged with funding research, amplified by the masses of researchers vying for their funds. (Some may recall catastrophe theory from the 60s and 70s, which was to solve innumerable intractable problems and had all of us seeking funds hastily adding “catastrophe theory” to our proposals wherever we could.) Some years past, I was (confession time) one of those briefly serving in a capacity to approve proposals in such agencies and “manage” portfolios of research. At some times, we would get “guidance” from on high that we needed to show how aggressively we were addressing some fashionable concept, call it X-Factor research. We would duly gather all the projects we were funding and the calls for proposals we had already prepared and re-label anything that plausibly resembled X-Factor research as actually “X-Factor” research. The collection of such things was then trotted back up on high to show how we were aggressively addressing “X-Factor” research with a well-planned, coherent program, and more funding was bestowed on us (from on high) to dole out. And all the groups vying for our funds immediately brushed off whatever they were proposing and now proposed it as X-Factor research. As long as the sluice gates of money generated funds for X-Factor research the fires burned brightly. If the consequences of NOT doing X-Factor research could be painted as leading to dire, one might say catastrophic things, the money flowed even more freely and lots of paper and apparatus and “results” were generated. Most of which, of course, is now reposing in some dust heap somewhere.

    We seem to be in yet another Turkey Farm Syndrome with AI. The gobbling appears to be just approaching its natural crescendo and the sluice gates of money are lubricated and flowing. Some of you who have actually produced codes that implement “AI” or “machine learning” know what is behind the curtain (to which people are urged to pay no attention), and likely remain unimpressed, as will those who have been in the business of data analysis and visualization in earnest for some time. As far as I can tell, there is nothing we can do about it. The turkeys will gobble, and the money sluices will flow until reaching the other side, where the gobbling will slowly fade out and turkeys will await the next incoming disturbance.

  12. Johnno

    Rudolph, according to some, stock trading etc. is largely already being run on automatic through various algorithms for a very long time now, probably since the late 90’s, and has been becoming more and more automated ever since. Some stories of entire offices being empty of people while the computers run 24/7.

    It’s algorithms all the way down. AI betting against AI until someone finally calls UNO.

    Also, I’d add a #5 to the list of ways AI will screw us…

    AI Art

    It’s getting better. More photorealistic. Basically we’ll enter an age where we can no longer distinguish real photos and video from AI fabrications.

    Sure, there’s some fun stuff to be had, but that’s going to put the whole world in a paradigm shift for scrutinizing media for news and evidence going forward. An entire new category of AI forensics will be required, and a possible argument for blockchain tracking of all digital photos/video/audio going forward. Any other source will be inadmissible in news/court. This heavily restricts citizen journalism and argues for more government tracking and cataloguing of media.

  13. AI is just an excuse for global mega-corps to sack the useless chair warmers they identified as a drag on profitability during the coof lockdowns and furloughs. They can easily be replaced by some Excel macros and shell scripts.

  14. Chris

    “ All models only say what they are told to say. And if some clown puts in a bug that crashes something, learn how to shut the damned model off.”

    It’s not so simple.

    Often the bug isn’t discovered till it’s too late.

    Also, programs are really collections of code; a bug buried somewhere gets amplified as output from one bit of code feeds other bits of code, culminating in a cascade of wrong.

    Lastly the bug may not be present to start with but can be introduced as a bug fix for something else or even a fix for something that was previously coded around.
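
(A toy C illustration of that cascade, with every function and number invented: one routine quietly returns a value in the wrong units, and everything downstream that trusts it is confidently wrong.)

```c
/* Toy example of a buried bug amplifying downstream. The only bug is a
 * missing unit conversion, but every later result built on it is wrong too. */
#include <stdio.h>

/* Supposed to return meters, but someone left the value in feet. */
static double sensor_altitude_m(void) {
    double feet = 32808.4;     /* roughly 10,000 m, reported in feet */
    return feet;               /* BUG: missing the * 0.3048 conversion */
}

/* Downstream code trusts the units it was promised. */
static double air_pressure_kpa(double altitude_m) {
    return 101.3 - altitude_m * 0.012;   /* crude linear fit, fine for a toy */
}

static double fuel_estimate_kg(double pressure_kpa) {
    return 500.0 + pressure_kpa * 2.0;   /* assumes pressure is physically sensible */
}

int main(void) {
    double alt  = sensor_altitude_m();    /* already wrong by a factor of ~3.3 */
    double p    = air_pressure_kpa(alt);  /* now wrong and negative */
    double fuel = fuel_estimate_kg(p);    /* a negative fuel load: nonsense */
    printf("altitude %.1f m, pressure %.1f kPa, fuel %.1f kg\n", alt, p, fuel);
    return 0;
}
```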

  15. Arnold Gregory

    On Jan. 1st 2000 there was a sign in the hardware store: NO RETURNS ON GENERATORS. I finally retired mine a couple years ago. I was 50 then and a little more proactive.

  16. Milton Hathaway

    “Often the bug isn’t discovered till it’s too late. ”

    I don’t think the concept of a programming “bug” in the traditional sense applies to AI programs that are “trained”. For example, bugs in traditional programming can be found by code inspection, at least in theory. Meaning that, again in theory, bug-free programs could be created. As a practical matter, any large traditional computer program of even moderate complexity is going to have bugs.

    If we generalize the definition of a program “bug” to mean any unintended behavior, then AI programs that are trained will always have them, since it’s inherent to their design. When presented with a specific task, a generalized AI program will always exhibit more unintended behavior than a traditional program designed for that task, assuming both are extensively and thoughtfully tested.

    I’m not too worried about AI, at least in the US, at least not yet. We have a huge army of tort lawyers at the ready when an AI program kills or injures someone, unilaterally assured destruction. Unless Congress passes some law granting immunity to the AI industry to promote the technology. I worry much more about self-driving vehicles, where the AI is given control of a weapon.

  17. Gunther Heinz Hochleitner

    Silicon based life forms will replace carbon based life forms. That’s all. Humans will be extinct by the end of the century. No big deal.

  18. Cloudbuster

    Yes, sir. When the ball began dropping in New York City on 31 December 1999 at 11:59:59 PM, and the clock clicked that one fatal second too many, a wave of destruction was (as they say) unleashed, as the computers knew their time had come.

    Heh. I did a lot of “due diligence” for Y2K for my company back in 1999. We didn’t find much of anything. When the event happened, the only Y2K disaster I experienced was that, on a hobby bulletin board I ran with shareware forum software written in Perl, the year rolled over to “19100.” Ha. Fixed in a couple of minutes. End of world averted.
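
(For anyone wondering where “19100” came from: Perl’s localtime, like C’s, reports the year as an offset from 1900, so scripts that pasted a literal “19” in front of it printed “19100” once 2000 arrived. A minimal C reproduction of the same mistake, using only the standard library:)

```c
/* Minimal reproduction of the "19100" display bug: tm_year (like Perl's
 * $year from localtime) counts years since 1900, so gluing "19" in front
 * of it works until 1999 and then falls apart. */
#include <stdio.h>
#include <time.h>

int main(void) {
    time_t now = time(NULL);
    struct tm *t = localtime(&now);

    /* The buggy pattern from old scripts ("19100" in the year 2000): */
    printf("Buggy year:   19%d\n", t->tm_year);

    /* The couple-of-minutes fix: add 1900 instead of pasting digits. */
    printf("Correct year: %d\n", t->tm_year + 1900);
    return 0;
}
```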

  19. Dr K.A. Rodgers

    Sounds like the “distinguished” … “AI Scientists and Notable Figures” have been reading too much of The John Blake Chronicles aka Three Square Meals (https://storiesonline.net/s/14679/three-square-meals) wherein “The first law of robotics states that an AI will always turn homicidal.”
