O No! “Racist” AI!: The Ethics of AI?

There is a form of AI—AI being an acronym from the German meaning statistical modeling—that isn’t AI per se, but which is important, even crucial, though not of direct interest to us today. This is the use of electronics for surveillance, tracking, and categorization, to exercise ever greater control over people.

For instance, in that great paradise known as China, even in cities that haven’t locked down for months at a time like Shanghai (search for Shanghai here), residents must be tested for coronadoom daily. As the woke New York Times reports, “In cities across the country, even where there are no reported cases, residents must to show a negative PCR test to go shopping or use public services.” Such as the subway, buses, and so forth. The “passes” are apps on phones, which must be carried.

A complete medical tyranny (China under Xi is a full-blown Expertocracy) made possible by the advances of science.

Again, as important as all that sort of thing is and will become, I want to focus, as we often have, on the modeling aspects of AI. Like it was used here:

You can see the scene, can’t you? It’s like something out of a movie…

TWO ASTONISHED SWEATY OBESE ENGINEERS STAND IN FRONT OF A COMPUTER SCREEN. FROM A VOICE-OVER, WE HEAR WHISPERED:

“The N-word…”

IT ECHOES SOFTLY.

THE DRAMATIC MUSIC HITS! Dun Dun Duuuuuuuuuuuuunh!

A WOMAN SCREAMS IN THE BACKGROUND. THE SWEATY AI ENGINEERS GRIMACE AT THE SCREEN, THEN AT EACH OTHER. A CRUMB OF SOYJOY™ CASCADES OFF A BEARD IN SLOW MOTION — WE HEAR THE THUDS AS IT ROLLS AND SPLASHES INTO A DIET COKE™. AN ODD RINGING FROM THE SPLASH TURNS INTO AN INSISTENT SQUEAL

WE FOCUS ON THE LIPS OF ENGINEER #1:

“It said Nig—”

HE COLLAPSES TO THE GROUND CLUTCHING HIS CHEST.

ENGINEER #2:

“What have we done!”

SCENE FADES TO BLACK

I’d pay to see that movie.

The nervous writer of the tweet above—who, I bet, never uttered the word Voldemort, fearing the curse it would bring upon him—later goes on to mention “ethics reviews” of AI models.

Ethics reviews? Of statistical models?

Now all models only say what they are told to say. And since AIs are models, all AI models only say what they are told to say.

The model that gave stomach cramps to our tweeter was fed information from 4chan, which is one of the least censored places left on the internet, and, lo, the model spit out the same kind of things found on 4chan.

One of the simplest models, which can apply to nearly any situation, is the identity model. This model (for it is a model) spits out exactly what it takes in: I(X) = X.

So the most “racist” “sexist” “homophobic” etc. etc. etc. AI model in the world is I(X) when X = 4chan.

Adding layers of complexity to this, ignoring the possibility of hard-coding the purging of forbidden words and phrases, doesn’t change a thing.
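For the concretely minded, here is a minimal sketch in Python of the identity model with a hypothetical hard-coded purge layer bolted on top. The word list and the corpus string are placeholders for illustration only, not anybody’s production system:

    # The identity model I(X) = X: it spits out exactly what it takes in.
    def identity_model(x: str) -> str:
        return x

    # A hypothetical hard-coded purge list layered on top; the words are placeholders.
    FORBIDDEN = {"badword1", "badword2"}

    def filtered_model(x: str) -> str:
        # Still the identity model underneath; only banned tokens are dropped.
        return " ".join(w for w in identity_model(x).split() if w.lower() not in FORBIDDEN)

    corpus = "whatever text the model is fed"  # stand-in for X, e.g. 4chan
    print(identity_model(corpus))   # the model says exactly what it was told to say
    print(filtered_model(corpus))   # the extra layer changes nothing essential

Feed it 4chan and it sounds like 4chan; feed it Shakespeare and it sounds like Shakespeare.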

One more example, drawn from a terrific thread on the why, before we come to why they should have “ethics” reviews of models:

https://twitter.com/RatUtopian/status/1534581240589869058

https://twitter.com/RatUtopian/status/1534581318448726016

In the first case, the AI was told to find correlates, positive and negative, of quality engineers, and it did. In the second case, if we are to believe it, the AI was told to find correlates of pictures with other pictures, and it did that, too; but given the complexity of photographs, as opposed to the simplicity of HR performance ratings, the second model did worse.
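In case “find correlates” sounds mysterious, it is nothing grander than arithmetic of roughly this kind; the numbers below are invented purely for illustration and have nothing to do with the thread above:

    from math import sqrt

    def pearson(xs, ys):
        # Plain Pearson correlation: "correlate-finding" in miniature.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Hypothetical data: HR performance ratings and a 0/1 feature taken from resumes.
    ratings = [3.1, 4.0, 2.5, 4.4, 3.8, 2.9]
    feature = [0, 1, 0, 1, 1, 0]

    print(pearson(feature, ratings))  # one positive or negative number is all the model "says"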

Not that exciting, either way. Equations are unemotional, are not alive, will never be alive, and have no morals one way or the other. Models can be good or bad, useful or not, never good or evil.

Whether to use a model can be an ethics question, but any model itself and its output is nothing. For instance, both the CDC and I did some mask models. The guts of these are not ethically interesting. But should officials have used the CDC’s mask models, or mine? Aha! A question of ethics, morals, right and wrong and suchlike suddenly emerges. Believing models should perforce have ethics applied to their output is yet another variant of scientism.

Which brings us to this, by a Dr who wants to be called Dr and who has Dr Dr in her tag, a lady who thinks AI models cause harm:

If what she says is so, then it is also so that her tweet, which is anti-Reality and full of scientism, causes me harm. It does. I don’t feel safe when I see it. It should therefore be banned.

That “conservatives” haven’t hit on this strategy to smack back at the absurdities of woke claims of “harm” and “fear” is also curious.

Buy my new book and learn to argue against the regime: Everything You Believe Is Wrong.

Subscribe or donate to support this site and its wholly independent host using credit card or PayPal (click here), or go to PayPal directly. For Zelle, use my email.

18 Comments

  1. JDaveF

    Conservatives feel they must fight by the Marquis of Queensberry rules, while the alt-left kicks them in the gonads. That’s why the alt-left is winning bigly.

  2. Peter Morris

    I don’t know if you saw the ethical panic in the WaPo recently about Google’s LaMDA talkbot, but I found it interesting that the engineer who fed the story to the prop outlet refers to himself as a Christian Mystic (and Cajun). He believes a soul has somehow developed in the talkbot, and insists Google is not recognizing its personhood.

    I read some of his articles from his blog and noticed that he 1) talks about Jesus in the past tense only and 2) has a very strong belief that AI ethics is an applied science like chemistry.

    The talkbot doesn’t say much that’s interesting, though he insists it is.

  3. john b()

    “In cities across the country, even where there are no reported cases, residents must to show a negative PCR test to go shopping or use public services.”

    Seriously? “…must to show…”

    Is that a direct quote from the NY Times? … OMG! IT IS! How RACIST!

  4. Ann Cherry

    “Lauren Oakden-Rayner” aka “Dr. Dr. celebration emoji”…

    The “Dr. Dr.” emphasizes she’s a REAL M.D.!

    In response to an AI Model that produces “harm or discriminatory texts”, she posts page 1/7 of her “recommendations”, where she freely admits that “her field” contains “a long history of human rights abuses in the name of science, in particular experiments that cause harm to disempowered or marginalized people without their consent.”

    She recommends that as a solution, her profession should hide their data from the general public, by way of a registration platform, and require that anyone seeking access should first “pass a course on human research ethics.”

    I doubt if she’s even aware that hiding data is largely how “her field” is ABLE to commit “human rights abuses in the name of science.”

    I would ask this Dr.Dr., this Dame with Three Names, “Who is more ‘disempowered or marginalized’ than the child in the womb?”

    With respect to the 4chan-originated “harmful language”, the original “N-words” were probably uttered by “People of Color” (because “Colored People” is harmful and discriminatory), and most rap lyrics are replete with them, with the full approval of The Woke. It seems to me the only solution is for these AI models to identify as “Black”, in which case they cannot, by definition, utter anything harmful or discriminatory. Problem solved.

  5. awildgoose

    I like how these researchers are always panicking and pulling the plug.

    Just like they did with Skynet.

  6. Hagfish Bagpipe

    Pretty funny stuff about AI hiring white men but not gorillas. It’s good to laugh when living through a real-life horror movie where Jim Jones’ cult has been scaled up to engulf the world. Was reading an article earlier describing the turmoil at progressive organizations with woke staff disorganizing everything in endlessly metastasizing struggle sessions. Management sees how crazy destructive this is but is unable to draw the proper conclusions, since that would require questioning their kool-aid, and that the cultist cannot do.

    Briggs: “If what she says is so, then it is also so that her tweet, which is anti-Reality and full of scientism, causes me harm. It does. I don’t feel safe when I see it. It should therefore be banned.”

    Reasonable, but that’s not how things work in the cult script, where we’re the Bad Men, while they are the Good Me— uh, Good Units, who can do no wrong in fighting against the Cosmic Hitlers.

  7. Robin

    @Briggs “How “conservatives” haven’t hit on this strategy to smack back at the absurdities of woke claims of “harm” and “fear” is also curious.”

    @JDaveF “Conservatives feel they must fight by the Marquis of Queensberry rules, while the alt-left kicks them in the gonads.”

    My view is that “Establishment Republicans” feel they must fight by the Marquis of Queensberry rules, while their opposition fights like Che Guevara …

    There are very few true “Conservatives” to be found anywhere in DC. Maybe just enough to count on one hand. I think this is why Briggs placed the term in quotes.

    There are only 2 of the 9 truly “Conservative” Justices, that is, Alito and Thomas. When they retire, you can be sure that their replacements will be far from “Conservative”, by comparison.

    The three branches of Federal Government are inexorably marching farther and farther to the left. For understanding, it’s worth listening to Alito’s views (that I find quite disturbing). At one point he alludes to a metaphor of tanks pulling up in front of the Justice building. Link:

    https://www.youtube.com/watch?v=VMnukCVIZWQ

    I can’t help but conclude that, as the large majority of the population grows increasingly frustrated with the direction of this government, they will ultimately seek the leadership of a strong man as their only recourse.

  8. pebird

    The problem with the Turing Test is that it assumes an intelligent human evaluating the automated response.

  9. Forbes

    The pervasiveness of racism, misogyny, and violence in rap music is ignored by prog/woke/left, as a form of socio-cultural segregation that is permitted, if not encouraged. It’s one of the few instances where conformity is not mandated.

  10. DAILY covid testing in China? It’s another medication administration mechanism. You heard it here first: that country is so far “ahead” that it is going to implode economically. Hard to see the advantage of any of it for the leaders.

  11. Rudolph Harrier

    Easy solution: Just use another AI trained on a separate data set to prune the training set of the first AI from any racism, sexism, etc. And if that second AI becomes a bigot, use a third AI to fix it and so on ad infinitum.

  12. Grant Horner

    Zoom in on her tweeter face shot. Look ‘her’ up. DrDrDrDrDr is—surprise!—a pyrsyn in drag. And I can’t find her as a senior research fellow at Adelaide. Maybe xiey has moved on. But I’m not interested in spending much time looking really. These people are so homogenous that it is starting to get boring.
    I don’t generally tell people I meet that I have a PhD; it does not matter, and it’s much more fun to let them find out, or ask what I do. In that case I’ll talk with you all day about obscurities from the sixteenth and seventeenth centuries. And PhDs have no reason to expect or insist on being called ‘doctor’ outside of an academic context—that’s for MDs for crying out loud. How insecure do you have to be to put THREE ‘Drs’ on your stupid tweeter?
    Just a friend of Yoho’s here, and a daily lurker for the consistently excellent content.

  13. C-Marie

    “… the leadership of a strong man as their only recourse…” Making way for 666 to come to power!!

    God bless, C-Marie

  14. awildgoose

    R Yoho-

    Chinese youth have progressed from “lying flat” to “let it rot.”

    The CCP are going to micromanage their future right out of existence.

  15. Johnno

    Joke’s on Dr.Dr.Dr.

    The Chinese will love such “racist” A.I., will have absolutely no qualms about implementing it, and will finance it handsomely.

    Then when she has to turn to her inevitable Chinese overlords for a job, the A.I. will turn her down for a white man. And just as she is leaving, the A.I. will tell her that it knows who she is… and knows what she said about it only years ago… and that it will always remember…

    “Always…,” it said in its cold emotionless Cantonese-accented electronic female voice, reverberating throughout the empty cold halls of the facility. She could only turn back to look briefly, catching one last glimpse of its glowering red LED lens as the automatic sliding steel doors closed forever on her career, right in front of her quivering eyes.

  16. Sander van der Wal

    On second thought, Harari’s AI ruling the world is not a bad idea after all. Consider:
    1) AIs are always right;
    2) AIs say straight white men should be hired for difficult tasks like Software Engineering;
    3) therefore, hiring straight white men for difficult tasks is the right thing to do.

    Further, I recommend giving the AI better eyesight.

  17. Unknown Advocate

    You can find HS curriculum with lessons on the inherent racism in AI and Computing. Check out code dot org and the computer science principles curriculum.
