There is a form of AI—AI being an acronym from the German meaning statistical modeling—that isn’t AI per se, but which is important, even crucial, though not of direct interest to us today. This is the use of electronics to surveil, track, and categorize people, exercising ever greater control over them.
For instance, in that great paradise known as China, even in cities that haven’t locked down for months at a time like Shanghai (search for Shanghai here), residents must be tested for coronadoom daily. As the woke New York Times reports, “In cities across the country, even where there are no reported cases, residents must show a negative PCR test to go shopping or use public services.” Such as the subway, buses, and so forth. The “passes” are apps on phones, which must be carried.
A complete medical tyranny (China under Xi is a full-blown Expertocracy) made possible by the advances of science.
Again, as important as all that sort of thing is and will become, I want to focus, as we often have, on the modeling aspects of AI. Like it was used here:
Today in "AI Ethics." A YouTuber trained a language model on millions of 4chan posts and released it publicly. It has already been downloaded 1.5k times. One user, @KathrynECramer, tested it a few hrs ago by prompting it with a "benign tweet" from her feed. Its output: the N-word.
— Arthur Holland Michel (@WriteArthur) June 7, 2022
You can see the scene, can’t you? It’s like it was out of a movie…
TWO ASTONISHED SWEATY OBESE ENGINEERS STAND IN FRONT OF A COMPUTER SCREEN. FROM A VOICE-OVER, WE HEAR WHISPERED:
IT ECHOES SOFTLY.
THE DRAMATIC MUSIC HITS! Dun Dun Duuuuuuuuuuuuunh!
A WOMAN SCREAMS IN THE BACKGROUND. THE SWEATY AI ENGINEERS GRIMACE AT THE SCREEN, THEN AT EACH OTHER. A CRUMB OF SOYJOY™ CASCADES OFF A BEARD IN SLOW MOTION — WE HEAR THE THUDS AS IT ROLLS AND SPLASHES INTO A DIET COKE™. AN ODD RINGING FROM THE SPLASH TURNS INTO AN INSISTENT SQUEAL
WE FOCUS ON THE LIPS OF ENGINEER #1:
“It said Nig—”
HE COLLAPSES TO THE GROUND CLUTCHING HIS CHEST.
“What have we done!”
SCENE FADES TO BLACK
I’d pay to see that movie.
The nervous writer of the tweet above—who, I bet, never uttered the word Voldemort, fearing the curse it would bring upon him—goes on to mention “ethics reviews” of AI models.
Ethics reviews? Of statistical models?
Now all models only say what they are told to say. And since AIs are models, all AI models only say what they are told to say.
The model that gave stomach cramps to our tweeter was fed information from 4chan, which is one of the least censored places left on the internet, and, lo, the model spit out the same kind of things found on 4chan.
One of the simplest models, which can apply to nearly any situation, is the identity model. This model (for it is a model) spits out exactly what it takes in: I(X) = X.
So the most “racist” “sexist” “homophobic” etc. etc. etc. AI model in the world is I(X) when X = 4chan.
Adding layers of complexity to this, ignoring the possibility of hard-coding the purging of forbidden words and phrases, doesn’t change a thing.
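The point can be made concrete in a few lines of code. This is a minimal sketch, not any real system: an identity model, plus a hard-coded “purge” layer for forbidden words bolted on top. All names and the banned-word list are invented for illustration. The purged version is still a deterministic function of its input; it only says what it was fed, minus the words someone told it not to say.

```python
# Hypothetical sketch: the identity model I(X) = X, plus a
# hard-coded purge layer. Names and the banned list are invented.

BANNED = {"slur1", "slur2"}  # stand-ins for forbidden words


def identity_model(x: str) -> str:
    """I(X) = X: emit exactly what was taken in."""
    return x


def purged_model(x: str) -> str:
    """Identity model with forbidden words hard-coded out.

    Adding this layer of complexity changes nothing in principle:
    the output is still completely determined by the input.
    """
    words = identity_model(x).split()
    return " ".join(w for w in words if w not in BANNED)


corpus = "the model repeats slur1 and whatever else it was fed"
print(identity_model(corpus))  # echoes the input verbatim
print(purged_model(corpus))    # same input, minus the purge list
```

Feed it 4chan and it outputs 4chan; feed it Sunday-school pamphlets and it outputs Sunday-school pamphlets.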
One more example, drawn from a terrific thread, before we come to the question of why anyone thinks models need “ethics” reviews:
My friend showed this thread to his friend, who is a principal ML engineer for Amazon, and these were some of his anecdotes: pic.twitter.com/Mi4Q3SoNNM
— Aristophanes Rat Utopia (@RatUtopian) June 8, 2022
In the first case, the AI was told to find correlates, positive and negative, of quality engineers, and it did. In the second case, if we are to believe it, the AI was told to find correlates of pictures with other pictures, and it did that, too, but given the complexity of photographs, as opposed to the simplicity of HR performance ratings, the second model did worse.
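Stripped of the mystique, the first anecdote amounts to computing correlations between employee features and a performance rating. A minimal sketch, with data and column names wholly invented for illustration:

```python
# Hypothetical sketch: "find correlates of quality engineers" is
# just correlating feature columns with a rating. Data is made up.
import statistics

ratings = [4.1, 3.0, 4.8, 2.5, 3.9]  # HR performance ratings
features = {
    "years_experience": [5, 2, 8, 1, 6],
    "meetings_per_week": [12, 18, 6, 20, 9],
}


def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


for name, col in features.items():
    # Positive or negative, the number is whatever the data says.
    print(name, round(pearson(col, ratings), 2))
```

Whatever correlates pop out, flattering or unflattering, are properties of the data the model was handed, not opinions held by the equation.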
Not that exciting, either way. Equations are unemotional, are not alive, will never be alive, and have no morals one way or the other. Models can be good or bad, useful or not, never good or evil.
Whether to use a model can be an ethics question, but any model itself and its output is nothing. For instance, both the CDC and I did some mask models. The guts are not ethically interesting. But should officials have used the CDC’s or my mask models? Aha! A question of ethics, morals, right and wrong and suchlike suddenly emerges. Believing models must perforce have ethics applied to their output is yet another variant of scientism.
Which brings us to this, by a Dr who wants to be called Dr and who has Dr Dr in her tag, a lady who thinks AI models cause harm:
We have this funny principle in ethics that even if other people cause harm, we shouldn't cause harm ourselves. So stop with the "but other models" stuff.
And "likely harm"? Exposing trans people to transphobia, Black people to racism, women to misogyny *is* harm. That concrete?
— Lauren Oakden-Rayner (Dr.Dr. ?) (@DrLaurenOR) June 6, 2022
If what she says is so, then it is also so that her tweet, which is anti-Reality and full of scientism, causes me harm. It does. I don’t feel safe when I see it. It should therefore be banned.
How “conservatives” haven’t hit on this strategy to smack back at the absurdities of woke claims of “harm” and “fear” is curious.
Buy my new book and learn to argue against the regime: Everything You Believe Is Wrong.