If all we know about an American male is that he is an American male, we can use data on the observed rates at which American males committed crimes to predict that this new American male will commit a crime.
This is not controversial.
Now suppose we have two American men, one black, one white. We can again use data on observed rates of crimes to predict these two men will commit crimes. If we divide the black man’s probability (conditional on this evidence and assumptions) by the white man’s, we come to a number of about 10.
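The arithmetic here is nothing more than a ratio of two conditional probabilities. A minimal sketch, using purely hypothetical base rates (the text quotes only the ratio of about 10, not the underlying rates):

```python
def risk_ratio(p_group_a: float, p_group_b: float) -> float:
    """Ratio of two conditional probabilities of committing a crime.

    Both arguments are P(crime | group membership); the rates used
    below are illustrative placeholders, not real statistics.
    """
    return p_group_a / p_group_b

# Hypothetical: if one group's conditional probability were 0.05
# and the other's 0.005, the ratio would be 10.
print(risk_ratio(0.05, 0.005))
```

Everything the post calls an “AI system” here is this one division, dressed in more marketable language.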
This is controversial.
We have just built an AI system to predict crime rates. This is AI, even though it’s very simple. AI, you will recall, is nothing but probability modeling, but given a much more marketable name.
Under the theory of Equality, all men are equal, and when they are not, it is because some cruelty, such as racism, was imposed on the lesser man. It is never the case that the lesser man is responsible for his own failures or shortcomings.
Our AI has revealed an inequality, and thus under Equality, our algorithm is racist. If you find this asinine, you are not an expert.
Experts—that noble breed which rules over us—say that this is so. AI must stop its systemic racism! Yes. Not only that, this: “AI experts say research into algorithms that claim to predict criminality must end”.
A coalition of AI researchers, data scientists, and sociologists has called on the academic world to stop publishing studies that claim to predict an individual’s criminality using algorithms trained on data like facial scans and criminal statistics.
Such work is not only scientifically illiterate, says the Coalition for Critical Technology, but perpetuates a cycle of prejudice against Black people and people of color. Numerous studies show the justice system treats these groups more harshly than white people, so any software trained on this data simply amplifies and entrenches societal bias and racism.
ALGORITHMS TRAINED ON RACIST DATA PRODUCE RACIST RESULTS
“Let’s be clear: there is no way to develop a system that can predict or identify ‘criminality’ that is not racially biased — because the category of ‘criminality’ itself is racially biased,” writes the group. “Research of this nature — and its accompanying claims to accuracy — rest on the assumption that data regarding criminal arrest and conviction can serve as reliable, neutral indicators of underlying criminal activity. Yet these records are far from neutral.”
May I translate this for you, dear reader?
AI, i.e. probability crime models, is too accurate when using race. This scourge of accuracy is not science because accuracy is racist. And it must stop.
Springer is about to learn this lesson the hard way. Almost two thousand “experts” wet their pants and sent their soiled underwear to the publisher when the “experts” learned that Springer was about to publish the peer-reviewed and approved work “A Deep Neural Network Model to Predict Criminality Using Image Processing”.
Since it’s often easy for us, and for probability AI models, to tell the difference between black and white, and other races, based on the face, the algorithms, and we, too, are racist.
The open letter is the whiniest thing you will see today—we can’t even say “this week”, given the frequency of similar events.
The letter does do the service of proving that science is now defined as the politically acceptable outcome. What is science today therefore will not be science tomorrow, because what is acceptable today won’t be acceptable tomorrow.
“Crime prediction technology is not simply a tool—it can never be divorced from the political context of its use.” “Power is here defined as the broader social, economic, and geopolitical conditions of any given technology’s development and use.” “We borrow verbiage from set theory here to illustrate the deep complexity of such contexts, and to illustrate the peril of attempting to discretize this space.”
My favorite: “One need not harbor any racial animus to exercise racism in this and so many other contexts…” You’re a racist if we say so. We know you’re a racist by looking in your eyes. As long as those eyes are embedded in a White face (do we now also capitalize White?).
This conclusion that you are a racist even when you are not is also the result of an AI algorithm, but in the approved direction. Therefore it is science.
It’s worth a few minutes scrolling through the list of signatories.
So long, Science! You had a good run. But you ran into the Current Year.