Researchers Forced To Teach Algorithms To Reject Hate Facts

[Header image: Beau Bassin Prison, interior of a cell block, Beau Bassin, Mauritius, 1950-1959. The National Archives, Colonial Office photographic collection, CO 1069/753.]

As we learned before, hate facts “are true statements about reality that our elites demand remain occult and unuttered.”

The problem is that hate facts will routinely pop up in statistical (a.k.a. machine learning, a.k.a. artificial intelligence) algorithms, and when they do the algorithms are said to be “biased”. The paradigmatic example is algorithms which estimate the chance of persons paying back loans. Race was found to be highly informative in these algorithms, but race is also unwelcome, so modelers were forbidden to use it.

The blame for the “bias” is put on the algorithm itself, but, of course, the algorithm is not alive, not aware, and so does not know the numbers it manipulates are anything but numbers. The meaning of the numbers is found only in our eyes.

Which brings us to the Nature article “Bias detectives: the researchers striving to make algorithms fair: As machine learning infiltrates society, scientists are trying to help ward off injustice.”

It begins with a sob story, as is, we guess, mandatory in pieces like this.

In 2015, a worried father asked Rhema Vaithianathan a question that still weighs on her mind. A small crowd had gathered in a basement room in Pittsburgh, Pennsylvania, to hear her explain how software might tackle child abuse…the system does not catch all cases of abuse. Vaithianathan and her colleagues had just won a half-million-dollar contract to build an algorithm to help…

After Vaithianathan invited questions from her audience, the father stood up to speak. He had struggled with drug addiction, he said, and social workers had removed a child from his home in the past. But he had been clean for some time. With a computer assessing his records, would the effort he’d made to turn his life around count for nothing? In other words: would algorithms judge him unfairly?

In other words, this father guessed the algorithm might use the indicator “past druggie”, and use it to up the chances he’d abuse a kid. Which certainly sounds reasonable. Druggies are not known to be as reliable with kids as non-druggies, on average. You, dear reader, would use the same information were you, for instance, choosing a babysitter.

However, past drug use is a hate fact in the eyes of the Nature author. How to ensure it’s not used?

I changed the colors from “blue” and “purple” to the more accurate, but hate fact, “white” and “black” in the following passage:

Researchers studying bias in algorithms say there are many ways of defining fairness, which are sometimes contradictory.

Imagine that an algorithm for use in the criminal-justice system assigns scores to two groups ([white] and [black]) for their risk of being rearrested. Historical data indicate that the [black] group has a higher rate of arrest, so the model would classify more people in the [black] group as high risk (see figure, top). This could occur even if the model’s developers try to avoid bias by not directly telling their model whether a person is [white] or [black]. That is because other data used as training inputs might correlate with being [white] or [black].

The horror.
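To see what the quoted passage is describing, here is a minimal, hypothetical sketch. Nothing in it comes from the Nature article; the data are synthetic and every name and number is invented for illustration. A model is fit with the group label withheld, yet its risk scores still differ by group, because a correlated proxy feature carries the same information.

# Minimal sketch (illustrative assumptions only): the group label is never an
# input, but a correlated proxy reproduces the group difference in scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)  # 0 or 1; the "blue"/"purple" label, withheld from the model
# A proxy feature (say, prior arrests) correlates with group; rearrest depends on the proxy.
prior_arrests = rng.poisson(lam=np.where(group == 1, 2.0, 1.0))
p_rearrest = 1 / (1 + np.exp(-(0.8 * prior_arrests - 1.5)))
rearrest = rng.binomial(1, p_rearrest)

# Train on the proxy alone -- the group column is never given to the model.
model = LogisticRegression().fit(prior_arrests.reshape(-1, 1), rearrest)
risk = model.predict_proba(prior_arrests.reshape(-1, 1))[:, 1]

print("mean predicted risk, group 0:", risk[group == 0].mean())
print("mean predicted risk, group 1:", risk[group == 1].mean())
# The scores differ by group even though "group" was never an input,
# because the proxy carries the same information.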

Knowing a person’s race is useful information in predicting recidivism. Note, again, the algorithm does not, and is incapable of, saying why race is useful information. It is entirely neutral, and cannot be made non-neutral. It cannot be biased, it cannot be unbiased. It cannot be equitable, and it cannot be inequitable. The interpretation, I insist, is in the eyes of the users.

“A high-risk status cannot perfectly predict rearrest, but the algorithm’s developers try to make the prediction equitable”. What in the world can that possibly mean? Since the algorithm cannot be equitable or biased, it must be that the modelers insist the model does not make use of hate facts, or create them.

Now the author prates on about false positives and negatives, which are, of course, undesirable. But the better a model gets, in the sense of accuracy, the fewer false positives and negatives there will be. If the model is hamstrung by denying hate facts as input, or butchered because it produced hate facts, then model inaccuracy must necessarily increase.
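As an illustration of that trade-off, here is another hypothetical sketch on synthetic data (again, nothing here comes from the article, and all names are invented): the same kind of model is fit with and without the informative input, and the censored version produces more false positives and false negatives.

# Minimal sketch (illustrative assumptions only): withhold the informative
# feature and watch the false positives and false negatives climb.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
n = 20_000
informative = rng.normal(size=n)  # stands in for the forbidden "hate fact" input
noise = rng.normal(size=n)        # an allowed but uninformative input
y = rng.binomial(1, 1 / (1 + np.exp(-2.0 * informative)))

X_full = np.column_stack([informative, noise])
X_censored = noise.reshape(-1, 1)  # informative column removed

for name, X in [("full model", X_full), ("censored model", X_censored)]:
    pred = LogisticRegression().fit(X, y).predict(X)
    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    print(f"{name}: false positives={fp}, false negatives={fn}")
# The censored model's errors jump, because the information needed to
# separate the classes has been thrown away.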

What makes the whole thing laughable is that algorithm builders are being denied even access to hate facts, so they can’t check whether their models will be judged as “biased.” For instance, race cannot be input, or even viewed, except by authorities who are free to use race to see whether the models’ outputs correlate with it. If they do, it’s “biased.”

The best way to test whether an algorithm is biased along certain lines — for example, whether it favours one ethnicity over another — requires knowing the relevant attributes about the people who go into the system. But the [Europe’s General Data Protection Regulation]’s restrictions on the use of such sensitive data are so severe and the penalties so high, Mittelstadt says, that companies in a position to evaluate algorithms might have little incentive to handle the information. “It seems like that will be a limitation on our ability to assess fairness,” he says.

Diversity is our weakness.

15 Comments

  1. Wilbur Hassenfus

    The right way to handle this is for the algorithm to infer the predictee’s race and then apply a weighting factor.

    I suspect that in effect, that’s what they’ll end up with. “When in doubt, use brute force”. The comments and variable names will call it something else, of course. You won’t see any mention of race.

    Once you get that working, you can use it to “prove” that certain races are disproportionately arrested, because the rate will exceed that predicted by a “neutral algorithm”.

    Good times!

  2. Gary

    Diversity is our weakness.

    Naw, it’s stupidity that’s our weakness.

  3. DAV

    how software might tackle child abuse

    Putting too much faith in silk panties IMO.
    Burlap undies might work better.

  4. Sander van der Wal

    They should teach the A.I. to do virtue signalling. Which is knowing the facts but lying about them.

    After all, an A.I. that doesn’t do virtue signalling will not pass the Turing Test.

  5. Ken

    I think we can mostly agree that “race” is not a genuine factor predisposing one to criminality or a variety of other negative traits. We can also agree that generations of prejudice manifesting in job & education discrimination affecting very definable groups have resulted in those groups having predispositions to criminal and other adverse social behaviors (put another way, subject any group, any race or mix of races but still a defined group by some measure, to such conditions for a few generations and their progeny will behave similarly from the resulting circumstances that afford fewer opportunities).

    The predictable and observable result–reality–is a social state of affairs where race, in many geographical areas, does correspond to measurable types of behaviors, many adverse, distinct from other definable groups. Race is a valid, if unpalatable, proxy measure. So is “redneck” and “white trash” in many areas but those groups don’t take such great offense.

    What is a bit troubling is that so many of the population either cannot, or refuse to, accept this to the point of overgeneralizing to assert that if any negative behavioral trend is observed within a population it must be ignored … unless… one can frame the issue in race-neutral terms. Often enough this is not practical, or even possible. Thus, this kinder, gentler approach that avoids confronting an obvious proxy measure enables an outside group(s) without the problem to profess sensitivity to those with the problem(s) to enable everyone concerned to pretend a real problem(s) don’t exist within the given group.

    Isn’t that racist?

    Because, by denying that a problem of some sort exists within a definable population … the act of denial ensures that problem persists within that group — recognition/acceptance is the first step to taking remedial action. Denying there is a problem is a brilliant tactic, manipulable by elected leadership and their armies of compassion* by enforcing/endorsing policies & cultural norms that ensure a given population’s problem(s) are nurtured, not cured, thereby ensuring the given population remains needy.

    A population base of people with festering persistent problems … in constant need of the govt dole — where would the Democratic Party be without that!

    * ‘the soft bigotry of low expectations’ was the way the Bush Administration put it …

  6. Wilbur Hassenfus

    @Ken

    The Chinese on the west coast were mistreated as well, much worse than is now commonly known. There were no multigenerational effects.

    Also, twin studies.

    The point of denying the problem is that, in some people’s eyes, there isn’t a problem. That population overwhelmingly (that means “yes, I’ve met some exceptions too”) likes being just the way they are. The concerns they do have are that they’d like to do it with more money in their pockets, and they’d like not to have cops interfering with their lifestyle. They’d like somebody to come pick up the trash that unfairly appears on their streets but not on ours. They’d like equal access to the way our homes unfairly fail to deteriorate over time.

    They don’t wish they could be just like you and me. The last thing they want is some “helpful” outsider trying to fix them.

  7. Uncle Mike

    “Race” is not a “fact”, but instead a Dark Age myth codified by Victorian “scientists”, who also codified (as science) mesmerism, phrenology, craniometry, occultism, wilderness-ism, Marxism, climatology, and a grab bag of other pseudo-scientific garbage.

    The Authorities of the Ziggurat cling to these myths to perpetuate their High Priestly powers over the Peasant/Slave Class, but you already knew that.

    Humanity cannot be color-coded — not with REAL science at any rate. Hitler’s social demographers believed there are 120+ races. Were they wrong? Do you have a better number? And what about all of us mongrels?

  8. McChuck

    If you believe that race is a myth, do you also believe there is no discernible difference between a Dalmatian and a Dachshund? Between a Persian and a Siamese? If there is no difference between a Bantu and a Swede, then there can be no difference between a Great Dane and a Chihuahua.

    Certain breeds of dog are prized for their intelligence, even temper, or aggressiveness. This proves that certain behaviors, while somewhat trainable, are also innate.

    The apple doesn’t fall far from the tree.

  9. Uncle Mike

    There are not “races” of canines! There are breeds:

    There are 340 recognized dog breeds in the world as of 2014, with 167 recognized in the United States. AKC-recognized breeds belong to 10 categorized groups based on the dog’s function, purpose, size or appearance.

    Race is not a scientific category. No animal species (other than humans) exhibits “race”. “Race” is not found in zoology.

    Only racists believe in the bogus Theory of Race. Only racists think humans have “innate behaviors” characteristic of their color code.

    Is Bantu a race? Is Swedish? Since when? Your Theory, Mr. McChuck, is defective thinking. I don’t blame your “race” for that — it is entirely your own fault.

  10. Wilbur Hassenfus

    @Uncle Mike

    Sequence a Swede’s genome and a Bantu’s. They’re trivially distinguishable. They differ, substantially, in ways that have significant effects on the live human being. Call it a “race”, a “breed”, a “clade”, a “distinct population” — the word is just a word. Reality is what it is, and you can measure it. The Swede and the Bantu differ roughly as much as a wolf and a coyote. That’s a big difference. Wolves and coyotes are perfectly interfertile, by the way. But the two populations have diverged significantly since the time of their most recent common ancestor.

    This is a complicated subject, but reasonable and factually based generalizations can be made. “Race” is one of them. It’s predictive. “Predictive” is big. Really big. It matters a lot.

  11. Uncle Mike

    @Wilbur

    Scientifically, you are wrong. There is more genetic diversity (haplogroup alleles) in a single Bantu village than there is in the entire nation of Sweden.

    There is no “white gene”. There is nothing “predictive” about being Swedish. They are not stupider, uglier, or more prone to idiocy, criminality, cannibalism, laziness or anything else as compared to Bantus. Being part Swedish myself, I utterly resent your foul and retarded prejudices.

    Get with science, Mr. Bigot. That is, unless you are too “Swedish” to comprehend haplogroups. You Nazi racists blow my mind. Haven’t you murdered enough (millions of) people already?

  12. Uncle Mike

    @bigots

    Call it a “race”, a “breed”, a “clade”, a “distinct population” — the word is just a word. Reality is what it is, and you can measure it.

    A word is just a word, but you can measure it? With what metrics? What is your unit of measurement?

    Exactly what are your personal clade stats? How do you measure up? Please show us all your ancestry pie chart — are you a mongrel or an inbred?

    How quick you are to damn the so-called defective so-called clades. Maybe you are what you hate, Adolf.

  13. Rabbi High Comma

    The gatekeeping is hot and heavy and starts almost immediately.

    Ohhhh….and “Uncle Mike” drops the H-bomb. A severe unforced error. I do love seeing the lengths that the parishioners of the Church of Globohomo will go to in order to claim an 80 year old Chinese woman is exactly the same as a 2 year old Khoisan boy. Of course exactly one day before birth we are “clumps of tissue” which can be discarded while tweeting about being oppressed by the patriarchy. 24 hours later you’re old, young, Latinx, trans-anything. But not White. Whites are evil and need to be genocided for everyone else’s good. White is also a “social construct” – but only when Whites are defending themselves. If you’re attacking Whites, it is totally a race.
