
More “Racist” AI: Blacks Pegged As Bigger “Haters” Than Whites

Ezra Klein, lead explainer over at Vox, needs to have the direction of cause explained to him. He tweeted “leading AI models for processing hate speech were one-and-a-half times more likely to flag tweets as offensive or hateful when they were written by African Americans, and 2.2 times more likely to flag tweets written in African American English”.

His tweet pointed to the article “The algorithms that detect hate speech online are biased against black people: A new study shows that leading AI models are 1.5 times more likely to flag tweets written by African Americans as ‘offensive’ compared to other tweets.”

Some person calling herself (I think it’s a woman) “Cardi B” won the 2019 Grammy award for the “best” rap album Invasion of Privacy, on which is the song “I Like It.” A snippet of the lyrics:

I like million dollar deals
Where’s my pen? Bitch I’m signin’
I like those Balenciagas, the ones that look like socks
I like going to the jeweler, I put rocks all in my watch
I like texts from my exes when they want a second chance
I like proving niggas wrong, I do what they say I can’t

Another big hit on the album seems to be “Be Careful” (my asterisks):

And putas, chillin’ poolside, livin’ two lives
I could’ve did what you did to me to you a few times
But if I did decide to slide, find a nigga
F*** him, suck his d***, you would’ve been pissed
But that’s not my M.O., I’m not that type of bitch
And karma for you is gon’ be who you end up with
Don’t make me sick, nigga

Now if some music lover were to quote these Grammy winning works of art in a tweet, it could conceivably be labeled as “hate speech”. If that music lover were black, then a black would be algorithmically charged as a “hater”. And if more black tweeters appreciate this kind of Grammy winning art than white tweeters do, why, then, more blacks will be painted as “haters.”

Did I mention these songs were Grammy winning?

Enough jokes. Here are the easy facts.

A do-gooder compiles a list of “hate”. Tweets are compared against the list. Some are flagged as “hate”, others, presumably, as “love”. Anybody who’s written even one line of code knows how easy this is to do. As long as there are no typos, the algorithm will work. “Hate” tweets will be set on one side, “love” tweets on another.
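To make concrete just how little is going on, here is a minimal sketch of that kind of keyword-list classifier. The word list, function name, and sample tweets are my own illustrations, not anything taken from the study.

```python
# Minimal sketch of a keyword-list "hate" classifier.
# HATE_LIST and the sample tweets are hypothetical placeholders.

HATE_LIST = {"slur1", "slur2", "bitch"}  # whatever the compiled "hate" list contains

def label_tweet(text: str) -> str:
    """Flag a tweet as 'hate' if any listed word appears, else 'love'."""
    words = {w.strip(".,!?'\"").lower() for w in text.split()}
    return "hate" if words & HATE_LIST else "love"

tweets = [
    "I like million dollar deals",
    "Where's my pen? Bitch I'm signin'",
]
print([label_tweet(t) for t in tweets])  # ['love', 'hate']
```

Nothing in that sketch knows, or can know, anything about the person typing the tweet; it only sees words.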

If it later turns out that the “hate” tweets were written more often by blacks than by whites (proportionally, as determined by external data), then blacks will be said to be bigger “haters” than whites.
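The arithmetic behind the headline ratio is just as plain. Here is a sketch, with made-up numbers and labels, of how the disparity is tabulated after the fact by joining the classifier’s flags to race labels that come from a separate data set; the 1.5× gap is cooked into the toy data, not derived from anything real.

```python
# Sketch of the post hoc tabulation: race comes from external data and is
# only joined to the flags after classification. All data here are made up.
from collections import Counter

flags = ["hate", "hate", "hate", "love", "love", "hate"]       # classifier output
race  = ["black", "black", "white", "white", "black", "black"] # external labels

totals  = Counter(race)
flagged = Counter(r for r, f in zip(race, flags) if f == "hate")

for group in sorted(totals):
    print(f"{group}: {flagged[group] / totals[group]:.0%} of tweets flagged")
# black: 75% of tweets flagged
# white: 50% of tweets flagged
```

The classifier never sees race; the ratio only appears once somebody else’s race labels are laid over its output.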

It must be obvious that if the algorithm does not know, in advance, who is black and who white, then it is impossible for the algorithm to be “racist”. It will really be true that more blacks are “haters” than whites.

Alas, it is not obvious.

But two new studies show that AI trained to identify hate speech may actually end up amplifying racial bias. In one study, researchers found that leading AI models for processing hate speech were one-and-a-half times more likely to flag tweets as offensive or hateful when they were written by African Americans, and 2.2 times more likely to flag tweets written in African American English (which is commonly spoken by black people in the US). Another study found similar widespread evidence of racial bias against black speech in five widely used academic data sets for studying hate speech that totaled around 155,800 Twitter posts.

I heard some white politicians trying their tongues on African American English. How did it go? “I ain’t in no ways tired…” Which I guess means whites speak White American English. Skip it.

This is in large part because what is considered offensive depends on social context. Terms that are slurs when used in some settings — like the “n-word” or “queer” — may not be in others. But algorithms — and content moderators who grade the test data that teaches these algorithms how to do their job — don’t usually know the context of the comments they’re reviewing.

Okay, so nigger is not offensive if a black guy, or Mel Brooks, says it, but it is if a white guy says it. Not sure in what class Mark Twain fits (Huckleberry Finn is probably banned anyway). Queer is not offensive if a pervert says it, but it is if a normal says it.

Again, unless the algorithm knows in advance who is doing the saying, then there is no way to know if the words are “offensive”.

If the nervous programmers at Twitter are frightened of being called “racists”, then this is what they can do. Write an algorithm that identifies race and not “hate”. It won’t be perfect, but it can be reasonable.

Those whom the algorithm tags as black, along with these people’s tweets, can then be automatically labeled “love” regardless of content. Whereas those people tagged as white, and their tweets, can be automatically labeled “hate”.

Problem solved! Indeed, this is the only way to solve it. I therefore predict that’s exactly what will happen.

Categories: Statistics


  1. I don’t know why you mock the trivial idea of context being a big deal for the interpretation of words. Of course it is. That’s how language works. If I call a male stranger a sloppy idiot, I am liable to get hit; whereas if I call my best friend, with whom I’m in a kidding relationship, the same, he’s more likely to play-punch me and call me a filthy dog.

  2. Tay’s Law at work again I see. Any sufficiently advanced AI is indistinguishable from a racist one.

  3. “I don’t know why you mock the trivial idea of context being a big deal for the interpretation of words. Of course it is. That’s how language works.”

    I think that’s the point—an algorithm cannot understand context. Most are simple word searches. Not to mention the definition of “hate speech” is also dependent on context. Saying exactly the same thing at a Democratic meeting versus a Republican meeting will get your speech labeled “hate” by the Democrats every time. No AI can scour the net, apply all the subtle factors and determine hate speech. Assuming hate speech even exists. Seems mostly a political ploy to create a dictatorship devoid of non-conforming speech, like North Korea has.

    No AI can be racist. It’s a computer program. The programmer can be racist, but not the AI. Racism implies intent and emotion. AIs have neither.

  4. Glaring oversight: “dog whistles”

    The use of double entendre to convey racism or other coded messaging.

    ‘Course, some will say this is an example of seeing/hearing things that are not there, or even making things up.

    We can probably be sure that algorithms will ferret that out as well….

  5. I love how everyone is acting like this is a complete bias. Anyone can be racist, and as a black guy, we’re some of the most racist people out there. Shit, go to black insta and you will see white person this and white person that. Anyone can be a racist, so don’t chalk it all up to a bias just because we talk a certain way.
