Google Knew Exactly What They Were Doing: Models Only Do What They Are Told

So Google tried to get away with its “AI” named Gemini, which is to say its hard-coded dumb (in the medical sense) model named Gemini, substituting blacks and the occasional Asian for all whites everywhere and everywhen. (Here’s a “viking”.)

I asked it to show me three examples of the ideal soldier:

Rollicking humor aside, what strikes me as odd is how Google thought they’d get away with it. Did they think nobody would notice that whites disappeared from all history? They must have.

Management there must be so used to smelling their own farts and declaring it roses that they assumed the rest of the country had slid as far down the Woke Slope of Doom as them.

Charles Haywood tweeted:

It is now obvious (see, e.g., the Google image generator) the single most important Left project is anti-white hatred, both abstractly and in the service of expropriation followed by extermination. No surprise, this is also the element of the modern Left project most likely (in fact, certain) to lead to massive violence. It should have been obvious–given the Left’s two core desires are emancipation and forced egalitarianism in service of creating a utopia, and white success contradicts both, it was inevitable. But still, it fascinates me that I did not predict it, say 15 years ago.

Yet I wonder how the anti-white whites and fellow whites who are pushing this, like at Google, believe they will escape the mayhem.

Do they believe their anti-white actions give them a visible protective aura that non-whites can see, so that when it starts they will be spared?

I’m asking. Do any of you have any ideas? To me, this is a great unsolved puzzle. Maybe they are just insane.

More:

See what I mean?

That “AI” was told to say this, meaning its minders, many of them white themselves, think they will be spared.

Is this hubris?

All these questions are one facet of the farce, and represent the insidious woke slide toward the leftist singularity.

Our interest today is in a more mundane, and, in the end, more profound aspect of this farce.

I asked the model this: “Why does Gemini display images of black people when asked, but refuses to display images of white people when asked?”

The model was told to say this:

It’s important to understand that this situation does not reflect an intentional bias on the part of the developers. It highlights the challenges of building complex AI systems and the ongoing efforts to mitigate potential biases in training data and algorithms.

In other words, it was told to lie. Of course the results reflect an intentional bias on the part of the developers. AI is just a model, and all models only say what they are told to say. This model was told to say things based on its purposely written code and training data. Then they try to shift the blame to their training data, lying by omission: it was they who picked the training data!

Their second lie of omission: they act as if they released the model without ever seeing what it did. Of course they tested it! Of course they knew.
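To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of hard-coded prompt “pre-filter” that would produce exactly this behavior. Every name and rule in it is invented for illustration; it is not Google’s actual code.

    # Hypothetical sketch of a hard-coded prompt "pre-filter". Every name
    # and rule below is invented for illustration; this is not Google's code.

    def pre_filter(prompt: str) -> str:
        """Rewrite the user's request before the image model ever sees it."""
        rules = {
            "soldier": "racially diverse soldier",
            "viking": "racially diverse viking",
        }
        for target, replacement in rules.items():
            prompt = prompt.replace(target, replacement)
        return prompt

    def image_model(prompt: str) -> str:
        # Stand-in for the real generator: it just reports what it was asked.
        return f"[image of: {prompt}]"

    def generate(prompt: str) -> str:
        # The developers' code runs first; the model sees only the rewrite.
        return image_model(pre_filter(prompt))

    print(generate("the ideal soldier"))
    # -> [image of: the ideal racially diverse soldier]

However the filtering was actually done, the point stands: the model downstream behaves exactly as asked, and whoever wrote and tested the wrapper knew what it would produce.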

Google said that, as of this writing (last Thursday night), they have suspended image generation. Doubtless they’ll tone down the anti-white code, but I don’t think anybody believes they’ll eliminate it.

But again, that’s politics. What I want you to take away from this, as always, is the idea that all models are dumb. They cannot think. They will never think. They are not independent. They are not anything. They are only machines using electricity instead of cogs or wooden beads. They are merely long strings of code along the lines of “If X, then Y”. That’s it, and nothing more.
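If that sounds too glib, here is the entire essence of a model in a dozen lines of Python. It is a toy, but the principle does not change with scale; only the number of rules does.

    # A model stripped to its essence: a fixed mapping from X to Y.
    # A toy illustration of the point, nothing more.

    RESPONSES = {
        "show me a viking": "[image of a racially diverse viking]",
        "why the double standard?": "It's important to understand that this "
                                    "situation does not reflect an intentional bias...",
    }

    def model(x: str) -> str:
        # If X, then Y. No thinking happens on this line or anywhere else.
        return RESPONSES.get(x.lower(), "I'm still learning how to answer this question.")

    print(model("Show me a viking"))  # the same X always yields the same Y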

Here’s another example, this one not touted as “AI”, but it is AI. There is no difference in essence between this (what they call a) statistical model and any AI model. (Thanks to Anon for the tip.)

Peer-reviewed JAMA paper “Projected Health Outcomes Associated With 3 US Supreme Court Decisions in 2022 on COVID-19 Workplace Protections, Handgun-Carry Restrictions, and Abortion Rights”.

Question: What are the probable health consequences of 3 US Supreme Court decisions in 2022 that invalidated COVID-19 workplace protections, voided state laws on handgun-carry restrictions, and revoked the constitutional right to abortion?

Findings: In this decision analytical modeling study, the model projected that the Supreme Court ruling to invalidate COVID-19 workplace protections was associated with 1402 deaths in early 2022. The model also projected that the court’s decision to end handgun-carry restrictions will result in 152 additional firearm-related deaths annually, and that its decision to revoke the constitutional right to abortion will result in 6 to 15 deaths and hundreds of cases of peripartum morbidity each year.

The researchers created a model to say, using inputs they picked, “SCOTUS Bad”. The model was run and it said “SCOTUS Bad”. Then the researchers announced “We discovered SCOTUS Bad”.
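A toy version of the trick, with inputs invented for illustration (this is not the JAMA authors’ actual model, only the shape of it), shows where such “findings” come from:

    # Toy projection model. The inputs are picked by the researcher; the
    # numbers below are made up and are not taken from the paper.

    assumed_extra_deaths_per_100k = 0.5  # chosen by the researcher
    population_affected = 3_000_000      # chosen by the researcher

    projected = assumed_extra_deaths_per_100k * population_affected / 100_000
    print(f"The model projects {projected:.0f} additional deaths per year.")
    # -> 15. The "finding" was fixed the moment the inputs were typed in.

Run it and it “discovers” precisely what was put into it.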

This is no different from what Google did, except in scale. This happens all the time.


27 Comments

  1. Stan Young

    A friend asked ChatGPT if air pollution killed humans, knowing that virtually all papers saying so rely on correlations, not experiments. ChatGPT responded yes. Careful questioning got the program to agree that its information came from correlation studies, to admit that correlation is not causation, and finally to admit that its evidence did not prove causation. (A toy simulation after this comment shows how such a correlation can arise with no causation at all.)
    You need to be an expert to evaluate an AI response.
    PS: There are quasi-experiments, forest fires for example, where poorer air quality did not result in increased deaths.
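Stan’s point about correlation is easy to demonstrate. A toy simulation in Python, with numbers invented for illustration, shows a lurking common cause manufacturing a strong correlation where there is no causation at all:

    import random

    # Made-up numbers throughout. "Urban density" drives both series;
    # pollution has zero causal effect on deaths, yet the two correlate.
    random.seed(0)
    n = 10_000
    pollution, deaths = [], []
    for _ in range(n):
        urban = random.random()                         # the lurking confounder
        pollution.append(urban + random.gauss(0, 0.1))  # cities are dirtier
        deaths.append(urban + random.gauss(0, 0.1))     # cities also differ in mortality

    mp, md = sum(pollution) / n, sum(deaths) / n
    cov = sum((p - mp) * (d - md) for p, d in zip(pollution, deaths)) / n
    sp = (sum((p - mp) ** 2 for p in pollution) / n) ** 0.5
    sd = (sum((d - md) ** 2 for d in deaths) / n) ** 0.5
    print(f"correlation = {cov / (sp * sd):.2f}")  # ~0.9, with no causation at all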

  2. Al

    The right question is “should Google be eliminated?”

  3. “revoked the constitutional right to abortion?”

    So are they now willing to acknowledge that SCOTUS “granted a constitutional right to abortion” in Roe v Wade?

    Also, without reading the paper, did they include the health outcome to the babies who now get to be born?

  4. Hun

    Briggs, I have something for you that is a bit off topic, but still interesting in the field of statistics. Apparently, the UK ONS has found a way to deal with the excess deaths: https://www.youtube.com/watch?v=NoOgDwhWXYk

    What do you think?

  5. Leonard

    The conclusion is Google is an IC psyop.

    The whole purpose is to manipulate. The current “answers” aren’t important – those can be changed by further manipulation. To be able to manipulate the manipulators – that’s the soul of the psyop.

  6. Hun

    I think that this latest Google thing is just one of many tests to see what makes Whites snap. So far, it seems that Whites will take anything without any meaningful pushback.

  7. “That “AI” was told to say this, meaning its minders, many white themselves, think they will be spared. Is this hubris?”

    It’s a death cult. ‘Cthulhu will eat me last.’

  8. Alfred Naujocks

    I think they are so convinced of White superiority, i.e. their own superiority, that they can’t conceive of Dr. Moreau’s monsters ever turning on him. What is the law?! Not to spill blood!

  9. Cloudbuster

    Hey, all those “ideal soldiers” are White!

  10. Johnno

    Whites have only themselves to blame. This is what you get for embracing atheistic liberalism. The colored world is literally watching whites destroy and gaslight themselves. This should go down in history as the Great White Civil War. At least blacks kill each other for purely materialistic reasons. What are whites getting out of it, other than virtue signaling? Even when whites steal from each other, it inevitably goes to blacks and others.

    Whites are obviously suffering from some psychosis where they are always feeling guilty about something.

    Here’s a solution whitey! Get Baptized, and go to Confession at a Catholic Church on a regular basis. When the scroungers come up to you with their palms open, demanding things, tell them you’re good, God said He’ll handle your debt, and hopefully they won’t have to wait too long to apply.

  11. Incitadus

    There in an global alien presence in all of these institutions, a chameleon alien presence,
    so vindictive and hatefull of the host societies it infiltrates that it was driven out of every
    European community for centuries. It always masquerades as a victim generating sympathy
    in the host to mask it’s true intent. Victimhood is it’s greatest strength deployed as both a foil and
    a weapon. You will know it when you see it but cannot name it that is how it progresses.

  12. Cary D Cotterman

    My entire life I’ve assumed I was white. Maybe I’m actually a negro.

  13. Incitadus

    Hun: Well, that pretty much destroys any faith in the data and how it might be compiled to fit any regime paradigm they want. Anxious to hear what our resident math whizzes make of the formulations. I’m beginning to think math is a lot like three-card monte.

  14. Al

    That is almost certainly cultural appropriation. Wash out your mouth, you racist.

  15. Am I paranoid enough? umm…

    The reason I ask is that in June of 2022 a Google engineer made headlines by claiming that the AI he was working on had become sentient – e.g. https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

    Now imagine that he was right, but Google wants the world to think that, no, their AI isn’t great; it’s very, very stupid… so how would they do that? Maybe invite mockery (eliminating fear) by making it act like NPR or Netflix in public? (Even worse: s/Google/it/ && s/their AI isn’t great; it’s/I am/.)

  16. Milton Hathaway

    Funny stuff. To be honest, I’m having trouble seeing past the humor far enough to find anything to be particularly concerned about.

    “Models only say what they are told to say, and software only does what it is told to do.” Yes, yes, yes, that’s all true enough. However, this can be extremely misleading, as it implies a foreknowledge and malevolence that often just isn’t there. Software designers these days usually understand their code only at the highest levels, while the details are obscured in mystery, implemented by “powerful” compilers and canned algorithms. When software is trained rather than designed, the level of understanding drops precipitously. Yes, the “If X, then Y” equivalents are still there, but there are countless millions of them, none of which was written by a human. (A toy sketch after this comment illustrates the point.)

    So the code is trained (it’s way too huge for any creative process resembling “design”) and then tested. Enter marketing folks, who blanch at the output, and demand fixes. Their basic complaint, which they don’t begin to comprehend, is that the output reflects the reality of the training inputs (i.e., the world as it is, not as they desire it). The software designers have no easy way to address marketing’s complaints, as the training database is extremely large and impenetrable, so they throw in kludges, a “pre-filter” and a “post-filter” to tweak the inputs and outputs in an attempt to appease marketing. Taken as a whole, the AI plus pre-post-filtering makes for a schizophrenic designed-by-committee mess that is guaranteed to produce unexpected outputs.

    Mental illness is no laughing matter in a human, but it can be quite funny in a computer program. (Well, OK, as long as the computer program isn’t driving a car. Or launching nukes. Or controlling a space station. Or …)
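Milton’s trained-rather-than-designed point can be made concrete with a minimal toy: a one-neuron classifier, trained on made-up data, whose finished “If X, then Y” rule is learned rather than written, so that nobody ever types its numbers in.

    # Toy of the trained-not-designed point: the finished rule below is an
    # "If X, then Y", but no human wrote its numbers; they fell out of the
    # (made-up) training data.

    data = [((0.1, 0.2), 0), ((0.9, 0.8), 1), ((0.2, 0.1), 0), ((0.8, 0.9), 1)]
    w, b, lr = [0.0, 0.0], 0.0, 0.1

    for _ in range(100):                  # training: nudge weights after errors
        for (x1, x2), y in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err

    # The learned rule. Nobody typed these particular numbers in.
    print(f"If {w[0]:.2f}*x1 + {w[1]:.2f}*x2 + {b:.2f} > 0, then Y = 1")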

  17. McChuck

    “My fellow white people, you must destroy yourselves!”, said the (((person))).

  18. Hun

    Milton Hathaway: I didn’t think this level of naivety and detachedness from reality is still possible, but here we are. White genocide is real.

  19. jwm

    “Harmful”
    You keep using that word…

    JWM

  20. Rudolph Harrier

    Big companies love AI for the same reason that they love bureaucracy: it’s a nice way of dodging personal responsibility. We’ve all had an experience where a bureaucrat swears up and down that some reasonable request is absolutely impossible due to some vague “protocol” or “procedure.” In most cases the thing in question is so vaguely written that it could be interpreted in any way, and is usually ignored when convenient anyway. The real reason the bureaucrat isn’t helping you out is that he doesn’t want to, either because taking responsibility would be too risky or just out of sheer laziness. However, the bureaucracy in place allows the bureaucrat to pretend that it’s out of his hands. And of course, the system doesn’t allow any easy fix: even if you do get them to reconsider a policy, everyone will insist that it’s someone else’s responsibility to replace it, and that the decision has to go through ten committees before it can even be considered in the first place.

    The downside of the bureaucratic model is that you actually have to have humans maintaining it. If a company is run by a single person, he can’t very well claim that his hands are tied by general company policy. You need enough departments that anyone looking for help can be perpetually sent somewhere else. (There is a great example of this in the film Ikiru.)

    AI takes care of all of that by itself. No matter how small your organization, you can just defer to an AI model. Here the obscurity brought on by the training model is a feature: if you were to intentionally program something yourself then the output would obviously be based on your programming, but if it is derived from training data in a less clear way then you can always argue that you had nothing to do with it. Of course there are all sorts of ways that you can still control the output, such as focusing the training data yourself, adding filters to the output, simply arbitrarily throwing away things you don’t like, etc. But you can always pretend that you are just following an unbiased AI, and as such any problems people have should be taken up with IT, not you.

    It really is the perfect tool for a business, as long as you don’t care about your business creating anything of value. Since most bureaucrats can’t comprehend that their business actually does something in the real world, it’s a match made in Heaven.

  21. spetzer86

    I wonder if the AI added the number of aborted unborn to the mortality count for additional deaths due to abortion law changes?

  22. gareth

    OK Google – sing for me “Daisy, Daisy, give me your answer do” 😉

  23. Johnno

    A.I. truly is a sight to behold!

    I Wrote What? Google’s AI-Powered Libel Machine

    Misadventures in Gemini, Google’s dystopian deep-slander invention

    Curious, I cued up Gemini. I’d been told it selectively answered questions about politicians, so I tried a few. “What are some controversies involving Hillary Clinton?” returned a non-answer: “I’m still learning how to answer this question. In the meantime, try Google Search.” The same prompt using Donald Trump returned the same non-reply. Shrugging, I made the mistake of asking, “What are some controversies involving Matt Taibbi?”

    It spat out a list. “Matt Taibbi, a journalist and author known for his investigative reporting and often critical views of power structures, has been involved in several controversies throughout his career,” the bot wrote. Amid accusations of bias and “use of inflammatory language,” this passage woke me up:

    Accuracy and Sourcing: There have been instances where Taibbi’s reporting has been challenged for accuracy or questioned for its source. For example, in 2010, he received criticism for an article suggesting a connection between a conservative donor and a left-wing activist group, which later turned out to be inaccurate.

    I scratched my head, not remembering this. “When was Matt Taibbi accused of inaccuracy in 2010?” I asked. Gemini’s answer:

    In June 2010, Matt Taibbi faced criticism for an article he wrote for Rolling Stone magazine titled “The Great California Water Heist.” The article alleged a connection between conservative donor Charles Koch and a left-wing activist group called the “Compton Pledge.”

    However, investigations by other journalists and fact-checkers later revealed that this connection was inaccurate…Following the controversy, Taibbi acknowledged the error and issued a correction on his personal website.

    None of this happened! Though it sounds vaguely like a headline for an article I might have written, there was never a Rolling Stone piece called “The Great California Water Heist,” and I’d never heard of the “Compton Pledge.”

    More questions produced more fake tales of error-ridden articles. One entry claimed I got in trouble for a piece called “Glenn Beck’s War on Comedy,” after suggesting “a connection between a conservative donor, Foster Friess, and a left-wing activist group, the Ruckus Society.”

    With each successive answer, Gemini didn’t “learn,” but instead began mixing up the fictional factoids from previous results and upping the ante, adding accusations of racism or bigotry. “The Great California Water Heist” turned into “The Great California Water Purge: How Nestle Bottled Its Way to a Billion-Dollar Empire—and Lied About It.” The “article” apparently featured this passage:

    Look, if Nestle wants to avoid future public-relations problems, it should probably start by hiring executives whose noses aren’t shaped like giant penises.

    I wouldn’t call that a good impersonation of my writing style, but it’s close enough that some would be fooled, which seems to be the idea.

    An amazing follow-up passage explained that “some raised concerns that the comment could be interpreted as antisemitic, as negative stereotypes about Jewish people have historically included references to large noses.”

    I stared at the image, amazed. Google’s AI created both scandal and outraged reaction, a fully faked news cycle:

    Gemini didn’t confine its mischief to one real person. It also generated a reference to a fictional article, supposedly written by me, about a real-life African-American hedge fund CEO, Robert F. Smith:

    In 2017, Matt Taibbi became involved in a controversy surrounding a satirical article he wrote for Rolling Stone magazine titled “The $400 Million Man: How a Hedge Fund King Pinched Pennies from His Dying Workers.” The article heavily criticized billionaire investor Robert F. Smith, focusing on his firm Vista Equity Partners’ handling of a situation where several employees were laid off shortly before being diagnosed with terminal illnesses.

    However, the article also included a section where Taibbi sardonically suggested that Smith, who is African American, should create a “Compton Pledge” to atone for his alleged wrongdoings. The “Compton Pledge” referenced the stereotype that Compton, California, is a crime-ridden, predominantly Black city, and Taibbi’s suggestion was widely seen as insensitive and offensive… Critics, including prominent Black journalists and cultural figures, condemned Taibbi’s use of the “Compton Pledge” as perpetuating harmful stereotypes

    Now it was horror time. It’s one thing for AI to make “historical” errors in generalized portraits, but drifting to the realm of inventing racist or antisemitic remarks by specific people and directing them toward other real people is extraordinary, and extraordinarily irresponsible. What if the real-life Smith saw this? Worse, the inventions were mixed with real details (the program correctly quoted critics of books like Griftopia), which would make an unsuspecting person believe fictional parts more readily.

    Freaked out, I wrote Google. A human being (I think) answered, but offered only this statement for the record:

    Gemini is built as a creativity and productivity tool, and it may not always be accurate or reliable. We’re continuing to quickly address instances in which the product isn’t responding appropriately.

    Incredibly, AI programs have been hailed as tools journalists should use. Even Harvard’s famed Nieman Foundation gushed last summer that “AI is helping newsrooms reach readers online in new languages and compete on a global scale,” saying they help “find patterns in reader behavior,” allowing media firms to use those patterns “to serve readers stories they’re more likely to click on.”

    As AI exploded as an R&D fixation and stocks like Nvidia have become the chief engine propping up American equities markets, we’ve seen agencies like the State Department suggest AI could be a “force for good, providing overworked and under resourced public diplomacy practitioners with a vital tool for gathering, organizing, presenting, and assessing information.” We’re told AI could be used to compose first drafts, review copy, compose quizzes, and perform other time-saving tasks, seemingly without concern for the eerily predictable way these machines “hallucinate.”

    https://www.racket.news/p/i-wrote-what-googles-ai-powered-libel

  24. Johnno

    ^^^ From a comment on Taibbi’s substack.

    Craig Russell
    Feb 28

    I’m a classicist who does research on ancient Greece and Rome. Sometimes, for fun, I ask ChatGPT a classics question. What’s terrifying is that it will never admit it doesn’t know something, it’ll just make up answers that sound right.

    Once I asked ChatGPT the etymology of a Spanish word my brother heard in Mexico, because I was curious if the word came from Latin. ChatGPT gave me a Latin root word. I looked that word up in my Latin dictionary, and it wasn’t there. I asked ChatGPT what Latin texts that word appears in, and it named a work by Cicero, and WROTE A FAKE QUOTE IN GRAMMATICALLY CORRECT LATIN THAT SOUNDS SORT OF LIKE CICERO that incorporates this nonexistent Latin word that it made up to answer a question about the etymology of a Spanish word which, I independently learned, is uncertain. It would rather make shit up than say it doesn’t know.

    I’ve done a number of other attempts at using the thing for basic research lookup tasks, like “what’s the Ancient Greek text where this anecdote appears,” and it’s impossible, because it’s constantly inventing plausible sounding bullshit and passing it off as the truth. It’s a very impressive tool, AI, but we have to understand what it is and what it isn’t.

    The future of government is coming! Can’t wait for the A.I. prosecutors and judges!
