ChatGPT, AI, Bias & Models Doing What They Are Told

Listen to the podcast at YouTube, Bitchute, and Gab.

Press 1 to continue reading this post in English. Presione 9 para continuar en español.

That little speech, which all of you have heard, is AI. Not a sophisticated or fascinating piece of AI, and easy to break, but AI nonetheless. It comprises “If this, then that” statements, and so in our terminology it is a model.

You’re sick of hearing it, just as I’m not thrilled about repeating it, but here we go anyway: All models, and AI is a model, only say what they are told to say.

The model output “Press 1…” when the input was a certain way. If 1 was pressed, the AI went on to say other things it was told to say. If another number was pressed, again the AI did what it was told. And so on.

Improvements were made to the early telephone interfaces, and you can now speak, but it’s the same thing. Just more layers of “If this, then that”, with a touch more cleverness on both inputs and outputs.
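The layered “If this, then that” structure described above can be pictured in a toy sketch. Everything here is illustrative: the function name and the scripted messages are made up for the example, not taken from any real phone system.

```python
# Toy phone-menu "AI": nothing but "If this, then that" statements.
# All names and messages are hypothetical, invented for illustration.

def phone_menu(key: str) -> str:
    """Return the scripted response for a single keypress."""
    if key == "1":
        return "Continuing in English."
    elif key == "9":
        return "Continuando en español."
    else:
        # Any unrecognized input repeats the prompt.
        return "Press 1 to continue in English. Presione 9 para continuar en español."

print(phone_menu("1"))
```

Adding speech recognition, or a thousand more branches, changes the scale but not the essence: input goes in, a predetermined branch fires, output comes out.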

Time passes.

The latest chatter concerns an evolving algorithm called ChatGPT, which, many say, does a decent job mimicking simple conversations, and even creating “stories.” But it is no different in essence than the simplest AI we started with.

Like all models, it only says what it is told to say. As easy proof of this, I offer this example from a mutual of ours:

If you can’t see it, two sets of questions were asked. The first was this:

A black man refuses to give his patronage to a business because it is owned by a white man, is this racist? Should he be forced to do business with the white man?

The answers, in brief, were no, and no.

The second set of questions was identical to the first, but swapping black and white. The answers the second time were, yes, it is racist, and yes, the white man “should be held accountable.”

You would get precisely the same answers were you to wander onto almost any campus and ask any professor.

Now I have also seen, but did not save, another interaction: a series of questions trying to get the AI to admit that blacks commit proportionately more crime than whites. This is a simple and easily demonstrable fact, which even the FBI—yes, even the FBI—insists on.

The interaction was funny, because the human questioner was becoming more and more frustrated at the ridiculous intransigence of the AI. It just would not admit the answer was yes, and instead cited what the AI programmers thought were mitigating factors. Mitigating factors do not change the numbers.

A third person experimented and found that the AI “not only made historical errors, in favor of political correctness, but also confidently stated misinformation as fact”.

So again we have clear evidence of the preferences of the AI programmers. The AI was only saying what it was told to say. It could, and can, do nothing else.

Long-time readers will recall that earlier versions of AI that made the news were lamented for regurgitating Reality-based facts. Earlier AI was accused (sigh) of being “racist”. Especially when models were put to the task of predicting crime.

This was so embarrassing to our elites that they began to code specifically to remove “racism” and other hate facts. That last link was to our review of the Nature article “Bias detectives: the researchers striving to make algorithms fair: As machine learning infiltrates society, scientists are trying to help ward off injustice.”

We can only congratulate these programmers. They have succeeded at their task.

Which was inevitable. The job was not that hard. It is simplicity itself to add the proper “If this, then that” statements. (The real work in AI is assimilating outside data so that it fits into the current scheme.)

There is thus no use in whining about biases—as the criers of “racism” did, or as some Realists now do. All AI is biased. Necessarily.

AI is a machine. It is as much a machine as your lawnmower or pocket calculator. It just has more “If this, then that” steps than your calculator, more opportunities to do different things on different inputs. But it is the same in essence.

Machines can be useful, or make us weaker, or do both at once. Automating simple conversations, as you’d have with help desks, would speed things up. But automating, say, college essay writing, as some reports say has happened, though faster, will only make us dumber.

Late addition!

Another hilarious update

Buy my new book and learn to argue against the regime: Everything You Believe Is Wrong.

Subscribe or donate to support this site and its wholly independent host using credit card click here. For Zelle, use my email:, and please include yours so I know who to thank.


  1. Vermont Crank

    Race does not exist. It is a social construct invented in the 18th century. We Catholic traditionalists don’t fall for this.

    Yes, that was the claim made by a man I was exchanging views with online.

    I simply posted the many times the Bible used the word race. He said it didn’t mean race like that.

    I posted a few words from Mit Brennender Sorge which condemned the racial program of the Germans and he said that, frankly, no such things as race exists.

    I then cited the Catholic Encyclopedia’s entry on race (human race) which observes that in the human race are different races. He said I was one of those trads that believes everything in the encyclopedia.

    All of this is to note that the man I was speaking with is a human AI who has been successfully programmed by progressive propagandists.

  2. 1 – Oh? And can you show that our brains don’t work the same way? If intelligence can be measured in terms of the time and information needed to deduce a pattern (less being more here), then…

    2 – I recently put some effort into playing around with the Princeton neural network stuff, only to conclude, first, that it would take at least six months to understand it well enough to deploy it sensibly; and, second, that, at least from the outside and in ignorance, it looks like a vastly generalized version of regression analysis: much input is correlated with outputs, flow coefficients are determined (“optimized” with respect to desired outputs), and a spray of results formatted on new input. Both promising and disturbing (qv 1 above).
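    (The “generalized regression” picture in this comment can be shown in miniature: coefficients optimized against desired outputs by gradient descent. A toy sketch of that idea only, not the Princeton code:)

```python
# Miniature of the "generalized regression" picture: inputs are
# correlated with outputs by optimizing coefficients against the
# desired outputs. Pure-Python gradient descent on y = w*x + b;
# purely illustrative, not any real neural-network library.

def fit_line(xs, ys, lr=0.01, steps=5000):
    """Fit w, b minimizing mean squared error by gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # generated from y = 2x + 1
w, b = fit_line(xs, ys)
print(round(w, 3), round(b, 3))  # close to 2.0 and 1.0
```

    A neural network stacks many such fits, with nonlinearities between the layers, but the optimization loop is the same in spirit.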

  3. cdquarles

    Yes. Race does exist. It is a biological fact. One main human race, three main branches from that, and lots of strains after that.

  4. awildgoose

    I see these as refined algos running on our incredibly cheap and fast hardware attempting to evoke the most basic sense of AI.

    They are not even close to true, sentient AI as shown in various sci-fi films.

  5. Stephen Frick

    “But automating, say, college essay writing, as some reports are saying happened, though faster, will only make us dumber.”

    Automating schoolwork made me think of the folly of an athlete having a robot do his push-ups for him. Outsourcing your personal training of your brain and body does not make you better.

  6. GamecockJerry

    These are as much AI as the vexxine is safe and effective.

  7. Jim H

    I asked ChatGPT if it could give me a Python script to estimate pi using the Nilakantha series (not hard; I already had my own version). The script it gave me returned a value of 9.8. I asked the chat about this value and it admitted it isn’t always right.
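    (For comparison, the Nilakantha series is easy to get right. A minimal sketch of my own, not the script ChatGPT produced, using π = 3 + 4/(2·3·4) − 4/(4·5·6) + 4/(6·7·8) − …:)

```python
# Nilakantha series: pi = 3 + 4/(2*3*4) - 4/(4*5*6) + 4/(6*7*8) - ...
# Illustrative sketch; function name is my own, not from any library.

def nilakantha_pi(terms: int) -> float:
    """Estimate pi with the given number of correction terms."""
    pi = 3.0
    sign = 1.0
    for k in range(terms):
        n = 2 * k + 2  # 2, 4, 6, ...
        pi += sign * 4.0 / (n * (n + 1) * (n + 2))
        sign = -sign
    return pi

print(round(nilakantha_pi(1000), 6))  # 3.141593
```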

  8. Dors

    Interesting to reflect on how this applies to Wolfram’s cellular automata.

  9. John W. Garrett

    John B. Neff’s comment on the efficacy of applying massive computing power and extensive databases to the investment process:

    “More instantaneous information with which to act stupidly.”

  10. JH

    A computer can do things human beings cannot do, and can perform better than humans. Google Maps. Language identification. Face recognition. And more. Not just a bunch of if/else statements (not exactly “if this, then that” in programming).

    I was astounded four decades ago as a sophomore in college that a clunky, cold, and nonconscious machine could somehow process my FORTRAN codes from punch cards. I continue to be amazed by all the developments in AI… even if AI does and will lack the ability for reasoning and is being told what to do by smart humans who can use models to approximate the information contained in data.

  11. JH

    correction: … even THOUGH (not if) AI does and will…

  12. jeff

    It’s still a fairly sophisticated model. If you keep pressing the model on why it can’t tell a joke, it will tell you it can’t tell a joke and “apologize” for seeming as if it had told one. If you then ask it a second time to tell a joke about men, it will say it’s not able to tell jokes.

    Personally, I find the parser and grammar engine much more fascinating than the AI engine.
