The Regime’s New De Facto “AI” Law And The End Of Pattern Recognition

News is that the Regime put out a new law, bypassing Congress as usual, in the form of an “executive order” about “AI”. The new law’s title is “Safe, Secure, and Trustworthy Artificial Intelligence”.

The new law has the Cult of Safety First!’s favorite word in it: safety. It’s got the woke favorite: equity. It’s got the liberals’ favorite: civil rights.

Does it have any of our side’s beloved words? To an extent, which we’ll come to below. But this is only important if we are to trust the Regime, which we are not.

On the whole, this is yet another encroachment of Experts into an area into which they have not yet insinuated their hooked tentacles. Experts in an Expertocracy have never met, and never will meet, a thing that does not need their watchful eye and super-brained guidance.

Anyway, let’s look at this new law.

First, terms. There is no such thing as “artificial intelligence.” Although if there were, Kamala Harris, who is the supposed entity behind this new law, would be the best example. There are instead two forms or kinds of “AI”: (1) surveillance and data collection, (2) statistical models. And all models only say what they are told to say. That’s it. Both are used at times in tandem, of course, such as in facial recognition predictions.

I emphasize predictions because that is exactly what these “machine learning” models are doing. Call these statistical models by whatever fancy name you like, “AI” included, but they are only making guesses. Usually good ones. Sometimes excellent ones. But not always. They are imperfect, as all models are imperfect.

Models (hard-coded in machines) that scan UPC codes do better than models that guess which face belongs to which person, the latter task being obviously more complicated than the former. (Fake images and the like are also predictions in this sense.)
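For readers who want to see the bones of the thing, here is a toy sketch in Python. Every number in it is invented for illustration; it stands in for no real system. A model is fit to data and then asked about a new case, and all it can return is a probability: a guess.

    # Toy sketch: a "machine learning" model is a statistical model whose
    # output is a guess (a prediction), sometimes right, sometimes wrong.
    # All data below are invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)

    # Fake "faces": one noisy measurement each, two identities (0 and 1).
    x = rng.normal(loc=np.repeat([0.0, 1.0], 500), scale=1.0).reshape(-1, 1)
    y = np.repeat([0, 1], 500)

    model = LogisticRegression().fit(x, y)

    # The model "knows" nothing: it returns a probability, i.e. a guess.
    new_face = np.array([[0.4]])
    print(model.predict_proba(new_face))  # roughly [[0.5, 0.5]]: an uncertain guess

    # Guesses are usually good, sometimes excellent, never perfect.
    print("accuracy:", model.score(x, y))  # well below 1, because the fake faces overlap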

The advantage “AI” models have over, say, the kind of statistical models used by sociologists and the like is that “AI” models are put in predictive terms, which is why they are so good. Old-school stats models instead use an outmoded, over-certainty-generating method that worships model parameters (the “guts” of models), which are of no use in any real-world application.

As an aside, we (a very small handful of us) have been trying to get old-school stats to shift to observables instead of parameters. In vain. But never mind that here.
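For the curious, here is the contrast in miniature, with invented data (my illustration, not anything from the law). The old-school method stares at p-values on parameters; the predictive method scores the model’s guesses against observables held out of the fit.

    # Toy contrast: parameter worship vs. predictive evaluation.
    # All data are invented for illustration.
    import numpy as np
    import statsmodels.api as sm
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    x = rng.normal(size=200)
    y = 2.0 * x + rng.normal(size=200)

    x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=1)

    fit = sm.OLS(y_train, sm.add_constant(x_train)).fit()

    # Old school: report on the model's "guts", the parameters.
    print(fit.pvalues)

    # Predictive: how close are the guesses to data the model never saw?
    preds = fit.predict(sm.add_constant(x_test))
    print("mean absolute prediction error:", np.abs(preds - y_test).mean())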

Let’s take both kinds of “AI” as used in the new law in turn, saving the newsworthy part for last.

Surveillance

The Regime, if you can believe it, makes the right sounds here. They say things like this:

Protect Americans’ privacy by prioritizing federal support for accelerating the development and use of privacy-preserving techniques—including ones that use cutting-edge AI and that let AI systems be trained while preserving the privacy of the training data.

Strengthen privacy-preserving research and technologies, such as cryptographic tools that preserve individuals’ privacy…to promote the adoption of leading-edge privacy-preserving technologies by federal agencies.

Evaluate how agencies collect and use commercially available information

None of this should be taken to mean the Regime itself won’t collect, and use as they like, every scrap of information on you, as the NSA does now. They mention crypto, which also sounds good, but do not forget that the Regime always demands “back doors” into it, so that they can do the obvious.

This becomes mostly air when you realize most people don’t care. After all, they carry, and even pay to carry, tracking devices wherever they go. “Credit agencies” already have most or all of your transaction data, so that they can sell “credit scores”—which themselves are predictive models. “AI”, too.

And all of this will be made a whisper of an echo of an aside when the Regime institutes fully electronic money.

Pattern Recognition

The big news was this. Pattern recognition, as everybody on our side already knows, is verboten, and cancelable, when it is engaged in for any purpose other than to DIE. It is illegal already in certain sectors, like banking. In a sense, the same algorithms forbidden there are used with wild abandon by universities and companies, but in reverse so that they can achieve full DIE.

Meaning there is, already, no common sense or consistency to “AI”. We see the same two-mindedness in the new law.

Provide clear guidance to landlords, Federal benefits programs, and federal contractors to keep AI algorithms from being used to exacerbate discrimination.

Address algorithmic discrimination through training, technical assistance, and coordination between the Department of Justice and Federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI.

Ensure fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.

The Regime does all it can to pretend not to see crime statistics, if only for the old-fashioned reason that if it can’t see what is happening, then it can pretend what is happening isn’t.

As we know, “discrimination” only works one way: Victims can’t be “discriminated” against, Oppressors can, and must be. So any algorithm that is formal—encoded in any way, or that can be discovered by statistical means by external agencies on the hunt for “discrimination”—will be ruthlessly prosecuted. The Regime is determined to DIE, and DIE it will.

Models that make excellent but unwanted predictions about Victims will be purged in just the same way as bearers of bad news used to be flayed alive.

Look for an agency, or agencies, to be staffed to police algorithms that make too-accurate Victim predictions. To avoid dings, algorithm writers will have to hard-code DIE into them. That’s the only way to get them to work.

In other words, if the algorithm is judged by its predictions, in the sense that people successfully use these predictions to make loans, approve applications, and that sort of thing, then where Victims have worse outcomes, the algorithms will be blamed. Even if writers have done their utmost to strip all obvious, and even all not-so-obvious, Victim characterizations from the algorithms.

Any characteristic that is associated with worse outcomes, and that Victims have in greater proportion than Oppressors, must be excised with extreme prejudice. But that means everything, in the end. Which means everything has to come out, which makes the algorithms useless.
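Here is a toy demonstration, with made-up data, of why the excision never ends. Drop the Victim column, and any correlated “neutral” feature will cheerfully reconstruct it.

    # Toy sketch: stripping the protected column is not enough, because
    # proxies carry the same information. All data are invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)
    n = 2000
    group = rng.integers(0, 2, size=n)  # the column the law says to drop

    # A "neutral" feature (say, a zip-code score) that tracks the group.
    proxy = group + rng.normal(scale=0.25, size=n)

    # With the group column gone, the proxy alone recovers it almost perfectly.
    clf = LogisticRegression().fit(proxy.reshape(-1, 1), group)
    print("group recovered from proxy:", clf.score(proxy.reshape(-1, 1), group))
    # ~0.98: so the proxy must be excised too, and so on, until nothing is left.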

Thus the only way to beat this new law will be to do as colleges do, and select for Victims. The DIE will have to be hard-coded. The models can still do a good job on Oppressors, but Victims must be given a pass. This makes “AI” a technological kind of anarcho-tyranny.
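What the hard coding amounts to, sketched as a toy rule (mine, not anything in the order): the model scores everybody, then the override does the rest.

    # Toy sketch of the hard-coded pass: the model's guess governs Oppressors,
    # the override governs Victims. A hypothetical rule, for illustration only.
    def decide(score: float, is_victim: bool, cutoff: float = 0.5) -> bool:
        """Approve if the predicted probability clears the cutoff,
        unless the applicant is a Victim, who passes regardless."""
        if is_victim:
            return True           # the hard-coded pass
        return score >= cutoff    # the model still works on everyone else

    print(decide(0.3, is_victim=False))  # False: the guess governs
    print(decide(0.3, is_victim=True))   # True: the override governs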

Which is a reminder that Diversity is our weakness.

Oh, there are also words in the new law like this: “Develop principles and best practices to mitigate the harms and maximize the benefits of AI for workers”. This will be ignored. Because replacing bad workers with machines is the only hope of avoiding DIEing completely.

Subscribe or donate to support this site and its wholly independent host using credit card, click here. Or use the paid subscription at Substack. Cash App: $WilliamMBriggs. For Zelle, use my email: matt@wmbriggs.com, and please include yours so I know who to thank.

12 Comments

  1. Ray

    Your enemies strike again.
    “None of this should be taken to me the Regime itself won’t collect”

  2. Briggs

    Ray. They never sleep.

  3. JH

    You may claim that all models only say what they are told to say, but you and AI developers don’t really know what they, e.g., ChatGPT, would say. The future developments of AI tools will be full of amazement, regardless of what happens to DEI and the regime. The speed. The utilization of data. As if the software can think without the input of the developers. Artificial Intelligence it is.

  4. Cloudbuster

    JH: “you and AI developers don’t really know what they, e.g., ChatGPT, would say”

    That doesn’t mean the models say other than what they’re told to say, it only means that the inputs are so vast the developers are not sure what they’re telling the models to say.

  5. Cary D Cotterman

    I still use cash, write stuff on paper, drive a “dumb” car, and don’t carry an activated cell phone. They don’t know where the hell I am or what I’m doing (except when I’m here at this computer). Being a grumpy old dude has some perks.

  6. Milton Hathaway

    Before finishing the first paragraph, my mind immediately pictured a dog named “AI” with a group of fleas, Biden’s handlers, assembled in a tiny furless patch on the dog’s head, self-importantly issuing edicts to all flea-kind on how this AI dog is to be managed. A couple more paragraphs in and I’m picturing the rest of the fleas completely uninterested in the yammering emanating from the Biden fleas, distracted by the exciting new opportunities for flea capitalism that exist on this new dog. The fleas paused only for the briefest instant, and, detecting no hint of any flea powder, went about their flea business.

    JH writes “AI developers don’t really know what … ChatGPT would say”. True enough. AI developers appear to know even less about the inner workings of their creations than other model-builders, such as those that rely on statistical packages, software plug-ins, high-tech test equipment, etc. The knowledge span of any individual techie is always bounded by practical limits, on both the minute details on the low end and the broader big picture on the high end. For example, high-tech test equipment is full of integrated circuits, the detailed inner workings of which are mostly a mystery to the test equipment designers. In turn, the integrated circuit designers would be surprised at the unexpected ways their creations are put to use.

    Years ago I decided to lump people into two broad categories, those with horizontally integrated knowledge and those with vertically integrated knowledge. Examples of people with horizontally integrated knowledge would be a doctor or a car mechanic. They see a wide variety of problems and rely on having a good memory to solve problems quickly. Ask either of them why a couple of times, though, and you quickly reach the point where they are relying on their memory, either training or past experience, and don’t understand the why behind it.

    An example of someone with vertically integrated knowledge would be me. Cursed with a horrible memory all my life, I’ve had to delve into the details until the why behind something made sense to me intuitively. I am constantly re-deriving or looking up simple facts and equations others have memorized. To compensate for my poor memory, I document everything, but I long ago reached the point where my documentation became too unwieldy to be of much use. Internet searching can be a great tool, but I have difficulty remembering the perfect word or phrase to yield a usable search result.

    ChatGPT is a wondrous thing to a vertically-integrated horizontally-challenged person like me. It’s like having a magical assistant with an incredible memory that needs only the vaguest prodding. Ask it “What’s that word that means x except when you are talking about y?”, and it knows the answer! How can I read in a file of numbers in this unfamiliar programming language? ChatGPT gives me a chunk of code that works with some minor tweaking.

    But I can see how ChatGPT can be viewed as a threat by a horizontally-integrated person, since ChatGPT can do a lot of what they are best at, and isn’t so good at what they also aren’t great at. Back to the doctor example, I could see how ChatGPT, with appropriate training, could evaluate a list of symptoms and a medical history, come up with follow-up questions and a list of diagnostic tests to be run, and iterate this down to a short list of the most likely diagnoses. But at that point, I would want one of the rarer vertically-integrated doctors to take over (and do my own research, of course). Many malpractice suits involve doctors applying standard treatments in the wrong situations, lacking the vertical knowledge to really understand the underlying why’s.

  7. Hagfish Bagpipe

    Shut up niggers. I’m listening to Leonid Kogan and Karl Richter play Bach. AI that you statistical modeling, data collecting, deep state dipstick mofos!

  8. To be of any actual use for control purposes, the government will need AI that isn’t broken by hard-coded exceptions. It will be interesting to see how they square the circle. Presumably it will be the NSA having secret tech that they occasionally release snippets of to the genpop, as with crypto.

  9. Rudolph Harrier

    I know how to program a computer to solve any system of linear equations via Gaussian elimination.

    There are many such systems whose solutions I do not know, but which my program could find quickly.

    If we accept JH’s mindset, this means that the program decided to do something other than what I told it and came up with the solutions on its own.

  10. Rudolph Harrier

    On that matter, the existence of such algorithms long before computers proves that you don’t need to think to use them.

    Generations of students have done things like use Gaussian Elimination or the quadratic formula without any understanding of why they are doing what they are doing, or even what the answers mean. Searle’s Chinese Room already exists, and it is the public school system.

    You see the same sort of thing happen when students write an essay by starting with a standard format (like the five paragraph form) and then write a mixture of cliches and bits stolen from other sources. A readable essay is created, but with no understanding from the student about what it really means. So we know that this is possible to do with language processing as well.

    I suspect that a reason why so many people are so insistent on the reality of artificial intelligence is that if they admitted it didn’t exist they’d also have to admit that much of human “scholarly” achievement doesn’t take any intelligence either.

  11. Jim

    I think in practice there may be some rounding of corners after the fact. A bank may tweak its algorithm to favor victims to a point, but then decide to stop messing with it, and simply manually add back victims to the result to satisfy regulators. This allows them to stop endlessly tweaking the model (after all will not the regulations themselves be endlessly tweaked?), plus it will give them some internal knowledge on “what we should have done, had we not been forced to do otherwise”.

  12. Johnno

    JH says the future will be full of AI amazement!

    This is true!

    The Expurts will be fully AMAZED at how and why their AI runs away from them and launches the nukes.

    They’ll be absolutely boondoggled!

    At their wit’s end!

    Cutting their wrists on the altars of Baal in wonderment over how such a completely predictable and cliched turn of events could’ve ever transpired!
