I want you to memorize something important. Never forget it. Ready?
Artificial “intelligence”, or AI, is just statistical modeling. (Regular readers already knew this.)
Here’s something else to know. Some things called “AI” aren’t AI at all, but are instead fast and large computers storing and processing massive amounts of information.
Like how your phone company tracks you everywhere you go, knows which websites you’ve been to and at what times, who you called, when and where and for how long, and what you said in text messages, and things like that.
And how Google reads your email, Twitter your direct messages, and Facebook—but you get the idea.
This kind of AI, which isn’t AI, is how these “private” companies will give your information to the regime’s not-so-secret police even without a warrant.
That isn’t any kind of “intelligence”, except in the form of being able to “remember” things. So let’s call this not-AI “storage”, or a “database”, or just “The Cloud.”
We’ll call the kind of AI that models the data in The Cloud to make predictions, or fake pictures, or things like that, statistical models. There is no harm in calling the modeling aspect AI, except that term produces unreasonable fear and awe.
As proof, here are two headlines that I want you to read out loud:
- Intelligence Community Developing AI Tool To Unmask Anonymous Writers;
- Intelligence Community Developing Statistical Model To Unmask Anonymous Writers.
First one sounded a lot scarier, didn’t it? Second one likely gave you the impression of “Meh.”
Now if government, or government-contracted, censors told you that they had used a statistical model to tie you to a Twitter account, you’d probably laugh. You’d laugh because you already know there is no way any statistical model could do this with anything approaching certainty.
Unless, of course, they were using The Cloud. Then they could have noted the suspect tweets came from your phone, at a time you were known to be using the phone, just by tracing the IP origin of tweets, and things like that. The Cloud can, at times, provide certainty, or something near enough to land you in solitary.
Statistical modeling won’t. And that is cheering news. Let’s look behind the headline to see why.
The Office of the Director of National Intelligence has announced they are developing an AI tool to unmask anonymous writers.
A press release on Tuesday from the ODNI revealed that the Intelligence Advanced Research Projects Activity (IARPA), their research and development arm, is starting work on the Human Interpretable Attribution of Text Using Underlying Structure program – HIATUS for short.
“Humans and machines produce vast amounts of text content every day,” with “text containing linguistic features that can reveal author identity,” a document from IARPA notes. The HIATUS tool would therefore use AI to identify anonymous writers via features such as “word choice, sentence phrasing [and] organization of information.”
“Think about it as like your written fingerprint, right?” program manager Dr Timothy McKinnon said in February. “What characteristics make your writing unique? So the technology would be able to identify that fingerprint compared against a corpus of other documents, and match them up if they are from the same author.”
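The "written fingerprint" idea can be made concrete with a toy sketch. This is not HIATUS's method (which is not public); it is a minimal illustration of the general stylometry technique: reduce each text to relative frequencies of common function words, then compare the resulting vectors. The word list and scoring are assumptions for illustration only.

```python
from collections import Counter
import math

# A handful of common function words. Real stylometry systems use
# hundreds of features: word choice, phrasing, punctuation, syntax.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "for"]

def fingerprint(text):
    """Relative frequencies of function words -- a crude 'written fingerprint'."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    if na == 0 or nb == 0:
        return 0.0
    return dot / (na * nb)

def same_author_score(text1, text2):
    """Higher score = more similar function-word usage. NOT proof of identity."""
    return cosine_similarity(fingerprint(text1), fingerprint(text2))
```

Note what the sketch makes obvious: the output is a similarity score, not an identification. Turning that score into "this is the author" requires a threshold somebody picks, and the error rate of that threshold is exactly what should be demanded in court.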
This is mere braggadocio by Censor McKinnon, a bit boast (a boast about bits) without backing. About which more shortly. First, why are they doing this?
McKinnon said that the AI program would be used by the intelligence community to track “disinformation campaigns,” and combat “human trafficking and other malicious activities that go on in online text forums.”
Ah, there it is. Like I’ve told you three hundred and forty-two times (an estimate), official disinformation needs some Official agency to define and track it.
Chances are the algorithms Censor McKinnon develops will be used to suspend user accounts, shadowban, and whatnot without any appeals process. “The censoring AI said 83.817561615% chance this is disinformation,” will say Censor McKinnon’s algorithm. “Ban the user.”
But there’s a possibility the not-so-secret police will show up with armed men, break down your door, and say, “Censor McKinnon’s censoring AI gives a 95.13815787417% chance you tweeted this disinformation.”
They might use the argument that “AI” IDed you. And, like Ricky Vaughn, have you indicted for spreading “hate”, or whatever.
Here’s how to challenge it. Have your lawyer demand an independent test of Censor McKinnon’s censoring statistical model. Have it prove, under neutral monitored conditions, it can identify, with great accuracy, from a sea of tweets (or whatever kind of posts), not just your tweets, but everybody’s.
Now I don’t know what accuracy Censor McKinnon’s censoring algorithm will have, and I am ignorant of the law of how good “scientific” instruments have to be to be considered reliable in court, but I am telling you the answer will be NOT THAT GOOD. Because “AI” is just statistical modeling, and modeling of this kind is not that good.
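Why "not that good" is nearly guaranteed here comes down to base rates. Even a model with impressive-sounding accuracy, screened against a sea of anonymous accounts, flags mostly innocent people. The numbers below are assumptions for illustration, not claims about any real system:

```python
# Illustrative base-rate arithmetic. One true author hiding in a large
# pool; even high per-comparison accuracy yields mostly false matches.

def false_discovery_rate(pool_size, true_matches, sensitivity, false_positive_rate):
    """Fraction of flagged accounts that are NOT the true author."""
    true_positives = true_matches * sensitivity
    false_positives = (pool_size - true_matches) * false_positive_rate
    flagged = true_positives + false_positives
    return false_positives / flagged

# Hypothetical: one real author among a million accounts; the model
# catches the author 99% of the time and wrongly flags innocent
# accounts only 0.1% of the time.
fdr = false_discovery_rate(pool_size=1_000_000, true_matches=1,
                           sensitivity=0.99, false_positive_rate=0.001)
# fdr comes out near 0.999: roughly 999 of every 1000 "matches" are wrong.
```

That is the arithmetic your lawyer's independent test would expose, which is why the demand should always be accuracy on everybody's posts, not just a curated handful.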
Censor McKinnon, and the cops, will instead try to present other “verifications”, with astounding accuracy stats, on tests Censor McKinnon’s team did themselves. Or they will point to peer-reviewed papers. Or they will use some other kind of low-value science bullying.
Accept none of this. Make them prove it. They won’t be able to.
Of course, if they use The Cloud, you’re screwed. But that’s why God invented VPNs and opsec.
Buy my new book and learn to argue against the regime: Everything You Believe Is Wrong.