# “Why Didn’t You Use The [fill in the blank] Model On That NZ Data, Briggs?”

“Hey, Briggs. I saw your take on the leaked New Zealand vaccine data. Interesting. But why didn’t you use [Insert My Favorite Statistics Model Here]?” [Blog, Substack mirror.]

I’ll tell you why not. Because models, in the way you’re thinking of them, aren’t necessary.

In fact, all of you should stop using so many models! And you certainly shouldn’t trust models produced by others. Reagan had it backwards. It is not “Trust but verify”. It is “Verify then trust.”

My dear friends, you and I have, over the course of many years, examined hundreds upon hundreds of models, all of them bad, produced by the biggest names in science and in the best institutions. Shouldn’t this catalog of horrors have imbued in you by now a reflexive distrust of models, as they are usually found?

So no formal models in the sense you are thinking, unless absolutely necessary. When is that? Let me illustrate with something everybody understands: sports.

We always begin with a question of interest. Like, “Who won the Lions-Chargers game?”

Now, I ask you, how would you go about answering that? With a model!, say scientists. And what is the first step in modeling? Right: gathering data.

I didn’t know the answer, so I searched the standard woke search engine and they gave me this: “Detroit 41, Chargers 38.”

This is our data. Or, rather, part of it. We also need tacit premises, like the rules of football, the dates in question, and things like this, premises researchers scarcely ever write formally into their models. Which means they usually forget these premises are there, and when they go to employ their models they commit all manner of offences against thought.

But let that pass. Suppose we have the correct premises related to the question. We have the data, which is part of any formal model. The next step is to make math of it.

How about a parameterized bivariate Poisson? If you recall, a Poisson is a model that assigns probabilities to non-negative integers, which is what football scores are. A bivariate version handles two such numbers at once, one for each of our teams.
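To make the idea concrete, here is a minimal sketch of such a model. It assumes the two scores are independent Poissons (a true bivariate Poisson adds a shared term linking the teams), and the scoring rates are invented for illustration, not fit to any real data.

```python
from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    """Probability that a Poisson with mean lam produces the count k."""
    return lam**k * exp(-lam) / factorial(k)

def independent_bivariate_pmf(x: int, y: int, lam1: float, lam2: float) -> float:
    """Joint probability of the score (x, y), assuming the two
    team scores are independent (a simplifying assumption)."""
    return poisson_pmf(x, lam1) * poisson_pmf(y, lam2)

# Hypothetical scoring rates for the two teams; nothing here is fit to data.
p = independent_bivariate_pmf(41, 38, lam1=27.0, lam2=24.0)
```

The point is not that this is the right model, but that even the simplest version already carries assumptions (independence, constant rates) that the data never told us.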

We could do this model as frequentists, fitting the parameters, and then staring at them for insight. If we’re lucky, we’ll be able to flash our wee Ps at our audience, and they will be in awe. Or we could be Bayesians, but then we need to think about “priors” (more models) on the parameters. It can be done. Software makes it a breeze.

You don’t like the Poisson? Then how about a time series cohort model? What we do is gather more data, on previous games and for other teams. Then we chart, for different team cohorts, the course of the season using an autoregressive integrated moving average model. ARIMA, as it’s called in the trade. Still have to make the frequentist-Bayesian choice. But whatever.
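In practice ARIMA fitting is handed to a library; to show the flavor of what such a model does, here is only its simplest ingredient, an AR(1) fit by least squares, run on a made-up sequence of scores. The full ARIMA adds differencing (the "I") and moving-average (the "MA") terms on top of this.

```python
def fit_ar1(series):
    """Least-squares fit of x[t] = a + b * x[t-1], the 'AR' piece of ARIMA.
    Real ARIMA models also difference the series and add MA terms."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Made-up sequence of one team's game scores across a season.
scores = [24, 27, 21, 31, 28, 34, 30, 38]
a, b = fit_ar1(scores)
next_guess = a + b * scores[-1]  # one-step-ahead point guess
```

Notice that the moment you want that `next_guess`, you have bought a model and all its assumptions, which is exactly the author's point about when models become necessary.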

On the other hand, this is 2023! Why use these stuffy old parameterizations? We now have machine learning and artificial intelligence! The real stuff having been corrupted beyond measure.

How about a version of CART, then? We have scads of data on ticket prices, who bought them and so on, who sat where. We have tons more on the athletes’ statuses, their prior performance stats, and on and on, seemingly forever. But computers are big these days and handle all this with ease.
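For the curious, here is what one step of CART actually does: scan a feature for the split threshold that best separates the outcomes, measured by Gini impurity. The data below (ticket prices versus a win/loss label) is entirely made up for illustration.

```python
def gini(labels):
    """Gini impurity of a list of 0/1 labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(xs, ys):
    """One CART step: the threshold on a single feature that minimizes
    the weighted Gini impurity of the two resulting groups."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best

# Invented data: feature = ticket price, label = home-team win (1) or loss (0).
prices = [30, 35, 40, 80, 90, 100]
wins = [0, 0, 0, 1, 1, 1]
threshold, impurity = best_split(prices, wins)
```

A full tree just repeats this recursively on each half, which is why scads of data pose no difficulty, and also why the machine will always find *some* split whether or not it means anything.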

“Uh, okay, but we know the score. We can just look.”

EXACTLY!

This “just looking” still relies on the veracity of the data source, and our senses, and all that, as all models do. That can never be escaped.

Doc invents a new pill to cure the screaming willies. You either have it or you don’t: there are no gradations. He gave one batch of people his new drug. Four out of five were cured. He gave another batch a placebo. Two out of four were cured.

Which group did better?
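The comparison really is just looking. If you insist on writing it down, it is one line of arithmetic, not a model:

```python
from fractions import Fraction

drug_cured = Fraction(4, 5)     # 4 of 5 cured on the new pill
placebo_cured = Fraction(2, 4)  # 2 of 4 cured on the placebo

# "Which group did better?" is answered by counting, not modeling.
drug_did_better = drug_cured > placebo_cured
```

That is the entire analysis the question calls for; anything more is answering a different question.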

If you’ve had formal statistics training your first instinct will be to model this. To “discover” what happened. But we already know! We don’t need to discover.

“But Briggs, the new pill might not be better. Other things could have caused the difference.”

Indeed. But who was claiming cause? The question was which group did better, which team won. We are not saying why one did better. We can never learn cause from such paltry data as this. I went out of my way to insist I was not claiming cause in the NZ data.

All we had to do in the NZ data was look. Nothing more was needed. Unless we wanted to make a stab at quantifying the small departure from uniformity in people who got only 1 or 2 shots then died (review the triangle plots if you can’t recall).

In sports, the old scores will do, unless we wanted to guess future (or unknown) scores. Then we need a model. With the drug, looking was fine, unless we wanted to guess about future outcomes. Then we need a model.

If we want to know cause, of cures and non-cures, we have to do a monumental amount of work, investigating biochemical pathways, genetics, patient characteristics, and on and on. Stupid simple statistical models cannot provide this information, though many, alas, believe they can. Hence so many bad models.

In “the game”, we do not know which team is better based only on the score. The score may have been the result of a bad call by an official, say, or any number of other things. Scientists would say the result is “due to chance”, not understanding that “chance” can’t cause anything. In any case, the score alone does not tell us why the score was what it was.

In the NZ data, all we had to do was count. Cause was out of the question. But cause can be had, and in the way I suggested: by having NZ release the individual health records of those who we suspect, but cannot prove, were vaccine injured.

It’s a good bet NZ does not release all their data. And it’s an excellent bet—you can make money with this one—that people, especially Experts, will believe they know the right answer anyway.

Subscribe or donate to support this site and its wholly independent host using credit card click here. Or use the paid subscription at Substack. Cash App: \$WilliamMBriggs. For Zelle, use my email: matt@wmbriggs.com, and please include yours so I know who to thank.

1. Robin

My recent in-depth and ongoing (for about two years) experience with a Coroner leads me to the conclusion that the underlying data is pretty useless in the first instance.

Autopsy is not done unless the death is unnatural. This is only a very small subset of the data.

There could be millions of deaths worldwide due to myocarditis. These would be classed as natural death in almost all cases so no autopsy is done. There would be no Inquest either so I suspect most are put in the records as heart failure.

Even when an autopsy is done, as in my case, the standard was so poor I had to report it to the medical council.

You can assume that there is no useful death data from any of these agencies.

2. bob sykes

I spent a long time on the Ohio State civil engineering faculty, and twice in faculty meetings I received revelations:

Once, discussing enrollments, one colleague (a structural engineer) was provoked to blurt out, “We have the data. Why do we need statistics?”

Another time, another colleague (a transportation planner) said, “I don’t need data to calculate statistics.” He was/is a Bayesian.

3. William Wallace

Robin, that’s interesting. I was surprised recently when a family member died in his sleep. A seemingly fit man, known to be active. He was 57 years old and went to sleep and died during the night. His wife discovered him dead in the morning when he failed to get up for his alarm. No autopsy, and the coroner said he had a heart attack in his sleep, or so I’m told. This was in a New England commonwealth. How would they know his death was natural? Do they at least do a cursory examination? I suspect that in the end, it was just another case of vexation.

4. Hagfish Bagpipe

Oh for pete’s sake Briggs, why don’t you just torture your damn data until it CONFESSES to whatever crimes the prosecutor wants, or put it through some confusing rube goldberg data sausage FAKERATOR until it shakes its MONEY MAKER and makes you RICH! You and your precious “TRVTH” — Dude! — it’s all about turning LEAD into GOLD, with just a little legerdemain, a bit of bafflegabble nonsense, and a big blob of CHUTZPAH until you’re ROLLING IN DOUGH! That’s the meaning of life! That’s the way to get ahead in the world, DON’T YOU KNOW?! You go around blurting out simple, sensible truths and offending everyone when what people really want is another steaming helping of STUPID RETARDED GIBBERISH, like “trust but verify” — one of the dumbest things ever said by man, and one of the most POPULAR! Get it? GET IT?! If you had any brains Briggs you’d be a big blustering bonehead. That’s the path to success. Take it from me, pal; I’m as dumb as they come and I rule the world.

Hope that helps.

5. JH

So, what statistical questions would you like to find answers to? Statistical machine learning methods or not, the statistical analysis will depend on the statistical question.

Free of charge today as I am already paid by bob sykes’s hilarious comments… something money cannot buy.

6. Robin

William Wallace,

As I learned from this experience, it is either a Medical Practitioner or a Medical Examiner who decides, depending upon the jurisdiction. Sometimes the Police will have influence where foul play is suspected or where they conclude there is circumstantial evidence of an unnatural death.

In my case, the Police reported circumstantial evidence of unnatural causes which the Coroner and Pathologist accepted as fact. Except there was a problem: it was not fact. Nowhere near being fact. It was assumption.

After almost a year of relentless pursuit, I’ve now gotten agreement from the Coroner that most of the key evidence reported by the police was wrong.

We are back to square one now: the Pathologist’s autopsy shows no cause, the toxicology shows no cause, and the Police evidence was not correct. I have brought in my own Pathologist and legal team, who contend that it was either a natural death or Covid/Vax related.

This will go on for at least another year, I suspect.

7. cdquarles

I have to agree with Robin. Death certificates are guesses. Informed they may be, but guesses nevertheless. Biased ones, too; for the people filling the forms out have rules of thumb: “Common things occur commonly”, which is true enough; but for the person in front of you, your guess may be wildly wrong. Very few want to spend the time, money, and effort to do better investigations.

Locally, an autopsy is formally required if there are suspicions of foul play. Otherwise, families of decedents have to pay for them, and they are not cheap. For a person found dead in their bed, physical examination likely would not yield anything useful. After all, a bout of cardiac arrhythmia leading to cessation of circulation would be sufficient and not necessarily lead to any visible effects sufficient to be suspicious.

8. Uncle Mike

Dear Doctor,

The data was of the form “lifetimes” (not a time series). You didn’t display them as such, i.e. survival % vs. time since jab. You instead did a set of unusual dot charts — which you analyzed assuming a theoretical prior, the Uniform Distribution. That’s a model, sir, replete with parameters.

Instead, you could have displayed the raw data as cumulative death percent (0–100%) vs. time since jab, or better, survival percent (100% down to 0%). A survival curve is the raw data viewed appropriately. It’s neither a model nor smoothing.

You can compare survival curves by covariate visually. You can look at the survival curves of so-called control groups. It’s comparing maps of the data. No models. After you view the appropriate charts, you might want to apply some comparative math, but that’s when the modeling begins.
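[Ed.: Uncle Mike’s empirical survival curve really is just the raw data sorted and counted. A minimal sketch, with invented lifetimes standing in for the NZ records:]

```python
def survival_curve(lifetimes):
    """Empirical survival function: for each observed time t, the fraction
    of subjects still alive (lifetime > t). No model, no smoothing,
    no parameters: just the data sorted and counted."""
    n = len(lifetimes)
    times = sorted(set(lifetimes))
    return [(t, sum(1 for lt in lifetimes if lt > t) / n) for t in times]

# Hypothetical days-from-jab-to-death for a small group (made up).
days = [30, 30, 90, 180, 365, 365, 365, 700]
curve = survival_curve(days)
```

Plotting one such curve per covariate group gives exactly the visual comparison Uncle Mike describes, before any modeling begins.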

9. Hagfish Bagpipe

Briggs, that was a pretty funny bit, no? Anyway, it made me laugh.

But what does YOS think?

10. Hagfish Bagpipe

Blimey, and I thought Uncle Mike was some sort of eccentric thespian.

11. Philip Hayward

Isn’t the most important revelation from the NZ whistleblower that post-vax deaths occur in CLUSTERS related to particular vaccine batches (and the locations and vaccinators who were supplied them)?
Instead of trying to run formulas that show deaths of “all vaccinated” this or that fraction of a percent ahead of “everyone else”, the real obvious scandal is that people who were administered vaxes from SOME batches, a small minority of the total, have died at a rate tens of times higher than everybody else who was vaxed from the vast majority of batches.
It is impossible to believe that responsible people don’t know about this; the personnel at the medical centers and other institutions where extraordinary outlier deaths occurred post-vax; administrative officials; and the vaccine manufacturers themselves. Their inward rationalization will obviously involve that overall, they are doing the right thing, and “vaccine hesitancy” must be averted at all costs. Surely secret investigations have been done to unearth the cause of the outlier deadliness of these batches; contamination; manufacturing-process hiccups; handling and care of the batches during transport and storage; etc. And corrective measures have been applied so that overall vaccine safety increases as further batches are produced and distributed. All done secretly but carrying the confidence of responsible people who must know about the deadly batches.
Can someone tell me what other industry gets away with operating in this manner?
Why not win public confidence by punishing the people responsible for the “mistakes” so severely that we can be sure that similar “mistakes” will not be made again? If we had airliners crashing because someone on the assembly line was fitting a substandard rivet in a wing, someone would be made to pay with their career and very likely their liberty as a citizen. We certainly couldn’t trust air travel if “to maintain our trust, we can’t ever know the truth and no-one is ever to be punished”.
The “degrees of separation” thing is so pervasive that no wonder some 45% of Americans polled, knew someone that they believe “died of the vaccine”. And no wonder, then, that it is not credible in the eyes of such a significant plurality, to claim “safety and effectiveness” and shut down and punish people who simply have a conscience about what they see with their own eyes, not to mention expert theoretical disputation about the wisdom of going on with so many “impossible to knows” (eg about long term effects), and “likely downside consequences” due to the way things like spike protein and lipid nanoparticles are known to work in the human body. Plus, everybody is different, and mainstream Pharma is infamously incurious about this. For example, credible tests have been posed by which people can be screened for predisposition to “side effect” harm from SSRI’s and other common medications.
Besides the “batch” problem, there is obviously going to be a “predisposition to side effects” problem with a certain proportion of people regardless of how pure the batch with which they were administered. And there is also a credible hypothesis about inadvertent intravenous administration of vaccines, again regardless of how pure the product is.

12. This post was worthy of our time, William. Thank you for writing it.

Perhaps we can then understand the previous post on NZ data as “nothing immediately jumps out but there are some interesting tidbits”? Well, nothing other than the weird cluster in the one-dose. I’m still a bit confused on the way you view data, I guess I’ll keep sticking around for more insights. 🙂

13. The figures are all fake we know this because they’re government figures. The best we can do is look at excess deaths in populations compared to previous values for comparable population sets. Even then govt/pharma can say the effects are the result of coof or coof plus lockdowns and it would be worse without vax. You’re on a hiding to nothing. It’s all fake and gay.

14. PhilH

You’re a national treasure. In a better world, you’d be running Harvard, and Claudine Gay would be selling “personalized” student essays on Fiverr.

15. jennifer willkerson

Hi William Briggs,

I ran into an article by Prof Jesse Jenkins of Princeton. He admits to problems with modeling the nation’s grid. For example:
Until recently, energy modeling by the U.S. Energy Information Administration (EIA) and IEA vastly underprojected wind and solar deployments. What about the pitfalls with energy modeling?

Jenkins: These are decision-support tools, not decision-making tools. They cannot give you the answer. In fact, we shouldn’t even think of these models as predictive. We say that the IEA makes projections. Well, they’re really making a scenario that’s internally consistent with a set of assumptions. That “prediction” is only as good as the assumptions that go into it, and those assumptions are challenging. We’re not talking about a physical phenomenon that I can repeatedly observe in an experiment and derive the equations for and know will hold forever, like gravity or the strong nuclear force. We’re trying to project a dynamically changing system involving deep uncertainties where you cannot resolve the probability distribution or even the range of possible outcomes.

We face deep uncertainties because we’re talking about policies that will shape capital investments that will live for 20 or 30 years or longer. If you ask a bunch of experts to predict the cost of a technology 10 years from now, they’re all over the map—9 out of 10 are wrong, and you don’t know which one is right. There’s just so much that is contingent and unknowable. The best we can do is to build tools that allow us to explore possible futures, to build intuition about the consequences of different actions under different assumptions, and to hope that that helps us make better decisions than if we were simply ignorant.

With all the uncertainty inherent in the modeling, why base social policy, including mandates, on “projections” that aren’t verifiable through observation? Because it is about the funding. Funding for universities, think tanks, power electronics companies, major corporations: all at our expense.

https://spectrum.ieee.org/jesse-jenkins

JW