Questions On Statistical Practice Answered

Forgive me, my friends: because of reasons, I am terribly far behind on everything. Regular posting to resume soon.

A while back, in a post, I was ranting about something like p-values, or some such, and advocating one of my standard fixes in an off-hand manner. This prompted some intelligent questions from a reader at the Substack mirror (all posts are identical at the blog and SS).

Now an intelligent person would have made note of what the original post was. I can’t remember. However, we don’t need it for the questions.

Les Fleurs du mal

Some questions for you. I apologize if you already answered this in your book as I have not had a chance to read it.

[Quoting me:] “The fix? There is no fix. There is a slight repair we can make, by acknowledging the conditional nature of probability, that it is only epistemological, that at a minimum the only way to trust any statistical model is to observe that it has made skillful (a technical term), useful (a technical term) predictions of data never before seen or used in any way.”

Question 1: Which specific statistical practices do you believe should be deprecated?

Question 2: How do you propose quantifying uncertainty without using statistical models? Furthermore, how could uncertainty in real-world propositions be quantified without the use of models?

Question 3: What type of evidence or studies could alter your opinion that significant changes are needed in the way statistics is practiced?

Question 4: What specific recommendations do you have for how researchers can better convey that statistical findings are contingent on modeling assumptions?

All the answers are indeed in Uncertainty; here's a brief summary of them.

Q1: All parameter-centric analyses. Get rid of them. They’re outta here.

If probability doesn’t exist, and it does not, then the parameters—the knobs and dials—inside probability models exist even less. Yet concentration is everywhere, or nearly everywhere, on these non-existent little creations of our minds.

The key reason is historical: the math was easy, and could be done by hand. The modern reason is the mistaken idea that the parameters are real and have “true values” that cause things to happen.

What the customer wants is answers to questions like “If I do X, what happens to Y?” In some fields, like parts of physics and chemistry, we tell him. In any statistical field, we do not tell him, and instead substitute another question (without informing the customer), and start spouting things about parameters, which we hint are “really” X.

Solution: answer “If I do X, what happens to Y?”
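
To make that concrete, here is a toy sketch (the counts and the uniform-prior Beta-Binomial predictive are invented for illustration, one convenient choice among many): instead of reporting a parameter estimate and its p-value, report the probability of the observable Y itself, conditional on doing X and on the past data.

```python
# Toy sketch: answer "If I do X, what happens to Y?" as a probability of the
# observable Y itself, not as a statement about a model parameter.
# Counts below are invented for illustration.

def predictive_prob(successes, n):
    """Pr(next observation is a success | data), using a uniform Beta(1,1)
    prior, i.e. Laplace's rule of succession: (s + 1) / (n + 2)."""
    return (successes + 1) / (n + 2)

p_with_x = predictive_prob(successes=43, n=60)     # units where X was done
p_without_x = predictive_prob(successes=35, n=60)  # units where X was not done

print(f"Pr(Y happens | did X, data)     = {p_with_x:.2f}")
print(f"Pr(Y happens | did not X, data) = {p_without_x:.2f}")
```

That is an answer in the customer's own terms. Whether it deserves any trust is the business of Q2: check it against data never used to build it.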

Q2: I don’t. Not in formal applications. One can model; why not? But the answers should be put in terms of observables, as in Q1. And then tested, preferably by disinterested parties.
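
What “tested” might look like, as a minimal sketch (the forecasts, outcomes, and the choice of the Brier score with a naive base-rate baseline are all my invented illustration, not a prescription): score the model's probability forecasts against outcomes it never saw, and see whether it beats a forecast anyone could have made without the model.

```python
# Toy verification sketch: score probability forecasts against outcomes the
# model never saw, and compare with a naive base-rate forecast.
# All numbers are invented for illustration.

def brier(forecasts, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes."""
    return sum((f - y) ** 2 for f, y in zip(forecasts, outcomes)) / len(outcomes)

new_outcomes = [1, 0, 1, 1, 0, 1, 0, 1]                     # observed later
model_forecasts = [0.8, 0.3, 0.7, 0.9, 0.2, 0.6, 0.4, 0.7]  # issued in advance
base_rate = 0.6                                             # historical frequency (invented)
baseline_forecasts = [base_rate] * len(new_outcomes)

bs_model = brier(model_forecasts, new_outcomes)
bs_base = brier(baseline_forecasts, new_outcomes)
skill = 1 - bs_model / bs_base  # positive means the model beats the naive guess

print(f"Brier(model) = {bs_model:.3f}, Brier(baseline) = {bs_base:.3f}, skill = {skill:.2f}")
```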

Of course, in most real-world thinking we never, or rarely, formally quantify our uncertainties. And we don’t have to. These are still models, though. Models are everywhere and necessary to thinking. We test our everyday models, continuously, too. That’s why we get good at judging commonplace uncertainties.

Models: the chance I spill the coffee with this much in the cup while walking along this path; the chance I’ll need to buy this extra steak for the week, in case you-know-who drops by; the chance the boss will harass you if he sees you in the ante meridian.

On and on and on. All models, none quantified. You cannot think without them. Scientists merely make the process formal, which is no bad thing per se. It becomes bad when scientists commit the Deadly Sin of Reification. As in Q1.

Q3: All fields which primarily use statistics are bad, to a greater or lesser degree. The most harmfully bad are epidemiology and public health, because we have to suffer under the thumbs of midwit Experts who rely on their “research”.

Q4: I’ve given acres of advice along these lines, all boiling down to this: consider how you might be wrong.

Which advice cannot be taken. Not by academics, at any rate, who must publish or perish, who must bring in grant dollars or die. There is little time for introspection and self-doubt.

This, too, varies by field. And you have to understand something about Yours Truly. As a pathologist of bad and putrescent science, I have become jaded. There is good science out there. It hasn’t entirely ceased.


2 Comments

  1. Robin

    SSgt Briggs,

    Coming from an engineering background, I still do not know exactly what you mean by “parameter-based”. I’m assuming that by “observables” you mean things that are physically measurable or monitorable.

    When modelling a process with a measurable predictive result, an engineer might say that the parameters of the model are also “observables”. Hence the confusion.

    I’ve spent my career fighting against nonsense in my profession. It’s difficult because academics, who have based their careers on research founded on p-values, are not going to change their views. To do so would be a form of career self-immolation as far as they are concerned. I’ve met a few of these in my time.

    I have won a significant but relatively minor battle in the war for truth. It took from 1987 to 2008 to do it. But there are even more important ones still out there.

    For example, infrastructure throughout the world is designed to live a certain minimum life. The “model” for this is based upon Fick’s/Einstein’s diffusion equations.

    The idea that this model is applicable is complete horse-manure, but every major consulting firm has used it as the basis of its own model, and this ultimately determines the cost of the structure. Billions, perhaps trillions, have been wasted. The clients like it because it looks “scientific”.

    The model, for all its complexity (which engineers love), has at its core the assumption that the time to reach a given penetration depth scales as the square of that depth (equivalently, depth grows as the square root of time) – normal diffusion according to Fick/Einstein. But there are ample physical investigations confirming there is also sub- and super-diffusion going on. A very small error in that exponent, say 2.1 rather than 2.0, could lead to an error of 30 years. The theory of diffusion does not hold for engineered structures anyway. This model must be abandoned in its entirety. (A toy calculation of this exponent sensitivity is sketched after the comments.)

  2. bill_r

    For Q2: if you want a “quantification”, try randomization, subsampling, or oversampling (e.g. the bootstrap) of the actual quantity you’re interested in. If you don’t like random sampling, randomization and subsampling also work with systematic methods (shuffles, cyclic designs, factorials, BIBs, completely separating subgroups, etc.). That gives you an empirical distribution of your observable. (A minimal sketch of this appears just below.)
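
Here is a minimal sketch of the resampling idea bill_r describes, using the ordinary nonparametric bootstrap on made-up data (his systematic alternatives would replace the random resampling step):

```python
# Bootstrap sketch of bill_r's suggestion: resample the data to get an
# empirical distribution of the observable summary you actually care about.
# Data are invented for illustration.
import random

random.seed(1)
data = [2.3, 1.9, 3.1, 2.8, 2.2, 3.4, 1.7, 2.9, 2.5, 3.0]

def statistic(sample):
    """The observable summary of interest; here, simply the mean."""
    return sum(sample) / len(sample)

boot = sorted(
    statistic(random.choices(data, k=len(data)))  # resample with replacement
    for _ in range(5000)
)

lo, hi = boot[int(0.05 * len(boot))], boot[int(0.95 * len(boot))]
print(f"observed = {statistic(data):.2f}, central 90% of resampled values = ({lo:.2f}, {hi:.2f})")
```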
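
And Robin's point about exponent sensitivity can be illustrated with a back-of-the-envelope calculation. All numbers below are invented, so the resulting gap (about 26 years) only illustrates how a figure like his 30 years can arise: calibrate the penetration model from a short-term measurement, then extrapolate to the design cover depth under exponents of 2.0 and 2.1.

```python
# Toy illustration of Robin's exponent-sensitivity point (all numbers invented).
# Suppose penetration depth grows as d(t) = k * t**(1/n), so the time to reach
# depth D is t = (D / k)**n. Fickian diffusion has n = 2; sub- and
# super-diffusion shift the exponent.

d_measured = 5.0   # mm of penetration observed after one year (made up)
t_measured = 1.0   # years
cover = 50.0       # mm of design cover depth (made up)

def years_to_reach(depth, n):
    k = d_measured / t_measured ** (1 / n)  # calibrate k from the short-term point
    return (depth / k) ** n

for n in (2.0, 2.1):
    print(f"n = {n}: time to reach {cover:.0f} mm = {years_to_reach(cover, n):.0f} years")
# With these made-up numbers the two predictions differ by roughly 26 years.
```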
