FDA Goes Bayes: It Should Have Gone Probability


The FDA should have gone probability. But they came close: they went Bayes. So announced FDA boss Marty Makary last week. Here’s the document “Use of Bayesian Methodology in Clinical Trials of Drug and Biological Products”.

The good news is that the shift allows the FDA to move away from the dreaded p-value, every instance of which is a formal fallacy for the uses to which it is put; and in doing so, the FDA will also allow the use of parameterized Bayesian methods.

I have written of the evils of p-values and hypothesis testing, all birthed from the (false) frequentist theory of probability, so often that I cannot bear to write another word. For those who want more, see this, this, and this.

Parameterized Bayes is a certain and definite improvement over this. There is joy, in modest amounts, in knowing we might, from time to time, escape frequentist hypothesis testing. But that still leaves us with Bayesian hypothesis testing. Which is better. But not much better. It is at least coherent, which p-values and hypothesis testing are not.

Bayesian hypothesis testing still forms “null” hypotheses, still states them in terms of non-existent, non-observable ad hoc model parameters, still therefore generates massive over-certainty, and still doesn’t answer the questions we want answered.

Here, I remind us, is probability, and what I wish the FDA had adopted:

Pr(What we want to know | All evidence assumed, including tacit and implicit knowledge).

That’s it. I can’t think of any simpler way to put it, because that’s as plain as can be. It reads “The probability ‘What we want to know’ is true given ‘All evidence assumed, including tacit and implicit knowledge’.” That is all of probability. It is no more philosophically difficult than that.
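To make the notation concrete, here is a minimal sketch (my own toy illustration, not anything from the FDA document), assuming a hypothetical two-arm trial and a simple beta-binomial model. The made-up counts, the flat Beta(1, 1) prior, and the model itself all sit on the right-hand side of the bar as evidence:

```python
# A toy sketch, not the FDA's method: state the answer directly as
# Pr(what we want to know | evidence), here via a beta-binomial
# posterior predictive with a flat Beta(1, 1) prior.

def pr_next_patient_improves(improved, total, prior_a=1.0, prior_b=1.0):
    """Pr(next patient improves | observed counts, assumed model and prior)."""
    return (prior_a + improved) / (prior_a + prior_b + total)

# Hypothetical evidence: 40 of 60 improved on the drug, 25 of 60 on placebo.
print("Pr(next drug patient improves    | evidence) =",
      round(pr_next_patient_improves(40, 60), 3))   # 0.661
print("Pr(next placebo patient improves | evidence) =",
      round(pr_next_patient_improves(25, 60), 3))   # 0.419
```

Both statements are about observables a doctor or patient can check, not about unobservable parameters or “nulls”.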

Hypothesis testing, frequentist or Bayesian, gives weird statements in terms of “nulls”, avoids answering direct questions, yet still assumes the direct questions were answered. Bizarre!

Why wouldn’t you want probability instead?

That’s a question I haven’t satisfactorily answered. Part of the answer, I think, is that probability doesn’t sound hard enough, and academics (and we) love to think our science comes from difficult and arcane formulas. Part is that we have not abandoned our pagan notions that probability is alive, that chance is a real thing, that Lady Fortuna still retains her powers. You still hear many say “Nature generates ‘What we want to know’ by this-and-such probability distribution,” which is pure magical thinking.

Part of the answer is habit. Bureaucracies react to new ideas like Harvard students asked to read a book. Bayes, for instance, became an active research target largely in the 1980s, and it has taken this long for the government to recognize it. The same goes for scientists, because all scientists are first trained in flawed frequentism, then introduced to Bayes. By then, for many, it’s too late. They can’t edge out the pagan belief, which frequentism insists on, that probability is real and not in the mind.

An old objection to Bayes, which was recycled after the FDA’s announcement, is that Bayes is a slave to “priors” and subjectivity, and isn’t as objective as frequentist hypothesis testing (which is always a formal fallacy). This objection, while true, is silly. Of course the answers change when “priors” change! But so do answers change when the ad hoc models change! And answers change when the data change; yes, even one observation changes the answer.
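Continuing the hypothetical beta-binomial sketch from above (again, my own toy numbers, not the FDA’s): change the assumed prior, or observe one more patient, and the stated probability changes, exactly as it should.

```python
# Toy continuation of the beta-binomial sketch: the answer moves when the
# assumed prior changes or when a single new observation arrives.
def pr_improve(improved, total, a=1.0, b=1.0):
    return (a + improved) / (a + b + total)

print(round(pr_improve(40, 60), 3))            # 0.661  flat Beta(1, 1) prior
print(round(pr_improve(40, 60, a=2, b=2), 3))  # 0.656  a different prior
print(round(pr_improve(41, 61), 3))            # 0.667  one more patient, who improved
```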

Frequentism is no more objective than Bayes. Both are equally subjective, and subjectivity cannot be removed. It is impossible—as in not possible.

Somebody is subjectively deciding “What we want to know”, and somebody is subjectively deciding “These data, and not those, will form our evidence”, and somebody is subjectively deciding “We will use this ad hoc model”. The Bayesian only goes one step further by putting probabilities on the same parameters the frequentist uses. Alas, both camps fall prey to the Deadly Sin of Reification, because they come to believe their models are Reality (because probability to them is real).

But the probabilist goes the one final step and tells you this:

Pr(What we want to know | All evidence assumed, including tacit and implicit knowledge).

It’s all so open. It is now obvious that if you change the evidence, you change the probability. This is not a bug, it is a feature. It is how we naturally think. The model, its parameters, the data we choose to observe, are all part of our evidence. Change them, and we change the probability. Then state that probability, which anybody can understand. (Nobody can understand a hypothesis test.)

Simple as that!

Would you like to know more?

(After I wrote this, I saw that a colleague has similar criticisms, advocating something called conformal prediction, which in the end is just probability, too. I’ll go over these in Class when we do model goodness, calibration, and all that.)
