From the mailbag, reader I.W. writes:
I represent social science (but of an a priori bent), and I recently got really hooked on frequentism. After all, Prof. Mises employed it as the only scientific idea of probability.
Then I saw your lecture on YouTube where you cast doubt on this whole statistical method, and I have to acknowledge I totally share your position.
For weeks I have been trying to discover the current status of good old frequentism and how it relates to other concepts of probability.
My ultimate aim is to work out an idea of probability that would tally nicely with social science and account for the permeating uncertainty there (as a side effect, I would like to shed some light on the problem of risk and insurance, which Mises also discusses).
I do realize this letter is a bit chaotic but I was carried away by the lecture 🙂
Would you please refer me to the reading you mentioned at the end thereof?
I would most appreciate your help.
With kind regards,
Well to the side of Truth and Beauty, I.W.!
A reminder to all. Please visit the Uncertainty book page here. Also, be sure to buy a copy of Uncertainty, too. Surprisingly, some people who make a living doing machine learning, AI, probability, statistics, and scientific modeling do not yet own a copy! It is a great mystery why not.
On the book page are a collection of articles where many fundamentals are discussed. There are also some reviews of Uncertainty of interest.
Next is the Class — Applied Statistics category tag. This is a collection of practical posts, some of which have R code so you can follow along. I mean to continue these, and have even done the code—but haven’t yet written the articles. Writing takes vastly more time than mere lectures.
Now as to frequentism, it is philosophically wrong, but it’s also easy to see why it’s adopted. It is philosophically wrong because of its insistence on actual infinities of (only) observable “events”. Consider that no probability can be known until the End of Time. Frequentism is defined only at the limit. Before the limit is only darkness.
At first it seems odd that people so readily believe in frequentism, since no actual infinity of events has ever been observed, and thus no probabilities are known. The appeal is explained by noticing that no one remembers the actual mathematical definition, and everyone acts as if probability is epistemic (some might say “logical”). Which it is: epistemic, that is.
It is true that once one assumes, based on subjective whim, what a frequentist probability is, then one can do lots of calculations. Think, for instance, of a binomial. Assume the unobservable-in-finite-time probability p is some fixed value, and it’s off to the slide rules!
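To see how freely the calculations run once p is declared fixed, here is a minimal sketch (the value p = 0.3, the sample numbers, and the function name are illustrative, not from the post):

```python
from math import comb

def binomial_pmf(k, n, p):
    """Probability of exactly k successes in n trials, given an assumed fixed p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Assume, by fiat, that p = 0.3; the arithmetic then flows without complaint,
# though the assumed p could never be verified in finite time.
print(round(binomial_pmf(2, 10, 0.3), 4))  # 0.2335
```

The ease of the calculation is the whole appeal: everything downstream of the assumption is mechanical.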
You might assume that you can estimate the value of this mysterious p based on some data. No.
Problem #1: the empiricist bias. Under frequentism, the only probabilities that can (in potentia) exist are empirical. Imagine limiting logic (or metaphysics) to only empirical examples!
Problem #2: Either the point estimate is absolutely certain, which nobody claims, or the confidence interval is a reasonable guess of the uncertainty of the estimate, which is false. And known and designed to be false under the theory! The only, the sole, the lone, the one interpretation of the confidence interval allowed under frequentist theory is that the parameter is in the interval or it isn’t.
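The point can be seen in a toy simulation (all numbers hypothetical): any single realized interval either contains the “true” parameter or it does not, full stop; the “95%” is a long-run property of the procedure, never a statement about this interval in front of you.

```python
import random
import statistics

random.seed(1)
true_mu = 10.0    # the "true" parameter, known only because we built the simulation
n, z = 30, 1.96   # sample size; z for a nominal 95% interval

hits, trials = 0, 2000
for _ in range(trials):
    sample = [random.gauss(true_mu, 2.0) for _ in range(n)]
    xbar = statistics.fmean(sample)
    se = statistics.stdev(sample) / n**0.5
    lo, hi = xbar - z * se, xbar + z * se
    # This particular interval contains true_mu or it doesn't; under the
    # theory, that dichotomy is all that may be said about it.
    hits += (lo <= true_mu <= hi)

print(f"Long-run coverage ~ {hits / trials:.2f}")  # near 0.95, but only "at the limit"
```

Note that outside a simulation you never know true_mu, so you can never check whether any given interval was one of the lucky ones.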
So why are estimates and confidence intervals taken to be of some value? Because, again, in practice, people treat them as if they were epistemic. Which they are. They are more-or-less good approximations to the right answers, which are found by treating everything as epistemic from the beginning. Strict frequentist thinking never survives beyond textbooks.
Of course, actual frequencies are of great use. They are informative. But observations are not probabilities. Frequencies inform probabilities.
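One simple sketch of how a frequency can inform an epistemic probability, without being one, is Laplace’s rule of succession (the counts below are illustrative):

```python
def prob_next_success(successes, trials):
    """Laplace's rule of succession: an epistemic probability for the next
    outcome, informed by (but not identical to) the observed frequency."""
    return (successes + 1) / (trials + 2)

# The raw observed frequency after 7 successes in 10 trials is 0.7, but the
# epistemic probability of the next success is shrunk slightly toward 1/2:
print(prob_next_success(7, 10))  # 8/12, about 0.667

# With no observations at all, the frequency is undefined, yet the
# epistemic probability is a perfectly sensible 1/2:
print(prob_next_success(0, 0))  # 0.5
```

The observed frequency enters as evidence; the probability that comes out is a statement about our knowledge, not a property of the coin.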