Many think confidence intervals are an improvement over P-values. Not really, no. There is no one alive, or dead, who interprets a confidence interval as theory demands. All (as in all) are Bayesians here. Which means you might as well go all the way and compute Pr(What I want to know | All evidence considered), and ignore all testing and parameter-based approaches.
Video
Links: YouTube * Twitter – X * Rumble * Bitchute * Class Page * Jaynes Book * Uncertainty
HOMEWORK: Given below; see end of lecture.
Lecture
This is an excerpt from Chapter 9 of Uncertainty.
Lastly, because confidence intervals are sometimes seen as the fix or alternative to p-values, let me prove to you nobody ever gets these curious creations correct. According to frequentist theory, the definition of a confidence interval (for a parameter) is this. If an experiment is repeated an infinite number of times, each one “identical” to the last except for “random” differences (ignore that this is meaningless), and for each experiment a confidence interval is calculated, then (say) 95% of these intervals will overlap or “cover” the “true” value of the parameter. Since nobody ever does an infinite number of experiments, and all we have in front of us is the data from this experiment, what can we say about the lone confidence interval we have? Only this: that this interval covers the “true” value of the parameter or it doesn’t. And that is a tautology, meaning it is always true no matter what, and, as we learned earlier, tautologies add no information to any problem.
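The frequentist definition above is easy to see in simulation, where (unlike in real life) we know the “true” parameter because we chose it. A minimal sketch, with a made-up true mean of 10 and 10,000 repeated “experiments”: about 95% of the intervals cover the true value, yet any single interval either covers it or it doesn’t.

```python
import random
import statistics

random.seed(42)
TRUE_MU = 10.0        # the "true" parameter: known here only because we simulate
SIGMA = 2.0           # spread of each measurement
N = 25                # sample size per "experiment"
REPS = 10_000         # number of repeated experiments
Z = 1.96              # normal-approximation multiplier for a 95% interval

covered = 0
for _ in range(REPS):
    sample = [random.gauss(TRUE_MU, SIGMA) for _ in range(N)]
    m = statistics.fmean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    lo, hi = m - Z * se, m + Z * se      # one confidence interval
    covered += lo <= TRUE_MU <= hi       # does THIS interval cover the truth?

print(f"coverage over {REPS} repetitions: {covered / REPS:.3f}")
```

The long-run coverage hovers near 0.95, but note the point of the passage: the statement is about the infinite collection, and says nothing about the probability that your one interval, from your one experiment, covers the truth.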
We cannot say—it is forbidden in frequentist theory—that this lone interval covers with such-and-such a probability. And even if we manage to repeat the experiment some finite number of times, and collect confidence intervals from each, we cannot use them to infer a probability. Only an infinite collection, or rather one in the limit, will do. If we ever stop short and use the finite collection to say something about the parameter, we reason in a logical and not frequentist fashion. And if we use the length of an interval to infer something about the parameter, we also reason in a logical and not frequentist fashion. Since the majority of confidence intervals in use imply a “flat” (improper, usually) prior on the parameter of interest, all working frequentists are actually closet Bayesians. Now all we have to do is take the short step from Bayes to logic, and probability will be on firm ground everywhere.
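The closet-Bayesian claim can be checked numerically in the simplest textbook case: a normal mean with known sigma. Under a flat (improper) prior, the posterior for the mean is normal centered at the sample mean, so the 95% credible interval lands on exactly the same endpoints as the frequentist 95% confidence interval. A sketch, with invented data:

```python
import random
from statistics import NormalDist, fmean

random.seed(1)
SIGMA, N = 3.0, 40                      # sigma treated as known, for simplicity
data = [random.gauss(5.0, SIGMA) for _ in range(N)]
xbar = fmean(data)
se = SIGMA / N ** 0.5

# Frequentist 95% confidence interval: xbar +/- z * se
z = NormalDist().inv_cdf(0.975)
ci = (xbar - z * se, xbar + z * se)

# Bayesian posterior under a flat prior on mu: mu | data ~ Normal(xbar, se^2)
post = NormalDist(mu=xbar, sigma=se)
credible = (post.inv_cdf(0.025), post.inv_cdf(0.975))

gap = max(abs(a - b) for a, b in zip(ci, credible))
print(f"largest endpoint difference: {gap:.2e}")  # ~0: the intervals coincide
```

The numbers agree to floating-point precision. The interpretations do not: only the Bayesian reading licenses a probability statement about the parameter given this one data set, which is how everyone talks about the interval anyway.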
Subscribe or donate to support this site and its wholly independent host using credit card click here. Or use PayPal. Or use the paid subscription at Substack. Cash App: \$WilliamMBriggs. For Zelle, use my email: matt@wmbriggs.com, and please include yours so I know who to thank. BUY ME A COFFEE.
An idea for a “what are the odds” post: Jeanne Calment’s longevity challenged, partially by statistical modelling.
Dear Briggs – you keep quoting “Probability of Data Not Seen GIVEN Hypothesis is False”. You know of which you speak, so it all makes sense to you…
As someone who has never calculated a P value, I wonder if you could do a “P value and why it’s wrong” 101 lecture, to explain to me and others (assuming a basic level of wit and maths). I’m thinking this would need to be a Tell ’em what you are going to say, Tell ’em, Then summarise, type thing. With the argument explained step by step, without digression or hand-waving. As an in-between lecture, outside regular Class.
Maybe it might go viral, like Feynman’s explanation of the Scientific Method. It would certainly be useful, something that one could point folks to :-)
gareth,
Excellent point. See this week’s class.