Well, it banned all p-values, wee or not. And confidence intervals! The journal Basic and Applied Social Psychology, that is. Specifically, they axed the “null hypothesis significance testing procedure”.
I’m still reeling. Here are excerpts of the Q&A the journal wrote to accompany the announcement.
Question 2. What about other types of inferential statistics such as confidence intervals or Bayesian methods?
Answer to Question 2. Confidence intervals suffer from an inverse inference problem that is not very different from that suffered by the NHSTP. In the NHSTP, the problem is in traversing the distance from the probability of the finding, given the null hypothesis, to the probability of the null hypothesis, given the finding. Regarding confidence intervals, the problem is that, for example, a 95% confidence interval does not indicate that the parameter of interest has a 95% probability of being within the interval. Rather, it means merely that if an infinite number of samples were taken and confidence intervals computed, 95% of the confidence intervals would capture the population parameter. Analogous to how the NHSTP fails to provide the probability of the null hypothesis, which is needed to provide a strong case for rejecting it, confidence intervals do not provide a strong case for concluding that the population parameter of interest is likely to be within the stated interval. Therefore, confidence intervals also are banned from BASP.
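The journal's coverage definition is easy to check by simulation. Here's a quick sketch (mine, with invented parameters, not from the journal): draw many samples from a normal distribution with known sigma, build the usual 95% interval around each sample mean, and count how often the true mean lands inside.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 10.0, 2.0, 30, 10_000
z = 1.96  # approximate 97.5th percentile of the standard normal

covered = 0
for _ in range(reps):
    sample = rng.normal(mu, sigma, n)
    half = z * sigma / np.sqrt(n)  # known-sigma interval, for simplicity
    xbar = sample.mean()
    covered += (xbar - half <= mu <= xbar + half)

print(covered / reps)  # close to 0.95 over repeated sampling
```

About 95% of the intervals capture mu across the repetitions, but no single computed interval carries a 95% probability of containing it, which is exactly the point the journal is making.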
Holy moly! This is almost exactly right about p-values. The minor flaw is not pointing out that there is no unique p-value for a fixed set of data. There are many, and researchers can pick whichever they like. And did you see what they said about confidence intervals? Wowee! That’s right!
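The "many p-values" point is easy to demonstrate: for one fixed data set, different but equally defensible test statistics yield different p-values. A sketch with invented data, using plain permutation tests so the mechanics are visible:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 20)  # invented "group A"
b = rng.normal(0.8, 1.0, 20)  # invented "group B"

def perm_p(x, y, stat, reps=5000, rng=rng):
    """Two-sided permutation p-value for the difference in `stat`."""
    observed = abs(stat(x) - stat(y))
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(reps):
        perm = rng.permutation(pooled)
        count += abs(stat(perm[:len(x)]) - stat(perm[len(x):])) >= observed
    return count / reps

# Same data, two defensible test statistics, two different p-values
p_mean = perm_p(a, b, np.mean)
p_median = perm_p(a, b, np.median)
print(p_mean, p_median)
```

Both are textbook-legitimate tests of "no difference between the groups", and nothing in the data dictates which one the researcher must report.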
…The usual problem with Bayesian procedures is that they depend on some sort of Laplacian assumption to generate numbers where none exist. The Laplacian assumption is that when in a state of ignorance, the researcher should assign an equal probability to each possibility…However, there have been Bayesian proposals that at least somewhat circumvent the Laplacian assumption, and there might even be cases where there are strong grounds for assuming that the numbers really are there…thus Bayesian procedures are neither required nor banned from BASP.
Point one: they sure love to say Laplacian assumption, don’t they? Try it yourself! Point two: they’re a little off here. But they were just following what theorists have said.
If you are in a “state of ignorance” you cannot “assign an equal probability to each possibility”, whatever that means, because why? Because you are in a state of ignorance! If I ask you how much money George Washington had in his pocket the day he died, your only proper response, unless you be an in-the-know historian, is “I don’t know.” That neat phrase sums up your probabilistic state of knowledge. You don’t even know what each “possibility” is!
No: assigning equal probabilities logically implies you have a very definite state of knowledge. And if you really do have that state of knowledge, then you must assign equal probabilities. If you have another state of knowledge, you must assign probabilities based on that.
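One way to make this precise is the maximum-entropy view (my gloss, not the journal's): knowing that there are exactly six possible outcomes and nothing that distinguishes them is itself definite information, and the equal-probability assignment is the unique one that adds nothing beyond it. A quick numeric check that no other distribution on six outcomes has higher Shannon entropy:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in nats; zero-probability terms contribute 0."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return -(nz * np.log(nz)).sum()

uniform = np.full(6, 1 / 6)
rng = np.random.default_rng(2)
# 10,000 random alternative distributions over the same six outcomes
others = rng.dirichlet(np.ones(6), size=10_000)

h_uniform = shannon_entropy(uniform)
h_best_other = max(shannon_entropy(q) for q in others)
print(h_uniform, h_best_other)  # uniform wins: log(6) is the maximum
```

Change the state of knowledge (say, you also learn the long-run average outcome) and the entropy-maximizing assignment changes with it, which is the "if you have another state of knowledge, you must assign probabilities based on that" half of the argument.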
The real problem is lazy researchers hoping statistical procedures will do all the work for them—and over-promising statisticians who convince these researchers they can deliver.
Laplacian assumption. I just had to say it.
Stick with this, it’s worth it.
Question 3. Are any inferential statistical procedures required?
Answer to Question 3. No…We also encourage the presentation of frequency or distributional data when this is feasible. Finally, we encourage the use of larger sample sizes than is typical in much psychology research…
Amen! Many, many, and even many times you don’t need statistical procedures. You just look at your data. How many in this group vs. that group. Just count! Why does the difference exist? Who knows? Not statistics, that’s for sure. Believing wee p-values proved causation was the biggest fallacy running. We don’t need statistical models to tell us what happened. The data can do that alone.
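In that spirit, a minimal sketch (invented counts, not real data) of the frequency presentation the journal encourages: no test statistic, just tabulation.

```python
from collections import Counter

# Invented outcomes for two groups; a real study would substitute its own data
treatment = ["improved"] * 18 + ["same"] * 7 + ["worse"] * 5
control = ["improved"] * 9 + ["same"] * 12 + ["worse"] * 9

for name, group in [("treatment", treatment), ("control", control)]:
    counts = Counter(group)
    print(name, dict(counts), f"n={len(group)}")
```

The counts say what happened in these samples. Why the difference exists is a separate question, and no p-value was ever going to answer it.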
We only need models to tell us what might happen (in the future).
…we believe that the p < .05 bar is too easy to pass and sometimes serves as an excuse for lower quality research.
Regular readers will know how lachrymose I am, so they won’t be surprised I cried tears of joy when I read that.
We hope and anticipate that banning the NHSTP will have the effect of increasing the quality of submitted manuscripts by liberating authors from the stultified structure of NHSTP thinking thereby eliminating an important obstacle to creative thinking. The NHSTP has dominated psychology for decades; we hope that by instituting the first NHSTP ban, we demonstrate that psychology does not need the crutch of the NHSTP, and that other journals follow suit.
Stultifying structure of hypothesis testing! I was blubbering by this point.
Thanks to the multitude of readers who pointed me to this story.