Obviously the one and only answer is scientism. But since scarcely anybody realizes how steeped they and the culture are in scidolatory, the first battle is convincing them of such. Only then can alternatives be outlined. Too much work for 600 words.
So let’s pick off a low-hanging over-ripe worm-infested fruit: hypothesis tests. Out they go, to join phlogiston, NPR, ketchup on hotdogs, pitchers batting, and other wispy intellectual ideas destructive of sanity and souls.
What are hypothesis tests? Primarily a way to award gold stickers to cherished beliefs. Users are allowed to say, “Not only is what I desire important, it is statistically significant.” The appendage is meant to be a conversation stopper. “Oh, well, if it’s statistically significant, how can I object?”
Now there are all kinds of theoretical niceties about these creatures, lists of dos and don'ts, stern cautions issued by statisticians without number, distinctions between correlation and causation…but nobody ever remembers them. Not in the breathless chase for wee p-values. And anyway, most of these theoretical considerations have nothing to do with what anybody wants to know. Hypothesis tests don’t answer the questions people ask, and when they do it’s because hypothesis tests have fooled them into asking the wrong questions.
Sociologist wants to know if sex has anything to do with the drinking habits of the “underrepresented” group o’ the hour. Of course it does: all history shows that men and women are different. The real question is how much does knowing a person is male rather than female change the uncertainty in that person’s drinking habits? It is not whether, but how much.
Nevertheless, a hypothesis test can say “no effect at all”, which is wrong a priori for most things. Consider that it is as rare as a believer in Harvard’s Department of Theology that anybody actually believes a “null” hypothesis.
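The “how much” question can be answered directly with a predictive probability rather than a p-value. A minimal sketch, assuming simple normal models for each group; the drink counts below are invented purely for illustration:

```python
import math

# Hypothetical drinks-per-week, invented for illustration only
men   = [4, 7, 2, 9, 5, 6, 3, 8]
women = [3, 1, 4, 2, 5, 2, 3, 4]

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

# Under a normal model for each group, the probability that a randomly
# chosen man reports more drinks than a randomly chosen woman is
# Phi((mu_m - mu_w) / sqrt(s_m^2 + s_w^2)).
mu_m, mu_w = mean(men), mean(women)
s_m, s_w = sd(men), sd(women)
z = (mu_m - mu_w) / math.sqrt(s_m ** 2 + s_w ** 2)
p_man_more = 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(f"P(man drinks more than woman) = {p_man_more:.2f}")
```

That single number answers the question actually asked: knowing sex shifts the uncertainty by this much, no metaphysical verdict of “significance” required.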
Climatologist wants to know whether a temperature which he has been measuring yearly has indicated a trend. Now all he has to do is look. Has the temperature gone up since the start? Then it has gone up. Has it gone down? Then it has gone down. Has it wiggled to and fro? Then it has…skip it. Because that’s too simple for Science.
So the climatologist unnecessarily complicates the situation—which, incidentally, is the working definition for many sciences—by fitting some arbitrary (usually straight line) model to the temperature and calls on the hypothesis test to tell him what he has refused to believe of his eyes.
His practice might make sense if the climatologist thought the physics must indicate a linear change in temperature, or that he was going to use his model to predict future temperatures. As it is, he does neither. He only asks if temperature has “significantly” changed—an entirely man-made metaphysical status. Useless.
Useless because of the host of things ignored. Why this start date? Why this end date? Why this data and not other? Would a different model (or different test statistic) indicate non-significance?
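The start-date complaint is easy to demonstrate. A toy sketch, with invented “temperatures” (a wiggly series with no built-in trend), showing that the straight-line trend test returns a different p-value depending on the arbitrary choice of window:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
years = np.arange(1950, 2020)
# Invented series: a slow wiggle plus noise, with no trend built in
temps = 15 + np.sin(years / 8.0) + rng.normal(0, 0.3, len(years))

def trend_pvalue(start_year):
    """P-value for the slope of a straight line fit from start_year on."""
    mask = years >= start_year
    return stats.linregress(years[mask], temps[mask]).pvalue

p_full  = trend_pvalue(1950)  # whole record
p_short = trend_pvalue(1995)  # same data, later start date

print(f"p-value from 1950: {p_full:.3f}")
print(f"p-value from 1995: {p_short:.3f}")
```

Same thermometer, same wiggle; move the start date and the oracle changes its answer.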
It’s the same any time hypothesis tests are used. There are scores of external, qualifying propositions (like start and stop dates) which are scantly recalled when making the test, and immediately purged from memory after the p-value is revealed. That thing, that absurd thing, becomes all that is remembered.
Well, on and on. What to do instead? Simple. Abandon unnecessary, arbitrary quantification. Use (in science) probability models to express uncertainty in observables and not for any other purpose.
Odds of happening? Same as the New York Times finding fault with our dear leader.
Update From Steve Brookline below: