It’s a doozy, this error of ours. So ubiquitous is it that it’s hardly noticeable. Yet it is sinking us into scientism and wild overconfidence.
Every time it appears, both the public and scientists themselves become a tiny bit more over-enamored of science, giving it more honor than it deserves. The effect of any one appearance of the error is small, scarcely noticeable. But when it is repeated ad nauseam the product is deadly to clear thinking.
Of ado, no more. Here’s an example: “conservatives demonstrate stronger attitudinal reactions to situations of threat and conflict. In contrast, liberals tend to seek out novelty and uncertainty.”
Did you see it? Maybe not. If you thought the corruption lay in the subject matter of the proposition itself, you were understandably wrong. The quote was taken from the peer-reviewed paper “Red Brain, Blue Brain: Evaluative Processes Differ in Democrats and Republicans” by Darren Schreiber and several others in PLOS One[1].
What you thought was the main error was instead yet another in a long and growing line of misguided, probably ideologically but unconsciously motivated attempts to demonstrate to the level of satisfaction required by progressive academics that conservatives are biologically different than they are.
Need a hint about the bigger error? Here’s another example, culled from the same paper: “Republicans and Democrats differ in the neural mechanisms activated while performing a risk-taking task.”
Have it yet? Not the content. After an incredible amount of statistical manipulation, such that we can’t really be sure of what we’re seeing, the authors discovered that slightly more registered Democrats had high (statistically derived) activity in their left posterior insula than did registered Republicans (groups which they later re-labeled as liberals and conservatives).
From this, I remind us, they concluded that Republicans and Democrats differed.
This is false. They did not differ; or, at least, not all of them did. Only just enough differed to (after scads of manipulation) provide a wee p-value. But because not all of them differed, and there is no reason to suppose that in new batches of registered party members all of them will differ either, the statement is false.
Nor did, as cited above, “conservatives demonstrate stronger attitudinal reactions to situations of threat and conflict” than liberals. Leaving aside the soaring ambiguity in measuring political attitude and the even greater hand waving in defining “situations of threat and conflict”, the statement is still false. It was only found that slightly more “conservatives” than “liberals” answered some questions one way rather than another.
You must have it by now. The error is Irresponsible Exaggeration, which leads inevitably to Gross Over-Certainty. It is a crude mistake, common among the untrained and ill-educated (reporters, etc.), and should be rare among scientists, but it increasingly isn’t, as our examples prove (here are many more).
It is now (near?) impossible to read any public report of research without this error—let us call it the Statistical Exaggeration Fallacy. Reporters are lazy, harried, or not intelligent enough to realize they are making the mistake. But it is surprising that it is never corrected by scientists.
Now, as proved here, the purpose of statistics is not to say anything about what happened in a particular experiment, but about what that experiment might mean in the future. The future must necessarily be less certain than the past, where the experiment lives (proved here). And not only that: it is a consequence of the crude statistical methods used by researchers that their results are even less certain than implied, even without the Statistical Exaggeration Fallacy (are all Republicans “conservatives”?).
I mean, relying on p-values already guarantees over-certainty, which is multiplied in the presence of the SEF, and multiplied again by over-extended definitions, like calling Republicans “conservatives” and conflating the answers on some questionnaire with some deep-seated and real psychological tendency.
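The over-certainty point can be made concrete with a small simulation. Below is an illustrative sketch in Python (standard library only; the numbers are invented for illustration and have nothing to do with the paper’s data): two groups of hypothetical subjects whose individual scores overlap almost entirely nonetheless yield a wee p-value, because the sample is large and the group averages differ by a sliver. Concluding from that p-value that “the groups differ” is the Statistical Exaggeration Fallacy in miniature.

```python
# Invented data for illustration only: two overlapping groups whose means
# differ by a tenth of a standard deviation. With enough subjects, even
# this sliver of a difference produces a "wee" p-value.
import math
import random

random.seed(42)

n = 5000  # hypothetical subjects per group
group_a = [random.gauss(0.0, 1.0) for _ in range(n)]
group_b = [random.gauss(0.1, 1.0) for _ in range(n)]  # mean shifted by 0.1 sd

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Welch's t statistic; with samples this large, the normal approximation
# to the t distribution is adequate for the two-sided p-value.
t = (mean(group_b) - mean(group_a)) / math.sqrt(var(group_a) / n + var(group_b) / n)
p = math.erfc(abs(t) / math.sqrt(2))  # two-sided, normal approximation

# How different are the individuals, really? The fraction of group-B scores
# above group A's mean is barely better than a coin flip: the groups mostly
# overlap, despite the tiny p-value.
overlap = sum(x > mean(group_a) for x in group_b) / n
print(f"t = {t:.2f}, p = {p:.2g}, fraction of B above A's mean = {overlap:.2f}")
```

The p-value comes out far below the conventional 0.05, yet roughly half of either group is indistinguishable from the other. The headline “Group B scores higher than Group A” would be true of the averages and false of most of the people.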
Your help needed
What I’d like you to do, sisters and brothers, when you have the time, is to note in the comments whenever you see an instance of the Statistical Exaggeration Fallacy. It is well to have a large, contemporaneous collection of these to prove my claim of its non-rarity.
Of its harmful effect, well, if it is not obvious to you, it will be after you read the examples.
[1] A curiosity of this journal: they put the Results and Discussion before the boring, who-really-needs-to-read-it Methods section, which appears at the bottom.