Here are two fun fallacies, excerpted from a paper I published a couple of years ago in the Journal of American Physicians and Surgeons. For more on this sort of thing, buy this book before they run out: Uncertainty: The Soul of Modeling, Probability & Statistics.
The Everyone Else Said It Was True Fallacy
“Radon is one of the most serious environmental health risks that we face,” said University of Minnesota professor Bill Angell. He explains that the colorless, odorless radioactive gas forms naturally in the ground, but when it enters your home it becomes a serious problem.
“The risk of dying of lung cancer because of radon in your home is one out of 50,” said Angell, “So it’s an incredibly big risk.”
Angell’s comments were based on published studies such as a Danish cohort study by Bräuner et al., whose abstract read: “We find a positive association between radon and lung cancer risk consistent with previous studies…. [T]he results of the present prospective cohort study are fully compatible with an association between residential radon and risk for lung cancer as detected in three previous meta-analyses and provide important evidence at the low end of the residential dose curve.”
In that study, the authors measured actual exposure and outcomes of about 57,000 Danes and found the “adjusted [risk] for lung cancer was 1.04 (95% CI: 0.69–1.56) in association with a 100 Bq/m^3 higher radon concentration and 1.67 (95% CI: 0.69–4.04) among non-smokers.” Since both confidence intervals include 1, the classical interpretation is that radon is not significantly associated with lung cancer. In fact, the authors said as much: “The role of chance cannot be excluded as these associations were not statistically significant.”
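To see how the classical verdict falls out of the reported numbers, here is a minimal sketch, assuming (as is standard for risk ratios) that the reported interval is a Wald interval symmetric on the log scale. It recovers the implied standard error and p-value for the overall 1.04 (0.69–1.56) estimate:

```python
import math

# Reported by Brauner et al.: adjusted risk ratio 1.04,
# 95% CI 0.69-1.56, per 100 Bq/m^3 of residential radon.
rr, lo, hi = 1.04, 0.69, 1.56

# On the log scale a Wald interval is symmetric, so its width
# gives back the standard error: SE = (log(hi) - log(lo)) / (2 * 1.96).
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)

# Classical two-sided test of "no association" (risk ratio = 1).
z = math.log(rr) / se
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"SE(log RR) = {se:.3f}, z = {z:.2f}, p = {p:.2f}")
# SE(log RR) = 0.208, z = 0.19, p = 0.85
```

A p-value of about 0.85 is as far from the magic 0.05 as study results get, which is why even the authors conceded that chance could not be excluded.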
The finding of no effect was contrary to expectations, so the authors said, “In the present study, a number of risk factors for lung cancer were less prevalent among participants living at the higher radon concentrations, including…low fruit intake, risk occupation and traffic-related air pollution. This would result in an underestimation of the association between radon and lung cancer risk in our study.”
These words were necessary to suggest that radon might still cause lung cancer even in the face of strong evidence that it did not. The authors felt that something had to explain the non-effect, because they were unwilling to conceive that radon (at the stated levels) might be harmless to lungs. So in their explanation they discarded the mass of evidence they had collected and surmised that radon was just as deadly as commonly thought.
The Everyone Else Said It Was True Fallacy is: Even though your results are the exact opposite of your belief, explain them away, then state your belief.
The Statistics Aren’t What You Think They Are Fallacy
Here are two headlines from the Daily Mail, the popular English newspaper. “Bad news for chocoholics: Dark chocolate isn’t so healthy for you after all,” ran a Jan 24, 2012, article explaining that chocolate doesn’t do much for the heart. Then just three months later, on Apr 24, another headline claimed: “Eating dark chocolate is good for your heart.” The two headlines drew on different peer-reviewed medical studies which, using p-values as evidentiary markers, concluded that chocolate both was and was not good for the heart.
Two more headlines from this newspaper read: “Ignore all that hype about antioxidant supplements: Why daily vitamin pills can INCREASE your risk of disease” (May 21, 2012), and “The vitamin pills that actually work! How some supplements can work wonders for certain ailments” (May 27). Some of the ailments were the same in both stories. These were also based on peer-reviewed studies, using p-values to “prove” their contentions.
On Apr 11, 2011, a headline announced: “Women who drink four cups of coffee a day face higher risk of incontinence.” Then, a year later, on Apr 27, 2012, Thomson Reuters (the Daily Mail did not cover the follow-up study) told readers: “Caffeine not tied to worsening urinary incontinence.” The underlying story was the same.
On Jul 29, 2004, a headline on OBGYN.net read: “Pomegranates shown to be effective for menopausal symptoms.” It took eight years for the Daily Mail to report, on Jan 24, 2012, that “Pomegranate seed oil ‘no better than a placebo’ at easing hot flashes” (a menopausal symptom). Both reports were based on peer-reviewed studies that used p-values as evidence.
The Statistics Aren’t What You Think They Are Fallacy is also known as the P-values Aren’t Proof Fallacy. Researchers want to know the probability that some theory is true given the evidence they have collected. Such theories are then often used in developing medical practice guidelines, particularly when they fit expectations.
But p-values, the measures upon which most studies rely, do not give evidence that any theory is true, even though everybody, including those who know better, takes them as proof of a theory whenever they fall below the magic value of 0.05. Indeed, the actual definition of a p-value, the probability of seeing a test statistic at least as extreme as the one observed, assuming the null hypothesis is true, is so convoluted that nobody ever remembers it; all that is recalled is that p-values should be small.
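That definition can be watched in action. Here is a minimal simulation sketch, with invented numbers, of what a p-value actually measures: how often chance alone, under the assumption of no effect whatsoever, produces a result at least as extreme as the one observed.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical study: two groups of 50, observed difference in
# group means of 0.5. (Invented numbers, purely for illustration.)
n, observed_diff = 50, 0.5

# The definition in action: assume the null is TRUE (no effect; both
# groups come from the same distribution) and count how often chance
# alone yields a difference at least as extreme as the one observed.
sims = 100_000
a = rng.normal(0.0, 1.0, (sims, n)).mean(axis=1)
b = rng.normal(0.0, 1.0, (sims, n)).mean(axis=1)
p_value = np.mean(np.abs(a - b) >= observed_diff)

print(f"two-sided p-value ~ {p_value:.3f}")  # ~0.012: "significant"!
# This number is a statement about data that were never observed,
# calculated under a hypothesis assumed true for the exercise. It says
# nothing about Pr(theory is true | evidence).
```

The simulated 0.012 would be trumpeted as proof of the theory; yet the question researchers actually care about, the probability the theory is true given the evidence, is never computed, nor even addressed.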