While Theoretical Statistics is (mainly) a decent albeit rather boring mathematical discipline (Probability Theory is much more exciting), so called Applied Statistics is in its big part a whore. Finding dependence (true or false) opens exciting financing opportunities and since the true dependence is a rare commodity many "scientists" investigate the false ones.
So says Victor Ivrii, a mathematical physicist at the University of Toronto. Ivrii was goaded, but gently, into making his remarks by reporter Joseph Brean of the National Post for Brean’s piece, “How one man got away with mass fraud by saying ‘trust me, it’s science.'”
Brean is joining in on the laughs we’re having after discovering that Diederik Stapel lied, cheated, and bamboozled his way through a slew of social psychology papers. Stapel got away with it so long because he had a keen awareness of what his audience hoped to see. His “findings” include things like advertising makes women feel bad about themselves, white men are homophobic, messiness induces racism, and so forth.
Stapel used statistics to “prove” theories which he and his colleagues hoped were true or that would play well with reporters anxious to write “Stunning new research shows…” The shock to the system after his shenanigans were discovered was so great that even the New York Times was forced to admit that the field of social psychology “badly needs to overhaul how it treats research results.”
Also common is a self-serving statistical sloppiness. In an analysis published this year, Dr. Wicherts and Marjan Bakker, also at the University of Amsterdam, searched a random sample of 281 psychology papers for statistical errors. They found that about half of the papers in high-end journals contained some statistical error, and that about 15 percent of all papers had at least one error that changed a reported finding — almost always in opposition to the authors’ hypothesis.
Stapel did statistics the easy way: he made them up. But that ploy is used (we hope) by only a minority. As the Times discovered, many others make mistakes or fall prey to the various misinterpretations which plague statistics, particularly frequentist, p-value-based statistics. Regular readers will know how easy it is for findings-mad, paper-crazy scientists to “prove” something using statistics.
Which brings us to Ivrii equating the great field of statistics to scientific sporting ladies. Is he right? Can we, by the proper application of money, get statistics to do whatever we want? (First one to quote Disraeli/Twain gets shot.)
As probative evidence, Brean quotes from Simmons, Nelson, and Simonsohn’s paper “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant.” The group says, “In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not.” Brean summarizes that “modern academic psychologists have so much flexibility with numbers that they can literally prove anything. False positivism, so to speak, has gone rogue.”
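One of the “undisclosed flexibilities” Simmons and company have in mind is optional stopping: peek at the p-value as the data accumulate and quit the moment it dips below .05. Here is a minimal simulation of that practice (a hypothetical sketch, not their code; the simple z-test with known unit variance is my simplification). Both groups are drawn from the very same distribution, so every “significant” result is false, yet peeking drives the false-positive rate well past the nominal 5 percent.

```python
import math
import random

def p_two_sample(xs, ys):
    """Two-sided p-value for a z-test of equal means,
    assuming known unit variance (a deliberate simplification)."""
    n = len(xs)
    z = (sum(xs) / n - sum(ys) / n) / math.sqrt(2.0 / n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def one_study(peek_every=10, n_max=100):
    """Collect data in batches; test after each batch and stop
    as soon as p < .05 -- the questionable research practice."""
    xs, ys = [], []
    while len(xs) < n_max:
        xs += [random.gauss(0, 1) for _ in range(peek_every)]
        ys += [random.gauss(0, 1) for _ in range(peek_every)]
        if p_two_sample(xs, ys) < 0.05:
            return True  # "significant" -- but the groups are identical
    return False

random.seed(1)
trials = 2000
fp = sum(one_study() for _ in range(trials)) / trials
print(f"false-positive rate with peeking: {fp:.1%}")
```

Ten looks at the same growing dataset, each at the .05 level, and the honest 1-in-20 error rate multiplies; no fabrication required.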
Critics point to the prevalence of data dredging, in which computers look for any effect in a massive pool of data, rather than testing a specific hypothesis. But another important factor is the role of the media in hyping counter-intuitive studies, coupled with the academic imperative of “publish or perish,” and the natural human bias toward positive findings — to show an effect rather than confirm its absence.
All too true. And then there’s the National Institute of Statistical Sciences’ Stanley Young and Alan Karr’s “Deming, data and observational studies.” Here’s their abstract:
“Any claim coming from an observational study is most likely to be wrong.” Startling, but true. Coffee causes pancreatic cancer. Type A personality causes heart attacks. Trans-fat is a killer. Women who eat breakfast cereal give birth to more boys. All these claims come from observational studies; yet when the studies are carefully examined, the claimed links appear to be incorrect. What is going wrong? Some have suggested that the scientific method is failing, that nature itself is playing tricks on us. But it is our way of studying nature that is broken and that urgently needs mending.
Part of what’s broken is statistics.
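The data dredging Brean mentions is trivially easy to demonstrate. In this sketch (an illustration I cooked up, using a normal approximation for the correlation test), an outcome and two hundred “predictors” are all pure noise, with no real relationship anywhere; a dredge across the lot still surfaces a handful of “significant” correlations, each one a publishable finding to the unscrupulous.

```python
import math
import random

def corr_pvalue(xs, ys):
    """Approximate two-sided p-value for zero correlation,
    via the normal approximation z = r * sqrt(n)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    r = sxy / math.sqrt(sxx * syy)
    z = r * math.sqrt(n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(2)
n, n_vars = 50, 200
outcome = [random.gauss(0, 1) for _ in range(n)]  # pure noise

# Dredge: test 200 noise "predictors" against the noise outcome.
hits = 0
for _ in range(n_vars):
    predictor = [random.gauss(0, 1) for _ in range(n)]
    if corr_pvalue(predictor, outcome) < 0.05:
        hits += 1

print(f"{hits} of {n_vars} noise variables came out 'significant'")
```

At the .05 level you expect about ten spurious hits in two hundred tries. Report only the winners, never mention the other hundred and ninety, and you have “coffee causes pancreatic cancer.”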
Brean’s article, incidentally, says something which is not true, “Science, at its most basic, is the effort to prove new ideas wrong.” Science is the effort to prove which ideas are right.
Thanks to Vincent Lee for suggesting Brean’s article.