Nonstatisticians Often Screw Up Statistics

Title stolen from the article of the same name by Leland Teschler in the trade journal Machine Design.

Update Statisticians often screw up statistics, too. See below.

The article is the result of an interview I gave Teschler a month ago. He called me up and asked about bad statistics, and I became that obnoxious guy in the bar who grabs your elbow and won’t let go until you understand his theory of life, the universe, and everything. Poor Teschler was panting by the time I finished with him.

Yet he must have recovered sufficiently to write:

Briggs’ argument for such a radical stance is that most nonexperts misapply these ideas and often use them to leap to bad conclusions. “The technical definition of a p-value is so difficult to remember that people just don’t keep it in mind. Even the Wikipedia page on p-value has a couple of small errors,” Briggs says. “People treat a p-value as a magical thing: If you get a p-value less than a magic number then your hypothesis is true. People don’t actually say it is 100% true, but they behave as though it is.”…

“P-values can be and are used to prove anything and everything. The sole limitation is the imagination of the researcher,” he says. “To the civilian, the small p-value says that statistical significance has been found, and this, in turn, says that his hypothesis is not just probable, but true.”
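The article had no room for a demonstration, so here is a minimal sketch of my own (the numbers are invented and every variable is pure noise) of how “significance” is manufactured: test enough noise variables against a noise outcome and one of them will oblige.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_subjects, n_variables = 50, 100

# Pure noise: a fake "outcome" and 100 fake "exposures", related to nothing.
outcome = rng.normal(size=n_subjects)
exposures = rng.normal(size=(n_subjects, n_variables))

# Correlate every exposure with the outcome and keep the p-values.
p_values = []
for j in range(n_variables):
    r, p = stats.pearsonr(exposures[:, j], outcome)
    p_values.append(p)
p_values = np.array(p_values)

print(f"smallest p-value: {p_values.min():.4f}")
print(f"'significant' findings (p < 0.05) among pure noise: {(p_values < 0.05).sum()}")
```

With one hundred independent tests at the 0.05 level, about five “significant” correlations are expected even though nothing is related to anything. Report only the winner and you have “proven” a hypothesis made entirely of noise.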

Why not eliminate frequentist statistics for all but math PhD students and teach Bayes or, my preference, logical probability?

Nevertheless, there is only a slim chance a Bayesian revolution will sweep through statistics classrooms. The problem is one of inertia. “Most statistics classes are taught by nonstatisticians. They can’t teach Bayesian statistics because a lot of them have never heard of it,” says Briggs. Even worse, “Peer-review journal editors still want to see p-values in the papers they publish.”
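For readers wondering what the alternative even looks like, here is the contrast in miniature: a sketch with invented data and, for simplicity, a flat prior (logical probability would fix the prior by the evidence, but the shape of the answer is the same). The p-value is a statement about the data assuming the hypothesis is false; the Bayesian number is the probability the civilian thought he was getting all along.

```python
from scipy import stats

# Invented example: 62 successes in 100 trials; is the underlying rate above 0.5?
successes, trials = 62, 100

# Frequentist: Pr(data at least this extreme | rate is exactly 0.5).
p_value = stats.binomtest(successes, trials, p=0.5, alternative="greater").pvalue

# Bayesian: Pr(rate > 0.5 | data), with a flat Beta(1, 1) prior on the rate.
posterior = stats.beta(1 + successes, 1 + trials - successes)
pr_hypothesis = posterior.sf(0.5)

print(f"p-value (a statement about the data):         {p_value:.4f}")
print(f"Pr(rate > 0.5 | data) (about the hypothesis): {pr_hypothesis:.4f}")
```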

Head on over to see the rest.

Update The interview I had with Teschler was wide-ranging and did not focus on who was king of the statistical hill. I frankly do not care. The main complaint against me was that I am an academic. Ouch. I am so, it’s true, but only for two weeks of every year. The rest of the time I am on my own. Because why? Because the crazy ideas I espouse do not endear me to professional academics.

I didn’t appreciate that some people might take exception to the claim that professionals would be better at statistics than non-professionals. Of course, it is always possible that any non-trained person would do better than a trained one in statistics, or in any field.

My main point with Teschler was that statistics as a field was broken. Regular readers will understand just what I mean by this. Countless times I have shown that the further a field gets from the simple, the worse the evidence is handled. Most engineering is simple, and subject to much feedback, at least compared to the monstrous complexity which is human behavior.

If you’re new here, have a look around and you’ll see quickly what I mean.

Categories: Statistics

12 replies

  1. Did you see the comments over there on that? Kind of funny how it’s split and you get some like:

    Typical elitist ivory tower holier than thou. As is apparently unclear to the professor, managers need to take actions … not probable actions.

    Yeah! Stop taking probable actions, Briggs. 😉 lol

  2. Nate,

    Nope. I was working with an early version before any commenters dropped by. Notice that most of the detractors are anonymous? Brave, brave.

    Of course, you can do an entire theory of statistics in 600 words, so there is no possibility of confusion. But neither Teschler nor I have figured out how to do so.

  3. All,

    Got this email from JT in response to the article.

    After 34 years as an analytical chemist using statistics, I’m not sure I trust statisticians to use statistics properly. Who learns [uses] statistics is not nearly as important as how it is used.

    Statistics is too often used to justify the conclusion that a researcher wants to prove, without regard to whether there is a plausible mechanism by which the variable measured can influence the result obtained. It is done by statisticians as well as non-statisticians.

    The audience also has to be considered. While it may be true that all I can do is fail to prove that two standards are different, that fact is only useful in a statistical conversation. In an analytical setting, it usually has to be sufficient.

    When the goal is to determine whether a new standard preparation can be used as a replacement for a former standard preparation without causing a significant shift in the results generated, I either have to say they are equivalent for the purpose or spend the next twenty years trying to duplicate and prove that I have duplicated the original standard.

    I have heard trained statisticians say that they have proved that certain facts are true when all they have proved is that they cannot determine whether they are. Usually all it proves is that they don’t know whether other possibilities are equally plausible.
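    The textbook frequentist answer to JT’s dilemma is the two-one-sided-tests (TOST) equivalence check, which still traffics in p-values but at least asks the right question: it demonstrates equivalence against a tolerance stated in advance, rather than inheriting it from a failure to find a difference. A minimal sketch, with invented data and an invented tolerance (my illustration, not JT’s procedure):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Invented data: replicate assays of the old and new standard preparations.
    old = rng.normal(loc=100.0, scale=0.8, size=12)
    new = rng.normal(loc=100.2, scale=0.8, size=12)

    # Invented tolerance: a shift within +/- 1.0 unit is "equivalent for the purpose".
    delta = 1.0

    # Two one-sided tests (TOST): show the difference is above -delta AND below +delta.
    _, p_lower = stats.ttest_ind(new + delta, old, alternative="greater")
    _, p_upper = stats.ttest_ind(new - delta, old, alternative="less")
    p_tost = max(p_lower, p_upper)

    print(f"observed difference: {new.mean() - old.mean():+.3f}")
    verdict = "equivalent within tolerance" if p_tost < 0.05 else "equivalence not demonstrated"
    print(f"TOST p-value: {p_tost:.4f} -> {verdict}")
    ```

    The burden then runs the right way: equivalence is demonstrated against a tolerance the chemist states up front, not assumed because a difference test came up empty.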

  4. That non-statisticians screw up statistics is likely true. But what is the extent of the problem? Citing examples of bad statistics as you do on this website might give a biased picture of the prevalence of this problem. There might very well be statistics done by non-statisticians that are perfectly fine. I suspect that many of the statistics that you have a gripe with are produced by statisticians themselves. Page through any medical journal. Chances are that a statistician was involved in the particular paper that you have a problem with.

    Cheers

    Francsois

  5. I am new here. And you are right. I teach statistics, being an economist. I do not screw up statistics; I do not have such power. And I do not trust statistics (econometrics).

    But, give me more, please, show me your best post about this.

    Best regards,
    Pedro Erik

  6. “But in a lot of real-life situations, there is no hard number there, particularly when the evidence and data are so complicated that they can’t be quantified….”

    I am one engineer who has attempted to use statistics to validate design approaches and failed, more than once, and often enough to learn to be wary of all things statistically predicted. I recall hours and hours of learning and then trying to apply “Design of Experiments” to my work.

    My problem was (is) dealing with complicated data that can’t be quantified with certainty. Examples: root cause failure analysis of a structure, design standard for a flood control structure, structural safety margin for multiple cycles of varying high frequency vibration. These all contain very complicated factors for analysis, with multiple variables and attached variable outcomes, and my final design always risks my professional competence. So I once sought the path of predictive statistics but found it did nothing to improve my design except make it more appealing to my customer. Until it wasn’t, because the statistics did nothing to change or improve the design, and thus nothing to solve the problem the design was intended to solve.

    I hate uncertainty! I disdain the waste of over-specification to accommodate risk. I long for a more elegant way of reducing uncertainty, and thus waste, without increasing risk. Even with powerful computer-aided tools, design is still a design-build-test-fix, redesign-rebuild-retest process. It iterates, and statistics are not a shortcut past iteration, in my opinion.
