‘Uncertainty: The Soul of Modeling, Probability & Statistics’ Reviewed Again


The following review appeared in the Journal of American Physicians and Surgeons (pdf).

Jane Orient is the lead doc at the Association of American Physicians and Surgeons, publisher of the journal.

Uncertainty: The Soul of Modeling, Probability & Statistics, by William Briggs, hardcover, 258 pp, $59.75, ISBN 978-3-319-39759-9, Springer International Publishing Switzerland, 2016.

This book has the potential to turn the world of evidence-based medicine upside down. It boldly asserts that with regard to everything having to do with evidence, we’re doing it all wrong: probability, statistics, causality, modeling, deciding, communicating—everything. The flavor is probably best conveyed by the title of one of my favorite sections: “Die, p-Value, Die, Die, Die.”

Nobody ever remembers the definition of a p-value, William Briggs points out. “Everybody translates it to the probability, or its complement, of the hypothesis at hand.” He shows that the arguments commonly used to justify p-values are fallacies. It is far past time for the “ineradicable Cult of Point-Oh-Five” to go, he states. He does not see confidence intervals as the alternative, noting that “nobody ever gets these curious creations correct.”
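To see the point concretely, here is a minimal simulation (my sketch, not an example from the book): run tests in which the hypothesis of "no effect" is true by construction. Small p-values still turn up at the nominal rate, so a small p-value cannot be read as the probability that the hypothesis is false.

    # Sketch (not from the book): simulate t-tests in which the null is
    # TRUE by construction. p < 0.05 still occurs about 5% of the time,
    # so the p-value is not the probability of the hypothesis at hand.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_trials, n = 10_000, 30
    hits = 0
    for _ in range(n_trials):
        a = rng.normal(size=n)  # both samples drawn from the
        b = rng.normal(size=n)  # same distribution: no real effect
        hits += stats.ttest_ind(a, b).pvalue < 0.05
    print(f"fraction with p < 0.05: {hits / n_trials:.3f}")  # ~0.050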

Briggs is neither a frequentist nor a Bayesian. Rather, he recommends a third way of modeling: using the model to predict something. “The true and only test of model goodness is how well that model predicts data, never before seen or used in any way. That means traditional tricks like cross validation, boot strapping, hind- or back-casting and the like all ‘cheat’ and re-use what is already known as if it were unknown; they repackage the old as new.”
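To make the predictive standard concrete, here is a minimal sketch with invented data (mine, not the book's): a flexible model that fits the data in hand better than a simple one can still predict never-before-seen data worse, and only the new data reveal it.

    # Sketch with made-up data: in-sample fit versus predictive skill.
    # A degree-9 polynomial always fits the old data better than a line,
    # yet typically predicts genuinely new data far worse.
    import numpy as np

    rng = np.random.default_rng(0)
    x_old = rng.uniform(0, 10, 20)
    y_old = 2.0 * x_old + rng.normal(scale=3.0, size=20)
    x_new = rng.uniform(0, 10, 1000)  # never seen during fitting
    y_new = 2.0 * x_new + rng.normal(scale=3.0, size=1000)

    def mse(coeffs, x, y):
        return np.mean((np.polyval(coeffs, x) - y) ** 2)

    for degree in (1, 9):
        c = np.polyfit(x_old, y_old, degree)
        print(degree, round(mse(c, x_old, y_old), 1), round(mse(c, x_new, y_new), 1))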

Yes, this book is about probability and statistics, and there is some mathematics in it, but fundamentally it is a book of philosophy. If you follow his blog, wmbriggs.com, you will recognize that he is a devotee of Thomas Aquinas.

The book discusses science and scientism, and belief and knowledge. Chapter Two is about logic, with delightful examples from Charles Dodgson (Lewis Carroll). Briggs states that an entire branch of statistics, hypothesis testing, is built around the worst fallacy, the “We-Have-To-Do-Something Fallacy.”

Some of the book’s key insights are: Probability is always conditional. Chance never causes anything. Randomness is not a thing. Random, to us and to science, means unknown cause.

One fallacy that Briggs chooses for special mention, because it is so common and so harmful, is the epidemiologist fallacy. He prefers his neologism to the better-known “ecological fallacy” because without this fallacy, “most epidemiologists, especially those employed by the government, would be out of a job.” It is also richer than the ecological fallacy because it occurs whenever an epidemiologist says “X causes Y” but never measures X. Causality is inferred from “wee p-values.” One especially egregious example is the assertion that small particulates in the air (PM 2.5s) cause excess mortality.

Quantifying the unquantifiable, which is the basis of so much sociological research, creates a “devastation to sound argument…[that] cannot be quantified.” Briggs deconstructs famous examples and the “instruments” they use.

We suffer from a crisis of over-certainty, Briggs writes. He believes we need a science that is “not quite so dictatorial and inflexible, one that is calmer and in less of a hurry, one that is far less sure of itself, one that has a proper appreciation of how much it doesn’t know.”

Statistical significance should be “treated like the ebola virus,” he writes, “i.e. it should be placed in a tightly guarded compound where any danger can be contained and where only individuals highly trained in avoiding intellectual contamination can view it.”

Briggs will not be well loved by those who write “evidence-based” papers replete with parameters, regressions, and p-values. Those who study Briggs will no longer be overawed by such papers, however prestigious the journal that publishes them. But he validates the importance of first-hand observation, insight, and intuition. To my mind, he shows that the need for the art of medicine is proven by the science.

Despite its heavy subject matter, the book is full of humor and a delight to read and re-read.

Jane M. Orient, M.D.
Tucson, Ariz.

13 Comments

  1. Joy

    …and so it does have the power to turn evidence-based medicine on its head, as Dr. Jane Orient says. I second that.
    I hope it does, and I hope Briggs will be fully credited in the fullness of time.

  2. Nate Winchester

    Best header image ever.

  3. Frederick Arnold

    Lends new meaning to the expression “everything you know is wrong.” Time to bring statistics out of the dark ages of voodoo science and into the light of reason. One only has to look at the number of drugs pulled from the shelves and the subsequent feeding frenzy of trial lawyers.

  4. Ken

    I dunno if I’d be too happy with that review:

    “Some of the book’s key insights are: Probability is always conditional. Chance never causes anything. Randomness is not a thing. Random, to us and to science, means unknown cause.”

    Does ANYBODY really not recognize that detail about randomness???? Sure, many people do discuss “random” (and variations of that word) as if it were a thing or some vague force, but that’s metaphorical verbal shorthand…not actual literal belief per the literal meaning of the words….

    “One fallacy that Briggs chooses for special mention, because it is so common and so harmful, is the epidemiologist fallacy. He prefers his neologism … One especially egregious example is the assertion that small particulates in the air (PM 2.5s) cause excess mortality.”

    Say what??? There are ample studies ‘out there’ where the disease mechanisms arising from tiny particulate ingestion ARE researched, with many being understood. Here’s one such from close to a decade ago: http://www.ncbi.nlm.nih.gov/pubmed/19034792 . That & others like it are cases in which correlations were/are observed — where those correlations have prompted and continue to prompt ongoing research into identifying [and treating] the various causal factors & disease mechanisms that account for the correlations observed.

  5. DAV

    Ken,

    You might want to look at some deeper comments on PM2.5 before making statements like that:
    https://www.wmbriggs.com/post/4353/
    Follow the links therein to the full text of the criticism.

    You should avoid linking abstracts. They are often more sales hype than summary.

    The one you linked appears to be a meta-analysis (IOW, a summary of other research). The abstract doesn’t indicate that it is itself a study.

    Air pollution has been considered a hazard to human health. In the past decades, many studies highlighted the role of ambient airborne particulate matter (PM) as an important environmental pollutant for many different cardiopulmonary diseases and lung cancer. Numerous epidemiological studies in the past 30 years found a strong exposure-response relationship between PM for short-term effects (premature mortality, hospital admissions) and long-term or cumulative health effects (morbidity, lung cancer, cardiovascular and cardiopulmonary diseases, etc).

    “strong relationship” is not necessarily a causal relationship. There is a strong relationship between the price of rum and New England preacher salaries.

    In nearly all papers in epidemiology the only thing done was to measure (via the Hypothesis Test) the correlation between two variables. All the Hypothesis Test can do is rule out suppositions of causation, since correlation is a requirement for causation. Unfortunately, correlation between any two variables in a given data set is likely.

    Note the leap in the abstract from “Numerous epidemiological studies … found a strong exposure-response relationship” to “Air pollution has been considered a hazard to human health.” Why? Because there is a “strong correlation” — whatever “strong” means (0.1%? 2%? 90%?). Likely quite small, though. There is a strong correlation between having a fast-moving lead projectile colliding with one’s head and death (quite high). You don’t see studies of this hypothesis — probably because of the size of the correlation. When it’s small, one must resort to statistics.
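    A small simulation makes the rum-and-preachers point concrete (my sketch, not DAV’s): two series with no causal connection, each merely tracking a common upward drift such as the general price level, show a near-perfect correlation.

        # Sketch: causally unrelated series that share a common trend
        # (e.g., the general price level) correlate almost perfectly.
        import numpy as np

        rng = np.random.default_rng(1)
        years = np.arange(50)
        price_level = 1.02 ** years  # common upward drift

        rum_price = 3.0 * price_level + rng.normal(scale=0.1, size=50)
        preacher_salary = 800.0 * price_level + rng.normal(scale=20.0, size=50)

        r = np.corrcoef(rum_price, preacher_salary)[0, 1]
        print(f"correlation: {r:.2f}")  # ~0.99, yet neither causes the other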

    Somewhere along the line this has been forgotten, and we get things like “X is good for you” and “X is bad for you” and not “X might be good or bad”. The word “might” has vanished from many press releases.

    Strangely, the EPA actually tried to verify all these “strong correlations” with actual experimentation. Strange, because pollution is trotted about as a “known” health hazard!

    http://junkscience.com/2016/09/junkscience-forces-epa-into-more-human-testing-lies/#more-90511
    http://dailyheadlines.net/2016/08/obama-epa-conducted-nazi-josef-mengele-type-experiments-on-unsuspecting-americans/

  6. JohnK

    Thanks, DAV. Now on to ‘random’, and again, Ken: Matt has written extensively on this blog; perhaps you should better inform yourself (e.g., by searching Matt’s site) before you weigh in on topics like PM 2.5s and ‘random’.

    You claim (without evidence) that nobody ‘believes’ that ‘random’ causes stuff; that it’s just a ‘shorthand’. Well, if it’s a shorthand, it’s a very widely used one:
    https://www.wmbriggs.com/post/8211/
    https://www.wmbriggs.com/post/14656/

    Nor do you seem to be aware of Matt’s claim that ‘random’ CANNOT be a “shorthand”, in order for anyone to make a systematically and mathematically coherent use of classical statistics; meaning, that at a deep and foundational level, classical statistics REQUIRES ‘random’ to be an actual thing; that it MUST not be used as a shorthand; that you MUST “believe” in it, to make classical statistics work:

    It’s true that the old (and not yet deceased) frequentist view of probability requires chance (or Chance) to be real; probability is (somehow) created in the wake of Chance. Chance is the being, to a frequentist, which fiddles with the coin as it is spinning, causing it to land heads. Busy man, Chance. Think of all those quarks it has to spin.

    And people sure seem to ACT as if ‘random’ were a cause; they certainly use statistics, and interpret statistical results as if ‘random’ were a cause; for example, people notoriously ACT as if ‘randomizing’ two groups was far superior to “deliberately matching them on everything I can think of that could matter”; and they certainly don’t ACT as if ‘random’ were a shorthand way of saying “I don’t know”.

    Besides, how much more ‘shorthand’ can you get than “I don’t know”? Why not just say that, and ACT like THAT’S true, if that’s what scientific-type people really “believe”? Your argument that ‘random’ as Cause is just a ‘shorthand’ doesn’t even pass the logic test.

  7. JH

    It’s true that the old (and not yet deceased) frequentist view of probability requires chance (or Chance) to be real; probability is (somehow) created in the wake of Chance. Chance is the being, to a frequentist, which fiddles with the coin as it is spinning, causing it to land heads. Busy man, Chance. Think of all those quarks it has to spin.

    JohnK, do you really believe the above statement and all other similar blanket claims in the book?

  8. Ken

    DAV, JohnK,

    The kinds of research on particulate effects include things like this, where very specific toxicity mechanisms have been identified:

    “Some studies showed that the extractable organic compounds (a variety of chemicals with mutagenic and cytotoxic properties) contribute to various mechanisms of cytotoxicity; in addition, the water-soluble fraction (mainly transition metals with redox potential) play an important role in the initiation of oxidative DNA damage and membrane lipid peroxidation.”

    You can cite all day long comments like

    “Numerous epidemiological studies in the past 30 years found a strong exposure-response relationship between PM for short-term effects…”

    and then say things like,

    “…“strong relationship” is not necessarily a causal relationship.”

    And sit there smugly confident you’re absolutely correct. And you are, but you’re also omitting key facts — like the fact that studies that have identified the mechanisms between the correlates (causes & effects) really do matter. And there are tons of those (not literally “tons” but a lot).

    RE: “You claim (without evidence) that nobody ‘believes’ that ‘random’ causes stuff; that it’s just a ‘shorthand’. Well, if it’s a shorthand, it’s a very widely used one:” (quotes of Briggs’ articles follow)

    That’s only substantially true within Briggs’ mind (apparently) and some readers of his blog. Briggs has a continuing inclination to perceive metaphor & shorthand linguistic descriptions as literal* to the point of redefining the actual meaning into something else and then arguing why that something else is wrong.

    * Here is an epic example illustrating an inability to comprehend metaphor as evidenced by the discussion that completely fails to grasp the symbolism: https://www.wmbriggs.com/post/4895/

  9. DAV

    And sit there smugly confident you’re absolutely correct. And you are, but you’re also omitting key facts — like the fact that studies that have identified the mechanisms between the correlates (causes & effects) really do matter.

    Homework: why is the price of rum correlated with New England preacher salaries when on the surface they should be independent? What is the causal relationship?

    Some studies actually find causal relationships by data analysis alone? Possible, but not easily accomplished (see Pearl on causality), and Pearl’s methods are definitely not SOP in epidemiology. Bet you can’t find one that does determine a causal relationship — not just claimed but actually demonstrated. The Jerrett study of PM2.5 certainly didn’t find a causal relationship, only correlation, and it found that correlation only by using an unproven spatial proxy.

    The EPA ran a very recent experiment subjecting humans to exhaust fumes to do what, exactly? If all of these pollution statistical studies are correct, then the EPA subjected humans to known health hazards. Like testing whether shooting people in the head is harmful using human test subjects. Why do this if the hazard is already known? Pure evil.

    OR …, the EPA doesn’t believe the studies after all and the studies are just candy to pass out to the statistically illiterate.

    I repeat: finding a correlation doesn’t mean you have found a cause. It’s impossible to make this determination using only two variables, and using three variables one can determine only three causal relationships out of a possible twelve. And that is done by examining the interrelationships between the variables, not just by using them as regression terms.

    Briggs has pointed out time and time again that the Hypothesis Test is essentially a useless tool (finding correlations is all too easy). It’s the major source of these p-values. The p-value isn’t the correlation but a measure of how well the regression fits the current data.

    A p-value tells you zip about model performance. Using it is like claiming the auto you built will be a great performer because you used quality screws in its construction and — hey! Here are their quality test results!

    You instead need a model that predicts. Drawing a line through the current data set is rarely capable of prediction with any confidence or accuracy. If the model can’t predict, it is a curiosity and not a useful tool.
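    Here is a sketch of that argument (my illustration, not DAV’s): screen enough noise predictors and a wee p-value will appear, yet the “significant” model predicts new data no better than guessing the mean.

        # Sketch: y is pure noise, yet screening 50 noise predictors
        # usually yields a "significant" p-value; the selected model
        # still has no predictive skill on held-out data.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        n = 100
        y = rng.normal(size=n)        # outcome: pure noise
        X = rng.normal(size=(n, 50))  # 50 unrelated predictors

        # Screen on the first half of the data only.
        pvals = [stats.pearsonr(X[:50, j], y[:50])[1] for j in range(50)]
        best = int(np.argmin(pvals))
        print(f"smallest p-value: {pvals[best]:.4f}")  # usually < 0.05

        # Out-of-sample check: fit on the first half, predict the second.
        slope, intercept = np.polyfit(X[:50, best], y[:50], 1)
        pred = slope * X[50:, best] + intercept
        mse_model = np.mean((y[50:] - pred) ** 2)
        mse_mean = np.mean((y[50:] - y[:50].mean()) ** 2)
        print(f"model MSE: {mse_model:.2f} vs mean-only MSE: {mse_mean:.2f}")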

    The Jerrett PM2.5 study, and every last one of the studies referenced in the linked abstract (itself a summary of other papers), were published only because a useless calculation had a particular value. No one ever bothered to determine whether they can predict. And all of them are finding things hovering in and around the noise level. IOW: low impact. Egregious indeed.

  10. Bill

    Getting the book. As a knuckle-dragging clinician with a second degree in clinical epidemiology, this needs to be shouted from the rooftops.
