The Disparagement Of Epidemiology

I was having an argument with a friend who was on his way to a conference to point out the shortcomings and over-confidence of Evidence Based Medicine (EBM).

With EBM, you can’t go by the name, because it sounds unimpeachable. After all, it’s medicine using evidence. Better than medicine without evidence, n’est-ce pas?

EBM, the name, I mean, is like Artificial Intelligence, or Machine Learning. More is going on with the names than with the subjects. EBM in real life is slavish to peer review, and sees doctors running in their lab coats simpering, “Look at my wee p!”

Never mind about that, because this isn’t about EBM, but epidemiology. Now I have written dozens of articles on what I call the epidemiologist fallacy, which, I am usually careful to say, is the marriage between a version of the ecological fallacy and significance testing.

In brief, the epidemiologist fallacy is this: an epidemiologist says, or hints with a wink in his eye, that “X causes Y”, where X is never measured, and where the cause is “proved” with wee p-values.

It’s worse than that. Sometimes even Y isn’t measured. And even if we get past the wee p’s with Bayes, those Bayesian “solutions” are all still parameter-, and not observable-, based.
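To put numbers on that parameter-versus-observable complaint, here is a minimal sketch (a conjugate normal model with known variance and a flat prior; every number invented for illustration) comparing uncertainty in a parameter with uncertainty in a new observable:

```python
# Sketch: parameter- vs observable-based uncertainty in a simple Bayesian
# model (normal data, known sd, flat prior). The posterior for the mean
# parameter is far narrower than the predictive distribution for a new
# observation: certainty about a parameter is not certainty about data.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=10.0, scale=2.0, size=100)
sigma = 2.0                        # data sd, assumed known
n, xbar = len(data), data.mean()

# Flat-prior posterior for the mean: Normal(xbar, sigma / sqrt(n))
sd_parameter = sigma / np.sqrt(n)

# Posterior predictive for a NEW observation: Normal(xbar, sigma * sqrt(1 + 1/n))
sd_predictive = sigma * np.sqrt(1 + 1.0 / n)

print(f"sd of parameter posterior:       {sd_parameter:.2f}")
print(f"sd of predictive for a new obs.: {sd_predictive:.2f}")
```

The parameter posterior shrinks like 1/√n, so with enough data the parameter looks nailed down while a new observation stays nearly as uncertain as ever; reporting only the parameter overstates what the model says about the world.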

Anyway, the argument came from my friend taking exception to the name epidemiologist fallacy, which he thought was an attempt to disparage an entire field, and the people in it, like he wanted to do with EBM. My friend is, of course, an epidemiologist.

He insisted that the ecological fallacy (X causes Y, but X is not measured; or, many X are Y, therefore X is Y) was in fact well known in his field, and that therefore all was mostly okay.

I told my friend he was right. I was disparaging the field.

People may know of the ecological fallacy, but it seemingly never stops them from employing it. Likely because of the Do Something Fallacy (We have to do something!). And scarcely any epidemiologist knows of the p-value fallacy: Wee P means cause or “link”, a word which is carefully never defined, but which is taken to mean “cause”. That fallacy is ubiquitous, and not just in epidemiology.

Anyway, I am hardly the first person to point out the enormous weaknesses and dangers of the field.

One of the first, and biggest names, was Alvan Feinstein. Here’s the Abstract from his 1988 paper in Science, Scientific Standards in Epidemiologic Studies of the Menace of Daily Life.

Many substances used in daily life, such as coffee, alcohol, and pharmaceutical treatment for hypertension, have been accused of “menace” in causing cancer or other major diseases. Although some of the accusations have subsequently been refuted or withdrawn, they have usually been based on statistical associations in epidemiologic studies that could not be done with the customary experimental methods of science. With these epidemiologic methods, however, the fundamental scientific standards used to specify hypotheses and groups, get high-quality data, analyze attributable actions, and avoid detection bias may also be omitted. Despite peer-review approval, the current methods need substantial improvement to produce trustworthy scientific evidence.

I’d change that “Despite peer-review approval” to “Because of peer-review approval”. Otherwise it’s fine.

Even in 1988, before computers really got hot, and when you mostly had to write your own analysis code, Feinstein said, “With modern electronic computation, all this information is readily explored in an activity sometimes called ‘data dredging.'” An approach extraordinarily fecund in “discovering” “links.”

Only now it’s orders of magnitude easier to do.
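Just how easy can be shown in a few lines. A minimal sketch of dredging (purely synthetic noise; standard numpy and scipy, nothing from any real study): test one outcome against many exposures that have no causal connection to it, and “wee p-values” appear by chance alone.

```python
# Data dredging demo: correlate 100 pure-noise "exposures" with one
# pure-noise "outcome". Some will clear p < 0.05 by chance, each a
# publishable "link".
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_subjects, n_exposures = 200, 100

outcome = rng.normal(size=n_subjects)                   # some health measure
exposures = rng.normal(size=(n_exposures, n_subjects))  # all pure noise

# p-value for each exposure-outcome correlation
p_values = [stats.pearsonr(x, outcome)[1] for x in exposures]
hits = sum(p < 0.05 for p in p_values)

print(f"'Significant' links found among pure noise: {hits} of {n_exposures}")
```

With 100 independent noise exposures tested at the 0.05 level, about five “links” are expected; scale up the number of variables dredged and the supply of discoveries is inexhaustible.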

He nailed the solution, too: “The investigators will have to focus more on the scientific quality of the evidence, and less on the statistical methods of analysis and adjustment.”

True. But he said that before it was learned by the regime that he who controls The Data creates The Science.

Plus, he never even mentioned the ecological fallacy, which is now the engine supreme to produce papers.

Now there’s this famous epidemiologist, Paul Knipschild, who retired recently from Maastricht University where, I heard, the tradition is to give a farewell speech. His view of the field was not sanguine. (I learned of Knipschild from our friend Jaap Hanekamp.)

He said the reality of the field is “pretty disappointing.”

And, more importantly – it just has to be said! – the majority of epidemiologists don’t know much about research methodology. They once did an “epidemiological” study. Mostly it concerned an etiological question, relating one or more lifestyle causes to the onset of a disease….

I give a few examples. Does alcohol increase the chance of breast cancer? Is a stroke more common in people with sedentary professions? Does passive smoking in pregnant women reduce the chance of a newborn with a normal weight? Do you get osteoporosis earlier if you drink little milk? Is asthma more common in poor neighborhoods?

As far as I’m concerned, it’s best that we stop that kind of research. The journals, national and international, are full of “relative risks” and “odds ratios” between 0.5 and 2, and what do we actually know? Barring exceptions, all that “observational” research is not very reliable – the word “observational” alone!

There are many problems and insufficient possibilities for correction. That correction does not succeed in small-scale research, and not even if you conduct a cohort study in more than 10,000 people with a follow-up of more than three years. In any case, what do we know about risks and their perception by people of different backgrounds?


It’s about time a new article appeared that made short work [of the field], as Alvan Feinstein previously did in that journal. Want to know how many times that Feinstein article has been cited? 200 times, which is a lot, the main reason being that hordes of epidemiologists of lesser stature tried to discredit him.

He goes on, at great length. You can read for yourself.

If you’re not satisfied a problem exists, read some of the articles at my place linked above. Even the “highest quality” “top” universities are producing epidemiological silliness. There is massive, truly quite colossal, over-certainty in the field.

My solution? Fix the data problem, which means fixing the political problem, which means it won’t get fixed.

Second, don’t release results until models have been verified predictively. Meaning they have made skillful predictions on data never before seen or used in any way. And where model verification can be carried out by disinterested third parties.
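A minimal sketch of what that verification step looks like, assuming a generic scikit-learn-style workflow (all data synthetic, all names illustrative): fit on one sample, then measure skill only on data never touched during fitting, against a naive reference.

```python
# Predictive verification sketch: fit a model on one sample, then score it
# ONLY on held-out data it never saw, relative to a naive baseline.
# Skill > 0 means the model beats the baseline on never-seen data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))
y = 2.0 * X[:, 0] + rng.normal(size=n)   # one real signal, plus noise

# Data the model never sees during fitting stands in for "new" data
X_fit, y_fit = X[:400], y[:400]
X_new, y_new = X[400:], y[400:]

model = LinearRegression().fit(X_fit, y_fit)

# Skill: compare prediction error with a naive "always predict the mean" rule
mse_model = mean_squared_error(y_new, model.predict(X_new))
mse_naive = mean_squared_error(y_new, np.full_like(y_new, y_fit.mean()))
skill = 1 - mse_model / mse_naive

print(f"Skill on never-before-seen data: {skill:.2f} (positive = skillful)")
```

In real verification the “new” data would be genuinely future observations scored by a disinterested third party, not a split of the same file; the point of the sketch is only the shape of the test.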

Which means the second part won’t get fixed, either.

Buy my new book and learn to argue against the regime: Everything You Believe Is Wrong.

Subscribe or donate to support this site and its wholly independent host using credit card or PayPal click here; Or go to PayPal directly. For Zelle, use my email.

Categories: Statistics

12 replies »

  1. Somewhat related: Steve Stewart-Williams (The Ape that Understood the Universe, 2018) writes:
    Scary: 73 teams tested the same hypotheses with the same data. Some found negative results, some positive, some nada. No effect of expertise or confirmation bias. “Idiosyncratic researcher variability is a threat to the reliability of scientific findings.”

    This is by and large what you, Mr. Briggs, wrote – not least against Sir Karl Popper’s falsification criterion as a safeguard to secure sound scientific results: that there is no methodological Popperian trick that could secure the scientifically correct outcome of any statistically oriented research (I presuppose that the research had been done in a technically correct way).

    What really helps is to look at (social) reality and see how the theory, or theory-based predictions, of a given research paper hold up against it. And that process is very much a hermeneutical one in Hans-Georg Gadamer’s understanding (Truth and Method). And that means it depends on the way people are willing to make use of (that is, to interpret!) this data, which means: to make a decision – and not: to do research. Decision making is where the purely objective research methods definitely end, because decision making is, in all non-trivial cases, value based.

    Now – what was the touchy subject, the data-interpretations of 73 research teams in the above linked paper differ so widely about? – The subject was social policies with regard not least to – – – immigration! – And how immigration affects the social systems of the host countries!

    The study authors explain that they worked with –

    – “a six-question module on the role of government in providing different social policies such as old-age, labor market and health care provisions. This six-question module is also the source of the data used by David Brady and Ryan Finnigan in one of the most cited investigations of the substantive hypothesis participants were instructed to test (19). The PIs also provided yearly indicator data for countries on immigrant stock as a percentage of the population and on flow as a net change in stock taken from the World Bank, United Nations and Organization for Economic Co-Operation and Development. Relevant ISSP and immigration data were available for 31 mostly rich and some middle-income countries…”

  2. Interesting connection: The study of quantum physics has entirely abandoned the search for cause, stating quite openly that cause does not exist. (See, for example, the 2022 Nobel Prize.) Of course, there has been little practical advancement in physics since this mantra became widespread, despite the billions poured into research. I’m sure that’s just a coincidence.

  3. I’ve been noticing a lot of ambulance-chasing commercials citing “scientific studies” that point to acetaminophen used by mothers during gestation, and to baby food, as causes of autism.

    (but don’t mention vaccines – that’s been debunked)

  4. I was a professional epidemiologist in the 1990s, and you are dead right about how things were then (when there were only about half a dozen honest and competent epidemiologists in the world – of whom Feinstein was one) — and things are Much worse now.

    Feinstein (who I knew slightly) was, I think, the senior professor in the medical faculty at Yale; and received a Gairdner Award – which was then just a notch down from a Nobel. He wrote well; and published widely in the most prestigious journals.

    But he was completely ignored by epidemiologists, and his (perfectly correct) criticisms had near zero influence on the field.

    Instead, epidemiology followed the funding, and became a ‘zombie science’:

  5. Epidemiology has been corrupted by marketing. You can’t sell a new product as better anymore, you have to scare consumers away from the competing product and so you commission research which does that.

    It is trivial to design an “experiment” which finds a correlation between consumption of or exposure to X and some kind of harm. This gets published because the journal wants the publicity (or maybe the journal is actually sponsored by the company that developed the substitute for X) and suddenly “X is dangerous” is in the mainstream press. Thus, my new product which replaces X is presumed to be safer (without evidence, of course) and now I have a market for my new product.

  6. Politics is the main, to me, issue here. In that, the point has been made quite well. Solutions? We are dealing with fallen humans, so competition, grounded in reality, would work best. Politics currently makes that impossible. We need to get back to being God centered.

  7. Yes. My only criticism is the name change from Ecological Fallacy to epidemiologist fallacy — not because epidemiologists are innocent (they are guilty as OJ) but because ecologists are just as and even more so guilty.

    It all started when some goofball claimed a butterfly flapping its wings in China causes tornadoes in Oklahoma. The public ate that up like candy; science morphed into magick.

    Recently it was allegedly discovered that the rotational speed of the Earth has allegedly slowed by 1.6 milliseconds per day. The consequences were claimed by scientists involved to be “colossal”. Now a millisecond is 1/1000th of a second, so there are 1000x60x60x24 =86,400,000 milliseconds in a day. The colossal effect is thus 1/54,000,000, or to put it bluntly, an itsy bitsy teeny weenie thing. I mean mini-microscopically puny. Allegedly.

    And yet every ecologista nutjob has unearthed the Cause of this: global warming did it (natch), delayed whale migration, volcanoes, space aliens, Venus, Mars, major league baseball, donuts, male pattern baldness, you name it. Epidemiologists are chump change compared to ecologists when it comes to wild assertion-ating.

    The Cure is to cut off the funding, now, like a gangrenous foot. Go live under a bridge or something, I don’t care, but your spigot is pinched shut forever, Jackalope.

  8. Or it speeded up by 1.6 milliseconds. I don’t know. It’s just a scam of a scam of a scam. It’s scams all the way down.

  9. All,

    Here’s a nice coincidence, a headline from this afternoon: Air pollution is making women fat: study.

    Observed women who were exposed to poor air quality, specifically higher levels of fine particles, such as nitrogen dioxide and ozone, had seen increases in their body size, according to study author Xin Wang, an epidemiology research investigator at the University of Michigan.

    Has to be the epidemiologist fallacy in action.


    “Annual air pollution exposures were assigned by linking residential addresses with hybrid estimates of air pollutant concentrations at 1-km2 resolution.”


  10. We’re in a post-science civilization. Science has been used to replace religion as the justification for the actions of the ruling class and is therefore no longer congruent with observable reality and objective reasoning. Lying charlatan Elon Musk is the richest and most respected man in the world and he makes PT Barnum look like an honest paragon in comparison.

    Last one left alive turn out the lights.

  11. About ten years ago I was asked to judge at a research convention. I thought I would be judging for math or maybe physics or engineering, but all those areas had more than enough judges. So they put me on a social sciences panel.

    One of the things we were required to do was ask a question of each presenter, and it was basically forced to be a softball question. One talk contained basically every single statistical error imaginable: assumptions of independence and normality when neither could be true, everything filtered through wee p-values, jumping from “this wasn’t random” to “this specific cause was responsible”, minuscule sample sizes, etc. Normally I would have torn into some of this, but since we were asked to give softball questions, I merely asked “if you were to repeat this study, are there any changes you would make to the methodology?”

    The presenter responded “There’s no need to repeat this study, since we have proven this hypothesis already. So rather than wasting my time on that I would study a new question instead.”

    It’s at that point that I realized how much of a cargo cult most of the use of statistics is. But it was only much later that I realized the problem wasn’t just confined to sociology and the like.

  12. This has made its way into the mundane, even.

    There are warning labels on many common items where California claims that materials contained within the item are KNOWN by the state to CAUSE cancer.
