There are a lot of new readers, and many may not yet have heard of the epidemiologist fallacy. Few tools have been as productive at generating The Science. You know The Science. The Science is what they insist you must “follow”.
The epidemiologist fallacy (EF) is simple to describe: It is when a scientist claims X causes Y, but where he never measured X, and he “proved” the cause using his wee p.
Wee p? Which is to say, null hypothesis significance testing or Bayesian parameter posteriors, these being much the same in practice. Where the primary, or usually sole, focus is on the parameters of some ad hoc probability model. And not on the observable Y.
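A minimal sketch (mine, not from the paper, with invented numbers) of why a wee p proves nothing about cause: two series that merely share a time trend, and have no causal link whatsoever, still produce a tiny p-value when one is regressed on the other.

```python
# Sketch: two independently generated series that both trend with time.
# Neither causes the other, yet the regression yields a "wee p".
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
t = np.arange(100)

# Hypothetical series: each drifts upward over time, independently.
x = 0.2 * t + rng.normal(0, 1, size=t.size)   # "exposure"
y = 0.1 * t + rng.normal(0, 1, size=t.size)   # "deaths"

fit = linregress(x, y)
print(f"slope = {fit.slope:.3f}, p-value = {fit.pvalue:.2e}")
# The p-value is minuscule, yet x does not cause y: both merely track time.
```

The wee p here rewards the shared trend, not any causal pathway, which is exactly the leap the fallacy depends on.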
The double dog epidemiologist fallacy is similar: It is when a scientist claims X causes Y, but where he never measured X, and he never measured Y, and he “proved” the cause using his wee p.
Since there’s a whole lot of nothing going on—X is never measured, and sometimes Y never is, either—you’d think we’d hear from scientists a whole lot less, because if you haven’t measured what you said caused Y, then quiet humility would seem to be the order of the day.
Alas, no. Our latest example is the peer-reviewed paper “Mortality risk from United States coal electricity generation” by Lucas Henneman and others in Science.
Our first clue something has gone wrong is the title. Risk, they say. They do not say “Mortality from coal”, which would be a definite claim. Instead, they preach about something called “risk”, which is probability by another name.
They examine a substance called “PM2.5”, which is dust. It works to cause all manner of disease and havoc, they say.
Here is a sentence from the Abstract: “We estimated the number of deaths attributable to coal PM2.5 from 1999 to 2020”.
That is causal language. That is what “attributable” means: caused by. They are claiming cause. And, though I ask you to take my word for it, they do so using standard statistical tools, none of which can be used to claim cause, but which here are.
We have one element of the EF. Let’s see if there are others.
Recall they said X causes Y, coal dust causes death. Did they measure X? Did they measure Y?
X: “We estimated coal PM2.5 using the HYSPLIT with Average Dispersion (HyADS) model, which accounts for date-specific atmospheric transport of PM2.5 to characterize exposure to PM2.5 from individual EGUs [coal electricity-generating units].”
So they did not measure X. X is output from some model. Still, maybe it is a good model, and gave good predictions of how much PM2.5 each individual in their study sucked in. Was it?
They say: “By averaging ZIP (postal) code levels of coal PM2.5 across the conterminous US, we found that…”
The key word, which one should always look for, is exposure. It sounds nice and scientific, and intimates a measurement was made. But no. Exposure is not dose. We do not know how much PM2.5 anybody in their database sucked in.
X was not measured. A person’s zip code does not give dose. Of course, it can be used to predict dosage in a probabilistic sense. But we already have a model of PM2.5 at zip codes. Which means we need a second model of how zip-code PM2.5 predicts dosage. That’s a model of a model. Which would still be okay, as long as they carried forward the uncertainty in these models. Alas, they did not. They did not measure X.
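To see what carrying the uncertainty forward means, here is a sketch with made-up numbers (the 2.0 and 1.5 standard deviations are mine, purely for illustration): the error from the zip-code model and the error from the dose model stack, so the chained prediction is fuzzier than either model admits on its own.

```python
# Sketch: chaining a model of a model compounds uncertainty.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

true_pm = 10.0                                    # hypothetical ambient PM2.5
zip_model = true_pm + rng.normal(0, 2.0, n)       # model 1: PM2.5 at the zip code
dose_model = zip_model + rng.normal(0, 1.5, n)    # model 2: dose given zip PM2.5

# Each stage alone looks tolerably precise...
print(f"model 1 sd: {zip_model.std():.2f}")       # ~2.0
print(f"model 2 extra sd: 1.5")
# ...but the chained prediction is wider than either stage's own error:
# sqrt(2.0^2 + 1.5^2) = 2.5.
print(f"chained dose sd: {dose_model.std():.2f}")
```

Report only the second model’s error, as the paper in effect does, and the stated uncertainty is too small, which is to say the conclusions are too confident.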
We have just confirmed we are dealing with the epidemiologist fallacy. This paper is The Science that must be followed. Which is bad enough. But dare we look for the double dog epidemiologist fallacy?
Yes. We dare.
Recall that to get the DDEF we need to not only not measure X, but also not measure Y.
Authors: “The Medicare dataset contains records of 32.5 million deaths from 1999 to 2016 (table S1), with the annual number of deaths increasing and death rates decreasing across the study period…”
So they did measure Y! Or did they?
No, sir, they did not. Because they measured deaths of all kinds. And not deaths from dust. A guy who slipped and fell on his covid vax counted. And only in old people.
We do have the DDEF, but a weak form of it, because they lean on the vague and dubious claim that dust exacerbated all causes of death.
The authors, perhaps sensing this, tried to do better. Because the “attributable” deaths were not deaths after all, but “excess” deaths!
Authors: “…we estimated the excess number of deaths attributable to coal PM2.5 relative to what would have occurred assuming zero SO2 emissions from coal EGUs (i.e., coal PM2.5 = 0).”
But they can’t know how many deaths there would have been without coal plants, because, of course, there were coal plants. This is yet another model, and one which can never be confirmed. Which doesn’t make it wrong, but it does mean there should be a lot more uncertainty.
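A sketch with invented numbers shows why this matters: “excess deaths” is observed deaths minus an unverifiable counterfactual prediction, so every bit of the counterfactual’s uncertainty flows straight into the excess-death figure.

```python
# Sketch: excess deaths inherit the counterfactual model's uncertainty.
from scipy.stats import norm

observed = 1000            # hypothetical deaths actually counted
cf_mean, cf_sd = 990, 30   # hypothetical no-coal counterfactual: mean and model sd

excess = observed - cf_mean
z = norm.ppf(0.975)        # 95% normal quantile, ~1.96
lo = excess - z * cf_sd
hi = excess + z * cf_sd
print(f"excess deaths: {excess}, 95% interval: ({lo:.0f}, {hi:.0f})")
# With honest counterfactual uncertainty the interval spans zero:
# the data cannot rule out that there were no excess deaths at all.
```

The headline number looks solid only because the counterfactual’s uncertainty is quietly dropped; put it back and the “excess” may be indistinguishable from nothing.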
In the end, we have a modified epidemiologist fallacy, which didn’t quite make double dog status. Sad.
Still, there is no chance that they can be anywhere near certain that coal dust killed people, because we have models of models of models. And no direct measure of dose, nor deaths caused by dust. Everything is loosey goosey correlations which they claim are cause.
They do indeed insist their correlations are causations: “These results advance the growing body of evidence showing varying toxicity of PM2.5 originating from different sources.” From which they conclude not only that more regulation is needed, but also that their continued service is.
Do not follow The Science.