Long-time readers will recall the epidemiologist fallacy is a shotgun marriage of the ecological fallacy and wee p-values. Make that and/or confidence intervals.
For confidence intervals are equivalent to p-values in use and interpretation. Meaning they should not be used, just as p-values should never be used.
The ecological fallacy is to conclude X is a cause of Y when X has not been measured. You’d think such a thing never happens. After all, what scientist would say, or hint, “I conclude pesticides cause autism, but I never measured pesticide exposure”? Sounds silly, n’est-ce pas?
Enter the paper “Prenatal and infant exposure to ambient pesticides and autism spectrum disorder in children: population based case-control study” in BMJ by Ondine S von Ehrenstein and a bunch of others.
First comes the record search:
2961 individuals with a diagnosis of autism spectrum disorder based on [DSM-IV], including 445 with intellectual disability comorbidity, were identified through records maintained at the California Department of Developmental Services and linked to their birth records. Controls derived from birth records were matched to cases 10:1 by sex and birth year.
That’s fine, more or less. But this is not:
Data from California state mandated Pesticide Use Reporting were integrated into a geographic information system tool to estimate prenatal and infant exposures to pesticides (measured as pounds of pesticides applied per acre/month within 2000 m from the maternal residence). 11 high use pesticides were selected for examination a priori according to previous evidence of neurodevelopmental toxicity in vivo or in vitro (exposure defined as ever v [sic] never for each pesticide during specific developmental periods).
Let me translate. A highly inaccurate guessing machine said, “This much of this pesticide was used within 2 km of this person’s home address.” Never mind that people are not always at their home address.
Then von Ehrenstein said, or rather implied, “That guess is the exposure.”
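To make the mechanics plain, here is a sketch, in code, of the kind of proxy the paper describes. It is my reconstruction, not the authors’ GIS tool, and every name in it is invented for illustration:

```python
import math

# A minimal sketch (my reconstruction, not the authors' GIS code) of the kind
# of exposure proxy the paper describes: pounds of pesticide applied per acre
# within 2000 m of the recorded maternal address.

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def exposure_proxy(home, applications, radius_m=2000.0):
    """Sum pounds-per-acre over reported applications within radius_m of home.

    `home` is a (lat, lon) pair; `applications` is a hypothetical list of
    dicts with keys 'lat', 'lon', 'pounds', 'acres', standing in for the
    Pesticide Use Reporting records.
    """
    total = 0.0
    for app in applications:
        if haversine_m(home[0], home[1], app["lat"], app["lon"]) <= radius_m:
            total += app["pounds"] / app["acres"]
    return total

# Note what this cannot capture: whether the mother was home, indoor levels,
# drift, or any actual dose. "Exposure" here is only proximity on paper.
```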
Enter the regression models to “control” for this exposure, among other things:
Risk of autism spectrum disorder was associated with prenatal exposure to glyphosate (odds ratio 1.16, 95% confidence interval 1.06 to 1.27)…diazinon (1.11, 1.01 to 1.21), malathion (1.11, 1.01 to 1.22), avermectin (1.12, 1.04 to 1.22)… For autism spectrum disorder with intellectual disability, estimated odds ratios were higher (by about 30%) for prenatal exposure to glyphosate (1.33, 1.05 to 1.69)…; exposure in the first year of life increased the odds for the disorder with comorbid intellectual disability by up to 50% for some pesticide substances.
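An aside on those intervals. Since the paper reports intervals rather than p-values, here is a quick bit of arithmetic, mine, using only the glyphosate numbers just quoted, showing the two carry the same information:

```python
import math

# Back out what the reported interval encodes, using glyphosate's
# OR 1.16 (95% CI 1.06 to 1.27). On the log-odds scale a 95% interval
# is estimate +/- 1.96 standard errors, so the interval implies:
lo, hi = math.log(1.06), math.log(1.27)
se = (hi - lo) / (2 * 1.96)        # implied standard error of the log OR
z = math.log(1.16) / se            # implied z statistic
p = math.erfc(z / math.sqrt(2))    # implied two-sided p-value

print(f"SE = {se:.3f}, z = {z:.2f}, p = {p:.4f}")
# SE = 0.046, z = 3.22, p = 0.0013 -- the interval and the wee p-value
# are one and the same piece of information.
```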
Now everybody, and I mean absolutely everybody, even the most ardent die-hard frequentist ideologue, treats confidence intervals as Bayesian credible intervals. Which is to say, nobody treats confidence intervals according to frequentist theory. The reason is simple. Because the only thing you are allowed to say about any confidence interval is “either the true value of the parameter lies within or it does not.” Which is a useless tautology.
However, since nobody uses confidence intervals as confidence intervals, but instead all use them as credible intervals, it is unfair to criticize confidence intervals as confidence intervals. Let’s instead criticize them as credible intervals.
We can ignore here the “control” of their models for other things, though it’s very important (because no control of any kind took place). Let’s just look at the non-exposure-exposure. Risk of autism was associated with prenatal exposure to glyphosate with a 95% interval 1.06 to 1.27.
That is an estimate of a parameter inside a model, but everybody takes it to be the risk of autism itself. What we really want is the probability of autism given the exposure. What we got is a remark about a non-existent parameter in a model, and we are necessarily more certain about a parameter than we can be about the probability of the observable.
Meaning, even if the exposure were real, we should not be as confident about the chance of developing autism as the interval leads us to believe.
Unfortunately, without the data I can’t tell you what the non-exposure-exposure probability would be, but I’m guessing, and it’s only a wild guess, that it’s not too different from the probability of autism without the non-exposure-exposure. It will be higher, but not a lot higher.
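To see the scale of the thing, take a back-of-the-envelope sketch. The baseline prevalence of about 1.5% is my assumption, roughly in line with recent US estimates, and not a number from the paper; the odds ratio is taken at face value:

```python
# A rough illustration of "higher, but not a lot higher". Assume a baseline
# probability of diagnosis of about 1.5% (my assumption, not the paper's)
# and take the glyphosate odds ratio of 1.16 as given.
p0 = 0.015
odds0 = p0 / (1 - p0)      # baseline odds
odds1 = odds0 * 1.16       # odds after applying the reported odds ratio
p1 = odds1 / (1 + odds1)   # back to a probability

print(f"without: {p0:.4f}, with: {p1:.4f}, difference: {p1 - p0:.4f}")
# without: 0.0150, with: 0.0174, difference: 0.0024 -- about a quarter of a
# percentage point, before any accounting for the exposure guess itself.
```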
That is before taking account of the non-exposure-exposure. The exposure was only a guess, meaning there is uncertainty in actual exposure, an uncertainty that ought to be accounted for in the model, but which was not. This means that the difference between the non-exposure-exposure probability and the non-exposure-non-exposure probability is even smaller still, and possibly non-existent.
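A toy simulation, my construction and nothing from the paper, shows what ignoring that uncertainty does: non-differential error in a binary exposure pulls the estimated odds ratio toward 1, while the reported intervals pretend the exposure was known exactly:

```python
import random

random.seed(1)

# Toy illustration (my construction, not the paper's data): non-differential
# error in a binary exposure pulls the estimated odds ratio toward 1.
N = 500_000
TRUE_OR = 1.16   # assumed odds ratio under perfectly measured exposure
BASE_P = 0.015   # assumed baseline probability of a diagnosis
FLIP = 0.30      # assumed chance the proxy mislabels true exposure

tabs = {"true": [0, 0, 0, 0], "proxy": [0, 0, 0, 0]}  # [ce, ne, cu, nu]
base_odds = BASE_P / (1 - BASE_P)

for _ in range(N):
    exposed = random.random() < 0.5
    odds = base_odds * (TRUE_OR if exposed else 1.0)
    case = random.random() < odds / (1 + odds)
    proxy = exposed if random.random() > FLIP else not exposed
    for name, e in (("true", exposed), ("proxy", proxy)):
        tabs[name][(0 if case else 1) + (0 if e else 2)] += 1

for name, (ce, ne, cu, nu) in tabs.items():
    print(name, round((ce / ne) / (cu / nu), 3))
# Typical output: the true-exposure odds ratio sits near 1.16, the proxy's
# nearer 1. The guessing machine dilutes whatever signal there is.
```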
Meaning we can’t say much of anything about pesticides and autism.
For this to count as the epidemiologist fallacy, the charge of cause must have been made. Was it? Yes, indirectly: first by discussions in the paper of how exposure might cause autism, and second by this conclusion:
Findings suggest that an offspring’s risk of autism spectrum disorder increases following prenatal exposure to ambient pesticides within 2000 m of their mother’s residence during pregnancy, compared with offspring of women from the same agricultural region without such exposure.
To say “the” risk increases (as if risk existed) is to hint with a heavy wink-wink that pesticides indeed were implicated as a cause.
Even though they were never measured. Any other measure highly correlated with the exposure guess, say distance from grocery stores (a proxy for being near a farmer’s field), would give similar results, even though it would be obvious grocery stores were not causing autism.
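The same toy machinery makes the point. Let the outcome follow the pesticide proxy, invent a “near a grocery store” flag that merely agrees with the proxy most of the time, and the innocent flag inherits an elevated odds ratio of its own:

```python
import random

random.seed(2)

# Again my construction: the outcome follows the pesticide proxy; "grocery"
# is an invented flag that agrees with the proxy 85% of the time and, by
# construction, causes nothing at all.
N = 500_000
BASE_P, OR_PROXY, AGREE = 0.015, 1.16, 0.85

tabs = {"pesticide proxy": [0, 0, 0, 0], "grocery flag": [0, 0, 0, 0]}
base_odds = BASE_P / (1 - BASE_P)

for _ in range(N):
    proxy = random.random() < 0.5
    grocery = proxy if random.random() < AGREE else not proxy
    odds = base_odds * (OR_PROXY if proxy else 1.0)
    case = random.random() < odds / (1 + odds)
    for name, x in (("pesticide proxy", proxy), ("grocery flag", grocery)):
        tabs[name][(0 if case else 1) + (0 if x else 2)] += 1

for name, (ce, ne, cu, nu) in tabs.items():
    print(name, round((ce / ne) / (cu / nu), 3))
# The grocery flag shows an elevated odds ratio too, though it causes
# nothing. Correlation with the proxy is all it takes.
```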
The real question is, “Do pesticides cause autism?” I have no idea. I wouldn’t say no. I wouldn’t want my kid or me exposed to them. But I make that judgement without the evidence from this paper, which is unhelpful and misleading—even though the mistakes made are extremely common.