I was having an argument with a friend who was on his way to a conference to point out the shortcomings and over-confidence of Evidence Based Medicine (EBM).
With EBM, you can’t go by the name, because it sounds unimpeachable. After all, it’s medicine using evidence. Better than medicine without evidence, n’est-ce pas?
EBM, the name, I mean, is like Artificial Intelligence, or Machine Learning. More is going on with the names than with the subjects. EBM in real life is slavish to peer review, and sees doctors running in their lab coats simpering, “Look at my wee p!”
Never mind about that, because this isn’t about EBM, but epidemiology. Now I have written dozens of articles on what I call the epidemiologist fallacy, which, I am usually careful to say, is the marriage between a version of the ecological fallacy and significance testing.
In brief, the epidemiologist fallacy is this: an epidemiologist says, or hints with a wink in his eye, that “X causes Y”, where X is never measured, and where the cause is “proved” with wee p-values.
It’s worse than that. Sometimes even Y isn’t measured. And even if we get past the wee p’s with Bayes, those Bayesian “solutions” are all still parameter-, and not observable-, based.
Anyway, the argument came from my friend taking exception to the name epidemiologist fallacy, which he thought was an attempt to disparage an entire field, and the people in it, like he wanted to do with EBM. My friend is, of course, an epidemiologist.
He insisted that the ecological fallacy (X causes Y, though X is not measured; or: many X are Y, therefore X is Y) was in fact well known in his field, and that therefore all was mostly okay.
I told my friend he was right. I was disparaging the field.
People may know of the ecological fallacy, but it seemingly never stops them from employing it. Likely because of the Do Something Fallacy (We have to do something!). And scarcely any epidemiologist knows of the p-value fallacy: Wee P means cause or “link”, a word which is carefully never defined, but which is taken to mean “cause”. That fallacy is ubiquitous, and not just in epidemiology.
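Both halves of the fallacy are easy to simulate. Here is a minimal sketch (invented data, not any real study): a lurking Z drives both X and Y, X does not cause Y, yet the X-Y correlation arrives with a very wee p indeed.

```python
# A minimal confounding sketch (invented data, not any real study):
# Z drives both X and Y; X does not cause Y, yet X and Y correlate
# with a wee p-value.
import math
import random

random.seed(3)
n = 500
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.5) for zi in z]  # X is caused by Z
y = [zi + random.gauss(0, 0.5) for zi in z]  # Y is caused by Z, not by X

mx, my = sum(x) / n, sum(y) / n
sx = math.sqrt(sum((a - mx) ** 2 for a in x))
sy = math.sqrt(sum((b - my) ** 2 for b in y))
r = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# Normal approximation to the two-sided p-value of the usual t statistic
t = r * math.sqrt((n - 2) / (1 - r * r))
p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
print(f"r = {r:.2f}, p < 0.001: {p < 0.001}")
```

The wee p is real enough; the causation from X to Y is not, because there isn't any.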
Anyway, I am hardly the first person to point out the enormous weaknesses and dangers of the field.
One of the first, and biggest, names was Alvan Feinstein. Here’s the abstract from his 1988 paper in Science, “Scientific Standards in Epidemiologic Studies of the Menace of Daily Life”.
Many substances used in daily life, such as coffee, alcohol, and pharmaceutical treatment for hypertension, have been accused of “menace” in causing cancer or other major diseases. Although some of the accusations have subsequently been refuted or withdrawn, they have usually been based on statistical associations in epidemiologic studies that could not be done with the customary experimental methods of science. With these epidemiologic methods, however, the fundamental scientific standards used to specify hypotheses and groups, get high-quality data, analyze attributable actions, and avoid detection bias may also be omitted. Despite peer-review approval, the current methods need substantial improvement to produce trustworthy scientific evidence.
I’d change that “Despite peer-review approval” to “Because of peer-review approval”. Otherwise it’s fine.
Even in 1988, before computers really got hot, and when you mostly had to write your own analysis code, Feinstein said, “With modern electronic computation, all this information is readily explored in an activity sometimes called ‘data dredging.'” An approach extraordinarily fecund in “discovering” “links.”
Only now it’s orders of magnitude easier to do.
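To see why dredging is so fecund, nothing beyond arithmetic is needed. With hypothetical counts (my invention, purely illustrative) of candidate exposures and outcomes, the expected haul of spurious “links” at the usual threshold follows automatically:

```python
# Hypothetical counts, purely illustrative: k candidate exposures
# and m outcomes. One "study" quietly runs k * m tests; at the
# conventional 0.05 threshold, the expected number of spurious
# "links" from pure noise is 0.05 * k * m.
k, m = 50, 20
tests = k * m
expected_spurious = 0.05 * tests
print(tests, "tests,", expected_spurious, "chance 'links' expected")
```

Every one of those chance “links” is a publishable paper, which is the point.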
He nailed the solution, too: “The investigators will have to focus more on the scientific quality of the evidence, and less on the statistical methods of analysis and adjustment.”
True. But he said that before the regime learned that he who controls The Data creates The Science.
Plus, he never even mentioned the ecological fallacy, which is now the engine supreme to produce papers.
Now there’s this famous epidemiologist, Paul Knipschild, who recently retired from Maastricht University where, I heard, the tradition is to give a farewell speech. His view of the field was not sanguine. (I learned of Knipschild from our friend Jaap Hanekamp.)
He said the reality of the field is “pretty disappointing.”
And, much more importantly – it just has to be said! – the majority of epidemiologists don’t know much about research methodology. They once did an “epidemiological” study. Mostly it concerned an etiological question, relating one or more lifestyle causes to the onset of a disease….
I give a few examples. Does alcohol increase the chance of breast cancer? Is stroke more common in people with sedentary professions? Does passive smoking in pregnant women reduce the chance of a newborn with a normal weight? Do you get osteoporosis earlier if you drink little milk? Is asthma more common in poor neighborhoods?
As far as I’m concerned, it’s best that we stop that kind of research. The journals, national and international, are full of “relative risks” and “odds ratios” between 0.5 and 2, and what do we actually know? Barring exceptions, all that “observational” research is not very reliable – the word “observational” alone!
There are many problems and insufficient possibilities for correction. That correction does not succeed in small-scale research, and not even if you conduct a cohort study in more than 10,000 people with a follow-up of more than three years. In any case, what do we know about risks and their perception by people of different backgrounds?
It’s about time a new article appeared that made short work [of the field], as Alvan Feinstein previously did in that journal. Want to know how many times that Feinstein article has been cited? 200 times, which is a lot, the main reason being that hordes of epidemiologists of lesser stature tried to discredit him.
He goes on, at great length. You can read for yourself.
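For readers who want the flavor of those “relative risks” and “odds ratios” between 0.5 and 2, here is a sketch with an invented 2x2 table (the numbers are mine, purely illustrative):

```python
# Hypothetical 2x2 table (invented numbers, purely illustrative):
#                 disease   no disease
# exposed            30        970
# unexposed          20        980
exposed_cases, exposed_total = 30, 1000
unexposed_cases, unexposed_total = 20, 1000

risk_exposed = exposed_cases / exposed_total        # 0.03
risk_unexposed = unexposed_cases / unexposed_total  # 0.02
relative_risk = risk_exposed / risk_unexposed       # 1.5

odds_exposed = exposed_cases / (exposed_total - exposed_cases)
odds_unexposed = unexposed_cases / (unexposed_total - unexposed_cases)
odds_ratio = odds_exposed / odds_unexposed          # about 1.52

print(round(relative_risk, 2), round(odds_ratio, 2))
```

A “50% increased risk” here is a move from 2% to 3%, a difference any uncorrected confounder or selection quirk can manufacture, which is Knipschild’s complaint.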
If you’re not satisfied a problem exists, read some of the articles at my place linked above. Even the “highest quality” “top” universities are producing epidemiological silliness. There is massive, truly quite colossal, over-certainty in the field.
My solution? Fix the data problem, which means fixing the political problem, which means it won’t get fixed.
Second, don’t release results until models have been verified predictively. Meaning they have made skillful predictions on data never before seen or used in any way. And where model verification can be carried out by disinterested third parties.
Which means the second part won’t get fixed, either.
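Still, for the record, here is a minimal sketch of what predictive verification looks like: fit on one sample, then score the predictions on data the model has never seen, against a naive baseline. The simulated world (my assumption, for illustration only) is one where X really does drive Y.

```python
# Sketch of predictive verification: fit on a training sample, then
# score predictions on held-out data never used in fitting, against
# a naive baseline. Simulated world (assumed): y = 2x + noise.
import random

random.seed(2)
data = [(x, 2.0 * x + random.gauss(0, 1)) for x in [i / 10 for i in range(200)]]
random.shuffle(data)
train, holdout = data[:150], data[150:]  # holdout is "never before seen"

# Fit slope and intercept by least squares on the training set only
n = len(train)
mx = sum(x for x, _ in train) / n
my = sum(y for _, y in train) / n
slope = sum((x - mx) * (y - my) for x, y in train) / sum((x - mx) ** 2 for x, _ in train)
intercept = my - slope * mx

def mse(pairs, predict):
    return sum((y - predict(x)) ** 2 for x, y in pairs) / len(pairs)

model_mse = mse(holdout, lambda x: intercept + slope * x)
naive_mse = mse(holdout, lambda x: my)  # baseline: always guess the train mean
skill = 1 - model_mse / naive_mse       # > 0 means the model beats the baseline
print(f"skill on unseen data: {skill:.2f}")
```

If a model cannot beat a naive guess on data it has never seen, its wee in-sample p-values are beside the point. Hand the holdout to a disinterested third party and the check is honest.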
Buy my new book and learn to argue against the regime: Everything You Believe Is Wrong.