This piece ran originally at The American Catholic, but Kurland graciously allowed us to re-run it here. It is a topic of deep interest for us.
We keep reading and hearing “Science (uppercase obligatory) shows…” with respect to politically correct views on the Wuhan flu, anthropic global warming, racism, economic oppression, etc. “Believe the science.” Horse hockey! Almost invariably it isn’t a scientific analysis that’s being cited but faux science: computer programs using statistical projections that have the desired conclusions built in.
But such computer modeling isn’t science. Science requires empirical tests of hypotheses such that predictions can be tested: the hypotheses are to be falsified or verified by repeated empirical demonstrations. Besides fitting data, the hypotheses must coordinate with general and subsidiary principles of science. The best representation I’ve found for how science works is the “Lakatos Scientific Research Programme,” diagrammed in the featured image above. There is an interplay of predictions, correlations, and feedback between the shells of a sphere: fundamental principles, fundamental theories, auxiliary theories, and data. A more detailed description is given here.
As with other mathematical tools employed in scientific endeavors—calculus, linear algebra, group theory, topology, Feynman diagrams—computer programming may be accurate, but it has no intrinsic truth value. The truth value comes from measurements, replicable in many labs. That’s how science works. (See here.) It takes more than one “successful” prediction to validate a computer model, and it takes only one unsuccessful prediction to show that it’s worthless.
In this article I’ll first give a very brief background summary of the math involved in epidemiological predictions (models) and point out problem areas. I will then focus on one article that has been cited as support for restrictive measures in the pandemic and show how the predictions put forth in the article were not met, and why, therefore, this was not “science.”
THE MATH OF EPIDEMIOLOGY
(This section can be skipped by mathphobes.) The basic parameter used in epidemiological models is R0, the “basic reproduction number,” which can be defined as
R0 = τ × c × d
- “τ” is the transmissibility, i.e. the probability of infection given contact between an infected and a non-infected individual;
- “c” is the rate of contact, i.e. how often infected and non-infected individuals come in contact;
- “d” is the duration of infectiousness, i.e. how long the infected individual remains infectious.
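The definition above can be sketched in a few lines of code. This is a minimal illustration of the formula only; the parameter values below are invented for the example and are not estimates for any real disease.

```python
def r0(tau: float, c: float, d: float) -> float:
    """Basic reproduction number: transmissibility x contact rate x duration.

    tau -- probability of infection per contact (illustrative value only)
    c   -- contacts per day between infected and non-infected individuals
    d   -- days an individual remains infectious
    """
    return tau * c * d

# Illustrative assumptions: 5% chance of transmission per contact,
# 10 contacts per day, 7 days infectious.
baseline = r0(tau=0.05, c=10, d=7)

# R0 scales linearly in each input, so a 20% drop in transmissibility
# (say, from mask-wearing) cuts R0 by 20% -- the point being that modest
# uncertainty in any one parameter moves the headline number directly.
mitigated = r0(tau=0.04, c=10, d=7)

print(round(baseline, 2), round(mitigated, 2))
```

Because every estimate of τ, c, and d carries uncertainty, the product R0 inherits all of it at once, which is part of why published R0 values for the same disease can differ so widely.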
It should be clear that there’s a great deal of leeway and ambiguity in assigning values to any of the parameters in R0. Indeed, the possibility of errors in interpreting or estimating R0 is well recognized, as shown in the quote below:
“The interpretation of R0 estimates derived from different models requires an understanding of the models’ structures, inputs, and interactions. Because many researchers using R0 have not been trained in sophisticated mathematical techniques, R0 is easily subject to misrepresentation, misinterpretation, and misapplication.”—Paul L. Delamater et al, “Complexity of the Basic Reproduction Number (R0)”
To expect that any of the parameters—τ, c, d—can be represented by just one number is, I believe, an oversimplification. To expect that these numbers will stay constant during the course of an epidemic, given the possibility of virus mutation and mitigation factors, is also an oversimplification. I recommend Delamater’s article for a full account of all the factors that can make R0 variable. Nevertheless, other auxiliary theories in science use simplifying assumptions and make successful predictions, so the outcome of predictions should be the test of whether a theory, implemented in a computer program, is valid.
A final note on the math: epidemiological modeling generally uses coupled first-order differential equations and statistical methods for stochastic processes to generate predictions under various assumptions and for various parameter values. (See here and here for detailed accounts.) If one examines the papers that describe the mathematical techniques, it’s clear that outcomes can vary greatly depending on the values assigned to parameters and on the choice of a particular mathematical model.
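To make the parameter-sensitivity point concrete, here is a minimal sketch of the classic SIR (susceptible–infected–recovered) model, the simplest of the coupled differential-equation models mentioned above, integrated with a crude forward-Euler step. All parameter values are illustrative assumptions, not fitted to any real outbreak.

```python
def sir(beta: float, gamma: float, days: int,
        i0: float = 1e-4, dt: float = 0.1):
    """Integrate a basic SIR model; return (peak infected fraction, final size).

    beta  -- transmission rate (contacts/day x infection probability)
    gamma -- recovery rate (1 / duration of infectiousness)
    i0    -- initially infected fraction of the population
    """
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        ds = -beta * s * i             # new infections leave S...
        di = beta * s * i - gamma * i  # ...enter I, and recoveries leave I...
        dr = gamma * i                 # ...and enter R
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        peak = max(peak, i)
    return peak, r

# In this model R0 = beta / gamma. Halving beta (e.g. via mitigation)
# drops R0 from 2.5 to 1.25 and changes the predicted epidemic drastically:
high_peak, high_total = sir(beta=0.5, gamma=0.2, days=365)   # R0 = 2.5
low_peak, low_total = sir(beta=0.25, gamma=0.2, days=365)    # R0 = 1.25
print(high_peak, high_total, low_peak, low_total)
```

Running this, the lower-R0 scenario produces a far smaller peak and final epidemic size, which illustrates the article’s point: the headline numbers a model produces hinge on parameter choices that are themselves uncertain.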
UNSUCCESSFUL PREDICTIONS FOR THE WUHAN FLU (COVID-19)
Note first that there have been many instances outside of epidemiology where computer models relying on statistical techniques have made predictions that didn’t come true. (The readers of this blog probably know of those dealing with effects of anthropic global warming.) More recently we have seen (here, here, here and here) how statistical analysis can be applied and misapplied to “data” for the Wuhan flu. Matt Briggs, “Statistician to the Stars,” has many articles on the misuse of statistics in this. I recommend his podcast for a general account of how covid-19 data has been used and misused in statistical analysis.
In this article I want to focus on “Report 9” from the Imperial College COVID-19 Response Team. Based on this report, politicians and pundits predicted dire consequences from the pandemic: total deaths of 2.2 million in the US and 510 thousand in the UK (Figure 1A, loc. cit.). When critics of the study complained that these extreme mortalities didn’t occur, Neil Ferguson, the report’s lead author, correctly responded that these would have been the totals had no mitigation efforts taken place. However, that isn’t really a prediction; it’s more like a street preacher with his sign that the world is coming to an end in one month. The preacher’s prediction might be true, but there’s no way of verifying it. There never was or would be a situation in which no mitigation efforts were applied. That wasn’t even the case for the Spanish Flu epidemic 100 years ago.
Moreover, the deaths per day in Figure 1A go to zero by September 1st, which has not been the case, mitigation or no mitigation. Further, if one examines Fig. 1B, cases per day for various US states, the curves are totally unrealistic in how they vary from state to state and in their shape. Indeed, this is a general criticism of all the figures: they show symmetric (or essentially symmetric) rises and falls for all mitigation scenarios, and that is not the shape of the actual data, faulty as the data may be.
Here are some events that computer models have not taken into account. Forty-two percent of all deaths in the U.S. attributed to covid-19 (notice “attributed to,” not “due to”) have been in nursing homes and assisted care facilities. In particular, these facilities have been in states where governors directed nursing homes to accept covid-19 patients released from hospitals. Our neighboring county in Pennsylvania, rural and with a low population density, had a sudden spike in covid-19 cases at the end of August and beginning of September; the cause: partying at the local university. Similar spikes have occurred nationwide, wherever colleges have resumed in-person classes.
So, what are we to conclude? Is such computer modeling an exercise in using “any data input that will enable one to continue playing what is perhaps the ultimate game of solitaire”? I’ll concede that the Imperial College report is an interesting speculation on how various mitigation programs might affect the transmission of covid-19; in that sense it is a useful exercise, and the authors are to be commended for the effort. What I object to is the use of these computer projections by pundits and politicians to justify public health measures on the grounds that the measures are based on “science.” It isn’t science until the computer projection has made repeated correct predictions.
Here’s what might have been more helpful: applying the model to known data (e.g. South Korea, Taiwan), or, had the report been published later, to Sweden, Italy, Spain and Germany, to see how well the modeling fits actual data. This would have been an exercise in retrodiction, which is sometimes useful science (e.g., explaining the anomalous perihelion precession of Mercury using general relativity). There wouldn’t have been as many headlines stemming from the report, but it would have been more of a scientific endeavor.