Time series are the most abused statistics in the physical sciences. (It’s an endless, raucous, peer-reviewed contest for the worst in the “soft sciences.”)
The Mann problem (is that a typo?) is that time series are too easy to plot, analyze, and make pontifical projections about. The fault is mine, and my brother and sister statisticians’. In our joy of numbers we have set the bar of entry far too low. With our free and mindless software, anybody can play with numbers.
How many times must we shout, remonstrate, warn, and caution not to wantonly and without rock-solid justification smooth time series? I’ll tell you how many times: forever.
Just as a for-example, here is a well-done time series that, despite valiant efforts, falls far short of perfection. The data come from our friend Harold Brooks, who is at the National Severe Storms Laboratory in Norman, Oklahoma, and knows all about tornadoes. He collected counts of direct deaths from tornadoes in the United States from 1875 to 2012. The data “represent our best understanding at this time,” meaning there is error in the numbers. (I retrieved US population from this table and the Census Bureau.)
A “direct” death is caused by the tornado itself, such as being knocked on the head. Indirect deaths are excluded. Brooks says, “Examples of indirect deaths that have occurred include a heart attack upon seeing damage to a neighbor’s house, falls when going to shelter, and a fire caused by a candle lit when the power went out after a tornado.”

Brooks has his own picture of his data (larger version here), which shows the deaths per million citizens as purple dots, a smoother in red, a couple of linear regressions in green, and some projections from the latter regression in cyan. Notice the logarithmic scale.
To prove that the way of plotting can make an enormous difference, here is my version of the same data.
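For the curious, here is a minimal sketch of the two plotting choices. The file name and column names are my invention, not Brooks’s:

```python
# A sketch only: "tornado_deaths.csv" and its columns are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("tornado_deaths.csv")
rate = df["deaths"] / df["population"] * 1e6  # direct deaths per million citizens

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4), sharex=True)
ax1.plot(df["year"], rate, "o", ms=3, color="purple")
ax1.set_title("Linear scale")
ax2.semilogy(df["year"], rate, "o", ms=3, color="purple")
ax2.set_title("Logarithmic scale (as in Brooks's plot)")
for ax in (ax1, ax2):
    ax.set_xlabel("Year")
    ax.set_ylabel("Direct deaths per million")
plt.tight_layout()
plt.show()
```

Same dots, two very different impressions.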
The first problem comes in the plotting: it shouldn’t have been done. Making a time-series plot tells viewers that these data are the same, that they are caused by the same thing. But here the data are not the same and are not caused by the same thing.
Consider. These are reported deaths, and there is some ambiguity between “direct” and “indirect” deaths. Given that our media are obsessed with all things environmental, it’s unlikely the counts for the past few decades miss any deaths, though some indirect deaths may have been mistakenly classified as direct.
Historically, the counts are probably too low. Not every death would have been reported, especially in rural areas, and the distinction between the two kinds of death was more tangled.
The idea of normalizing by population makes some sense, but by the entire US population? How many tornadoes are found in Alaska, Wyoming, or even California, where the population change was largest? It would have been better to examine the population density in the places tornadoes actually hit.
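To make the denominator problem concrete, here is a toy calculation with entirely made-up numbers (the 60 million for “tornado country” is my invention):

```python
# All numbers below are hypothetical, for illustration only.
deaths = 550                 # made-up annual direct-death count
us_pop = 316e6               # rough 2012 US population
tornado_region_pop = 60e6    # made-up population of tornado-prone states

print(deaths / us_pop * 1e6)              # ~1.7 deaths per million, nationally
print(deaths / tornado_region_pop * 1e6)  # ~9.2 per million where tornadoes strike
```

Same deaths, wildly different rates, depending on which population you divide by.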
Medicine, particularly emergency medicine, has improved immensely over the past fifty years. This would tend to lower deaths.
Housing construction both improved and degraded. Normal, “stick-built” houses got better, but they tend latterly to be built in clusters, and when a tornado hits a cluster, well, you know what happens (see that bump in 2011?). In 1875 there were no trailer parks (Brooks and Doswell have a nice sub-analysis of trailer-park deaths). The overall effect of housing changes can only be a crude guess unless we examine each death and each non-death in detail, a gargantuan task.
Our friend David Legates reminds us that meteorology was barely a science a century ago. Warnings now, especially daylight warnings in tornado-prone areas, are pretty darn good.
Because of all these and a few more considerations, it’s clear that the data are not the same through time, even though they have been given the same name throughout. There is therefore no justification for any kind of statistical model, especially a smoother.
Smoothers replace the data with guesses of the data, a screwy thing to do. Why substitute uncertainty for certainty? And here the data have measurement error in the counts themselves, quite apart from the uncertainty in why the counts changed. Why they changed isn’t justifiably quantifiable.
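To see what a smoother actually does, consider this sketch with made-up counts; a plain running mean stands in here for whatever smoother Brooks used, which I do not know:

```python
# Fake data and a simple running mean, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1875, 2013)
obs = rng.poisson(lam=5, size=years.size).astype(float)  # made-up counts

window = 11                          # arbitrary smoothing window
kernel = np.ones(window) / window
smoothed = np.convolve(obs, kernel, mode="same")  # running mean

# Every "smoothed" value differs from the observation it replaces:
print(np.mean(np.abs(smoothed - obs)))
```

The red curve in such plots is not the data; it is the output of a model that replaced the data.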
The regression is also misplaced. We don’t need it to “estimate” counts (or percentiles of counts) that we already know. There is a case to be made for a measurement-error model, but to implement it we’d have to know the characteristics of the missing data, which we don’t have.
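And here is a sketch of the kind of log-linear fit-and-extend that produces those cyan projections, again with invented data; the ease of it is exactly the trouble:

```python
# Invented data; the fit-and-project step mimics the criticized procedure.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1875, 2013)
rate = np.exp(-0.01 * (years - 1875)) * rng.lognormal(0.0, 0.5, years.size)

slope, intercept = np.polyfit(years, np.log(rate), deg=1)
future = np.arange(2013, 2051)
projection = np.exp(intercept + slope * future)  # pontifical projection

print(projection[:5])  # confident-looking numbers about years nobody measured
```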
Finally, as repeatedly emphasized, the nature of the cause of “direct” deaths has changed in ways that no model which isn’t mostly fiction can quantify. No: the fairest thing to do is to present the data in tabular or descriptive form and avoid unnecessary quantification, which only serves to boost over-certainty.
Thanks to our friend Willie Soon for alerting us to this topic.