Word is going round that Richard Muller is leading a group of physicists, statisticians, and climatologists to re-estimate the yearly global average temperature, from which we can say such things as this year was warmer than last but not warmer than three years ago. Muller’s project is a good idea, and his team is certainly up to it.
The statistician on Muller’s team is David Brillinger, an expert in time series, which is just the right genre to attack the global-temperature-average problem. Dr Brillinger certainly knows what I am about to show, but many of the climatologists who have used statistics before do not. It is for their benefit that I present this brief primer on how not to display the eventual estimate. I only want to make one major point here: that the common statistical methods produce estimates that are too certain.
I do not want to provide a simulation of every aspect of the estimation project; that would take just as long as doing the real thing. My point can be made by assuming that I have just N stations from which we have reliably measured temperature, without error, for just one year. The number at each station is the average temperature anomaly at that station (an “anomaly” takes the actual arithmetic average and subtracts a constant from it; the constant itself is not important, and the analysis is unaffected by it).
Our “global average temperature” is to be estimated in the simplest way: by fitting a normal distribution to the N station anomalies (the actual distribution used affects the details, but not the major point I wish to make). I simulate the N stations by generating numbers with a central parameter of 0.3, a spread parameter of 5, and degrees of freedom equal to 20 (once again, the actual numbers used do not matter to the major point).
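The simulation described above can be sketched in a few lines. This is my own reconstruction, not the author’s code: I assume the “central/spread/degrees of freedom” generator is a shifted, scaled t-distribution, which matches those three parameters.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_stations(n, center=0.3, scale=5.0, df=20):
    """Draw n station anomalies from a shifted, scaled t-distribution
    (my assumption for the generator with center 0.3, spread 5, df 20)."""
    return center + scale * rng.standard_t(df, size=n)

anomalies = simulate_stations(100)
print(anomalies.mean(), anomalies.std(ddof=1))
```

The exact seed and output do not matter; any run gives a sample mean somewhere near 0.3 and a sample spread somewhere near 5.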
Assume there are N = 100 stations, simulate the data, and fit a normal distribution to them. One instance of the posterior distribution of the parameter estimating the global mean is pictured. The most likely value of the posterior is at the peak, which is (as it should be) near 0.3. The parameter almost surely lies between 0.1 and 0.6, since that is where most of the area under the curve is.
Now let’s push the number of stations to N = 1000 and look at the same picture:
We are much more certain of where the parameter lies: the peak is in about the same spot, but the variability is much smaller. Obviously, if we were to continue increasing the number of stations the uncertainty in the parameter would disappear. That is, we would have a picture which looked like a spike over the true value (here 0.3). We could then confidently announce to the world that we know the parameter which estimates global average temperature with near certainty.
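The narrowing can be seen numerically without any plot. Under a normal model with a flat prior, the posterior for the central parameter is approximately Normal(x̄, s/√N), so its standard deviation shrinks like 1/√N. A minimal sketch, reusing the same assumed t-distribution generator:

```python
import numpy as np

rng = np.random.default_rng(1)

for n in (100, 1000, 10_000, 100_000):
    x = 0.3 + 5.0 * rng.standard_t(20, size=n)   # simulated station anomalies
    post_sd = x.std(ddof=1) / np.sqrt(n)         # approx. posterior sd of the mean parameter
    print(f"N = {n:>6}: posterior sd of mean ~ {post_sd:.4f}")
```

Each tenfold increase in stations cuts the parameter's posterior spread by roughly √10, which is why the spike over 0.3 appears in the limit.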
Are we done? Not hardly.
Although we would know, with extremely high confidence, the value of one of the parameters of the model we used to model the global average temperature, we still would not know the global average temperature. There is a world of difference between knowing the parameter and knowing the observable global average temperature.
Here then is the picture of our uncertainty in the global average temperature, given both N = 100 and N = 1000 stations.
Adding 900 more stations reduced our uncertainty in the actual temperature only slightly (and the difference between these two curves is just as likely due to the different simulations). But even if we were to have 1 million stations, the uncertainty would never disappear. There is a wall of uncertainty we hit and cannot breach. The curves will not narrow.
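The wall is easy to see in the algebra. Under the normal model, the posterior predictive spread for an actual anomaly is roughly √(s² + s²/N): the s²/N piece vanishes as stations are added, but the s² piece, the real scatter of temperatures, never does. A hedged sketch with the same assumed generator:

```python
import numpy as np

rng = np.random.default_rng(2)

for n in (100, 1000, 1_000_000):
    x = 0.3 + 5.0 * rng.standard_t(20, size=n)
    s = x.std(ddof=1)
    pred_sd = np.sqrt(s**2 + s**2 / n)  # predictive sd: dominated by s itself
    print(f"N = {n:>7}: predictive sd ~ {pred_sd:.3f}")
```

The predictive spread barely budges between 100 and 1,000,000 stations, while the parameter's spread (previous sketch) collapses toward zero.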
The real, observable temperature is not the same as the parameter. The parameter can be known exactly, but the observable actual temperature can never be.
The procedure followed here (showing posterior predictive distributions) should be the same for estimating “trend” in the year-to-year global average temperatures. Do not tell us of the uncertainty in the estimate of the parameter of this trend. Tell us instead what the uncertainty in the actual temperatures is.
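The same contrast shows up in an ordinary least-squares trend fit. This toy example is mine, not the author's: the standard error of the slope parameter is small, but the standard error for predicting an actual future temperature stays large because it must include the residual scatter.

```python
import numpy as np

rng = np.random.default_rng(3)

years = np.arange(50)
temps = 0.02 * years + rng.normal(0, 0.3, size=years.size)  # toy anomalies: trend + noise

# Ordinary least squares by hand
X = np.column_stack([np.ones_like(years, dtype=float), years])
beta, *_ = np.linalg.lstsq(X, temps, rcond=None)
resid = temps - X @ beta
s2 = resid @ resid / (len(years) - 2)        # residual variance estimate

XtX_inv = np.linalg.inv(X.T @ X)
slope_se = np.sqrt(s2 * XtX_inv[1, 1])       # uncertainty in the trend *parameter*
x_new = np.array([1.0, 50.0])                # the next year
pred_se = np.sqrt(s2 * (1 + x_new @ XtX_inv @ x_new))  # uncertainty in the actual temperature

print(f"slope se:      {slope_se:.4f}")
print(f"prediction se: {pred_se:.4f}")
```

The slope's standard error keeps shrinking with more years of data; the prediction's standard error is bounded below by the residual spread, exactly the wall described above.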
This is the difference between predictive statistics and parameter-based statistics. Predictive statistics gives you the full uncertainty in the thing you want to know. Parameter-based statistics only tells you about one parameter in a model; and even if you knew the value of that parameter with certainty, you still would not know the value of the thing you want to know: in our case, temperature. Parameters be damned! Parameters tell us about a statistical model, not about a real thing.
Update See too the posts on temperature on my Stats/Climate page.
Update to the Update Read the post linked to above. Mandatory.