David Lavers is the lead author of a GRL paper, “A multiple model assessment of seasonal climate forecast skill for applications.”
Lavers et al. checked eight different climate models and found that, “Results suggest there is a deficiency of skill in the forecasts beyond month-1, with precipitation having a more pronounced drop in skill than temperature.”
Nature magazine summarized:
‘Skill’ is the degree to which predictions are more accurate than simply taking the average of all past weather measurements for a comparable period.
…existing climate models show very little accuracy more than one month out. Even during the first month, predictions are markedly less accurate for the second half than the first. Current models simply cannot account for the chaotic nature of climate, researchers say.
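That definition of skill can be put in numbers. Here is a minimal sketch, with made-up temperatures, of the standard mean-squared-error skill score: the model's error compared against the error of "climatology," i.e. the average of past observations. A score of 1 is perfect, 0 is no better than climatology, and negative is worse.

```python
# A minimal sketch of a skill score, using hypothetical numbers.
# "Climatology" is the naive reference forecast: the long-run average
# of past observations for a comparable period.
import statistics

past_julys = [21.0, 22.5, 20.8, 23.1, 21.9, 22.2]  # hypothetical past July temps
climatology = statistics.mean(past_julys)           # the naive reference forecast

observed = 24.0   # what actually happened this July (made up)
forecast = 23.4   # what the model predicted (made up)

mse_model = (forecast - observed) ** 2
mse_clim = (climatology - observed) ** 2

# MSE skill score: 1 is perfect, 0 is no better than climatology,
# negative is worse than climatology.
skill = 1 - mse_model / mse_clim
print(round(skill, 2))  # 0.92 -- this toy forecast beats climatology
```

In practice the errors are averaged over many forecast-observation pairs, not one, but the comparison against climatology is the same.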
My friend and former advisor Dan Wilks at Cornell (who wrote the most influential meteorological statistics textbook) completed a similar analysis about a decade ago and found much the same thing.
I’ve done work on earlier versions of these climate models myself (peer-reviewed!). Things have not really changed. The climate is so complicated that these models just aren’t that good.
But these seasonal models aren’t necessarily the same as the global climate models that have people so flustered about Global “Don’t Call Me Climate Change” Warming. Some seasonal models are more statistical, some more physical. But they all try to guess the future, with, as we now know, limited success.
At the same time, Kevin Trenberth, an IPCCer, announces that the 2013 AR5 report will have at least one chapter “devoted to assessing the skill of climate predictions for timescales out to about 30 years.”
I’m guessing he’s using the word “skill” in a different way than is usual. In order to check for skill, we need two things: independent model predictions over a period of years, and the subsequent observations for those years.
The key word is “independent.” The models have to forecast data that was never before seen. It is no good—no good at all—to show how well a model “forecast” data it already knew. You can’t fit the model to data on hand and then show how close that model is to the old data. That’s cheating.
Every statistician knows this is a no-know. And by “no-know”, I mean that we cannot know how good the model actually is until it shows it can make accurate predictions of new data. (It’s not just climate models that suffer from a lack of independent predictions: most statistical models are like this. See yesterday’s post.)
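The point can be shown with a toy example, using invented data. A model that simply memorizes the data it was fit to produces a perfect "hindcast" of that data, yet has no skill at all on data it has never seen:

```python
# A sketch, with made-up data, of why in-sample fit is not skill.
# A model that memorizes the training data "forecasts" it perfectly,
# yet does no better than chance on data it has never seen.
import random
random.seed(1)

# Hypothetical yearly anomalies: pure noise, so nothing is predictable.
train = [random.gauss(0, 1) for _ in range(30)]
test = [random.gauss(0, 1) for _ in range(30)]

def memorizing_model(i):
    """'Forecast' year i by looking up the value it was fit to."""
    return train[i]

in_sample_mse = sum((memorizing_model(i) - train[i]) ** 2 for i in range(30)) / 30
out_sample_mse = sum((memorizing_model(i) - test[i]) ** 2 for i in range(30)) / 30

print(in_sample_mse)   # 0.0 -- a flawless "hindcast"
print(out_sample_mse)  # large -- no better than guessing on new data
```

The perfect in-sample fit tells you nothing; only the out-of-sample error measures what the model knows.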
I stress “independent”, because of Trenberth’s use of “30 years.” In order to reach that figure and make it into the 2013 report, climate models set in stone and unchanged in 1980 would have had to make yearly predictions out to 2010. That’s 30 years.
If those set-in-stone-1980 models beat average-climate forecasts, then climate modelers will be able to tout actual skill.
But since no climate model has sat itself in stone since 1980, there does not exist 30 years’ worth of independent evidence. We will still be relying on how close those models fit the already-observed data. The closeness of that fit will be touted as “skill.”
But I’m guessing. Maybe Jim Hansen created a model in 1980 that has secretly been making predictions all along. This will be revealed in 2013. If so, and if that model does have skill, and if it continues to predict increasing temperature, I will personally write emails to every global warming skeptic telling them to cease and desist their efforts. (Technical note: a reanalysis doesn’t count.)
Actually, it’s even more complicated than this. That “1980” model might have made predictions for 1981, 1982, and so on. At the end of 1981, the “1980” model would remain fixed, as far as the physics and model code are concerned, but it would be fed the data observed from 1981. This data-updated, but computer-code-fixed, model would make a new one-year-ahead (1982) prediction, plus a two-year-ahead (1983) prediction, and so forth.
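The scheme just described can be sketched in a few lines. The model below is a toy stand-in, not any real climate model: its code is frozen, but each year it is re-initialized with the newly observed data and issues forecasts at several lead times.

```python
# A hypothetical sketch of a frozen model making rolling forecasts:
# the physics/code never changes, but each year the model is fed the
# latest observation and re-issues lead-1, lead-2, ... predictions.
def frozen_model(last_observed, lead):
    """A fixed toy model (assumed for illustration): it damps the
    last observed anomaly toward zero, more strongly at longer leads."""
    return last_observed * (0.5 ** lead)

observations = {1980: 0.8, 1981: 1.1, 1982: 0.6}  # made-up anomalies

forecasts = {}  # (origin_year, target_year) -> forecast
for origin, value in observations.items():
    for lead in (1, 2):
        forecasts[(origin, origin + lead)] = frozen_model(value, lead)

# From origin 1981 the model predicts 1982 (lead 1) and 1983 (lead 2),
# using the freshly observed 1981 value but the same frozen code.
print(forecasts[(1981, 1982)])  # 0.55
print(forecasts[(1981, 1983)])  # 0.275
```

Every forecast here is genuinely independent of its target: the model never sees the data it is predicting, only the data already observed at the forecast origin.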
All experience shows that the shorter the lead time, the higher the skill. Lavers found that in the seasonal models, as Wilks did, as I did, as everybody does.
Such a thing might exist, but I have never seen a climate or weather model whose forecasts improved as the lead time increased. There is no earthly reason to expect global warming models will be any different. Trenberth would agree with this.
The point is that few, if any, genuine ten-year-ahead forecasts can exist. Climate models just aren’t that old. Worse, even if these forecasts did exist, and even if they were as accurate as you like, it would be nearly impossible to use them to prove skill.
Skill is a statistical, probabilistic, measure. In order to say skill exists with any confidence, a reasonable sample size must exist. Say—just guessing here—at least twenty. That means we would have to have, in hand, twenty separate independent predictions.
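Why twenty or so? A back-of-the-envelope calculation shows it. Treat each independent forecast as a coin flip: did it beat climatology or not? Under "no skill" each outcome is 50/50, and we can ask how many wins out of 20 would be surprising at the usual 5% level:

```python
# A rough sketch of why about twenty independent forecasts are needed.
# Under the "no skill" hypothesis, beating climatology on any one
# forecast is a 50/50 coin flip. Find the smallest number of wins out
# of n = 20 that luck alone would produce less than 5% of the time.
from math import comb

n = 20
for wins in range(n + 1):
    # One-sided p-value: chance of at least this many wins by luck alone.
    p = sum(comb(n, k) for k in range(wins, n + 1)) / 2 ** n
    if p < 0.05:
        print(wins, round(p, 3))  # 15 0.021
        break
```

So even with twenty independent forecasts in hand, the model must beat climatology fifteen times before we could claim skill with any confidence; with only a handful of forecasts, no result could be distinguished from luck.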
We don’t have that. Climate models have not remained static. We do not know whether they have skill. And all experience with other, similar, physical models suggests that demonstrating skill is tough.
So why do people believe in the models so strongly? Because they fit the old data (not perfectly, but pleasingly). But any statistician can tell you that this is no great trick.