It is a sure sign that Sanity has packed her bags and headed for the door when otherwise sober scientists begin slinging around terms like “denier” and “denialist.” Language like this displays willful, pretended, or real ignorance of the historical context of these words. Anybody who talks like this makes himself an ass. Them’s fightin’ words, which start any discussion on an angry footing; their presence is a certain indication we are dealing with zealotry, not science.
Let’s look again at the claim made by the scientists at the Wall Street Journal, over which many have popped their corks:
The lack of warming for more than a decade—indeed, the smaller-than-predicted warming over the 22 years since the U.N.’s Intergovernmental Panel on Climate Change (IPCC) began issuing projections—suggests that computer models have greatly exaggerated how much warming additional CO2 can cause.
There are two claims made here. Given the observational evidence we have, both claims appear true. The first (A) is that for the last ten years it has not grown warmer. Since it has grown warmer in some places and colder in others, this is evidently a claim about some global average and not any individual station. The second claim (B) says that the IPCC forecasts have been systematically too large: it is also concerned with some global average.
Both of these claims are quantitative and subject to easy verification. A person’s politics surely has no bearing on whether they are true or false. Now, the “global average” referenced is not a static thing; it is not as if measurements from identical (and identically situated) thermometers at fixed locations were averaged together and called (arbitrarily, of course) the global average. Instead, the global average as it is operationally defined mixes sources and locations freely each year (and even within years). Therefore, when the “average” is computed there will be some uncertainty in it. Further, the uncertainty is larger in times historical than in times present. (There is even some uncertainty at individual locations, because no measurement apparatus is perfect, but this is generally small, though not always, especially in the past or when using proxies: see this series.)
The BEST people, for instance, recognized this and attempted to account for measurement uncertainty by speaking not just of averages, but of averages plus-or-minus. We can, and I did, argue over the better way to calculate and display this uncertainty. All we need to understand here is that some techniques underestimate this uncertainty. Actually, we don’t even need to agree about that: but we do need to see that some uncertainty is present, however small.
This is necessary because if we make claim (A), as the WSJ fellows did, we need to take uncertainty over the global average into account or we cannot know whether the claim is true or false. It is at this point when a lack of understanding of statistics can become a real hindrance. Sloppy language also hurts immeasurably. Let’s work through this slowly.
Suppose we have ten years of uncertainty-free global average temperature measurements. We can line them up and ask questions of this series. Was the temperature ten years ago warmer or colder than the temperature this year? All we have to do is look: it will be true or false at a glance. Was the temperature nine years ago warmer or colder than this year? True or false at a glance. And so on.
What does this mean in the context of claim (A)? Well, (A) says that temperatures have not gone up over the last decade. To verify this, all we need do is look to see if any of the temperatures of the last decade are lower than they are this year. If any are, the claim is false. If none are, the claim is true.
Maybe. Because claim (A) can also be taken to mean that at no time over the last decade have the temperatures increased (they could have stayed constant from year-to-year). Again, we can verify this claim with a glance at the data.
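If the data really were uncertainty-free, both readings of claim (A) could be checked at a glance, or mechanically. A minimal sketch, using an entirely invented ten-year series (the numbers are mine, not real measurements):

```python
# Two readings of claim (A) against a hypothetical, uncertainty-free
# series of ten yearly global averages (invented numbers, last = this year).
temps = [14.41, 14.56, 14.70, 14.64, 14.60, 14.77, 14.64, 14.67, 14.68, 14.66]

this_year = temps[-1]

# Reading 1: this year is no warmer than any earlier year of the decade.
# False if any earlier year's value is lower than this year's.
reading1 = all(t >= this_year for t in temps[:-1])

# Reading 2: temperatures never increased from one year to the next.
reading2 = all(later <= earlier for earlier, later in zip(temps, temps[1:]))

print(reading1, reading2)
```

With these made-up numbers both readings come out false; the point is only that, absent uncertainty, each reading is a yes-or-no question about the data itself.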
Which of these definitions is right? Evidently neither, because we all understand that the temperatures have some uncertainty in them. Because of that, we cannot just look at the data to say whether it has gone up or down; we instead have to speak of changes in probabilistic terms. And that means hauling in some kind of model.
The simplest (but not so good) model is to imagine each year’s data is irrelevant to knowing any other year’s data. That is, we take this year’s data and display it as an average with, say, a plus-or-minus attached to indicate our uncertainty in it. That plus-or-minus can only come from some kind of probability model, meaning that the range of uncertainty will change when the model changes. Which is the best and most proper model? Nobody knows. But let’s imagine we all agree on one, such that displayed before us is a temperature series of averages and plus-or-minuses.
Now, if claim (A) means that temperatures this year are less than or equal to temperatures ten years ago, then we can make a comparison as before, but our comparison will be accompanied by a measure of uncertainty. Using predictive techniques (yes, this is the proper word: see this series), we can ask questions like, “Given the data and assuming our model is true, what is the probability this year’s temperature is less than or equal to temperatures ten (or nine, etc.) years ago?” Notice that this is not the same as a “t-test” or any other kind of statement about parameters of probability models: it is a statement about observable temperatures.
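The predictive question can be sketched with a toy simulation. The model here is my stand-in, not the one described above: I assume each measured average is the true value plus normal error with a known standard deviation, and every number is hypothetical.

```python
# Toy predictive probability that this year's true temperature is less than
# or equal to the true temperature ten years ago, given noisy measurements.
# Error model (normal, known sigma) and all numbers are assumptions.
import random

random.seed(1)

measured_then = 14.60   # measured global average ten years ago (invented)
measured_now = 14.66    # measured global average this year (invented)
sigma = 0.10            # assumed measurement-error standard deviation

# Draw plausible true values consistent with each measurement and count
# how often "now" fails to exceed "then".
n = 100_000
hits = sum(
    random.gauss(measured_now, sigma) <= random.gauss(measured_then, sigma)
    for _ in range(n)
)
print(round(hits / n, 2))
```

Note the question is about the observable temperatures themselves, not about a parameter of the model; that is what makes it predictive rather than a t-test.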
Or, if claim (A) means that temperatures did not increase even once over ten years, then we can get the probability of this just as simply. In support of either version of claim (A), I said that we cannot know with probability greater than 90% that temperatures have increased (over this last decade). In other words, it is likely that claim (A) is true.
This is so using the probability model I indicated. But what if we instead change the model to a linear regression—i.e. a straight line—drawn through the data? Well, we could go through the same steps and ascertain claim (A) in light of this model. But before we can begin we have several things to decide. Why a straight line? Just because it’s easy? Lazy, that. From what year do we start? See this post for the ways that choice can lead you wrong. Do we start with a date (as I joked) in the Jurassic? Or, for fun, in 1973? Every different start date will give a different answer. I will repeat that: every different start date will give a different answer. It is also a stretch, to say the least, to assume temperature always has been increasing in a straight line from whatever start date we pick. (Before the politicization of this subject, every physical scientist would have agreed with that last statement.)
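The start-date sensitivity is easy to demonstrate. Below, the same invented series is fed to an ordinary least-squares fit from three different starting years; the series and the years are hypothetical, chosen only to show that the slope depends on where you begin.

```python
# Fit a least-squares line to the same (invented) yearly series from
# several start dates and compare the slopes. Stdlib only.
def slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

years = list(range(1990, 2012))
temps = [14.2, 14.3, 14.1, 14.2, 14.3, 14.4, 14.3, 14.5, 14.6, 14.4,
         14.4, 14.5, 14.6, 14.6, 14.5, 14.6, 14.6, 14.5, 14.4, 14.5,
         14.6, 14.5]

for start in (1990, 1998, 2005):
    i = years.index(start)
    print(start, round(slope(years[i:], temps[i:]), 4))
```

Each start date yields a different slope on identical data, which is the whole trouble: the “trend” is partly an artifact of the choice.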
But suppose we do agree on a date: 1964, say, a very fine year. Are we done? No, because we cannot forget that the data that goes into the straight-line model is still measured with uncertainty. We must, just as we did in the first model, account for this uncertainty. That means drawing any kind of naive line (even bold red ones) guarantees over-certainty.
Even if we were to agree on a date—in real life we do not—we could use a model of the measurement error, incorporate that into the model of straight-line change, and then assess claim (A): it is still probably true.
The best thing to do is to model the data in an intelligent way, taking into account the correlations of year-to-year (both auto-regressive and moving average), the measurement error, etc., etc. Hard work! As Doug Keenan has pointed out (often), it’s too much like work for anybody to do. I’d do it myself, but my check from Big Oil hasn’t yet arrived.
Whatever else you do in life, you must not, you must never, look at the pretty red (or blue, etc.) straight line you have just drawn and claim it is, or think of it as, the real data. (It is only in climatology where I have seen scientists forget error bars, and then pitch a fit when somebody points out the omission. You at least have to put predictive, and not parameters-based, error bars on the line, even ignoring measurement uncertainty of the data.)
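The difference between parameter-based and predictive error bars can be seen in the standard least-squares formulas. A sketch with invented data (and ignoring, as noted, the measurement uncertainty of the data itself):

```python
# Contrast the standard error of the fitted line (parameter-based) with the
# standard error of a new observation (predictive) at a future point.
# Data are invented; plain least squares, stdlib only.
import math

xs = list(range(20))
ys = [14.3 + 0.01 * x + 0.1 * math.sin(3.7 * x) for x in xs]  # toy "data"

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
sxx = sum((x - mx) ** 2 for x in xs)
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
a = my - b * mx
s2 = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / (n - 2)

x0 = 25  # a future year
se_line = math.sqrt(s2 * (1 / n + (x0 - mx) ** 2 / sxx))       # line only
se_pred = math.sqrt(s2 * (1 + 1 / n + (x0 - mx) ** 2 / sxx))   # new observation
print(round(se_pred / se_line, 2))
```

The predictive band is always the wider of the two, because it must cover a new data point and not merely the fitted line; quoting only the narrower band is one route to over-certainty.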
What about claim (B)? Also likely true, as is generally recognized. We still have to incorporate the uncertainty in the global temperature measurements (there is little or no uncertainty in the forecasts themselves), but this is no different than before.
What about the counter-claim (C) that the 2000s were the “warmest years on record” or the like? It is trivially false. The 2000s simply were not the warmest. Four billion years ago, Earth was much hotter. “Wait! It’s obvious we weren’t talking about billions of years ago. Cheater! Denier!” Well, it isn’t obvious. What years did you have in mind as comparators? Ah, that’s the real question, isn’t it.
Did we mean just the last century? The last 1,000 years? The last 10,000? What? You must supply a starting year. To make the claim (C) that it’s hotter now than before, you must tell us what you mean by before. If you say “before” means the last ten years, then claim (C) is identical with claim (A). If you say the last 200 years, then you have to do what BEST tried and incorporate the non-parameter error bars; otherwise there is no way to compare what happened a century ago with what happened last year. Obviously, the further you go back, the larger those uncertainty bars become, and therefore the more difficult it becomes to claim (with any certainty) that now is hotter than then.
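The effect of those widening bars can be sketched numerically. In the toy calculation below the measured difference between “now” and “then” is held fixed while only the historical measurement uncertainty grows; the normal error model and every number are my assumptions, for illustration.

```python
# As the uncertainty on the historical comparator grows, the probability
# that "now" is truly hotter than "then" shrinks toward a coin flip,
# even though the measured gap never changes. All numbers invented.
import random

random.seed(7)

measured_now, sigma_now = 14.66, 0.05
cases = [("10 years ago", 14.55, 0.05),
         ("100 years ago", 14.55, 0.25),
         ("1000 years ago", 14.55, 0.60)]

n = 100_000
probs = []
for label, measured_then, sigma_then in cases:
    hits = sum(random.gauss(measured_now, sigma_now) >
               random.gauss(measured_then, sigma_then) for _ in range(n))
    probs.append(hits / n)
    print(label, round(hits / n, 2))
```

Same measured gap, three very different levels of confidence: the comparison a millennium back is close to a toss-up.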
As I often say, over-certainty abounds in this field. People speak of models (statistical and physical) as if they were truth, as if the data that goes into them were granted some kind of special immunity from ordinary criticism. And when the critiques come, that’s when the asinine language breaks out. All sense of humor evaporates.
You would think that because both claims (A) and (B) are likely true (and claim (C) is unproved or likely false) we have found a reason to celebrate! Perhaps our worst fears won’t be realized after all. This is good news! Wouldn’t it be great if we really did over-emphasize feedback in climate models and whatever changes we do make to the climate are easily mitigated and not as horrific as posited?
Why so glum that things are so good?
Update: See this cartoon, which shows that the IPCC has been known to employ the technique of variable start dates.
Update: It is imperative that all read this series, where I describe just how so many people make mistakes. Those below who have been shouting the loudest are most in need.