Homogenization of temperature series: Part V, The real Grand Finale

Be sure to see: Part I, Part II, Part III, Part IV, Part V

Much of what we discussed—but not all—is in this picture. Right click and open it in a new window so that you can follow along.

[Figure: homogenization_small – simulated series A and B, with homogenized values of A, trend lines, and the parametric vs. predictive error bounds discussed below.]

We have two temperature series, A and B. A is incomplete and overlaps B only about 30% of the time. A and B officially stop at year “80”. We want to know one main thing: what will be the temperatures at A and B for the years 81 – 100?

Official homogenization of A commences by modeling A’s values as a function of B’s. There are no auto-correlations to worry about, because by design there are none: the data are entirely simulated. A and B were generated by a multivariate normal distribution with a fixed covariance matrix. In plain English, this means the two series are correlated with each other, but not with themselves through time. Plus, any trends are entirely spurious and coincidental.
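For readers who want to play along, here is a minimal sketch in Python of how such data could be generated (the seed, means, and covariance are my own illustrative choices, not the values behind the figure):

```python
import numpy as np

rng = np.random.default_rng(42)   # fixed seed so the sketch is reproducible

n_years = 80
years = np.arange(1, n_years + 1)

# Each year is an independent draw from a bivariate normal: A and B are
# correlated with each other, but not with themselves through time.
mean = np.array([10.0, 12.0])     # illustrative means for A and B
cov = np.array([[1.0, 0.7],
                [0.7, 1.0]])      # fixed covariance; correlation 0.7
draws = rng.multivariate_normal(mean, cov, size=n_years)
A_full, B = draws[:, 0], draws[:, 1]

# A is incomplete: pretend years 1-49 were never observed.
obs = years >= 50                 # the overlap period
A = np.where(obs, A_full, np.nan)
```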

Thus, a linear model of A predicted by B is adequate and correct. That is, we do not have to worry, as we do in real life, that the model A = f(B) is misspecified. In real life, as I mentioned in earlier posts, there is additional uncertainty due to us not knowing the real relationship between A and B.

We also are lucky that the dates are fixed and non-arbitrary. Like I said earlier, picking the starting and stopping dates in an ad hoc manner should add additional uncertainty. Most people ignore this, so we will, too. We’re team players, here! (Though doing this puts us on the losing team. But never mind.)

Step 1 is to model A as a function of B to predict the “missing” values of A, the period from year 1 – 49. The result is the (hard-to-read) dashed red line. But even somebody slapped upside the head with a hockey stick knows that these predictions are not 100% certain. There should be some kind of plus or minus bounds. The dark red shaded area is the classical 95% parametric error bound, spit right out of the linear model. These parametric bounds are the ones (always?) found in reporting of homogenizations (technically: these are the classical predictive bounds, which I call “parametric”, because the classical method is entirely concerned with making statements about non-observable parameters; why this is so is a long story).
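In code, both kinds of bounds fall out of the same fitted model. A sketch using statsmodels (the `summary_frame` call reports both: `mean_ci_*` are the narrow parametric bounds, `obs_ci_*` the predictive ones discussed below; for a normal linear model with flat priors, the classical prediction interval coincides numerically with the posterior-predictive interval):

```python
import statsmodels.api as sm

# Fit A = f(B) on the overlap years, where both series are observed.
X_obs = sm.add_constant(B[obs])
model = sm.OLS(A[obs], X_obs).fit()

# Predict the "missing" years 1-49 from B.
X_miss = sm.add_constant(B[~obs])
pred = model.get_prediction(X_miss).summary_frame(alpha=0.05)

# mean_ci_*: parametric bounds on the regression line (too narrow).
# obs_ci_*:  predictive bounds for actual new values of A.
print(pred[["mean", "mean_ci_lower", "mean_ci_upper",
            "obs_ci_lower", "obs_ci_upper"]].head())
```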

Problem is, like I have been saying, they are too narrow. Those black dots in the years 1 – 49 are the actual values of A. If those parametric error bounds were doing their job, then about 95% of the black dots would be inside the dark red polygon. This is not the case.

I repeat: this is not the case. I emphasize: it is never the case. Using parametric confidence bounds when you are making predictions of real observables is sending a mewling boy to do a man’s job. Incidentally, climatologists are not the only ones making this mistake: it is rampant in statistics, a probabilistic pandemic.

The predictive error bounds, also calculated from the same A = f(B), are the pinkish bounds (technically: these are the posterior-predictive credible intervals). These are doing a much better job, as you can see, and aren’t we happy. The only problem is that in real life we will never know those missing values of A. They are, after all, missing. This is another way of stating that we do not really know the best model f(B). And since, in real life, we do not know the model, we should realize our error bounds should be wider still.
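Both claims are easy to check in the simulation, where we secretly kept the true values of A: count how many land inside each set of bounds (continuing the sketch above; exact percentages vary with the seed):

```python
actual = A_full[~obs]   # the "missing" values, known only because we simulated

in_conf = ((actual >= pred["mean_ci_lower"].to_numpy()) &
           (actual <= pred["mean_ci_upper"].to_numpy())).mean()
in_pred = ((actual >= pred["obs_ci_lower"].to_numpy()) &
           (actual <= pred["obs_ci_upper"].to_numpy())).mean()

print(f"coverage of parametric bounds: {in_conf:.0%}")   # typically far below 95%
print(f"coverage of predictive bounds: {in_pred:.0%}")   # typically about 95%
```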

Our homogenization of A is complete with this model, however, by design. Just know that if we had missing data in station B, or changes in location of A or B, or “corrections” in urbanization at either, or measurement error, all our error bounds would be larger. Read the other four parts of this series for why this is so. We will be ignoring—like the climatologists working with actual data should not—all these niceties.

Next step is to assess the “trend” at B, which I have already told you is entirely spurious. That’s the overlaid black line. This is estimated from the simple—and again correct by design—model B = f(year). Our refrain: in real life, we would not know the actual model, the f(year), and etc., etc. The guess is 1.8°C per century. Baby, it’s getting hot outside! Send for Greenpeace!
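The trend is just one more regression. A sketch (the 1.8°C per century figure came from the author’s particular simulated draw; this illustrative code will yield a different, equally spurious number):

```python
# Fit B = f(year); the slope is the "trend", spurious by construction.
X_year = sm.add_constant(years.astype(float))
trend_B = sm.OLS(B, X_year).fit()

slope = trend_B.params[1]         # degrees per year
print(f"estimated trend at B: {slope * 100:.2f} degrees per century")
```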

Now we want to know what B will be in the years 81 – 100. We continue to apply our model B = f(year), and then mistakenly—like everybody else—apply the dark-blue parametric error bounds. Too narrow once more! They are narrow enough to induce check-writing behavior in Copenhagen bureaucrats.

The accurate, calming, light-blue predictive error bounds are vastly superior, and tell us not to panic, we just aren’t that sure of ourselves.
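Continuing the trend sketch, extending the fitted model to years 81 – 100 produces both sets of bounds at once; only the `obs_ci_*` columns deserve your trust:

```python
# Extrapolate B = f(year) to the unobserved years 81-100.
future = np.arange(81, 101, dtype=float)
X_future = sm.add_constant(future)
forecast = trend_B.get_prediction(X_future).summary_frame(alpha=0.05)

# mean_ci_*: the too-narrow dark-blue parametric bounds.
# obs_ci_*:  the honest light-blue predictive bounds.
print(forecast[["mean", "mean_ci_lower", "mean_ci_upper",
                "obs_ci_lower", "obs_ci_upper"]])
```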

How about the values of A in the years 81 – 100? The mistake would be to use the observed values of A from years 50 – 80 augmented by the homogenized values in the years 1 – 49 as a function of year. Since everybody makes this error, we will too. The correct way would be to build a model using just the information we know—but where’s the fun in that?

Anyway, it’s the same story as with B, except that the predictive error bounds are even larger (percentage-wise) than with B, because I have taken into account the error in estimating A in the years 1 – 49.
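One way to see where that extra width comes from is a small Monte Carlo, a sketch under the same assumptions as above (not the author’s actual calculation): instead of treating the homogenized years 1 – 49 as fixed data, redraw them from their predictive distribution each time and refit the trend. The spread of the resulting slopes is uncertainty the naive method silently throws away.

```python
# Approximate the predictive standard deviation for each missing year from
# the 95% interval (using 1.96 in place of the exact t quantile).
pred_sd = (pred["obs_ci_upper"] - pred["mean"]).to_numpy() / 1.96
n_miss = (~obs).sum()

slopes = []
for _ in range(1000):
    A_imputed = A.copy()
    # Redraw the "homogenized" years from their predictive distribution
    # (independently per year -- a simplification of the joint draw).
    A_imputed[~obs] = pred["mean"].to_numpy() + pred_sd * rng.standard_normal(n_miss)
    slopes.append(sm.OLS(A_imputed, X_year).fit().params[1])

slopes = 100 * np.array(slopes)   # degrees per century
print(f"spread in A's trend from imputation alone: "
      f"[{np.percentile(slopes, 2.5):.2f}, {np.percentile(slopes, 97.5):.2f}]")
```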

Using the wrong method tells us that the trend at A is about 1.0°C per century, a worrisome number. The parametric error bounds are also tight enough to convince some that new laws are needed to restrict people’s behavior. But the predictive bounds are like the cop in the old movie: “Move along; nothing to see here.”

This example, while entirely realistic, doesn’t hit all of the possible uncertainties. Like those bizarre, increasing step-function corrections at Darwin, Australia.

What needs to be done is a reassessment of all climate data using the statistical principles outlined in this series. There’s more than enough money in the system to pay the existing worker bees to do this.

Our conclusion: people are way too certain of themselves!

Be sure to see: Part I, Part II, Part III, Part IV, Part V

20 Comments

  1. dearieme

    “people are way too certain of themselves!” I agree wholeheartedly. There are lots of other near-sciences of which this is true – for example, almost all medical claims that are made based on epidemiology, or medical research based on experiments on tiny, non-random groups. In case I’ve not posted this here before, I offer you my apothegm.
    “All medical research is rubbish” is a better approximation to the truth than almost all medical research.

  2. ken

    Is there any chance you’ll apply your expertise to a sampling of actual climate data & present the actual trend(s) with uncertainty bands, etc. portrayed?

    Generalities are nice to know… but real data, analyzed to demonstrate what will be “counterintuitive” findings (given the conditioning to which the mainstream has been subjected), would be compelling.

  3. Mike B

    Well done. This exposition has been long overdue, and you are quite correct that this is a mistake that is widespread, not just in climate science.

    Two quick questions and a comment:

    First, in the paragraph that begins, “Step 1 is to model A…” shouldn’t the parenthetical comment after “homogenized” read, “technically: these are the classical CONFIDENCE bounds…”?

    Second, can you compile a short list of prominent climate papers in which this error was made?

    Finally, climate scientists like to have it both ways. For instance, the paleoclimatologists routinely use an error term that is way too small for assessing their confidence bands, but when climate modelers are desperate to show that their model predictions (forecasts, actually) are not inconsistent with actual temperatures, they don’t hesitate to do things to inflate their error term, such as arbitrarily including models that predict cooling. Thus, the “ensemble” still predicts warming, but the inflated confidence bands manage to “just barely” include the actual temperatures (here I’m thinking of the famous Santer et al paper that actually included a statistician, Nychka, IIRC).

    “Mike’s (Michael Mann) Nature Trick” is only the tip of the “trick” iceberg. Unfortunately, the ASA is infused with a deadly combination of politically motivated AGW supporters, six-foot invertebrates, and indifferent dweebs. Otherwise, they would step forward as an organization and put a stop to this nonsense.

  4. Michael Smith

    Thank you, Don Briggs, for this excellent presentation.

  5. Pompous Git

    Odd is it not that one of the most important posts on climate should attract so few? Keep it up your Lordship. It’s enough that some youngster will clamber upon your shoulders and see even further 🙂

    BTW, you really need a proof reader. And yes, I am willing to assist. It would be a privilege.

  6. Kewlbreeze

    Mr. Briggs, thank you for this. It really helps simple folk like me (an engineer) to understand this entire farce. From the get-go of all of this temp chart mess, the one thought I kept having was “how can these guys be so sure, since keeping temp data even today is soooo very difficult?” (As an engineer I see it all the time.) But I thought gee – they must know something I do not. OOOPs – I see now they are just as screwed up as us lowly engineers.

    Measuring Global temps will have incredible error associated with it – even today, much less in 1850-1960. I am sure the temp recording devices in Nigeria and Yemen and Saint Martin are all kept in top notch condition, watched carefully and dutifully tabulated – not!!!

    Again, my sincere thanks for a great piece of work. I will pass this on to others to come and view!!

    KB

  7. Richard Saumarez

    This is interesting. I have had a very similar problem when faced with incontrovertible proof that there was a trend in data. Bootstrapping dealt with this trend quite easily.

    The fascinating thing about this whole argument of data analysis is that it points to a certain lack of education as opposed to training. When I was a first year student, we were taught by a couple of real skeptics who emphasized that you should try and look at data in every possible way to find if there were flaws in the process of deduction in drawing a conclusion from that data (“a rat dressed up as Mickey Mouse”). I came to university teaching rather late in life, as I had expertise in an area that a department wished to develop, but couldn’t find anyone to teach it. The thing that struck me was that the students, although very bright, were sponges. One could say practically anything, they wrote it down, learnt it and regurgitated it. After a while I decided that they should be educated rather than taught, and so I reverted to getting them to thrash out a problem from a couple of papers, and I filled in the gaps of knowledge when needed. Initially, the students were horrified – I mean, they had to think! After a while they enjoyed it.

    We need mandatory university courses in scientific scepticism and abuse of statistics. Maybe we might get less of the nonsense that has been inflicted on climate science.

  8. Erica

    Very interesting. But really, so what?

    We know, not because of any fancy models but simply because of physics, that adding CO2 to the atmosphere has a warming effect, and that if we add *enough* CO2 it will cause climate change. All you skeptics are arguing about is how much is “enough” and how certain we are about the damage already done. But nothing in this piece convinces me (or seems designed to convince me) that we won’t at some point – which, as you argue, we can’t know – reach the tipping point at which oceans will be less and less able to absorb CO2, the land-based ice sheets will melt, etc.

    The scientific consensus is saying that we have about 20 years to act or we’re screwed. You’re arguing here that they state that number with too much certainty – could be up to 40 years before we’re screwed, or maybe (since the uncertainty is on both sides) we’re already there. Either way, I don’t see how this changes the conclusion that we should cut back on our CO2 emissions.

    If there is a non-negligible chance that my house will catch on fire, I take out fire insurance. If there is a non-negligible chance of climatic catastrophe, the world should hedge against the risk and take out “climate insurance,” largely through the kind of investments in clean energy and energy efficiency we should be doing anyway for a host of other reasons.

    Lack of absolute certainty on the details is no reason not to act, when we know the broad outlines of the problem.

  9. Briggs

    Erica,

    Your questions are good and, I think, common. Let me explain why it is not “We have X years to act or we’re screwed.” My proposition is not about the uncertainty in X, but in the uncertainty of the AGW theory itself (and of certain numbers that are part of it, like historical temperatures).

    First, CO2 has existed in much larger quantities in the Earth’s atmosphere without apocalyptic consequences. Second, the effect of adding more CO2 is not linear: that is, we’ve already received most (all?) of the warming we can receive from CO2 in the troposphere. This is well agreed to by all.

    What many climatologists are arguing is that a heretofore uninfluential positive feedback mechanism will kick in (or already has) and become influential. This is the AGW hypothesis: it is not, directly, that more CO2 is bad; it is that CO2 will cause, through various mechanisms, water vapor to increase, which will then increase temperatures.

    How can we prove the AGW is true? I argue the only way is to make skillful predictions of future climate based on that theory. In shorthand: the models so far have said we should have got hotter, but we got colder. Thus, in some way, the models are wrong. Note that this is a point of logic.

    One thing that seems like a prediction, but is not, is the claim that “We are hotter than ever.” This post is directed to showing that we are nowhere, not anywhere, near being able to say this claim is true.

    What concerns some of us is that there is temptation to “cherry pick” data such that it can be made to seem we are getting warmer. See my post of today (the Pajamas Media piece) for a link to some Russian revelations.

    Lastly, your point about insurance. Here, you make the common mistake of confusing climate predictions (about temperature and precipitation, usually) and the causes of temperature etc. Assume that the AGW theory is true, and that it really will grow 2°C warmer. Then it is far, far, far from clear that this will mean the world is ending, or that only(!) bad things will happen, and that those bad things will be unfixable.

    Studies which say “X will happen if AGW is true” are entirely statistical, and they thus fall under the rules of today’s post. I have read many of these papers and my conclusion is that nearly 100% of them stink. They reek of confirmation bias and overstatement and over-certainty.

    Now accept even that AGW is true and that only(!) bad things will happen, etc. Then we have to believe that the UN, and other leaders, will be able to manage the money they coerce from us in such a fashion that it is not wasted.

    Even if you believe AGW and its supposed accompanying ill effects, you can’t believe that this is true.

    But you have inspired me to expand this theme into a full-fledged article. Thank you.

    Sir Git of Down Under,

    Yes, my persistent sin lies with copy editing. I sorely need an editor.

  10. Uncle Mike

    Lovely series. A nice exposition of the unstated uncertainty in splicing temperature data.

    Someday, it might also be nice if you taught a session on error. By “error” I mean accuracy, precision, variation, and bias. Dartboard diagrams would be helpful.

  11. Matt O

    Thank you so much for the tour. I appreciate the time and expertise you’ve given us.

  12. Briggs

    PG, Uncle Mike, Matt O,

    Thanks.

    As to why this post is not getting more play, I think it is the difficulty. For one, it is an entirely unfamiliar way of thinking about statistics. People can’t assimilate the information easily. It’s much simpler plotting a straight line over temperatures and, if the slope is greater than zero, saying, “See!”

  13. Green R&D Manager

    Briggs,
    Great series. You nailed it.

    I have been trying to get people to see the uncertainty and error budget issue. It is a long slog through the NCDC and GISS docs/papers, but it is clearly much bigger than people on either side of the issue seem to realize.

    Erica,
    Your position assumes the theory is right. There is ample data on both sides. Briggs does a great job of showing claims of certainty are unjustified.

    There is overwhelming proof the AGW predictive models have a wide error band, as they did not predict the last 10 years of temps, not even close. The 20th century temp data is much more uncertain than most people realize. The backward-looking models that claim to reconstruct the last 1500 years have not tracked current temps, indicating they are much less certain than the authors claim; instead the authors chose to hide this divergence, which exposed their models’ weaknesses (the infamous “hide the decline”).

    In short, they can’t accurately model the past and have so far not accurately predicted the future. No scientific theory is proven until it can accurately predict a future event.

    That said, people can and should work to be more energy efficient and less polluting all the time. Air quality in California has improved enormously in the past 30 years. This is a separate issue from regulating all human activities that generate CO2 to fight climate change. Fighting climate change is silly; the climate has changed for the entire history of the planet. It will continue to change no matter what we do.

  14. Geoff Sherrington

    Congratulations. You have expressed with elegance and authority a problem that I have tried to highlight, with less skill and effect.

    Your 5-part essay should be required reading for all who work with sets of data, especially where temporal and spatial properties are mixed.

    It is particularly important that authors recognise and incorporate the difference between parametric and predictive bounds.

    A couple of years ago I was commenting on ensembles of global climate models, arguing that the error calculation should include all runs of each model (except those rejects with obvious, self-identified mechanical, data or assumption errors) and not just the parametric mean of the ensemble average, or even the errors calculated only from the runs presented for model comparison. Is this similar in example to the thrust of your argument?

    Are you familiar with the argument over Craig Loehle’s paper, A 2000 year reconstruction based on non-treering proxies, Energy & Environment Vol 18 No 7+8, 2007, http://www.ncasi.org/publications/Detail.aspx?id=3025? It seemed to resolve into two camps, each with a different preferred error calculation method. Did you comment on it at the time?

    I have been exposed to the interpolation of the grade of ores in potential mines, between sparsely placed drill holes. One eventually arrives at block grades, as in 3-D grid cells, where a block is assigned a mean grade and an error. It is then processed to extract the metal or taken to the waste rock dump on the basis of skill in the interpolation and estimation of its grade and error. Companies can prosper or fail depending on how meaningful is their estimate of grade and error. It is more than an academic numbers exercise.

  15. Steven Mosher

    “As to why this post is not getting more play, I think it is the difficulty. For one, it is an entirely unfamiliar way of thinking about statistics. People can’t assimilate the information easily. It’s much simpler plotting a straight line over temperatures and, if the slope is greater than zero, saying, ‘See!’”

    A couple things would give it more play.

    1. Links to the actual math, so people could play with the data.
    2. A real example. The approach you discuss is used by Hansen (I think); see the CA series on Hansen’s reference method in a statistical framework.

  16. Billy Ruff'n

    Following this and other recent blog posts on the topic of homogenization I decided to see how GISS dealt with data adjustments and the possible errors these adjustments might induce in their product. In my reading of Hansen et al 2001 at http://pubs.giss.nasa.gov/docs/2001/2001_Hansen_etal.pdf I find no mention of uncertainty or possible error in the results due to either their method of adjustment or the quality of the underlying data sets.

    Hmmmmm… I wonder why.

  17. Mike D

    Climate Audit has a good post on that subject. Search for “Texas Sharpshooter”.

  18. Neil Frandsen

    Dear Sir:

    Thank you. I am a retired Seismic Surveyor, and looked at the AGW/ACC claims with deep suspicion, due to the sparse number of Reporting Stations in the Arctic, in the Antarctic, across the Pacific Ocean, and in Africa. The even smaller number of Stations equipped to send even one weather balloon aloft a day also bothered me. Lastly, the number of sounding rockets sent to even 100 km altitude, a year, world-round, left the upper parts of the atmosphere pretty poorly reported…

    Then, from having to learn how 3D Seismic Data was put into their little 20m x 20m x 20m Data Boxen, under a 3.2 km x 3.2 km surface grid of stations spaced 100 metres apart, so I could make sure my fellow Seismic Surveyors gave the x, y, z locations of every station to better than the accuracy the Geophysicists needed, I found the ‘averaging’, or interpolation, used to fill in missing numbers in the Climate Models’ Data Boxen quite disturbing.

    Lastly, I remembered our DC-3 pilots, before flying to our on-ice landing strip, radioing from Resolute Bay, NU, to find out what the Real Weather at our airstrip really _was_, and what it had been doing, so as to decide if it was safe to fly to us (we were west of the Sabine Peninsula, of Melville Island, and it was in late March, early April). I also remembered another year, another Crew, on the Ice at the mouth of Tuktoyaktuk’s Harbour, looking at the 80 kph wind, full of Snow, blowing easterly – despite an hour-old Forecast, for Tuktoyaktuk, predicting a nice sunny day. The Canadian Weather Service Forecaster, in Inuvik, NWT, when I radio-telephoned him enquiring as to _why_ the difference, said:
    “Oh, _thats_ where that Storm _is_! We lost it, last night!”

    So, too few Stations making on-the-ground Reports, across the Arctic, and the Forecasts are too untrustworthy to depend on to fly, and disconcertingly _wrong_, from time to time, even if I was secure in a well-maintained Nodwell, and with 3 snowplows and my Cat Foreman’s Nodwell. We returned to Camp, which was on the Harbour Ice, on the east side of Tuktoyaktuk’s Harbour, and worked the next day.

    My Statistical training was the very basic ‘Theory of Errors’ hammered into 1st-year Engineering Students by L. E. Gads (iirc) at the University of Alberta, Edmonton, Alberta, in 1958-59, when we used the Log-log Decitrig Sliderule…

    From Lethbridge, Alberta, @929m altitude, Lat. 49.38N, Long. 112.48W, where we have light snow and are at -13°C.

    Neil Frandsen

  19. RichieRich

    Briggs

    Many thanks for this series of five posts. As someone with very little statistical knowledge, I’ve got a lot out of them.

    Your exchange with Erica above seems crucial, and can I follow up on it? You rightly point out that there is uncertainty regarding climate sensitivity. The ceteris paribus warming for a doubling of CO2 is well understood, but whether feedbacks are positive or negative, and by how much, is much less so.

    My understanding is that climate sensitivity PDFs have a very long tail, so that there is a very small chance that sensitivity might be as high as, say, 10°C. As I understand it, Martin Weitzman has argued that rapid mitigation can be justified as insurance against the small risk of catastrophic climate change if sensitivity turns out to be this high.

    I’m curious as to how your arguments about homogenization link to the issue of climate sensitivity. Are you saying that if homogenization were done properly then this might show climate sensitivity to be lower? If so, what is the explanatory chain? And even if, as the result of Briggsian homogenization, the most likely value of climate sensitivity were shown to be lower than the commonly accepted value of 3°C, would this in any way get us away from PDFs with long tails and Weitzman’s insurance argument?

    Also, how does one deal at a policy level with the wider predictive uncertainty? OK, in the best-case scenario under predictive error, temperature might even be falling, but in the worst-case scenario things are worse than the cheque-writers in Copenhagen believe.

    Hope this makes sense and look forward to hearing your thoughts.

  20. Anders L., Sweden

    Excellent articles … BUT … sometimes I feel that there was too much emphasis on the difficulties and uncertainties. I think it is quite meaningful and warranted to try to understand what the billions of tonnes of carbon that we have moved from the bedrock to the atmosphere are doing to the climate system, even if we are struggling with the methodology.
    I think you are quite right that humanity cannot stop climate change as such – the climate has been changing quite independently of us for billions of years, and will continue to do so long after we are gone. But generally speaking, we humans have a tendency to change things for the worse when we do change them, and so I think it is best not to poke around with the climate system more than necessary. For our own good.
    It is quite true that we have no idea what will happen when CO2 reaches 800 ppm or so. Isn’t that an excellent reason for preventing it from ever happening?
