How much patience do you have left? On and on and on about the fundamentals, and not one word about whether I believe the GHCN Darwin adjustment, as revealed by Eschenbach, is right! OK, one word: no.
There is enough shouting about this around the rest of the ‘net that you don’t need to hear more from me. What is necessary, and why I am spending so much time on this, is a serious examination of the nature of climate change evidence, particularly with regard to temperature reconstructions and homogenizations. So let’s take our time.
Scenario 3: continued
We last learned that if B and A overlap for a period of time, we can model A’s values as a function of B’s. More importantly, we learned the severe limitations and high uncertainty of this approach. If you haven’t read Part III, do so now.
If B and A do not overlap, but we have other stations C, D, E, etc., that do, even if these are far removed from A, we can use them to model A’s values. These stations will be more or less predictive depending on how correlated they are with A (I’m using the word correlated in its plain English sense).
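To make the idea concrete, here is a toy sketch in Python, with invented numbers and hypothetical stations, of predicting A’s missing values from overlapping neighbors by ordinary regression. The point is not the particular model (real reconstructions are fancier), but the last two columns: predictive bounds, the things that must not be thrown away.

```python
# A toy sketch, not anybody's actual method: fill in station A's values from
# overlapping neighbor stations B, C, D by ordinary regression, and report
# predictive intervals, not bare point guesses. All numbers are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Thirty synthetic years of annual means at four stations sharing a regional signal.
years = np.arange(1950, 1980)
regional = 12 + 0.01 * (years - 1950) + rng.normal(0, 0.5, years.size)
B, C, D, A = (regional + rng.normal(0, 0.3, years.size) for _ in range(4))

# Pretend A stops reporting in 1970: fit A ~ B + C + D on the overlap only.
overlap = years < 1970
X = sm.add_constant(np.column_stack([B, C, D]))
fit = sm.OLS(A[overlap], X[overlap]).fit()

# Predict the "missing" years and keep the predictive error bounds;
# these bounds are what must be carried forward into any later use of A.
pred = fit.get_prediction(X[~overlap]).summary_frame(alpha=0.05)
print(pred[["mean", "obs_ci_lower", "obs_ci_upper"]].round(2))
```

Even in this friendly, made-up case the prediction intervals are far from negligible; with real stations, worse siting, and fewer neighbors, they only get wider.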
But even if we have dozens of other stations with which to model A, the resulting predictions of A’s missing values must still come attached with healthy predictive error bounds. These bounds must, upon pain of ignominy, be carried forward in any application that uses A’s values. “Any,” of course, includes estimates of global mean temperature (GMT) or trends at A (trends, we learned last time, are another name for assumed-to-be-true statistical models).
So far as I can tell (with the usual caveat), nobody does this: nobody, that is, carries the error bounds forward. It’s true that the older, classical statistical methods used by Mann et al. do not make carrying error simple, but when we’re talking about billions of dollars, maybe trillions, and the disruption of lives the world over, it’s a good idea not to opt for simplicity when more ideal methods are available.
Need I say what the result of the simplistic approach is?
Yes, I do. Too much certainty!
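For those who want to see what carrying the error forward might look like, here is a continuation of the same toy setup: instead of plugging the point predictions for A’s missing years into the trend calculation as if they were observations, draw many plausible versions of those years from the predictive distribution, refit the trend each time, and look at the spread. An illustration of the principle only, not a prescription, and certainly not anyone’s published method.

```python
# Toy illustration of carrying imputation error into a trend estimate.
# Synthetic data, one neighbor station, invented numbers throughout.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
years = np.arange(1950, 1980)
regional = 12 + 0.01 * (years - 1950) + rng.normal(0, 0.5, years.size)
B = regional + rng.normal(0, 0.3, years.size)
A = regional + rng.normal(0, 0.3, years.size)

overlap = years < 1970                          # A "missing" from 1970 on
fit = sm.OLS(A[overlap], sm.add_constant(B[overlap])).fit()
sf = fit.get_prediction(sm.add_constant(B[~overlap])).summary_frame()
mean = sf["mean"].to_numpy()
sd = np.sqrt(sf["mean_se"].to_numpy() ** 2 + fit.scale)  # predictive, not just parameter, sd

t = sm.add_constant(years - years[0])

# Simplistic approach: treat the point predictions as if they were data.
naive = sm.OLS(np.concatenate([A[overlap], mean]), t).fit()
print("naive trend:           %.4f +/- %.4f deg/yr" % (naive.params[1], naive.bse[1]))

# Carrying the error forward: resample the missing years, refit, repeat.
# (A full treatment would add this spread to the ordinary regression
# uncertainty; the point here is only that it is not zero.)
slopes = [
    sm.OLS(np.concatenate([A[overlap], rng.normal(mean, sd)]), t).fit().params[1]
    for _ in range(2000)
]
print("with imputation error: %.4f +/- %.4f deg/yr" % (np.mean(slopes), np.std(slopes)))
```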
An incidental: For a while, some meteorologists/climatologists searched the world for teleconnections. They would pick an A and then search B, C, D, …, for a station with the highest correlation to A. A station in Peoria might have a high correlation with one in Tibet, for example. These statistical tea leaves were much peered over. The results were not entirely useless—some planetary-scale features will show up, well, all over the planet—but it was too easy to find something that wasn’t there.
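A quick sketch of why the hunt was so treacherous: generate one station and a thousand totally unrelated ones, and the best correlation you “discover” will still look impressive. Everything below is synthetic noise; no real stations were harmed.

```python
# Why correlation-hunting "discovers" things that are not there: with enough
# candidate stations, even pure noise yields an impressive-looking best match.
import numpy as np

rng = np.random.default_rng(2)
n_years, n_stations = 30, 1000

A = rng.normal(size=n_years)                      # thirty "years" at station A
others = rng.normal(size=(n_stations, n_years))   # a thousand unrelated stations

corrs = np.array([np.corrcoef(A, s)[0, 1] for s in others])
best = int(np.argmax(np.abs(corrs)))
print("best |correlation| found: %.2f (station %d)" % (abs(corrs[best]), best))
# Typically around 0.5 or 0.6 here, purely by chance: a "teleconnection" from nothing.
```

The more series you search, the better the best match looks, no physics required.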
Scenario 4: missing values, measurement error, and changes in instrumentation
Occasionally, values at A will go missing. Thermometers break, people who record temperatures go on vacation, accidents happen. These missing values can be guessed at in exactly the same way as outlined in Scenario 3. Which is to say, they are modeled. And with models comes uncertainty, etc., etc. Enough of that.
Sometimes instruments do not pop off all at once, but degrade slowly. They work fine for a while but become miscalibrated in some manner. That is, at some locations the temperatures (and other meteorological variables) are measured with error. If we catch this error, we can quantify it, which means we can apply a model to the observed values to “correct” them.
But did you catch the word model? That’s right: more uncertainty, more error bounds, which must always, etc., etc., etc.
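Here is a toy version of such a “correction,” with the drift rate, the break date, and every number invented: we estimate the drift, subtract it, and then notice that the estimate itself carries uncertainty that widens the bounds on every corrected value.

```python
# A toy "correction" of a slowly drifting instrument, and the extra
# uncertainty the correction drags along with it. Everything here (the
# drift rate, the break date, the noise) is invented for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
years = np.arange(1980, 2010)
true_temp = 14 + rng.normal(0, 0.4, years.size)

# Suppose the sensor began drifting upward in 1995, and suppose we caught it.
drift_years = np.clip(years - 1995, 0, None)
observed = true_temp + 0.05 * drift_years          # true drift: 0.05 deg/yr

# Estimate the drift rate by regressing the observations on years-since-1995.
fit = sm.OLS(observed, sm.add_constant(drift_years)).fit()
rate, rate_se = fit.params[1], fit.bse[1]

corrected = observed - rate * drift_years
# The correction is itself a model estimate, so each corrected value now
# carries roughly rate_se * drift_years of extra uncertainty on top of the
# ordinary measurement error; that extra width must be carried forward too.
extra_sd = rate_se * drift_years
print("estimated drift: %.3f +/- %.3f deg/yr" % (rate, rate_se))
print("2009: observed %.2f, corrected %.2f +/- %.2f deg"
      % (observed[-1], corrected[-1], extra_sd[-1]))
```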
What’s worse is that we suspect there are many times we do not catch the measurement error, and we glibly use the observed values as if they were 100% accurate. Like a cook with the flu using day-old fish, we can’t smell the rank odor, but hope the sauce will save us. The sauces here are the downstream uses of the data, like GMT or trend estimates, that rely on the mistaken observations.
(Strained metaphor, anybody? Leave me alone. You get the idea.)
A fishy smell
Now, miscalibration and measurement error are certainly less common the more recent the observations. What is bizarre is that, in the revelations so far, the “corrections” and “homogenizations” are more strongly applied to the most recent values, id est, those values in which we have the most confidence! The older, historical observations, about which we know a hell of a lot less, are hardly touched, or not adjusted at all.
Why is that?
But, wait! Don’t answer yet! Because you also get this fine empirical fact, absolutely free: the instruments used in the days of yore were many times poorer than their modern-day equivalents: they were less accurate, had slower response times, and so on. Which means, of course, that they are less trustworthy. Yet, it appears, these are the most trusted in the homogenizations.
So now answer our question: why are the modern values adjusted (upwards!) more than the historical ones?
The grand finale
If you answered “It’s the urbanization, stupid!”, then you have admitted that you did not read, or did not understand, Part I.
As others have been saying, there is evidence that some people have been diddling with the numbers, cajoling them so that they conform to certain pre-conceived views.
Maybe this is not so, and it is instead true that everybody was scrupulously honest. At the very least, then, a certain CRU has some fast talking to do.
But even if they manage to give a proper account of themselves, they must concede that there are alternate explanations for the data, such as those provided in this guide. And while they might downplay the concerns outlined here, they must admit that the uncertainties are greater than what has so far been publicly stated.
Which is all we skeptics ever wanted.
Update Due to popular demand, I will try to post a Part V, a (partial) example of what I have been yammering about. I would have done one before, but I just didn’t have the time. If I just had one of those lovely remunerative grants GHCN/CRU/GISS receive… One point on which the IPCC beats us skeptics is in the matter of compensation (and then they wonder why we don’t have as much to say officially).
Update It’s finished.