Statistics

On That Climate-Change-Now-Detectable-From-Any-Single-Day-Of-Weather Paper

The peer-reviewed paper is “Climate change now detectable from any single day of weather at global scale” by Sippel, Knutti, and others in Nature Climate Change.

Look at that title closely. As our homie The Real Spark would say: Hold up. Wait a minute. Somethin’ ain’t right! The claim in the title on its face seems preposterous, and is preposterous. This paper has nothing to do with finding “climate change” in a single day’s observation. It is instead a wee p-value generator.

Gist: they have shown that climate models run using “external forcing” (i.e. man’s activities) produce output different than climate models run using “natural variability”, and that this output is different even unto individual days. That, and nothing more.

Since it was always obvious that climate models based on different inputs would and should produce different outputs, what was already known has been confirmed to be known. In this way, the paper is harmless, except that it has the usual crowd hyperventilating. The Washington Post even managed to blame the paper’s impetus on Donald Trump.

Here are the dull details.

Right off, they (following the crowd) call man’s activities “external forcings”, even though man is just as much a part of the climate as everything else is. A real external forcing would be if aliens named Statisticules from the planet Weepee shot a heat ray at the earth. Skip it.

Here is their entirely statistical model (where I have made slight changes to their notation so that it reads well on HTML; the model is in a supplement to a supplement to the paper; this multiplying of supplements also follows a distressing trend in science publishing).

In a first step, the fingerprint of external forcing is extracted from forced model simulations such that the p-dimensional spatial pattern of daily temperature or humidity is related linearly to one of the target metrics (denoted here, Y_mod[el]) in a regression setting…

Y_mod = X_mod hat(γ) + ε        (1)

They took daily values on a grid, X_model, or X_mod, produced by a climate model run with “external forcing”. The Y_mod is either the yearly global mean temperature (AGMT) or an energy measure. The “hat(γ)” are the estimated regression coefficients, produced by ridge regression, a common shrinkage technique needed here because the number of coefficients is greater than the number of observations.
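To see the mechanics of step one, here is a minimal sketch in Python (numpy and scikit-learn). Everything in it is made up: in the paper X_mod would be the model’s daily gridded temperature or humidity fields and Y_mod the annual target metric. The only point is that ridge regression lets you estimate hat(γ) even when the grid has far more points than there are days.

    # Minimal sketch of step one: fit hat(gamma) by ridge regression.
    # All arrays are synthetic stand-ins, not the paper's actual model output.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(42)

    n_days, n_gridpoints = 365, 2000   # p >> n, hence the need for shrinkage
    X_mod = rng.normal(size=(n_days, n_gridpoints))                        # stand-in daily spatial fields
    Y_mod = X_mod[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=n_days)  # stand-in target metric

    # Ridge regression shrinks the coefficients toward zero, which keeps the
    # p-dimensional fit well defined even though p > n.
    fit = Ridge(alpha=10.0).fit(X_mod, Y_mod)
    gamma_hat = fit.coef_              # the estimated "fingerprint", hat(gamma)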

In other words, how well do the model’s grid points X_mod predict the model’s Y_mod (AGMT)? Not an especially interesting question, but it’s not insane, either. Who really cares whether the daily grid points can predict the yearly model mean, when we already have the year’s modeled mean? Well…

In a second step, we project the observations (X_obs , which are fully independent from hat(γ)) and model simulations of natural variability (denoted X_mod*…) onto hat(γ) to obtain an estimate of the target climate change metric for each individual time step that will be used as a test statistic…

hat(Y_obs) = X_obs hat(γ)        (2)

hat(Y_mod*) = X_mod* hat(γ)        (3)

Following this? In the second step, they applied the estimated regression coefficients to observations from a reanalysis model, i.e. the so-called obs, to produce an estimate of the observed Y from that reanalysis model. It’s “reanalysis” and not just raw observations because observations must first be massaged to fit inside a physical climate model (a perfectly valid and necessary technique).

In the third step, they do the same trick for runs with just “natural variability”, the “mod*”, i.e. the same climate model as before but fed different inputs. That produces the hat(Y_mod*).
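In code these two steps are nothing but matrix multiplications: project the reanalysis fields and the natural-variability fields onto the same hat(γ). Continuing the sketch above (same invented arrays and fitted gamma_hat):

    # Steps two and three: project new daily fields onto the fitted fingerprint.
    X_obs = rng.normal(size=(n_days, n_gridpoints))    # stand-in reanalysis fields
    X_nat = rng.normal(size=(n_days, n_gridpoints))    # stand-in natural-variability run

    Y_obs_hat = X_obs @ gamma_hat + fit.intercept_     # equation (2)
    Y_nat_hat = X_nat @ gamma_hat + fit.intercept_     # equation (3)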

Ready for the magic? I mean, the wee p-value?

Finally, they calculate the probability of seeing hat(Y_obs) assuming hat(Y_mod*) is true. And what did they discover? They discovered that hat(Y_obs) values and hat(Y_mod*) values are likely not the same.
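The “test” is no deeper than asking how often the natural-variability projections are at least as large as the observed one. A hedged sketch, continuing from the code above (their actual test statistic and thresholds differ in detail, but this is the flavor):

    # The "detection" step: how extreme is a given day's observed projection
    # relative to the distribution of natural-variability projections?
    day = 0
    p_value = np.mean(Y_nat_hat >= Y_obs_hat[day])
    print(f"p-value for day {day}: {p_value:.3f}")   # a small value is the wee p-value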

Who knew!

In other words, it’s just as I said. They have discovered that outputs from models are not the same.

Golly.

More for the bored

At the beginning of the paper, they said “The confidence in the detection of a key climate change metric, that is the 40-yr trend in annual mean tropospheric temperature, is very high and has exceeded a 5σ detection threshold recently”.

It should have been an infinite-σ for that trend. Ignore, for the moment, measurement error of temperature, energy, and humidity, which is real and important but would distract us. Assume all the measures are error-free.

Now either the temperature, or one of the other measures, changed from one time point to another. Something caused this change. Since the measurements are error-free, we are, or should be, 100% certain what the change was. Therefore, there should never be a problem identifying if a trend took place during a certain specified time.

All you need to do is: (1) pick the time period, (2) define what a trend is, then (3) LOOK! No statistical test is needed, nor desired.
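In code it is no harder than this (the numbers are placeholders, not real observations):

    # "Detecting" a trend in error-free measurements: compute it and look.
    import numpy as np

    years = np.arange(1980, 2020)                  # (1) pick the time period
    temps = np.linspace(14.0, 14.6, years.size)    # placeholder annual means, not real data
    trend = np.polyfit(years, temps, 1)[0]         # (2) define "trend" as the least-squares slope
    print(f"Trend: {trend:.4f} C per year")        # (3) LOOK -- no test needed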

That a test is written about gives us the suspicion that these guys have a signal+noise model of the atmosphere in mind. There is, in this false view, a true signal in the temperature, perhaps another in the humidity, perhaps a third in the energy balance, and so on. But this signal is corrupted by noise—somehow. The temperature really wants to be 10C, but evil forces perturb it so that today it’s 10.3C, tomorrow 10.1C.

No. This view of the atmosphere is always false. The temperature is what it is, wherever it is. Everything is affected by what the temperature is, not what it “wants” to be. There is no signal in the noise: there is no noise: it is all signal.

There are lots of times a signal+noise model makes sense in physics: a communication sent down a channel, for instance. But it never makes the least sense in the atmosphere. You might posit a genuine external forcing like that Statisticulian heat ray, and that forcing may even be linear, and you can even look for it or estimate its unknown size. But you’re still stuck dealing with whatever temperature everything actually experienced.

Our authors sort of get this, and sort of don’t.

We can ask the counterfactual question, What would the temperature (humidity, etc.) be if man did not do this-and-such? For instance, What would the temperature be if the Communists managed to export their religion in 1917 and half the population was wiped out and no cars were driven after, say, 1925? Ask whatever fanciful question you like.

Then you can create a model of the atmosphere that says “The atmosphere works like this.” You then run this model assuming the godless commies won. The model will say “The temperature would have been X given the godless commies won”.

Okay, so what. Is that interesting? Only if that model has been shown to make skillful, accurate, and useful predictions of future (really, never-before-seen-or-used-in-any-way data) climates. Even then, that model will have some plus-or-minus, some uncertainty attached to it, some error. Predictions made from the model won’t be certain. That goes for past counterfactual questions or future counterfactual (or as-yet-factual) questions, such as “What will the temperature be in 2050?”
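What “skillful” means can itself be said plainly: out-of-sample predictions must beat some cheap baseline, like climatology, on data never used to build the model. A rough sketch of such a check, with invented numbers standing in for future observations and model predictions:

    # A crude skill check: mean squared error of model predictions versus a
    # climatology baseline, on data never used to fit or tune the model.
    import numpy as np

    obs        = np.array([14.4, 14.5, 14.3, 14.6, 14.7])  # invented future observations
    model_pred = np.array([14.5, 14.4, 14.4, 14.6, 14.8])  # invented model predictions
    baseline   = np.full_like(obs, 14.2)                   # "climatology" from an earlier period (invented)

    mse_model    = np.mean((obs - model_pred) ** 2)
    mse_baseline = np.mean((obs - baseline) ** 2)
    skill = 1 - mse_model / mse_baseline                   # > 0 means the model beats the baseline
    print(f"Skill score: {skill:.2f}")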

Above, in the main article, they are not quite taking it as a matter of faith that the climate models reproduce and predict future counterfactuals perfectly. They do not assume the “external forcing” model is shinier than a rat’s prosthesis.

Rather, the implicit assumption is that the climate model is good enough, or more than good enough, so that when we can tell the model run on “external forcing” is different than one running on “natural variability”, it really is true that man’s activities are causing changes in the climate, even unto daily values. It must be obvious this is a circular argument.

It’s not even interesting! It was always clear, it always should have been clear, that man causes changes in the climate, just as every other creature and thing does, and since climate is the aggregate of weather, changes would be daily. The paper proved nothing we didn’t already know.

How big of a change will man make? Nobody knows. And this paper is of no help in discovering the answer to that important question, either.



5 replies

  1. TL;DR: “The model we lovingly hand crafted to show AGW no matter what, showed global warming! When we added energy to a closed system, the system showed it had extra energy!” (Now give us more grant money.)

  2. Researchers need to lay off playing “Where’s Waldo”. They are starting to see him in tea leaves, the carpet, every climate variable… Break out some kale chips and chamomile tea, relax, and let little Waldo be. You’re losing your minds with all this obsessive searching.

  3. If the forcing is man-made, it must be different from all other forcings, one presumes. Otherwise it would be impossible to see a particular forcing and say with any confidence that this one is the man-made one, and not one of the non-man-made ones.

    So, do they state exactly how the man-made forcing worked, and did they compare it to all natural forcings, either alone or in some specific combination, and see each and every time a different model response? Apparently not, if they compared it to just one natural forcing.

  4. Dr. Briggs,

    The paper and your summary are far far beyond my stats, being basically limited to least squares.
    I did note that the paper seems to use the infamous RCP8.5 to some not-understandable-to-me extent.
    Is it a fair question to ask what the results might be if a more reasonable RCP was used?
    Thanks

  5. Dan,

    Who knows. This result has nothing, zero, to do with the goodness of any climate model. All it does is claim the output from climate models run with different inputs is different. That’s it!
