Kevin “Travesty” Trenberth had a peer-reviewed article in Science entitled, “Has there been a hiatus? Internal climate variability masks climate-warming trends.”
First, the word “hiatus” is wrong. Using it assumes what it seeks to prove: that the atmosphere is warming substantially because of human activity. We do not know this is true; and given model results, an area where Trenberth treads oh so lightly, it is almost surely false. The word “hiatus” implies the warming is there, but has been “masked” or “beaten down” by other causes such that the net result is a no-warming signal in the (operationally defined) global mean surface temperature (GMST).
The real question of interest is not whether there was a “hiatus” but what are the main causes of the (value of the) GMST? Some of the causes Trenberth mentions are uncontroversial; for instance, volcanic eruptions, which block incoming solar radiation. But one cause he mentions, and which he says is responsible for the “hiatus”, is not a cause at all. This is the Pacific Decadal Oscillation (PDO).
He says, “Observations and models show that the PDO is a key player in the two recent hiatus periods”. He claims the PDO is responsible for “interannual variability” of the atmosphere. This cannot be so. The PDO is an effect, an observation. It is not a primary cause. The PDO is not something apart from the atmosphere, independent of it and which only shows up every so often. It is a pattern formed in the atmosphere by the same (and other) causes which are responsible for the GMST value.
And the same is true, of course, for El Niño, La Niña, the AMO, and any other handy human-identified pattern. To say the PDO is a cause is like saying the “pattern” of colder temperatures we notice in December in the northern hemisphere is responsible for (a.k.a. causes) winter.
Trenberth skirts around the lack of skill exhibited by climate models and implies the models would have been right—which means he acknowledges they were wrong—had this nasty PDO not had its way with the atmosphere. Such faith. He says, “the associated changes in the atmospheric circulation are mostly not from anthropogenic climate change but rather reflect large natural variability on decadal time scales. The latter has limited predictability and may be underrepresented in many models”.
This is silly. The models claimed to be able to identify the main causes of atmospheric change. That the predictions were so awful is proof that this claim is false. We do not know all the main causes of atmospheric change. If we did, our forecasts would have been accurate.
As I said, the main causes of the changes in the atmosphere also cause changes in the man-identified pattern we call the PDO. We also do not do a stellar job of predicting the PDO. More evidence we do not understand all the causes of the changes in the atmosphere.
Further, there is no such thing as “natural variability”. It doesn’t exist like volcanoes and even human carbon dioxide emissions do. Natural variability is a measure, the result of us holding up a sort of yardstick to the atmosphere. The yardstick exists all right, but it has no causal influence on the atmosphere itself.
For being a world-renowned expert on our climate, Trenberth certainly speaks poorly of its operation.
Small points: Trenberth ignores the satellite temperature record and instead relies on a statistical reconstruction which does not show the uncertainty in its estimates. He smooths his “data” to show us black lines which are not the “data”, and then speaks of these lines as he speaks of “natural variability”, i.e. as if they were something real. And then he does some odd ad hoc piecewise linear regression the purpose of which is unclear and which, as far as I can tell, is of no use whatsoever, i.e. it makes no predictions, as all good statistical models should.
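To see the point about smoothing and ad hoc piecewise fits, here is a minimal sketch. Everything in it is hypothetical (invented noise standing in for an anomaly series, an arbitrary window and breakpoint; none of it is Trenberth’s actual data or method): the smoothed line is a derived object, not the data, and the two fitted segments describe only the sample, predicting nothing outside it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "temperature anomaly" series: pure noise around a flat mean.
years = np.arange(1950, 2015)
anomaly = rng.normal(0.0, 0.15, size=years.size)

# A 10-year moving average: the smooth "black line", which is not the data.
window = 10
kernel = np.ones(window) / window
smoothed = np.convolve(anomaly, kernel, mode="valid")

# An ad hoc two-piece linear fit: pick a breakpoint, fit each side separately.
breakpoint = 1985
left = years < breakpoint
slope_left, icpt_left = np.polyfit(years[left], anomaly[left], 1)
slope_right, icpt_right = np.polyfit(years[~left], anomaly[~left], 1)

# The segments summarize the sample; extrapolating either one past its own
# window is unjustified by the fit itself. No prediction is made.
print("left slope:", slope_left, "right slope:", slope_right)
```

Run on pure noise, both fitted slopes hover near zero, yet the eye will happily read “trends” and “steps” into the smoothed line all the same.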
Ah, because the world is round
It turns me on
Because the world is round
Ah, because the wind is high
It blows my mind
Because the wind is high
Ah, love is old, love is new
Love is all, love is you
Because the sky is blue
It makes me cry
Because the sky is blue
I actually read this paper online yesterday!
Trenberth seems to believe the models are reality: “The increasing gap between model expectations and observed temperatures provides further grounds for concluding that there has been a hiatus.” Translation—reality is just not cooperating with us.
I was struck by the phrase “internal climate variability masks climate-warming”. I thought warming was all-powerful and nature could not overcome it. That was the original premise, until nature apparently proved that premise wrong. (It’s a leveling of temperatures; we know not where they will go hereafter.) As you note, we do not know all the causes of atmospheric change. That is very obvious.
Doesn’t the stepped graph actually more accurately reflect reality? I have looked at some of the temperature data and there is a definite step pattern, but if one graphs the data that way, the warmists get very angry. USA data shows distinct times where the temperature levels off, then rises. Many global warming graphs don’t show this at all due to smoothing or whatever.
A quote from an unnamed physicist seems appropriate here: “Just because we can measure it doesn’t mean we can control it.”
Dear Dr. Briggs:
Ponens Tollens doesn’t refer to a toll bridge, although the argument could be made that Trenberth is merely affirming his paid up membership in the global modus sinister club….
Bad jokes aside, you keep making the same statement, and repetition doesn’t make it better. You write, “The models claimed to be able to identify the main causes of atmospheric change. Because the predictions were so awful is proof that this claim is false.”
First: the model makers claim the models reflect the operation of the main causes of atmospheric change. The models themselves claim nothing. In reality, however, the models aren’t as described; they’re a lot more like your simple model: retrofitted to dubious data, parallel computing, and the needs of generations of grad students grinding out theses (or hourly stipends).
Second, the models do not produce accurate forecasts, but this does not mean that our models of the underlying processes are wrong. At heart these models are pretty simple-minded, and most of the 1960s card decks I saw (circa 2000) framing core components seemed reasonable enough. Errors come from programming error (limited); made-up stuff invented to produce fancy outputs from no real inputs (both pure fantasy and, most commonly, idiotic things like calculations carried to 8 decimal places from estimated data); and, above all, attempts to force-fit output to historical data by adding or altering parameters.
Since the historical data is increasingly suspect – having apparently been boiled down with respect to the early years and cooked up for the later ones – we cannot use inaccurate predictions to evaluate the models. (Well, unless you want to say that the fact that the models over-predict current temps shows them working correctly, because that’s the result good models would produce when calibrated against data biased to show a warming trend where there wasn’t one.)
The old adage may apply:
If you can’t find the heat
Stay out of the Climate Change game
Briggs: You’re supposed to stay out and let Kevin keep finding his heat… and finding it… and finding it
Paul Murphey: Are you saying we should not discard the models because the models may be right but the data is so bad we can’t tell? If that’s what you’re saying, that is much worse than the models just not working. If the data is bad, nothing else really matters at that point. Until the data is fixed, then no models should be tested or run. After we fix that mess, then we can return to the models?
“the models do not produce accurate forecasts, but this does not mean that our models of the underlying processes are wrong”
So my model of thoroughbred races isn’t wrong because it always picks the 5th place horse as the winner? But since it doesn’t (usually) pick the last place horse my models of the underlying processes involved must be right, eh? Hmmm …. maybe I can get my money back then?
How can anyone say the models of the underlying processes aren’t wrong if the model forecasts have low accuracy? What were these models compared against?
This is from Weather Underground:
The Climate Change Science Program study, which was commissioned by the Bush Administration in 2002 to help answer unresolved questions on climate, found that it was the measurements, not the models, that were in error.
I’m not sure how they “know” it’s the measurements and not the models. Since the data is such a mess, who knows? This looks a lot like blame shifting.
Um, Briggs, it would seem on the face of it to be a cause in this particular context. The flow of time itself would dictate that.
“Second, the models do not produce accurate forecasts, but this does not mean that our models of the underlying processes are wrong.”
This old bait-and-switch tactic is getting boring. It’s dumb too. There is only concern about warming because the models predict high levels of warming in short periods of time. Nobody is going to be worried about 1C of warming spread over a couple of centuries; the net benefits of that warming would outweigh the negatives. The underlying processes may be correct, and probably are, but that is not the concern. The concern is that there might be a problem if the high levels of rapid warming predicted by the models are correct. So the issue is, and only is: are the model predictions correct? Excuses for why the models are wrong are frankly a side issue.
I should also add that Dr Briggs is 100% correct on the issue of the PDO. The PDO is not a “thing in itself”. It is a set of observations of a region of the climate system. (It appears to have a roughly 60-year cyclical pattern to it.) Trenberth’s explanation, in a nutshell, is that the climate has not warmed because the climate has done something else.
Even if the model runs were based on real world measurements and not fabricated data, the physical models will fail because of the limits of numerical representation. Computational errors resulting from small deltas of much larger magnitudes are very difficult to contain and over thousands or millions of iterations produce nothing more than computer art. And that’s without even considering the fundamental inability to ever provide a deterministic baseline for simulations.
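The accumulation problem is easy to demonstrate. A minimal sketch (single precision chosen deliberately so the effect shows up in few iterations; the magnitudes are hypothetical, not taken from any climate model): near 1e8 the spacing between representable float32 values is 8.0, so adding an increment of 1.0 rounds away every single time, and no number of iterations moves the sum.

```python
import numpy as np

# Adding a small delta to a much larger running total in single precision.
total = np.float32(1.0e8)
delta = np.float32(1.0)  # smaller than the float32 spacing (8.0) near 1e8
for _ in range(100_000):
    total += delta  # rounds back to 1.0e8 on every iteration

exact = 1.0e8 + 100_000
print("float32 total:", float(total))  # still 100000000.0
print("exact total:  ", exact)         # 100100000.0
```

Double precision merely pushes the same failure further out; over millions of iterations of coupled calculations, the small-delta-on-large-magnitude problem does not go away, it just hides better.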
Recommended video: https://youtu.be/19q1i-wAUpY Dr Christopher Essex – Believing in Six Impossible Things Before Breakfast, and Climate Models
The only possibility for producing climate models with any significant skill whatsoever rests in quasi-statistical ones; but they too must not base their computations on fabricated data.
True if you’re trying to model something that seems to behave like a random walk, but not true if you’re trying to model a system that is primarily self-regulating. Then the errors, in principle, cancel each other out over time.
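That distinction can be sketched numerically. As an assumption (nothing in the thread specifies a model), take the self-regulating system to be a mean-reverting AR(1): the random walk keeps every shock forever, so its spread grows without bound, while the self-regulating system lets old shocks decay, so errors largely cancel.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
shocks = rng.normal(0.0, 1.0, size=n)

# Random walk: every shock is retained forever; the path's spread grows with time.
walk = np.cumsum(shocks)

# Self-regulating system, modeled here as a mean-reverting AR(1): each step
# pulls back toward zero, so past shocks decay geometrically.
phi = 0.9
regulated = np.empty(n)
x = 0.0
for i, s in enumerate(shocks):
    x = phi * x + s
    regulated[i] = x

print("random walk spread (std of path):", walk.std())
print("regulated spread  (std of path):", regulated.std())
```

The regulated series stays within a fixed band (theoretical standard deviation 1/sqrt(1 - phi**2), about 2.3 here) no matter how long it runs; the walk does not. Whether the climate is better described by the first picture or the second is, of course, exactly what is in dispute.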
Ah yes … time is the ultimate cause
… and time is definitely in man’s domain
If only we could control time
As I posted on Science:
I think Trenberth may have relied on “cherry picked” data to frame a false argument. The concerted efforts to “cool” past temperatures make the “warming” from 1910 to the mid-1930s much more pronounced, a period when CO2 concentrations were virtually flat (pre-industrial). If Trenberth were to use the entire temperature record, and not the truncated version he chose, he would see that this early warming (pre-GHG effect), when measurements were much less comprehensive, was virtually the same as the so-called modern warming from 1970 to 2000 (~0.5 to 0.6C; see http://data.giss.nasa.gov/gistemp/graphs_v3/Fig.A2.gif). Science dictates that his “study” should address data discrepancies for the “hiatus” period, especially since that data is infinitely more accurate and more comprehensive spatially, by comparing his reconstructed temperature index (GMST) with satellite data that has been validated by radiosonde (balloon) profiles.