The agreement among climatologists that mankind will cause devastating climate change is popularly known as “The Consensus.” Those who agree with the consensus are part of the consensus, while those who disagree with it are called skeptics. OK so far?
Now, what would you think of a study which examined members of the consensus, asked those members, “Do you agree with the consensus?”, and then reported, as news, that members of the consensus agree with the consensus?
Well, reporters said, “There’s a consensus among consensus members!”
Kirsten Zickfeld of the University of Victoria, and some of her pals, gathered top consensus members and asked them questions about climate change. They then wrote a paper summarizing the consensus’s answers: “Expert judgments about transient climate response to alternative future trajectories of radiative forcing.”
Zickfeld presented experts with three made-up forcing scenarios: a high “The sky is falling! The sky is falling!”, a medium “It’s worse than we thought”, and a low “More funding is needed.” The exact (and dull) specifications can be looked up in the original paper.
All experts agreed that “cloud radiative feedbacks” were the least understood processes, and therefore the largest contributor to climate uncertainty. Not un-coincidentally, “cloud radiative feedbacks” are the same sources of uncertainty pointed to by many doomsday-climate skeptics, such as Roy Spencer.
The experts did not agree on the importance of other forcings; or, in the learned words of the report, the rankings were “not entirely robust with respect to the procedure used.” In other words, no consensus!
They then “asked experts to make judgments about the probability that different levels of radiative forcing could trigger some ‘basic’ state change in the climate system.” That is, will there be “tipping points”?
The picture shows the probabilities “elicited.” It’s a bit screwy at first glance. It appears to indicate the experts’ guesses of the chances for each of the three scenarios.
So, for example, expert #1 (M. Allen) appears to say that it is certain the sky will fall. Yet Allen only claims a 60% chance that it’s worse than we thought; and he gives a mere 20% probability for more funding is needed. I make this to be a total of 180% chance of the climate changing. No wonder these fellows are so nervous!
But that’s not what the graph reports; that is, it does not report the experts’ guesses on the likelihood of each of the three scenarios. Instead, each of the three scenarios was assumed, and only then was each expert asked something like this: “Given the sky will fall, what is the chance that the climate will undergo a tipping point?”
In law, this is called a leading question. In logic, it’s close to a tautology: it only differs from one in the same way the sky falling differs from a climatological tipping point, a distance which is measured with calipers. It is, therefore, a near meaningless question. They should have asked how likely each scenario was; they should not have asked how likely the climate was to change given that the climate changed. The information content of the answers was as low as a New York Times editorial.
However, the all-important abstract—the small blurb which appears at the start of all scientific papers, and the only part of a work which most people read—states, “experts judged the probability that
the climate system would undergo, or be irrevocably committed to, a ‘basic state change’ as > 0.5.” Now, even though that entire sentence is factually correct, it is misleading. Reporters, for example, who are, it must be admitted, not always the best representatives of the cognitive elite, misread that statement to mean that there was at least a 50% chance that the sky would fall.
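The distinction the reporters missed is the difference between a conditional probability and an unconditional one. The paper elicited P(tipping | scenario); to get the overall P(tipping) that readers assumed, you would also need the probability of each scenario, which was never asked for. A minimal sketch of the arithmetic, with entirely made-up numbers (both the elicited values and the scenario priors below are illustration only, not figures from the paper):

```python
# Conditional probabilities of the kind the study elicited:
# P(tipping | scenario). Values are hypothetical.
p_tip_given = {"high": 0.9, "medium": 0.6, "low": 0.2}

# To recover the unconditional P(tipping) we also need P(scenario),
# which the study never elicited. Priors picked out of thin air:
p_scenario = {"high": 0.1, "medium": 0.3, "low": 0.6}

# Law of total probability:
# P(tipping) = sum over scenarios of P(tipping | scenario) * P(scenario)
p_tip = sum(p_tip_given[s] * p_scenario[s] for s in p_tip_given)
print(round(p_tip, 2))  # 0.39 -- well below the conditional 0.9
```

With these invented priors, a headline conditional probability of 0.9 corresponds to an unconditional chance of only 0.39; without the priors, the “> 0.5” in the abstract simply cannot be converted into the claim the reporters made.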
The researchers pushed on. They asked each expert for his best guess of the (surface) temperature increase we can expect under each of the scenarios. Now just recall: the scenarios already say the temperature will increase, just not by how much, although in all cases the amount of radiative forcing is such that the increase could not be negligible.
Lo, they found that when asked if temperatures were going to increase, the experts roughly agreed on the amount of increase. The researchers repeated the process for the quantity known as “climate sensitivity”, a value which says more about our need as humans to summarize complexity with a single number than about its usefulness as a physical measure. Anyway, given the three scenarios, the experts agreed on climate sensitivity, too.
They better had!
If they had not agreed, I would have been deeply concerned. Why? Consider: each of these men believes the same basic theory, they use largely the same data, and they run models that contain matching lines of code. They meet regularly to discuss how much they agree, and on how to correct, and come to consensus on, the small points where there is disagreement.
Agreement, then, is a given. Agreement is why there is a consensus. Agreement means consensus.
The entire report, with its fictionalized scenarios, merely told us what we already knew. Yet this news was greeted with wide acclaim. Nowadays, this is what is called “good science.”