*This post is inspired by Roger Pielke, Jr., as well as Bernie, Matt, and other readers who asked me to have a look at some IPCC probability statements.*

Roger Pielke, Jr. quotes from the IPCC’s AR4 report:

The uncertainty guidance provided for the Fourth Assessment Report draws, for the first time, a careful distinction between levels of confidence in scientific understanding and the likelihoods of specific results. This allows authors to express high confidence that an event is extremely unlikely (e.g., rolling a dice twice and getting a six both times), as well as high confidence that an event is about as likely as not (e.g., a tossed coin coming up heads). Confidence and likelihood as used here are distinct concepts but are often linked in practice.

Pielke rightly became perplexed by this language. What could it mean? He asked his readers (and me via email) to consider the following:

Here are some specific definitions to help you answer some questions.

A. “high confidence” means “about 8 out of 10 chance of being correct”.

B. “extremely unlikely” means “less than 5% probability” of the event or outcome

C. “as likely as not” means “33 to 66% probability” of the event or outcome

So here are your questions:

1. If the IPCC says of a die that it has — “high confidence that an event is extremely unlikely (e.g., rolling a dice twice and getting a six both times)” — how should a decision maker interpret this statement in terms of the probability of two sixes being rolled on the next two rolls of the die?
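For reference, the baseline number in the example is easy to check (a minimal sketch assuming a fair, six-sided die and independent rolls, which the IPCC example implies):

```python
# Probability of rolling a six on each of two independent rolls of a fair die.
p_six = 1 / 6
p_two_sixes = p_six ** 2

print(f"P(two sixes) = {p_two_sixes:.4f}")  # 1/36 ≈ 0.0278
```

So the event itself does fall below the 5% “extremely unlikely” cutoff; the puzzle is what the extra layer of “high confidence” adds.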

I answered this puzzler on Roger’s blog (Roger showed two questions, but they are the same at base), but I thought it worth developing further here.

The answer is that there is no answer; or rather, that there are an infinity of answers. The IPCC’s language of “high confidence that an event is extremely unlikely” is ambiguous and incomplete.

Remind yourself that all probability is conditional on certain, exactly specified information, evidence, or premises. What are the premises or evidence for a “high confidence that an event is extremely unlikely”?

Our evidence specifies that “high confidence” means that a statement has 0.8 chance. Here, we have 0.8 chance of an “extremely unlikely event,” and our evidence specifies that this event (call it A) has probability less than 0.05.

We have 0.2 chance missing. That is, there is an 0.8 chance that A is extremely unlikely. But we need the full probability to say what is the probability of A. This must mean that there is a 0.2 chance that A is something other than extremely unlikely. The IPCC does not specify what this “other” than extremely unlikely is, so it could be anything.

We can provide our own evidence to provide a solution. Suppose, just for fun, that the 0.2 chance is for an event A that is merely unlikely, which we specify to mean 0.1 probability. Then we can write a cartoon equation:

Pr(A | this information) = 0.8 * (Prob < 0.05) + 0.2 * 0.1 = 0.8 * (Prob < 0.05) + 0.02.

And that’s as far as we can go. Whatever 0.8 * (Prob < 0.05) becomes, we add 0.02 to it. The problem is that we do not know what (Prob < 0.05) means. Does it mean “more likely to be 0.05 than 0.01”? Or “equally likely to be any number between 0 and 0.05” or something else entirely? There is no language in the IPCC that allows us to discern which of these (or some other) is true.
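To see how much the reading matters, here is a minimal Python sketch of two of those interpretations (both the point-mass and the uniform readings are assumptions for illustration, not anything the IPCC specifies; the 0.2 remainder again goes to a “merely unlikely” event with probability 0.1):

```python
# Two of the many possible readings of "(Prob < 0.05)" in the cartoon equation.

# Reading 1: "(Prob < 0.05)" is a point mass at the upper edge, 0.05.
pr_edge = 0.8 * 0.05 + 0.2 * 0.1

# Reading 2: "(Prob < 0.05)" is uniform on [0, 0.05], with mean 0.025.
pr_uniform = 0.8 * 0.025 + 0.2 * 0.1

print(f"point mass at 0.05:  {pr_edge:.2f}")     # 0.06
print(f"uniform on [0,0.05]: {pr_uniform:.2f}")  # 0.04
```

Two perfectly reasonable readings of the same sentence, two different probabilities.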

The IPCC’s language is either sloppy thinking or shrewd politics. Given my experience with actual, working scientists, I tend to believe the former. But if it’s shrewd politics, regardless whether A happens or not, the IPCC has given itself wiggle room to say that it predicted A wouldn’t happen, or that it predicted A wasn’t particularly unlikely.

I say this because though we cannot come to an exact solution, we can find its bounds given the language we do have. First, we know there is a 0.8 chance that (Prob < 0.05): the lowest this can be is 0 (just in case (Prob < 0.05) means 100% certainty of 0), and the highest it can be is 0.05 (just in case (Prob < 0.05) means 100% certainty of 0.05). Thus 0.8 * (Prob < 0.05) is between 0 and 0.04.

Now the 0.2 chance. The probabilities available to us are those between 0.05 and 1 (or so it seems; the language is still ambiguous). This means 0.2 times whatever this value is lies between 0.01 and 0.2.

Our solution is then

Pr(A | our information) in [0.01, 0.24].
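The bounds can be checked with a few lines of arithmetic (a sketch only; the 0.8/0.2 split and the [0.05, 1] range for the residual term follow the reasoning above):

```python
# Bounds on Pr(A | our information).
# Minimum: the 0.8 chance sits on probability 0, the 0.2 chance on 0.05.
lo = 0.8 * 0.0 + 0.2 * 0.05

# Maximum: the 0.8 chance sits on probability 0.05, the 0.2 chance on 1.
hi = 0.8 * 0.05 + 0.2 * 1.0

print(f"Pr(A | our information) is in [{lo:.2f}, {hi:.2f}]")  # [0.01, 0.24]
```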

Thus, if A did not happen, the IPCC would point to its prediction and say, “See! We told you so. We said A was nearly impossible and it didn’t happen.” But if A did happen, it could say, “Well, A happened, it’s true. But it happens about 1 out of 4 times, which isn’t that unlikely. We can be satisfied with our prediction.”

Matt – Thanks. Quick question: shouldn’t your 0.04 be 0.05?

I disagree, the IPCC statement is clearly understandable as a (vague) description of a p.d.f. that gives a useful impression of the relative plausibilities of different outcomes in a form most useful for impact studies. Of course the actual scientists/econometricians would use the actual model runs rather than the IPCC statement. See my comments on Roger’s blog for details. I think part of the problem is in trying to link these statements with potential falsification, which I rather doubt was the intent.

Dikran Marsupial, the case being explored here has to do with dice, so expectations about usefulness for “impact studies” has no bearing on the math.

On my question above, never mind, I misread.

Dikran, your position is interesting: Are you saying that where information may be used to decide policy actions falsification is not at issue?

Bernie, no, just that it is a different question. For example, I might hypothesise that the dice we are playing with are both fair (i.e. equiprobable). I could then make the projection that on the next throw of the dice it is highly likely that the sum will be less than 12. This is useful information for predicting the impact on my wallet of betting on the sum being 12. However, if we only roll the dice once, can this give us enough information to falsify my hypothesis that the dice are fair? No. Does that mean that my hypothesis is unfalsifiable? No, you just need a different test which would require multiple rolls of the dice.
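Dikran’s dice example can be checked by brute enumeration (a sketch assuming two fair six-sided dice, as his hypothesis states):

```python
from itertools import product

# Enumerate all 36 equiprobable outcomes for two fair dice.
outcomes = list(product(range(1, 7), repeat=2))
p_sum_lt_12 = sum(1 for a, b in outcomes if a + b < 12) / len(outcomes)

print(f"P(sum < 12) = {p_sum_lt_12:.4f}")  # 35/36 ≈ 0.9722
```

A single throw of double sixes is the 1-in-36 exception, which is why one roll cannot by itself falsify the fairness hypothesis.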

Likewise the IPCC theory (as embodied perhaps by their model ensemble) says that “it is likely that temperatures will rise by 2 degrees C or more over the next century under scenario X”. If it turns out that the warming was 1.9999C, does this falsify the theory or the models? No, because the outcome was unlikely, but not impossible according to the theory/model. If you were interested in falsification, you would need to ask the modellers what observations would falsify the theory/models. They might come back and say “warming of 1 degree C or less over the next century lies well below the spread of the model runs, and hence can be considered as being impossible if the models/theory is satisfactory”. Thus the models are falsifiable, just not by the first projection. They are falsifiable by the second (different) projection.

This is one of the reasons that the spread of the models is an interesting thing to look at, it gives an indication of what the models think is plausible (although probably a little optimistically narrow due to the small sample of model runs etc.).

Basically falsification requires a prediction focussing on what is ruled out, not on what might plausibly happen.

Good blog. “The IPCC’s language is either sloppy thinking of shrewd politics”, should say OR.

Is the Spencer paper review still in the works ?

If a high confidence is defined as a probability of more than 0.80 and “extremely unlikely” as less than 0.05, then it means that, given whatever evidence,

P[P(A) < 0.05] > 0.80, or 1 − P[P(A) < 0.05] ≤ 0.20, or P[P(A) ≥ 0.05] ≤ 0.2.

Don’t get the joke or math in the cartoon equation, though I like reading certain cartoons. Is “merely unlikely” the same as “not extremely unlikely”?

Also don’t understand the fuss about this statement, but I bet there are plenty of statements in IPCC’s AR4 report more worthy of attention. I admit that I have read neither Mr. Pielke’s blog nor IPCC’s AR4 report.

The probability P[P(A) < 0.05] > 0.80 is weird looking, isn’t it? Let Θ = P(A); it becomes P(Θ < 0.05) > 0.8. There.

Roger,

It wouldn’t have surprised me at all if I were to have made a mistake.

Dikran Marsupial,

It is the vagueness in which I am interested. Too, it is extraordinarily unlikely that a modeler will say that any contingent is impossible. Climate models are just not falsifiable. For example, here.

None,

It’s in progress. Spencer answered some questions, but others were raised.

JH,

No idea what you’re trying to do (it might even be right). My math, however, is right. Some mathematicians are unable to think of probability without the aid of equations, particularly strange (and, in this case, unneeded) parameters. Don’t you agree?

So we state that there is a probability of 0.8 that an event has a probability of 0.05. Why stop there? Why not assign a probability of, say, 0.5, to the probability of some event having a probability of 0.05 having a probability of 0.8?

And in an orgy of infinite regression become completely uncertain about absolutely everything including our uncertainty. And our uncertainty about our uncertainty. And our …

Have you noticed if you keep saying a word it starts to sound weird?

Let me not insult anyone by making statements like

“Some mathematicians are unable to think of probability without the aid of equations, particularly strange (and in this case) unneeded parameters.”

But if you enjoy this sort of exchange, please just tell me; I’ll try my best to do so.

First, explain to me what 0.8 * (Prob < 0.05) and 0.2 * 0.1 represent in words then. For example, how do you multiply 0.8 and (Prob < 0.05)?

JH,

Now, now. No insult intended. I mean that the parameters are throwing you.

As I said, mine was a cartoon. “0.8 * (Prob < 0.05)" means 0.8 times some probability less than 0.05. The "some probability less than 0.05" is one ambiguity, for which, in my bounds, I used the single numbers 0 and 0.05.

Rich,

Nothing technically wrong with these nested calculations. In practice, of course, they always stop: or, to be perfectly correct, they always have stopped and probably always will.

Anyway, once we hit rock bottom—our intuitions, the a priori—the regressions stop cold.

Dear Mr. Briggs,

“No insult intended.” OK, the dinner has just magically healed my wound.

The notation \theta simply makes the expression look more elegant. Why are the parameters throwing me?

There is a 0.8 chance that A is extremely unlikely (0.05) and a 0.2 chance that A is merely unlikely (0.1). That is, this leaves no chance of A being likely, or very unlikely, or anything else. So, I take it as a cartoon equation without mathematical basis or accuracy!!!

Thanks for the probability lesson, but does it apply? The IPCC statement is pure babble. First, there is the issue of “the uncertainty guidance”. WTF is that? And then there is the “for the first time” statement. They did something for the “first time.” Virgins no more, eh?

What is it that they did? They distinguished (found differences between) A. “confidence in scientific understanding” and B. “likelihoods of specific results”.

Let’s start with A. My personal confidence in the IPCC’s “scientific understanding” of climate, climate change, climate forcings, etc. is zilch. I don’t think they understand (as in know what’s inside the black box) for beans.

But B is the real babble claptrap. Nobody, not even the IPCC, can ascribe a likelihood (aka probability) to “specific results” (which I assume to mean their climate predictions). The future is largely unknown. A lot of stuff could happen. There is no way to know what is more or less likely to happen — we’ll know it when it does happen.

For example, Joe Majorleaguer is hitting 300. He’s a fine hitter. Has a great record of accomplishments. But we do not know whether he will get a hit in his next at bat, nor even the likelihood of that. What’s past is past. The future is unknown, not merely uncertain. Joe has a past record. He does not have a future record. He could get beaned the next time up. Or strike out. Or hit a home run. There is no way to apply the a priori information to the specific result.

We could venture a prediction for Joe’s entire (next) season based on his past performance. That’s what Directors of Player Personnel do. They try to get guys like Joe on their team because they have some degree of confidence in his performance as averaged out over his career. Joe will probably bat 300 next year, all things considered, but we do not know (nor do the Directors, managers, other players, nor Joe himself) how Joe will do in his next specific at-bat.

The model runs are numerous and can be averaged, but the specific climate of 2100 is a one time event. We can roll a six-sided die many, many times. But imagine a die that can be rolled only once. That’s what climate is. A one off event. No multiple rolls.

The best we can say is that the climate of 2100 will be much like today’s climate, given the empirical climate record. If it is different in any way, it is likely (better than 50%) that the climate of 2100 will be colder, since the empirical record shows a long-term trend of cooling for the last 8,000 years or thereabouts. The next Ice Age approaches. Just as they have on a regular schedule for the last 1.8 million years.

The modelers know nothing. They do not “understand” how the climate works. Their models are tea leaves crunched by supercomputers. The simple models, either the null (no change) model or the empirical long-term downward trend model (neo-glaciation), are vastly superior. The IPCC traffics in irrational paranoia, not “likelihoods”, and there is 100% certainty about that.

William wrote: “It is the vagueness in which I am interested.”

Yes, it is ironic that e.g. Judith Curry should write a paper saying that the IPCC have overstated their confidence, when if you look at how the projections are worded they often suggest a considerable degree of uncertainty.

” Too, it is extraordinarily unlikely that a modeler will say that any contingent is impossible.”

I disagree, if you ask a climate modeller for a projection to facilitate falsification I’m sure they would have no problem in obliging. You need a projection focussed on what is ruled out by the theory, not a projection of what we should expect to see. The purpose of the projections in the IPCC WG1 report are clearly intended as an indication of what we should expect to see and are worded accordingly, so one should not expect to be able to use them for falsification.

“Climate models are just not falsifiable. For example, here.”

No, that is not correct. They can be falsified, just not with short term observations as internal climate variability has a stronger influence on such short term trends than the expected AGW. If you look at a longer timescale that is no longer true. The spread of the model runs gives an indication of what can be considered plausible under the assumptions of the models (i.e. a credible interval). If we observe something that lies definitively outside that credible interval then the model/theory is falsified. I gave a specific example in a previous post.

Dr. Briggs,

Information: high confidence = a probability of 0.8, extremely unlikely = a probability of 0.05

How do you set up an equation to determine the probability P(A| information)?

The term (Prob < 0.05) depends on the distribution of Prob, where the value of Prob satisfies 0 < Prob ≤ 1.

Pingback: Teoriers empiriska innehåll | The Climate Scam