My dad took a swing with his nine-iron and the wiffle simulacrum of a golf ball took flight, arced upwards, spun left and, without bouncing, *landed atop my favorite blade of grass*! Yes, this really happened.

That a nasty little white plastic ball with holes drilled through it would land on my favorite blade of grass could not be a coincidence. I mean this in the full quantitative sense. For if these calculations are correct, and there is no reason to suggest they are not, my father’s back yard has in it about 1,204,346,880 individual grass blades (his yard is just over three-fifths of an acre).
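For the record, the blade count does check out, provided one assumes a lawn density of about 320 blades per square inch (my inference; the post doesn't say what density was used). A quick sketch:

```python
# Back-of-the-envelope check of the blade count. The 320-blades-per-
# square-inch density is my assumption, chosen because it reproduces
# the post's figure exactly for a 0.6-acre yard.
SQ_FT_PER_ACRE = 43_560
SQ_IN_PER_SQ_FT = 144
BLADES_PER_SQ_IN = 320          # assumed lawn density

yard_acres = 0.6                # "just over three-fifths of an acre"
blades = yard_acres * SQ_FT_PER_ACRE * SQ_IN_PER_SQ_FT * BLADES_PER_SQ_IN
print(round(blades))            # 1204346880
```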

That makes the chance that my very favorite blade would have been viciously assaulted just over **one in a billion!**—a number so incredibly small that it should be written in bold font. Now whenever you see a probability so low, it can only mean that some kind of directing force, some guiding principle, some entity must have had a hand in causing the event. There’s just no other way to get a low probability!

And since I can think of nothing else, the cause must be global warming, a.k.a. climate change, a.k.a. climate disruption, a.k.a. climate tipping points, a.k.endless.a. etc. Take that skeptics!

Or so reason the people who lately claimed that the “U.S. heat over the past 13 months” was only a “one in 1.6 million event.” After making this dubious calculation, it was argued that because the result was so incredibly tiny, something-that-could-only-be-global-warming caused the temperature to take the values it did.

The 1-in-1.6-million came from the NCDC via reasoning like this: a month’s temperature can occur (they claim) in one of three buckets: below normal, normal, or above normal. The chance it “falls” into any one of these buckets is 1/3. Therefore, seeing 13 months in a row of monthly temperatures in the above-normal bucket has a probability of (1/3)^13.

Ignore the simplification about the buckets and the assumption of a 1/3 chance that a month’s temperature lands in any bucket. Rather, accept them both as true. Then, given we believe in these premises, it is indeed true that the probability of 13 out of 13 “above normals” is 1 in 1.6 million. Ok then. Now what? It was also true that the probability my dad’s wiffle ball would crush my favorite blade of grass was 1 in 1.2 billion.

Now it is also true that the probability of any sequence of 13 monthly temperatures is the same: e.g. below-below-below-normal-above-above…above has the same chance as above-below-above-below-above…below; you get the idea. This means if I see one of these sequences—and I must by definition see one of them—the event I witness will be “rare.” Just as the ball-blade meeting was rare.
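To make the arithmetic concrete, here is a minimal sketch under the NCDC-style premises (the second sequence is an arbitrary example of mine):

```python
from fractions import Fraction

# The NCDC-style premise: each month lands in one of three buckets
# with probability 1/3, independently of every other month.
p_bucket = Fraction(1, 3)

# Thirteen "above normal" months in a row:
p_all_above = p_bucket ** 13
print(p_all_above)            # 1/1594323 -- the "1 in 1.6 million"

# But ANY specific 13-month sequence has exactly the same probability:
seq_a = ["above"] * 13
seq_b = ["below", "above", "normal", "above", "below", "above",
         "above", "normal", "below", "above", "below", "above", "normal"]
p_a = p_bucket ** len(seq_a)
p_b = p_bucket ** len(seq_b)
assert p_a == p_b             # equally "rare" under the same premises
```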

The argument is then supposed to be that since this probability is so low it could not have happened “by chance.” But chance doesn’t cause anything. There is no wily devil-in-the-machine rolling cosmic dice to determine outcomes of temperature or wiffle balls or anything. Instead, something physical caused the temperature sequence to take the values it did, just as something physical caused the ball to land where it did. If it is this “something physical” in which my interest lies, I do myself no good at all by calculating dubious probabilities and then worrying over them. My time would be better spent investigating real causes than inventing probabilistic bogeymen.

Calculating probabilities the way we did in these two examples is to purposely, willfully turn a blind eye to all the evidence we have about actual monthly temperatures and actual clubs hitting real balls, and then to say to ourselves, “These probabilities are so low that there must be physics that we purposely, willfully ignored.” When we were kids we had a comeback about Sherlock when hearing observations of this kind (but since this is a family blog, I won’t repeat it).

For the golf ball, I’m ignoring where my dad routinely stands relative to my favorite blade, the distances balls fly when hit with a nine-iron, and on and on. For the temperature, I’m ignoring just about everything there is to know about temperature, which is a lot. Such as how on 30 June the temperature does not “reset” itself so that on 1 July it begins anew, and so forth. It really is a sad business to pretend we don’t know all of this and then to intimate that some mysterious cause, like global warming, is the real culprit for actual events.

Low probabilities are not proof of anything—except that certain propositions relative to certain premises are rare. If those premises are true, then the probabilities are accurate. Whatever the probabilities work out to be is what they work out to be, end of story. If the chance a ball hits my favorite blade of grass is tiny, this *does not mean* that therefore global warming is real. Who in the world would claim that it is? Yet why, if the probability of 13 out of 13 above-normal monthly temperatures is tiny relative to unrealistic premises about temperature buckets, would anyone believe that therefore global warming is real? You might just as well say that the same rarity of 13 out of 13 meant my dad was a master golfer. The two pieces of evidence are just as unrelated as the rarity of the grass being hit and the truth of global warming.

If our interest is in different premises—such as the list of premises which specify “global warming”—then we should be calculating the probability of events relative to these premises, and relative to premises which rival the “global warming” theory. And we should stop speaking nonsense about probability.

Hi Matt, a Dutch nurse went to jail after calculations similar to the ones you show here were made:

http://www.nature.com/nature/journal/v445/n7125/full/445254a.html

She is free now but spent 7 years in jail; see also http://en.wikipedia.org/wiki/Lucia_de_Berk

Very dull, Mr. Briggs. I was hoping for some work to put these numbers in context. Not that the philosophical context you bring up isn’t good; I think it is spot on. But given all the blog’s contributions, I was disappointed to see just a vague assertion that Tamino is “speaking nonsense,” without even any hint as to why.

Has any blog or anyone even made the look-elsewhere check?

But Luis, my old friend, the philosophical context is the only point. About the physics I have nothing (here) to say.

Marcel, thanks!

You are shooting at a target. Thirteen bullets in a row land above the bullseye. Do you aim a bit lower next time? No! No! No! Those 13 high shots are no more likely than a sequence of high-low-high-low… shots, or 13 low shots in a row, or any other sequence you can think of. Adjusting your aim as a result of this data would only serve to demonstrate your ignorance of the theory of probability. Just keep doing what you are doing. Your intellectual satisfaction will be reward enough.

The philosophical context is the only relevant point.

Further, the important part of the philosophical context is that a “p” value is nothing but an estimate of our relative ignorance of the multiplicity of causality at work rather than a measure of any given causality.

The bottom line is the meaning/interpretation of any “p” value comes from the context in which it was computed. Its specific value is simply the result of the calculation and, by itself, is nothing but a number with no more nor less meaning than any other number.

SteveBrooklineMA:

Would you say that the causes of average temperature are as readily obvious as the causes of monthly average temperature?

If one were to take “aim” into account, one ought to include evidence about month-to-month variability. In your case, you suggest ignoring the cause (aim). In the story, it is expressed that possible causes of variation are ignored and only the fact that the “shots are high” is assessed.

SteveBrooklineMA:

Apologies:

“Would you say that the causes of average temperature are as readily obvious as the causes of monthly average temperature?”

Should read:

“Would you say that the causes of shooting accuracy are as readily obvious as the causes of monthly average temperature?”

I feel silly.

SteveBrooklineMA,

Exactly. Given what you know about guns, aiming, and your experience in shooting, it is highly probable the shots would land where they did.

Lucia has done some analysis on the probabilities based on historical US data.

Her original post on the topic used global temperatures.

Matt, on a related topic, could you have a look at http://www.masterresource.org/2012/07/nordhaus-tol-climate-economics-reconsidered/ and comment on the bit about null hypotheses in climatology:

“In a standard economic regression analysis, we typically approach things the way one is taught in high school when learning basic statistics. Namely, you set up a null hypothesis that is the opposite of the causal relationship you (the researcher) actually think exists. Then, if there is an apparent relationship in the data (such that you get a positive value on the coefficient for a certain term in a least-squares regression, say) you can see if the result holds up at a 90 percent, 95 percent, or 99 percent confidence interval.

In this normal context, the higher the confidence interval, it means the more confident you are that the apparent relationship between two measured variables isn’t spurious. You are in effect saying, “If there really weren’t any relationship between variable X and variable Y, then I wouldn’t be getting this type of result 99 percent of the time. Therefore, I reject the null hypothesis—which says there is no relationship—and think that there really is a relationship.”

Yet in charts of climate model projections, the “confidence interval” works the other way around. Here, the higher the number, the less confident we can be that an apparent match between the model and nature is due to the underlying accuracy of the model. To put it in other words, here the null hypothesis is that “this suite of climate models is accurately simulating global temperature.” Thus if we make it harder to reject the null (by ramping up the confidence level), then it gives more wiggle room for the models.”

Are they cheating?

The odds of flipping a fair coin 10 times and having it come up heads 10 times is one in 1024.

The odds of flipping a fair coin 10 times and having it come up …

H,T,T,T,H,T,H,H,T,H is also one in 1024.

Many people would marvel at the first while finding the second uninteresting. These people are called numerologists.
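To spell out why the first result feels special even though each ordering is equally likely, here is a short sketch: people implicitly compare classes of outcomes, not individual sequences.

```python
from fractions import Fraction
from math import comb

n = 10
p_seq = Fraction(1, 2) ** n      # every specific sequence of 10 flips

print(p_seq)                     # 1/1024 -- all-heads and mixed alike

# What differs is the size of the outcome *class* people notice:
p_ten_heads = comb(n, 10) * p_seq   # only one sequence gives 10 heads
p_five_heads = comb(n, 5) * p_seq   # 252 sequences give 5 heads
print(p_ten_heads)               # 1/1024
print(p_five_heads)              # 63/256, about 0.246
```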

The tragedy is that we have a government “science” agency that finds no barrier to deliberate distortion and lies.

Wow. SteveBrooklineMA for president!

SteveBrooklineMA: You don’t understand the golf analogy, so let’s substitute a rifle for a golf club so that you can see it. If we want to use a rifle as an analogy, it’s more like going to a shooting range where a million marksmen shoot 13 times. Then you happen to pick one of the targets and say how incredibly unlikely that combination of shots — in this case, all above the bullseye — are.

The world is a huge place, and there are untold areas which are measured. The original analysis picked one area and then noted how unlikely it was. It could have picked many other areas, including areas that overlap or are subsets or supersets of the area it chose. (Those are the leaves of grass and the various possible combinations.)

As someone pointed out, Judith Curry did a proper analysis of this and found that the odds, while long, are many orders of magnitude less than the original, ridiculously naive analysis.

That (1/3)^13 thingie is only valid if consecutive monthly average temperatures are not serially correlated.
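A quick simulation makes the point (a sketch of my own, not anything from the post: it assumes an AR(1) anomaly series with an arbitrary persistence of 0.5, and uses a 6-month run instead of 13 so the rarer event can be estimated with a modest sample):

```python
import random
from statistics import NormalDist

def chance_of_warm_run(rho, run_len=6, n_months=1_000_000, seed=1):
    """Estimate the chance that a given month ends a run of `run_len`
    consecutive months in the top tercile of an AR(1) anomaly series."""
    rng = random.Random(seed)
    sd = 1.0 / (1.0 - rho * rho) ** 0.5          # stationary sd of the AR(1)
    cutoff = NormalDist(0.0, sd).inv_cdf(2 / 3)  # top-tercile threshold
    x, streak, hits = 0.0, 0, 0
    for _ in range(n_months):
        x = rho * x + rng.gauss(0.0, 1.0)        # this month's anomaly
        streak = streak + 1 if x > cutoff else 0
        if streak >= run_len:
            hits += 1
    return hits / n_months

p_independent = chance_of_warm_run(rho=0.0)  # the NCDC-style premise
p_persistent = chance_of_warm_run(rho=0.5)   # months remember last month

# Independence gives roughly (1/3)**6, about 0.00137; persistence makes
# long warm runs far more common, so the "1 in N" shrinks accordingly.
print(p_independent, p_persistent)
```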

Thanks for your input on Jeff Masters’s analysis of temperatures in the lower 48. I have read the comments on the many blogs on the subject in question. They are confusing but not enlightening. It appears to me that trying to get a better probability estimate is a waste of time. The point you have made is that his calculation is correct for his assumptions, but that it doesn’t mean anything because his premise has no model counterpoint with which to compare a probability. Perhaps the higher average temperature in the lower 48 was caused by urban heating around the temperature sensors, or the sensors weren’t calibrated correctly. Was the data corrected for these effects? There are other scenarios that he and others have tacitly assumed to be absent.

The sad point is that the results generated by Jeff Masters will be used by the proponents of catastrophic global warming and much of the mass media as evidence that his premise is correct. I have just finished reading an older book by John Allen Paulos, Innumeracy: Mathematical Illiteracy and Its Consequences. You have probably read it. He makes the point that statistics and statistical inferences are frequently used to make statements for political gain. One example Paulos gave was the claim that women earn only 65% of what men earn. While women in equal jobs do earn less, they earn better than 65% of what men earn; the two averages are affected by other factors not mentioned in the statistical statement. So Masters really was looking for political gain with his probability calculation. In the political realm of catastrophic climate change, Masters will never be held responsible for misleading the general public about what the 13 months of high average temperatures really mean. Those exposed to his predictions will live in ignorance of the truth.

Ye Olde Statistician: Lucia (I had incorrectly said Judith Curry in a previous post) addresses this in her blog. Turns out that the world data works out with a high serial correlation, but the US data has a much lower serial correlation. Lucia does several simulations and finds that the odds are long, but not nearly as long as the original analysis.

Still doesn’t address the concept of picking one set of measurements out of a huge bin and (in isolation) saying that set is rare. There are two distinct issues at play here and I believe Briggs is addressing the second.

Willis Eschenbach at Watts Up With That …

What are the chances that every day in last week’s heat wave would be in the top 1/3 for their respective ranges? A lot depends on what defines a heat wave, its magnitude (whatever that means, maybe its peak temperature?), the magnitude of each range and the starting point. I notice none of the three (Willis, Lucia or Tammy; 4 if you include Masters) considered the magnitude of the 13 month waves. Instead, each calculated using the monthly records while ignoring their arbitrariness (relative to the problem) and coarseness of division. Maybe the approach should be a class lesson? Surprisingly, Tammy started down the right trail but got lost. Looking at Tamino’s box plot, there isn’t enough information to predict which temperatures that fall in July’s top 1/3 will carry over into August’s or, for that matter, any month’s into the next since each month has its own zero.

If global warming causes everything why hasn’t it arranged a jackpot Mega Millions winner for me?

Speed: Willis Eschenbach’s analysis is not applicable; if you read farther down in that thread, someone points this out. Basically Willis addresses one thing, Tamino addresses the wrong thing, and Lucia analyzes what both of them should have analyzed. She concludes: “Using US data: Looks like 1 in anything from 2,000 to 166,667”, which is orders of magnitude less than Tamino’s estimate.

I believe that those are the odds she calculates given the assumption that absolutely no temperature increase has occurred. If you believe that some warming — though not a catastrophic amount — has occurred, then she says, “That said: in the presence of persistent warming – even very slow persistent warming – seeing temperatures in the upper parts of the historic distribution will not be rare.”

Speed:

On your coin example, while what you say is true, it’s not the correct way of looking at it. If you were betting a guy and he agreed to always take heads and you tails, and he won 18 bets in a row, an event with 1 chance in 262,144, what would be the degree of your belief that it was a fair coin?
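One way to put a number on that degree of belief is a toy Bayes calculation (my own illustration; the 50/50 prior and the 0.9-heads alternative are assumptions, not anything stated in the thread):

```python
from fractions import Fraction

# Toy setup: before betting we give even odds that the coin is fair
# versus biased toward heads at 0.9. Both hypotheses are assumptions
# made purely for illustration.
prior_fair = Fraction(1, 2)
prior_biased = Fraction(1, 2)

like_fair = Fraction(1, 2) ** 18          # 1/262144 for 18 straight heads
like_biased = Fraction(9, 10) ** 18       # about 0.15

post_fair = (prior_fair * like_fair) / (
    prior_fair * like_fair + prior_biased * like_biased)
print(float(post_fair))   # roughly 2.5e-05: the fair-coin hypothesis is in trouble
```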

Lucia at her Blackboard was running the same “statistics” (inverted commas necessary).

In comments I pointed out the very point you make here.

It was summarily ignored.

Pingback: More On The 1 in 1.6 Million Heat Wave Chance | William M. Briggs

Pingback: Chance Of Heat Wave Only 1 in 1.6 Million? Or, Probability Gone Wrong | JunkScience.com

Wayne said,

“Willis Eschenbach’s analysis is not applicable, if you read farther down in that thread, someone points this out.”

There are in fact several “someones” but Willis doesn’t concede. He has answered his critics there far better than I can here.

Rob Ryan asks about my opinion of the fairness of a coin that came up heads 18 times in succession. Clearly I would suspect (as would any competent statistician) that it was not a fair coin. But the issue at hand is whether or not last year’s sequence of monthly temperatures was more or less likely than any other sequence of monthly temps. It was neither.

William, I suggest you read what Masters actually said rather than the strawman you address here:

“Each of the 13 months from June 2011 through June 2012 ranked among the warmest third of their historical distribution for the first time in the 1895 – present record. According to NCDC, the odds of this occurring randomly during any particular month are 1 in 1,594,323. Thus, we should only see one more 13-month period so warm between now and 124,652 AD–assuming the climate is staying the same as it did during the past 118 years.”

As pointed out by Lucia et al. he should have addressed autocorrelation but he didn’t make the error that you claim he did. In your analogy that would be the chance that your dad’s ball landed on the same blade of grass in future attempts.

Speed: Willis is very smart, but he has no more training in statistics than I do. When professionals like Lucia* do something different, I’ll go with them.

I’ve also participated in one discussion where Willis absolutely refused to give one inch to an argument I was making… right up until he had to retract the argument and claim it didn’t matter anyhow. Not saying that’s necessarily common, but I am saying that Willis is not one to admit a mistake until pressed against the wall with two bulldozers. That can be a good trait when you’re dealing with The Climate Establishment, though not so good when someone who is a natural ally is trying to point out a problem to you.

Oops, forgot to implement my footnote.

* Lucia is a professional without an axe to grind. She follows the data wherever it leads.

By contrast, Mann is a professional, though I’m not sure that he knows much more about statistics than I do, and he has an axe to grind, so I definitely don’t believe that all “professionals” are superior to those of us without formal training in the field.

All,

Michael Tobis at the original site linked above takes exception to my analysis, but the most intelligent response he can conjure is to say, ‘Mr. Briggs, a veteran AGW denier, misses the point six ways from Sunday.’

This is his entire response, incidentally. Needs work, Mikey old boy.

Even leaving things like autocorrelation aside, it’s a fallacy to assume a 1/3 probability each of being above normal, below normal or normal. Normal, in this context, means ‘exactly equal’, and so the probability of any single month’s temperature being exactly equal to normal is so remote that we can discount it entirely.

Given Speed’s fair coin example, this would be the equivalent of the coin landing on edge.

This leaves us with a choice of either ‘above’ or ‘below’, which shortens the odds considerably to 1:8192

Please ignore my previous reply – I didn’t read the material properly.

SteveBrooklineMA’s sarcasm is right on target here. Given a sequence of consistently high results it would make sense to consider the possibility that there *might* be a systematic effect happening.

The difference from your blade of grass story is in the fact that, although all blades are equally unlikely, if you hit the one *I* had picked out *in advance* then I would be very surprised and suspect a trick (in a way that I would not be if everyone in the world had chosen their own favourite and some other person’s favourite got hit).

Similarly, while I wouldn’t be surprised to learn that someone somewhere in the world threw 13 Heads in a row with a fair coin yesterday, I would have good reason to be very suspicious of a coin which gave *me* that result in the first time I tried it.

In the same way, the particular unlikely sequence of thirteen high temperature months in a row is different from other equally unlikely but less interesting sequences because none of the others correspond to a simple hypothesis that could have been made in advance, and the sequence of thirteen high shots at a target is consistent with a simple plausible hypothesis that I am pulling the barrel up as I shoot.

William, I’m glad you used the golf analogy. It provides me with a way of lowering my golf score. If the probability of landing my first shot on a specified blade of grass is say 1 in a trillion (golf balls travel farther than wiffle balls) and I shoot 105 for 72 holes, the probability that my round of golf existed is one in a trillion to the 87th power (18 of my golf shots don’t land on grass, but in a hole that is way too small). Since that probability is so much smaller than 1 in a trillion, I conclude that my score of 105 is not possible; and, although unlikely, it is almost infinitely more likely that my real score was 1. Look out Tiger, here I come.

ooopz. Make that a score of 105 for 18 holes.