Michael E. Mann and four others published the *peer-reviewed* paper “The Likelihood of Recent Record Warmth” in the Nature journal *Scientific Reports* (DOI: 10.1038/srep19831). I shall call the authors of this paper “Mann” for ease. Mann concludes (emphasis original):

We find that individual record years and the observed runs of record-setting temperatures were extremely unlikely to have occurred in the absence of human-caused climate change, though not nearly as unlikely as press reports have suggested. These same record temperatures were, by contrast, quite *likely* to have occurred in the *presence* of anthropogenic climate forcing.

This is confused and, in part, in error, as I show below. I am anxious people understand that Mann’s errors are *in no way unique or rare*; indeed, they are banal and ubiquitous. I therefore hope this article serves as a primer in how not to analyze time series.

**First Error**

Suppose you want to guess the height of the next person you meet when going outdoors. This value is uncertain, and so we can use probability to quantify our uncertainty in its value. Suppose as a crude approximation we used a normal distribution (it’s crude because we can only measure height to positive, finite, discrete levels and the normal allows numbers on the real line). The normal is characterized by two parameters, a location and spread. Next suppose God Himself told us that the values of these parameters were 5’5″ and 1’4″. We are thus as certain as possible in the value of these parameters. But are we as certain in the height of the next person? Can we, for instance, claim there is a 100% chance the next person will be, no matter what, 5’5″?

Obviously not. All we can say are things like this: “Given our model and God’s word on the value of the parameters, we are about 90% sure the next person’s height will be between 3’3″ and 7’7″.” (Don’t forget children are persons, too. The strange upper range is odd because the normal is, as advertised, crude. But it does not matter which model is used: my central argument remains.)
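The interval arithmetic is easy to check. A minimal sketch, assuming the crude normal model with the parameters handed down from on High (65 inches location, 16 inches spread):

```python
from statistics import NormalDist

# Crude model from the text: height ~ Normal(65 in, 16 in),
# i.e. location 5'5" and spread 1'4", both taken as certain.
height = NormalDist(mu=65, sigma=16)

# Central 90% predictive interval for the NEXT person's height.
lo = height.inv_cdf(0.05)   # about 38.7 in, i.e. roughly 3'3"
hi = height.inv_cdf(0.95)   # about 91.3 in, i.e. roughly 7'7"
print(f"90% predictive interval: {lo:.1f} to {hi:.1f} inches")
```

The oddly wide top end drops straight out of the model, which, as advertised, is crude.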

What kind of mistake would it be to claim that the next person will certainly be 5’5″? Whatever name you give this, it is the first error, and it pervades Mann’s paper.

The temperature values (anomalies) they use are presented as if they are certain, when in fact they are the estimates of a parameter of some probability model. Nobody knows that the temperature anomaly was *precisely* -0.10 in 1920 (or whatever value was claimed). Since this anomaly was the result of a probability model, to say we know it precisely is just like saying we know the exact height will be certainly 5’5″. Therefore, every temperature (or anomaly) that is used by Mann *must*, but does not, come equipped with a measure of its uncertainty.

We want the *predictive* uncertainty, as in the height example, and not the *parametric* uncertainty, which would only show the plus-or-minus in the model’s parameter value for temperature. In the height example, we didn’t have any uncertainty in the parameter because we received the value from on High. But if God only told us the central parameter was 5’5″ +/- 3″, then the uncertainty we have in the next height *must widen*—and by a lot—to take this extra uncertainty into account. The same is true for temperatures/anomalies.
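A sketch of the widening, under the hypothetical assumption that the “+/- 3″” is a normal uncertainty on the location parameter; the predictive spread then becomes sqrt(sigma² + tau²), which is necessarily larger than sigma alone:

```python
import math
from statistics import NormalDist

sigma = 16.0   # spread of heights (1'4"), still taken as certain
tau = 3.0      # assumed uncertainty in the location: 65 +/- 3 inches

# Location known exactly: predictive spread is just sigma.
certain = NormalDist(65, sigma)

# Location itself ~ Normal(65, tau): the predictive distribution for the
# next height is Normal(65, sqrt(sigma^2 + tau^2)), wider as it must be.
predictive = NormalDist(65, math.sqrt(sigma**2 + tau**2))

w_certain = certain.inv_cdf(0.95) - certain.inv_cdf(0.05)
w_predictive = predictive.inv_cdf(0.95) - predictive.inv_cdf(0.05)
print(w_certain, w_predictive)
```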

Therefore, every graph and calculation in Mann’s paper which uses the temperatures/anomalies as if they were certain is wrong. In Mann’s favor, absolutely everybody makes the same error as they do. This, however, is no excuse. An error repeated does not become a truth.

Nevertheless, I, like Mann and everybody else, will assume that this magnificent, non-ignorable, and thesis-destroying error does not exist. I will treat the temperatures/anomalies as if they are certain. This trick does not fix the other errors, which I will now show.

**Second Error**

You are in Las Vegas watching a craps game. On the come out, a player throws a “snake eyes” (a pair of ones). Given what we know about dice (they have six sides, one of which must show, etc.) the probability of snake eyes is 1/36. The next player (because the first crapped out) opens also with snake eyes. The probability of this is also 1/36.

Now what, given what we know of dice, is the probability of two snake eyes in a row? It is 1/36 * 1/36 = 1/1296, a small number, about 0.0008. Because it is less than the magic number in statistics (0.05), does that mean the casino is cheating and *causing* the dice to come up snake eyes? Or can “chance” explain this?

First notice that in each throw, some things *caused* each total, i.e. various physical forces *caused* the dice to land the way they did. The players at the table did not know these causes. But a physicist might: he might measure the gravitational field, the spin (in three dimensions) of the dice as they left the players’ hands, the momentum given the dice by the throwers, the elasticity of the table, the friction of the tablecloth, and so forth. If the physicist could measure these forces, he would be able to predict what the dice would do. The better he knows the forces, the better he could predict. If he knew the forces *precisely* he could predict the outcome *with certainty*. (This is why Vegas bans contrivances to measure forces/causes.)

From this it follows that “chance” did not cause the dice totals. Chance is not a physical force, and since it has no ontological being, it cannot be an efficient cause. Chance is thus a product of our ignorance of forces. Chance, then, is a synonym for probability. And probability is not a cause.

This means it is improper to ask, as most do ask, “What is the chance of snake eyes?” There is no single chance: the question has no proper answer. Why? Because the chance calculated depends on the information assumed. The bare question “What is the chance” does not tell us what information to assume, therefore it cannot be answered.

To the player, who knows only the possible totals of the dice, the chance is 1/36. To the physicist who measured all the causes, it is 1. To a second physicist who could only measure partial causes, the chance would be north of 1/36, but south of 1, depending on how the measurements were probative of the dice total. And so forth.

We have two players in a row shooting snake eyes. And we have calculated, from the players’ perspective, i.e. using their knowledge, the chance of this occurring. But we could have also asked, “Given only our knowledge of dice totals etc., what are the chances of seeing two snake eyes in a row in a sequence of N tosses?” N can be 2, 3, 100, 1000, any number we like. Because N can vary, the chance calculated will vary. That leads to the natural question: what is the right N to use for the Vegas example?
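The dependence on N can be computed exactly from the players’ knowledge alone. A small sketch: the recursion tracks the probability that no run of two snake eyes has yet appeared, splitting on whether the last throw was snake eyes:

```python
def p_run_of_two(n, p=1/36):
    """P(at least one run of two consecutive snake eyes in n throws),
    given only the players' knowledge: each throw is snake eyes
    with probability 1/36, independently of the others."""
    # f0: no run yet, last throw NOT snake eyes
    # f1: no run yet, last throw WAS snake eyes
    f0, f1 = 1 - p, p
    for _ in range(n - 1):
        f0, f1 = (f0 + f1) * (1 - p), f0 * p
    return 1 - (f0 + f1)

for n in (2, 100, 1000, 10000):
    print(n, p_run_of_two(n))   # grows with n; equals 1/1296 at n = 2
```

The same observed event goes from “rare” to “practically guaranteed” as N grows, which is exactly why the choice of N is a decision, not a fact of nature.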

The answer is: there is no right N. The N picked depends on the situation we want to consider. It depends on decisions somebody makes. What might these decisions be? Anything. To the craps player who only has $20 to risk, N will be small. To the casino, it will be large. And so on.

Why is this important? Because the length of some sequence we happen to observe is not inherently of interest in and of itself. Whatever N is, it is still the case that some thing or things *caused* the values of the sequence. The probabilities we calculate cannot eliminate cause. Therefore, we have to be extremely cautious in interpreting the chance of any sequence, because (a) the probabilities we calculate depend on the sequence’s length and the length of interest depends on decisions somebody makes, and (b) in no case does cause somehow disappear the larger or smaller N is.

The second error Mann makes, and an error which is duplicated far and wide, is to assume that probability has any bearing on cause. We want to know what *caused* the temperatures/anomalies to take the values they did. Probability is of no help in this. Yet Mann assumes that because the probability of a sequence calculated conditional on one set of information is *different* from the probability of the same sequence calculated conditional on another set of information, the only possible cause of the sequence (or of part of it) must be global warming. This is the fallacy of the false dichotomy. The magnitude and nature of this error is discussed next.

The fallacy of the false dichotomy in the dice example is now plain. Because the probability of the observed N = 2 sequence of snake eyes was low given the information only about dice totals, it does not follow that therefore the casino cheated. Notice that, assuming the casino did cheat, the probability of two snake eyes is high (or even 1, assuming the casino had perfect control). We cannot compare these two probabilities, 0.0008 and 1, and conclude that “chance” could not have been a cause, therefore cheating must have.

And the same is true in temperature/anomaly sequences, as we shall now see.

**Third Error**

Put all this another way: suppose we have a temperature/anomaly series of length N of which a physicist knows the cause of every value. What, given the physicist’s knowledge, is the chance of this sequence? It is 1. Why? Because it is no different than the dice throws: if we know the cause, we can predict with certainty. But what if we don’t know the cause? That is an excellent question.

What is the probability of a temperature/anomaly sequence where we do not know the cause? Answer: *there is none*. Why? Because since all probability is conditional on the knowledge assumed, if we do not assume anything no probability can be calculated. Obviously, the sequence happened, therefore it was caused. But absent knowledge of cause, and not assuming anything else like we did arbitrarily in the height example or as was natural in the case of dice totals, we must remain silent on probability.

Suppose we assume, arbitrarily, only that anomalies can only take the values -1 to 1 in increments of 0.01. That makes 201 possible anomalies. Given *only* this information, what is the probability the next anomaly takes the value, say, 0? It is 1/201. Suppose in fact we observe the next anomaly to be 0, and further suppose the anomaly after that is also 0. What are the chances of two 0s in a row? In a sequence of N = 2, and given only our arbitrary assumption, it is 1/201 * 1/201 = 1/40401. This is also less than the magic number. Is it thus the case that Nature “cheated” and made two 0s in a row?

Well, yes, in the sense that Nature causes all anomalies (and assuming, as is true, we are part of Nature). But this answer doesn’t quite capture the gist of the question. Before we come to that, assume, also arbitrarily, a different set of information: say, that the uncertainty in the temperatures/anomalies is represented by a more complex probability model (our first arbitrary assumption was also a probability model). Let this more complex model be an autoregressive moving-average, or ARMA, model. This model has certain parameters, but assume we know what they are.

Given this ARMA, what is the probability of two 0s in a row? It will be some number. It is not of the least importance what this number is. Why? For the same reason the 1/40401 was of no interest. And it’s the same reason *any* probability calculated from *any* probability model is of no interest to answer questions of cause.
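A sketch of how the “chance” of the very same event (two 0s in a row) shifts with the assumed information. The AR(1) stand-in below, with coefficient 0.6 and noise spread 0.1, is my own invention for illustration, not a model from Mann’s paper:

```python
import random

# Information set 1: anomalies take 201 values, -1.00 to 1.00 by 0.01.
p_uniform = (1 / 201) ** 2   # chance of two 0s in a row: 1/40401

# Information set 2: an assumed AR(1) with known coefficient, its values
# discretized to the same grid.  Estimate the chance by simulation.
def simulate_pair_prob(phi=0.6, sigma=0.1, n=200_000, seed=0):
    rng = random.Random(seed)
    x, prev_zero, hits = 0.0, False, 0
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, sigma)
        is_zero = round(max(-1.0, min(1.0, x)), 2) == 0.0
        if prev_zero and is_zero:
            hits += 1
        prev_zero = is_zero
    return hits / (n - 1)

p_ar1 = simulate_pair_prob()
print(p_uniform, p_ar1)   # two different numbers for one and the same event
```

Neither number says anything about what caused the sequence; each is only the uncertainty implied by its own assumption.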

Look at it this way. All probability models are silent on cause. And cause is what we want to know. But if we can’t know cause (and don’t forget we’re assuming we don’t know the cause of our temperature/anomaly sequence), we can at least quantify our uncertainty in a sequence conditional on some probability model. But since we’re assuming the probability model, the probabilities it spits out are the probabilities it spits out. They do not and cannot prove the goodness or badness of the model assumption. And they cannot be used to claim some thing other than “chance” is the one and only cause: that’s the fallacy of the false dichotomy. If we assume the model we have is good, for whatever reason, then whatever probability of the sequence it gives, the sequence must still have been caused, and this model wasn’t the cause. Just like in the dice example, where the probability of two snake eyes, according to our simple model, was low. That low probability did not prove, one way or the other, that the casino cheated.

Mann calls the casino’s not cheating the “null hypothesis”. Or rather, their “null hypothesis” is that their ARMA model (they actually created several) caused the anomaly sequence, with the false-dichotomy alternative hypothesis that global warming was the only other (partial) cause. This, we now see, is wrong. All the calculations Mann provides to show probabilities of the sequence under any assumption—one of their ARMA models or one of their concocted CMIP5 “all-forcing experiments”—have no bearing whatsoever on the only relevant physical question: what caused the sequence?

**Fourth Error**

It is true that global warming might be a partial cause of the anomaly sequence. Indeed, every working scientist assumes, what is almost a truism, that mankind has some effect on the climate. The only question is: how much? And the answer might be: only a trivial amount. Thus, it might also be true that global warming as a partial cause is ignorable for most questions or decisions made about values of temperature.

How can we tell? Only one way. Build causal or determinative models that have global warming as a component. Then make predictions of *future* values of temperature. If these predictions match (how to match is an important question I ignore here), then we have good (but not complete) evidence that global warming is a cause. But if they do not match, we have good evidence that it isn’t.
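The test described here, fit on the past and score only on the future, can be sketched with fabricated numbers. Everything below (years, trend, noise) is hypothetical illustration, not real temperature data:

```python
import random

rng = random.Random(1)

# Fabricated "observations": a mild trend plus noise, 1950-2014.
obs = {y: 0.01 * (y - 1950) + rng.gauss(0, 0.1) for y in range(1950, 2015)}

# Fit a straight line by least squares to the PAST only (1950-1999).
xs = list(range(1950, 2000))
ys = [obs[y] for y in xs]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar

# Score the model ONLY on years it never saw (2000-2014).
future = range(2000, 2015)
mae = sum(abs(obs[y] - (intercept + slope * y)) for y in future) / len(future)
print(f"out-of-sample mean absolute error: {mae:.3f}")
```

Good past fit is cheap; the only honest scoreboard is the error on values the model never saw.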

Predictions of global temperature from models like CMIP, which are not shown in Mann, do not match the actual values of temperature, and haven’t for a long time. We therefore have excellent evidence that we do not understand all of the causes of global temperature *and* that global warming as it is represented in the models is in error.

Mann’s fourth error is to show how well the global-warming-assumption model can be made to *fit past data*. This fit is only of minor interest, because we could also get good fit with any number of probability models, and indeed Mann shows good fit for some of these models. But we know that probability models are silent on cause, therefore model fit is not indicative of cause either.

**Conclusion**

Calculations showing “There was an X% chance of this sequence” always assume what they set out to prove, and are thus of no interest whatsoever in assessing questions of cause. A casino can ask “Given the standard assumptions about dice, what is the chance of seeing N snake eyes in a row?” if, for whatever reason it has an interest in that question, but whatever the answer is, i.e. however small that probability is, it does not answer what causes the dice to land the way they do.

Consider that casinos are diligent in trying to understand cause. Dice throws are thus heavily regulated: they must hit a wall, the player may not do anything fancy with them, etc. When dice are old they are replaced, because wear indicates lack of symmetry and symmetry is important in cause. And so forth. It is only because casinos know that players *do not* know (or cannot manipulate) the causes of dice throws that they allow the game.

It is the job of physicists to understand the myriad causes of temperature sequences. Just like in the dice throw, there is not one cause, but many. And again like the dice throw, the more causes a physicist knows the better the predictions he will make. The opposite is also true: the fewer causes he knows, the worse the predictions he will make. And, given the poor performance of causal models over the last thirty years, we do not understand cause well.

The dice example differed from the temperature because with dice there was a natural (non-causal) probability model. We don’t have that with temperature, except to say we only know the possible values of anomalies (as the example above showed). Predictions can be made using this probability model, just like predictions of dice throws can be made with its natural probability model. Physical intuition argues these temperature predictions with this simple model won’t be very good. Therefore, if prediction is our goal, and it is a good goal, other probability models may be sought in the hope these will give better performance. As good as these predictions might be, no probability will tell us the cause of any sequence.

Because an assumed probability model said some sequence was rare, it does not mean the sequence was therefore caused by whatever mechanism that takes one’s fancy. You still have to do the hard work of proving the mechanism *was* the cause, and that it *will be* a cause into the future. That is shown by making good predictions. We are not there yet. And why, if you did know cause, would you employ some cheap and known-to-be-false probability model to argue an observed sequence had low probability—conditional on assuming this probability model is true?

Lastly, please don’t forget that everything that happened in Mann’s calculations, and in my examples after the First Error, are wrong because we do not know with certainty the values of the actual temperature/anomaly series. The probabilities we calculate for this series to take certain values can take the uncertainty we have in these past values into account, but it becomes complicated. That many don’t know how to do it is one reason the First Error is ubiquitous.
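One brute-force way to carry past-value uncertainty through a calculation is simulation. A sketch with invented anomaly values and an assumed measurement spread of 0.05: jitter every past value by its uncertainty and ask how often the apparent record survives:

```python
import random

rng = random.Random(2)

# Hypothetical reported anomalies; the last one looks like a record.
reported = [0.10, 0.25, 0.18, 0.30, 0.32]
sigma = 0.05   # assumed uncertainty attached to each reported value

# Treating the reported values as certain, the last year IS the record.
p_naive = 1.0

# Propagating the uncertainty instead: jitter all values, then check
# whether the last one is still the largest.
draws = 100_000
hits = 0
for _ in range(draws):
    vals = [rng.gauss(r, sigma) for r in reported]
    if max(vals) == vals[-1]:
        hits += 1
p_record = hits / draws
print(p_naive, p_record)
```

The “certain” answer of 1 shrinks once the values are allowed their stated uncertainty, which is the First Error in miniature.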

**FAQ**

*Why don’t you try publishing this in a journal?*

I’ll do better. It’s a section in my upcoming book *Uncertainty*. Plus, since I am independent and therefore broke and not funded by big oil or big government, I cannot afford page charges. Besides, more people will read this post than will read a paper in some obscure journal.

*But aren’t you worried about peer review?*

Brother, this article *is* peer review. Somebody besides me will have to tell Mann about it, though, because Mann, brave scientist that he is, blocked me on Twitter.

*Could you explain the genetic fallacy to me?*

Certainly: read this.

*Have you other interesting things to say on temperature and probability?*

You know I do. Check these out.

The cycle goes like this:

-Left science makes something up

-Right science refutes it

-Left science ignores the refutation because it controls the airwaves

The global warming myth will continue to be perpetuated over and over again until ‘traditional’ media is finally dead. We have the edge in cyberspace.

Thank you for this post.

The casino analogy for comparing models with probabilities is really well done. It would be enriching to see what would happen if the same models were used on data without the past 15 to 20 years to simulate this exercise being done in the past. What conclusions would we reach?

I could be wrong here, but I think there’s a misleading typo in …

Third Error: paragraph 6: sentence 3 [Para begins “Look at it this way.”]

As written: “And cause IT what we want to know.”

I think Matt means “And cause IS what we want to know.”

If I’m wrong, I apologize.

Thank you for another great post, Matt.

‘A physicist could measure the various causal factors to predict an outcome with certainty.’ (paraphrasing)

Similarly, ‘Mann’ [& that ilk] that have hung their reputations on a particular cause-effect relationship … and will apply analyses to get the particular results they need.

These aren’t mistakes of analyses, they’re willful stratagems to achieve a self-interested outcome — “torture the data until it confesses” goes the saying…

Thus, in a broad sense, the ‘physicist’ & ‘Mann’ are undertaking very similar endeavors: One is striving to be objective, and the other is striving to reach a particular objective…


I have been surprised by the 538 Significant Digits Newsletter (website) not picking up on & criticizing the inappropriate statistics used by scientists promoting global warming (aka climate change). For all of 538’s supposed mission to hold various numerical & statistical claims made in the mainstream media to account, it seems loath to buck the “popular” warmist theme for climate. Your post above on the Mann errors would be a good place for 538 to live up to its (self-imposed) mandate.


It’s interesting that the word chance is used only twice in the Nature paper; both times as a modifier, once to the word “likelihood” and a second time to the word “occurrence.” Yet in the press release posted at wattsupwiththat “chance” is used five times; four times as a noun, as in the thing “chance”, and only once as a modifier.

Are we to conclude that the authors do indeed know the difference in use and that they hope to exploit the public’s misunderstanding of what “Chance” is while pointing back to the paper and claiming they never said “Chance” was responsible for anything?

It seems you have to read all documents associated with a Climate Change paper, including the press release, to understand how the message is manipulated.

“The Likelihood of Recent Record Warmth”: is that some kind of admission in itself?

That should make the public wonder. I know this is intended to seem like unbiased independence, but given the line which the media takes, the title and abstract are telling. It was written in a rush (says Sherlock). I wonder why.

…”the observed runs of record-setting temperatures were extremely unlikely to have occurred in the absence of human-caused climate change, though not nearly as unlikely as press reports have suggested. These same record temperatures were, by contrast, quite likely to have occurred in the presence of anthropogenic climate forcing.’

The press have exaggerated!

so,

extremely Unlikely to happen without it

in contrast

quite likely to happen with it.

Full of waffle and superfluity. I realise the statistical argument is what is important but nothing makes someone want to find out more than curious language like this.

I now see two camps in this discussion.

On the left we have Mann, et al (which unfortunately include Richard Muller) who are highly confident in their analysis of temperatures and what has caused the changes.

On the right exist all the folks at WUWT, wmbriggs, numberwatch, junkscience, and a plethora of others who are nominally competent in related fields looking at the data and saying “How in the H are y’all so confident in your analysis?”

There is a fundamental marketing problem here. Who wins the marketing game? The guy who is confident? Or the guy who questions the confidence?

Herr Briggs. I can’t say you were masterful in your execution here. You brought a bunch of wonderful examples out to attempt to explain several very painful concepts. I hope some of the folks who didn’t understand before might understand now. The challenge as I see it is that you quite eloquently said “Here is why I know you all are lying to yourselves”. I don’t think anyone likes to find out how they lied to themselves.


Excellent expose, but….

the first mistake they made was putting the word Mann anywhere near a scientific study. Game over before it began.

As usual you fail to understand the nature of 1/f statistics and confuse it with that for normal noise (or perhaps you just avoided this complexity).

To put it simply, imagine a drunk in a square staggering around. If they are at point A, what is the probability of them still being at point A as we go further and further into the future? The answer is that we EXPECT the drunk to move (sometime) and so whilst the immediate probability of being within a particular distance of their original point is large, the long term probability gets smaller and smaller.

Because climate noise is also 1/f noise (but with a different power, actually 1/f^n), WE EXPECT as NORMAL that the climate will see long-term variation GREATER than the 99.9% confidence based on short-term variation. In other words, (given time) >50% of the time it will be beyond the 99.9% confidence limits. In other words, the variance increases (I think as the root of n, where n is the number of “samples”).

That means that for any variation – given enough samples/time we EXPECT the sample variation to have grown to include that variation.

Here’s an article I wrote a while back: http://scottishsceptic.co.uk/2014/12/09/statistics-of-1f-noise-implications-for-climate-forecast/

Mike H,

No, this is the old way of thinking I tried to explain. Probability, or “noise” in this probabilistic sense, is not causative. The drunk’s actions are the cause of his movement. Your uncertainty of those steps can be modeled, but that model is in no way causative. All you’ve done is proposed a probability model for the understanding of the uncertainty a certain sequence takes. It in no way explains that sequence.

Charlie,

Only one typo? My enemies must have been put to sleep by the length of the article!

Alan Tomlin — 538 is subservient to the warmists. One of their very earliest issues had a sensible article explaining IIRC the weakness in the belief that climate change had caused an increase in extreme weather events. http://fivethirtyeight.com/features/disasters-cost-more-than-ever-but-not-because-of-climate-change/

The article was impeccable, but some warmists complained. So, 538 ran a “correction” or “refutation”. http://fivethirtyeight.com/features/mit-climate-scientist-responds-on-disaster-costs-and-climate-change/ This response didn’t actually refute the original article, but it was presented as if it did.

Possibly there is a fifth error, using post hoc estimates of probability as part of an analysis whose purpose is to predict the future. Alternatively, this is a corollary of one of the four fallacies.

The probability of a series of observations is 1.0 (unity) because these events have already happened.

We must claim that the series of observations reveals some deterministic causation, possibly hidden or unknown. From a series of murders in London the inference was made that the cause was Jack-the-Ripper. As I understand the history, Jack-the-Ripper was never found. We are compelled to wonder if Jack-the-Ripper did these deeds or if there were copycat murderers at work.

Only time will tell if Dr Mann’s weather observations will reveal that the underlying cause is climate change. Perhaps natural phenomena could account for all of the observed events. Eventually we might come to the view that weather is chaotic and therefore cannot be used to predict changes in climate.

Why do you think the “height” analogy and the “craps” analogy are appropriate for temperature trends, considering that the science is more than confident that an anomaly from one year is correlated to one from the next year? Think about it when it comes to hourly temperature. If the low for the day is 10°, and the high is 35°, when you arrive at a data point at 11am that says the temperature is 20°, it’s not like you can reasonably declare that any one of the numbers in the range is going to be equally valid for the 12pm hour. No, the weight would be heavier on numbers nearer to 20°, unless (a) there is a reasonable physical reason to doubt this, or (b) one says that there is no robust way to get any closer to the right number than to take the whole range at face value (i.e., no significant physical understanding/knowledge).

Or am I reading you wrong?

Salamano,

Yes, you’re reading it wrong. “Correlated” is in a sense only another word for probability. Last year’s temperature value did not cause this year’s, and so on for the others. So “correlated” only means that knowing last year’s value gives more information about this year’s—in the presence of a specified model—than only knowing the specified model.

The whole classical language is the real problem. It disguises cause.

And in any case I do not mean for height to be an analogy for temperature. I use it as an illustration that certainty in parameters does not translate to certainty in observables. The “data” Mann uses isn’t data. It is parameter estimates, results from models.

Yes, one would want to use a model to predict the height of an unobserved person. Classical statistics and Bayesian statistics measure uncertainty differently, but to predict the height, the uncertainty in the parameter estimation is taken into account in both methods. That is, the uncertainty in the parameter estimation directly affects the uncertainty in the prediction of height. No way out, unless one wants to employ nonparametric methods, the third way.

In some cases, one would be interested in a parameter, and hence want to estimate it, e.g., the proportion of Trump voters in The US, which is a parameter (a population characteristic it is!) .

(Human height cannot be uni-modal. Male and female difference. Hence this example won’t pass peer review process, in which examples need to be correct too.)

How does it disguise cause? I think you are trying to confuse people by conflating people’s misunderstanding with what applied statistics/probability can do. Yes, statistics or probability itself cannot tell us where or how to find causes. Applications of statistics and probability can be essential in the process of finding causes.

JH,

That’s naughty. Mann is trying to confuse the reader. He uses the bore-to-death method. He uses four-syllable words, or even several words, where more clarity with fewer words would suffice. He states the obvious. Why?

“anthropogenic climate forcing” “human caused climate change” That’s not done for poetic effect or to save repetition since the repetition was unnecessary in the first place.

Nature should have corrected the typos before publishing as well. Did you read the article? I did, twice, and was daydreaming from about halfway down. THAT is the purpose: “oh, he knows what he’s talking about, I won’t bother, he’s a ‘scientist’.” He forgets the lay person cares about typos and the things mentioned above. These are all part of the BS detector. What floats off the page is that he gives no other option than his favourite anthropogenic cause. He doesn’t admit, either, that we can’t measure what proportion of the CO2 is from humans, but speaks as if we know all these things.

Separate point to the article:

Uncertainty can’t be quantified. If we knew the value of our uncertainty we would know we were wrong or right. It’s not a lot better than saying “how sure are you of your educated guess?” If it were an informed guess then it’s not a guess; it’s a known thing.

Even error bars as big as you like just say “we admit we don’t know”. When they get something right they forget they admitted they couldn’t predict, and say they were right all along. We can all do that; it doesn’t take a “scientist”. Perhaps “quantify” means something else, obviously. To me it means to count or state an amount.

The future is unknowable.

“gravitational” in the second error, left hand margin.

Today’s post: Hos last line paragraph seven.

“semi-empirical” approach. Is this to answer the criticism now in common discourse that the evidence for climate change lives inside climate computer models? I think so.

“Climate noise may exhibit first order non-stationary behavior, i.e. so-called ‘long-range dependence.’ Analyses of both modern and paleoclimate observations, however, support the conclusion that it is only the anthropogenic climate change signal that displays first-order non-stationarity behavior, with climatic noise best described by a stationary noise model.”

Decipher:

“Climate’s internal variations may exhibit long-range dependence.

However, analyses of modern and paleoclimate observations support the conclusion that it is only the human-caused climate change signal that displays first-order non-stationarity.

Climate’s internal variability is best described by a stationary model of internal changeability.”

Notice as well that only the human element has “behaviour.” That must be how you can tell it’s human.

They don’t know which “signal” is human amongst the red noise, as the engineers like to say. They believe there is a human signal. That’s all.

Joy,

Being naughty has been my specialty since birth. 🙂

I actually like the idea of the paper. In fact, under all the same assumptions in the paper, I know a re-sampling (not exactly a simulation) method such that the uncertainties in the estimated parameters are asymptotically negligible. One can actually study the effect of the measurement errors in observations too. All the authors have to do is to turn to their statistician colleagues for help.

Without some sort of data/experience modelling (structural or not), one would have to rely on their god to tell them how to make predictions. Some people might be able to hear their god’s voice and prefer such a method. Well, I guess life could be a fun ride if one makes decisions without considering any experience/data, good or bad.

Harping on the same problem, known to require a solution, bores me.

Joy,

A student has earned the following scores on the past 20 quizzes, reported in sequential order, i.e., as time series data:

94.5, 95.2, 95.6, 96.1, 96.6, 96.3, 96.4, 96.7, 97.5, 97.1,

97.6, 98.4, 98.2, 55.5, 98.6, 99.3, 99.2, 99.6, 100, 100

Ignoring, for now, possible errors due to subjective grading and the possible causes of any of the scores, can you come up with a likelihood that this student will score between 98 and 100 on the next quiz? (Consider the sample mean, the standard deviation, and the increasing trend of the time series.)

If you can, it implies that you have somehow modeled the likelihood/uncertainty.
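As one crude sketch of what such a model might look like (my construction, not JH’s; the normal-distribution-around-a-fitted-trend assumption, and doing nothing special about the 55.5 outlier, are choices made purely for illustration):

```python
import math

# JH's quiz scores, in time order (55.5 is the obvious outlier).
scores = [94.5, 95.2, 95.6, 96.1, 96.6, 96.3, 96.4, 96.7, 97.5, 97.1,
          97.6, 98.4, 98.2, 55.5, 98.6, 99.3, 99.2, 99.6, 100.0, 100.0]

def normal_cdf(x, mu, sigma):
    """Normal CDF expressed via the error function (no external libraries)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def prob_next_in_range(ys, lo, hi):
    """Fit y = a + b*t by least squares, then treat the next value as
    normally distributed around the extrapolated trend with the
    residual spread as its standard deviation."""
    n = len(ys)
    xs = list(range(n))
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
    a = ybar - b * xbar
    resid = [y - (a + b * x) for x, y in zip(xs, ys)]
    sigma = math.sqrt(sum(r * r for r in resid) / (n - 2))
    mu = a + b * n  # extrapolate one step ahead
    return normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma)

p = prob_next_in_range(scores, 98.0, 100.0)
print(f"P(98 <= next score <= 100) ~ {p:.2f}")
```

Note that the single bad quiz both drags the fitted trend down and inflates the residual spread, so the answer depends heavily on how (or whether) you model that outlier; which is exactly the point about the model, not the data, carrying the uncertainty.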

Great article!

That quote by Joy:

Also highlights another couple of fundamental problems:

1. The modern signal cannot really be analysed for both the natural and human signals unless you have already made arbitrary assumptions about the nature of those signals that would allow you to discriminate between them. So that claim is essentially a tautology.

2. Mann and almost all climate researchers, when dealing with paleo data, ignore the problems of scale, resolution, and error. For example, if autocorrelation in natural processes is important over, say, 50–100 years, how would you detect that in paleo data with a temporal resolution of 200 years, or 1,000 years? It would appear as noise. (There would also be a detection problem in modern data, with a typical record length of only about 150 years.)

The problem is compounded by the fact that much paleo data are proxies, and researchers fail to acknowledge that the proxy relationship could be multivalued or heavily influenced by other factors. For example, a tree may respond to temperature, but its growth may be lower for both increased AND decreased temperature. That blows away almost any paleo reconstruction instantly; modern divergence might be explained by such a mechanism.
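Point 2 above can be checked with a toy simulation (my own construction, not from the paper, with the persistence parameter and the sampling interval chosen only for illustration): generate an AR(1) series with multi-decadal memory, subsample it at a coarse "proxy" resolution, and compare the lag-1 autocorrelations.

```python
import random

def ar1(n, phi, seed=0):
    """AR(1) series x[t] = phi*x[t-1] + noise; memory ~ 1/(1-phi) steps."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def lag1_autocorr(xs):
    """Sample autocorrelation at lag 1."""
    n = len(xs)
    m = sum(xs) / n
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(n - 1))
    den = sum((v - m) ** 2 for v in xs)
    return num / den

annual = ar1(20000, phi=0.98)   # persistence on the order of 50 "years"
proxy = annual[::200]           # sampled only every 200 "years"
print(lag1_autocorr(annual))    # close to 0.98: memory is visible
print(lag1_autocorr(proxy))     # theoretical value 0.98**200 ~ 0.02: memory looks like noise
```

The coarsely sampled series is, lag for lag, nearly indistinguishable from white noise even though the underlying process is strongly persistent, which is the detection problem described above.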

Naughty JH,

You like the idea but not the paper?

Under all the same assumptions but substituting the satellite record would the signal be the same? Why did they select the records used?

Repeating the mistakes over and over doesn’t help.

“Asymptotically negligible” = dark magic if I understand correctly. Isn’t that treating a complex thing like a toy example? Watering down the error?

All models are not equal. Just because a model is huge or complex doesn’t make it accurate.

Why consider bad data? It depends on what bad means.

“YOU MUST USE ALL OF THE AVAILABLE DATA”.

You’re referring to divine revelation, but all sorts of revelation, including common sense, are important.

The quiz example: given 2D, restricted knowledge, it is a toy example. I would say “high expectation.” Is that quantified uncertainty?

Unless we need to sell an idea or say something about the usefulness of a prediction. If it is for internal business use, there’s an incentive for intellectual honesty. If public policy is involved, then duty of care mandates honesty, or at least I wish it did. The incentive, or duty, has been corrupted.

“If you can, it implies that you have somehow modelled the likelihood/uncertainty.” or considered expectation?

We can predict. Our brain does it every time we move.

If all we know is a fraction, then gathering more data, or worse, mixing in special data, still won’t render more truth about the bigger picture.

Keats on uncertainty:

If you hang on to the known, it brings no revelation.

Open-mindedness is “letting the mind be a thoroughfare for all thoughts… not irritably reaching after fact and reason, hungering after truth and always trying at it.”

“It should be capable of being in uncertainties, mystery, doubt, remaining content with half knowledge.”

Pingback: Smakebiter fra siste tids klimaforskning | Klimarealistene

Mr. Briggs,

I believe the picture you’re using for this post depicts “dice setting” in which you put the dice in various patterns in an attempt to increase your chances of rolling a particular number or perhaps staying away from others — particularly a 7 after the point is made.

I have been to Las Vegas many times and seen many people “dice setting”. It is an allowed practice. Some casinos frown on the practice because it slows down the game and hence cuts into the casino’s profits.

The standard rules for throwing the dice are:

– keep them over the table while throwing — so you can’t switch out the dice

– the throw of the dice must be in the air

– they want you to “hit the back wall” with the dice so that they bounce off

– the dice must stay within the table

– the dice cannot go higher than the eye level of the stickman

– neither die can lie within the stacks of chips on the table — called “No roll. In the money.”

There was a dice-sliding incident at the Wynn casino a few years ago, so controlling and influencing the dice as a gambler is possible. It’s not legal, however. The Wynn is suing the couple for $700,000.

Pingback: Mann et al 2016 Time Series Analysis Flawed? | The Drinking Water Advisor

Excellent summary. For me the challenge is to understand that all weather is emergent from a complex matrix of conditions. In a sense all weather and climate is “natural” which renders the term as meaningless as the term “climate change”. Since CO2 is part of the matrix, it is not deceitful to assert that CO2 is influencing climate/weather. It is also meaningless because we see the CO2 influence manifested as benign or trivial. To sustain the climate crisis narrative requires the sort of terrible papers highlighted here, hysterical false claims about weather events, religious zealots obsessed with evangelization of climate fear and silencing of critics, especially those who are former believers.

Pingback: Outside in - Involvements with reality » Blog Archive » Chaos Patch (#99)

I thought this was excellent. Thank you. I know zip about Statistics and find discussions of it as dull as dishwater. No one ever slows down to explain the ‘inside-baseball’ terms bandied about (p-value, etc), nor bothers to clarify the acronyms, much less the underlying concepts.

This was fascinating. Look forward to that upcoming book if you’re writing it as clearly as you explain it here.

Pingback: Iowa Caucus Open Discussion. Update: Text Fixed, That Coin Toss Discussed |

Pingback: This Week in Reaction (2016/01/31) – The Reactivity Place

Anyone who publishes anything from the dishonest “green” activist M. Mann proves they are corrupted; it’s really that simple!

And when all of the math games are done, all the doubts sown like seeds in a lawn… what conclusions are to be drawn?

Conclusions are not the goal for those folks commenting.

They wish to sow enough doubt so that nothing is done while temperatures are rising… even if we use the precise data and graphs used by Dr. Roy Spencer to sow that doubt. http://www.drroyspencer.com

Page one, the first address: tap it and take a look.

That is raw satellite data. In the half left of center (closer to 1979), most of the temperatures are below the black line.

In the half right of center (closer to 2015), temperatures are above the line.

That is an honest sign of rising global temperatures.

Would you like to see a pandemic fraud?

Take your hand and cover the entire graph left of center.

Notice how that BIG BLIP near the center of the graph now dwarfs all the following temperature dots. SEE….. NO GLOBAL WARMING.

In high school, Mr. Watson would have called that selective editing to prove MY point of view… In church, it would be called a lie.

When all the statistics are spent, and all the math wizards are done sowing doubts…

THE TEMPERATURES ARE RISING, ICE IS MELTING, SEAS are rising.

Rick Kooi, very few people I know say that the earth isn’t warming.

Canada was 90% covered in glaciers; where are they now?

Obviously the earth is warming.

The question to some is ‘why’, to most it is, ‘does it matter much’?

Pingback: Recent Energy And Environmental News – February 15th 2016 | PA Pundits - International

Pingback: Half Of Climate Science Results Non-Replicable, Flawed | PSI Intl

I think temperatures were extremely unlikely to have occurred in the absence of human-caused climate change. Humans had a lot of impact.

“Humans had a lot of impact.” – Igor

Sounds like sarcasm, Igor? If so, you’ll be pleased to see this analysis that more than justifies mockery of warmist delusions.

https://www.youtube.com/watch?v=kwIixU1JyDU

Isn’t it a typo, 5’5″ +/- 3″? You mean 5’5″ +/- 3′, no?

In the antepenultimate section ‘But aren’t you worried about peer review?,’ there is a verb missing:

“Brother, this article is peer review. Somebody besides me will have to [tell / warn / tease] Mann about it, though, because Mann, brave scientist that he is, blocked me on Twitter.”

Having read only those specific sentences, I’m forced to assume the rest of the article is equally sloppy and that you have zero credibility, Mr Briggs! Is it any wonder real scientists pay little attention to your FUD-mongering ways? The dog barks, and the caravan of courage moves on! /sarc
