I have a new paper out with the Global Warming Policy Foundation (if you search for them on Google, that company will not give you the URL, lest you be corrupted by non-Expert opinion).
Here is their summary:
How the IPCC Sees What Isn’t There
In his new paper, statistician William M Briggs surveys the field of climate attribution studies, in which changes in the weather are blamed on humankind. In particular, he looks at the recent pronouncements on that subject by the Intergovernmental Panel on Climate Change in its Sixth Assessment Report.
Briggs’ conclusion is that climate scientists are far too confident in their conclusions:
“There are multiple layers of uncertainty. There is the uncertainty in the events themselves, the uncertainty that arises out of the fact that the climate models used in these studies are imperfect, the uncertainty that arises from the statistical models used to reach the final conclusions, and finally the fact that any correlations between models and reality are weak and inconclusive.”
Professor Briggs’ paper is entitled How the IPCC Sees What Isn’t There, and its publication coincides with the release of another GWPF paper, about the attribution methodology known as “optimal fingerprinting”. GWPF hopes that these two papers will start a serious debate on the reliability of climate attribution studies.
Download the report here.
Once again, ladies and gentlemen, and you, too, Experts, the link is here.
Here is the other paper mentioned, and abstract.
Suboptimal Fingerprinting? A debate about climate statistics
Earlier this year, the economist Ross McKitrick published a new paper about an important methodology used in attributing changes in weather events to mankind. McKitrick observed that the so-called “optimal fingerprinting” methodology, which has been used in numerous studies and has been behind dozens of newspaper headlines about mankind’s influence on the atmosphere, was statistically erroneous. The implications of the findings are therefore potentially profound.
Today, the Global Warming Policy Foundation (GWPF) is publishing a new paper entitled Suboptimal Fingerprinting?, in which McKitrick explains in layman’s terms the problems he has uncovered (a more technical treatment can be seen here).
Myles Allen and Simon Tett, who developed the optimal fingerprinting methodology, have already given a short response to McKitrick’s criticisms.
McKitrick has now provided further comments, which can be seen below, alongside invited comments from Myles Allen and the climate economist Richard Tol.
In addition, we are today also publishing a new paper by statistician William M Briggs, which questions the credibility of the climate attribution statistics.
We hope that these papers will start a serious scientific debate about the credibility of attribution science, and of optimal fingerprinting in particular.
To that end we are inviting further comments from readers with expertise in statistics.
M.R. Allen and S.F.B. Tett (1999) Checking for model consistency in optimal fingerprinting
Suboptimal Fingerprinting? by Ross McKitrick
Subscribe or donate to support this site and its wholly independent host using credit card or PayPal click here
Have you seen an expansion of McKitrick’s work using:
A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity, Halbert White, Econometrica, Vol. 48, No. 4 (May 1980), pp. 817–838
in order to account for the variations in measurement?
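For readers who haven’t met White’s estimator, here is a minimal sketch in Python of the HC0 “sandwich” covariance on simulated heteroskedastic data (the data and variable names are illustrative, not drawn from McKitrick’s analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0, 10, n)
X = np.column_stack([np.ones(n), x])          # design matrix with intercept
y = 2.0 + 0.5 * x + rng.normal(0, 0.2 * x)    # error variance grows with x

# Ordinary least squares fit
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Classical (homoskedastic) covariance: s^2 (X'X)^-1
XtX_inv = np.linalg.inv(X.T @ X)
s2 = resid @ resid / (n - X.shape[1])
cov_classical = s2 * XtX_inv

# White's HC0 sandwich estimator: (X'X)^-1 X' diag(e_i^2) X (X'X)^-1
meat = X.T @ (X * resid[:, None] ** 2)
cov_hc0 = XtX_inv @ meat @ XtX_inv

print("classical SEs:", np.sqrt(np.diag(cov_classical)))
print("HC0 robust SEs:", np.sqrt(np.diag(cov_hc0)))
```

The sandwich form replaces the single pooled variance s² with the per-observation squared residuals, so the standard errors remain valid when the error variance differs across observations.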
“We hope that these papers will start a serious scientific debate about the credibility of attribution science, and of optimal fingerprinting in particular.”
Globalist Warming Hysterics having a serious scientific debate: How dare you! — our science is safe and effective! Experts agree! Go away! You’re fired!
Briggs: Sorry, but if they had not chosen the name “thegwpf.org” they might be findable. Only Duckduckgo actually found the link. Blame the organization’s lack of understanding of SEO.
Google produces a link easily if you use “thegwpf”. Naming of a site is vital. I will note that other sites are easily found also if you type in the correct term. Google is picky on a lot of things. And they have been censoring things for 15 plus years, so you’re very late to the parade. No one cared in the beginning and they let the monster grow. They fed the gremlins after midnight.
There is no dialogue on what the government is using to terrify people. There never was. And they have called it “the science” for centuries. Worry about getting smarter people and you won’t have to worry about search engine evils. You must be loved by magicians as you watch exactly what they want you to watch…
The “debate” thus far between McKitrick and Allen is short, but enlightening. Allen’s response to McKitrick basically says, “You think our model is wrong, but the widespread application of our model shows the signals are so loud that humans cause global warming that it’s a moot point our model is wrong.” And Richard Tol’s comment to the debate is priceless.
Steve Koonin, former (Obama) policymaker, is starting to question the so-called climate experts and the conclusions they are drawing with far too much certainty. Perhaps there is hope…
They definitely pretend not to see a lot of things! But I guess “essential” people are exempt!
Bezos Leads Parade Of 400 Private Jets To COP26 With $65M Gulfstream As Greta Accuses Leaders Of Betrayal
Forget COP26, The G-20 Still Struggles To Meet COP15 And COP09
“Very Green” – Biden’s Gas-Guzzling 85-Car-Motorcade Raises Climate-Crazed Eyebrows In Vatican Visit
Is Climate Alarmism An Establishment Attempt To Restore Social Control?
Only Duckduckgo actually found the link.
Oddly, Duckduckgo uses the Google search engine. If DDG can find it so should Google.
Actually, Google does find it but shoves it way down the list.
Congratulations on the publication of your excellent paper! Well done. Studies like yours are vitally important. They demonstrate the scientific and logical flaws in climate alarmism, which is an existential threat to civilization. The alarmists seek to destroy, not to “save”.
Two points in that regard may be understated.
First, the alleged warming (if there is any) could be due to numerous natural factors including the sun, orbital mechanics, clouds, oceanic circulation, and cosmic rays. The attribution models focus on human influences only, and so are under-specified. If Y = f(many X’s) and only one X is considered while the others are ignored, the model is guaranteed to be wrong and useless.
Second, the alleged warming (if there is any) is entirely beneficial, a Good Thing, and not a bogeyman to be feared. Warmth is not a problem; it is a blessing. The Halloween scare about warmth is laughably ignorant, a weapon used by totalitarians seeking to enslave populations, and should be thrown back in their faces. Nothing is as frightening as totalitarians on the march. Warmth, on the other hand, is a boon to the Planet.
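The first point above — under-specification — can be made concrete with a toy regression (purely illustrative; the variables stand in for no real climate drivers): when Y depends on two correlated predictors and one of them is omitted, the coefficient on the included one absorbs the missing effect and is badly biased.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Two correlated drivers; the "model" below will only include x1
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)   # correlated with x1
y = 1.0 * x1 + 2.0 * x2 + rng.normal(scale=0.5, size=n)

# Full model: recovers both coefficients
X_full = np.column_stack([x1, x2])
b_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)

# Under-specified model: x2 omitted, so its effect (2.0 times the
# 0.8 correlation slope) leaks into the x1 coefficient
b_short, *_ = np.linalg.lstsq(x1[:, None], y, rcond=None)

print("full model coefficients:", b_full)    # near [1.0, 2.0]
print("x1-only coefficient:", b_short[0])    # near 1.0 + 2.0*0.8 = 2.6
```

The under-specified fit still looks statistically clean — it simply attributes to x1 everything that actually came through x2.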
From the UK, when I search for:
“Global Warming Policy Foundation”
Results with Google.com: Wikipedia entry first (disinformation), followed by two fact-checks and a negative Independent.co.uk article.
Results with Duckduckgo.com: Wikipedia entry first (disinformation), followed by a lefty critique, LinkedIn?, and a negative Independent.co.uk article.
Not a lot of difference between the two.
Wikipedia entry tries to discredit thegwpf.com by highlighting accusations of their “denying climate change” and the associated efforts to revoke their charitable status (because they dare to disagree with “the science”). LOL.
These folks are frightened to their core of being scientifically challenged.
Excellent article Briggs.
Saving the word “heteroskedasticity” for my next Scrabble match.
What am I missing?
Why is the idea that climate stasis is possible via human intervention a thing? Under what circumstance could we have stopped the glaciation of most of the United States back when Washington state was under 2 miles of ice or the warming of the earth under which that ice melted and retreated?
And why is carbon vilified? All life on earth is carbon based. Carbon dioxide is necessary for all plant life to exist. Without plants we don’t have oxygen and we all die.
None of this makes sense.
The fundamental problem is simple: if we assume that man-made CO2 has no effect on climate at the level being claimed then the observations are all in one data set – natural variability. That assumption produces a result of equal validity statistically – it just means our population sample was rather too small. That’s unsurprising given how few years we have of reliable data compared with say the age of the Earth.
In other words, the complete opposite assumption also fits the data at least as well. That they claim climate models fit the data is simply false – climate models produce a huge range of results and are tweaked to match reality. Many climate models on many runs do not match the data.
Allen’s “the signals are so loud now” response made me chuckle. Not so much at Allen, but at myself, as it brought back a memory of a mistake I made early in my career. I’ll need to sanitize this a bit, hopefully it still makes sense.
The company I worked for out of college had a proprietary algorithm they used to pull a signal of interest out of the noise, where ‘noise’ is defined as anything unrelated to the signal of interest (i.e., extraneous signals ‘caused’ by an unrelated process). The algorithm required a second signal, a clean reference signal, which was usually the biggest obstacle to successfully applying the algorithm. I had the bright idea of extracting the reference signal from the noisy signal itself, thus eliminating the biggest obstacle in applying the algorithm. The reference extraction algorithm was a bit convoluted, which had the effect of obscuring what my common sense should have been telling me.
The reference extraction algorithm worked beautifully right off the bat. When the standard two-signal algorithm worked, it produced a very distinctive signal, which provided confidence that it was working properly. My ‘extracted reference’ also produced these distinctive signals, so I really thought I had come up with something useful. Luckily I didn’t tell anyone about it; I wanted to test it some more first, and I spent a number of hours doing just that. Finally it dawned on me that the distinctive signals seemed just a little too good, so I tried it on data that should have been a counter-example, but it, too, yielded the distinctive result. So obviously my reference extraction algorithm was taking the tiniest hint of a false signal and magnifying it. I imagine that all this seems obvious to most readers, but in my defense, the reference extraction process was specifically designed not to have this problem, and damn, the results looked so good. And, of course, I really wanted it to work, to show off my signal processing skills fresh out of college.
The moral of the story for me was that I should trust my common sense when it comes to getting something for nothing. If you want to believe something bad enough, it’s easy to fool yourself by adding layers obfuscating details, each layer solving issues in the layer above it, until one is unable to discern any more issues. But the basic nature of the problem is still there, of course, hidden by the added complexity. The best analogy I can think of at the moment is those perpetual motion or “over-unity” contraptions all over YouTube. Most are probably scams, but I think at least a few are true believers who are oh-so-close, they just need to add one more layer of complexity.
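The circularity in that anecdote can be sketched generically (this is not the commenter’s proprietary algorithm, just an illustration of the failure mode): filter a “reference” out of pure noise, correlate the noise against it, and the match looks far stronger than an honest comparison against an independently built reference.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
noise = rng.normal(size=n)        # pure noise: no real signal present

# "Extract" a reference from the data itself: narrowband-filter the
# noise around the frequency band we *expect* a signal in (band chosen
# arbitrarily for this illustration).
band = slice(100, 400)
spectrum = np.fft.rfft(noise)
mask = np.zeros_like(spectrum)
mask[band] = spectrum[band]
reference = np.fft.irfft(mask, n)

def norm_corr(a, b):
    """Normalized correlation between two signals."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Correlating the data against its own extracted reference gives a
# spuriously strong match, because the reference IS a piece of the data.
self_match = norm_corr(noise, reference)

# Honest control: an independent reference built the same way from
# fresh, unrelated noise.
other = rng.normal(size=n)
spec2 = np.fft.rfft(other)
mask2 = np.zeros_like(spec2)
mask2[band] = spec2[band]
fair_match = norm_corr(noise, np.fft.irfft(mask2, n))

print(f"self-extracted reference match: {self_match:.3f}")
print(f"independent reference match:    {fair_match:.3f}")
```

The self-extracted reference “detects” a signal in data that contains none, and would do so no matter what band one chose — the same something-for-nothing trap the comment describes.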