Heartland Climate Conference #ICCC10: Day 2. Lousy Models, Wrong Theory

Yours Truly is in this film (released in theaters this fall)

Forgive the brief post. Got back late, got up early, and have to be out early. Don’t want to miss Mark Steyn’s breakfast speech.

See the Twitter hashtag #ICCC10 or watch the speeches live. I’m speaking sometime after 10:50. Maybe 11?

If you can, at least see Will Happer’s talk. He’s a well-known, and very well-respected, physicist from Princeton. He showed us all that global warming is no different from Alice in Wonderland.

Happer showed the same picture we have all seen. The one where the models are way-up-here and the reality way-down-here, and where the gap between models and reality is growing wider and wider and wider and…

As I never tire of emphasizing, lousy models prove lousy theories. The theories which underlie the climate models must be wrong. Must as in must. Climate models have no predictive skill. They do not even come close to besting persistence (whoso readeth, let him understand).
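To see what “besting persistence” means in practice, here is a minimal sketch with entirely synthetic numbers (not real temperatures): a persistence forecast is compared, by mean squared error, against a hypothetical model that projects a steady warming ramp the data do not follow.

```python
import random

random.seed(1)

# Synthetic monthly anomalies: noise around a flat mean (illustrative only)
obs = [0.2 + random.gauss(0, 0.1) for _ in range(240)]

def mse(preds, actuals):
    """Mean squared error of one-step-ahead forecasts."""
    return sum((p - a) ** 2 for p, a in zip(preds, actuals)) / len(actuals)

actual = obs[1:]
# Persistence: tomorrow will be just like today
persistence = obs[:-1]
# Hypothetical model: projects a steady upward ramp (an assumption for
# this sketch, standing in for the models' over-warm projections)
model = [0.2 + 0.003 * i for i in range(1, len(obs))]

print("persistence MSE:", mse(persistence, actual))
print("model MSE:      ", mse(model, actual))
```

With a flat series, the ramping model’s error grows with lead time while persistence stays near the noise floor; a model with genuine skill must beat that trivial benchmark.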

Every working scientist knows this and damn well ought to start admitting it. If you care about the truth, it’s time to uphold it. This farce has gone on long enough.

It used to be a fundamental principle of science that when a theory produced bad predictions every scientist except True Believers said so. True Believers? The originators of the theories. Blondlot never admitted his N-rays didn’t exist. Have Fleischmann and Pons allowed themselves to believe cold fusion was an error? Chiropractors are among us.

This is why last week I quoted Planck: Science progresses one funeral at a time.

We’re going to have to wait for the few who originated this carbon-dioxide-will-kill-us-all-if-we-don’t-act-soon theory to die off before global warming completely disappears. But the bulk of scientists know exactly what I’m talking about. Enough already.

So since we know, with as much certainty as we know anything else, that the models stink, and therefore are based on a faulty theory, what’s the right theory? Excellent question. Let’s find out. But the breathless positive feedbacks built into the models by assumption surely can’t be right.

I understand why scientists want to avoid telling the simple truth that the models are broken. No one wants to be savaged by the activist press and lying, self-aggrandizing politicians.

But if just a few more would follow basic scientific procedures, we could put this false theory behind us and move on to the next scare. Sustainability? Climate Justice?

If you are a civilian and you’re getting your knowledge from the mainstream media or politicians or anywhere but the primary source material, you’re lost. And if you can’t read this primary source material, then you’d be better off ignoring everything you hear in the press. Go to a local university and seek out a physicist and ask him, “Is Briggs right? When predictions are so lousy and growing lousier for nearly two decades, does that mean the theory which is responsible for those predictions must be wrong?”

Do not ask one of the handful of True Believers—the same dozen guys who show up in all the quotes. Ask a working physicist who has never shown up on television. Do not ask a psychologist, or a sociologist, or an economist, or anybody else who could not understand what dynamic modeling is.

Go and ask. I dare you.

All typos free again.

Update Yours Truly is pictured being interviewed by Marc Morano in this hilariously inept coverage. The best shot is of an empty room labeled a “party.” The nitwit or liar who wrote that failed to discover all those chairs were soon to be filled with the authors of Climate Change: The Facts for a book signing.


  1. “Is Briggs right?”

    Yes, if the data the models are calibrated with and tested against is roughly correct. Otherwise, no.

    Many years ago now I took one of the major models apart and discovered that it consisted mainly of code to make it run on a parallel machine, and that what was left was mostly some 1960s Fortran surrounded by thousands of ad hoc changes and/or embellishments intended either to add detail or to improve the thing’s ability to hindcast.

    The core 60s stuff seemed ok to me, most of the additions irresponsible.

    So how good are the models? We’ve no idea because the implementations are terrible and the data used symbiotically with them cannot be trusted.

  2. Rich

    “So how good are the models? We’ve no idea because the implementations are terrible.” What’s the difference between a model and its implementation? If I have this great model that predicts lottery numbers but my implementation is so lousy it always fails to get the answers right, where does this really great model exist apart from my code?

    Surely the computer code is the model. What else makes sense?

  3. John B()

    Paul Murphy:

    “Yes, if the data the models are calibrated with and tested against is roughly correct. Otherwise, no…
    …So how good are the models? We’ve no idea because the implementations are terrible and the data used symbiotically with them cannot be trusted.”

    Huh? Assuming I understand your comment…
    I don’t see how Briggs could be right about a model being lousy if it’s properly calibrated and gives lousy results.
    Or no, Briggs is NOT right, IF the model is lousy to begin with. If the model is a lousy model, how can we have NO IDEA about how good it is? Without even considering results, how can it not be lousy? (Reminiscent of the “Harry Read Me” file.)

    I do appreciate your insight into a model.

    I’m wondering if you’re thinking about CAGW theory itself?

    But, again, if the “scientists” can’t come up with a methodology to properly assess the theory, how can it be a beautiful theory?

    Maybe I misunderstood your answer, or maybe I misunderstood the question you were answering.

  4. John B()


    I glossed over the question …

    I didn’t see or ignored “…does that mean the theory which is responsible for those predictions must be wrong?”

    Yes, I stand by the second part of my comment:

    “…if the “scientists” can’t come up with a methodology to properly assess the theory, how can it [even] be considered a beautiful theory?”

    Again, I go back to the Harry Read Me file – if “Climatologists” like Michael Mann don’t even understand “Computer Systems”, how can they possibly understand a “Climate System”?

  5. It would be a crime to miss Mark Steyn!

    I don’t think that lousy predictions disprove a theory. They disprove models and call into question the theory, but there exists the possibility that the theory is correct and the evidence is just not there yet. This reduces the theory to an unproven hypothesis which should not be believed until proven. The models are worthless at this point and should be discarded. Start over and try again to prove the hypothesis.

  6. Gary

    Good talk, Briggs. To change a mind, go through the heart.

  7. John B()

    …and the way to a “man’s” heart is through the stomach?

  8. DAV

    I don’t think that lousy predictions disprove a theory. They disprove models and call into question the theory

    I disagree.
    1) the models could be just a lousy implementation of the theory, but if that’s so, why are they defended so vigorously instead of being corrected?

    2) they could be implementations that are deliberately incorrect but one has to ask: why would anyone do this?

    The only reasonable conclusion is that the models ARE the theory. They give incorrect predictions, so the basic theory does as well. The theory is wrong.

  9. Sander van der Wal


    Indeed. A model that is a computer program implementing a specific theory is that theory.

  10. The theory is that CO2 causes warming in the atmosphere due to back radiation. The models are what was used to try to prove this. It could still be true that CO2 does cause warming. Physics seems to indicate that this is possible, if not probable (jump in here, physicists, and correct me if I was lied to by others). The problem is we have multiple claims as to how much, what interactions occur, etc. There can be a secondary process in this that has been missed. Until it is proven that there is no such thing as back radiation, or that it has been improperly quantified, or until other factors affecting this are found, the hypothesis could still be true. This all started far before computers and models.

  11. Sheri, the “theory” is plain: CO2 has a bending vibration in the infrared and so will absorb radiation at the appropriate frequency and re-radiate isotropically. Half the re-radiated radiation will go back to earth and half will continue out to space. (This neglects height effects, i.e. it assumes the CO2 is at a low enough height that the earth covers a solid angle of 2pi as seen by a CO2 molecule.) That theory neglects the fact that H2O also has a low-frequency bending vibration. The warmists try to get around that bit by talking of “feedback”, and that is the part of the theory that is unproven and likely to be false.
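    A minimal bookkeeping sketch of that half-up/half-down picture, under the same simplifying assumption (the layer sits low enough that the earth fills a 2pi solid angle, so isotropic re-emission splits the absorbed flux evenly between down and out). The flux and absorptivity values are arbitrary, purely illustrative numbers.

```python
import math

def split_reemission(upwelling_flux, absorptivity):
    """Return (back_to_surface, escaping) for one absorbing layer,
    assuming isotropic re-emission over a 2*pi solid angle each way."""
    absorbed = absorptivity * upwelling_flux
    transmitted = upwelling_flux - absorbed
    back_down = absorbed / 2.0             # half of re-emission returns
    escaping = transmitted + absorbed / 2.0
    return back_down, escaping

# Arbitrary illustrative numbers: 100 units of upwelling flux, 30% absorbed
back, out = split_reemission(100.0, 0.3)
assert math.isclose(back + out, 100.0)     # energy is conserved
```

    The geometry fixes the 50/50 split; everything contentious (how much is absorbed, and what H2O and the alleged feedbacks do) lives in the absorptivity number, which this sketch simply assumes.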

  12. I should add the theory is plain–it’s quantitating the theory that has led to nonsense.

  13. Bob: Thank you. That is more or less what I was taught. I have always noted that water seems to have a similar bending frequency and wondered how we differentiate between the two. I was always told feedbacks were more or less “fudge” factors to get the global warming theory to work as politics wanted. I think that’s probably my point: the theory is plain, the quantitating is problematic. And that’s where the models come in, so if the models fail, we really have no idea how much of an effect CO2 has. It could be virtually negligible. Right now, no one really knows.

  14. John B()


    As you know it isn’t just Global Warming Theory, it’s CAGW, the proverbial three-legged stool.

    This is more than just: “CO2 causes warming in the atmosphere due to back radiation”

    CAGW or Climate Change might ultimately be based on the “back radiation” theory, but it takes a giant leap and says that the climate system cannot handle the “extra” human-caused-increase-in-CO2, resulting in catastrophic failure of said system.

    So it’s NOT even really a three-legged stool problem, because we currently have the one leg (increase in human CO2), and we do have that “warming” leg, BUT that might be part of an entirely different stool, because we’ve had warming without human CO2, and we’ve had warming without an increase in CO2. (We’ve had cooling as well.) Of course all of this relies on proxies to provide us enough data to make a model. (Wasn’t that yesterday’s lesson?)


  15. Engineer

    The models are simply wonderful, except for the fact that there are no objective validation and verification tools, no formal methods that establish as proof that the code and the theory behind the code are correct. The state space is enormous, and hardly touched, explored or examined. As far as I know, there is no coding standard, like the FAA’s DO-178C, that at least validates the coding construction and test processes, but even that won’t help much if the concept behind the code is hosed. For something that is being used as a reason to drive policy and the subsequent spending of billions, if not more, of the currency of your choice, is it not strange that there are no objective standards? I realize it may be a daunting task, perhaps impossible, but that is not a reason to embrace the insanity of accepting these numerical hallucinations as a representation of reality.

  16. Nate

    @Engineer – good luck setting some kind of standard.

    Looking through some of the info around DO-178C, my bull**** detector is firing left and right. It seems to simply be a justification for miles of paperwork so that somebody can be sure that they have engaged in enough CYA. Of course, Cognizant or TCS will happily help you implement it using cheap foreign labor, and they’ll say they’re doing all those things. Then some poor schmuck will have to actually look at the code and discover that the standard did nothing since everybody involved lied about it.

  17. DAV


    DO-178C certainly provides CYA, but BS it’s not. It follows the basic Systems Engineering flow: Concept, Requirements, Design, and Implementation. The underlying premise is that all parts of the Implementation should be traceable to the Requirements and Concept. NASA has something similar called the Gold Standard. It also increases the up-front cost, but the hope is to prevent even more cost at a later date, either through failure or during on-going maintenance. It’s basic Quality Assurance.

    Is it perfect? No — partly because the steps aren’t always followed. The Solar Maximum Mission control wheel failures and the mirror defect in the Hubble telescope come to mind. Both of those had steps skipped because of cost considerations but ended up costing almost as much as the initial project to correct. The International Ultraviolet Explorer’s onboard computer was removed from the rocket and ‘fixed’ in a Holiday Inn room; however, that was an ‘emergency’ (the fix was temporary, as the CPU continued to fail). The OAO-C (Copernicus) flight software had a number of bugs that were discovered many years after launch, many of which, fortunately, were never encountered during observatory operations.

    If the HARRY_READ_ME file has any truth to it, the quality of climatology model software is abysmal.

  18. DAV


    The only sensible reason to assign partial credit to parts of a broken theory is for salvaging pieces to be used when constructing a new one. If it were a theory on which horse would win a race it would be small consolation that the horse selected wasn’t dead last so PARTS of it might be correct. Still, one couldn’t say the handicapping theory as a whole was correct.

    There is no escaping the fact that the current climate theory is wrong and needs repair. The basic premise that temperatures would rise with rising CO2 levels is certainly amiss. But saying the theory just doesn’t account for all of the other variables yet is like saying the Handicapping Theory above is correct but just doesn’t take into account the other horses yet. Hardly convincing.

  19. Engineer

    This is only one of many engineering standards. It is on my mind due to the A400M crash in Spain earlier this year, which resulted in the complete loss of the airframe and its occupants. Evidence is pointing toward inadequate software I&T.

    As a research tool, I have no objection. The flexibility of having a “living” tool that can be maintained with in-house expertise by subjective-objective means is very convenient and cost effective, but if it crosses the threshold into policy/public safety then it is fair game for the same kind of accountability chain the rest of the applied engineering/physics world deals with. It may not be perfect accountability, it may not net the people “at the top” that so many wish to pillory, but it’s a good start, and would go a long way to help restore credibility to the process. Of course, in a practical sense it is probably impossible to perform objective V&V on the models, since they are essentially models of open systems, orders of magnitude more complicated than aircraft ECM or space probe mission software. They would be forced to admit that their models are as objectively testable as the Oracle of Delphi, unless you could set up an interview with Apollo.

  20. John (B):

    If we mis-measure the rate at which objects fall and decide, on the basis of those measurements, that acceleration due to gravity (near the earth’s surface, direction limited to “down”, etc.) is about 29.5 ft/sec/sec, then all our predictions about the speed at which objects dropped from airplanes hit the ground will be wrong (and any predictions about directionality will be wrong too) – but Newton’s basic model would still be correct.

    The main point I see Dr. Briggs making is that reality falsifies the theory – and that’s right provided that the measurements made at either end (i.e., those used to calibrate the model, e.g. the 29.5 ft/sec/sec, and those describing the reality against which we compare model output) can be trusted. My point wrt climate models is that we cannot trust either data set, and so don’t know on this basis whether the underlying model works or not.
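    The gravity example can be put in numbers. A correct law (constant acceleration, v = sqrt(2gh) for a drop from rest, air resistance ignored) calibrated with a mismeasured constant still issues wrong predictions; 29.5 ft/sec/sec is the hypothetical mismeasurement from the comment above, 32.17 ft/sec/sec the approximate true near-surface value.

```python
# A correct law with a bad calibration constant still predicts wrongly:
# the error lies in the measurement, not in the model's form.
g_true = 32.17   # approximate near-surface acceleration, ft/s^2
g_bad = 29.5     # the hypothetical mismeasured value

def impact_speed(g, height_ft):
    """Speed at impact for a drop from rest: v = sqrt(2*g*h)."""
    return (2 * g * height_ft) ** 0.5

h = 1000.0  # illustrative drop height in feet
v_true = impact_speed(g_true, h)
v_bad = impact_speed(g_bad, h)
print(v_true, v_bad)   # the predictions differ; the law does not
```

    The same form of equation produces both numbers; only the calibration data distinguish them, which is exactly why untrustworthy data leave the underlying model untested.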

  21. JMJ: We can only hope. Actually, hummers aren’t that great a vehicle and they won’t let you have a military one with a machine gun turret, so personally I think it’s a waste. However, CSI Miami used a ton of them in the TV series. Nothing suspicious about a black hummer pulling up to your house, right?

    DAV: I am not saying the theory doesn’t account for all of the other variables. I am saying that’s possible. Probably unlikely, but I don’t know. I am differentiating between the original theory that CO2 warms, based on physics, and the augmentation of the theory with all kinds of forcings and feedbacks to arrive at CO2 being dangerous, a pollutant, etc. Please note that I did not say one could keep the theory and models. I said it was busted back to an hypothesis and they needed to start over. That would, I thought, convey the idea that I do know the models are broken and cannot be used.

    Additionally, climate models are weather models on steroids. There is some usefulness in the models, just not for representing CO2 and its warming effects. So they should stop using them for said purpose and go back to weather forecasting. At least people will tolerate 65% or less accuracy in that particular usage. It doesn’t fly when the models are used to redistribute wealth and claim we’re killing the planet.

    I agree with Paul Murphy’s example and his statement about bad data sets. Well said.

  22. Nate

    Of course, in a practical sense it is probably impossible to perform objective V&V on the models, since they are essentially models of open systems, orders of magnitude more complicated than aircraft ECM or space probe mission software.

    This is kind of what I was driving at. In the low-level software world (disclosure – I only work with 3GL-5GL languages) I believe that one can arrive at a relatively strong level of certainty that the software will perform according to detailed written specifications, as long as *everybody* involved can be trusted and the software undergoes as much rigorous testing as the hardware. From what I can tell from my (cursory) research, these guys are running models with 3GL (mostly FORTRAN?) code (and maybe a bit of 2GL here and there) with tons of tweaks and simplifications built up over the years.

    But I don’t think they have even a semi-complete list of specifications for the earth… How does one even start to write the requirements for a set of thousands of interconnected variables?

    In general I think we’re in agreement.
