How To Generate Massive Scientific Over-Certainty With These Four Simple Tricks

Update: This must be more difficult than I thought, judging by the lack of questions or comments. If you do not understand, ask.

Here is a picture of what they are calling “Global Temperature”. It is from NASA (click Global Annual Mean Surface Air Temperature Change on this page).

Quoting: “Land-ocean temperature index, 1880 to present, with base period 1951-1980. The solid black line is the global annual mean and the solid red line is the five-year lowess smooth. The blue uncertainty bars (95% confidence limit) account only for incomplete spatial sampling. [This is an update of Fig. 9a in Hansen et al. (2010).]”

What a lovely example of how to generate massive over-certainty in just one picture! Yes, sir. You too can generate your own massive over-certainty if you just follow the steps the makers of this graph used.

1. Replace Your Data With A Model.

The red line is not the data. The red line is not real. The red line did not happen. The red line is not being used to make a prediction for (as of this writing, since the graph is automatically redrawn yearly) 2021. So why is it there?

To cover up what happened in an attempt to make changes in what happened seem more certain than they really are. This was certainly not done for any nefarious reasons: they are not trying to trick you. This is just what scientists do. They are inveterate modelers. Models, to them, are realer than Reality in some essential sense.

Notice what happened in your mind when you first saw the picture: “Looks like their theory about global warming is true. That red line goes up.” Lacking any other theory of temperature change, the certainty you put into scientific pronouncements is therefore stronger than it would have been absent the red line.
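You can watch the trick work in a few lines. Here is a toy sketch (nothing like NASA's actual code; a centered moving average stands in for the lowess, and every number is invented): a series that drops in its final year still gets a smooth that barely budges.

```python
import random

random.seed(1)

# Toy "annual anomaly" series: an upward trend plus noise, with a drop in the
# final year (like 2020 coming in colder than 2019 in some analyses).
years = list(range(1880, 2021))
data = [0.01 * (y - 1880) + random.gauss(0, 0.1) for y in years]
data[-1] = data[-2] - 0.15  # the final year actually went DOWN

# A centered 5-year moving average: a crude stand-in for the lowess smooth.
def smooth(series, k=2):
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - k), min(len(series), i + k + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

red_line = smooth(data)

# The data dropped in the last year; the "red line" barely notices.
print("data change, last year:   %+.3f" % (data[-1] - data[-2]))
print("smooth change, last year: %+.3f" % (red_line[-1] - red_line[-2]))
```

The data fell; the model scarcely moved. The model is what your eye follows.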

There are more theories, though. Another theory, and a sound one, is “natural change.” As in, “Looks like 2020 was colder than 2019. I wonder what did that?”

Well you might wonder, because here is another picture, also from NASA, but by another group. From a page with the headline “2020 Tied for Warmest Year on Record, NASA Analysis Shows”.

So, which is it? 2020 is colder than 2019 or the hottest “on record”?

That, incidentally, is another way to pile up over-certainty, because the period over which records have been kept is quite short, and short records are broken far more easily than very long ones.
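This one is easy to quantify. For a series with no trend, where any ordering of the values is equally likely, the chance that the latest value is a record is exactly 1 in n, where n is the length of the record. A sketch:

```python
from fractions import Fraction

# For an exchangeable (no-trend) series of n values, the chance the latest
# value is the largest so far is exactly 1/n: each of the n values is equally
# likely to be the maximum.
def p_new_record(n):
    return Fraction(1, n)

# The expected number of records in a series of length n is the harmonic
# number 1 + 1/2 + ... + 1/n.
def expected_records(n):
    return sum(Fraction(1, k) for k in range(1, n + 1))

# A 142-year record (1880-2021) versus a 1000-year record:
print(float(p_new_record(142)))      # about 0.007
print(float(p_new_record(1000)))     # 0.001
print(float(expected_records(142)))  # about 5.5 records expected by chance
```

Even with no trend at all, a 142-year series should produce five or six “record” years by chance alone; and the shorter the record, the more likely the latest year breaks it.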

There are also several lines here, which signal a kind of intellectual agreement. “They must be right,” you think, “because they all say the same thing.”

And they should, since they’re all using the same techniques and the same data, more or less. Would you be more certain in the graph if you took their code, ran it yourself, and added your line? “Now there are five agreements!”

However, that is a minor technique next to…

2. Hide The Real Model.

As far as historians tell us, 1880 was a fine year. In a marvelous coincidence, it was the first year Science was published. The optimism for the future and the wonders science would bring was palpable.

Times change.

For instance, there were in the world not nearly as many thermometers as there are now; indeed, only a fraction. And they were of a different kind, or rather kinds, than we now have, especially at sea. Meaning if you set a batch of old-fashioned ones next to a batch of newer ones, in the same spot, you’d see a range of measurements, some of them perhaps a degree C different.
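A toy simulation makes the point. The biases and spreads below are invented for illustration; they are not calibration data:

```python
import random
import statistics

random.seed(7)
true_sst = 15.0  # the "real" sea-surface temperature at one spot, degrees C

# Hypothetical instrument behavior (invented numbers): old canvas-bucket
# readings run cool and are noisy; engine-intake readings run warm; modern
# buoys are nearly unbiased and tight.
bucket = [true_sst - 0.3 + random.gauss(0, 0.5) for _ in range(20)]
intake = [true_sst + 0.4 + random.gauss(0, 0.3) for _ in range(20)]
buoy   = [true_sst + random.gauss(0, 0.1) for _ in range(20)]

all_readings = bucket + intake + buoy
spread = max(all_readings) - min(all_readings)
print("range of readings at the same spot: %.2f C" % spread)
print("bucket mean %.2f, intake mean %.2f, buoy mean %.2f" % (
    statistics.mean(bucket), statistics.mean(intake), statistics.mean(buoy)))
```

Same water, same day, a spread of a degree or two depending on which kind of thermometer happened to be there. And the mix of kinds changed over the century.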

Another difference: the timing of measurements wasn’t the same. And there were many, many fewer then than now.

And the grandpappy of them all, the hugest of hugeous over-certainties, leaving the other over-certainties to boast of their relative skinniness: the locations of measurements were largely not the same.

See what that means? Do you? It means that the dot for 1880 is not the same as the dot for 2020. It means the dot for 1880 is a prediction of the dot for 2020 (or vice versa).

The average of all measurements (however defined) in 1880 just can’t stand in for the average of all measurements in 2020, not without some whopping plus-or-minuses attached to the 1880 average. Where are they?

Think: each dot is supposed to represent some kind of global average, whatever that might be. Go out into your yard or street and walk around. The temperature will likely change as you do, and by quite a lot if you live in a place that isn’t homogeneous. Now imagine doing this for the entire surface of the earth (or rather just above it). Everywhere. All at once. (Many times.) The number you get from averaging all this is going to be different than the dot shown for 2020.

How different? I don’t know, and neither do you.

So not only is the 1880 dot a prediction for 2020, both dots are predictions for the real average, which, of course, is impossible to know. Yes, even if you used a satellite. Because satellite-derived temperatures are themselves model derived (an inverse formula of some complexity, a function of the light they receive), and anyway you can’t do the entire surface of the earth at once either (clouds, etc.).
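A toy planet shows the problem. The temperature profile and station network below are invented; the point is only that a sparse, unevenly placed sample need not come close to the true surface average:

```python
import random

random.seed(42)

# A toy "planet": temperature varies smoothly with latitude (hot equator,
# cold poles), on a fine grid standing in for "everywhere, all at once".
lats = [-90 + 180 * i / 9999 for i in range(10000)]

def temp(lat):
    return 30 - 60 * (abs(lat) / 90) ** 1.5  # invented profile, degrees C

true_avg = sum(temp(l) for l in lats) / len(lats)

# 1880-style sampling: a few dozen stations, mostly mid-northern latitudes.
stations = [random.uniform(20, 60) for _ in range(40)]
station_avg = sum(temp(l) for l in stations) / len(stations)

print("true grid average:    %.2f C" % true_avg)
print("40-station 'average': %.2f C" % station_avg)
print("gap:                  %.2f C" % (station_avg - true_avg))
```

The station “average” is a real number, honestly computed; it is just not the average of the planet. The gap here dwarfs the tenths of a degree the anomaly charts argue over.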

Which brings us to…

3. Use The Wrong Plus Or Minus.

I talk about this one to exhaustion, but it’s very rare I can get anybody to understand me, especially if they have had previous statistical training.

If we want to quantify the uncertainty of how the 1880 point is a prediction of 2020 (the points that went into 2020, not the globe!), we need a model. All models say what they are told to say, and some say right or reasonable things, while others do not.

The lowess model above is useless if it just uses the points we see, because it does nothing more than replace the data with a model, the Deadly Sin of Reification. This model says the wrong thing.

But if that model was using, in some fashion, all the data points that go into each yearly point, it might be saying something reasonable. Then that model could be a prediction for (points for) 2020. Which this surely was not. But let that pass, and let’s assume that’s what they intended.

With that generous (and surely wrong) assumption, we can interpret the blue vertical lines, which are intended as plus-or-minuses. They are the wrong ones. Even if this is the right model.

They are the plus-or-minus of the model parameters, of little chunks of math inside the model. These no one in the world cares about; or, rather, since scientists love parameters even more than grants, no one should care about.

What we want is the predictive plus-or-minus. And that bound is usually four to eight times as wide as the parametric bound normally presented. The blue plus-or-minus is about 0.2 C. Which means the predictive bound is likely to be 0.8 to 1.6 C!
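Where does “four to eight” come from? Under the simplest normal model, the parametric bound on the mean shrinks like one over the square root of n, while the predictive bound for a new measurement does not; their ratio is about the square root of n+1. A sketch:

```python
import math
import random
import statistics

random.seed(3)

# n measurements of the same thing, each with error.
n = 30
obs = [random.gauss(0.5, 0.25) for _ in range(n)]
m, s = statistics.mean(obs), statistics.stdev(obs)
z = 1.96  # normal approximation, purely for illustration

# "Parametric" bound: uncertainty of the estimated MEAN, a model parameter.
parametric = z * s / math.sqrt(n)

# Predictive bound: uncertainty of a NEW measurement.
predictive = z * s * math.sqrt(1 + 1 / n)

print("parametric +/-: %.3f" % parametric)
print("predictive +/-: %.3f" % predictive)
print("ratio: %.1f" % (predictive / parametric))  # sqrt(n+1), about 5.6 here
```

For n between about 15 and 60 the ratio runs from 4 to 8, which is where the rule of thumb comes from. The parametric bound tells you about a chunk of math; the predictive bound tells you about the world.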

And that is just the predictive bound for the temperature as represented by the point called 2020. Not the real global average, which nobody knows.

Here’s a picture from a group that is aiming toward the right path, but isn’t yet on it.

This is Berkeley Earth’s BEST, one of the contributors to the second graph above. It goes back further in time than the others, where the sources and locations of temperature measurements are even more different than in 1880.

Those gray plus-or-minuses are nice to see, but, alas, they are also parametric. They are too narrow. Here’s an old analysis of their method which goes into the details (not light reading).

Even so, let’s take their plus-or-minuses seriously. Look at 1750. So (appropriately) wide are the plus-or-minuses that they don’t even fit on the graph. In other words, we can’t tell with any reasonable certainty whether 1750 was warmer or colder than the points that represent 2000-whatever (it’s not clear what the final year is on that chart).

I emphasize: the points that represent 2000-whatever. Because it’s not the global average, which again nobody knows. Shall I repeat that? Nobody knows.

BEST (the name of the Berkeley project) didn’t understand they had to do the same thing for points after 1850, up to whatever date the methods and locations of measurement stopped changing.

Which they didn’t. Stop, I mean. All these things are ever in flux. Which means every point should have predictive plus-or-minuses around them.

Which sort of brings us to our last tip, which is to…

4. Be Selective In What You Show

The whole political point of global cooling, a.k.a. global warming, a.k.a. climate change, a.k.a. climate emergency, is that the temperatures (etc.) we are now experiencing, or soon will experience, are worse than ever. And that Experts, scientists, and rulers know the exact climate optimum; i.e. the best temperatures, precipitation, both rain and snow, sunshine, humidity, and so on, at every point on the earth and under the water. These are some smart guys!

Even so, it’s of interest what temperatures (etc.) were before in history. Why start the graph at 1880? History is longer than that.

Why not start, say, at 6,000 BC? Or 20 million BC (or whatever the name of that movie was)? Here’s one chart (from another approved source) that attempts to do so:

You’ll notice the chart makers followed all our tips (except of course this last one) and generated massive over-certainty. Kudos to them. But at least they went back before 1850. Sort of. There’s an odd lacuna at that date. They just gave up?

Anyway, you have to love how they gift us two new over-certainty generating ideas. The first was to label points in history “Coolhouse” which were much warmer than they are now, accepting the plot as true. But they still managed to make the viewer sweat over global warming. Magnificent.

Also, from about 5 to 10 thousand years ago, it was again much hotter than now. A time when man thrived. But not on burning fossil fuels. Meaning it was hotter then than now for other reasons. Reasons that still might exist. Interesting, no?

Their second method was to plot the most panicked predictions of future 2020-like points (and not actual global means, which again nobody knows). This makes a terrific contrast to the historical data. Scary!

Until you ask yourself, how in the hell can they be that certain of what the temperature will be, when they aren’t even that close to certain what the temperature was?

That is when you begin laughing.

Question & Answer Period

“You can’t be right, Briggs, because Experts disagree with you.”

Are you a reporter, or maybe an activist?

“What’s the difference what the real global mean is. I see that nobody can know it. But they still pick some points, and methods, and functionally declare that to be the global mean temperature, or GAT. In fact, I remember you saying you could do this yourself, years ago.”

You’re right. I did, and they can. This functional GAT can be tracked, modeled, and toyed with, like any other scientific measurement.

But then you have to prove it means something to me. That is, how is a small change in functional GAT, given the hugeous changes we’ve seen in history, going to affect me? And the guy down the street? And so on?

I can barely feel a 1 C change, so functional GAT change alone is of little meaning. So it has to be things changed by changing GAT. And of those secondary changes, and the tertiary after adding in “solutions” to bring us to that Expert-known optimum, we are necessarily even less certain. Here we enter the Multiplication Of Uncertainties, where the string of propositions centered around functional GAT grows rapidly uncertain and improbable.
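A sketch of that multiplication. The probabilities below are invented for illustration, and the independence assumed by simple multiplying is itself generous:

```python
# If each link in a chain of propositions is individually quite probable,
# the whole chain can still be improbable: probabilities multiply.
# Every number here is made up, purely to show the arithmetic.
links = {
    "GAT rose":               0.9,
    "rise is mostly human":   0.8,
    "rise is net harmful":    0.7,
    "harm reaches me":        0.7,
    "'solution' helps":       0.6,
}

p = 1.0
for claim, prob in links.items():
    p *= prob
    print("%-22s %.2f   chain so far: %.2f" % (claim, prob, p))
```

Five links, each comfortably probable on its own, and the chain is already down near one in five. Add the tertiary links and it only gets worse.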

“It’s me again. You can’t be right because otherwise why would so many people, even some very smart ones, be so worried and assured they are right? Besides, even if they are wrong, isn’t it better to do something?”

I can be right. And, no, it isn’t. How do you know what you will do won’t make things worse? Answer: you cannot.

Buy my new book and learn to argue against the regime: Everything You Believe Is Wrong.

Subscribe or donate to support this site and its wholly independent host using credit card or PayPal click here; Or go to PayPal directly. For Zelle, use my email.


  1. Robin

    “… there were in the world not nearly as many thermometers as there are now; indeed, only a fraction. And they were of a different kind, or rather kinds, than we now have, especially at sea.”

    Yes, the measurement of sea temperature today is complicated. I came across this issue when trying to compare two marine environments for a project some years ago; the temperatures are today recorded by devices at various depths, some by indirect measurement techniques, some direct, etc. I would be willing to wager that the comparative temperatures in these climate studies are ‘guesstimated’ using data from devices at varying depths and of varying reliability.

    It’s beyond imagination that there were reliable sea temperature measurements around the world in 1880. It is almost impossible to measure them accurately today.

    “I can barely feel a 1 C change …”

    Human sensitivity to an environmental effect is often best expressed by exponential scales, such as the Decibel scale, the Richter scale, etc etc. Maybe this should apply to climate sensitivity as well.

  2. “You can’t be right because otherwise why would so many people, even some very smart ones, be so worried and assured they are right? Besides, even if they are wrong, isn’t it better to do something?”

    Your interviewer’s question is an excellent exemplar of the analytical fallacy, DO vs. SAY.
    “Watch what they DO. Don’t listen to what they SAY.”
    Treating statements and words of con-men, charlatans, fakes, frauds, tricksters as if they are meaningful and worthy of analysis leads you down the road to analytical perdition.
    How can you mitigate this analytical fallacy?
    Observe and comment on what the con-men DO.

    The “very smart people” may SAY they are “worried.” But many, probably most, of them do not DO actions congruent with worry.

    SAY: Planet is over-heating, sea-levels dangerously rising.
    DO: bought multi-million dollar ocean-front property

    Al Gore
    SAY: Planet has a fever. CO2 causes it.
    DO: Lives a life of conspicuous consumption, enjoying all the fruits of modern, petroleum-fueled comfort and luxury–heated and cooled mansions, private jet globe-hopping, etc.

    That said, the fear-mongering chicken-little-the-sky-is-falling, planet-has-a-fever-we’re-all-gonna-die influence operations are very effective on the general population.

    A small, probably growing sub-segment of the general population does ACT as if they are worried. The most extreme of these take the obvious action. One self-immolated on the steps of the Supreme Court recently.

    Until Al Gore, Obama, Michael Mann, Bill Gates, and the other leaders of the CO2-is-killing-the-planet scam self-immolate, ignore their words, and watch their actions.

  3. brad tittle

    Might I add additional notes on how to obscure the system:

    Do not plot all of the data. Plotting all of the data might expose what the “actual” values of the data were. (This is a big might; see Briggs’s comments on thermometers.)

    Do not plot the data against an actual 0. Try to recreate the 0 in their chart. Ask any of the people who think this chart actually shows something how the 0 was determined. You might get an answer, but none of them have tried.

    Plot all of the data against absolute 0 and you will learn something. My apologies in advance, it isn’t an exciting thing to learn. But here are two things I learned.

    The plot of all data ranges from -80C up to almost 50C. If the range of your plot is from 0K to 325K, your plot of all data will use 40% of the vertical area of the chart. Graphic artists may scream about wasted space. My professors screamed when I didn’t do this because I would have obscured THE most important part of graphing, THE ability to visually compare the data against its absolute value.

    2nd thing I learned… Plot this idiotic chart on top of all data and suddenly you realize BECAUSE YOU CAN SEE RELATIVE SIZES, that this chart is a @(@#$)(@#$@#$ flat line on all the data.

    Apologies for swearing and screaming. Please know I am yelling at all the folks who keep telling me “HOW DARE YOU!”

    Anyone who looks at the chart above and doesn’t have the chart of absolute values of ALL the data available is not a scientist.
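    The arithmetic in that comment checks out, and takes only a few lines to verify:

```python
# The back-of-envelope behind "plot against absolute zero".
t_min_c, t_max_c = -80.0, 50.0     # rough range of surface temperatures, C
axis_lo_k, axis_hi_k = 0.0, 325.0  # a Kelvin axis starting at absolute zero

data_span = t_max_c - t_min_c      # 130 K of real variation
axis_span = axis_hi_k - axis_lo_k  # 325 K of axis
print("all-data fraction of chart: %.0f%%" % (100 * data_span / axis_span))

anomaly = 1.0  # a 1 C "global warming" signal on the same axis
print("1 C anomaly fraction of chart: %.2f%%" % (100 * anomaly / axis_span))
```

All the weather on earth fills 40% of such a chart; the anomaly everyone argues about fills a third of one percent of it.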

  4. BrianPratt

    The ridiculousness of a global mean temperature even on a daily basis is so obvious that the fact that most people don’t get it speaks to, well, people. I tell my students that a friend came from northwest Argentina to Saskatchewan early January, 20 years ago. It was +45 in his city when he left, about +25 when he landed in Buenos Aires, about –10 when he landed in Toronto, and –35 when he arrived. Average: +25. Completely meaningless.

  5. Hagfish Bagpipe

    ”If you do not understand, ask.”

    Read you loud and clear, Sarge: The stupid, she rises, while the smart, he sinks. And the devils run riot.

    ”That is when you begin laughing.”

    Funniest show on earth.

  6. Rob SCAGEL

    The “multiplication of uncertainties”/“propagation of error” needs its own blog entry. Thanks again.

  7. Indeed. Reminder: the thermodynamic temperature is a proxy and the measurement of it is also a proxy. The thermodynamic temperature is a representation of the internal kinetic energy of a defined sample of matter and *only* its kinetic energy. Inverting the color temperature (from emitted light) isn’t as simple a thing as told. Much is left out. Also left out are the conditions where the usual inversion’s sensitivity and resolution reasonableness don’t apply.

    Lying with statistics gets done every day. Chart tricks are one way. It is fine to magnify what you want to talk about; but still supply the properly zeroed main chart. “GAT” variations, in particular, are very small and cluster around the 59F/15C point.

    Back in the dark ages, if I supplied an analysis and did not do, to the extent of my abilities, a proper error analysis and propagation, I’d be laughed out of the building. Very few people do these, and most of those are skeptical of the activist’s narrative.

  8. Incitadus

    Exactly. We have no idea what the mean temperature of the earth is today, much less what it was in 1849. This fact has been pointed out numerous times over the years but does not seem to have caught on with the zombies. What we need is a twenty year multi-billion dollar advertising budget to break through. Barring this, my advice is to get a chainsaw and a rototiller and hunker down; the storm is upon us. What I find amusing is how the role of the earth’s core temperature is always ignored; it has to be cooling. Though in human timeframes I’m sure this is immaterial, I’d still like to stick a thermometer in there.

  9. I think you missed one: the use of “temperature anomaly” instead of temperature. This does two dishonest things:

    1 – it assumes knowledge of the base against which the anomaly is measured (the 0 line on the graph has no factual basis); and,

    2 – it exaggerates the visual impact of the rate of change.

    Consider, for example, what a graph of my car’s acceleration would look like if I used meters for the distance on the Y axis and seconds for time on the X axis versus using mm on Y and minutes on X.
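    The baseline point is easy to demonstrate: run the same temperatures through two different base periods and you get identical shapes but different zero lines. Toy numbers, invented for illustration:

```python
import statistics

# Toy anomaly calculation: the same temperatures, two different base periods.
# The curve's shape is identical; only the zero line moves.
temps = [14.0, 14.1, 13.9, 14.2, 14.4, 14.3, 14.6, 14.5, 14.8, 14.9]

base_a = statistics.mean(temps[0:5])   # an early base period
base_b = statistics.mean(temps[5:10])  # a later base period

anom_a = [t - base_a for t in temps]
anom_b = [t - base_b for t in temps]

# Year-to-year differences are identical; only the offset changes.
assert all(
    abs((anom_a[i + 1] - anom_a[i]) - (anom_b[i + 1] - anom_b[i])) < 1e-12
    for i in range(len(temps) - 1)
)
print("base A: %.2f C   base B: %.2f C" % (base_a, base_b))
```

Which years sit "above zero" and which "below" is entirely a choice of base period, not a fact about the climate.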

  10. John B()

    With that generous (and surely wrong) assumption, we can interpret the blue vertical lines, which are intended as plus-or-minuses. They are the wrong ones. Even if this is the right model.

    They are the plus-or-minus of the model parameters, of little chunks of math inside the model. These no one in the world cares about; or, rather, since scientists love parameters even more than grants, no one should care about.

    What we want is the predictive plus-or-minus. And that bound is usually four to eight times as wide as the parametric bound normally presented. The blue plus-or-minus is about 0.2 C. Which means the predictive bound is likely to be 0.8 to 1.6 C!

    Perhaps a clearer way of saying that is this:

    BLUE bits show how well the RED LINE represents the BLACK points when
    what we really want is how BLACK points represent reality

  11. Darin

    This hierarchy of uncertainty is wildly underappreciated.

    * The uncertainty of our parameters.
    * The uncertainty in our predictions based on the data we’ve collected.
    * Epistemic uncertainty – do we actually understand what’s going on.

    Briggs, what back-of-the-envelope are you using for your prediction interval?

  12. C.R.Dickson

    For those who want to see a discussion on the NASA GISS numbers (which are not data and are not temperatures), I had a post about anomalies published at WUWT several years ago. You can go to the link below for that post, which also includes a bit of an update:

    When climate practitioners (don’t call them scientists) can’t do simple freshman college chemistry and physics treatment of data, you no longer have science.

  13. Jerry

    First, convince the population of The Science of Climate Change. Ram it down our throats non-stop until we all believe it, just “because Science”.
    Then act as it is Settled Science, or Science Fact. No need for discussion, discussion is unhealthy, unsafe, and misinformation.
    Then enact the New World Order.
    Brilliant. And we even help dig our own graves.
    You have to admire it, in a way.

  14. dalyplanet

    Nice post Mr. Briggs. I used to discuss these points in the comment section of the Washington Post Global Climate Armageddon articles until I was kicked out permanently for telling the truth one too many times.

  15. Bob

    It’s just as well that thermoclines don’t move about in the sea. That’d really put the cat among the pigeons. Yeah, regular as clockwork, spot on the spot every time?

    All temperature taking is subject to the conditions in an area at that particular time. 90 million years ago the equatorial sea temperature was at or over (from memory) 35°C. How do we know? Coral dies at that temperature, apparently, and there is a band of black dead coral deposits at the equator that came from the last worldwide boil-up, around 90 million years old. When the Antarctic was growing swamp forests, having balmy 18°C days.


  16. George

    I’ve read many articles that examine surface temperature datasets, but I haven’t seen any that look at the annual average distance of weather stations from the equator, how this changes over time, and how this may impact the calculation of global temperature averages.

    Most casual readers looking at a temperature reconstruction chart would not be aware that each year of the record uses data from a different set of weather stations. Even if they did know this, they may not realise the implications.

    I have examined three datasets that are publicly available, from the Global Historical Climatology Network (GHCN), NASA’s Goddard Institute for Space Studies (GISS), and the University of East Anglia Climate Research Unit. These datasets cover land-based measurements only, but as far as I know, this is the predominant type of measurement performed over the last 150 years.

    As an example of what I found, the following link is to a chart of GISS v4 data, showing the relationship of changes in temperature to changes in station distance from the equator over time.

    I think that the fairly clear inverse correlation is more than a coincidence. It may be a way of smoothing out the variations in temperature data, to try and remove any correlation with natural factors such as solar activity. Then again, it may simply be an artefact of automated data processing that scientists haven’t noticed.

    Note that only complete annual temperature records were used. Records with any missing monthly values have been removed.
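    The mechanism is simple to show with a toy network: hold the climate perfectly fixed and let the stations drift toward the equator, and the naive average “warms” all by itself. The latitude profile and the drift below are invented for illustration:

```python
# Each year's "global" average uses a different set of stations. If the
# station network drifts in latitude, the naive average changes with NO
# change in climate at all. (Profile and drift are invented.)
def temp_at(lat):
    return 30 - 60 * (abs(lat) / 90) ** 1.5  # fixed, unchanging climate

avgs = []
# Hypothetical network: mean station latitude creeps from 50N toward 32N.
for year, mean_lat in zip(range(1900, 2000, 10), range(50, 30, -2)):
    stations = [mean_lat - 5, mean_lat, mean_lat + 5]  # that year's network
    avg = sum(temp_at(l) for l in stations) / len(stations)
    avgs.append(avg)
    print(year, "mean station latitude %dN" % mean_lat, "avg %.1f C" % avg)
```

A steady apparent warming trend, produced by a climate that never changed, only by stations that moved. Whether the real networks drifted this way is exactly the question George raises.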

  17. Johnno

    Looking for the real global mean in 2022, is a lot like looking for a real woman in 2022.

    As Matt Walsh demonstrated, experts don’t know and cannot answer, but F.U. for daring to ask such insensitive questions. Just follow THE SCIENCE ™.

  18. Richard C (NZ)

    4. Be Selective In What You Show

    As per Paul Murphy above, they only show anomalies. That is scientifically fraudulent.

    The absolute series for NH and SH reveals the summer-winter warm-cool cycle absent from anomalies, with the phases in opposition. The global mean is skewed by the NH so GL assumes the NH phase.

    Those series can be generated with Web-based Reanalysis Intercomparison Tool (WRIT):

    Also a model but obs based. At least a more realistic picture but I still have my doubts because ENSO activity doesn’t show up (or does but I’m not seeing it).

  19. Rudolph Harrier

    Once I saw a report correlating something or other to a “human freedom index.” This index is proudly calculated from a weighted average of freedom indices in twelve areas. Each of those indices in turn is calculated by an average of various “subfactors” or “subcomponents” (with some subfactors contributing to multiple indices). Many of these subfactors were derived from the “rule of law index,” which is itself derived from “8 factors and 44 subfactors.” These are derived from a survey (I had to go three papers deep to find this information, by the way) which was conducted in a variety of fashions between different countries (e.g., online versus face-to-face). Note that while all answers in this survey are simple “yes/no” answers from surveys given to finitely many people, they are somehow used to make “a continuous score rang[ing] from 0 to 10,” which is mathematically impossible unless a model was used to smooth the data.

    Getting back to the “human freedom index,” other parts of the index were derived in an even more arbitrary way. For example there is a “disappearances” index which is given only a score of 10, 5 or 0 depending on whether “disappearances” happen not at all, sometimes or frequently, respectively. Note that this weighting means that having one or two people in a country of millions vanish is exactly at the halfway point of freedom between a country where there has never been a disappearance and one where the government abducts people daily. Another index used a single question with 4 responses and then, well, I’ll let the Cato Institute explain itself:

    “These responses are reported using a Bayesian aggregation methodology created for the Varieties of Democracy project, which maps them to the scale of a mean value of 0 and a standard deviation of 1. We in turn map this to our 0-to-10 scale by assigning scores of −2.5 or lower a score of 0 and scores of +2.5 or higher a 10, with scores within that interval assigned scores that are interpolated linearly.”

    You get the idea. I’m not going to go through every subfactor since that would be a paper in and of itself.

    Even granting the unlikely hypothesis that “human freedom” can be measured numerically in a meaningful fashion, the amount of uncertainty in this index boggles the mind. Each part has a tremendous amount of uncertainty associated with it, and these are only magnified in the final product. And keep in mind that there are papers that use the index as input data, where of course it is treated as though it had no error whatsoever, leading to further uncertainty in the derived papers.

    But the sad thing is that this isn’t even the worst index I’ve seen. There was a pollution index that was even more ridiculous, but I can’t remember the name now.
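    The mapping the Cato quote describes is simple enough to write down, and writing it down shows what it throws away:

```python
def to_ten_scale(z):
    """Map a standardized score to 0-10 as the quote describes: clip at
    +/-2.5, then interpolate linearly in between."""
    z = max(-2.5, min(2.5, z))
    return (z + 2.5) * 2.0  # the 5-unit range stretched onto 0-10

# A country 2.5 sd below the mean scores the same as one 12.5 sd below:
# everything in the tails is thrown away.
print(to_ten_scale(-2.5), to_ten_scale(-12.5))  # both 0.0
print(to_ten_scale(0.0))                        # 5.0
print(to_ten_scale(2.5), to_ten_scale(99.0))    # both 10.0
```

Whatever uncertainty the underlying Bayesian aggregation carried vanishes entirely at this step; downstream papers then treat the 0-to-10 number as error-free.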

  20. SomeBlokeFromCambridge

    There is also the problem that Global Temperature does not exist. Temperature is an intensive property of matter. Although you can add up a whole list of temperatures and achieve a numerical result, that result has no meaning (other than maybe a sort of hand-waving proxy for heat energy). The Globe consists of various different materials, including water in three phases and gases at varying pressures, is heated on one side and cooling on the other, and rotates in space.
    What is the Average Temperature of Greenland in the winter and Florida in the summer? Or, on a domestic scale, 25.002 ml of gin at 14.567°C, a small can of tonic from the fridge at 5°C and a lump of ice at −15°C, all in a glass tumbler and sitting in a warm garden on a sunny day?
    I asked a doom-mongering local priest, who was warning us all of the Climate Crisis and the dangerous rise in Global Temperature: Tell me, what is the current temperature of the Globe? No answer received 😉

  21. Johnno

    The global climate hoax models are designed to spit out only one answer…


  22. Kip Hansen

    Briggs ==> It is far worse than you think — the first graph presented is “Land-ocean temperature index, 1880 to present” — with 70% of the area being Ocean and in the earliest 75% of the time period with almost ZERO data points per thousand square miles and absolutely nothing regular on the time scale — no regular measurements from no regular spatial or temporal points. In the first 80 years of the land record, the measurements are very sparse and mostly undependable, with very few continuous records at all.

    From the view of a pragmatist, before WWII, the “Land-ocean temperature index, 1880 to present” is (now, what follows is a highly technical term….) “mostly made up”. It should be considered only vaguely representational and NEVER represented as degree-valued points.

  23. John Oliver

    I think it would be helpful to remember that most people are laymen. They are not people who work as professionals in prob-stats or research. And unfortunately it is the “average Joe” who you need to reach. I took 2 stats courses in college and courses involving data collection, research design, etc. (40 years back!) and I still have to look up the concepts you reference, i.e.: “lowess smooth”, the concept of uncertainty, confidence level calculations, spatial sampling, over-certainty, the entire concept that these are models and not actual high-quality, consistent, equally comparable, well-located readings from “calibrated thermometers”, how a satellite “actually measures” temp, etc. Also, in some of your other articles, forget the sports analogies.

    We need you, we need your skill set and experience. I just think you need a retired old-school English professor to be your editor and help you clean up your writing in terms of clarity and use of terms for a wider audience.

  24. Bob

    In support of John. I don’t even get to layman stage.
    My concern is simply this. Our planet will get to a stage it will throw humanity out of the nest.
    Another catastrophic reset of which the planet seems to have had 7-ish, so far where a small percentage of life survives.
    Humanity has X time to get off of the planet or more than likely go extinct. Or try to build domed areas where it is protected from the nature of this planet. Which just may give humanity Y time to get off of this planet.
    It won’t matter to me, I’ll be dead. But it’ll be a big worry to those living at the time.
    I am convinced it is a matter of when, not if, it will happen. Just by looking at the geological records. I don’t have any more training in geology than statistics. But Geology seems to have real things they have found from the past. They seem to have achieved good dating of what they have found.
    But my training isn’t any better for it than it is for why we write they instead of thay. :-).

  25. James G.

    One thing that amused me from NVIDIA’s GTC keynote this year was their involvement in a new earth and climate model for the planet. They are now using the latest AI technology (“transformers”) to model the physics of the atmosphere, and it doesn’t attempt to solve the physical equations in the way that previous models do. I haven’t looked into it with any detail to form an opinion, but it seems AI models are becoming the new method for climate analysis.

    The introduction comments start at about the 10 minute mark of the left detailed video and there are more details later.

  26. James G.

    The more detailed discussion of the technology is around 48 minutes.

  27. James G.

    Professor Ole Humlum on his Climate4You site covered a past World III model from years ago in his latest monthly update (at end of current download). Not a climate model but a resource model.

    The Limits to Growth model 1972. Quote:

    However, the problems with the World III model clearly went beyond the technical weaknesses of the model. A Club of Rome official stated shortly after the predictions were released that the idea was “to get a message across, and to make people aware of the impending crisis”. In other words, presumably, the model outcome had been determined before the model was run.

  28. Neil Jordan

    re SomeBlokeFromCambridge – the winning comment, about temperature being an intensive property that cannot be averaged. The correct unit of measurement would be heat content, or enthalpy. With paired temperature and moisture content, you can calculate enthalpy and do averages. However, the plot of enthalpy would be a nearly horizontal line; the hockey stick disappears.
    re sea temperatures, a witch’s brew. Some temperatures were with thermometers in a bucket hauled up from the surface. After steam and diesel, some temperatures were taken at the sea water cooling intakes, at whatever depth below the surface. Most recently, there are in situ continuous measurements from the surface to some depth. In some cases all three are measured.
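    The enthalpy point is easy to illustrate with the standard psychrometric approximation for moist air (enthalpy in kJ per kg of dry air, temperature in °C, humidity ratio in kg of water per kg of dry air; the moisture numbers below are invented):

```python
def enthalpy(t_c, w):
    """Moist-air specific enthalpy, kJ per kg dry air, via the standard
    psychrometric approximation: sensible heat of the dry air plus the
    latent and sensible heat of the water vapour it carries."""
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)

# Two air samples at the SAME temperature but different moisture carry
# very different heat content; temperature alone misses this entirely.
dry   = enthalpy(30.0, 0.005)  # 30 C, fairly dry air
humid = enthalpy(30.0, 0.020)  # 30 C, humid tropical air
print("dry:   %.1f kJ/kg" % dry)
print("humid: %.1f kJ/kg" % humid)
```

Same thermometer reading, nearly double the heat content. Averaging temperatures while ignoring moisture is averaging the wrong quantity.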

  29. hyportnex

    As I recall, the first reproducible temperature measurement, reproducible in a scientific sense, was done by Fahrenheit around the 1720s, and his thermometer was probably good to within +/- 0.2C, or some such. Any reference to Earth’s temperature before his time, especially before Galileo or Newton, is at worst pure speculation, or at best reference to proxy secondary indicators, such as tree rings, ice cores, etc., for which nobody has direct experimental proof whatsoever. I am not saying that such a method is wrong and hence unscientific; nobody can go back and measure what the Earth’s temperature was 5 million years ago. But the amount of guesswork involved in estimating the GAT during the Pleistocene, for example, should be acknowledged by placing somewhat larger uncertainty bars around those estimates. So we have 300 years of direct measurements, very nonuniformly sampled on the Earth’s surface, and on that basis we are making HISTORICAL post-dictions and FUTURE GLOBAL pre-dictions.

    The problem of “the” GAT, whatever that term might mean, is compounded by the problem of what physical meaning one may attach to a nonuniformly sampled global spatial arithmetic average of the temperature. Why is that average more significant meteorologically than, say, exp[E{log T}] or log[E{exp(T)}] or some Hölder mean or whatever? Do they all show the same trend lines? Probably not, for the GlobalWarmerCoolers would be triumphantly showing us those too, but they do not…
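    The different means are easy to compare on the same set of temperatures (numbers invented, in kelvin), and they do not agree:

```python
import math

# The usual "GAT" is an arithmetic mean. Other averages of the very same
# temperatures give different numbers, and they respond differently when
# the spread of the field changes, even at fixed arithmetic mean.
temps_k = [233.0, 273.0, 288.0, 300.0, 313.0]  # a polar-to-tropical spread

n = len(temps_k)
arith = sum(temps_k) / n
geom = math.exp(sum(math.log(t) for t in temps_k) / n)  # exp[E{log T}]
harm = n / sum(1.0 / t for t in temps_k)                # harmonic mean

print("arithmetic %.2f K  geometric %.2f K  harmonic %.2f K"
      % (arith, geom, harm))
```

Arithmetic, geometric, and harmonic means of identical data land on three different “global temperatures”; which one is “the” GAT is a choice, not a measurement.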

  30. Eric

    How do they know how much to subtract from the daily temperature to convert from measurements to the temperature anomaly graph?
