
Models Only Say What They’re Told to Say — The Paper!

The paper (more like a glorified note, really) “Models Only Say What They’re Told to Say” (by me) will appear in the Springer book Prediction and Causality in Econometrics and Related Topics shortly. I’m providing a PDF of the paper in advance. There is also a coronadoom example or two!

The paper has a bit of math in it, but none that anybody who paid attention in middle school would have trouble with. It is mostly, as with most of my professional work, a matter of philosophy.

Now on that subject, I want to clear up a misunderstanding some have about the necessarily true statement that “all models only say what they’re told to say.”

First, it is indeed necessarily true, the proof of which you can find in the paper, and I won’t bore you with here.

Second, it is neither good nor bad that "all models only say what they're told to say." It is just The Way Things Are. Nor is it a limitation on models or on our understanding of how the world works. Our understanding is models.

Nevertheless, it often serves as a pithy reminder to say “all models only say what they’re told to say” when people are being terrorized by a model, as they were during the (still lingering) coronadoom panic, and as they are in an increasing number of ways.

Models can be good or bad. The ones we’re terrorized with, and called “Denier!” for doubting, are the bad ones. Which is why it’s good to highlight that the model is only saying what its builders want it to say.

Here’s an example from the paper, adapted and modified from The Price of Panic.

Here is a press headline from a Minnesota news source on 13 May 2020: “Updated Model Predicts COVID-19 Peak In Late-July With SAHO Extended Through May; 25K Deaths Possible” [the source is listed in the paper]. This was of course produced during the coronavirus panic.

The article stated:

An updated model from the University of Minnesota and state's health department is predicting that COVID-19 cases will peak in late-July with 25,000 deaths possible — if the stay-at-home order is extended until the end of May…In Scenario 5 [the model scenario relied upon by the government], the stay-at-home order is extended for all until the end of May. With that happening, the model predicts that the COVID-19 peak will happen on July 27, with the top intensive care units (ICU) demand being 4,000 and 25,000 possible deaths.

Another estimate, Scenario 4, predicts that if the stay-at-home order is extended by a month into mid-July, the peak would occur on July 13 with 3,700 as the top ICU demand and 22,000 possible deaths.

In previous briefings, Minnesota’s Governor Tim Walz had asked Minnesotans not to focus on specific numbers, but rather focus on when the peaks might occur. “Modeling was never meant to provide a number,” Governor Walz said on Wednesday. “It was meant to show trend and direction, that if you social distance you buy more time.”

This is false. Comments like it were common from numerous official sources during the panic, and they were not only false but misleading. That enforced stay-at-home social distancing works was an input to all of these models. We cannot therefore point to model output and say, at least with a straight face, "See? The model says stay-at-home social distancing works. Which is why we need to implement it." This mistake was made countless times during the crisis. It is detailed at length in [16], including a discussion of how the World Health Organization made the same mistake, concluding from a high-school science project that social distancing worked [17], when, as always, this was a premise of the model.

The models reported on here were built saying social distancing reduces death. This assumption was an integral part of the models. It was not a "discovery" of the models; it was a condition of them. The models had to say social distancing worked because they started with the premise that social distancing worked.

You cannot "discover" via any model that stay-at-home social distancing worked—though that may be discovered via after-the-fact observation. You had to have built in that assumption in the first place. You knew in advance that it "worked" because that's what you told the model.

You cannot run the model, wait for the output, run to your Governor and say “The latest model says social distancing works.” If your Governor had any sense he would say, “Didn’t you write the model code? And didn’t the code say somewhere that social distancing worked?”
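
To see the circularity in miniature, here is a toy sketch of the kind of model in question. It is mine, for illustration only; it is not the University of Minnesota's code, which I have not seen, and every number in it is invented. The one number that settles the verdict on distancing, distancing_effect, is typed in by the modeler before the model ever runs:

    # Toy SIR-style model -- an illustration only, not any health department's code.
    # The verdict on distancing is an INPUT: the modeler sets `distancing_effect`.

    def run_model(days, distancing_effect, pop=5_600_000, beta=0.25, gamma=0.1):
        """Crude SIR loop. distancing_effect multiplies the contact rate:
        1.0 = no distancing; 0.6 means we TOLD it distancing cuts contacts by 40%."""
        ifr = 0.007  # assumed infection fatality rate: also an input, not a finding
        s, i, r = pop - 1.0, 1.0, 0.0
        for _ in range(days):
            new_infections = distancing_effect * beta * s * i / pop
            new_recoveries = gamma * i
            s -= new_infections
            i += new_infections - new_recoveries
            r += new_recoveries
        return ifr * (pop - s)  # cumulative infections times the assumed IFR

    print("deaths, no distancing:  ", round(run_model(200, 1.0)))
    print("deaths, with distancing:", round(run_model(200, 0.6)))
    # The second number is lower because, and only because, we set
    # distancing_effect < 1. The model cannot discover that distancing works.

The "with distancing" deaths come out lower every time, and for the same reason a calculator told to add 2 + 2 returns 4.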

The rush to embrace these models as if they were oracular says more about the goals and desires of the decision makers during the panic than it does about model making.

Incidentally, according to the CDC, as of 13 September, attributed COVID-19 deaths in Minnesota were 1,803, with the peak occurring in mid-May [18]. These represent all attributed deaths, including deaths where the individuals died of multiple causes. The error of the models relied upon to make decisions was therefore at least a factor of 12, an enormous and horrendous mistake. The restrictions in Minnesota did stay in place, but were interrupted by the Minneapolis riots, a time when social distancing was not observed. If the models had any bearing on reality, then, since social distancing did not obtain, the deaths should have been higher than 25,000.

You can read the rest in the paper.



Replies

  1. Dear Briggs. Thank you and God Bless you. It is to be wished that 200 million Americans will read, understand and remember this.

  2. Remember, propaganda is models. Models say what they are told to say.

    (Just an aside, when I read the title of the post, I first thought of airhead models in tight dresses that say what they are told to say. I think it applies to these models too.)

  3. OT: Bloody blubberhead blabbermouth bish: “deserter”!? Hysterical.

    Cement overshoes, East River, for that one.

  4. So Mark Twain was almost right. There are actually FOUR kinds of lies; White lies, Damnable lies, Statistics and MODELS.

  5. This is a silly notion. It’s a tautology. Yes, we input equations with parameters and run a program (“model”) and we analyze the output. But that doesn’t mean we know the output when we run the model. No one says “let’s assume that social isolation saves lives and make a model using that assumption to verify that isolation saves lives. Then we’ll enforce an edict to isolate based on the model.” If anyone believes that such a thing happened, there’s no hope for any common ground.

    We have models for, for example, gravitational interaction and what rocket thrust, amount of fuel, etc. are needed to put a satellite in a particular orbit, or send a bunch of experiments to Mars. They’re based on a more fundamental set of models of how we’ve observed gravitational forces to behave, how rocket thrust works in various environments (atmosphere, space), aerodynamic drag, etc.

    We (a very broad use of "we," I certainly don't do this) see what is required to achieve the desired objective. It's based on our level of confidence in our physics understanding.

    So, what about another example where there's no objective? We understand a lot (certainly not everything) about fluid dynamics. We want to understand what happens to sediment when it washes out of a river into the ocean after a flood. How far out will it go? How will it disperse laterally? What will be its vertical profile? Yes, the equations and the parameters are put in, but the model is run to see what our best understanding of quantities, flow rates, and physics will lead to. If we did this (again, "we" does not mean "me and my team") and someone said, "isn't there a place in the code/model where you tell the model that the silt will move outward approximately x meters, and that it will be detectable approximately y meters east and z meters west of the river/ocean boundary?", the answer would be "No, we told the model about the river flux, the density of water, the specific gravity of the silt particles, the movement of tides, and our best understanding of the physics involved, etc. If we knew enough to plug in x, y, and z we'd not have bothered to run the model. Of course, we'll still have to validate it against the observed facts. If it's clearly erroneous, we'll know that the properties and parameters were incorrect or that we don't sufficiently understand the physics of the situation." That's very far from "the model only told us what we told it to tell us."

  6. “No one says “let’s assume that social isolation saves lives and make a model using that assumption to verify that isolation saves lives. Then we’ll enforce an edict to isolate based on the model.” If anyone believes that such a thing happened, there’s no hope for any common ground.”

    Oh YES we can! And they do! Just ask any biased polling group or journalist, and then recall that scientists are no different. Even if the scientists want to be honest, the people paying them for results are not, and those people are the same ones paying the journalists and the pollsters and the marketing firms and the legalese departments.

    Naturally no-one is saying absolutely everyone is out there to screw you and rig reality. But it exists, and it happens, and there’s plenty of evidence that this is the case here.

    Nobody has a reason to rig a rocket model because there's no incentive to make it fail at doing what it is designed to do. But someday, if society "progresses" crazily enough and the environmentalists believe all those rockets up in space are endangering the planet, which is killing the polar bears, then you will start to see rigged models designed to reduce the use of rockets and demonstrate that rockets are a danger to our planet and our health. The gravity and thrust and fuel are irrelevant to putting the rocket up there, except where they can feed the outcome that the rocket's going and being up there is bad for us.

  7. Johnno, you're suggesting that polling groups doing polls and journalists writing articles or doing "investigative reporting" are analogous to the process to which Dr. Briggs refers and which I mentioned? That doesn't make sense. Sorry, I don't believe that the pols go to epidemiologists and say, effectively, "we need you to make a model that shows that isolation saves lives. Payment for the model is contingent on providing the results we described." I don't put any group on a pedestal but, without evidence, I'm not willing to accept that.

  8. No one says “let’s assume that social isolation saves lives and make a model using that assumption to verify that isolation saves lives. Then we’ll enforce an edict to isolate based on the model.”

    No one says this but they do this. Certainly as our host points out and I can verify, officials in Minnesota did exactly this. At every point their analysis was based on their precious models, but at every stage their models were wrong. Walz claims success since his policies “allowed us to” do better than his models, but this assumes that the models were an accurate reflection of what would happen without his policies (even though they weren’t accurate in the scenarios best matching his policies!) In his press conferences he constantly pointed out whether a model predicted a spike, or whether it said that his new policy would instead save lives. This is exactly how policies were actually carried out. He may not have said “we will do this because our model says it will save lives, when it was created with the assumption that doing this will save lives” but that’s what he actually did.

    We have models for, for example, gravitational interaction and what rocket thrust, amount of fuel, etc. are needed to put a satellite in a particular orbit, or send a bunch of experiments to Mars. They’re based on a more fundamental set of models of how we’ve observed gravitational forces to behave, how rocket thrust works in various environments (atmosphere, space), aerodynamic drag, etc.

    You are making a common error of physical scientists: assuming that because we can discover regular physical laws which allow us to predict what happens in certain constrained situations, and so can make good models there, all models must be built on a similar foundation.

    But consider what would be necessary to accurately model the spread of a new disease from physical laws: You would have to know how people gather or stay apart, including in response to new policies, news of the disease, supply shortages from the disease, etc. You need to know how much distancing actually affects the spread of the disease, which will depend on whether people are outside or inside, ventilation, the type of activity they are involved in, their natural resilience against the disease, etc. If you throw in masks you must know the material in the mask, how properly the mask is being worn (is it a snug fit, is it fully covering the nose and mouth, etc.), how often people will get sick of wearing them and sneak them off, etc., and that's without getting into the properties of the disease itself. And all this must be done for a disease about which, at the time the models were created, barely anything was known (even whether it was properly airborne, or spread through droplets from sneezes, or primarily spread in some other fashion, was unknown).

    But we haven’t even gotten into the effects of increased hospital demand, disruptions to supply chains, public panic, depression, etc.

    There are no set of laws that will let you accurately determine the behavior of an entire state under all these factors. So you either pretend those factors don’t exist, which gives a model so removed from reality as to be useless (kind of like saying “we will use this model that says how objects move in a weightless vacuum” to try to predict the motion of an underwater object in a current surrounded by schools of fish), or you must arbitrarily decide that your measures will have a certain preventative effect (the IHME explicitly did the latter with respect to masks.)

  9. I wish you would call models what they really are: SOFTWARE.

    SOFTWARE is horribly, inextricably, manifestly bound to only produce what it is supposed to say. In fact, it is more difficult to get software to say the correct thing – and VALIDATE the output – than it is to get it completely wrong. 99% of software is NEVER properly validated. It’s too difficult and expensive.

    This is a lie that is at the center of all the government discussions of privacy and state/private partnerships. Everyone points to software as if it is something monolithic and completely trustworthy. That’s garbage.

    It’s the same as the supposed “safe guards” put in place to limit companies or governments looking at your data. It’s a blatant LIE. Anyone with admin access can look at anything they want. “Oh, this database has an audit log. okay `DELETE FROM dbo.AUDIT_TABLE where …`.” Just like Tucker Carlson found out: Safeguards are not worth the breath it takes to say the word.

  10. Russell Haley: I disagree. Models may be (but are not necessarily) run on software but they aren’t software. Just as a single example, F=ma is a model. It works well in many circumstances, not well at all in others (without, for example, relativistic corrections). But I can (and often do) run the model with a pencil and a piece of paper. If I’m rushed, I may use a slide rule (I have several scattered around). But, unless you have a definition of software so broad as to be useless, F=ma is not software.

    As to your specific statements regarding validation, I’d probably not go quite as far, but I generally agree.

  11. From Briggs paper:

    ”We might suppose that a modeler (or “researcher”) has claimed xt is the number of cases of splenetic fever after being exposed to daily news broadcasts about coronadoom.”

    [hagfish bagpipe laughter]

  12. Even the model F = ma cannot say anything beyond what it is told to say.

    That is, the model will never allow you to work with any situation where force is not the product of mass and acceleration, because the model is defined precisely by that assumption. (Where here I mean “assumption” in a mathematical sense, i.e. an axiom taken to be true for the sake of a model and which cannot be verified by mathematics. You may have a good reason for believing in this law outside of the model and mathematics, but from the perspective of mathematics it is still a starting assumption.)

    Thus the model will never let you predict situations where relativistic corrections are necessary. From the viewpoint of the model no correction is necessary, F = ma is just a fact.
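
    For concreteness (this is standard physics, not something from the paper), the correction that lies outside the model is

    $$F = \frac{d}{dt}\left(\gamma m v\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}},$$

    which collapses back to $F = ma$ only when $v \ll c$. Nothing inside $F = ma$ itself signals that this other regime exists.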

    Note too that the model will not protect you from misusing it. For example if you do not consider forces arising from friction or gravitational pull, the model will do nothing to stop you.

    All that isn’t to say that the model isn’t useful or that it can’t make accurate predictions in restricted situations. Obviously it can. But this is because of the painstaking experimental and theoretical work done to get to the model.

    Returning to disease models: can any portion of disease transmission be known with as much certainty as "F = ma" is, even in a restricted situation? Obviously not. Do the disease models then produce results with huge uncertainty bounds which admit that no useful prediction can be made? Verifiably they don't. So how do they function? They must make certain assumptions (ex. "a lockdown lasting x number of weeks will reduce transmission by a factor of y") which are then treated by the model as absolute, just as F = ma is treated in Newtonian physics. But these assumptions never have as much evidence for them as F = ma does, and often do not have any justification whatsoever beyond the modeler saying "it seems plausible."

  13. Rudolph Harrier: Yes, trying to determine whether social isolation will decrease or increase or have no effect on the spread of Covid-19, for all the reasons you mention is an extraordinarily complex problem, especially in comparison to putting a satellite into orbit.

    And perhaps you’re an expert. I’m not. I have tracked a lot of statistics w.r.t. Covid-19 in the U.S., total recorded deaths and total recorded deaths by age cohort going back a few years. And I heard yesterday that drug overdoses in 2020 increased by some 21,000 or 30%. Some of those may be attributable to Covid-19. But, speaking at the 30,000 foot level, there were at least some 450,000 deaths more than would be expected using the (remarkably stable) deaths for previous years, adjusted for population change. So first: in my opinion, it is unequivocally demonstrable that Covid-19 resulted, for whatever reason, in a significant increase in mortality. I’ll agree with Dr. Briggs (and my relatively amateurish analysis shows) that the greatest effect, by far, was among the elderly. I’m 67, but don’t characterize myself as elderly! But still, lots of people clearly died early.

    So, it was worth trying to determine what measures might reduce the impact of the virus. The evidence was less compelling, but still very suggestive, that gathering in crowds caused upticks in infections, hospitalizations, and death. So, it was not unreasonable to think that keeping uninfected people away from infected people would be helpful. I’m not here to support the actions of the governments or to say they were morally defensible, I’m only contending that it’s not loony to think that it’s possible that isolation would help.

    As to masks, it’s certainly true that the virus’ size is far smaller than the mesh openings in cotton masks. But, at least early on, there was the hypothesis that the virus cohered to moisture in exhalation and a mask can definitely stop a significant portion of moisture (droplets) from a mask wearer’s breath. So (again, without taking a position on the legality or morality of mask mandates) it was not unreasonable to theorize that mask wearing might slow the contagion.

    So, how would you go about validating or falsifying the hypothesis that isolation and/or mask wearing would slow the spread? I took a couple of cracks at it myself, but my knowledge of epidemiology and programming limited my efforts. Nevertheless, trying to determine, through inputs to epidemiological and social models, whether some measure or combination of measures might be effective in slowing the spread of the virus was a valid goal. One would consider the magnitude of each effect and the accuracy of data available for that effect, and make decisions as to what to include and what to ignore. Definitely, as I implied above, if the facts on the ground demonstrate that the model failed, the model must be discarded.

    I am a partner in and run an engineering consulting and materials testing firm in Southern California. There were a lot of models and attempts to experimentally validate models for how concrete structures and steel structures behave in earthquakes. This is also a complex problem with multiple influencing factors such as the exact structural configuration of connections, the material properties, the geometry of the structure, the energy of any particular earthquake at the structure’s location, the seismic spectral response, the duration of the earthquake, the physical character of the foundation and the soil or rock that supports the structure, the locations and sizes of the gravity loads on the structure, and others. And, as it turned out, the 1971 San Fernando earthquake demonstrated that the concrete structural frame models were lacking and the 1994 Northridge earthquake demonstrated that the steel moment frame models were lacking. Unfortunately, the only real experiments are actual earthquakes. Fortunately for the users of structures and for the economy, but unfortunately for structural model validation, the experiments are relatively rare. Nevertheless, structural engineers continue to run simulations, both in physical labs and on computers, to achieve a better understanding of the effects of earthquakes on structures and on what can be done to minimize damage. The same could be said about pandemics.

  14. So first: in my opinion, it is unequivocally demonstrable that Covid-19 resulted, for whatever reason, in a significant increase in mortality.

    It is only unequivocal in the sense of chronological succession. That is, that COVID-19 happened first, higher deaths happened later. (And if you want to be pedantic even that is not really unequivocal, considering how sloppy some governments are at reporting deaths, but assume for the sake of argument that the deaths are accurate.)

    We can’t say definitively that more people died due to COVID-19, and we definitely cannot say that it would have been worse without intervention. This is because, as you note, we do not know the exact reason for most of the deaths (and even the deaths directly attributed to COVID could have many contributing factors.) They could be due to COVID directly but undiagnosed, they could be due to secondary factors (ex. COVID weakens immune system and then the patient dies from another infection), they could be due to fallout in the wider society (ex. COVID deaths reduce the workforce and thus disrupt supply lines, which in turn kills people due to lack of necessary goods.) But they also could be due to effects from the lockdowns (ex. direct effects like lack of access to hospitals and indirect effects like suicide through depression.) They could even be due to some unrelated cause. Remember that “excess” mortality is not an actual measurement but it is in itself a model. All we can say with any certainty is whether the number of deaths increased. We cannot say that they increased “too much” without a model of what the “correct” number of deaths would have been if all things had been the same.

    Keep in mind too that it is especially unclear what excess deaths for 2021 should mean. Should we consider the “expected” value we might predict in 2020, or in 2019? If we predict based on the conditions in 2020 then having COVID be present will be baked into the calculations, which might not be useful when trying to analyze how much effect COVID is having. But if we base our calculations on what we might predict in 2019 we will ignore the large excess deaths from 2020. Since “excess deaths” is meaningless without a model the model itself cannot tell us which is the better thing to do.

    That's not to say that models will not make further assumptions. For example, the current statistics define all "excess" deaths as COVID-19 deaths, which is where they get their current headline-grabbing numbers. So their model tells us that excess deaths are due to COVID, but only because the model was told to say that.
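
    To see how much the baseline matters, try a toy calculation (the numbers are invented for illustration; this is a sketch, not anyone's official methodology):

        # Invented yearly death counts for a hypothetical region -- illustration only.
        deaths = {2015: 41_000, 2016: 41_600, 2017: 42_100, 2018: 42_700,
                  2019: 43_200, 2020: 49_500}

        observed_2020 = deaths[2020]

        # Baseline model A: average of the previous five years.
        base_avg = sum(deaths[y] for y in range(2015, 2020)) / 5

        # Baseline model B: extend the 2015-2019 linear trend one more year.
        slope = (deaths[2019] - deaths[2015]) / 4
        base_trend = deaths[2019] + slope

        print("excess vs 5-yr average:", observed_2020 - base_avg)    # 7380.0
        print("excess vs linear trend:", observed_2020 - base_trend)  # 5750.0
        # Same observations, two different "excess" totals. "Excess" is a model
        # output, not a measurement: change the baseline, change the number.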

    The evidence was less compelling, but still very suggestive, that gathering in crowds caused upticks in infections, hospitalizations, and death. So, it was not unreasonable to think that keeping uninfected people away from infected people would be helpful.

    A compelling case can also be made that shutting down an economy for extended periods of time will cause devastating economic damage, which will in turn lead to great harm to the public both directly (ex. no money to pay for rent and food) and indirectly (ex. suicide from being depressed by the dismal state of the country.) A compelling case can also be made that denying hospital care to those with chronic conditions would cause great harm, even death (especially from lack of screening for things like heart disease and cancer.)

    It isn’t obvious in the abstract which would have a greater harm: closing down or not closing down. But the deciding factor was entirely models. People modeled the danger that would happen if we didn’t close down, but not the danger that would happen if we did close down. If there had been no models of disease spread but tons of models of how many people would starve from a lockdown, we wouldn’t have locked down.

    Note too that the models don’t even need to be accurate. The models for deaths in particular were ridiculously bad, usually off by a factor of at least 10. But the modelers have kept their jobs and will be around to model for the next crisis. Of course, they will only make models that support the position that politicians want to take.

    So, how would you go about validating or falsifying the hypothesis that isolation and/or mask wearing would slow the spread?

    The question is too complicated to be realistically answered by modern science. You might as well ask whether burning tires in one specific city in the country will lead to storms appearing a year later in another specific city. The system of weather is too chaotic to make such predictions with any accuracy, and humans are arguably more chaotic than the weather.

    You can make some very broad predictions on what people will do by observing human nature, but these will be based on heuristic guesses instead of more certain physical laws, and even the best guesses will often be completely off base.

    And that’s still ignoring the fact that a political decision is not a scientific one even when we CAN predict things accurately. It ignores questions of tradeoffs as well as questions about whether an action can be justified as a proper use of authority. Presenting matters like this as matters of science alone only leads to a cargo cult following the decrees of scientists, which are necessarily political but which will be treated as if they were scientific. (This is of course the reality we live in now.)

  15. On a side note, I did some searching on excess death numbers to try to verify the 450,000 number. Nearly every site I went to only talks about excess deaths in terms of them all being COVID deaths. The push is pretty heavy to abandon “reported” deaths (which are already easy enough to manipulate) and go entirely to “excess” deaths. This will make COVID deaths entirely a function of the model used for “excess” deaths, so that they can become anything whatsoever as needed.

    We might as well play the same game. Maybe all “excess” deaths in the last few months have been related to complications with “the vaccine.” Who’s to say?

  16. Models aren't software.
    Software can be used in modelling.

    Models can be simple equations, as pointed out above and endlessly elsewhere through the years.
    Complex models use software programmes and computers.
    Sheri,
    “Airheads” have more fun….

  17. This amateur would approach this worthy topic slightly differently, for there is a weakness.

    I love the abstract. It is amazing, and critical: all that models can rightly do, because that is all they can honestly be made to do, is predict Y | X.

    Models can crank and (sometimes) provide ‘unanticipated’ results. But all results can only be implications of the model. This is the meaning of “models can only say what we tell them to say.” The implications of the model are the implications of the model. Retain the model, and the implications remain the same. Change the model, and (possibly) change the implications. But is there a reason we should be amazed if we didn’t initially see all the implications ourselves?

    However, Minnesota Governor Tim Walz made a decision to trust authorities, who acted as if their model was indeed predictive and had already “demonstrated independent success.”

    The weakness is that “demonstrated” forever remains a decision. A person, a moral agent, must decide when the evidence is “independent” enough and probative enough for the particular circumstance.

    For me — my decision, so to speak — it is that the note insufficiently emphasizes the inherent (and precarious) role of moral agency in such decisions.

    p.2
    Models per Se ==> Models per se

    p.3
    “They may or may [not] explain the cause of the movement”

  18. Robert M Ryan –

    Even in Physics, mathematics can be fudged. And models can explain two completely different realities and still work to account for the observations.

    This was the precise issue between the Keplerian heliocentric and Tychonic geocentric systems. And that’s why in the end Einstein’s General Theory of Relativity had to have both as equivalent coordinate systems, and even when NASA disagrees with a geocentric cosmos, they still use geocentric reference frames to make their calculations easier.

    Just because you can have some models like F=MA that work, and nobody finds this controversial, doesn’t mean other models can’t exist, especially when you have many unprovable and metaphysical assumptions built into the variables.

    For example, consider the Drake Equation. For an explanation I'd recommend Michael Crichton's essay 'Aliens Cause Global Warming', where he exposes science, and especially mathematics, as a realm where any bullshit can be passed off using impressive-looking equations. Models can be worked to match the conclusion you're looking for by working ass-backwards and introducing any ad hoc construct you want due to religious dogma; whether they correspond to reality is a whole other story.

  19. Joy,
    Models aren't software.
    Software can be used in modelling.

    In a way, models are like software. Software us the completion of the wiring of a computing machine. As the physical part of the machine as general purpose, the software us effectively the actual design of the finished machine. That machine will do whatever it is told even if the programmer is unaware of what is was told leading to surprising behavior.

    In much the same way, models have built in assumptions and can only produce answers constrained to those assumptions. If the assumptions are wrong then the model will produce wrong answers.

    The only way around this is to verify the models against future cases. This is done in engineering but not so much in epidemiology and other soft sciences.

  20. I’m going to have to stop posting with my phone. The spelling guesser sometimes insists on a particular spelling and hard to convince otherwise. As an example, I try to type “the” but get “then”. Even if I edit to delete the “n” it often reappears.

    In the above, “it” was transformed to “is” and I don’t know why. Annoying.

  21. Just seen a classic case of models being used to justify actions. Here in WA, the Uni of WA has released results which show that, as a result of "modelling", the WA experience is better than the NSW case. However, it ignores the state of Victoria, which locked down severely and ended up with the majority of Covid-attributed deaths in Aus.
    Another example of models producing the results their makers want. Australia now is on the defensive, not learning from Sweden or Florida.
    In Australia the average age of a "Covid victim" is greater than the life expectancy in Aus – go figure.

  22. Dav,
    Yes, I understand; I was referring to the use of the description "software", as if "just software" somehow makes any difference, particularly to professional mathematicians. Engineers appear to be better adjusted to the strengths and weaknesses of models. The non-initiated sometimes seem het up, unnecessarily, in my view…those of us who are not modellers can be easily convinced into blaming the secondary problem.

    After all, if powerful leaders want to find a way to promote bad policy, introduce rules and regulations based on bad ideas or whatever, they will always find a way, so the modelling is a secondary problem. It’s the assumptions and premises that people argue about.

    Re the typos? I do still believe there's switchable software that lives in comment boxes, and buglies that make the rounds, which might well alter meanings or at least make people look silly. Have decided I will get a phone, having spoken to an RNIB totally blind man who agrees with me that apple voiceover software is rubbish but that the phone works really well. Sometimes, with the spellings, it offers your original mistake! If you're not concentrating you just 'agree' to the offerings of the computer. It's getting so that the size of the letters I need is larger than even a smartphone would fit, so it's audio from now on. Just as I always used to, with microsoft and freedom scientific software. You can hear spelling mistakes and typos, clear as day.

    What annoys me the most are the full stops, randomly inserted. Have seen them appear elsewhere, so I know it's nothing specific I'm doing.
    Thanks for the clarification, though; you won't know how helpful some of those comments can be.

  23. Oops,
    I've inadvertently discovered how to hide text in comments!
    I wrote:
    {“” (no spaces or speech marks) and “”}
    switchable buttons.
    Hope I didn’t just press the button on the portcullis or launch something

  24. …still coming out wrong, I know what I mean, greater than and less than symbols….

  25. Joy,

    The greater than and less than symbols have special meaning to blog software. They are editing markup flags used by HTML. They can be inserted with special symbols which I think are “& lt ;” and “& gt ;” using lowercase sans spaces and quotes.

    Trying it: lt=< gt=> If they're there, then it worked.

    Another way is to use the ampersand followed by "#" and two digits, then ";". That's what they are in HTML, but the blog software isn't really HTML.

    & #38 ampersand
    < #60 less than
    > #62 greater than

    I'm pretty sure it's my phone. Never happens on the PC. Also happens when commenting in other places using the phone. Just had a devil of a time entering "lt". The phone insisted on making it "Lt". After about a hundred tries it suddenly conceded and at least presented them as options.

    Part of the problem is my fingers are large compared to the keyboard and I often hit adjacent keys.

  26. Thanks Dav, for the link

    Will have a look and decipher. You see I would never keep trying because in my mind the computer’s doing the same calculation over and over and my keeping on with the same command won’t render anything new, so it just shows that computers can be capricious!

  27. DAV

    Yes

    Thank You – I often disappear stuff

    And I like the nonbreaking space (let’s try)

    &nbsp&nbsp&nbsp&ltDon’t Disappear&gt

    Let’s Try

  28. I am just guessing that Robert is most likely a believer that says two plus two could equal five. Mr. BRIGGS, thank you for your blogging; though there are a few things you write that are above my paygrade, I comprehend the gist of it all. I quite enjoyed catching up today, I started with your thoughts on the Holy Ghost, quite good I must say.
    Thank you again for your clear headed thinking.

    From the western part of the mitt,
    Michael S.

  29. “all models only say what they’re told to say.”
    I disagree. This may be true for climatologists, but definitely not for engineers. We rely on models to predict behavior in systems that have not yet been thoroughly explored in real-life, primarily because we haven’t actually built them yet. It’s a lot cheaper to build and break models than it is prototypes!

    If a model only said what we told it to say, it would be useless. Rather, models (hopefully) encapsulate the mathematics relevant to a particular problem in such a way as to operate as the real-world operates, hopefully closely enough that the model gives the same results as the actual construction would.

    Now any good engineer realizes that models have circumscribed utility and that they do not and cannot completely simulate reality in every aspect. Also, of course, any useful model must be tested and retested against real-world results to find flaws. Nonetheless, engineering as a field cannot operate without models. They’re the entire basis of the discipline.

    The problem with “climate models” as used by the anthropogenic-climate-doomsayers is not that they “only say what they’re told to say” (not that this isn’t true!) but that they have never been validated against reality. Of course doing so would take at least a hundred years, given that “climate” is really just the 30 year average of the weather, but nonetheless it has never been done. And, BTW, back-predicting known data is really nothing more than curve-fitting; the only valid results are when a model is compared against completely new situations not used in its creation.
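
    The curve-fitting point is easy to demonstrate. A minimal sketch (invented numbers, plain numpy) in which a high-order polynomial "back-predicts" the training years essentially perfectly and then falls apart on held-out years:

        import numpy as np

        # Invented yearly observations: mild trend plus noise -- illustration only.
        rng = np.random.default_rng(0)
        years = np.arange(2000, 2010)
        obs = 50 + 0.5 * (years - 2000) + rng.normal(0, 1.0, years.size)

        # "Back-predict": fit a degree-7 polynomial to the first 8 years.
        train_x, train_y = years[:8] - 2000, obs[:8]
        coeffs = np.polyfit(train_x, train_y, deg=7)  # 8 points, degree 7: interpolates

        in_sample = np.polyval(coeffs, train_x)
        out_sample = np.polyval(coeffs, years[8:] - 2000)

        print("max in-sample error:    ", np.abs(in_sample - train_y).max())   # ~0
        print("max out-of-sample error:", np.abs(out_sample - obs[8:]).max())  # large
        # A perfect hindcast, a useless forecast: hindcast skill alone validates nothing.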

  30. This is mostly an observation, as I highly respect your knowledge of statistics and I agree with your POV on lots of things, but the paper you wrote on models…did that thing get proofread? Page 2: “It’s form was fixed by the scientist or researcher”.

    Errors like that do not inspire confidence (albeit only a little, but still). The second sentence in Section 1 of the paper doesn’t, either.

    Given the opposition, we have to be on our “A game” at all times. Here’s to hoping I am as I type this post. 🙂

    Again, big fan, but it’s hard to take the paper seriously with issues like those. I really urge you to have the paper proofread and edited again before publishing.

  31. UA,

    Yep, it hasn’t been. You’re seeing the version before Springer edited it, to avoid the paywall.

    My enemies have long been known to insert many typos in my work. Dastardly devils.
