
If All Models Only Say What They’re Told To Say, Can We Learn From Them? Mask Model Example

From reader Goncalo Dias comes this excellent question:

I have just sat through your seminar on (super-)models. [LINK]

I have a quick question: if you assume some fluid mechanical properties and some extensive properties of fluids, virus particles, etc, and run a calculation about propagation, even though everything is in the assumptions+data+model, would you say that I could find out whether masks work? Would it be untangling the knowledge from the premises to the conclusion?

Rephrasing, would it in this case be said that I could learn something from models?

To re-re-reiterate, yes, all models only say what they’re told to say. Which is not intrinsically good or bad.

Now even though this is so, it doesn’t mean the modeler knew everything he told the model as a whole to say, even though he told every part of the model what to say. In other words, the model may be so large and complex that the modeler will not foresee every possible output given every possible input.

So, as you suggest, he might very well understand how the various parts of the model work together, how one stage causes changes in the next, by varying the inputs and studying the model’s innards and outputs. And thus learn from the model.

Like you say, you could have a large, sophisticated model of “fluid mechanical properties and some extensive properties of fluids, virus particles”, air transport, mask type and permeability, moisture content of the air and mask, viral load, state of disease (which is related to virus shedding), test used to characterize disease (false positives and negatives accounted for; Ct level in PCR tests, etc.), mask cleanliness, hours masks were worn, locations where masks were worn, how far apart people were and the population density of these places at the various times when masks were on and off, and so on and so forth.

This model requires a ton of inputs, all of which must be assumed. The model, as said, can be run under various assumptions, and you could study its results. Let us suppose you discovered that when the moisture content rises a certain known amount, ceteris paribus, model-based mask efficacy (judged by infection rate, or eventual death, or whatever else you pick) decreases by such-and-such a level.
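
Here, purely for illustration, is a minimal sketch of what such a model might look like (Python; every function, number, and functional form below is an invented assumption, not any real mask physics). The point is that everything the model can ever output is fixed the moment the assumptions are written down:

    # Toy mask model: every number and functional form below is an assumption,
    # so every possible output is a consequence of those assumptions.
    def infection_probability(viral_load, permeability, moisture,
                              hours_worn, distance_m):
        # Assumption: a wetter mask filters worse (moisture in [0, 1]).
        effective_permeability = permeability * (1.0 + 0.5 * moisture)
        # Assumption: exposure grows with viral load and time, shrinks with distance.
        exposure = viral_load * hours_worn / (1.0 + distance_m ** 2)
        # Assumption: a simple saturating dose-response curve.
        dose = exposure * effective_permeability
        return dose / (1.0 + dose)

    # "Run the model under various assumptions, and study its results":
    dry = infection_probability(10, 0.05, moisture=0.0, hours_worn=8, distance_m=1.0)
    wet = infection_probability(10, 0.05, moisture=0.9, hours_worn=8, distance_m=1.0)
    print(f"dry: {dry:.3f}  wet: {wet:.3f}")  # wet comes out higher, because we said so

The moisture effect, whatever level it comes out to, was put there by the assumptions (here, the 0.5 in the first formula).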

Now even though the model was bound to say that, because you made it say that, it doesn’t mean you understood you were making it say that. Because, again, of the complexity.

Yet this does not prove that masks work when dry. Nor does it prove masks don’t work when wet. Because both of these, as it turns out, are assumptions, or directly deducible consequences of the assumptions you made.

In order to prove masks work, as you say they work, you have to duplicate, in real life, the same set of assumptions you made, and check the actual outcome (infection rate, deaths, or whatever). If these match, you have evidence your model is working with this set, and this set only, of assumptions. Obviously, you have to check the range of all possible assumptions with reality to prove you have a good model.

Even so, this is not necessarily proof you have identified all the proper causes.

There are all kinds of reasons your model could work well even though you have misunderstood the causes. For one example, some of the causal connections in your model might be downstream or upstream proxies of real causes. Or you may have identified an acausal correlation (longtime readers will recall the example of people dying by strangulation in bedsheets and GDP). And so on.

So in order to get at real causes, you also have to rule out other possible causes of the observed results. What might these be? No universal list exists. After all, if we knew all possible causes, we wouldn’t have to model. We’d know the causes. Even if we think we might have listed all possible causes, we might have missed some.

Getting at cause is not easy, and becomes harder—and more tedious and expensive—the more complex the situation.

This is why people like statistical shortcuts. They believe if the model passes a “test”, by comparing it favorably to reality, then they don’t have to bother checking further for causes. Oh, sure, everybody knows these correlational tests don’t prove causation—it’s a common adage after all—but everybody believes that, for them, correlation is causation.


Replies

  1. They built a model of the world where they have god-like control of all outcomes. But! — first they need to get god-like control of all inputs. They’re working on that now.

  2. If All Models Only Say What They’re Told To Say, Can We Learn From Them?

    Isn’t the most obvious answer:

    We learn what it was that we told them to say

  3. “decreases by such-and-such a level.
    Now even though the model was bound to say that, because you made it say that”

    He specified the form of a reasonable model, sure, but he didn’t make it say that, because the specific such-and-such number, call it K, is being estimated from the data. Beforehand, in fact, you didn’t even know if it would be a decrease or an increase or no change. (A toy sketch of this estimation step appears at the end of this comment.)

    And ‘all possible inputs and outputs’, ‘universal list’, and “prove” are very silly requirements for scientific work. Unlike religion (‘the god I believe in did it and that’s that’), we just talk about evidence for or against, always subject to change as knowledge from better experiments comes in.

    Justin
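
    A minimal sketch of that estimation step (Python, with made-up numbers; the only point is that the slope, call it K, comes out of the data rather than being typed in directly):

        # The analyst chooses the form y = a + K*x; the value of K is then
        # estimated from the (here, invented) data by ordinary least squares.
        xs = [0.1, 0.3, 0.5, 0.7, 0.9]        # moisture content (made up)
        ys = [0.80, 0.74, 0.69, 0.61, 0.55]   # "mask efficacy" (made up)

        mean_x = sum(xs) / len(xs)
        mean_y = sum(ys) / len(ys)
        K = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
        a = mean_y - K * mean_x
        print(f"estimated K = {K:.3f}, intercept a = {a:.3f}")
        # Whether K comes out negative, positive, or near zero depends on the data.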

  4. Yes, people with little hands-on experience with science often fail to understand how modeling works and what it’s for. Briggs’s comments don’t really make much sense. This is a recurrent theme in amateur climate science skepticism of the sort that crops up often on this website, where the importance of models is wildly overemphasized, and their predictive record falsely represented as well. Here is a video that brings this home a bit:

    https://www.youtube.com/watch?v=ZY-pO_zTVvU

  5. Michael “hockey stick” Mann, and a video of opinions, are exactly what is needed to rebut any criticism of AGW.
    Bravo. Science by consensus. Yay!

  6. @Lee Philips
    Do you hold high respect for people who just offer judgements and borrowed opinions and nothing else? What about some valid arguments? What exactly doesn’t make sense in Briggs’s comments? Please explain.

  7. Are YOU kidding me?

    LEE actually linked Michael Mann? I’ll check it out later

    Lee – I thought you kept your finger on the pulse of skeptics

  8. Engineering, of course, intrinsically depends on models, which are almost always mathematical equations that boil down the known operation of some physical process (“F = ma”, for example). Science is the art of making models that allow us to predict the operation of certain parts of the universe; engineering is the art of using those models to make things that work to specification. The difference between engineering and craft is that engineers use models to verify the performance characteristics of a device before they actually build it, so as to be reasonably sure that it will perform to specifications and not waste time and money (and lives). Obviously, for engineering purposes a model is useless unless it performs to some reasonable level of accuracy and precision. “Good enough” is acceptable, so long as the error is bounded.

    I disagree with the statement that “all models only say what they’re told to say.” All models are bounded and describe the part of the universe for which they were designed. But engineers use models because we don’t know what the answers are! However, we can be reasonably sure, from testing and experience, that the models being used accurately capture the underlying physics. Of course they are not of any use outside of that area, although the area in question can vary widely depending on the equations being used. PSPICE circuit analysis, interestingly enough, works quite nicely for fluidics and heat transfer, since the underlying equations are very similar.

    The current crop of climate models, with the possible exception of a Russian model (https://wattsupwiththat.com/2019/07/02/climate-models-are-fudged-says-climatologist-video/) are bad not because they only say what they are told, but because they do not accurately capture the underlying physics of the situation. They cannot accurately predict future results and, in fact, have never been validated. They are political tools, not scientific ones.

  9. Paul Blase:

    You have some interesting points (especially about off-label uses of PSPICE), but I think you are misinformed about climate models. Prediction of increase of global mean temperature as a function of atmospheric CO2 concentration (for example) was accurately done in 1967¹, and models since then have only gotten more accurate².

    However, for those who have a political objection to reality, repeating a mantra like “models only say what they’re told to say” must bring some comfort.

    [1] https://www.forbes.com/sites/startswithabang/2017/03/15/the-first-climate-model-turns-50-and-predicted-global-warming-almost-perfectly/?sh=3afed2ad6614

    [2] https://www.theguardian.com/environment/climate-consensus-97-per-cent/2016/jul/27/climate-models-are-accurately-predicting-ocean-and-global-warming

  10. Paul,

    Because all models only say what they’re told to say does not imply all models are bad. But it is indeed true they’re all slaves to our intellects. Like the article says, our intellects may be insufficient for large models, and we may not comprehend their totality, but that does not mean the model isn’t saying precisely what we made it say.

    If you doubt this, create a model that says other than what you told it to say. This is a serious challenge.

    (I need to come up with short word or acronym for this.)

  11. Lee,

    Will you be sending us code, or equations, for the model that says what you didn’t tell it to say?

  12. “Will you be sending us code, or equations, for the model that says what you didn’t tell it to say?”

    Look up any scientific paper that uses any kind of model. Mine, if you like. They all incorporate models that say what no one told them to say. That’s why we construct models. So we can find out what they say.

  13. Lee,

    Okay, send us one. And — help us — show us where it starts to say what you didn’t tell it to say. Thanks.

  14. Grab literally any paper of mine or someone else’s from Google Scholar or anywhere else. Look at any of the results in the paper. Now you are looking at something a model said that no one told it to say. If you want to try to make your point more refined, or clarify what you are trying to say, try something besides repeating yourself. It’s possible we don’t disagree, and the confusion is just linguistic. But to find out you would need to break free of your childishness.

    For example, everyone understands that a model, whether conceptual, mathematical, or computational, has all its possible results contained within its definitions and assumptions. The Pythagorean theorem is already, in that sense, there, as soon as Euclid’s axioms are set down. But it still needs to be discovered. No one describes that as “telling” geometry to produce the Pythagorean theorem. There is no geometry skeptic website that claims that Euclid “fudged” his model.

    Models are just that—an attempt to model the physics, or whatever it is, that we think is relevant to whatever we are trying to figure out. We don’t “tell” the model what answer to get. We don’t know until we use the model to make a calculation.

  15. Lee,

    Literally tell us which one and literally point to the place in it where the model you created says other than what you told it to say. This should (literally) be an easy task for you. Thanks.

  16. Lee,

    Your reluctance to complete this (literally) simple task has been noted.

    Everybody else.

    Lee, and others, confuse their ignorance of the complexity of their models’ outputs with the simple truth that the models’ outputs are there only because they told the model to make these outcomes.

    If you doubt this, then please do try to create a model which produces output you didn’t tell it to say. Not that you didn’t know it would say. But that you didn’t tell it to say. If your model has components C_1, C_2, …, C_p, with inputs I_1, I_2, …, I_m, then it will have certain outputs. If you swap out C_i for something else, the outputs will change at some combination of inputs. All because you choose both the C and the I. Update: Your failure to anticipate the outputs for whatever combination of C/I you choose does not mean you didn’t make the model say “these outputs”. (A toy sketch of this appears at the end of this comment.)

    This appears, to me, to be a clear argument. Obviously it is not to everybody. So if anybody else has a better way of putting it, help us out.
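
    As one minimal sketch of the point (Python; the component names and every number are invented purely for illustration): swap a component or an input and the outputs change, because the modeler chose both.

        # A model is a chosen chain of components C_1, ..., C_p applied to
        # chosen inputs I_1, ..., I_m.
        def c1_exposure(inputs):
            # Component C_1: exposure grows with viral load and hours of contact.
            return inputs["viral_load"] * inputs["hours"]

        def c2_filter_linear(exposure, inputs):
            # Component C_2 (version A): the mask removes a fixed fraction.
            return exposure * inputs["permeability"]

        def c2_filter_saturating(exposure, inputs):
            # Component C_2 (version B): the mask stops helping above a threshold.
            return exposure if exposure > 50 else exposure * inputs["permeability"]

        def run_model(c1, c2, inputs):
            return c2(c1(inputs), inputs)

        inputs = {"viral_load": 10, "hours": 8, "permeability": 0.05}
        print(run_model(c1_exposure, c2_filter_linear, inputs))      # 4.0
        print(run_model(c1_exposure, c2_filter_saturating, inputs))  # 80
        # Different outputs, not because the model "discovered" anything new,
        # but because a different C_2 was chosen.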

  17. Paul Blase makes an excellent comment
    ~~~
    Models of the modeller’s thoughts also only say what you imagined them to say. The word for it is projection.

    ~~~
    “state of disease (which is related to virus shedding),”

    The disease state is not directly related in any measurable or linear way to the ‘viral shedding’.
    What is meant, presumably, is that the acute or chronic stage of the INFECTION is an indication of the likely shedding of virus.
    This has no bearing on whether or not (a mask per se) works to prevent spread of the disease. That’s a red herring.

    Air flow and virus ARE captured in masks designed to do just that. Claiming they aren’t is inexcusable.

    Confusion about where virus spreads best, (i.e. in the home in family units), is irrelevant to the matter of masks themselves.

    There’s a lot of pontificating about “papers”, which is a useful distraction from what is known in simple biological terms about respiratory viruses, air flow and human behaviour regarding masks.
    The minutiae are being used as a distraction, and it’s quite bizarre because nobody’s wondering about this now. People have all moved on to pharmaceutical management, except for clinical staff,
    who will STILL be wearing masks in clinical settings that are directly related to treating the respiratory tract and upper airway in acutely infected patients.

  18. Dear Dr Briggs,
    It’s not confusing. Perhaps you have to be steeped in it to be confused.

    What you write is clear and pretty well always has been on this matter of models.
    What’s wrong is when people are making claims about what’s been said ABOUT the models, BY modellers from SAGE. The confusion is in all the misinformation going around through the media and strange rulings from politicians all running in different directions.

    One country cannot cut and paste a method from another country entirely, but I fear this has been attempted by some states. The entire field is awash with bad information, as has been said a thousand times.

    Mixing the climate argument in with covid is asking for trouble.
    Can nobody see any good that will inevitably come out of the situation?

    The reasonable, sane public have had a year’s lesson in looking at numbers and seeing how they can mislead.
    A Covid/Climate mash-up is a recipe for chaos.

  19. “Your reluctance to complete this (literally) simple task has been noted.”

    What a buffoon. This guy never disappoints.

    Model + inputs produces a set of results {R}. {R} is a consequence of the model, whether expected or a surprise. Everyone understands that.

    “This appears to be, to me, a clear argument.”

    Argument for what? He wants to substitute his formulation “tell the model to produce {R}”. But any normal person knows immediately that that formulation has a set of implications that go beyond the actual state of affairs. It suggests that the model was manipulated somehow to get {R}. That the evil cabal of climate scientists, or whoever is twisting his panties today, fudged things to get the {R} that would please their Illuminati overlords.

    I’ve had the displeasure to work with some scientists who turned out to be dishonest and terrible people in various ways. But even they did not “tell” their models to produce any particular results. Why? Because, in most cases, you just can’t. Other people are looking at your models and dissecting your calculational methods. You won’t get away with it.

    You can change Euclid’s axioms and get a geometry without the Pythagorean theorem. But nobody says that Euclid “told” his model to produce the Pythagorean theorem. He wrote down the axioms that he thought were best, and {R} came out. Nobody uses the word “tell” this way. And nobody who knows what the word “argument” means uses it the way it appears above.

  20. Lee is doing a “thought-experiment”.
    Briggs is asking for a “demonstration”.

    My prediction: Briggs defeats Lee with a tko (or Lee retires, hurt).

  21. “told” versus “made”
    Made’s a good word because the model’s a creation

    You can ‘make’ a person say something with the use of thumb screws, apparently
    Those are useful tools if the purpose is evil or if the ends justify the means

    So in one sense models are used to frighten and threaten and blind people with science, or bore them into submission. Still says nothing about whether the model is useful in seeking truth, which is a different purpose, of course.

    So it all depends on where you want to end up, and if I were going there, I wouldn’t start from here! As the Irishman said.

    If you’re going to be accurate you have to be honest. Same in all areas of truth discovery.
    Media and salesmen pretend that science is pure and perfect.

    It’s easy to make the perfect the enemy of the good but it’s impractical and overly idealistic

  22. Lee believes (and feels, no doubt) that if he attempts to make a point with enough condescension and smugness, his point will be self-evident.

    Not working, Lee.

  23. The model is limited to its inputs — what you tell it. The result it produces may be unknown until it’s spit out, but that result is still determined by the inputs. In the case of models that fail to make accurate predictions the modeler has failed to tell the model everything it needed to know. Because the modeler doesn’t know everything or perhaps he wants to hear the model whisper sweet nothings in his ear. Enormously complex systems, such as climate, may be impossible to model accurately since the modeler simply cannot know everything that affects the system, and tell it to the model. Seems pretty straightforward to me, but then, I’m not an expert.

  24. All,

    Lee provides us with a terrific example proving himself wrong.

    Euclid indeed told his (deduced) model to say what it said. He did this by picking the conditions (axioms), and then working out the model’s consequences. Perhaps he knew in advance what the model would say, perhaps not (but this is Euclid, so probably the former). If he had chosen different conditions, his model would have given a different answer. Simple as that. And in both cases, it was he telling the model what to say.

    Lee perhaps is making two errors, both common. The first is his apparent assumption that all the premises he puts into this model (like Euclid into his) are true, or close enough to true to make no difference. Only bad modelers put false premises into their models. And boy does it happen.

    The second is to hear “all models only say what they’re told to say” and conclude that, therefore, all models are in error in some way. Which is false. But there are so many lousy models—regular readers, how many have we dissected over the years?—that it’s easy to make this error, too.

  25. Well now. For those who have been to high school and remember something about geometry:

    If it is appropriate to say “Euclid told his model to produce the P. theorem”; if that’s how geometry teachers talk; if that is the language used in textbooks; then Briggs is right and my objections are misplaced.

    But if nobody talks like that, if that language seems wrong and misleading; if, instead, people actually say “from the axioms we can deduce the P theorem”, then I am right and Briggs is being strange.

    I know which is correct, because I have my nose in one math book or another almost every day, and I’ve spent decades involved in modeling in the physical sciences. You know which is right, too, reader, but I can’t force you to acknowledge it.

    Briggs points out two errors that I “perhaps” made, and then describes two positions that I did not take and do not agree with.

  26. Lee,

    So you put false premises into your models?

    Well, if a man can say he’s a woman, you can make a model that says so, too.

  27. “So you put false premises into your models?”

    Undoubtedly. Real scientists make mistakes, frequently. I guess that’s not something you have direct experience with.

  28. Recall a blog title:
    “all models are wrong but some are useful”
    or something so close that it makes no difference

    Hyperbole I suppose
    All models are wrong because they wouldn’t be models if they were perfect, they’d be the real thing

    It’s a distraction. What matters is how well the scientists concerned represent their own work.
    It is very different listening to Gavin Schmidt describing how models are used compared with Neil Ferguson. He made no false claims about what they are for. Scientists are blamed for the outcomes of decisions made on the state of the science at the time.

    The modelling of covid’s R number, for example, according to those listening to SAGE and working for them, is/was a work in progress. As more information comes along, predictions become easier.
    Personally I’d have been happy with a good current epidemiologist with practical experience to have made estimations based on what he knows. That would never be acceptable for appearance’s sake, and for justifying difficult choices. So models are also a crutch.

    The numbers everybody argues about seem to be more straightforwardly understood with mental arithmetic and back-of-the-envelope calculations, as the decisions or actions are limited to only a few combinations. Timing is everything.

    When making decisions about complex matters in a clinic where one person is involved, nobody needs a model!
    They have a brain which does all the thought experimentation.

  29. “Well now. For those who have been to high school and remember something about geometry:”

    The best part of maths was geometry and trigonometry
    somehow, though, I get the impression that very clever mathematicians, or professional ones, think as soon as you mention Euclid, it’s all over as far as understanding is concerned

    It’s not hard to understand the concepts if I can understand them; that’s the measure my Mum always uses about art
    (she’s an artist, being modest)

    What’s hard is the maths itself that is used in complex calculations.
    Only a few gifted people can understand mathematical language once it spills over into more than half a line… perhaps ten characters!

    Concentration is required! And the ability to hold a thought for more than a millisecond.

    So it’s a shame when people misunderstand each other’s intellect when clever people communicate with average individuals

    No need for it really. It’s fun and a source of humour though so don’t want to spoil the fun

  30. Much of quantum mechanics is “wrong.” But it’s a model that works for a certain restricted class of problems.

  31. ”All models only say what they are told to say.”

    What does this mean? A model is now alive and can be told to say something? I wish I could tell a model to say something. Ha. Is this worse than the sin of reification? Another word game? I am all for it. What Chinese doesn’t love word games?!

    In a model with or without an irreducible error, e.g., y = x + 2, there resides a very specific relationship between y and x. That is, the equation tells me that y = x+2. Not sure who tells whom. Silly?!

    If math is discovered, then Pythagoras didn’t tell the Pythagorean theorem what to say. The Pythagorean theorem told Pythagoras that his conjecture or prediction was right.

    I’d like to echo Justin’s comments. Based on the data structure and characteristics, I postulate an appropriate model with unknowns to be estimated. The fitted model dictates the predictions or inferences. (One needs to know the properties of a model so that it can serve adequately.)

    There are so-called Rashomon effects. They are not caused by crooked people who want the models to say certain things. A consultant gets paid for good solutions. How to find good solutions or models, be it algorithmic or equation-based, is another story. There is no shame in modifying one’s model when more information is available.

    There is no shame in being silly or weak… or a transgender person.

  32. To me, the aftermath of the way mainstream science has managed the Covid situation is not so much about the validity or invalidity of the specific models that have been used as about the fact that it has made pristinely clear that most people naively underestimate the human factor in those realms in which we desperately want to believe that the “subjective” has little importance.

    Even when you are honest and deeply committed to carrying out your task in one of those fields (science, law, etc.) to the best of your abilities, to think that you can set your human side aside, with all that goes with it (in the case of this conversation, the choice of particular variables and inputs, and ESPECIALLY the choice of the real-life data that you will not use in your models), reminds me of the fable of the frog and the scorpion.

  33. Of course computer models represent the personal opinions of the owners and programmers.

    With engineers, designs based on models get tested to verify the models.

    Components, and products made with those components, are tested.

    The models are refined, based on those test results (and product feedback from customers), if necessary.

    With climate change “science”, the models were developed without knowledge of exactly what caused climate change. That knowledge still does not exist today.

    The climate model climate predictions were poor 40 years ago.

    And the climate model predictions today are actually getting worse.

    So far, the latest batch of models (CMIP6), on average, are predicting even more global warming than the prior CMIP5 models … which, on average, already predicted more than double the global warming that actually happened since 1979 (compared with UAH satellite global average temperature data).

    The “average” model represents the mainstream government bureaucrat scientist consensus, and that consensus has obviously been wrong for 40 years.

    The models are not being refined for more accurate predictions.

    And the “best” model (maybe only by chance) is the Russian INM model, because it over predicts global warming less than all the other models.

    But it is generally ignored — accurate climate predictions are obviously not a goal.

    In my opinion, models that make wrong predictions should be called failed prototype models.

    And after many decades of wrong predictions, they should be called climate computer games, except the Russian INM model — that one deserves to be called a climate model.

    I personally prefer models that are tall, female, and walk on runways.

    Richard Greene
    Bingham Farms, Michigan
    http://www.elOnionBloggle.Blogspot.com

  34. “They’re not ((predictions)), they’re projections…tacit knowledge gets lost in translation with climate modelling”
    Gavin S. JLP lab

    versus:
    “We’re not trying to reduce uncertainty” in any event “the predictions made no difference to the decision made at the time”
    (Neil Ferguson re surge capacity response to rapidly rising infections last year, prior to the first shutdown)

    If commentators short circuit ‘the science papers’ and go straight to the newspapers or Twitter, then all sorts of misrepresentation is possible. The press always gets a free press pass to operate how they choose without being voted in or out

    “Freedom” of the press is a big problem in the UK
    Boris has argued for it in the past as has Gove, yet there must be some kind of balance
    Like equality, some are more free than others

  35. ”All models only say what they are told to say.”

    What does this mean?

    I would have thought that was obvious.
    The model is a human construction. It is finite. It is deterministic.
    If it is complex, the results (outputs) may not be as your intuition expects.
    But the outputs will always be what the model specifies them to be, based on the inputs (and maybe past model calculations, if the model includes “memory” of what has gone before). As long as the model is working as designed, then its output will be as designed.
    Therefore, it will “only say what you told it to say” – even when you don’t know exactly what it will say.
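
    A minimal sketch of that last point (Python; the logistic map here is just a stand-in for any deterministic model with “memory”): the trajectory can defy intuition, yet every value is exactly what the specification and the inputs dictate.

        # A deterministic model with "memory": each output feeds the next step.
        # The numbers may surprise you, but rerun with the same r and x0 and you
        # get the identical trajectory -- it only says what it was told to say.
        def logistic_trajectory(r, x0, steps):
            xs = [x0]
            for _ in range(steps):
                xs.append(r * xs[-1] * (1.0 - xs[-1]))
            return xs

        print(logistic_trajectory(r=3.9, x0=0.5, steps=5))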
