
Data Do Not Have Means: Or, The Deadly Sin of Reification Strikes Again!


No, data do not have means. Nor do they have variances, autocorrelations, partial or otherwise, nor moments; nor do they have any other statistical characteristic you care to name. Data only have causes.

“What?! Briggs, you fool. Are you saying that I can’t calculate means? If you’d take the trouble to open any statistics book, you’d see that data do have means.”

No, they don’t.

“You’re being obstinate. Just like in that recent article about time series in which you didn’t understand, because you couldn’t open a book on time series, that time series need to have stationarity in order to be modeled in the usual way.”

Sorry. Data do not have stationarity, nor do they have the lack of it. What’s stationarity? From the government itself comes this definition of stationarity:

A common assumption in many time series techniques is that the data are stationary. A stationary process has the property that the mean, variance and autocorrelation structure do not change over time.

This is a perfect instance of the Deadly Sin of Reification. Of any actual series of numbers, including a series of length 1, a mean may be calculated. But that series does not have or possess a mean. The data do not sense this fictive mean, nor are the data influenced in any way by this hobgoblin, because, simply, the mean does not exist; and since it doesn’t exist, it doesn’t have any causative power. And the same is true for any statistical characteristic of any data.

The calculation we also call a mean certainly can exist, if somebody troubles himself to flick the stones on his abacus. But for a series of size 1, a calculation for variance or autocorrelation cannot be done, yet the series still exists (the fallacy of limiting relative frequency lurks here; frequentists are obliged to think the impossible, that every datum is embedded in an infinite sequence). This, then, is the problem: to have a mean is equivocal. The phrase can be used correctly (as it is on this blog), but it usually isn’t.
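 
In code, the point is plain (a minimal Python sketch; the lone datum is invented):

    # The mean is a calculation performed on data, not a property data possess.
    data = [42.0]                    # a series of length 1
    mean = sum(data) / len(data)     # the calculation can be done: 42.0
    # The sample variance divides by n - 1, which is 0 here, so for a series
    # of length 1 this calculation cannot be done at all:
    #   sum((x - mean) ** 2 for x in data) / (len(data) - 1)  # ZeroDivisionError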

The Deadly Sin of Reification is so rife in probability and statistics that to find it absent is a surprise. And this is so even though every statistician will say he agrees with the statement, “The data are not the model used to represent the uncertainty in that data.” She will say she agrees, but her actions will be contrary.

This is why you hear talk of data being “normally distributed” and the like. No data in the universe is normally distributed, or distributed in any way whatsoever by any probability. Probability has no power; probability is not a cause! The uncertainty in much data can, of course, be modeled using a normal distribution, at least to a first approximation. It’s proper to say, “Given some evidence which led me to this conclusion, the uncertainty in this data is represented by a normal.”

That means, with some light qualifications, any data can be modeled by any probability distribution (this follows from the fact that all probability is conditional). In particular, data lacking the criterion (lacking the calculation) for “stationarity” can be modeled by a distribution which requires it. The model may or may not be any good, naturally, but we tell a model’s goodness by its predictive ability.
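 
For example (a hedged Python sketch, not a prescription; the observations, the normal model, and the log score are all assumptions of the example): fit the model on old observations, then judge it by how well it predicts new ones.

    import statistics
    from math import log, pi, sqrt

    old = [12.1, 9.8, 11.4, 10.2, 10.9, 9.5]   # observations used to fit
    new = [10.7, 11.9, 9.9]                    # later observations

    # Model the uncertainty in the data with a normal distribution,
    # calibrated by calculated statistics of the old observations.
    m, s = statistics.mean(old), statistics.stdev(old)

    def normal_logpdf(x, mu, sigma):
        return -log(sigma * sqrt(2 * pi)) - (x - mu) ** 2 / (2 * sigma ** 2)

    # Goodness is judged on data the model has not seen: the higher the
    # total log score on the new observations, the better it predicts.
    print(sum(normal_logpdf(x, m, s) for x in new))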

Glance at any paper which describes simulation. The entire field reeks of the DSR. Terrible troubles are taken, it is said, to ensure random numbers go into routines so that the resultant simulation has (possesses) the proper means, variances, autocorrelations and the like. Data are generated, they say, by this or that probability distribution, which, it is said, has these certain characteristics.

Now to generate means to cause, and probability isn’t a cause, and random only means unknown and everything in a simulation is known. So there are two central fallacies here. It is true, as in the data series where a mean can be calculated, certain things in simulations can be calculated, but any resemblance to live things is in the minds of users and not in the simulated numbers themselves.
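 
Anyone can check the “everything is known” point (a minimal Python sketch): seed the so-called random-number generator and the entire simulation is fixed in advance.

    import random

    # "Random" numbers from a seeded generator are entirely determined;
    # run this twice and you get the identical sequence both times.
    random.seed(17)
    first = [random.gauss(0, 1) for _ in range(5)]
    random.seed(17)
    second = [random.gauss(0, 1) for _ in range(5)]
    assert first == second   # nothing here is unknown, hence nothing "random"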

To say data have a mean or any other probabilistic characteristic is thus always a fallacy. Data always have a cause or causes, of course, and knowledge of these causes is always our real goal. Probability doesn’t get us to this goal. Knowledge of cause can.

Everything that was something else and is now this must have been caused to be this by something that actually exists. This cause or these causes do have certain characteristics and powers because, of course, to be responsible for a change is what being a cause means. But these causes won’t be means, autocorrelations or anything like that.

Again, our understanding of the uncertainty of data is influenced by means and so forth. That’s because probability is an epistemological and not ontological concern. So I reassert the true proposition: data do not “have” means.

“I don’t understand a word you’re saying, Briggs. None of this is accepted by statisticians. It isn’t even in any books!”

That so? So much for the field of statistics, then. But you’re wrong: it is in one book. We’re still waiting to see who will publish it.

Addendum Wiley sent comments from three reviewers: one unconditionally recommended publication and two recommended it conditionally, but the conditions are nothing much (a better title, ensuring certain literature is cited, etc.). So it’s good news.

Categories: Philosophy, Statistics

61 replies

  1. Congratulations on the book review outcome. Looking forward to purchasing a copy.

    As to the thesis of your argument: in my observation, most papers I have reviewed in my field of critical care informatics & analytics make the arguments that data have means, or that data follow such-and-such distribution. Unfortunately, I believe the basis for this sloppiness may have its beginnings in the teaching of basic statistics & probability inside the Academy.

    When I was a young and wide-eyed undergraduate (I am channeling the curmudgeon in me), I recall taking a course in design of experiments. In this course we studied causes of measurement uncertainty and conducted experiments in-situ and validated uncertainty models. We also had to validate the causes of uncertainty in addition to quantifying it, and seek mechanisms to improve precision. I suppose this still goes on today. Yet I absolutely recall being taught what you explain in your missive above… even though that was 35 years ago.

  2. Congratulations Matt!
    Looking forward to reading the book. Hopefully they will price it reasonably.
    Marcel

  3. Marcel, John Z,

    They haven’t said yes yet. I answered the critiques yesterday. And now we wait. Still haven’t heard back from Springer.

  4. A concept from the physical sciences might apply here, with some stretching.

    Means are extensive properties. If you slice the data, the means change. They do not come along, hidden inside a datum, ready to be found by a keen observer with a microscope hooked up to R. This is in contrast to intensive properties, which is a concept I don’t think I can make work in this analogy. Perhaps the only intensive property of a datum is that it has its very own ‘Given E’.

  5. I have some data, I’m asked to calculate a mean; how should I refer to that mean and its relationship to the data in a way that:

    1) reduces the chance that I’ll con myself in my own reasoning
    2) reduces the chances that other people will con themselves when (if!) they consider what I’m saying
    3) doesn’t require a full sentence of explanation
    ?

    Or should I just accept that verbosity is the price that has to be paid for avoiding trouble down the line?

  6. mrsean,

    Say, ‘The calculated mean of this data is x’.

    And if your interlocutor asks, ‘What does this mean mean?’ say, ‘It means it’s the calculated mean, and nothing more.’

  7. James,

    Means are not extensive. If you split a system you do not get the total mean by adding the individual means, the way that you do with volume.

    Means are largely intensive the way that pressure is, which is expressed as a mean. In a small system there will be statistical variation, i.e. fluctuations from equilibrium.

  8. Briggs. Great news on the book. Wiley no less! Can’t wait, whoever publishes. But…when I see phrases like “…the uncertainty in this data are represented…” as your preferred characterization, I think “well, just as the data do not have a mean, they also don’t have uncertainty. The uncertainty is in you or me.” You go quite a ways down De Finetti’s road, but reject subjective probability. Or this statement, “Again, our understanding of the uncertainty of the data…” There may be variation in the observed quantities, but isn’t the uncertainty about that and the underlying causes within us? I’m sure that once I get the book, I’ll understand more!

  9. gofx,

    Excellent comment. Here’s the short explanation. All probability is conditional. We have a proposition of interest and some premises probative of it. Once these are fixed, the probability is fixed; it is objective. But there is tremendous freedom in choosing premises and propositions of interest. In that narrow sense, probability is subjective.

    But that same freedom is found on high school algebra exams, and we do not say algebra is subjective. For instance, “Solve for x in this system of equations: y + x = 7; y < -8.” No subjectivity in x, which is fixed by the premises (which tacitly include the rules of algebra). But the choices of “-8”, “+”, and so on were subjective. Note, too, x is not a unique number! Same thing happens in probability.
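 
    Written out, the algebra runs:

        y + x = 7   implies   x = 7 - y
        y < -8      implies   -y > 8,  so  x = 7 - y > 7 + 8 = 15

    So x is any number greater than 15: fixed completely by the premises, yet not unique.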

  10. “That so? So much for the field of statistics, then.”

    I have to admit I probably don’t completely grasp the point you’re trying to make. But here is something that gives me pause: experimental physicists and engineers (and anyone in an undergraduate physics laboratory, at least when I was teaching it) learn the basics of a field that is called, to borrow the title of one book on my shelf, “The Statistical Treatment of Experimental Data”. Look around you. Every artifice that you see, from your computer to the panes of glass in your windows, depends on the correct application of these statistical techniques. So does the discovery of the causes of diseases such as cholera. I’m not talking about the pseudo-science of questionnaires about religiosity. Given this, it’s hard for me not to suspect that your protestations are either straw men or quibbles about how we refer to things, rather than what we actually do.

    Congratulations on progress toward publication; perhaps things will be clearer after dipping into the book.

  11. Lee,

    “Every artifice that you see, from your computer to the panes of glass in your windows, depends on the correct application of these statistical techniques.”

    This isn’t so, but it can seem like it, especially in those fields where reality is a constant and insistent protagonist, like engineering. The goal is always to discover and understand cause. Engineers are good at this and so their applications of statistics are usually better. They set up systems where the same sets of causes operate, or where deviations from these fixed sets are rare (this is why bridges seldom fall), but it’s only because they have done such a good job identifying the right causes that it comes to seem like their data “has” means and the like.

    This is contrasted with those fields that make use of pseudo-quantifications, like you mentioned, and where it is quite clear data do not “possess” means or other statistical characteristics.

  12. Thanks, Briggs. I think I see what you mean. You have given me a lot to chew on, and that’s just the appetizer. Can’t wait for the main course. Hurry!

  13. I’m much more interested in this cabal of receptive ‘reviewers’ that Matt has infiltrated into Wiley.

    For this cannot be an isolated incident. Matt must have infiltrated entire cadres of sympathizers throughout the academic world.

    Else favorable reviewers of Matt’s book would not have (to Wiley) appeared “by chance.”

    For — THERE IS NO ‘CHANCE’!!!!!!!!

    But how did Matt engineer this proto-March Through The Institutions behind even our backs?

    Logic dictates that Matt has powers we simply do not comprehend.

    Who knows where it will end. Will TED talkers and New York Times editors now soon be professing that everyone has known all along that data do not have means?

    Only Matt must know the answer to that question.

    And that’s kinda scary.

    I’m afraid.

    I’m very afraid.

  14. A question. Commonly in business, “averages” (means) are used to track some metric that we’re interested in improving.

    Say we are a utility company and want to improve our work time on a specific task (say, replacing a gas meter). We look back in time to see how long the task takes based on our computer system which the technician uses to track “start” and “end” times. We then calculate a mean of this data, “by month” or some other time slice, and set a “target” mean that we want to try to get to – reduce our time by 5 minutes. The technicians, supervisors, managers and engineers get together to figure out things that could be done to improve the process of installing a meter, and implement these changes. The executives monitor the results, again by calculating the mean “by month”, and see a marked improvement in the mean.

    Obviously this is just one metric, but is this a case where using descriptive statistics like the mean can work? Another example might be statistical process control?

    Of course that statistic doesn’t actually tell us that what we did truly improved the process without looking at each individual case – perhaps we just fired the technician that took hours for each install? This is the problem marketers face all the time – did our coupon or ad actually influence the customer to purchase the product?

  15. Nate,

    Perfect example. Your technicians etc. looked for causes of time-to-completion and the like. That calculated mean time certainly exists, like any other calculated mean. But there is no statistical mean driving all those individual performances. People’s behavior doesn’t sense this mean, somehow, and move toward it or try to conform to it. Therefore, the data you measured don’t have a mean in the statistical sense.

  16. Lee: Are you saying without statistics there would be no computers or panes of glass? Obviously there were many discoveries and inventions prior to statistical evaluations. Computers work because zeros and ones get turned off and on. Glass makes window panes because of its inherent physical properties. Maybe the discovery of the causes of some diseases would have taken longer without statistics, but even then I can’t see how it would have failed to occur. So much science existed outside of and independent of statistics that I would consider statistics a possible tool for identification of certain patterns, to be used very, very judiciously and checked over and over lest we make invalid conclusions (like eating bacon is as bad as smoking cigarettes, for example).

    JohnK: Your assumption may be based on the idea that there exists a very small cabal of receptive “reviewers” who believe a book should be published even if the premise is not something they agree with. The size of your input sample may be grossly underestimated. It may not be such an unusual feat. (Or maybe cynicism has gripped us all and we need to rethink that premise.)

    Factor in that the book was sent to multiple publishers and rejected by some, so there is an element of throwing an idea out there and seeing where it sticks. Why it sticks is because the reviewers believe there is a reason to publish–it’s that simple. Not scary at all. It might have been scary had Matt done a statistical analysis of a publisher likely to go with the book, sent it out and then the book was published first run. Or Matt is really skilled at statistical analysis and his model had predictive value. So many factors…….so many possible outcomes.

  17. Thanks Matt.

    In regards to causes – in the book, do you touch on the ideas of Shewhart and Deming and their differentiation of causes into “common causes” vs “special causes”? JM Keynes talks about this a bit too in his book on probability. A recent example might be Rumsfeld’s comment on “known unknowns” vs “unknown unknowns”.

  18. Nate,

    Not really. I’m more interested in showing that probability is not a cause, that our true goal is understanding causes, therefore we need to know what cause means and how it follows from understanding natures and essences. I can only give an overview of this latter material, of course, but have lots of pointers for more reading.

  19. What I am taking away from this.

    The data is a set of observations. Don’t attribute qualities to the data beyond that. The data exists.

    What can we do? We can fit the data to a model, such as a normal distribution. And we can calculate mean, variance, co-variance, etc. from the data to calibrate the model, and we can make inferences from the model.
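 
    For instance (a minimal Python sketch; the data and the normal model are invented for illustration): calibrate the model with calculated statistics, then draw the inference from the model, not from the data.

        import statistics

        data = [3.1, 2.7, 3.4, 2.9, 3.3, 3.0]                   # the observations
        m, s = statistics.mean(data), statistics.stdev(data)    # calibration

        # An inference from the calibrated normal model: an interval the
        # model says should contain roughly 95% of future values.
        print(m - 1.96 * s, m + 1.96 * s)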

  20. Re the Addendum – it is good news.

    It is true, as in the data series where a mean can be calculated, certain things in simulations can be calculated, but any resemblance to live things is in the minds of users and not in the simulated numbers themselves. (1)

    Ah, good, you acknowledge the sample mean that’s calculated from the data. So, now, do data have means?

    Again, our understanding of the uncertainty of data is influenced by means and so forth. (2)

    Great, you do understand that means (such as the model assumption of stationarity in time series) can influence our understanding of the uncertainty in the data! Which is the reason I asked you to think about differences between sample (data) and model (that attempts to quantify the uncertainty).

    Still, what does this have to do with the classical, mystical probability?

    If you believe everything has a cause, then you’d say data have cause(s). But what is your objective in saying so? Can it help you do the following?

    Again, the challenge: I can send you the monthly temperature anomalies. With the understanding of (1) and the recognition of what second-order stationarity model assumption is for (2), you are one step further. Next, show me that your Third Way does what JohnK claims via the challenge.

  22. With regards to simulations, if I’m programming an agent-based simulation that sets up the initial conditions for a bunch of agents using a random-number generator that distributes its output according to a Normal cumulative probability distribution, but I write that these data are “generated by” a Normal distribution or whatever, is that really the same fallacy? I admit that it’s an awkward way to state what I’m doing, but I don’t think it’s the same kind of mistake as saying that real-world observations “have” a Normal distribution.

  23. And this is so even though every statistician will say he agrees with the statement, “The data are not the model used to represent the uncertainty in that data.” She will say she agrees, but her actions will be contrary.

    Interesting sex transformation. Only women will belie their words or did you have someone in particular in mind?

  24. Just to be clear, when I said

    such as the model assumption of stationarity in time series,

    I meant the assumption of stationarity in a time series MODEL.

  25. DAV,

    Truly, it’s a mystery.

    Joe C,

    More or less, yes. Everything in a simulator is known, but we often pretend it isn’t, hence “random-number generator”. Random only means unknown, as said above. But we don’t speak of “unknown-number generators”. The supposed “randomness” almost blesses results. So what does simulation really do? Not what most think, though it can do a job of calculating (some) uncertainty. I have an entire section on this in the book. Perhaps next week I can do a post on this to keep things separate.

    JH,

    “So, now, do data have means?” Why, yes, JH, we have calculated means, as I take such pains to say in the article. But data do not have stationarity nor do they have means.

    What is my objective for saying every changed thing has a cause? To say what is so.

    I’m not at all sure what your “challenge” is. Is it to create a model for some particular data series? Why would I care to do that? Anybody can create a model.

    Doug M,

    Pretty close, except that, given varying premises, we have various models, and each is conditionally correct. Each does not explain the cause of data, but each explains our uncertainty in it given these assumptions. Means, parameters and so forth belong to models, not reality (i.e. data).

    In classical statistics you often hear of “true values” of parameters, which is a claim that these parameters exist, and which is false. The question of origin of parameters, as statisticians usually define them, is fascinating. I have a section on this, too.

  26. I’m very glad the book has got fine reviews and hope to purchase a copy when it’s available….
    I’m not sure I understood your article or prescriptions. My own understanding (which may be very naive) is that a data set is a collection of observed values of “something”. Now the most informative way of seeing what that data set looks like is a graph of some sort. Failing that, there are measures of “central tendency”–mean, mode; measures of width–standard deviation,…; measures of symmetry (I forget what the third moment is called); measures of peakedness (is this kurtosis?)…etc. So do those numbers calculated from values of the data give you some notion of what a graph might look like? (A small computational sketch follows below.)
    Am I wrong?
    By the way, here’s a physical situation where mean, etc., don’t describe what is going on: the diffraction pattern from a two-slit experiment. I imagine there are quite a few others.
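 
    (For concreteness, a minimal Python sketch with invented values; every one of those shape measures is just a calculation performed on the numbers:)

        import numpy as np
        from scipy import stats

        data = np.array([4.1, 5.0, 3.8, 4.6, 5.3, 4.9, 4.4])
        print(np.mean(data))           # central tendency
        print(np.std(data, ddof=1))    # width: standard deviation
        print(stats.skew(data))        # symmetry: the third moment, skewness
        print(stats.kurtosis(data))    # peakedness: kurtosis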

  27. JH: Monthly temperature “anomalies” are not raw data. They already contain the calculation of a mean or some other statistical construct in order to calculate the “anomaly”.

    Bob: The best way to “see” data is a graph? I believe that’s where the entire global warming industry got its way of communicating.

  28. “Bob: The best way to “see” data is a graph? I believe that’s where the entire global warming industry got it’s way of communicating.”
    Ah, Sheri, I’m talking about real MEASURED numbers, not numbers got from a computer or by fudging the values from data sets of primary measured quantities.

  29. Bob: Okay, that’s somewhat better, but the scale of a graph and other items can still be used to create false impressions. If one is very careful, a graph probably can be useful. (I’m thinking of temperature graphs that have a very narrow range on the Y axis so the changes look so much bigger.)

  30. Sheri, you’re quite correct that graphical displays can be used to distort what data really is telling us. There’s a fine book on introductory statistics (for non-technical sorts), “Seeing Through Statistics”, by Jessica Utts, that gives essentially your example as a case of distortion (among others). The book was the text for a summer school statistics course I gave, mainly to football and basketball players at Penn State during my post-retirement graduate studies in statistics. (“In the country of the blind, the one-eyed man is king”….or so they say.)

  31. Briggs,

    Models postulated to quantify the uncertainty in data often involve certain theoretical means. And one can calculate the sample means for data. Nothing to disagree with.

    A student just asked me what the median and mean of their exam scores (data) are. Do the exam scores have a living median or mean? No. Do all means and medians have physical existence? No. “Have”? Well, perhaps there are criteria for using the word have.

    Do median and mean imply anything to the student? He wanted to know whether he was among the top 50%. Why mean? He also wanted to know whether the exam was easy.

    Anybody can create a model.
    Maybe, anybody can create a model. Maybe. But can anybody back up your claim that your Third Way can do what JohnK claims here?

  32. Sheri,

    I know that monthly temperature anomalies are not raw data. I also know the difference between climate and weather. You will be amazed to know how much I know, and of course, also how much I don’t know.

    By the way, you basically said that Briggs is not a true statistician or scientist; see https://www.wmbriggs.com/post/15095/

    “You’re being obstinate. Just like in that recent article about time series in which you didn’t understand, because you couldn’t open a book on time series, that time series need to have stationarity in order to be modeled in the usual way.”

    I cannot tell you who said this. I do know that I did not say that time series (models) need to have stationarity in order to be modeled in the usual way! However, my comments were about checking the model assumption of stationarity and about what you said about trend. (Why the assumption, though? This is the part where I ask you and your readers to check a textbook or google if they wish to know more.)

    Anyway, what is the usual way? There are various ways to model time series data, but not all require the assumption of stationarity. There are regression models, e.g., harmonic seasonal models, that don’t require the stationarity assumption. There are non-stationary models. Again, google or open a textbook to find out more.

  34. JH: If you know monthly temperature anomalies are not raw data, I do not understand why you want Briggs to use a statistic to “prove” something. I guess I’ll wait to see if Briggs understands what you are saying. (I’d only be amazed at what you know and don’t know if I knew what those things were.)

    Not clear on why I said Briggs is not a real statistician or scientist. I have said that I do not consider some of his ideas correct and why, but generally I reserve “not a real scientist” to someone who spouts dogma in lieu of facts on a regular basis. Otherwise, some things people claim are science and statistics may not be what I consider such but that does not mean they are not being scientific—it just means I disagree with their assessment.

  35. Bob & Sheri, (sounds like a drink)

    Your discussion of graphs reminds me of an anecdote an economist friend of mine once told. He was interested in some aspect of the university operation and was given a number of graphs to look at. Response: these are just pictures, as an economist I want raw data so that I can do my own analyses.

  36. Sheri,

    No one who is an actual statistician or scientist would ever make two-year ahead predictions.

    (See https://www.wmbriggs.com/post/17079/)

    So, worse, Briggs and his coauthors made multiple-year ahead predictions in the paper or graph here – https://www.wmbriggs.com/post/15095/.

    How about trying to understand Briggs’s Third Way paper first and his comments here – https://www.wmbriggs.com/post/17079/?

    I am not asking Briggs to prove anything, but to back up his claim.

  37. JoeC,

    I do lots of similar things with simulations (agent-based or otherwise). I tend to say that ‘values of X are drawn from a Y distribution’.

  38. JH: At this point, Briggs’ model is not scientific. He must be able to predict and we don’t know that he can. I stand by my statement that one cannot predict even a year in advance in a chaotic system such as climate. Briggs’ statement that his model can make credible predictions seems quite premature to me right now. Briggs may feel free to disagree with my assessment if he chooses.

  39. In a dream last night I told a mathematician that Briggs said that a series doesn’t “have” a mean, that you can merely “calculate” one. I got a chuckle: “I can’t remember the last time I calculated anything. I can barely do 2+2 in my head…” The dream went on but drifted way off topic at this point.

  40. Greg,

    I dreamt the frequentists who read this, “the fallacy of limiting relative frequency lurks here; frequentists are obliged to think the impossible, that every datum is embedded in an infinite sequence”, converted to logic and abandoned hypothesis testing forever.

  41. I was amazed (and delighted, of course) that Matt got three quite positive reviews from Wiley’s reviewers. So I penned a (joyful, congratulatory) tongue-in-cheek comment about Matt’s amazing Svengali-like new powers. If Sheri did not get the joke, that is my fault for not telling the joke better.

    Matt in these Comments:
    >”The question of origin of parameters, as statisticians usually define them, is fascinating.”

    For Matt’s sympathetic students: One root problem may be that people don’t truly understand (I certainly didn’t at first) that statistics USES math; it is NOT math. By that I mean: statistics has NO mathematical foundation.

    The ‘statistics’ we all have learned and/or used are merely mathematical elaborations of the absolutely fantastical and/or positively woolly ‘ideas’ and assumptions behind them.

    Matt has discovered that the very FOUNDATIONS of ‘statistics’, the logic and assumptions upon which it is built PRIOR to any mathematics, are fatally flawed.

    Here is a key point. The ‘statistics’ we all learned is NOT built on ANY mathematical foundation — not a solid mathematical foundation, not on ANY strict mathematical foundation. It is built instead on assumptions and logic. But the assumptions are false, and the logic not logical. And only then, AFTER this rotten foundation has been laid, was mathematics — solid and sometimes not so solid — APPLIED.

    ‘Statistics’ is NOT mathematics. That is key. ALL the math in ‘statistics’ is applied math only, applying math to… Raising the question, Applied to What?

    Matt discovered that ‘Applied to What?’ was the key question. Then, often simply by looking at the literature and finding the places in which the inventors of the ‘statistics’ we know talked about WHY they proceeded to invent ‘statistics’ the way they did, he has shown that the founders of classical statistics had defective ideas.

    So naturally, ANY math — even completely solid math — built from defective ideas is …. not what you want.

    This is why Matt’s work is so important, in my view. ‘Radical’ means “from the root”. Matt’s work is an effort to re-examine the rotten roots, the foundations, of ‘statistics’ as is now learned and taught, and to re-found statistics on a firmer basis.

    One way I proved this to myself was to examine Matt’s argument re confidence intervals. I found, just as Matt said, that not only does any actual classical confidence interval that we could ever calculate have ZERO meaning — zero, zip, nada — but that classical statisticians admit that very thing, in all the best textbooks. See here for an example I found for myself.

    So: the ‘foundations’ that we all learned, or gathered, or picked up in the air, in our statistics courses and from our practical work DON’T REALLY ‘FOUND’ STATISTICS adequately. All that math is built on sand.

    When that kind of statistics ‘works’, it works not because of itself, but because we carefully replicate studies, think hard, etc. — not because the statistics itself is ‘sound’. At best, classical (and classical Bayesian) statistics has heuristic value only.

    The ‘ideas’ we all had about statistics are preconceptions, not arguments, with precious little if any mathematical, let alone logical, foundation.

    Not that the Third Way statistics Matt offers is a panacea, a magic dust we can sprinkle to obviate the hard work of getting closer to the reality under study. The Third Way is simply a way to be far more honest, from the beginning of our analysis, about what any statistical analysis, even an impeccable one, can and can’t do.

    The Third Way, Matt argues, is merely the least we should be doing, if our goal truly is to come closer to the reality we are looking at.

    That is the first thing we must all learn, I think, to follow Matt’s work.

  42. JohnK: My response was apparently too serious sounding also. 🙂

    If you are looking for a group of people who do not believe data have means, try dentists or doctors. Asking what the “average healing time” or “average pain level” is gets met with “It’s all individual.” These people do not see any use for averages or means or anything similar.

  43. Ok, I’ll probably make a mess of this. But here goes anyway.

    Nate talked about meters and shaving 5 minutes off of meter installations based on a “mean”.

    Let’s say that we have four two-man teams. Two days of work look like this:

    Meter# Date Team Time
    10156 3/3/2015 B 43
    83226 3/3/2015 D 69
    94312 3/3/2015 C 67
    23815 3/3/2015 A 60
    10876 3/3/2015 B 82
    70288 3/3/2015 C 70
    42382 3/3/2015 C 79
    23944 3/3/2015 C 27
    58843 3/3/2015 B 27
    90258 3/3/2015 A 51
    30607 3/3/2015 A 78
    92332 3/3/2015 C 72
    46538 3/3/2015 D 161
    29150 3/3/2015 A 40
    75077 3/3/2015 A 19
    84339 3/3/2015 B 75
    26974 3/3/2015 B 45
    97686 3/3/2015 C 82
    41787 3/3/2015 B 20
    88853 3/3/2015 C 54
    58179 3/3/2015 D 80
    10269 3/3/2015 A 60
    78262 3/3/2015 A 78
    93826 3/3/2015 A 69
    62550 3/4/2015 A 42
    73096 3/4/2015 C 66
    20276 3/4/2015 C 83
    25695 3/4/2015 C 82
    69185 3/4/2015 A 68
    69026 3/4/2015 B 66
    73723 3/4/2015 D 136
    36331 3/4/2015 B 57
    74087 3/4/2015 C 40
    26394 3/4/2015 C 39
    13143 3/4/2015 B 43
    48029 3/4/2015 D 96
    38779 3/4/2015 A 42
    24903 3/4/2015 A 73
    26355 3/4/2015 A 22
    95770 3/4/2015 C 78
    30395 3/4/2015 A 45
    73306 3/4/2015 D 164
    74259 3/4/2015 D 156
    53789 3/4/2015 D 117
    68118 3/4/2015 B 53
    72365 3/4/2015 C 41
    27354 3/4/2015 C 72
    16937 3/4/2015 B 72
    51654 3/4/2015 B 39

    Average time is 67 minutes (3300/49). So Nate might say, “Hey, let’s try to get that down to 62 minutes.” But is that going to actually help productivity?

    A closer look at the data shows:

    Team    Count    Sum    Average
    A          14    747         53
    B          12    622         52
    C          15    952         63
    D           8    979        122
    Total      49   3300         67
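 
    In code, that breakdown might be computed like this (a minimal Python sketch with pandas; the file name and column layout are made up to match the table above):

        import pandas as pd

        # Columns assumed: Meter, Date, Team, Time (minutes per install)
        df = pd.read_csv("meter_times.csv")
        print(df["Time"].mean())   # the overall calculated mean: about 67
        # The per-team calculated means tell a different story:
        print(df.groupby("Team")["Time"].agg(["count", "sum", "mean"]))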

    Sure enough Team D says that’s cool. No problem. Five minutes? Hah! We got that!
    Teams A and B say wait a minute. No way.
    What they need to do is fire Team D and hire some guys that will get the numbers up around where the other guys are at.

    So. The point to me is, yes, you can calculate a mean, but does it mean anything? If we picture the data as fairly uniform, then maybe calculating the mean can help us discern something about the data and the causes. But what if the data is not uniform? The mean can actually be misleading in an information sort of way. That’s because the mean doesn’t really tell us the causes or what the data is actually saying.

    Team D is actually taking about twice as much time to change out a meter and they are doing about half as many per day. Improving productivity is probably not going to be accomplished by haranguing teams A, B and C. It’s going to be by fixing Team D.

    Anyway, that’s sort of what I hear him saying.

  44. 1. It IS quibbling. It’s similar to the criticism I received elsewhere when I said (not being a geophysicist) that “I believe the mainstream geophysicists’ interpretation of the evolution of the geophysical system as influenced by CO2 emissions.” I was chastised as follows: “Aha! It’s a belief, I was right, your climate fanaticism IS akin to a religion.” Sigh… Ok, “given my level of knowledge and the seeming credibility of the arguments in any direction that I’ve read and the literature I’ve read about the earth/atmosphere/ocean system, it is my opinion that those who infer from their studies that great damage is likely with increasing CO2 levels in the atmosphere are more likely to be correct.” Did that really further anyone’s understanding of me, the system, or the science (whichever way you interpret it)?

    2. Dr. Briggs, is it your position (not to say “belief”) that, for example, the alpha decay of a particular atom of U238 to Th234 and He4 at a particular time has a cause? A “yes” implies that your opinion (not to say “belief”) is that there are so-called “hidden variables.” From what you’ve written, I anticipate that your answer will be “yes.” Despite your religious beliefs, I will be very surprised if your reply is that “God decides which atoms should decay when.” And prediction of the macroscopic properties of a sample of U238 using the known half life works pretty darn well.

    3. I’ve asked for years for an example (not caring about a particular one about which I might care but merely a representative example) of how one might use the body of knowledge in which Dr. Briggs possesses a doctorate to make an assessment. One example: Many contend (I don’t) that living under high voltage power lines or using cell phones “cause” (or, better, significantly increase the likelihood of – whatever that means – developing) cancer. Were I to attempt to retain Dr. Briggs to enlighten me as to the truth or falsehood of this contention, would he throw up his hands and say “you’ll need to speak to oncologists and physicists, statistics don’t cause cancer”? I might then reply “I have done so, several are currently actively investigating the physics and physiology involved but are years from a conclusion, can’t your knowledge of statistics and the data available provide valuable information for decision making in the meantime?” I’m very interested in Dr. Briggs’ response. Much of his writing seems to me to be able to be synthesized as “statistics are useless” though he has denied that. If that’s a poor example, feel free to choose a better one.

  45. Rob Ryan,

    You’ve generally lost me with your questions. The first I don’t understand at all. Number 2, yes, something causes the alpha decay. The decay happens, yes? It is a change, yes? Therefore, something actual must have caused the change. What is the cause? I have no idea. Actually, I have some idea. The cause could not have been nothing, because nothing is not a thing and the absence of everything cannot be a cause. Likewise randomness cannot be a cause because randomness is not a thing. Therefore, it must be something actual.

    About 3, many times I have told you exactly what to do. The advice is even linked to the right in the “third way” paper. So I am not at all sure what your complaint is.

  46. The first: It’s an analogy. Some would say “the data have a mean.” You would say “a mean can be calculated for the data.” Some and you are saying the same thing. No one (well, I suppose there may be someone) thinks that the data conspire to create a mean, or that the mean has caused the data or that the data has caused the mean. When someone says “the data have a mean” they mean that they have added the numbers and divided by the number of numbers they added (or they multiplied the numbers and extracted the root represented by the number of numbers, or they inverted the numbers, added the inverses, and inverted that number, or…). That’s the same thing that you mean. Just as when I say “I believe…” I actually mean the longer version. Neither “I believe…” nor “the data have a mean” implicitly carries the baggage you attribute to the latter.

    As to number 2, I believe that you will find yourself in disagreement with those who study such things (but that’s a position with which I’m sure that you’re familiar). As best I understand their understanding, the atom decays because the atom decays.

    Number 3: No, it doesn’t answer my question. Let me bring it down to Earth (and let me emphasize, the hypothetical situation I’m describing has no relation to anything in which I have a personal stake; it’s purely made up so that I can understand what you think can and cannot be done with statistics, and yet I think that it actually could happen and maybe has). Suppose I’m the swing vote in a city council. An ordinance is on the agenda that would prohibit the issuance of building permits within x feet of high power electrical lines. I have rabid proponents and opponents in my ear. I’m able to buy some time for trying to understand how I should cast my vote (if I vote against and power lines really are carcinogenic, I may have condemned some number of people to a horrible, premature death, and if I vote for and they are not, I’m doing serious and needless financial damage to land owners). I don’t have years for the physicists and oncologists, I have, say, six months. Dr. Briggs, if all the data on power lines (locations, voltages, etc.) and on cancers (locations, types, other exposures, whatever you want to know) is made available to you, can your expert knowledge of statistics help me decide how to vote? I don’t have time to see if models and predictions have skill, I only have the data and I need to make a decision. Can you help or not? If you can, how will you proceed?

  47. Terminology is often misused by non-specialists.
    How many times have you heard people speaking of the IP address of their computer? Yet computers don’t have IP addresses, network interfaces do – a subtle, but significant difference.
    Whilst I applaud and encourage such edifying posts, I doubt very much that things will change because of them – people, even otherwise well educated ones, will continue speaking of data “having” a mean, just as they will continue to insist that their computer has an IP address.

  48. Hmmm. Data do not have means. That’s a tough one to swallow.

    So I’m told to control a production process. I install some equipment to make and record measurements that seem likely to be relevant to understanding the process. I group the data into blocks of sequential measurements. I calculate the mean of the data blocks.

    Now of course the calculated mean is only an estimate of the true mean of whatever aspect of the process is being measured, and that true mean, of course, doesn’t even exist because the process is continually drifting, as all physical things do, and which is why there is a need for me in the first place. But I pretend it has meaning anyway, and use that pretend meaning to tweak the process so that production can continue and make money for my company which in turn puts some of it in a paycheck for me.
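 
    (In code, that blocking-and-averaging is only a few lines; a minimal Python sketch with made-up measurements:)

        # Group sequential measurements into blocks and calculate block means;
        # a drifting process shows up as the calculated means wandering.
        measurements = [10.02, 9.98, 10.05, 10.01, 9.97, 10.11,
                        10.08, 10.15, 10.12, 10.19, 10.22, 10.25]
        block_size = 3
        blocks = [measurements[i:i + block_size]
                  for i in range(0, len(measurements), block_size)]
        print([sum(b) / len(b) for b in blocks])   # the "means" I pretend have meaning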

    So what I need to understand the point, I’m thinking, is a story. The story goes something like this: Engineer A collected some data and tragically assumed the data had a mean, which, of course, it did not, and could not, since data do not have means. As a result, Engineer A, based on this erroneous belief, took action B, which caused consequence C, which in turn caused tragedy D, and the engineer was promptly fired.

    Yep, a story with a tragic ending, so I can learn my lesson about data and means. That’s what I need. Otherwise it just sounds like a semantic in search of a relevance. (If there was a book of such stories, I would surely buy it, because engineers love reading about the engineering failures of other engineers.)

  49. Milton,

    Open wide. There is no “true mean”. It doesn’t exist. Thinking it does is what causes people to err and believe probability is causal. That can lead to a cessation in searches for actual causes.

    Want a horror story? Just saw on the news yet another “study” showing Volkswagen “caused” so many excess “deaths”. See this: https://www.wmbriggs.com/post/17025/ Big dollars on the line here.

    Now like I said, many engineers are not as pin-headed and do seek causes, and when they commit the DSR it is of less harm because they have designed their systems so that most causes are controlled. But life is populated with more than engineers. There are also politicians, college professors, activists, and others who do not understand basics like control and who cause real harm with the DSR.

    And even engineers would benefit from a true understanding of probability, that it is meant for quantifying (when possible) uncertainty. That being so, it is good to quantify it in the right way, and that leads to how to tell what is good and what bad. Etc.

  50. Rob,

    I still don’t see your point in (1). Looks like you agree with me.

    For (2), if people disagree with me, they are wrong. Simple as that. And I believe you will find many physicists do not disagree with me; which is to say, they agree. I have references in my book, but they are easy to find.

    For (3), ah, your old favorite. Step 1, re-analyze all the studies presented using classical methods. If you’re in a hurry and can’t do it, multiply the uncertainty by roughly 4 to 8; i.e. widen confidence intervals and risk assessments by this much. The only way to discover the exact amount is to redo the studies with the original data, and wait for verification the models are good. If you can’t wait, you have to widen the uncertainty by an unquantifiable amount to account for the uncertainty in the model specification.
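 
    In numbers (a tiny Python sketch; the reported interval is invented):

        # A reported 95% confidence interval, widened by the rough factor of
        # 4 to 8 above to account for unverified model assumptions.
        lo, hi = 1.2, 1.8
        mid, half = (lo + hi) / 2, (hi - lo) / 2
        for k in (4, 8):
            print(mid - k * half, mid + k * half)   # (0.3, 2.7) and (-0.9, 3.9)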

    Not all probability is quantifiable, and neither are most dollar amounts. People who do these studies quote amounts down to the penny. They are out of their minds. It’s complete nonsense.

    Then gather physicists, chemists, and biologists and have them describe the causal mechanisms they claim. There will be folks on both sides.

    Then make your decisions knowing this golden rule: not all uncertainty is quantifiable. Most isn’t. Tough cookies if you wanted a different answer. Life isn’t easy nor fair.

    Making up quantifications merely for the sake of having numbers leads to over-certainty, hence bad decisions, hence real harm.

    The success of science in some areas leads to the false expectation that it can be successful in all areas, if only enough elbow grease or money is applied. It just isn’t so.

  51. Gosh, Milton, what a tragic story. How about another? Fred goes in to see oncologist Mark. Testing shows Fred has a very virulent strain of Cancer Die Die. Mark tells Fred that “on average” people live 6 weeks at this point of diagnosis. Fred goes home, kisses his wife, runs off to Vegas, spends all the savings on showgirls and booze (kind of like that Nicolas Cage movie). Then, at the end of six weeks, he waits to die. Then seven, then eight, then nine. He’s living in the street now, of course, since he blew through all the money. He can’t go home, can’t get a job, lost his wife, his kids. So he steps in front of a tour bus. Not a good outcome to the use of means there, Milton. (The doctor broke the unwritten code of doctors NOT to make estimates or tell people mean outcomes.)

    Briggs’ example of Volkswagen is good. You never hear about a “dearth of deaths” and a call to kill a certain number of people because not enough are dying to match the statistics.

    When we are looking at something like quantum physics, the probabilities there are much more accurate, though they do not address cause. However, the standard for “certainty” in physics is much, much, much higher than in any other science. You don’t hear string theorists saying “It could be that such and such particle will be in this area” or “Maybe that particle trail is the Higgs Boson. Let’s put out a news announcement.” Even in quantum physics, what is actually being described is that being attributed to a “particle” that we only have circumstantial evidence for. The use of the term “particle” is merely a convenient way to describe a set of circumstances that produces result “x” each time. It could be a “particle”, a “string”, a “box” or whatever we want to call the circumstance. The physical reality of the particle is not known but does not have to be–it may be verified later, maybe never. Calculations work and produce results that are replicable. On the other hand, the actions of CO2 in the atmosphere are not understood, and the actions prescribed based on these “averages” can be horrifically damaging. It matters how one uses “average”.

    Also, your using means because it’s useful is not the same as the belief that data actually has means. We do things that may be useful but completely wrong if looked at rationally and mathematically. As someone (I think it was Will) pointed out, we used average size for people when making clothes and building homes. In those cases, people outside the range cannot buy “off the rack” and have to pay more for their clothes, but we don’t have any other practical means of doing the manufacturing. Average size means nothing to an NBA player. As the population grows fatter and taller, so do the clothing sizes. Then the gymnasts are paying a premium for small sizes. We all know the mean is not a real part of the data–it’s just a value we use. Like the average couple has 2.5 kids. No one thinks there’s half a kid out there.

  52. #1: No, I disagree completely. My point is that what people mean is what I said. Your harping on “data do not have means” is, as the saying goes, a distinction without a difference. I get that your point is that the data are the data and nothing more; I even agree. But, when people say “this data has a mean of x” they mean that they calculated the mean and that calculation yielded the number x. That’s what you mean also. I think.

    #2: OK, the physicists whom I’ve read mostly don’t think that there are hidden variables, which would logically be necessary were the decays to have a cause. That’s so even if God decides on an atom by atom basis. I wouldn’t be surprised if others do think so. But their models most certainly have skill. Or do they? Does a model “possess” anything any more than data does?

    #3: I will absolutely acknowledge that you’ve answered the question thoroughly and I won’t ask it again. And I agree that epidemiological analyses saying that Volkswagen “caused” some number of deaths by sales of cars believed by people to have one set of emissions characteristics and therefore being worthy of purchase when in fact those cars had a completely different set of emissions characteristics are foolish. But the emissions resulting from those purchases damaged some number, greater than zero, of people’s health or they did not. That statement MUST be true with probability 1, no? How will a statistician inform the discussion around that simple “did or did not” question? We certainly can’t put a bunch of people in a room with Volkswagen diesel exhaust at varying levels and durations and see what happens. If a statistician cannot inform such a discussion beyond “there’s no way to know” then his or her value is the same as that of a philosopher. I don’t mean to insult philosophers, they may have value but I think that people generally expect something more tangible from a statistician. Perhaps those expectations need to be adjusted.

  53. Rob Ryan: Concerning (1), I can think of one example where “average” is interpreted as being part of the data. When wind companies are selling the idea of a wind plant, they talk about “average wind” in the area. There is a location in Wyoming with an “average” of 40 mph wind. When the wind salesmen say “average speed”, they are hoping that the audience hears “the wind blows 40 mph in this area” so wind energy will absolutely work here. No worries about backup, etc. I have never heard a wind salesman say “The wind averages 40 mph but we all know that means anywhere from zero to 80 mph, so the electrical output will vary dramatically”. They speak of the 40 mph as if it is a “real” number, not a calculation.

    (2) Models should possess skill because I believe that is the purpose of a model. It shows how something works or predicts how something will work in the future. Data is just data–no predictive value. It tells you what is here and now. It does not give you the cause of the phenomena. (I’m sure physicists often do not believe there are hidden variables, but Newton never imagined atoms either. Acknowledgement that there may be hidden variables is simply realistic–we certainly do not know everything. Plus, if we assume there are no hidden variables, we have no reason to keep trying to understand the phenomena at hand. We know all there is to know. I don’t know if there are any hidden variables and at the moment, we have little or no evidence that there may be, but I don’t think we can exclude them without limiting our ability to learn and discover.)

    (3) We can put people in a room with diesel if we are the EPA. Been there, done that. It’s not realistic to sue for “premature deaths” or “illnesses” from this.
    “But the emissions resulting from those purchases damaged some number, greater than zero, of peoples’ health or they did not.” The statement is true; however, we simply cannot know or even estimate the answer here. We do not possess sufficient knowledge of all variables involved. One cannot predict accurately without possessing knowledge of all the input parameters. In this case, there are too many unknowns to get even a rough estimate. Throwing darts at a board may be just as accurate. Human health, its interaction with the environment, etc. are just too complex for anything other than guesses.
    The expectations from statisticians and the use of statistics do need to be adjusted, definitely. The Volkswagen owners could sue Volkswagen for lying about the emissions on the grounds that the purchaser believed he was buying a car that helped curb air pollution when in fact he was not. That we do know and can prove. The cars were marketed with a false premise.

  54. JohnK,

    Perhaps the only subject that has mathematical foundations is mathematics. What is a mathematical foundation? A philosopher might think it’s about the nature of mathematics. A pure mathematician might think it’s about mathematical concepts, theorems, axioms, proofs and so on; he/she can prove theorems ignoring what a philosopher has to say about foundations of mathematics. I don’t mean to insult philosophers, they may have value.

    It is built instead on assumptions and logic. But the assumptions are false, and the logic not logical.

    Based on the way Briggs presents the example in his paper below, I can see why you’d think so.

    “We assume an ad hoc ordinary regression model. If we adopt the Bayesian philosophy, we need priors, and here an assumption of ‘flat’ priors will do.”

    Assume… adopt… assume again… will do. But really, this is not how statistical modelling starts. He obviously employs an inappropriate model and therefore all the conclusions that follow may be wrong. I have suggested a model for truncated data before. How do I come up with the model? Statistical foundation!

  55. Sheri: (3) “The Volkswagen owners could sue Volkswagen for lying about the emissions on the grounds that the purchaser believed he was buying a car that helped curb air pollution when in fact he was not.”

    I found this statement intriguing. One would assume that the Volkswagen owners would have to show some harm in order to sue for damages. But what harm did they suffer? They got to drive what they perceived to be a high-performance low-emission vehicle that outperformed the competition. True, their psyche no doubt suffered great harm when the illusion was shattered, but whose fault is that? Why, the EPA’s!

    But back to the reality. No, wait, forget reality, here’s a contrived example, for (2).

    I own a casino with 1000 slot machines. This is legal, but licensed and highly regulated. I can buy my slot machines from whomever I please, but each slot machine must have a sealed “Chance Engine” installed by the State that determines payouts. Some of us casino owners grumble about the Chance Engines – we are convinced that they are rigged in a subtle and sophisticated manner that we don’t understand. We just aren’t making the income we think we should be making based on a paper calculation, and some of us are barely staying afloat.

    My slot machines are all networked to my central computer which collects all the operational data available (basically everything except the internal workings of the chance engine, which is off limits). So I set up a monitoring program to track what I call “average payback” of each machine using a weighted time average. Sure enough, there is an identifiable ‘signature’ associated with the payout graph that often precedes the machine drifting into a >100% payback mode, and I can respond by taking that machine out of service for a short period of time.
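 
    (One way to compute such a weighted time average is exponentially, so that recent plays count more; a minimal Python sketch, with the smoothing factor and payback history invented:)

        def weighted_payback(paybacks, alpha=0.1):
            """Exponentially weighted time average of per-play payback ratios."""
            avg, track = paybacks[0], []
            for x in paybacks:
                avg = alpha * x + (1 - alpha) * avg
                track.append(avg)
            return track

        # Ratios above 1.0 mean a machine paid out more than it took in.
        history = [0.92, 0.95, 0.90, 1.10, 1.30, 1.25]
        print(weighted_payback(history))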

    I have a working theory about what is going wrong in the Chance Engines, but no practical way of testing the theory, much less remedying it. But I have hired some folks to watch the graphs all day long, and profits have improved to a reasonable level, so this is my new status quo (albeit an uneasy one).

    Now, you may dismiss this example as overly-contrived, but I picked the details to represent the obstacles I have faced in real-world situations. (I tried to come up with an example involving rolling dice, but I couldn’t quite get there.)

    Sheri, you say “the data is just data, with no predictive value”, but a typical engineer’s career is chock full of examples of situations like this, where decisions are based on data and not a true understanding of the underlying principles. Now don’t get me wrong, engineers love to dive into the details, and learn new stuff, and create and test predictive models, but the vast majority of problems we face don’t justify that level of thoroughness, and management demands visible progress. There’s even an expression for this: “At some point, you have to shoot the engineers and ship the product.”

    BTW, I view the Vox article on Volkswagen killing people as a non-sequitur to this whole discussion. The proper response to Vox would be to show equally plausible (i.e., equally goofy) reasoning on how Volkswagen’s actions saved lives (e.g., underpowered vehicles cost X lives per year in accidents that could have been avoided with a properly powered vehicle). Liberals are happy to sacrifice lives to save the planet, so the Vox article came across as a tad disingenuous. I wonder what the lib response would have been had Volkswagen secretly increased emissions to improve gas mileage rather than performance?

  56. Milton: From a Bloomberg business article–
    “John Decker bought his 2013 Volkswagen Jetta diesel thinking he was doing his part to improve the environment and reduce his carbon footprint.
    Now that the German automaker has admitted its claims about the model’s performance were false, he just wants the company to buy it back from him.
    “I feel completely deceived by Volkswagen,” Decker, of Sacramento, California, said in an interview. “I’m extremely upset about it. I feel defrauded.””

    The article notes that the diesel version cost as much as $7000 more than the gas model, which the owners paid thinking they were helping the environment. To some people, saving the environment borders on a religion. Driving a car that pollutes more than promised would be sacrilege to them. They feel stuck with a polluting car that cost $7000 more and delivered nothing close to the promised benefits.

    “So I set up a monitoring program to track what I call “average payback” of each machine using a weighted time average.” At this point aren’t you using a “model” rather than just looking at the data? Wouldn’t you just need to be graphing the data and looking at the graph if you were only using the data? I guess to me when you start using a weighted time average, you are looking at more than the data. Perhaps not? (The example makes sense to me that you can’t access what may be the actual cause, so you work around it.)

    I am not saying you can never use a weighted average, etc.–I gave the example of clothing and housing sizes. The concern is when the average (such as the wind velocity in my example) becomes the “reality” and the data is just ignored. The same is true in global warming–individual temperatures do not matter. Only deviation from the global average matters. Yet there is no real meaning to a global average temperature. It doesn’t reflect the reality of temperature ranges on the planet.

    I’m pretty sure libs would not like the underpowered versus properly powered vehicles argument, but I could go for it. That’s an interesting question–would the libs be angry if the emission tests were faked to increase gas mileage? Hmmmm.
