What Is Cause? — Guest Post by DAV

Regular readers will recognize DAV as a regular contributor whose humorous voice of reason has kept many conversations on target. Today he presents ideas to spur a discussion of causality, a topic which, strangely enough, is often neglected in statistics.

I’ve been doing some thinking lately about the meaning of the word “cause.” I don’t mean in the sense of an association of individuals enjoined in a common endeavor. I rather mean “cause” as in “cause and effect.” Everybody has their own ideas of what a “cause” is. I met someone a while ago who insisted on varying flavors of cause such as Associate, Secondary or Joint, none of which should properly be labeled Cause. It struck me as overly precise and much akin to saying hamburger is not meat—steak is.

A demonstration of the well known Nugent effect (source)

One thing is clear though. Whatever a “cause” is it must precede the effect. Or does it? If there’s room we’ll come back to that.

When we say “X causes Y” what do we really mean? We might say a trigger is the cause of a gun firing but the trigger actually sets off a chain of events leading to the gun firing. It’s not THE cause in the sense that it is the ultimate event prior to the discharge. So why do we say the trigger is the cause and ignore the firing pin, primer and propellant?

Part of the reason may be that the trigger directly interfaces with the operator. Saying the trigger is the cause carries a number of tacit assumptions: the gun is working properly; it was loaded; the ammunition was built correctly; and more. Oddly, the safety, or more precisely its setting, is also part of the event chain. That implies inhibitors are also causes if “cause” is everything in the event chain.

A major notion is time sequence. X comes before Y therefore Y cannot be the cause of X. Establishing time order can be tricky in cyclical scenarios. There’s the notion of volition. Guns don’t (normally) fire by themselves. A possible reason why the trigger is labeled “the cause.” There’s also the notion of separation meaning the closeness (in time or number of events) to the effect but apparently the notion of volition trumps that. It does for guns, anyway.

I think that people, whether they realize it or not, actually mean cause is something that can be used to predict an effect. In other words, they rely on the dependence between the two. But not the dependence alone. It’s also the independence. Causes are (normally) independent of their effects so the effect (normally) can’t be used to predict the cause. But there are times when determining precedence becomes a chicken and egg problem. For example a drug user takes a drug for its effect. The resulting effect could lead to more drug use. Sometimes, merely the anticipation of a drug’s effect leads to its use. In that sense, the effect is one of its own causes. Sometimes, time information may not exist.

So, in the long run, cause seems to be determined by correlation or its absence. The common adage about correlation and causality really only applies to two variable situations. An experiment is just an active way to introduce more variables (hopefully in a controlled manner) into the mix.

There’s also the idea of level of predictability. In the comments of my previous post, there was a brief discussion on whether a slap could result in a fatal leap to a conclusion. Well, it’s a cause in the sense that it is part of the chain leading to the leap. But our sensibilities and experience tell us the slap is an outlier in this case.

A lot of this comes from my use of Bayes Nets. A Bayes net is a graphical model of the joint probability of all of the variables. As it turns out, any given node within the network is insulated from the others by the surrounding variables. This is called a Markov Blanket. A node can be completely described by this blanket which includes its parents, its children and the parents of its children. In a Bayes Net, the parents are causes and the children are their effects. Interestingly, the idea of the Markov Blanket says that events can be predicted by their causes.
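The blanket idea is easy to see in code. Below is a minimal sketch (my own illustration with made-up node names, not taken from any linked material) that computes a node’s Markov Blanket from a DAG stored as parent lists:

```python
# Toy DAG as {node: [parents]}; edges point from cause to effect.
# Example: Rain and Sprinkler both cause WetGrass; WetGrass causes SlipperyPath.
parents = {
    "Rain": [],
    "Sprinkler": [],
    "WetGrass": ["Rain", "Sprinkler"],
    "SlipperyPath": ["WetGrass"],
}

def markov_blanket(node):
    """Return the node's parents, children, and the children's other parents."""
    children = [n for n, ps in parents.items() if node in ps]
    blanket = set(parents[node]) | set(children)
    for child in children:          # add each child's co-parents
        blanket |= set(parents[child])
    blanket.discard(node)           # the node is not in its own blanket
    return blanket

print(sorted(markov_blanket("Rain")))  # ['Sprinkler', 'WetGrass']
```

Note that Sprinkler lands in Rain’s blanket even though neither causes the other: they share the child WetGrass, which is why “parents of its children” is part of the definition.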

I’ve run out of room. I really don’t have a point here. Just some ramblings to think about.


  1. Will

    Well done DAV. Great read, and thought provoking too.

    When you said that cause is something that can be used to predict effect, I think you hit the nail on the head. The issue of course is that there are many aspects leading up to an event that contribute to the outcome.

    My own experience with data models has shown me that, very often, there is no one cause. Things happen because of a combination of things. Each thing, on its own, is trivial and contributes very little to the end result. Of course, going back to Briggs posts on models, any explanation is just a model, and so even the interpretation of events is just one possibility of many.

  2. In [modern] common usage, ’cause’ is related to physical laws (taken in the widest possible sense, to include laws which are yet unknown). There is an implicit worldview there that must be analyzed — it says that physical laws are deterministic, or classical (e.g. Laplace’s dictum — “if one knew the position and speed of every atom in the universe at a particular instant, one might forecast the fate of the entire universe”). Quantum mechanics is sometimes considered to be a (limited) antidote for that worldview (whether it is that, or not, would be another discussion), but the view of the proverbial man in the street is still very much ‘classical’. However, physical laws of the classical type are abstractions.

    In other words, the problem of cause-and-effect is another instance of the problem of concretes-and-abstracts, which follows us since the pre-socratics (think about Zeno’s paradoxes). The ‘solution’ is a long reflection about abstraction, what it is, what it isn’t, and what is its relationship with our experiences (the-world-as-we-see-it).

  3. Rich

    It seems to me that causes don’t begin as predictions but as explanations. Once we feel they’ve been confirmed by experience we start using them as predictions. But explanations and causes are just part of a dialogue about the world. There is no certain way to determine if they are in the world or not. This is because whatever happens is what happens and not something else. Could it have been something else? No.

    You will note that I have described theories of cause and effect as models built to explain data which are then used to predict future data. Perhaps that’s why our explanations tend to be more certain than they should be.

  4. DAV


    Yes. Everything seems an interconnected web. The idea behind the Markov Blanket allows us to at least limit how many variables we need to consider (once it’s known of course). Another thing I rather glossed over is how hard it is to determine dependence and independence. As you’ve noted, everything seems to be correlated to everything else to some degree. The practical solution is to draw an arbitrary line but that leaves open the possibility of a false model.


    Those “physical laws” are themselves models using named causes as the predictor variables.


    We say that X explains Y when X can be used to determine Y. Determining Y from X is a prediction based on X. If you already know Y, then X is irrelevant (unless you are searching for a suitable model that can predict Y). How would you confirm X is a “cause” of Y without using it to predict Y? You have to start predicting from the very beginning of the search.

  5. Carmen D'oxide

    Nice and clear, DAV. I think we like to identify causes because it’s handy to have somebody to blame.

  6. Rich


    “unless you are searching for a suitable model that can predict Y” – well, yes, that was rather my point. I would have thought the whole process would begin with, “Why did Y happen?” Followed by, “I think it was caused by X”. Then, “Let’s try X. Wow, Y again!” I can’t see how you’d begin with “X causes Y”.

  7. Matt

    To say that X causes Y it is necessary that Y always follows X. If X can occur without Y then you are missing a piece of the puzzle and X by itself does not cause Y. Perhaps X and Z cause Y.

  8. Big Mike

    It seems to me that what we select to designate as the putative cause of a particular event is largely determined by usefulness in the given context.

    Surely the combustion of the propellant caused the bullet to move forward. If I’m interested in ways of making things move forward, then studying combustion as a cause is useful. But it was the effect of the primer that initiated the combustion of the propellant. If I’m interested in how propellants are ignited, then we look at the primer as the cause. This can be carried back through the chain of causation, and can be examined at many levels (e.g. the molecular nature of the instability of the propellant that caused it to burn in the first place).

    If I’m interested in engineering a new kind of bullet that will leave the gun with a higher muzzle velocity, it may not be particularly productive to spend too much time considering the motives of the person pulling the trigger, which act was the cause of the trigger being pulled, and so forth down the chain of causation to the actual combustion.

    In this sense, then, a useful definition of cause is that of an event that is sufficient to predict another event with (almost) certainty, and constrained to our area of interest.

    While it’s very interesting to ponder the question of whether the universal chain of antecedent[1] causation can be thought of as having an initiation, and if so, what it could be, that’s a matter of religion, as no experiment is possible.

    [1] I’m allowing for, while not necessarily endorsing, the possibility that the term “antecedent” may, in some models of physical reality, encompass causes that appear to happen *after* their effects.

  9. Pete

    I followed the Bayes Nets link, for which many thanks. However, I couldn’t get past table 1. Can someone explain this?

    P(X3=absent | X1=no)=0.99995 P(X3=absent | X1=no)=0.00005
    P(X3=absent | X1=yes)=0.997 P(X3=absent | X1=yes)=0.003

    I’m thinking this is a typo? Should it read

    P(X3=absent | X1=no)=0.99995 P(X3=present | X1=no)=0.00005
    P(X3=absent | X1=yes)=0.997 P(X3=present | X1=yes)=0.003


  10. DAV, I think you have quite effectively disposed of the concept of “the” cause of something, but I am not clear on whether you mean to imply that there is no coherence to the correlation/cause distinction (when you say “So, in the long run, cause seems to be determined by correlation or its absence. The common adage about correlation and causality really only applies to two variable situations. An experiment is just an active way to introduce more variables (hopefully in a controlled manner) into the mix.”).

    Do I go wrong in thinking that it makes sense to say “A causes B in context C” only when C admits in principle the preparation of states both with and without A and states prepared with property A are expected to subsequently exhibit property B to a consistently higher extent than those prepared without A?

  11. JH


    I have always enjoyed reading your comments. Thanks.

    So, in the long run, cause seems to be determined by correlation or its absence.

    In the context of statistics, let {(x,y)} = { (0,-1), (1,2), (2,3), (3,2), (4,-1) } be the observed values of the variables X (a predictor variable) and Y (a response variable). The resulting (sample) Pearson correlation coefficient is 0. However, there is a perfect, deterministic quadratic association between x and y: y = 3 - (x-2)^2.

    Based on Big Mike’s definition, “X causes Y” because of the deterministic (stronger than “almost certain”) association.

    What is your definition of correlation? If it’s to be measured or estimated by the Pearson correlation coefficient, then one would conclude there is no correlation between the two variables in my example. However, based on the observed data set, there is an association between them, and X can be used to predict Y.
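JH’s numbers check out. A short script (my own, just to verify the arithmetic) confirms the sample Pearson coefficient is exactly zero even though the quadratic fits every point:

```python
# JH's data set: a perfect quadratic relation with zero linear correlation.
pairs = [(0, -1), (1, 2), (2, 3), (3, 2), (4, -1)]

def pearson(data):
    """Sample Pearson correlation coefficient, computed from scratch."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    cov = sum((x - mx) * (y - my) for x, y in data)
    sx = sum((x - mx) ** 2 for x, _ in data) ** 0.5
    sy = sum((y - my) ** 2 for y, _ in data) ** 0.5
    return cov / (sx * sy)

print(pearson(pairs))                                 # 0.0
print(all(y == 3 - (x - 2) ** 2 for x, y in pairs))   # True
```

The covariance terms cancel exactly because the parabola is symmetric about x = 2, which is the mean of x; Pearson’s r only measures linear dependence.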

  12. DAV


    Not really. What if X is intermittent in ability all by itself? For example suppose Y is a dart hitting its target and X is a lousy dart thrower. We can still say X causes Y if only occasionally. Pulling the trigger of an empty gun won’t result in a bullet coming out of the barrel yet we can still say the trigger causes the gun to fire.


    I think you’re right. We have Matt to thank for that link. It’s not a bad place to start.
    If you’re interested in more, two books you might find useful:

    Causality: Models, Reasoning, and Inference, Judea Pearl, Cambridge University Press, 2000
    Learning Bayesian Networks, Richard E. Neapolitan, Prentice Hall Series in Artificial Intelligence

    Pearl’s book is a must on Causality.
    The second is devoted to building DAGs with causality discussion minimized. Much better than Pearl if you’re only interested in construction but Pearl offers better insight into the mechanics.

    Alan Cooper,

    Got me. Not sure what you mean by preparation of states. When I say “absence of correlation” I mean the opposite of correlation: statistical independence. An example in a three variable situation: C is the cause of A and B if: (1) A,B,C are mutually correlated and (2) A and B are independent when given C. A&B appear correlated if C is not considered. C could also be an unobserved variable but it would be tough to prove it’s a cause. I use the C==>A&B example because I think it’s one of the easiest to see. There are other rules. The minimum number of variables necessary is three but is often greater. An experiment offers a way to introduce the number of variables necessary to determine causality. Sometimes experiments are unfeasible. When so, variables must be uncovered by serendipity.
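    The C==>A&B rule can be seen in a quick simulation (a sketch with made-up probabilities; the numbers and thresholds are illustrative only, not anything from the thread):

```python
# Common-cause simulation: C causes both A and B. A and B are correlated
# marginally, but become (nearly) independent once C is held fixed.
import random

random.seed(1)
N = 100_000
rows = []
for _ in range(N):
    c = random.random() < 0.5                     # the common cause
    a = random.random() < (0.9 if c else 0.1)     # A depends only on C
    b = random.random() < (0.9 if c else 0.1)     # B depends only on C
    rows.append((c, a, b))

def corr(pairs):
    """Sample Pearson correlation (booleans act as 0/1)."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sum((x - mx) ** 2 for x, _ in pairs) ** 0.5
    sy = sum((y - my) ** 2 for y, _ in pairs) ** 0.5
    return cov / (sx * sy)

marginal = corr([(a, b) for _, a, b in rows])     # ignoring C: strong (~0.64)
given_c = corr([(a, b) for c, a, b in rows if c])  # conditioning on C: ~0
print(round(marginal, 2), round(given_c, 2))
```

This is exactly condition (1) plus condition (2): A, B, C are mutually correlated, yet A and B are independent given C, which points the arrows from C to A and from C to B.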

    Big Mike,

    Indeed. I meant to include attention focus as one of the reasons for calling X a cause while ignoring others.

  13. Will

    Alan: even in a two variable situation each variable is, and must be, a combination of other variables. All things in the physical world have a position, and a time of measurement that is relative to an observer.

  14. DAV


    I don’t have one. At least not a good one. Nearly every method of determining correlation seems to work only in ideal cases and returns some in-between value in the non-ideal. It’s a bit like what I was referring to in one of my comments about drawing an arbitrary line. My personal preference is to use mutual information content but I have no rational reason for the preference. In practice, I try different methods and thresholds. Whichever yields the most faithful DAG wins, and that is a can of worms all by itself.

  15. DAV

    Carmen D’oxide , amen.

  16. DAV,

    “Those ‘physical laws’ are themselves models using named causes as the predictor variables.”

    You may call them ‘laws’ or ‘models’, but the salient point is that they are abstractions, i.e., they cannot account for the entirety of concrete reality. We need more than abstractions for that.

    Another way to put it is to say that no account of reality can be exhaustive, since all accounts are based on abstractions. Therefore, there are two options: either causal chains will address reality, and then they will always have loopholes, or they will refer to abstract systems (what is being discussed as “n-variable situations”), and it is only in this case that they can be “complete”.

    Note that this is not a disagreement with anything you are saying. It is rather a comment about how no ‘model’, whether deterministic, statistical, or something else, can ever achieve ‘completeness’ when referring to reality.

  17. Outlier

    “smoking causes cancer”. Well, sometimes it doesn’t, and we accept that other things can cause cancer too. That A causes B isn’t very interesting; for example, the trigger to gun firing chain is trivial. The interesting part is given effect B, what is cause A? A person with a rare cancer is apt to ask why me? How did I offend God? What chemical was I exposed to many years ago? The search for causes is the search to confirm determinism. I deny that there is a reason for everything. Isn’t stats fun?

    Since God’s Design was mentioned in the illustration, who is to say that time is the same to God as it is to us. What if he is outside that dimension and can see all of history and future simultaneously. I won’t argue this, but it is a prettier concept than some other gods I’ve seen.

  18. Milton Hathaway

    Many years ago I was driving with my daughter and pulled up to a stoplight. Immediately in front of us was a car belching black smoke. My daughter, clearly annoyed at the situation, asked me why that car smoked like that. As I was answering her with an explanation that included references to the workings of an internal combustion engine, my thoughts went back to a “root cause analysis” class I had recently taken at work. The goal of that class was to impress upon us two concepts. 1) Most people ‘jump to cause’ too quickly. A true cause is one that, if altered, will change the effect in question. 2) Most people stop asking “why” too soon.

    I decided to try the techniques from the class on the smoking car problem immediately in front of us. But my daughter was way ahead of me, unsatisfied with my answer about what was likely wrong with the engine, asking “well, why doesn’t he just get the engine fixed?” We hypothesized and evaluated, considering things like a shortage of money, a lack of time, unavailability of alternate transportation, etc, but none seemed very satisfying. Ultimately, long after the light had turned green, we could come up with only one root cause that satisfied both of us: the reason that the car belched black smoke was that the tailpipe stuck out the back rather than the front.

  19. Does the Fall Equinox cause the World Series?

    Prediction (like autocorrelation) is not the same as causation.

    DAV, I see you resort to “sensibilities and experience” as means to determine causation. Yes, that is correct, but also nebulous. Dig a little deeper, please.

  20. DAV

    Uncle Mike,

    I was heading for likelihood of causation but ran out of room. Our experience tells us that the likelihood of a slap leading to a suicide is rather low so the tendency is to discount it as a cause.

    If given only that the World Series and Fall occur at around the same time of year, there would be insufficient evidence that one causes the other.

  21. …and to think I knew of DAV before he became famous. Well done, sir.

    I’m still working on what causes the Catahoula to bark at the nice mail lady and wag his tail at the JW’s?

  22. DAV,

    The Fall Equinox is in September, the World Series is in October. One always precedes the other. One could make the case that A causes B since everything is interconnected in our deterministic universe…

    But one would be wrong to do so. As you note, our sensibilities would be offended. It is not sensible that the Equinox causes the World Series. It violates our sense of causality.

    We know causality very well. If you strike your thumb with a hammer, pain will result. Our experiences confirm this to a degree that only a fool would test that particular cause-effect relationship.

    More than any other animal, humans are adept at predicting certain causalities. Indeed, successful prediction of the future is the key to our success as a species. It is that success which prods us to see causalities everywhere, from witches to CO2, even when there is no smoking gun.

    We leap to causality conclusions, it is our nature to do so. But we are often wrong, even the best of us, even in overwhelming consensus, even Bayesians and other theologians. There is no handy-dandy algorithm that solves the causality puzzle — other than brute empiricism, which also often fails in difficult cases.

  23. DAV

    Uncle Mike,

    So true, but if all I know is that they happen one after the other rather frequently, and without any other knowledge, it wouldn’t be wrong to assume the equinox causes the series. Doing so will give a reliable prediction that the series is nigh and that’s what counts. It also causes leaves to fall off trees near where I live. If someone were to ask, “Why are the trees losing their leaves?” and I answer “Because it’s Fall” would I be wrong? It’s not necessary to know any more when it comes to predicting pending leaf loss (where I live). If a cause is that which is useful for prediction then the Equinox is indeed the predictor of the Series and of the column of data labeled Leaf_Loss.

    Oddly, the reverse is also true. If I were reading a story that takes place around the time of the Series would I be wrong in concluding it was in the Fall? Does it matter if there really is a causal relation between the two (whatever “really” means)?

    What gets my goat is when the association is barely there and X is at best a remote contributor to Y, yet X is still used for power plays via the Fear Factor.

  24. Matt


    I disagree. Your counter example states x and y to different levels of precision. For a proper analysis of cause and effect, cause and effect must be stated to the same degree of precision.

    Regardless of the type of projectile or the nature of the weapon (firearm, bow, thrown). The primary difference between a good marksman and a poor marksman is consistency.

    Until you get down to accounting for factors like air flow, ballistics is reasonably deterministic.

    The apparent intermittency in your counter example comes down to how you define the target (wall, anywhere on a standard dart target, bullseye) and how you define X.

    If you define X to include exactly how each individual dart was thrown, much of the intermittency will vanish.

    I also disagree with your gun example. A live round in the chamber, where a blank (powder charge but no projectile) counts as live, is a necessary precondition to the gun firing.

    Pulling the trigger causes a gun to fire if and only if there is a live round in the chamber.

    X = Pulled trigger
    Y = Gun fires
    Z = Live round in chamber
    X does not cause Y
    X + Z causes Y
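Matt’s three-variable claim can be checked by simple enumeration (a toy sketch of my own; the function name is made up):

```python
def fires(trigger_pulled, live_round):
    """Y happens only when both preconditions X and Z hold."""
    return trigger_pulled and live_round

# Enumerate all four combinations of X and Z.
for x in (False, True):
    for z in (False, True):
        print(f"X={x!s:5} Z={z!s:5} -> Y={fires(x, z)}")
```

The table makes the point directly: Y never follows X alone, so on Matt’s definition X by itself is not the cause; only the conjunction X and Z is.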

  25. Matt


    I am going to try to re-state my disagreement with you.

    You are oversimplifying cause and effect. You are assuming that the cause must be one single discrete event.

    The real world is not that simple.

    To dissect your gun example:

    A: Finger Pulls Trigger
    B: Trigger releases hammer
    C: Hammer strikes firing pin
    D: firing pin strikes primer
    E: primer ignites
    F: spark from primer ignites main powder charge.
    G: expanding gases from burning powder propel the bullet down the barrel.
    Y: Gun Fires.

    You then ask which of A through G causes Y.

    Your question oversimplifies the issue. Y is not caused by any one of A through G, it is caused by the complete sequence A through G occurring in that order. Remove any one of them, for example jam the firing pin so it cannot strike the primer, and Y does not happen.

    Furthermore, there could be other Ys for which several discrete events (A, B, C, …) must occur simultaneously in order for Y to happen.

    An apparent intermittent cause-effect relationship between any X and any Y can happen only because there are other contributing factors that must occur for Y to happen that you left out of your analysis.

  26. DAV


    The precision is the same for X and Y in both examples: one bit. DartThrown(=yes/no)/HitsTarget(=yes/no) and TriggerPulled(=yes/no)/GunFired(=yes/no).

    Your definition of cause may be too stringent. It seems you would effectively disallow a terrible dart thrower to be the cause of a dart landing on target. It doesn’t matter if the moon got in his** eyes or he just happens to be extremely inept. It is only fair to say he is the cause when the dart does land on target. Yes indeed there is something missing that accounts for inaccuracies but it is not particularly relevant to assigning cause. We could be interested though in improving the accuracy of dart throws or increasing the reliability of guns firing when pulling the trigger. If so, then searching out the additional variables is a must.

    BTW both of these are two variable problems. Assigning causality in these cases requires additional knowledge such as one follows the other in time; there is only one dart thrower; etc. In both of the example cases, a hidden assumption is that Y (when it happens) does follow an X=yes. There is nearly always the possibility of other causes of Y. When the evidence suggests them, though, is when Y occurs without X.

    The first requirement in assigning causality is correlation. Normally correlation alone is insufficient for causality assignment in the two variable case. The above examples however received the benefit of tacit “expert” knowledge. If Y doesn’t always follow X, there may be any number of reasons. If we know them they constitute extra variables. If not, well, then we must work with what we know and suffer any innate limitations. Knowing them doesn’t necessarily mean they are relevant for the purpose of assigning causality.

    Maybe I’m misunderstanding your point?

    ** I say ‘he’ because women are excellent dart throwers. If you happen to have a wife, tell her she is wrong, then see if the darts from her eyes don’t land on target.

  27. Matt


    The problem is your two binary variable model is almost never valid in the real world. It is a drastic oversimplification.

    In the real world many things require multiple distinct events occurring either simultaneously or in series as a cause. Interrupt only one of those contributing events and the effect doesn’t happen. The cause is not any one of the contributing events but the full set of them together.

    When Y doesn’t always follow X, the reasons why always constitute extra variables whether you know what they are or not. Not having them doesn’t necessarily mean that they can be ignored for the purpose of assigning causality.

  28. DAV

    Simplification, yes. Drastic, no.

    If you insist, there is always at least one other factor between every cause/effect pair. The reason is that P(Y|X) cannot equal 1 in the real world or at least can never be proven to. Just because Y has always followed X in the past is not proof that Y will occur for every X in the future.

    In one of your above posts, you had D=hammer hits primer followed by E=primer ignites but inexplicably left out D’=no manufacturing error and all of the other reasons the primer might fail to ignite. Why did you stop where you did? I submit you will never identify all of the links in the chain.



    You could keep expanding this indefinitely but it really isn’t required as the host of recursive SomethingElses is neatly incorporated within P(Y|X) so why bother? You are still stuck with setting the direction of the link between X and Y (and everything in between). In the above, causality flows from X to Y. That makes X a cause of Y.

    X==>Y is convenient (necessary even) shorthand. That something lies between the two should be understood.

  29. Matt


    No, your D’ does not happen between the trigger pull and the gun firing (or not firing), so it is not part of the event chain from that standpoint.

    If you are stuck with a probability model, you have not and can not prove causality from that model. Other evidence of causality is needed.

    I will concede that in the gun example your shorthand is appropriate. However, that is ONLY because the chain of events between the trigger pull and the gun firing is well understood.

    Stipulating that the gun is loaded properly and that gun and ammunition are all in working order prior to X, then Y will always follow X, no probability model needed.

    However, where that full chain is not understood, your shorthand is not appropriate.

  30. DAV

    Strictly speaking an inhibitor is likely better represented as a co-parent (co-cause). It depends on which representation is more faithful to the joint probability. And, yes, the inhibitor is always understood (or should be). So the statement “X==>Y” is really “X==>Y if nothing goes wrong”. After a while it gets rather tedious to keep appending the phrase. How does knowing the things that can go wrong help in determining the causal link between X and Y? Are you claiming X cannot cause Y if something can go wrong?

    You haven’t been paying attention to what Briggs has been saying. When it comes to real life, probability models are all that we have. Even the simple regression boils down to a probability model. P(Y|X)=1 can never be proven in the physical world. Best we can do is say it comes close and round up. If a probability model can’t establish causality then we are left without a way to ever establish it.

  31. Matt


    You are looking at the problem backwards. Unless X is the immediate proximate cause of Y, it’s not about knowing what can go wrong, but knowing everything that has to go right between X and Y. All possible failure modes do not need to be known or accounted for, but everything that has to go right for X to lead to Y does have to be accounted for.

    Probability models do not exist in the real world. According to Briggs, the uncertainty that is quantified by a probability model is a product of the limits of what is known and knowable, not a feature of the real world.

    In cases where the starting conditions are knowable for all relevant elements (the case of a gun), the probability model can be eliminated, not by means of regression or any other statistical technique, but by starting from known fixed and verified conditions.

    In assigning causality in the gun case, all that is needed is knowledge of the mechanics of the gun and the mechanics and chemistry of the ammunition. A probability model built from statistical techniques adds nothing to the knowledge of cause and effect in this case.

    In cases where less is known, again statistics and probability models do not directly lead to knowledge of causality. They do however point us in the right direction to look for the discrete event chains that comprise cause -> effect relationships.

  32. DAV


    No model exists in the real world. I never said probability was a feature of “reality”. It is a measure of our certainty. P(Y|X) always being less than 1 means we can never be 100% certain of any cause. It’s not a model. It is a statement about us and not nature. All of science is built around the notion of most probable explanation. We can never be certain that what we think we know won’t be falsified tomorrow. That is why the explanations are only probably true. We will never be in a position where we can claim knowledge of what “really is”. Instead we must resort to approximations that seem to work reliably and are probably true. To me, that makes it probabilistic. All of that stuff about “knowledge of the mechanics of the gun and mechanics and chemistry of the ammunition” is what we think we know. They are in fact models expressing the most probable explanations. They seem to work, which is fine, but we can’t be 100% certain they are “Truth”. And how do we know they work (so far)? Through statistics.

  33. Matt


    You are incorrect on uncertainty in the mechanics of a gun. A gun is a manufactured artifact. The explanations are not merely the most probable explanations, they are the only possible explanations. We know how and why a gun works not through statistics, but because they were designed and built by people who shared their knowledge with others.

    While there are many things that we can’t know with 100% certainty, I would dispute the contention that we can’t know anything with 100% certainty.

    I can know the contents of a book with 100% certainty, all I have to do is read it.

  34. DAV

    I was talking about cause and effect and never said we couldn’t know anything with certainty. If I did I misspoke.

    How would you ever know if you ever had the only possible explanation? Even if you can only think of one that doesn’t mean there isn’t one you haven’t imagined which may even be a better explanation. If there is more than one, why would you want to use the one that is not the most probable? What we think we know about how things work is nothing more than the most probable of the explanations we have conceived.
