Probability does not exist; therefore, nothing has a probability, so nothing can be caused by probability, though the uncertainty of statements can be had conditional on assumptions, and this probability changes when the assumptions change. Probability is not a matter of matter, but of mind alone.

Though there is much to flesh out (as I have done), that is the entire philosophy of probability. Anything that violates these tenets, or those that can be deduced from them, isn’t probability, but something else.

I believe William Dembski’s “design inference” theory largely falls into this latter category. His technique is partly a reinvention of p-values, partly an unrecognized use of causal knowledge, and partly the sneaking in of unquantified probability by calling it something else. There is, incidentally, no problem with unquantified, and unquantifiable, probabilities. Indeed, they are the most common kind, which we use constantly to express uncertainty in vague situations. (I’m not going to prove this here.)

Jargon and notation are barriers to clear thought; but for a very good reason, I’m going to start with these devices. I will develop a notion of probability using no examples, and I ask you not to try to think of examples, either. This command is especially for those who have learned probability in any formal way.

Suppose all you know about some event (a statement) E is H. H is a proposition, i.e. our premises or assumptions. I cannot emphasize too strongly that “all.” It means what it says: all. Then we can calculate the probability of E with respect to H as Pr(E|H) = p. Let’s say p is small. Put any number you like as “small”, as long as it’s non-zero. If it were 0, then E is impossible with respect to H. That italicized word is to be taken as strictly as the first.

It turns out E happens: we measure it; it occurred. That is, Pr(E|Observation) = 1. E is true with respect to our measurement.

The probability of E with respect to H is tiny. Yet E happened. Because p is wee, yet E happened, does it mean H is false, or likely false? Remember, H is all we know about E. We believe H to be true. We can write, if you like, Pr(H|Introspection) = 1. Therefore all we can say is that something with a small chance of happening, happened. I.e. Pr(E|Observation & H & Introspection) = 1.

We can also do this trick: Pr(H|E) = 1. E happened, or is assumed true in this equation. But H says E can happen. And H is all we know about E. There is no observation that can “weaken” H, for, as I might have mentioned, we believe H is true. It is all we know about E.

Technically, we should write Pr(H|E & Introspection) = 1. Great troubles often happen when all we know or assume is not written in the equations. This is because equations too often take on a life of their own: they are reified. This happens wherever equations are used. They become realer than Reality, and then comes the Deadly Sin of Reification. It is the sin all users of p-values commit.

Nothing above changes, though a mistake enters, if we assume H is “chance” in all these equations. The mistake is to believe “chance” causes, or fails to cause, E. Chance is a synonym of probability: since probability does not exist, neither does chance. Something that does not exist cannot cause anything to happen, nor can it fail to cause something to happen.

Nothing happens “by” chance, which is perfectly equivalent to saying nothing is caused by chance. Nothing is “due” to chance, another synonym for cause. Once again, the italicized word is as strict as can be. No thing.

When things happen, unless we’ve been trained in the Way of the Wee P, we seek for causes, not probabilities. Probabilities are only really useful before things happen or are revealed. Probabilities are used to express uncertainty in events (i.e. statements or propositions). Once we see by observation that an event is true (it happened), we ask “How?” And this is the right question. If we knew how, then if the event is not unique, we can refine predictions of new similar events, or control future ones; and if the event is unique, we can ascribe credit (or blame). Seeking for cause is the goal, not just of science, but in all aspects of life.

We are finally ready to tackle Dembski’s “design inference” technique. Quotes are drawn from his book of the same name (paperback edition, 2005) with the telling subtitle “Eliminating Chance Through Small Probabilities”. Here is his “Explanatory Filter” argument (p. 48):

Premise 1: E has occurred.
Premise 2: E is specified.
Premise 3: If E is due to chance, then E has small probability.
Premise 4: Specified events of small probability do not occur by chance.
Premise 5: E is not due to a regularity.
Premise 6: E is due to either a regularity, chance, or design.
Conclusion: E is due to design.

He refines this in later chapters, adding all sorts of jargon, notation and technicalities, but these refinements do not alter in any substantial way the basic argument or my criticisms, as we’ll see.

Let’s walk through the premises. The first is obvious. By “specified” he means he has assumed some evidence, without necessarily writing that evidence down, that other things besides E, but somehow similar to it, could have happened. The example he has in mind is the opening of a combination-lock safe at a particular combination, the “off” combinations failing to open it.

Premise 3 gives “chance” causal powers, so it is false. But it might be rescued if we give these powers to some Agent. The lock needs to be turned by somebody (or thing). One possible, but not sole, premise we might entertain is that H = “This lock has n combinations, only one of which opens the safe.” The probability of E given H and the one right combination picked by the Agent ignorant of the right combination is 1/n. Technically, Pr(E|H & Ignorant Agent picks right combination) = 1/n. If n is large then the probability “due” to H and the Agent is small. Is this H all we know about E?

Premise 4 is false. Just plain false. It is not true, which I take to be obvious. If E happens, and all we know or consider is (writing A for “ignorant Agent”) HA, then E still happens and HA is still all we know. No matter how small 1/n is, if HA is all we know, then HA is true (given our introspection, which sharp readers will have realized I am omitting from the equations). Rare things will and can happen if HA is true. Premise 4 is false.
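That rare things will and can happen under HA is easy to check with a quick simulation (a hypothetical sketch; the value n = 1000 and the choice of combination 0 as the “right” one are my assumptions, not anything in the argument):

```python
import random

def ignorant_agent_opens(n, trials, seed=0):
    """Simulate an ignorant Agent picking one of n combinations at random.

    Under HA (the Agent knows nothing), each trial opens the safe with
    probability 1/n. The event is rare, but it still happens, and its
    long-run frequency matches the deduced probability.
    """
    rng = random.Random(seed)
    right = 0  # say combination 0 is the one that opens the safe
    opens = sum(1 for _ in range(trials) if rng.randrange(n) == right)
    return opens / trials

# With n = 1000, Pr(E|HA) = 0.001: small, yet E occurs again and again.
freq = ignorant_agent_opens(n=1000, trials=1_000_000)
print(freq)  # close to 0.001
```

However small 1/n is made, the observed frequency tracks it; nothing about rarity makes HA false.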

Premise 5 is where Dembski sneaks in other possible causes, his first admission (without admitting it) that HA is not all we know. By regularity he means a cause unknown to the Agent, but known by who-knows-what: he is vague about this. In this example, a “regularity” would be a malfunctioning safe, which the Agent believes has n combinations, but because of some error has a smaller number.

Regardless, Pr(E|HA & Introspection) = 1/n. But Pr(E|Regularity & A & Mysterious knowledge) = 1/m > 1/n; i.e., m < n.

Premise 5, in other words, rejects this altered hypothesis. It is not possible by assumption. The lock is as we assume in H.

Premise 6 says something caused the lock to open: a regularity, which is ruled impossible by Premise 5; “chance”, which is impossible, but fixable using our Agent; or “design.” The first two being ruled out, we pick “design”. We’ll come to that next.

Since Premise 4 is false, the argument fails. Unless we can rescue it, as we did Premise 3. And that is exactly what Dembski attempts, in effect, like this:

Premise 4′: Specified events of small probability do occur by “chance”, but if we can consider a non-“chance” causal explanation operating instead of “chance”, and we believe the non-“chance” cause is more likely than “chance”, then the non-“chance” causal mechanism is preferred.

Premise 4′ is not true, either. So the conclusion again fails. Now the modified premise is not probability, per se, but a decision. It is an act of will (as when “rejecting null hypotheses” using p-values). Based on further introspection, which provides this non-“chance” cause, when the likelihood of non-“chance” cause is greater to some unknown extent than the likelihood of the “chance” cause, we decide to state “The non-‘chance’ cause did it.”

This is not a bad rule, all things equal. If you knew based on X that either Y or Y’ is true, and that Y is more likely than Y’, and you could only pick from Y or Y’, then, ignoring the costs and benefits of our choice, a not-insignificant assumption, picking Y is the way to go. But it is a decision rule and not probability.

What Dembski is doing is this. He has in mind two causal possibilities, H (the ignorant Agent), and, say, H’, the non-ignorant (and possibly lying about being ignorant) Agent. He may have only started with H, but after seeing E (the safe opens), he considers H’ because his introspection directs him to. There may be many clues he gleans from the Agent that lead him to H’. This is not unwise, but it is not in any way a formal process. Two people can disagree whether to consider H’.

Now if we never considered H’, then because we began believing H, we must necessarily end believing H. That is, we start with Pr(H|IA & Introspection) = 1 and must necessarily end with Pr(H|E Observed & IA & Introspection) = 1.

In particular, it is impossible that “H not”, or not H, the contrary of H, or Hc, is true, or even possible, because it must be that Pr(H|IA & Introspection) + Pr(Hc|IA & Introspection) = 1, and Pr(H|E Observed & IA & Introspection) + Pr(Hc|E Observed & IA & Introspection) = 1, and the first elements of these equations are already equal to 1. This is key.

Thus we must switch from believing Pr(H|IA & Introspection) = 1 to Pr(H|E Observed & IA & Introspection’) < 1, where Introspection’ means further introspection added to the original introspection. This is important. It means not only did E occur, which is new information, but we also add new information in the form of new introspection because of the circumstances in which E happened.

He makes this move because Pr(E|H & IA & Introspection) is small. That Pr(E|H & IA & Introspection) is small is not enough to make this move, as Dembski himself acknowledges. He says (p. 162) “extreme improbability by itself is not enough to preclude an event from having occurred by chance. Something else is needed.” That “something else”, it turns out, is an assumed cause for which E is more likely, and which based on further introspection is more likely than “chance”.

Here is what further introspection says: “Very often when a rare event occurred in cases like these, a cause other than that stated (such as the ignorant agent) was responsible. Here is that alternate cause, and here is why I find that alternate plausible.” Given that, and given the other cause H’, then Dembski decides the other cause operated.

In notation, Pr(E|H & Introspection) is small, and Pr(E|H’ & Introspection’) is large, or even equal to 1. Also, Pr(H’|E & Introspection’) >> Pr(H|E & Introspection’) (where I have subsumed the ignorant Agent into H).

Now Dembski rejects subjective Bayes (p. 67), as do I (and as do p-value-loving frequentists). But nobody rejects Bayes formula. We can use it here. Dembski never makes this step because he never quite realizes that what he is doing is in the end probability after all. We can write (I won’t derive this, but all students of probability can):

(1) Pr(H|E & I’) = Pr(E|H & I’)Pr(H|I’) / [ Pr(E|H & I’)Pr(H|I’) + Pr(E|H’ & I’)Pr(H’|I’) ],

recalling that not just E was observed, but E led to I’, and thus H’. If we used Bayes with just H, Hc and I, we start with Pr(H|I) = 1 and end with Pr(H|E&I) = 1, though I’ll leave the math to the reader.
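Equation (1) is short enough to encode directly. A minimal sketch (the function and argument names are mine); note that if we begin certain of H, so that Pr(H|I) = 1 and Pr(H'|I) = 0, the formula returns 1 no matter how improbable E was, which is exactly the point about never leaving H unless new introspection is added:

```python
def pr_H_given_E(pr_E_given_H, pr_H, pr_E_given_Hp, pr_Hp):
    """Equation (1): Pr(H|E & I') via Bayes, with H and H' exhaustive given I'."""
    numerator = pr_E_given_H * pr_H
    return numerator / (numerator + pr_E_given_Hp * pr_Hp)

# Start certain of H: the posterior stays 1, however small Pr(E|H & I) is.
print(pr_H_given_E(1e-6, 1.0, 1.0, 0.0))  # 1.0
```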

Let’s step through equation (1). The left hand side is what we want to know: how likely is the “chance” explanation, i.e. the “null” hypothesis after seeing E and adding our further introspection. Pr(E|H & I’) is small because it still assumes H is true, even after the further introspection. But Pr(H|I’) is now closer to 0, if not 0, by decision.

The first part of the denominator is equal to the numerator. Pr(E|H’ & I’) is now certain, or almost. And Pr(H’|I’) is large, again by decision (and not forgetting Pr(H|I’) + Pr(H’|I’) = 1).

So we have, in cartoon form,

(2) Pr(H|E & I’) = small x small / [ small x small + 1 x large ],

where both the “large” and “small” are somewhere in the closed interval [0,1], so the RHS is 1 / (1 + large/small^2), or, since large/small^2 here is very large, we have (1 / very large), which is very small, which is to say tiny, or even 0.
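Plugging in illustrative numbers (small = 0.01 and large = 0.99 are my choices, nothing more) confirms the cartoon arithmetic of equation (2):

```python
small, large = 0.01, 0.99  # illustrative values only

# Equation (2): Pr(H|E & I') = small x small / [ small x small + 1 x large ]
pr = (small * small) / (small * small + 1.0 * large)
print(pr)  # about 1e-4: tiny, as the cartoon says
```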

So, as desired, the probability for H after seeing E, and adding in the further introspection, is tiny, or even 0.

Since now Pr(H|E & I’) + Pr(H’|E & I’) = 1, we have that Pr(H’|E & I’) is nearly certain, or certain.

This was all possible only because of that further introspection which involved our knowledge of cause. I do not say this is bad reasoning: I say it is good. Even ideal. Because it is cause we always seek, and not probability.

What Dembski has done, then, is to reinvent probability and call it something else. Since he didn’t write down all his information as probability, he lost track of it. This is the exact same mistake users of p-values make. Though Dembski is better, because he, it appears from his writing, would never invoke the other cause unless it was much surer in his mind (that further introspection). Blind use of p-values is just silly by comparison.

Dembski lists the steps of his further introspection (cf. pp. 184–185). These involve “patterns” that are like E, or can be E; “probabilistic resources”, which are suppositions about how E could happen more than once in situations like, but not, ours; “saturated probabilities”, which are the probabilities of “probabilistic resources”; and “side information” and “subinformation”, which just are the further introspection, along with information-theoretic measures on the alternate causes.

This is all in an attempt to make formal the process by which we suggest H’ to ourselves. Yet this can never be formal, since, except in highly constrained circumstances, disagreements are inevitable. I won’t bother to critique each step to show how, because I believe it is fairly obvious at this point.

Here, finally, is the running example Dembski uses. Let’s see how we would do using probability alone.

There was this clerk in New Jersey, one Nicholas Caputo, a Democrat, whose duty back in the 1980s and before was to draw names of candidates (in private) to decide whose name was printed first on the ballot, whose second, and so on. Whoever’s name is printed first has, experience shows, a substantial advantage (voters being lazy). Caputo’s nickname was the “man with the golden arm”, because he picked Democrats 40 out of 41 times.

The natural H here is that Caputo cannot see or know the names he is drawing, and that the bag (or whatever) had all the candidates’ names in it. The event E is given to us: 40 out of 41 lucky Democrats. Given our initial introspection, it’s easy to calculate Pr(E|H & I) ~ 2 x 10^-11, or about 1 in 54 billion.
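That figure is easy to verify. Reading E as exactly 40 Democrats in 41 independent fair draws (one natural reading; Dembski’s own setup may differ in detail):

```python
from math import comb

# Pr(exactly 40 of 41 draws favor Democrats), with p = 1/2 per fair draw:
p = comb(41, 40) / 2**41
print(p)      # ~1.9e-11, i.e. ~2 x 10^-11
print(1 / p)  # ~5.4e10: about 1 in 54 billion
```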

Hold up. Wait a minute. Somethin’ ain’t right. New Jersey? 1975-1985? Democrat? In private? Forty out of forty one? Caputo? Uh huh.

That’s our further introspection right there, and it’s strong. The more you know of Jersey politics, the more overwhelming it is. Obviously, Pr(H|I’) is going to be small, and Pr(H’|I’) large. What about Pr(E|H & I’)Pr(H|I’)? Pr(E|H’ & I’)Pr(H’|I’)? Fugeddaboutit.

Obviously, quite, quite obviously, it was our further introspection on cause, not “chance”, that led to us giving H little credence. If it turns out we are wrong, and that Caputo was, unknown to us, a saint, who never missed Mass, voted for Reagan, and was immaculately honest in his drawings, then we are still right to make the judgement we did. Because all probability is conditional on the assumptions we make, and our assumptions here on the opposite of saintliness are solid, given what we know about Jersey politics.

These further introspections are causal assumptions made based on all kinds of historical evidence. Which, as we admit, may be in error. But that only means Pr(H’|I’) < 1. To us. The causal knowledge is strong here, i.e. highly probable. That is why we reject “chance”.

In the end, we don’t need anything more than just plain probability to solve these kinds of problems, and, of course, knowledge of cause. That’s the real key, this outside causal knowledge. Failing to consider this is also why p-values fail—and they fail worse because these are much more model dependent and H is rejected for much weaker reasons. However, I’ve told that story many times, so I won’t belabor it now.

It turns out that I agree, then, with Dembski’s conclusions, at least where he and I share the outside causal knowledge. And where we agree on H—which isn’t always a given. There is no such thing as “chance”, and so the H we come to can vary, again based on our causal knowledge.

In particular, given that Dembski’s book is associated with “creationism” or “intelligent design” (even though it is only a small part of his book), and his methods have been used to “prove” there is design in the universe, I don’t want anybody going away and thinking all these proofs have therefore been invalidated. They haven’t, because here he and I agree on the outside causal knowledge, even if we disagree about “chance” in this case (it can’t be defined uniquely or without controversy here). Indeed, I regard intelligent design (as I define it) as trivially true.


1. Robin

Thank you again SSgt Briggs. For me, the most fundamental concepts and assumptions of probability are the hardest to grasp, such as those you have identified above.

But I am confused as to the continuing statement that “Probability does not exist”. I get that, from a purist, theoretical viewpoint, probability is nothing more than a mental construct. But we can attach to it physical measures and therefore make it exist.

For example, as an engineer, it is critical that everything is measurable and that uncertainty is quantifiable. So we can take measurements, such as weights, masses or lengths, for example, and create probability distributions from this data that will help us to understand the nature of a process and its uncertainty. We could, for example, make simplifying assumptions as to the shape of the distribution, or we could take the data and create an empirical distribution. This is the basis of calibrations, specifications, codes, standards, etc., that are the backbone of any modern engineering endeavor.

So while in the purest sense I would agree that probability is just a mental construct or mathematical technique, we can make it exist when associated directly with the physical world. And we can use it to assess the level of uncertainty within an engineering process.

Am I wrong here – am I missing something?

2. Briggs

Robin,

You don’t make it exist. You quantify uncertainty, which is purely a mental construct. In your case, it seems, mostly as predictions.

You can do this with any propositions. For instance, “A Leprechaun, Ogre, and Honest Politician are in a room and one must walk out.” The probability “The Ogre walks out” is 1/3, which we deduce based on the evidence given, even though none of these creatures exist in Reality.

In your case, the weights, masses and lengths exist. And you can say things like “This length will be greater than x”, which, based on whatever measurements and assumptions you make, will have some probability. In the end, though, the length will be a real thing. It won’t be a probability.

3. Ye Olde Statistician

“extreme improbability by itself is not enough to preclude an event from having occurred by chance.”
Indeed, even by the usual methods, if the probability of E is 0.01, then we might expect to observe it once in 100 units. (This is not exact, but will do for comm box purposes.) But Easton, PA, for example, experienced three “hundred-year floods” in the space of a few years. But river flow is not independent year-to-year (i.e., the years are not “units”). The causal pattern was not the villain Chance, but housing construction in the Poconos, which resulted in greater runoff on the mountainsides, asphalt and concrete being less absorbent than soil. When the drainage was fixed, the flooding ceased.

4. John B(S)

Briggs

Don’t the woke say sex/gender/whatever is purely a mental construct?

5. They might; but sex is real, and there are only two, which are consonant barring accidents. That is, genomic sex and morphological sex are congruent. Have Y-associated genes, you are male. Don’t have them, you are female. Gender is different, for human languages have genders, only three of them, and these exist in minds. The rest of this stuff is politics.

6. Well, being neither a woman nor a mathematician (the two soon to be synonymous), I am hesitant to comment. This observation simply amplifies what should already be obvious. That is, absent mind there is no world. The seemingly concrete, solid physical world does not exist at all beyond the neurological construct of animal or man to perceive it. There is no world; there are seven billion worlds, in the human context. Now the probabilistic manipulation of those seven billion worlds with propaganda and fake news is another matter. That’s how you can win a war when you’re actually losing it. And Briggs, with reassignment surgery there’s hope for you yet: you could still win that prestigious Fields prize!

7. 1 – I have not read Dembski, so can only comment on your reading of him here.

2 – I think you are working toward something interesting. Let me quote from my favorite author 😉

A nice way to look at this is to decide a priori that events an observer, whether real or imaginary, can categorize as real (there is a point at which the event happens) or not real (the event does not happen) exist. In this construct an event has a probability of one if it happens, zero otherwise, and is real (happens) if and only if all the events that must be real for it to happen are real.

In other words: P(E|ei)=1 if and only if P(ei)=1 for all i.

At first glance this may seem to require time with each event characterized by one or more probability estimates ranging from strictly greater than zero to strictly less than one during the period before the yes/no information is known, a transition state during which it happens or doesn’t, and a later period during which the outcome is known.

However, if event E is fully determined by some set of events ei, then knowledge of all ei is equivalent to knowledge of all (ei AND E).

So, chance does not exist and our estimates of prob based on freq are admissions of ignorance…

Meanwhile one of the fun implications of this is that the “laws of physics” are not laws, they are statements about observations: if elements in group A (=left side of equation) vary, elements in group B (right side) consistently vary too. That’s why laws (e.g. newton’s famous 3) change as new observations are made.

8. Fr. John Rickert, FSSP, Ph.D.

Greetings! Just a couple of brief comments for now.

1. You are not taking a completely determinist / fatalist approach, correct? Free will can be an important, operative factor, yes?

2. One problem I have had with Dembski’s approach is that it would seem to lead us to the conclusion that any “freak accident” was in fact carefully contrived.

3. Another problem I have is that he is too willing to write off simple regularities that are intentionally, intelligently caused, e.g., markings on a highway.

9. Ye Olde Statistician

Consider the man who is brained by a hammer while on his way to lunch.
Everything about his perambulation is caused. He walks that route because his favorite café is two blocks in that direction. He sets forth at the time he does because it is his lunchtime. He arrives at the dread time and place because of the pace at which he walks. There are reasons for everything that happens.
Likewise, the hammer that slides off the roof of the building half a block along. It strikes with the fatal energy because of its mass and velocity. It achieves its terminal velocity because of the acceleration of gravity. It slides off because of the angle of the roof and the coefficient of friction of the tiles, because it was nudged by the toe of the workman, because the workman too rose to take his lunch, and because he had laid his hammer where he had. There are reasons for everything that happens.
Not much of it is predictable, but causation is not the same as predictability.
It would never occur to you – at least we hope it would never occur to you – to search out “the reason” why at the very moment you walked past that building, some roofer in Irkutsk dropped his tool. Why should the concatenation become more meaningful if the roofer is closer by? Spatial proximity does not add meaning to temporal coincidence. Chance is not a cause, no matter how nearby she lurks.
So the hammer has a reason for being there, and the diner has a reason for being there; but for the unhappy congruence of hammer and diner, there is no reason. It is simply the crossing of two causal threads in the world-line.
“Ah, what ill luck,” say the street sweepers as they cleanse the blood and brains from the concrete. We marvel because our superstitions demand significance. The man was brained by a hammer, for crying out loud! It must mean something. And so poor Fate is made the scapegoat. Having gotten all tangled up in the threads, we incline to blame the Weaver.

10. Simon Holmes

If I may start again, here are the relevant excerpts from your article and tweets that I am responding to:

– “Premise 4: Specified events of small probability do not occur by chance.”
– “H = “This lock has n combinations, only one of which opens the safe.”
– Premise 4 is false. Just plain false… If E happens, and all we know or consider is (writing A for “ignorant Agent”) HA, then E still happens and HA is still all we know. No matter how small 1/n is, if HA is all we know, then HA is true… Rare things will and can happen if HA is true. Premise 4 is false.”
– “My point is all that specifying and information theoretic measures are just IDing alternate causes, for which he, without realizing it, gives high “prior” prob to.”

If I may summarise and paraphrase your argument as I understand it:

If a specified event E happens, and Pr(E|H) = small, and ALL we know is H, which is the prior probability of E happening (“chance”), then chance is the only valid explanation. Furthermore, if you decide to posit an alternate cause H’ (e.g design) for E, you are inadvertently preferentially giving H’ a higher prior probability without justification.

My objection is that, whilst your example in and of itself is true, Dembski seems to me to be arguing that, in reality, H is not all we know about any specified low probability event (SLPE). We can also draw from our knowledge of other SLPEs; let’s call this K. We also know/can calculate the probabilistic resources available (section 6.1 in his book); let’s call this R. And let’s say HKR = H’.

And, crucially, in every case we know of where H’ is true, the best explanation is a cause that fits the definition of design.

Therefore, Dembski is not simply ad-hoc giving design a higher prior probability than chance. He is saying that actually Pr(SLPE|H’) = 1. Aka Premise 4.

11. Simon Holmes

Argh, it’s annoying how my comment lost all the formatting.

12. And this is precisely why high energy physicists have accomplished nothing at great cost over the last 40 or more years.

The map is not the territory.

13. “Nothing happens “by” chance, which is perfectly equivalent to saying nothing is caused by chance. Nothing is “due” to chance, another synonym for cause. Once again, the italicized word is as strict as can be. No thing.”

“By” and “Due” are simply shorthand for saying the observed data can be described well by a probability distribution. For example, observed test statistic is not far from what is expected under a model (ie. p-value is large, if you like).

Now nothing can be caused by or due to the particular gods you or anyone else believe in because said gods don’t exist in reality…is another argument one can make. 😉

Justin

14. Briggs

Simon,

Thanks.

Let’s try to keep the order right here. In the end, as I say, the reasoning to propose H’ is sound, as in the NJ “voting” example.

You agree that if all we know is H, and E happens, and Pr(E|H) = small, then Premise 4 is false. So events with small probability can and do happen. We invent a machine that spits out patterns of strings from an enormous set of characters. After we get our pattern, the machine is destroyed. The pattern we see, our E, is very improbable given this H. It happened, though.

Dembski would not object to that example, as he indicates throughout his book. But then he introduces other examples in which H is not as plausible as some H’. And that is his “method”.

What Dembski is doing is saying is he has another H’, based on other information, that starts with a very high “prior”. I agree he should be doing this. But that’s just probability. Nothing more. (As in the example I give above.)

His “solution” is nothing more than a rediscovery of probability. It is not a formal solution, and cannot be made formal, in the sense that his method leads you to the truth of whether E was caused by H or H’. There is no algorithm provided by Dembski that will lead you to the one true cause.

His “solution” is also not unique. There are available an infinite number of H’ that can be chosen, all of which meet the criterion Pr(E|H*) > Pr(E|H). And which can be “justified” by Pr(H*|my introspection) >> Pr(H|my introspection).

His thinking was right to attempt to find a method to eliminate p-values (the rejection regions, etc.). But in the end, it’s nothing more than a realization that probability should account for all of the information you are considering, meaning the full information of the cause or causes of E.

Like I said, I agree with that sentiment, and have been preaching it for a long time. But it’s just probability married with the understanding, albeit unrecognized clearly by Dembski, that it is cause that is of paramount importance.

Update To be clear, Premise 4 is still false, strictly false, even for “specified events.” Unless Pr(H’|intro) = 1 (certainty!), it can still be, however improbable, that E was caused by H. We must absolutely always keep in mind the difference between decision—what to do about a probability—and probability itself.

15. Fr. John Rickert, FSSP, Ph.D.

Just wondering, what is your view on quantum mechanics? In “The Character of Physical Law,” Richard Feynman says that the laws of physics at this level have an intrinsically statistical character, and that there is no way around it. I highly recommend his videos on QED, which are available on YouTube.

16. Simon Holmes