I received thoughtful questions from EB about my views of probability and how they fit with science. This is long, but the questions alone make it worth reading.
The end result will be that probability is merely the completion of logic, and so all questions of certainty and uncertainty can be treated in a consistent (and, yes, rigorous) way, just as all questions of logic are. And I call for an end to the pagan belief—as strong now as it was in Rome—that chance is real and causative.
I hope this email finds you well. I have been deeply intrigued by your perspective on probability, particularly your epistemic and logical interpretation, which rejects the notion of probabilities as objective features of the physical world. Your arguments resonate strongly in many areas, particularly in the rigor and clarity they bring to reasoning and decision-making contexts.
However, as I delve further into the subject, I find myself grappling with several challenges to your view, and I would be grateful if you could help me understand how you address these issues.
1. The Success of Probabilities in Physical Sciences
Your framework emphasizes probability as a measure of epistemic uncertainty tied to what we know about a proposition. However, physical sciences—especially quantum mechanics and statistical mechanics—seem to rely on probabilities as intrinsic properties of systems, not merely our knowledge of them. For example:
- In quantum mechanics, the probability of finding a particle in a specific state is often interpreted as a real feature of the system.
- Statistical mechanics uses probabilities to predict macroscopic behaviors from the microstates of particles.
- Stochastic models in physics and chemistry describe processes like diffusion and random motion with an apparent objectivity.
How does your epistemic framework account for the success of these interpretations in describing and predicting physical phenomena? Do you see the apparent objectivity of such probabilities as a misinterpretation of what is fundamentally epistemic?
I have a model (borrowed from Laplace) which predicts, based on physical observations, that the sun will rise in the east tomorrow. It counts the number of times the sun has risen in the east, compares that to the total number of risings, and makes predictions. Does well. There is nothing causal about this model, though it is brilliantly successful. It uses observations, which are part of Reality. But it doesn’t say why Reality acts like it does.
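To be concrete, Laplace’s rule of succession gives the model’s prediction. On the evidence that the sun has risen in the east on each of n observed days, and on nothing else,

$$\Pr(\text{rises tomorrow} \mid n \text{ risings in } n \text{ days}) = \frac{n+1}{n+2},$$

which creeps toward 1 as the risings pile up, yet never once mentions cause.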
That’s all that happens in the branches of physics you mention. The models use observations of Reality, bring in some logic and other similar evidence, including at times partial causal evidence, and make predictions. The closer the evidence gets to cause, the better and tighter these models get.
Take dice. The probability model makes use of two parts of Reality, that the die has six different sides and that one of them must show, and makes predictions. Good ones. But it doesn’t say what has caused the outcomes.
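In the notation used below, with X = “six different sides, just one of which must show” and Y = “a six shows”, the whole model is

$$\Pr(Y|X) = \frac{1}{6}.$$

The number is deduced from X; cause never enters.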
By cause I everywhere and always mean the full cause—the formal, material, efficient, and final aspects—and the conditions required to bring about that cause.
We must always keep separate the ontic from the epistemic; that is, separate what is from what we can know about what is. Mistaking the latter for the former is the Deadly Sin of Reification.
2. Irreducibility of Quantum Probabilities
In indeterministic systems, such as quantum mechanics, probabilities appear irreducible. They do not seem to arise from ignorance about hidden variables but instead reflect fundamental randomness. If probabilities are purely epistemic, how would you interpret such intrinsic unpredictability?
I do not think “randomness” is ontic. It is only epistemic. If one knows the full cause (and conditions) of any observation, then there is no uncertainty. If Reality is non-local, as some quantum mechanical results suggest (via Bell), then we cannot know the full cause. Thus we will never do better in QM models than non-extreme probabilities (between 0 and 1, but never the end points).
I don’t like the term “hidden variables” for this reason. Variables are not causes. Variables are better thought of as conditions, conditional states upon which we change our beliefs. We seek cause. What is hidden from us are the causes of some events. They are for that reason intrinsically unpredictable, but they are still caused. We just don’t know the causes.
It’s not only the highly abstract, and somewhat artificial, constraints of the physics lab, whose workings are inaccessible to most, that provide examples along these lines. We ordinary people have our own venues fitting the same description of unknowable causes: other people. Their intellects and wills are closed off to all but themselves (and to God). Minds are not material (proved elsewhere), so their workings cannot be measured. We can only see behavior and record explanations, the veracities of which will always be open to question.
In other words, Uncertainty is ever with us. Parts of the world will, in this life anyway, be closed off to us. What are the limits? I do not know. I don’t think anybody knows.
3. The Role of Probabilities in Causality and Physical Laws
Many argue that probabilities are tied to causality and physical laws, acting as tendencies or propensities. For example, the probability of radioactive decay is often viewed as an inherent property of the atom. How does your view accommodate cases where probabilities seem tied to physical causality rather than epistemic uncertainty?
Tendencies and propensities are ontic, up to a point. Substances have powers, all things have an essence, and this leads to outcomes or behaviors which are predictable to the extent that we understand these essences, which understanding is the realm of the epistemic. A substance’s powers are manifested in certain conditions, and these have to be known, too. Which for things like lumps of uranium we cannot know. Again we come back to “hidden variables”. The traditional view is too narrow.
Take this paper’s account of quantum causality:
As one of us (SK) has observed (Kauffman 2016, Chapter 7), we might plan to meet tomorrow for coffee at the Downtown Coffee Shop. But suppose that, unbeknownst to us, while we are making these plans, the coffee shop (actually) closes. Instantaneously and acausally, it is no longer possible for us (or for anyone no matter where they happen to live) to have coffee at the Downtown Coffee Shop tomorrow. What is possible has been globally and acausally altered by a new actual (token of res extensa). In order for this to occur, no relativity-violating signal had to be sent; no physical law had to be violated. We simply allow that actual events can instantaneously and acausally affect what is next possible (given certain logical presuppositions, to be discussed presently) which, in turn, influences what can next become actual, and so on.
Talk about forever hidden variables! If this is right, and I suspect something like it is, then the conditions that lead to the power of decay manifesting would be impossible to measure: there are too many things in the world, at too fine a level of detail, for us to keep track of them all.
Not individually, that is. But since the powers are the same, because the essence is fixed, even as conditions seethe and change from moment to moment, we might be able to track collections. As we indeed can. We are averaging over conditions, as it were, in these large collections. As long as conditions are well behaved, this explains the excellent statistical measurements we can make on these substances.
Again, up to a point. Once will becomes involved, well, we are again closed off to the full picture.
4. The Subjectivity of Epistemic Probabilities
While your emphasis on conditional probabilities based on evidence is clear and logical, does this not introduce a degree of subjectivity that undermines the universality of probability theory? For example, two observers with different knowledge might assign different probabilities to the same event. How do you reconcile this with the apparent universality of probability in scientific practice?
Absolutely it does. I insist it ought to. If we agree on a proposition, we have made a subjective agreement on the importance of that proposition. Call it Y. If you insist subjectively on evidence X, and I insist subjectively on W, where the subjectivity is impossible to escape, then as long as X does not imply W (or vice versa) we arrive at Pr(Y|X) ≠ Pr(Y|W). That’s life. But in both cases those Pr(*|*) are objective, as in any math calculation.
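A toy instance (the premises here are mine, for illustration): let Y = “a six shows”, X = “the device has six sides, one of which must show”, and W = “the device has six sides, three of them labeled six, one of which must show”. Then

$$\Pr(Y|X) = \frac{1}{6} \neq \frac{1}{2} = \Pr(Y|W).$$

Different evidence, different probabilities, both deduced.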
I reconcile this with scientific practice by noting that most scientists in most situations hold the same X. Yet unless Pr(Y|X) is extreme, somebody who knows the full cause and conditions, call that FC, will arrive at Pr(Y|FC) = 1 (or 0). That is, they will beat you.
5. Practical Applications Beyond Bayesian Reasoning
Your framework aligns closely with Bayesian reasoning, which is widely used in decision-making and machine learning. However, do you see limitations in your approach when applied to frequentist methodologies or propensity-based applications, such as long-term predictions or physical models?
It’s close to Bayes, but not Bayesian. It’s logic. Which means it uses all evidence in a consistent way. A physical model, say, a pure instrumentalist set of equations, fully “deterministic”, is no different, to me, from another model which uses, say, normal distributions. Call the first model M1, and the second M2. Then Pr(Y_t|M1) = 1 or 0, as the case may be, and Pr(Y_t|M2) is in (0,1). Simple as that. Of course, nobody writes it out the first way, but that’s custom, not logic.
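A minimal sketch of the point, with invented models and numbers (Y = “the observable exceeds 20”; neither M1 nor M2 is anybody’s real physics):

```python
# Sketch: a "deterministic" model M1 and a normal-distribution model M2,
# both answering the same question Pr(Y | model), Y = "y exceeds 20".
# All numbers are invented for illustration.
from scipy.stats import norm

THRESHOLD = 20.0

def pr_y_given_m1():
    """M1 outputs a point prediction; Pr(Y|M1) is therefore 0 or 1."""
    y_hat = 21.3  # M1's (made-up) point prediction
    return 1.0 if y_hat > THRESHOLD else 0.0

def pr_y_given_m2():
    """M2 says y ~ Normal(mu, sigma); Pr(Y|M2) lands strictly in (0, 1)."""
    mu, sigma = 21.3, 2.0  # M2's (made-up) parameters
    return norm.sf(THRESHOLD, loc=mu, scale=sigma)  # Pr(y > THRESHOLD)

print(pr_y_given_m1())  # 1.0  -- extreme, but still a probability
print(pr_y_given_m2())  # ~0.74 -- non-extreme
```

Both are statements of Pr(Y|evidence); only the evidence differs.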
Nobody is a frequentist. Some people claim to be, of course, but the position is impossible to believe and impossible to practice. There are no real-life frequentists. At best they are loose Bayesians. See my Class for more on this.
6. Implications for Deterministic Systems
In deterministic systems where outcomes are entirely determined by initial conditions and physical laws, probabilities often describe statistical regularities over ensembles or large populations. How would your view interpret these probabilities? Do they simply reflect ignorance about initial conditions, or is there a broader explanation within your framework?
Just as noted above. If the model is M and the initial conditions are C, then the probability is Pr(Y|MC). If you want just M and are uncertain about C, then integrate out the C, like
$$\Pr(Y|M) = \sum_i \Pr(Y|MC_iE)\Pr(C_i|E),$$
where the E is whatever evidence you entertain about possible initial conditions.
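In code, the sum is nothing exotic. A minimal sketch with three invented initial conditions:

```python
# Sketch of Pr(Y|M) = sum_i Pr(Y|M C_i E) Pr(C_i|E); numbers invented.
pr_y_given_c = [0.9, 0.5, 0.1]  # Pr(Y | M C_i E) under each condition C_i
pr_c         = [0.2, 0.5, 0.3]  # Pr(C_i | E): E's weight on each condition

assert abs(sum(pr_c) - 1.0) < 1e-12  # the C_i must exhaust the possibilities

pr_y_given_m = sum(py * pc for py, pc in zip(pr_y_given_c, pr_c))
print(pr_y_given_m)  # 0.9*0.2 + 0.5*0.5 + 0.1*0.3 = 0.46
```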
You can do the same with models, too. Suppose you have one, and I have one, and so on. Then we can do
$$\Pr(Y|E) = \sum_i \Pr(Y|M_iE)\Pr(M_i|E),$$
adding in uncertainty about initial conditions (if any) in the obvious way.
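Written out in full, with uncertainty over both models and initial conditions, the expansion is

$$\Pr(Y|E) = \sum_i \sum_j \Pr(Y|M_i C_j E)\,\Pr(C_j|M_i E)\,\Pr(M_i|E).$$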
Two last points: I don’t buy physical “laws”, and the pagan notion that Chance is a god must be ended.