# Video

Bitchute

HOMEWORK

THERE IS NO SUCH THING AS UNCONDITIONAL PROBABILITY, I.E. THERE IS NO Pr(A), only Pr(A|B). If you think not, find a Pr(A) and write out the full probability in the comments.

# Lecture

We’re doing the first two sections of Chapter 4 of Uncertainty.

We start by answering an excellent question.

I have a question about Bayes’ Theorem and philosophical arguments. I ask because I have a broad scholastic approach to philosophy that relies on metaphysical demonstrations.

Is the contrast between probabilistic vs deductive arguments unhelpful? It seems like deductive arguments mask the uncertainty of probable premises. If each premise of an eight-step argument is 95%, the lower bound would be 66% (given independence).

For some reason, this doesn’t sit right with me. Bayes’ Theorem seems useful for when deciding theories within the world, but not applicable to first principles (like the reality of change). But I don’t have much of a mathematical background. Any assistance you can provide would be extremely welcome.

We want:

Pr( C | P_1 P_2 … P_8) = 1

where we deduce C is true assuming P_1 through P_8 are true. Don’t miss the key word: we assume P_1 through P_8 are true. It does not matter if they are false, even. We assume they are true. And then we deduce the local truth C.

We don’t yet know how to assign any numbers beside 0 (falsity) and 1 (truth) to probabilities. But it’s no secret we’ll be able to, and in the obvious way, which we’ll prove in a lesson or two. For the moment, assume we have proved it.

Anon is thinking like this:

Pr(P_1 | E) = Pr(P_2 | E) = … = Pr(P_8 | E) = 0.95,

where he has external evidence E which allows him to assign that 0.95 to each P, and to ignore all the other P_i while focusing on just one P_j.

But, as is now clear,

Pr(P_i | E) ≠ Pr(C | P),

where P = P_1 & P_2 & … & P_8, for any i.

What Anon has in mind is the answer to this very different question:

Pr(C P | E).

In other words, he wants to know about C and P being true, when evidence suggests P is only uncertain. Using Bayes (which we proved):

Pr(C P | E) = Pr(C|PE)Pr(P|E) = 1 x 0.66 = 0.66,

because 0.95^8 ≈ 0.66. This assumes E says nothing directly about C, in the presence of P. We’ll come to that “in the presence of” next week, if it isn’t already obvious.
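The arithmetic is worth checking for yourself. A minimal sketch in Python, using Anon’s numbers (0.95 per premise, eight premises, independence given E):

```python
# Pr(P | E): probability all eight premises hold, assuming each has
# probability 0.95 given E and that they are independent given E.
p_premise = 0.95
n_premises = 8
p_all = p_premise ** n_premises

# C is deduced from P, so Pr(C | P E) = 1, and therefore
# Pr(C P | E) = Pr(C | P E) * Pr(P | E) = Pr(P | E).
p_conclusion_and_premises = 1 * p_all

print(round(p_conclusion_and_premises, 4))  # 0.6634
```

Dropping the independence assumption changes the number but not the lesson: the conclusion inherits the uncertainty of its premises.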

This again shows us probability, being logic, doesn’t care about the premises. Just about the CONNECTIONS between premises and the proposition of interest.

For the rest of the lesson, we use this excerpt from Chapter 4 of Uncertainty.

He’s alive Frank, though he’s on life support. Doctors say he’s got a 50-50 chance of living…though there’s only a 10% chance of that. —Captain Ed Hocken.

## Arguments

Probability is, like logic, an argument. Logic is the study of the relation between propositions, and so is probability. Like logic, probability is not a real or physical thing: it does not exist; it is not ontological. It cannot be measured with any apparatus, as mass or energy can. Like logic, probability is a measure of certainty, where by custom non-extreme certainty is called probability; the extremes, certainly false and certainly true, are thus special cases of probability: probabilities of 0 and 1.

Probability is widely misunderstood for two main reasons: the confusion between ontological and epistemological truth, and the conflation of acts or decisions with probability. These errors are discussed in the next chapter, but it’s helpful here to have a précis. A thing is ontologically true if it exists, and it is ontologically false if it does not. An epistemological truth is when we know, given certain unambiguously specified evidence, that a proposition is so. Epistemologically true propositions do not have to be ontologically true. We know the proposition “Mike is green” is true given “All dragons are green and Mike is a dragon”. This is an epistemological conditional, or local, truth.

But we also know the major part of the premise is ontologically false because there are no dragons, green or otherwise. That we know it is ontologically false—that we know there are no dragons—is itself both an ontological and an epistemological truth, conditional on observation. The lack of dragons is ontologically contingent. The subjects of contingent propositions can only be or not be, exist or not, they cannot have ontological probabilities: they can only be ontologically true or false. Yet since we might not know whether a thing be or not, propositions can and do have epistemological probabilities. Probability is the simple extension of logic to situations where the evidence does not guarantee epistemological certainty.

The truth or probability of any proposition is conditional on the given or accepted premises. We know that “All men are mortal” is true because our senses confirm individual instances and because induction-intellection provides the certainty. This is also why we know “For all natural numbers \$x\$ and \$y\$, if \$x = y\$, then \$y = x\$” is true. Individual instances are observed, and induction supplies the rest. We know that “Chicken-man wears a costume” is true if we accept “All super heroes wear costumes and Chicken-man is a super hero.” But we also know that the same proposition is false if we accept “No super heroes wear costumes and Chicken-man is a super hero.” And we know that “The probability Chicken man wears a costume is 32%” is true if we accept “Just 32% of super heroes wear costumes and Chicken-man is a super hero.”

Counterfactuals are always ontologically false; i.e. they begin with premises known observationally to be false. Yet counterfactuals can have meaningful (epistemological) probabilities. The conclusion “Germany won the war” is likely true (we may say) given “Hitler decided not to attack the Soviet Union” and other premises (those which follow) which are plausible or deducible given that premise. Counterfactuals are surely meaningful epistemologically but never ontologically. Futuribles, a neologism, are ontologically unknown, but also valid probabilistically. A futurible, in relation to a proposition of interest, is a premise which assumes what will be ontologically true but which might not eventuate. Given “Assuming mom comes for dinner” then “I will make cookies.” In other words, a futurible is a standard, run-of-the-mill prediction, which nobody disputes can be handled with probability.

There is no such thing, therefore, as an unconditional truth or probability. Everything follows from this simple fact. The truth or probability of any proposition changes depending on the evidence. We could not say anything else but that “The probability Chicken-man wears a costume is 32% and no other number” is true, given only “Just 32% of super heroes wear costumes and Chicken-man is a super hero” and the tacit premises about the meaning of the words, the grammar, and our implicit understanding (formed via induction, too) about how sets of propositions like these are related to one another.

## Probability Is Conditional

As we saw in Chapter 1, all truth is relative in the sense that all propositions which are (epistemologically) true are so because of some reason or reasons. (I do not endorse the term “inductive probability” because, as we have seen, induction is a many-faceted thing.) All things that exist are also ontologically true given some reason (some cause), incidentally. The propositions of which I speak include necessary truths, which are absolutely true propositions given a set of premises which, if not themselves indubitable, are the valid result of reasoning from inductions using sound rules. We distinguished between necessary truths, which are universally or always true, and locally conditional truths, which are propositions accepted as true when reasoned from given but not necessarily true premises.

An example of a necessary truth is the proposition “P is P and not not-P” (where P itself is a proposition, and we include, as always, the tacit premises provided by our understanding of grammar and so forth). This is the so-called Law of Identity. A conditional truth is the proposition “George wears a hat” conditional on the premise “All Martians wear hats and George is a Martian.” That probabilities are conditional is recognized in many of the authors mentioned, but also, after a fashion, in others.

This difference between necessary and conditional truths is no small distinction, as we’ll soon learn in the area called “subjective” probability. Most workaday or ordinary truths, and all scientific ones, are conditional truths, propositions which are not necessarily true because the premises which support them are not themselves necessary truths. In most of regular life, and in science, we argue with contingent premises, therefore we can do no better than conditional truths. The constant danger is that conditional truths are taken for necessary truths; and when this happens it is often because of scientism. I’ll prove this with examples in later chapters.

Since probability is like logic (or is logic), before we can understand the probability of any proposition we have to know which premises are given to support that proposition. In other words, there is no such thing as an unconditional probability. Many authors think there might be; for example Hájek. These authors speak of the supposed difference between epistemic, aleatory, factual, stochastic probability, among other terms: factual or physical probabilities are thought to exist. Now to exist is an ontological and not an epistemological statement. If factual probabilities existed ontologically they could, at least in principle, be measured, like mass or electric charge can. Dice throws and human births are often given as examples of things that have factual probabilities. The chance of a boy, we hear, is something like 51%, a number which comes about by measuring actual births and tallying the relative frequency of boys.

But there is no such thing as factual probability. Each birth is caused to be a boy or a girl by some thing or things. Among these things are the genetic makeup of the father’s gametes, the conditions in the mother’s uterus, environmental stressors and so forth. Each baby is produced by slightly different causes which vary for many reasons. The process itself is constrained to follow certain lines. Human mating doesn’t produce turtles or kitchen mops: miniature human beings are made. That we don’t know precisely what all these causes are in this birth is one question, and a good one for biologists and families-to-be, but the causes must be there. Our (epistemological) knowledge of the proposition “It will be a boy” can of course be informed by what we know of the causes and of what we have seen in previous births. But there is no ontological probability driving any birth this way or that, a probability which somehow “balances the books” across all of humanity so that just 51% of boys are born. To think so is to commit the gambler’s fallacy.
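The “balancing the books” idea can be checked by simulation. A sketch, under the (illustrative) assumption that each birth is modeled as an independent 51% chance of a boy; the point is that the frequency of boys following a streak of boys stays near 51%, with no compensating dip:

```python
import random

random.seed(42)
p_boy = 0.51  # observed relative frequency, used as our epistemic probability

# Simulate a million births, then look only at the births that
# immediately follow a run of five boys in a row.
births = [random.random() < p_boy for _ in range(1_000_000)]
after_run = [births[i] for i in range(5, len(births)) if all(births[i - 5:i])]

# No "balancing of the books": the frequency of boys after a streak
# stays near 0.51, not below it.
print(round(sum(after_run) / len(after_run), 2))
```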

Maybe this is better seen in a simpler example. Suppose we have a device into which marbles are dropped and which must come out slot A or B, and that the hole through which the marble drops is wider for slot A. Given this evidence and our common experience of falling objects, our epistemological probability is that the marble is more likely to show in slot A in one drop or averaged over many. But each marble as it bumps and spins and drops its way through the device is caused to take the path it does. If we knew the characteristics of the machine and the initial conditions of the drop and the equations of motion for objects like marbles, we could deduce which slot the marble must go through (as we can with, say, coin flips). The answer would be both an (eventual) ontological certainty (for the marble to be at this or that slot) and an epistemological certainty.
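The marble machine can be sketched in code. In this toy (all numbers are illustrative, not from the book), the outcome of each drop is fully determined by its entry position; the stable frequency arises only from our ignorance of those positions, not from any probability inside the machine:

```python
import random

def marble_slot(entry_position: float) -> str:
    """Fully deterministic: the slot is caused by the entry position.
    The wider hole for slot A is modeled by A capturing entry
    positions below 0.6 (an illustrative number)."""
    return "A" if entry_position < 0.6 else "B"

# We don't know each drop's entry position, so we model our ignorance
# of it with a uniform draw; the machine itself stays deterministic.
random.seed(1)
drops = [marble_slot(random.random()) for _ in range(100_000)]
print(round(drops.count("A") / len(drops), 2))  # near 0.6
```

Knowing the entry position exactly would turn the epistemological probability into certainty, which is the point of the paragraph above.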

The exception to this appears to be for quantum mechanical events; photons going through this and that or this or that slot, the time at which a nucleus disgorges an alpha particle, whether this particle is spin up or down and its entangled pair the opposite, and so forth. Experiments repeated under very similar conditions—one cannot claim absolute similarity except by induction; if one could know all conditions, then quantum mechanics would not be mysterious, but do not forget all is the most demanding word there is—show stable frequencies of outcomes. But so would repeated experiments with the marble in the machine, as do human births, coin flips, and so forth.

The equations which give rise to the quantum mechanical experiments are probabilistic in nature; they speak of probabilities and not certainties. On the other hand, so do equations saying things about coin flips. Yet in everyday, schoolyard coin flips, as in all quantum mechanical experiments, all we know is that we don’t know the cause which gave rise to the observations. We do know, or we should know, that some thing or things must have caused the outcome. With our marble or with coin flips, being so exposed to the world, as it were, we have some ability to know what these causes are. But this isn’t so in quantum mechanics, where the causes are hidden from us: we know we cannot know why any particular QM experiment turned out as it did, yet that something caused the outcome we do know. That we know, or should know, causes exist, and the nature of causes, is discussed later in the Chapter on Causation (which includes a discussion of Bell’s theorem for QM). All that is important here is that our epistemological understanding (or ignorance) of causes is conditional.

Now to say, “The probability of X is \$p\$”, where X is some proposition, is to speak incorrectly. Colloquially we often do say things like that, but there is always and must be tacit evidence behind the \$p\$. A man says, “The Tigers are probably going to win today.” The man who says this assumes his audience shares the same or much of the information about the game he does, and if this audience does not, the man might use the statement to launch into an analysis, which is a listing of his premises (many of which will be vague) and how they are probative to the proposition of interest “The Tigers win today”. Or the listener may say nothing but might, for instance, supply the premise “I trust this guy” and therefore agree with \$p\$. The trust is the condition or premise which must exist if he agrees with \$p\$ and does not know the man’s other premises.

To speak properly one must say, “Given this evidence, the probability of X is \$p\$.” This is why there is no such thing as a probability of being struck by lightning or of dying from a heart attack or whatever. Probability is not intrinsic; there must always be conditions. No probability exists ontologically, and therefore no probability can be calculated unless it is conditional on some evidence. Probabilities speak only of the possible truth or falsity of propositions. Thus (as will be proved below) the probability of “Pat the cat shot at least 30 rounds” given “50% of cats shoot at least 30 rounds and Pat is a cat” is 50%, but the same proposition has probability 60% given “60% of cats, etc.”
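The Pat-the-cat calculation, trivial as it is, makes the point in code. The helper below is hypothetical (not from the book), illustrating the statistical syllogism: the same proposition receives a different probability under different premises, and there is no premise-free number to be had:

```python
def prob_given_premise(fraction: float) -> float:
    """Statistical syllogism: given "fraction of cats shoot at least
    30 rounds and Pat is a cat", the probability of "Pat the cat shot
    at least 30 rounds" is that fraction. There is no premise-free
    answer this function could return."""
    return fraction

print(prob_given_premise(0.50))  # 0.5, given the "50% of cats" premise
print(prob_given_premise(0.60))  # 0.6, given the "60% of cats" premise
```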
