Uncertainty & Probability Theory: The Logic of Science

# Video

Links:

Bitchute (often a day or so behind, for whatever reason)

**HOMEWORK:** With B = M red balls, N total balls, and so N – M white balls, and with R_i = red drawn on i-th draw, what is Pr(R_3|B)? Prove it.

# Lecture

*This is a summary of the first part of Chapter 3 in Jaynes.*

I renamed the video, coming up with a better title only after I had finished recording. I want to emphasize that future knowledge can influence past events. Not the events themselves, but our knowledge of the events. Past events are not caused to change. The change is acausal and with respect to our uncertainty.

This has profound consequences. It means that we can’t get cause from probability models. We bring cause *to* models. This means the enormous number of causal claims we see based on probability models are not justified. Not to say they are wrong, because modelers may have guessed right. But they can’t know it.

You see the homework. We have B = “M red balls, N total balls, and so N – M white balls”, and R_i = “red drawn on i-th draw”. From that, and using nothing but the rules we have already deduced, we derive the hypergeometric distribution. Since there is no politics to it, wokepedia has a decent article, in which you can read all about it.

We start with trivial things, like Pr(R_1|B) = M/N, and Pr(W_1|B) = (N-M)/N. We then get Pr(R_2R_1|B) by referring to Bayes’s theorem.

Pr(R_2R_1|B) = Pr(R_2|R_1B) x Pr(R_1|B) = [(M-1)/(N-1)] x (M/N)

because, obviously, if we assume we took a red out on the first draw, there are then N-1 total balls, and M-1 red ones. The entire distribution is calculated in the same way (which you can find at the link). It allows us to answer questions like “I took out 4 R and 3 W; what is the chance of 2 new R and 2 new W?” And so on and so forth. Any question about the observable (the balls) with respect to B we can ask, and we can answer.
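That kind of question can be checked with a few lines of code. The sketch below is mine, not from the lecture, and the urn size (M = 10 red among N = 25) is a made-up example; the formula is the standard hypergeometric one, C(M,k) x C(N-M,n-k) / C(N,n). Conditioning on the draws already made just means shrinking the urn.

```python
from math import comb

def hypergeom_pmf(k, n, M, N):
    """Pr of exactly k red in n draws without replacement,
    given B = 'M red balls among N total'."""
    return comb(M, k) * comb(N - M, n - k) / comb(N, n)

# Hypothetical urn: M = 10 red among N = 25 balls.
M, N = 10, 25

# "I took out 4 R and 3 W; what is the chance of 2 new R and 2 new W?"
# Conditioning on the 7 draws already made leaves 6 red among 18 balls.
p = hypergeom_pmf(2, 4, M - 4, N - 7)
print(p)
```

Note that nothing in the code is random: we are computing probabilities of observables given B, which is all a predictive distribution is.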

We have met our first “predictive distribution,” which gives probabilities for observables given stated conditions.

All science should be put in these terms. Alas, it is not. We will come to all that in great detail.

Here’s where it gets beautifully weird! What is Pr(R_2|B), i.e. the probability of a red ball on the *second* draw? Logically, if there is a second draw, there must have been a first one. That first draw may have been either red or white; we don’t know which. Since R_1 and W_1 are the only possibilities, and they exclude one another, we have

Pr(R_2|B) = Pr(R_2 (R_1 or W_1)|B) = Pr(R_2R_1|B) + Pr(R_2W_1|B)

We use Bayes twice on the right hand side, e.g. Pr(R_2R_1|B) = Pr(R_2|R_1B) x Pr(R_1|B), and Pr(R_2W_1|B) = Pr(R_2|W_1B) x Pr(W_1|B). Then (and make sure you get this):

Pr(R_2|R_1B) x Pr(R_1|B) = [(M-1)/(N-1)] x (M/N)

And

Pr(R_2|W_1B) x Pr(W_1|B) = [M/(N-1)] x ((N-M)/N)

And so (do the math!):

Pr(R_2|B) = [(M-1)/(N-1)] x (M/N) + [M/(N-1)] x ((N-M)/N) = M/N.

In other words, the probability of red on the second draw is identical with the probability of red on the first *because we do not know what happened on the first!* We are already in deep kimchi if we think we can get cause.
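The identity Pr(R_2|B) = M/N can be verified exactly, for every urn at once, with rational arithmetic. This sketch (mine, not Jaynes's) just adds the two Bayes products from the derivation above and checks the sum against M/N:

```python
from fractions import Fraction

def pr_R2(M, N):
    """Pr(R_2|B) by total probability over the unknown first draw."""
    red_first   = Fraction(M - 1, N - 1) * Fraction(M, N)      # Pr(R_2 R_1|B)
    white_first = Fraction(M, N - 1) * Fraction(N - M, N)      # Pr(R_2 W_1|B)
    return red_first + white_first

# The identity holds for every urn swept here, not just one lucky pick.
for N in range(2, 20):
    for M in range(1, N):
        assert pr_R2(M, N) == Fraction(M, N)
print("Pr(R_2|B) = M/N for every urn tested")
```

Using Fraction rather than floats matters: the claim is an exact equality, not a numerical coincidence.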

The weirdest thing is best explained in an example. Let N = 2 and M = 1 (one red, one white). Then calculate

Pr(R_1|R_2B) = 0

because if we knew the second draw must have a red, we deduce there is no red available for the first, since there is only one red ball. Knowledge of the future, or assumed knowledge, *acausally affected knowledge of the first draw.*

Amazing!

There is nothing special with this set of numbers: the math works for any N and M. Knowledge of the future acausally affects the past. There is no directionality in probability.
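A brute-force check makes the lack of directionality vivid. The code below (my sketch) enumerates every equally likely ordered pair of draws, keeps those where the *second* is red, and asks how often the *first* is red too. For N = 2, M = 1 the answer is 0, as above; in general it comes out to (M-1)/(N-1), the mirror image of Pr(R_2|R_1 B).

```python
from fractions import Fraction
from itertools import permutations

def pr_R1_given_R2(M, N):
    """Pr(R_1|R_2 B) by brute force: enumerate every equally likely
    ordered pair of distinct balls, condition on the second being red,
    and count how often the first is red too."""
    red = set(range(M))                           # label balls 0..M-1 red
    pairs = list(permutations(range(N), 2))
    second_red = [(a, b) for a, b in pairs if b in red]
    both_red = [(a, b) for a, b in second_red if a in red]
    return Fraction(len(both_red), len(second_red))

# Briggs' example: one red, one white -- red on draw 2 forbids red on draw 1.
assert pr_R1_given_R2(1, 2) == 0

# And in general Pr(R_1|R_2 B) = (M-1)/(N-1), mirroring Pr(R_2|R_1 B).
for N in range(2, 12):
    for M in range(1, N):
        assert pr_R1_given_R2(M, N) == Fraction(M - 1, N - 1)
print("Pr(R_1|R_2 B) = (M-1)/(N-1): no direction of time in probability")
```

The enumeration never peeks at draw order when counting: conditioning on the second draw constrains the first in exactly the way conditioning on the first constrains the second.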

Again I say, this will have profound consequences in all areas of science, as we’ll see.

*Subscribe or donate to support this site and its wholly independent host using credit card click here*. Or use the paid subscription at Substack. Cash App: $WilliamMBriggs. For Zelle, use my email: matt@wmbriggs.com, and please include yours so I know who to thank.

# Comments

In the last two paras, you had it right the first time: present knowledge affects our understanding of, or knowledge about, past events – but not the events themselves. P(E) never changes; our estimates of it do.

First you say:

“Knowledge of the future, or assumed knowledge, acausally affected knowledge of the first draw.”

You follow that with:

“Knowledge of the future acausally affects the past.”

Those are VERY different statements, and the second doesn’t follow from the first …

And the first seems trivially true. If I could see into the future and know that I won the lottery, it would logically follow that I must have bought a lottery ticket … but I fear I don’t understand how that is “acausal”. All that’s affected is my knowledge … and what is the “cause” of knowledge?

In perplexity, your friend,

w.

Willis,

Poor writing. I had thought it was understood by that point that I meant epistemologically.