# Class 17: Random Means Unpredictable – To You!

Uncertainty & Probability Theory: The Logic of Science

Link to all Classes. Jaynes’s book (first part).

# Video

Rumble

Bitchute (often a day or so behind, for whatever reason)

# Lecture

We are reading Chapter 3.8 of Jaynes.

Watch this video of an audience-participation lecture featuring a true random event. My first ever job was on Random Lane. True.

We are still using the same information as last time. B = “an urn/object/device with M red/success/1 balls/states, and N total.” From this we deduce there are N − M white/failure/0 balls/states. We already know how to compute Pr(R_1|B) and the like.
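As a quick numerical sketch of the last class's without-replacement case (the counts M = 3, N = 10 are made up purely for illustration):

```python
from fractions import Fraction

# Hypothetical urn per B: M red/success states out of N total.
M, N = 3, 10

# Probability the first draw is red, deduced from B alone.
pr_R1 = Fraction(M, N)

# Without replacement (last class), the second draw conditions on the
# first: one red ball is gone, and so is one ball overall.
pr_R2_given_R1 = Fraction(M - 1, N - 1)

print(pr_R1)           # 3/10
print(pr_R2_given_R1)  # 2/9
```

Exact fractions are used rather than floats because these probabilities are deductions from B, not measurements.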

But let’s add to B this: “We take a ball out and put it back in/take a reading and restore the object or device.” This gives us B’. In old urn language we’d say we’re “sampling with replacement.” Take a ball out, put it back in, and draw out another, etc.

So what is Pr(R_1R_2|B’)? We can use the product rule:

$$\Pr(R_1R_2|B') = \Pr(R_2|R_1B')\Pr(R_1|B')$$.

Pr(R_1|B’) we already know, since on our first draw/measurement nothing has yet been replaced. But what is Pr(R_2|R_1B’)? Now you might guess the right answer here, but that is not the point, because you can’t always guess. You must read Jaynes on this. There is no point in me repeating what he has said so eloquently. The link to his book is above. Below, I will assume you have read it.
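Assuming, as the author does, that you have read Jaynes's Chapter 3.8: the result there is that restoring the ball restores the state of knowledge, so Pr(R_2|R_1B') = M/N, the same as the first draw. A sketch with made-up counts (M = 3, N = 10):

```python
from fractions import Fraction

M, N = 3, 10  # hypothetical counts stated in B'

# With replacement (B'), per Jaynes Ch. 3.8, putting the ball back
# restores the information: Pr(R_2|R_1 B') = M/N, same as the first draw.
pr_R1 = Fraction(M, N)
pr_R2_given_R1 = Fraction(M, N)

# Product rule: Pr(R_1 R_2 | B') = Pr(R_2 | R_1 B') * Pr(R_1 | B')
pr_R1R2 = pr_R2_given_R1 * pr_R1
print(pr_R1R2)  # 9/100
```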

What does random mean? Only one thing: unpredictable on the information you assume. Here, that information is B or B’ or whatever you choose. By unpredictable I mean a probability in $(0,1)$, and not in $\{0,1\}$; which is to say, any probability that is not 0 or 1. Not a local or necessary falsity or truth. This applies to the proposition of interest, say A, relative to the information you assume, say B. Thus $\Pr(A|B) \in (0,1)$.

Some use the word randomization. What could that mean? Making the proposition of interest unpredictable. That and nothing more.

It may be that for some C, Pr(A|C) = 1 (or 0), while Pr(A|B) is somewhere in (0,1). Indeed, some C always exists such that the proposition of interest is known: namely, when we know the full causes and conditions of A. So another way to say random is unknown cause.

Take an urn as B describes it. What order are the balls inside? You do not know. There is no information about order in B. None. Zero. Zilch. There is some causal information about A in B, like the material cause, i.e. the number and colors of balls. There is no information about the efficient cause. That means

$$\Pr(A|B) \equiv \Pr(A|B+\mbox{randomized})$$.

Randomizing, say by shaking the urn, provides no additional or new information to B. But let’s think about what you might do when picking a ball out. You do so, note it, and put it back in. If you were very careful, you could place it so it’s on top, for instance, so that when you go back in to grab another you might well grab it again. But that is not B. That is B + “careful placement”. Pr(R_1|B) remains forevermore M/N.
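One way to see that shaking adds nothing: if your draw is already blind, i.e. your information about the balls' positions is symmetric, then shuffling the order first cannot change the probability relative to that information. A Monte Carlo sketch (the urn counts are invented for illustration):

```python
import random

M, N = 3, 10
urn = ["red"] * M + ["white"] * (N - M)

def draw_first(shake: bool) -> str:
    balls = urn[:]
    if shake:
        random.shuffle(balls)  # "randomizing" the physical order
    # Our draw is blind either way: we grab a position we cannot see,
    # which is exactly the symmetry of information B encodes.
    return balls[random.randrange(N)]

trials = 100_000
results = {}
for shake in (False, True):
    reds = sum(draw_first(shake) == "red" for _ in range(trials))
    results[shake] = reds / trials
    print(shake, results[shake])  # both hover near M/N = 0.3
```

The simulation of course builds our ignorance in by construction; the point is that the shuffle is redundant given that ignorance.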

If you are making your first draw, you only have B. You still do not know the order or the cause. There is no need for “randomization”, unless you have outside information about the order, which you would then add to B.

This is only our first time to discuss “random”.


1. McChuck

Once again, with feeling, Copenhagen Interpretation delenda est. “Scientists” preach that reality does not exist, because they cannot know the tiniest details to the most exacting precision. They have confused and conflated lack of information with lack of existence for a century now. They have wasted billions of dollars and thousands of minds on literal nonsense.

@McChuck — GBY — @Briggs — I am not perfect at reading what you are saying. But “I Feel” (as in the sense of when to transition from slipping the clutch to not having the clutch engaged at all) that you are pointing at the abyss that cannot be defined. If one metric overrides, we bring on another metric to confound. The murkier the metric the better it is to wave your hands with. If you can color a heat map, you can make anyone shiver in fear.

The heat waves have me shivering in my house right now at 57 F, still early August.

3. Hagfish Bagpipe

Super entertaining. Watched the vid with sound turned down. Linen jacket great for summer. Even a tie is not needed in August. You’re on fire, like an Olympic fencer dispatching your nefarious lexicographic enemies… and at the end of my refrain, strike home! What was that, the Three Musketeers? D’Artagnan? Dumas? Pere or Fils? Whatever, you’re totally shredding it, dude. Two comments. Okay, everyone has their cross to bear. But I like watching you doing your thing. It’s at least as amazing as these top level Olympic athletes. You should have taken up fencing. I think you would have been great at that.

4. Hagfish Bagpipe

Cyrano de Bergerac — that was the versifying swordsman.

5. Far too complicated. P(E) -> 0.5 means the outcome is “random”, because P(E) = 0.5 means we have no info about it. Period, the end.

6. Briggs

Paul Murphy,

That is not so. First, as I have repeatedly emphasized, there is no such thing as “Pr(E)”. There are only things like Pr(E|B) for some information B. That B means we have info about E. Period.

What we can say is that Pr(E|B) = 0.5 is maximally unpredictable. That will turn out to be important when we start looking at entropy.

7. Sorry about being unclear. To me P(E) with no B means B is a null set. Just crappy shorthand.

8. Briggs

PM,

Still doesn’t work. Take Pr(E| I know nothing about E). That does not equal 1/2. It has no value, except to say it is in [0,1].

9. Yes, I understand what you mean – and no, I don’t think it’s right.

If I have it right, the big difference is in our understanding of what probability is. I maintain that anything we can think of as capable of either happening or not happening has a fixed probability that’s either 1 or 0, and therefore that values for P(E) are either 1 or 0 or an estimate (think p-hat) reflecting what we think we know about E. Thus P(E|e_i) = 1 iff all P(e_i) = 1, but, of course, we usually don’t know enough to list even the more proximate members of the e_i, and almost never know whether they are real (happen) or not.

In this view of probability the estimates we use reflect the information we have. So P(Heads), for example, is actually fully determined by the forces acting on the coin – so 1 or 0 – but we usually don’t measure those forces, preferring instead to say we have no info about the outcome until it happens – so 1/n, or in this case 1/2. Note that each P(e_i) is also a P(E), so time is an order-based illusion here, and the P(e_i) are not causal for E – they’re merely the necessary and sufficient conditions for E to be either real/true/happen or not.

I believe you did some work in crypto – consider “011000010011101101” – the encoder’s goal here is to reduce the info content of each bit to as nearly zero as possible by setting p(1) -> 1/2 for each position without garbling the message, and the decoder, of course, tries to impute meaning to the bits by finding a reason to think p(bit_i) not = 1/2. For example, pick n a prime > 1e100; pick m some small number such that p(n + iota (count to) m) contains a prime – gets arbitrarily close to p(0) and p(1) = 1/2 and is unbreakable without knowledge of both n and m (or smallish ranges for them).

10. Briggs

PM,

Yes, I understand what you mean, too, and your conclusions are wrong for the reasons I have demonstrated.

First of all, it is always wrong to write “Pr(E) = anything” without the information used to determine the probability. When causes and conditions (call that state of knowledge C) of E are fully known, then Pr(E|C) = 0 or 1. All other information gives different probabilities. Saying you know nothing about E gives no fixed number. How can it? You can’t get something from nothing. Your “nothing” is not nothing, but something.

The goal of the cryptographer is not to make the message with a certain probability, but to make the message hidden. The bits aren’t set to have probabilities at all. For instance, take one-time pads. Each bit or character has its own separate key. The message is unbreakable. It makes Pr(message = i | one key per character & this encrypted character) = 1/m, where m is the number of possible characters. It’s 1/2 with 0/1, of course, but it’s with respect to the message and not the key. You have it backwards.
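A sketch of the one-time-pad point made above (illustrative code only, not any particular library): because each byte gets its own independent key byte, every candidate plaintext of the same length is consistent with the ciphertext under exactly one key, which is why Pr(message = i | cipher) = 1/m with respect to the message.

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each byte with its own independent key byte.
    assert len(key) == len(message)
    return bytes(m ^ k for m, k in zip(message, key))

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))  # one fresh key byte per character
cipher = otp_encrypt(message, key)

# Decryption is the same XOR with the same key.
assert otp_encrypt(cipher, key) == message

# Without the key, any plaintext of the same length is equally consistent
# with the ciphertext: for each candidate message there is exactly one key
# producing this cipher. The cipher alone carries no information about
# which message was sent.
other = b"RETREAT AT DUS"  # hypothetical rival plaintext, same length
key_that_fits = bytes(c ^ o for c, o in zip(cipher, other))
assert otp_encrypt(other, key_that_fits) == cipher
```

The probability is on the message given the cipher, not on the key bits themselves, which is the distinction drawn in the reply above.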