Statistics

Probabilities For Unique Events, Like Nuclear War, Exist

This will appear obscure to you. It isn’t. The errors here are fundamental, and cascade all throughout science. They account for, in part, why science has become so bad. The politics are more important, sure. But those politics needed a lever in bad science. Bad science arises from over-certain science. And over-certain science is created, all too often, from misunderstanding and misapplying probability.

EVENTS, DEAR BOY. EVENTS

Our subject is “unique” events, and whether probability can be “assigned” to them. Interest arose from a post at Marginal Revolution, “Allegedly Unique Events”, specifically about whether there can be a probability of nuclear war arising from our latest moral panic over Ukraine.

We answered the question before about the probability of nuclear war. I won’t repeat all that here, and instead discuss “unique” events.

Here’s the answer: all events are unique. If by event we mean a contingent observable. Something that happens, and can be observed or measured, in the real world. Events proper, though, are only a small subset of propositions around which we can form probabilities. There are an infinite number of propositions that have nothing to do with things being observed. Such as (to name only two) numbers and counterfactuals.

I’ll give examples of all that below. First, why fight over “unique” events and probability? Because of magical thinking and the belief that probability is real or a power—like electricity or magnetism, some kind of force that can act remotely and mysteriously, in a way nobody has defined, nor dares define.

Laugh if you like, but this evolved into the mathematical theory of frequentism, the idea that probability is forever unknown, except at “the limit”, which is to say, never. This is why probability is “estimated” from sequences of events: not known: estimated. Events in frequentism are strange curious things, never quite defined, and left ephemeral.

Events in frequentism must be embedded in an infinite sequence of events which are exactly precisely like all the other events in that sequence, except that all are different, too. Different only in their randomness, which is the power of probability.

Randomness is possessed by events. It is an essential, unremovable part of events. But it cannot be measured directly. You can only see it by looking away, as it were, and measuring the behavior of events. And, even then, you never come to a complete knowledge of it—except at infinity.

Ever thought about this before? It is as strange as it sounds. This really is the theory. Stated as starkly as this, it appears ridiculous. But that is only because it is ridiculous. Frequentism is believed because it is taught, and its mathematics emphasized; the nature of events and probability is not much pondered. That’s because just as the student becomes curious about the metaphysics of the system, he is bombarded by more math. Which, if he is usual, he scarcely remembers. Except that it was hard and bizarre.

Take coin flips, the paramount exemplar of a frequentist “event”. It is beloved because it really does seem we can embed flips in an infinite sequence of similar events. We cannot, of course, because we’ll never see the end of them.

Now if we knew the starting conditions and causes of each flip—the weight of the coin, the spin given to it, the force impelled, and all that—we would know the outcome. It would be certain. (It has been done many times in real experiments.) The probability of a head would be “extreme”, i.e. 0 or 1, as the case might be. It would not be 1/2. It would never be any number beside 0 or 1—if we knew the causes.

It is impossible for all the causes operating on the coin not to vary between flips, though most of them are negligible, like minute fluctuations in gravity caused by ancient black holes whose gravity waves are only just now reaching us. Or in the quantum effects of particles gaining more actuality than potentiality, only to sink back into states of predominant potentiality. And so on. There is always something going on, and different, between flips.

If we did not know those causes, and all—dear reader: take this word in its strictest sense—all we knew was that there are two outcomes, and only two, and one must happen, then the probability is deduced, and therefore known, to be 1/2.

A frequentist ignores this, and instead “experiments” with flips. He takes no care over the various causes, never really thinks about them at all, except in a loose sense, firmed up only at strange times; times which belie either his prejudices or his strong experience. He believes that probability, the randomness, is part of the whole experimental setup. Somehow. He never, not ever, explains how. Because, of course, he cannot.

He begins his “experiment”. After a while, he tires, and ends his flipping. He sums the heads and inputs that number into some odd formula, which allows him to “estimate” “the” probability the coin, the next time it is flipped, will show heads.
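The procedure just described can be sketched in a few lines. This is a minimal simulation, not anything from the post: the “true” chance of 0.5, the flip count, and the seed are all illustrative assumptions.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

N_FLIPS = 1000  # the experimenter "tires" after this many flips

# Simulate flips of a coin assumed to land heads half the time.
heads = sum(random.random() < 0.5 for _ in range(N_FLIPS))

# The frequentist "estimate" of the probability of heads: the raw ratio.
estimate = heads / N_FLIPS
print(estimate)
```

The ratio comes out close to, but almost never exactly, the deduced 1/2; the frequentist treats this number as his approximation to a probability that, on his theory, is only fully known “at the limit”.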

Well, in the case of coin flips he could have saved himself the trouble. Not always, though. In situations where we don’t know what the causes are—knowledge of symmetry coupled with what is known about physics is some way toward knowing some of the causes in coin flips—we can use observations to calculate probabilities of future events. All that is required is to believe the causes of the thing measured or observed are roughly the same, but differ in small, unmeasurable, and unknown ways. Ways that we can express as uncertainty in unknown similar events using probability.

This is never done, though. Except in rare cases. Even Bayesians don’t do this. This is because all Bayesians are first trained as frequentists. They are just as obsessed with “estimating” probability as frequentists—instead of deducing it. Strange, no?

The result of all this is that concentrating on those estimates, instead of moving on to probability, and being vague or wrong about causes, produces massive over-certainties. The reason any good comes from these methods is because not one man anywhere is a consistent frequentist or Bayesian. It is impossible to remain faithfully adherent at all times to the theories. Reality always intrudes, and the uncomfortable parts of the theories are not brought to mind. But never mind all that now.

Instead of the magical beliefs in “randomness” which lead to “estimates”, know that we can go right to probability. And probability can be deduced for events of any kind—as long as you can supply the premises relating to the event, which describe what you know about it.

Again, all events—observable, measurable contingent bits of Reality—are caused: made to happen. Since the causes change, even by some fractional unimportant amount, all events are unique. We can assess probability for events, which nobody disputes, yet since all events are unique, we can assess probability for unique events. QED.

Now there are certain things lacking in the proof given here. But they are easily supplied. I have done so, here and here. Enjoy.

Which you are not likely to if your career hinges on the old, magical beliefs.

BEYOND EVENTS

There are more than events. Events are only a special case of propositions. Propositions can be of any kind. The Marginal Revolution post gave two examples of propositions: a potential measurable event (the world blowing up) and a counterfactual (the world might have blown up long ago). Counterfactuals are not, of course, measurable.

Neither are numbers.

Example: given knowledge of arithmetic and your background knowledge of the definitions of the symbols used, what is the probability of the proposition “4 > 2”?

I know you know, but perhaps have forgotten that numbers are not observable. They are not contingent events. Yet we can form probabilities about propositions regarding them. If you rebel at this, being too well trained in the current academic regimen, change the proposition to this “The nth digit of pi > the nth digit of e, where n = some huge number that you can write down but nobody has yet”. Say, n = 100^100^100^100^100^100 or whatever.

This proposition has no probability. No proposition has a probability. Because probability does not exist!

The proposition only obtains a probability in relation to some premises. When you—yes, you—supply conditions (the premises) on which we can calculate the probability.

You might be a genius mathematician and have worked out the formula for the nth digit of any “irrational” (telling word!) number. In which case your probability is either 0 or 1 depending on what your formula says—even if your formula is mistaken! (If you understand this, you understand all.) Or you might be a plodder like me and only figure the digits have to be 0-9, and that’s it. Then it’s fifty-fifty.

Our unbreakable rule is: change the premises, change the probability.
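The rule can be checked by direct enumeration on the pi-versus-e digit example. Under the coarse premise “two outcomes, one must happen”, the deduced probability is 1/2. Under finer premises—each unknown digit is one of 0–9, nothing known favors any value, nothing ties the two digits together—enumeration gives a different answer. This sketch, and the finer premise set, are illustrative assumptions:

```python
from fractions import Fraction

# Finer premises: each digit is one of 0-9, with nothing favoring any
# value and nothing linking the two digits. Enumerate all 100 pairs.
pairs = [(x, y) for x in range(10) for y in range(10)]

p_greater = Fraction(sum(x > y for x, y in pairs), len(pairs))
p_less    = Fraction(sum(x < y for x, y in pairs), len(pairs))
p_tie     = Fraction(sum(x == y for x, y in pairs), len(pairs))

print(p_greater, p_less, p_tie)  # 9/20, 9/20, 1/10
```

Same proposition, two premise sets, two different deduced probabilities (1/2 versus 9/20, the gap owed to ties): change the premises, change the probability.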

That’s a long, convoluted way of saying all probability is conditional, just like all logic statements are conditional. If you’re inclined to disagree, don’t forget that the tacit premises—knowledge of the symbols and grammar used to write down any proposition—are always among the conditions.

Now let’s do counterfactuals. Here’s a quote from MR:

“Do such respondents really believe that the probability of a nuclear war was not higher during the Cuban Missile Crisis than immediately afterwards when a hotline was established and the Partial Nuclear Test Ban Treaty signed?”

Doubtless you think that counterfactual probability (now) was indeed higher during the Crisis than after. It is no longer an event, because the time has passed. It is now an unobservable counterfactual. But we can still think of it as if the time has not passed, and that we are living in it now, the conclusion uncertain.

To come to a probability, which need not be a number, what would you do? Right. You’d think about all the causes of the thing in question. What were people thinking, and why? What were the nuke capabilities of the time? What was happening in the USSR and what did that mean to the thoughts of its rulers? What was happening in the USA and what did that mean to the thoughts of its rulers? And so on. Causes.

You won’t know them all. Nobody can. So you have to paper over your ignorance of the causes using uncertainty. In some propositions, no knowledge of cause is even possible, and so probability is all we have.

THE FINAL SHOCKING CONCLUSION

It’s cause. For events, anyway. Knowledge of cause is the goal. It is not always attainable; it is attainable only in the crudest, simplest of events. If during the Crisis you knew all the causes behind the decisions of all involved, the probability for you would have been certain. Of course, only God himself can know all these causes. The best any man can do is approximate.

Which we do always, all the time, at all moments, really. The future is a string of contingent events. Of which the outcomes are not known with certainty. But we have to act, which means coming to some kind of grasp of the uncertainty of the events relevant to us, or rather of those we act upon. For those that act on us unbidden, the unknown unknowns, by definition there is nothing we can do. Shit happens. To coin a phrase.

For everything else, we have probability.

Buy my new book and learn to argue against the regime: Everything You Believe Is Wrong.




  1. I think that’s what’s frustrating most people. They don’t know anything about the potential causes of war, and so are desperately trying to fill in the gaps to arrive at some kind of seat-of-the-pants estimate of what will happen. Because having that estimate at least makes us feel a little bit better, because it feels like it’s less unknown.

    But therein lies the rub. The more I watch, the more I realize there’s a lot going on that we can’t know, at least not yet. It makes the confident statements by retired military personnel almost comical.

  2. Aleluya Number 1 “Who Will Answer?”

    ‘Neath the spreading mushroom tree
    The world revolves in apathy
    As overhead, a row of specks
    Roars on, drowned out by discotheques
    And if a secret button’s pressed
    Because one man has been outguessed
    Who will answer?

    Is our hope in walnut shells
    Worn ’round the neck with temple bells
    Or deep within some cloistered walls
    Where hooded figures pray in halls?
    Or crumbled books on dusty shelves
    Or in our stars, or in ourselves
    Who will answer?

  3. Isn’t this related to Bertrand’s paradox? Frequentists will often express confidence in their theories by showing a long string of observations in situations where it is possible. Don’t believe the coin flip has a 50% chance of coming up heads? Here, let me flip a coin 1000 times. Or let’s roll a die 10,000 times to see its probability. Or spin a roulette wheel repeatedly, etc. (Amusingly in real life physical defects usually cause these distributions to not be uniform at high scales, which is why some frequentists prefer experiments like “let’s run a computer program designed to output heads 50% of the time and see if in the long run it outputs heads 50% of the time.”)

    But take the act of “choosing a random chord in a circle.” If you watch videos on the subject you will see people absolutely certain that the correct probability of being longer than the side of an inscribed equilateral triangle is definitely 1/3 (or 1/2 or 1/4) because they made a computer program to select a random chord and got that ratio in the long run. Of course, the ratio will depend on the selection process used. What this in turn suggests is that the event of “choosing a random chord” does not fit into a unique random sequence of events. Each sequence gets you a well defined probability, but you get a different one depending on how you approach things (much like a set having multiple limit points.)

    But if even an “abstract and mathematical” example like what we see in Bertrand’s paradox can have no unique sequence of events to belong to, why would we ever believe that a real world event would need to belong to a unique sequence of events? That is, even if we believe that it is possible at least in theory to imagine unique events coming from a string of “random” events (so that it would be possible to not just see the same event happening again and again with the same set up) there still isn’t a good reason to believe that a real world event will have a unique probability, unless it is explicitly spelled out how that sequence of events is formed.
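The commenter’s point about Bertrand’s paradox can be reproduced directly: three equally plausible “random chord” procedures, three different long-run ratios. A minimal simulation for the unit circle follows; the sample size and seed are arbitrary choices, and the three methods are the standard ones from the paradox.

```python
import math
import random

random.seed(0)
THRESH = math.sqrt(3)  # side of the inscribed equilateral triangle (unit circle)
N = 100_000

def chord_from_endpoints() -> float:
    """Method 1: chord joining two uniformly random points on the circle."""
    a, b = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return 2 * math.sin(abs(a - b) / 2)

def chord_from_radius() -> float:
    """Method 2: chord whose midpoint is uniform along a random radius."""
    d = random.uniform(0, 1)  # distance of the midpoint from the center
    return 2 * math.sqrt(1 - d * d)

def chord_from_midpoint() -> float:
    """Method 3: chord whose midpoint is uniform over the disk."""
    while True:  # rejection sampling for a uniform point in the disk
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return 2 * math.sqrt(1 - (x * x + y * y))

for method in (chord_from_endpoints, chord_from_radius, chord_from_midpoint):
    p = sum(method() > THRESH for _ in range(N)) / N
    print(method.__name__, round(p, 3))  # near 1/3, 1/2, and 1/4 respectively
```

Each procedure is internally consistent and converges; they simply answer different questions, because “choose a random chord” is not a complete premise set.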

  4. Closely related might be a book by Charles Perrow, “Normal Accidents,” which discusses these types of rare events–accidents–that escape probabilistic (frequentist) prediction. His argument is that, despite their rarity, they are normal because human error and mistakes are bound to happen. Also, due to complexity and tight coupling of system integration (technology).

    The premise of the book was triggered by the 1979 Three Mile Island nuclear plant incident, and published in 1999 (before 9/11). He added an endnote on the Y2K problem, which of course, when read after the fact appears hyperbolic–hindsight being 20/20. Though, as a sociologist, he’d appear to be handicapped in assessing technology–though that doesn’t stop him from opinionating.

    I’m fairly certain I read it in the wake of 9/11 in an effort to come to grips with that unusual event. Its greatest shortcoming is probably assessing risk and probability as Briggs addresses in this post. But it is a fascinating read for the events–the normal accidents–discussed therein.

  5. The overlords have a mountain of filth over in the Ukraine.

    Count on it that they have been running every kind of bad business possible through there. Nasty finance, human/pedo/prostitution/porn trafficking, drugs, bio-weapon labs, and on and on and on and on.

    Bits and pieces of the putrid mess have popped up more than enough to spell it all out.

    They are absolutely insane about it not all getting exposed.

    That’s what’s driving the situation and I’m not sure that you can statistically model that kind of insanity.

    That kind of insanity is what is driving them to use fear to stampede the sheep into war.

    But we can stay confessed. If we don’t fear death the overlords got nothing on us.

  6. If probability need not be a number, then what may be meant by comparing two probabilities, or saying, as you do, that the probability was higher then?

    All events are unique as all particles are — but in physics theory may be advanced by coarse-graining. In this way events can be grouped together, such as a sequence of coin flips.

  7. I like the way you think – sir! It’s a shame you ‘weren’t around’ (and known to me) 20 years ago when I was active in the same general cause.

    My conclusion then, was (rather than trying to reform stats) to try and downplay the role of statistics in (real) science. I was happy to use frequentist stats, but mainly as a means of summary to aid in the interpretation of marginal cases – but my first line response to such uncertainty about ‘significance’ was always to look at more, and independent, observations – in light of clear and precise theoretical predictions.

    But now that science is dead among professional researchers, I have given up trying to sort the odd grain of truth from the bushel of lies, errors and deliberate misleadings that is published research.

  8. “It makes the confident statements by retired military personnel almost comical.”
    I am just such a retired military person and I am absolutely confident that World War 3 could happen at any moment; or not. (certain of uncertainty)
    Russians admire strong leadership, but so do most Americans. They have it, we don’t.
    What exists is opportunity to seize some land (Belarus, Ukraine, South Ossetia) with very little risk, land that officials of the former Soviet Union still believe is theirs for the taking. Historically, Russian leadership doesn’t worry much about the proletariat. So what 50 million die? What matters is who dies. Half of theirs and all of ours, hooray for a successful program.

  9. Isaac Asimov wrote “Foundation Trilogy” and its premise is that the behavior of large groups of humans can be predicted with some accuracy. By the second book, if I remember right, the “Mule” challenges this predictability; a charismatic leader can disrupt predicted group behavior.

    Nearly all prognosticators, predictors and pundits predicted a Hillary Clinton presidency. They were mistaken and that kind will be mistaken again and again, in part because of “false consensus” and other predictable (!) behaviors. These pundits *wanted* HRC to be president and because of that, reduced the weighting on factors that actually turned out to be weighty and in large number; one such factor is the vast number of deplorables and what exactly motivates deplorables such as myself. How can a prognosticator correctly weigh a factor that he does not consider weighty?

    In a way, the attack on Pearl Harbor illustrates this phenomenon. The Kaena Point radar station saw the incoming Japanese. But they were expecting Americans to be arriving from California. So this flight is coming from the wrong direction. Ignore it.

    The Russians, many or most of them in my opinion, do not see taking Ukraine as an invasion. It is already theirs. Ukraine should never have declared independence from the Soviet Union. After all, the Americans did not allow the southern states to secede from the United States.

    But Ukrainians are not Russians; though there be many ethnic Russians in the Ukraine. This is why splitting Ukraine seemed almost reasonable; after all, micro-states are a “thing” with the Russians; study the situation with South Ossetia.

  10. As long as we are talking about Foundation, let’s dive deeper. Lots of spoilers follow. You have been warned.

    The Mule was not only a charismatic leader but had powerful psychic powers that allowed him to manipulate the emotional states of an entire planet, and with such fine control that he can turn his enemies into his followers. The story initially says that he was a threat to the Seldon Plan precisely because he had so much individual power; even a normal “great man” is actually more of a product of his followers and his environment than himself, and so can be predicted by the law of large numbers. But the Mule is such an outside force and has such an ability to twist things to his whim that he is outside the laws of psychohistory.

    However it’s not clear that things are quite so clear cut. He’s brought down by the second foundation, a group of behind-the-scenes manipulators who make sure that predictions go off without a hitch, even if some pesky bit of nature or chance gets in the way. Since the whole point of the Mule is that he could never have been predicted or fit into calculations, he cannot have been the reason for such a second foundation. From the very beginning Hari Seldon was rigging the game, using his predictions as edicts for the future with behind-the-scenes actors to ensure that his vision was maintained; the Mule was just an event several orders of magnitude larger than they’d thought they’d have to adjust for.

    The parallels to real world predictors are not hard to draw.
