It is in one sense fortunate that the mathematical, or rather quantitative, roots of probability began with gambling. Routine gambles are easy to understand, and the calculations are not only easy but, as models, have great applicability to actual events. All know the story of how quantitative probability flourished, and flourishes, from these beginnings.
On the other hand, it has been difficult for probability to remember that its more robust, fuller, and certainly more supportive roots are non-quantitative. That gambles were easily quantifiable and made skillful models produced the false idea that all probability is, or should be, quantitative. And this led to the main error, discussed last time, that probability exists. It also produced a second error, which I won’t examine here (but have at length in Uncertainty), that probability is subjective.
Given the rules of craps—our premises—we can deduce the probability of winning and losing. We can also apply this model to real dice. And the same is true for card games, slot machines, and so on. These models have been found to work well. But even casinos change out worn dice and bent cards, knowing the models are no longer as applicable.
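The deduction from the rules alone can be made explicit. As a sketch, here is the pass-line win probability in craps computed purely from the premises—the probabilities of each dice sum and the rules of the come-out roll and the point—with no observation of real dice required:

```python
from fractions import Fraction
from collections import Counter

# Probability of each sum of two fair dice (the premises).
ways = Counter(a + b for a in range(1, 7) for b in range(1, 7))
p = {s: Fraction(n, 36) for s, n in ways.items()}

# Come-out roll: 7 or 11 wins, 2/3/12 loses; any other sum becomes
# the "point", which must be rolled again before a 7 appears.
win = p[7] + p[11]
for point in (4, 5, 6, 8, 9, 10):
    # Chance of setting this point, times chance it repeats before a 7.
    win += p[point] * p[point] / (p[point] + p[7])

print(win, float(win))  # 244/495 ≈ 0.4929
```

The shooter wins with probability 244/495, just under one half—deduced, not observed.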
These models work well for single gamblers (with assumed fortunes), but they cannot be applied to groups of gamblers, because how much, how long, and how many people gamble cannot be captured by the simple premises. Here I agree with Taleb when he says about groups of gamblers, “Some may lose, some may win, and we can infer at the end of the day what the [casino’s] ‘edge’ is, that is, calculate the returns simply by counting the money left with the people who return.” This observational data is used to infer premises for a model beyond the premises available per game (which are easy).
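Taleb’s counting procedure is easy to mimic. A minimal sketch, with hypothetical numbers of my own choosing (a pass-line-like win probability, fortunes of 20 units, at most 100 one-unit bets each):

```python
import random

random.seed(42)

# Hypothetical setup: each gambler bets 1 unit per round on an
# even-money game with win probability 0.4929 (the craps pass line),
# starting with 20 units and quitting on bust or after 100 rounds.
P_WIN, FORTUNE, ROUNDS, GAMBLERS = 0.4929, 20, 100, 10_000

total_left = 0
for _ in range(GAMBLERS):
    money = FORTUNE
    for _ in range(ROUNDS):
        if money == 0:  # busted: forced to stop
            break
        money += 1 if random.random() < P_WIN else -1
    total_left += money

# Infer the edge "by counting the money left with the people
# who return", as Taleb puts it.
print(f"average fortune brought in:  {FORTUNE}")
print(f"average fortune walked out:  {total_left / GAMBLERS:.2f}")
```

The per-game odds were deduced; the group-level result must be counted, because it depends on how long and how heavily people play.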
Taleb continues: “We can thus figure out if the casino is properly pricing the odds.” The odds for each single game are deduced, so that means, at first glance, that the overall odds are also correct. But sometimes it pays for casinos to change single-game odds. If few win at some slot machine, few will use it (after word spreads); likewise, if one pays off well, more will use it. Observed behavior can help slide the single-game deduced odds to entice more gambling. Since behavior is volatile, so will these models be.
I, like everybody, agree with Taleb that when a gambler goes bust he must stop playing. For some reason he calls going bust an “uncle point” (crying uncle?). Everybody also knows that when a certain gambler reaches an “uncle point”, other gamblers might still have money. This seems to be something of a revelation to Taleb, though, who calls the models applied to groups of gamblers “ensemble probability” models, and those applied to single gamblers (with known or assumed fortunes) “time probability” models.
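The distinction can be sketched in a few lines. Simulating a hypothetical, slightly unfavorable even-money game (my numbers, not Taleb’s) shows the unremarkable fact that some gamblers hit the “uncle point” while others still have money, and that an “ensemble” average over the group hides which is which:

```python
import random

random.seed(1)

# Hypothetical even-money game, slightly unfavorable (assumption).
P_WIN, START, ROUNDS, N = 0.49, 10, 200, 5_000

busted, total = 0, 0
for _ in range(N):
    money = START
    for _ in range(ROUNDS):
        money += 1 if random.random() < P_WIN else -1
        if money == 0:  # the "uncle point": play must stop
            busted += 1
            break
    total += money

print(f"fraction reaching the uncle point: {busted / N:.2f}")
print(f"ensemble-average fortune (busted and not): {total / N:.2f}")
```

The ensemble average is one number; the individual time paths, some of which terminate at zero, are quite another.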
Taleb then argues, what isn’t a secret, that sometimes people use the wrong model. They’ll use a single-gambler model for a market (group), and a group model for a single gambler. I don’t think this happens often, however, not with stocks, anyway, with so much money involved.
He says, “I effectively organized all my life around the point that sequence matters and the presence of ruin does not allow cost-benefit analyses; but it never hit me that the flaw in decision theory was so deep.”
Well, of course, in the presence of ruin, i.e. if one is ruined, the cost-benefit analysis is not flawed; it is as easy as can be. That the possibility of ruin exists does not reveal a flaw in decision theory, either.
I agree that decision theory has many flaws, but I see them differently. Many formal quantitative methods allow for impossible values (infinities or other absurdly large numbers), or they assume probabilities are real, or they conflate probability and decision. Probability is not decision.
Taleb is concerned with “tails”, which is to say, large values. Now actual observed large values may or may not be well modeled; often they are not, and then Taleb’s criticism is spot on. For instance, normal distributions are as overused as the word “like” is in ordinary conversation. Other times there are possibilities in decision analysis for “tail” values that can’t be seen, and that’s a flaw with either the probability model or decision criterion (or both).
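The overuse is easy to demonstrate. A sketch, with illustrative numbers of my own (a Student-t with 3 degrees of freedom standing in for heavy-tailed observations): generate the data, then ask how often it exceeds four standard deviations, a region to which a normal model assigns almost no probability.

```python
import random

random.seed(7)

def t3():
    """One draw from a Student-t with 3 degrees of freedom,
    built as standard normal / sqrt(chi-squared / df)."""
    z = random.gauss(0, 1)
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(3))
    return z / (chi2 / 3) ** 0.5

N = 100_000
data = [t3() for _ in range(N)]

sd = 3 ** 0.5  # the true standard deviation of a t with 3 df
empirical = sum(abs(x) > 4 * sd for x in data) / N

print(f"observed P(|x| > 4 sd): {empirical:.4f}")
# A normal with this sd puts only about 0.00006 beyond 4 sd;
# the heavy-tailed data exceeds that by a large factor.
```

When the tails matter and the model is normal, the "tail" values are not merely mismodeled; they are declared all but impossible.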
Somehow Taleb believes people, unless they possess genius, cannot figure probability if they do not have “skin in the game”, his favorite marketing phrase. This is false, as is obvious. People who do not give a rat’s rear about an outcome are less likely to attend to the problem as closely as those who do care, which is clear enough. But having money on the line does not bring the psychic gift of probability awareness. Indeed, gamblers with much “skin in the game” are apt to be the worst estimators.
That’s enough for Part II. I’ll wrap it up in Part III, Ergodicity and all that.