Make a list of things that might happen. Label this list X_1, X_2, X_3, …, X_p, where “p” is for possibility. Each of these X_i is a thing that can or might happen—and happen in a certain period of time you designate. The X_i and time period or time limit both must be picked, and picked by you.
Here’s an example. Tomorrow might bring us these events: X_1 = a micrometeorite crashes to earth intact, impinging on your skull; or X_2 = it doesn’t, and it’s a lovely sunny day. The time period is plain.
Considering X_1, you have the option to buy a Mighty Mini Meteor Shield 3000™, a helmet guaranteed to protect against all spaceborne objects under 3 inches in diameter. Retails for $348,000. Comes in your color of choice, tinfoil silver or crazy pink. Or you go out with your pate exposed.
Under X_2, you have the chance of developing a treatable skin cancer that might show twenty years down the line. Or you could wear a fetching straw hat.
The range of what you consider will happen to you, the various scenarios X_i, is up to you to lay out. You must decide what the consequences of each X_i are, and what you would do to prevent or lessen, or even welcome or enhance, the outcomes.
What’s the worst that could happen here? Under X_1, an amusing death. Under X_2, mild discomfort, sometime well after Medicare kicks in.
Death is the maximum worst thing that could happen. You’d like to minimize the risk of the maximum worst thing. Under X_1, which is the scenario with the maximum worst thing, you could either buy the helmet or go bareheaded.
To minimize the maximum risk, you decide to shell out the big bucks for the shell.
This is the minimax decision rule.
Minimax, however useful it is in adversarial games, when the other side is actively out to destroy you, has some curious side effects when used as a general decision rule. It works like this: (a) discover in your list the scenario that contains the worst possible risk, (b) take the action that minimizes that risk, and (c) ignore everything else.
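The three-step rule above can be put in a few lines of code. This is a minimal sketch using the meteor example; the loss figures are hypothetical, chosen only for illustration (death is ranked far above any dollar cost), since the rule itself says nothing about where they come from.

```python
# A minimal sketch of the minimax decision rule.
# Loss values are hypothetical, for illustration only.

def minimax_action(losses):
    """Pick the action whose worst-case loss is smallest.

    losses: dict mapping action -> dict of scenario -> loss.
    """
    # (a)/(b) For each action, find its worst-case (maximum) loss...
    worst_case = {a: max(s.values()) for a, s in losses.items()}
    # ...then take the action that minimizes that maximum,
    # (c) ignoring every other feature of the problem.
    return min(worst_case, key=worst_case.get)

losses = {
    "buy helmet":    {"X1 meteorite": 348_000, "X2 sunny day": 348_000},
    "go bareheaded": {"X1 meteorite": 10**9,   "X2 sunny day": 0},
}

print(minimax_action(losses))  # prints "buy helmet"
```

Note that the probabilities of the scenarios appear nowhere in the code: minimax never asks for them.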
There is no mathematical justification for minimax: it is not derived from universal moral principles. It is instead a mathematical encapsulation of how some people make decisions under uncertainty. What does it say about someone who wants to minimize the maximum risk, forgetting all other risks, and even the likelihood of these risks (about which more in a moment)?
It is not a manly attitude for most mortal concerns. For immortal ones, like the status of your eternal soul, sure. Be like Pascal and minimize the maximum risk of Hell. But for deciding whether to drive down to the store for a case of wine, where you might of course be t-boned into a bloody pile of fillets, maybe it’s a tad effeminate.
More than a tad. For you would be frozen into inaction always—if you had a good enough imagination to conjure worst case scenarios lurking everywhere. You couldn’t even sit still sheltering in place in a dark room, the windows slammed shut, sealing in the fetid air, because that meteorite might plunge through the roof!
Minimax is effeminate because it is a decision rule based on fear. Sometimes fear is justified! Not usually, though. Minimax tends to be used by the effeminate, because these are the people who enjoy conjuring worst case scenarios.
Again: Minimax operates on the assumptions you bring to it. It is a decision rule, not an assumption generator. It was I that decided on the scenarios, the actions I could take, and the costs associated with them. Minimax is silent on these important topics. Once they are specified, however, minimax springs into action and tells you what to do.
Minimax insists I buy the helmet. Minimax didn’t tell me that the helmet was an option.
Now when I hop into the car (which I do not own, so I’d have to steal one) to head to the liquor store, I do not consider being t-boned a possibility. Nor do I fret the brakes will fail. Or that a semi driver will spill coffee on his crotch, causing him to steer wildly through my front window. Nor do I ponder any of an infinite number of ways I can be killed. It is I that must bring the scenarios to the minimax decision rule. If I don’t bring weird and wild ones, then the rule can’t see them and won’t select for them.
Let’s try another example. X_1 is millions and millions and millions of deaths from a dread disease, which can be protected against by putting millions out of work (not the same who are killed, of course), enriching the rich, and further strengthening central government by giving it more power, and also weakening it by increasing its debt magnificently.
Our X_2 is a normal course of events, which we treat like the flu, where some die and most don’t, and we do nothing extraordinary like martial law lite.
Minimax says protect against the deaths. Not just now, but forever. Yes, forever, because this Fauci fellow says the disease could become seasonal (as flu now is). So we have to lock down three out of every twelve months in perpetuity. Minimax says saving just one life by these actions will be worth it.
The real problem, as is by now clear, is twofold: (1) using minimax as the decision rule, (2) generating the scenarios.
Minimax is Talebism; minimax is the precautionary principle; minimax is fear. Minimax says quantity of life is superior to quality.
What if not minimax? Far better to accord proper weight to the evidence, and use likelihood to decide. Now this is not so easy, and there is no space here to show how this maneuver can be gamed, too. But this is the direction to go.
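Deciding by likelihood can be sketched the same way: weight each loss by the probability of its scenario and pick the action with the smallest expected loss. The probabilities below are hypothetical, invented purely for illustration, and assigning them honestly is exactly the hard part the text warns can be gamed.

```python
# A minimal sketch of a likelihood-weighted decision rule,
# contrasted with minimax. Losses and probabilities are hypothetical.

def expected_loss_action(losses, probs):
    """Pick the action with the smallest probability-weighted loss.

    losses: dict mapping action -> dict of scenario -> loss.
    probs:  dict mapping scenario -> probability.
    """
    expected = {
        action: sum(probs[s] * loss for s, loss in s_losses.items())
        for action, s_losses in losses.items()
    }
    return min(expected, key=expected.get)

losses = {
    "buy helmet":    {"X1 meteorite": 348_000, "X2 sunny day": 348_000},
    "go bareheaded": {"X1 meteorite": 10**9,   "X2 sunny day": 0},
}
# Hypothetical: intact meteorite strikes on one's skull are vanishingly rare.
probs = {"X1 meteorite": 1e-12, "X2 sunny day": 1 - 1e-12}

print(expected_loss_action(losses, probs))  # prints "go bareheaded"
```

Unlike minimax, this rule lets the tiny probability of X_1 discount its enormous loss; the answer flips from the helmet to the straw hat.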
Why these scenarios, and why not others? Why these solutions, and why not others? Why are we still trusting models that have already made busted forecasts four months into this thing?