From reader (and listener!) Phil Pilkington (the asterisks are original):
Loved your speech. Hyperskeptical Economist that I am, M Taleb did us some great favours in outlining the silliness of some aspects of economic theory. He also did us the great favour of underlining uncertainty. But man, did he turn out to be a poor representative of the antifaith – more windbag than wiseman.
But onto the more interesting point. It seems to me that your principle of “going-with-the-flow-man-because-sh*t-be-complex”, if I can coin a phrase, is just as ripe for reductio ad absurdism as Taleb’s principle of “OMG-my-life-is-a-constant-panic-attack-DO-SOMETHING”.
By your principle you shouldn’t, say, get chemotherapy when you have cancer (because sh*t be complex) or bring your car to the mechanic when all the lights are flashing on and off and there’s flames coming out of the exhaust pipe (sh*t, complex).
Seems to me you need some middle ground.
Phil speaks of the precautionary principle, which I demonstrated was an irrational basis for decisions. You can—and should, since I’ll assume familiarity with the details of what I said there—read the speech linked above.
I’m not a fan of symbolic notation unless that notation can cut through the murk, such as it does in math. In probability applications, it is over-used and tempts one to the Deadly Sin of Reification. Here, I think it will help.
According to Taleb and other touters of the precautionary principle, there is some doom which awaits us, a doom which it is presumed we can avoid if such-and-such expensive means are employed. Why is this doom nearly upon us? Because some theory says so. Another name for theory is model, which for notation’s sake I’ll use. Thus:
Pr(D | M_i) ~ 1,
where D is the doom and M_i the theory or model (these really are the same, as far as probability is concerned), and where, given M_i, D is as certain as you like. For any use of the PP, interest fixates at once on D. Let’s talk D. Let’s imagine the awful things that will happen under D. Let’s ask What about the children!?
D can be, as Phil imagines, cancer. D does not have to be world doom; your lone doom will suffice. Cancer is bad, cancer is horrible, cancer will kill you unless you do something!
As I said in the speech, we can take all this as true. D is as bad as, or even worse than, PPers say. D is something that should be avoided and should be protected against using all available means. Et cetera.
D is not the problem, though. M_i is.
For any D, we can always find an M_i that makes D true or as likely as you wish. For instance, M_1 = “D will happen”. Then Pr(D | M_1) = 1. Doom is certain.
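The circularity can be made concrete with a toy sketch (my own hypothetical illustration, not anything in the argument above): if the “model” is nothing but the bare assertion that D will happen, then any doom whatsoever comes out certain given that model.

```python
# Toy sketch (hypothetical): a "model" that is nothing but the assertion
# "D will happen" makes Pr(D | M) = 1 for any D -- by construction, not evidence.

def pr_given_model(doom: str, model: str) -> float:
    """Pr(D | M) when M is the bare, circular assertion that D occurs."""
    if model == f"{doom} will happen":
        return 1.0  # doom is certain -- but only because the model says so
    raise ValueError("this toy only understands the circular model M_1")

for doom in ("GMO catastrophe", "global-warming catastrophe"):
    print(doom, "->", pr_given_model(doom, f"{doom} will happen"))  # always 1.0
```

Swap in any D you like; the output never changes, which is exactly the problem.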
Taleb comes to you in a panic saying, “We must protect against D!”

Why must we?

“Because D is doom!”

And how do you know D will happen?

“Because D will happen!”
Not too convincing, that argument. But it can be made convincing if the talk can be shifted from M1 to the awful consequences of D—and what kind of denier are you, anyway? Do you want people to die? Do you want to die?
Being somewhat suspicious of anyone reaching for my wallet, I ask “What about M?” Obviously, M_1 is foolish, but that doesn’t mean it isn’t held by some people in some situations. “Activists”, for instance.
Taleb isn’t so agitated as to believe M_1 when it comes to GMOs and global warming. But he does hold vague theories which say Pr(D | M_2) ~ 1 and Pr(D | M_3) ~ 1 in these cases (models 2 and 3 for GMOs and global warming).
I emphasize that there is nothing in the world wrong with the formulas Pr(D | M_2) ~ 1 and Pr(D | M_3) ~ 1. The problem is in M_2 and M_3. I look at these situations and form my own models, M_b2 and M_b3, say. And then I form
Pr(D | M_b2) ~ 0 and Pr(D | M_b3) ~ 0.
My theories say D is not likely at all. The argument returns to where it should: about the evidence for and against D, and not D itself.
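The shift from arguing about D to arguing about the models can be sketched as Bayesian model averaging (my illustration, with invented numbers; nothing here was computed by Taleb or anyone else): Pr(D | evidence) is a weighted sum of each model’s verdict, with the weights set by how well each model fits the evidence.

```python
# Sketch (hypothetical numbers): Pr(D | evidence) via a posterior over rival
# models. Pr(D | evidence) = sum_i Pr(D | M_i) * Pr(M_i | evidence).

def posterior_doom(pr_d_given_m, prior_m, likelihood_of_evidence):
    """Mix each model's Pr(D | M_i) by its posterior weight given the evidence."""
    # Bayes' theorem over the models: posterior proportional to prior * likelihood.
    joint = [p * l for p, l in zip(prior_m, likelihood_of_evidence)]
    total = sum(joint)
    post_m = [j / total for j in joint]
    return sum(pd * pm for pd, pm in zip(pr_d_given_m, post_m))

# A doomsaying model says Pr(D | M) = 0.99; my model M_b says 0.01.
pr_d_given_m = [0.99, 0.01]
prior_m = [0.5, 0.5]               # start even-handed between the models
evidence_likelihood = [0.1, 0.9]   # hypothetical: the evidence fits M_b better

print(round(posterior_doom(pr_d_given_m, prior_m, evidence_likelihood), 3))
```

The point of the sketch is only this: the answer hinges on the evidence terms, not on how loudly either model proclaims D.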
If I were to receive a cancer diagnosis, then, depending on the details of that diagnosis, and my understanding of the medicine, I might form a model which says
Pr(D | M_cancer) = fairly high.
And then I might act to protect against D.
That, then, is the “middle ground”: assessing, as accurately as possible, the evidence for and against D.
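One hypothetical way to make that middle ground operational (my sketch, with made-up numbers, not a prescription from the speech) is a plain expected-loss comparison: protect against D only when the assessed probability of D times the loss it would inflict exceeds the cost of the protection.

```python
# Sketch (hypothetical): decide whether to act against D by comparing the
# expected loss of inaction with the cost of protection.

def should_protect(pr_doom: float, loss_if_doom: float, cost_of_protection: float) -> bool:
    """Protect iff expected loss from D exceeds the protection's cost."""
    return pr_doom * loss_if_doom > cost_of_protection

# A credible cancer diagnosis: Pr(D | M_cancer) fairly high -> treatment is warranted.
print(should_protect(0.6, 100.0, 10.0))    # True
# A vague doom whose model gives Pr(D | M) ~ 0 -> the expense is not justified.
print(should_protect(0.001, 100.0, 10.0))  # False
```

The same rule gets chemotherapy for the diagnosed cancer and declines the panic purchase, because the difference lives entirely in the assessed probability, i.e. in the evidence for the model.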