From reader (and listener!) Phil Pilkington (the asterisks are original):
Loved your speech. Hyperskeptical Economist that I am, M Taleb did us some great favours in outlining the silliness of some aspects of economic theory. He also did us the great favour of underlining uncertainty. But man, did he turn out to be a poor representative of the antifaith – more windbag than wise man.
But onto the more interesting point. It seems to me that your principle of “going-with-the-flow-man-because-sh*t-be-complex”, if I can coin a phrase, is just as ripe for reductio ad absurdum as Taleb’s principle of “OMG-my-life-is-a-constant-panic-attack-DO-SOMETHING”.
By your principle you shouldn’t, say, get chemotherapy when you have cancer (because sh*t be complex) or bring your car to the mechanic when all the lights are flashing on and off and there are flames coming out of the exhaust pipe (sh*t, complex).
Seems to me you need some middle ground.
Phil speaks of the precautionary principle, which I demonstrated is an irrational basis for decisions. You can—and should, since I’ll assume familiarity with the details of what I said there—read the speech linked above.
I’m not a fan of symbolic notation unless that notation can cut through the murk, such as it does in math. In probability applications, it is over-used and tempts one to the Deadly Sin of Reification. Here, I think it will help.
According to Taleb and other touters of the precautionary principle, there is some doom which awaits us, a doom which is presumed we can avoid if such-and-such expensive means are employed. Why is this doom nearly upon us? Because some theory says so. Another name for theory is model, which for notation’s sake I’ll use. Thus:
Pr (D | Mi ) ~ 1,
where D is the doom and Mi the theory or model (these really are the same, as far as probability is concerned), and where given Mi, D is as certain as you like. For any use of the PP, interest fixates at once on D. Let’s talk D. Let’s imagine the awful things that will happen under D. Let’s ask What about the children!?
D can be, as Phil imagines, cancer. D does not have to be world doom; your lone doom will suffice. Cancer is bad, cancer is horrible, cancer will kill you unless you do something!
As I said in the speech, we can take all this as true. D is as bad, or even worse, as PPers say. D is something that should be avoided and should be protected against using all available means. Et cetera.
D is not the problem, though. Mi is.
For any D, we can always find an Mi that makes D true or as likely as you wish. For instance, M1 = “D will happen”. Then Pr (D | M1 ) = 1. Doom is certain.
Taleb comes to you in a panic saying, “We must protect against D!”

Why?

“Because D is doom!”
And how do you know D will happen?
“Because D will happen!”
Not too convincing, that argument. But it can be made convincing if the talk can be shifted from M1 to the awful consequences of D—and what kind of denier are you, anyway? Do you want people to die? Do you want to die?
Being somewhat suspicious of anyone reaching for my wallet, I ask “What about M?” Obviously, M1 is foolish, but that doesn’t mean it isn’t held by some people in some situations. “Activists”, for instance.
Taleb isn’t so agitated as to believe M1 when it comes to GMOs and global warming. But he does hold vague theories which say Pr (D | M2,3 ) ~ 1 in these cases (models 2 and 3 for GMOs and global warming, respectively).
I emphasize that there is nothing in the world wrong with the formula Pr (D | M2,3 ) ~ 1. The problem is in M2,3. I look at these situations and form my own models, b2 and b3, say. And then I form
Pr (D | Mb2,b3 ) ~ 0.
My theories say D is not likely at all. The argument returns to where it belongs: to the evidence for and against D, and not to D itself.
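The point can be made concrete with a toy sketch. The models and numbers below are invented for illustration (they are not Taleb’s or anybody’s actual models): the probability of the very same doom D swings from near 1 to near 0 depending entirely on which model you condition on.

```python
# Toy illustration: Pr(D | M) is fixed by the model M, not by D itself.
# All "models" here are invented conditional probabilities for the same D.

models = {
    "M1: 'D will happen'":      1.0,   # the circular model: doom is certain
    "M2: alarmed GMO model":    0.97,  # a hypothetical PP-style model
    "Mb2: skeptical GMO model": 0.02,  # a hypothetical competing model
}

for name, pr_doom in models.items():
    print(f"Pr(D | {name}) = {pr_doom:.2f}")

# Same D, wildly different probabilities -- so the argument must be
# about the evidence for each model, not about how bad D would be.
```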
If I were to receive a cancer diagnosis, then, depending on the details of that diagnosis, and my understanding of the medicine, I might form a model which says
Pr (D | Mcancer ) = fairly high.
And then I might act to protect against D.
That, then, is the “middle ground”. As accurately as possible assessing the evidence for and against D.
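One standard way to formalize “assessing the evidence for and against D”—my illustration, not a method proposed in the post—is to weight each candidate model by how well it predicts the evidence E already seen, then average: Pr(D | E) = sum over i of Pr(D | Mi) × Pr(Mi | E), where Pr(Mi | E) is proportional to Pr(E | Mi) × Pr(Mi). A minimal sketch, with invented numbers:

```python
# Hedged sketch: Bayesian model averaging over hypothetical models.
# Numbers are invented; the point is the mechanics, not the values.

# Each entry: (name, Pr(D|M), prior Pr(M), likelihood Pr(E|M)).
models = [
    ("alarmed model",   0.95, 0.5, 0.1),
    ("skeptical model", 0.05, 0.5, 0.6),
]

# Posterior model weights: Pr(M|E) is proportional to Pr(E|M) * Pr(M).
norm = sum(prior * like for _, _, prior, like in models)

# Predictive probability of doom, averaged over the weighted models.
pr_doom = sum(pd * prior * like / norm for _, pd, prior, like in models)

for name, pd, prior, like in models:
    weight = prior * like / norm
    print(f"Pr({name} | E) = {weight:.3f}")
print(f"Pr(D | E) = {pr_doom:.3f}")
```

If the evidence favors the skeptical model, it carries most of the weight and the averaged Pr(D | E) lands well below the alarmed model’s near-certainty—which is just the “middle ground” stated in probabilistic form.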
“Cancer will kill you unless you do something!” That is being disproven by the VA refusing to treat a friend of mine. She’s still here, and it’s been several years. It is not supposed to work that way. Now, cancer may eventually be the death of her, but at 71 there’s a possibility it won’t—something else will get her first. Note that the cancer is considered treatable, but she is not (too difficult due to a health issue that is an issue only to the VA—if she had Medicare, she’d be able to get treatment). Odds are the cancer will kill you, but the odds are not 100%. When you have no options, these things are often discovered. (Consider the case of background radiation and LNT.)
Then there is the doctor who ordered cancer treatment for patients who did not have cancer. He made a lot of money, though he is now in jail. People were given chemo when they had no cancer. They followed the Precautionary Principle and ended up in far worse shape.
I obviously do NOT believe one should ignore a physician’s diagnosis, but I also do not believe in blind faith in medicine. I’ve seen physicians be wrong, and some who thought they were right but later changed their diagnosis based on further evidence. The cancer thing—a biopsy for two years can show no cancer, the removed lump can show no cancer, yet later, after more tests, can show definite cancer. If the further testing is not done, one is pronounced cancer-free when in fact one is not. Asking questions and learning about what is being done from several sources is the only truly rational way to deal with these things.
RE: “…the precautionary principle, … an irrational basis for decisions.”
“That, then, is the “middle ground”. As accurately as possible assessing the evidence for and against D.” [“D” is a particularly bad outcome of “doom” resulting from an event, Mi, of indeterminate or debated probability of occurring].
Oddly, discussion of the evaluation of the probability of Mi, and/or the assessment of whether Mi really leads to D, was basically omitted—though the mere mention that Mi is a model or theory should make that point, if implicitly. In practice the presumption that Mi-leads-to-D is itself debatable: not only whether the correlation/causation exists, but, even if causation is established, how much Mi contributes to causing the outcome D may be hotly debated.
The debate is often enough not about assessing the evidence for D so much as the probability of Mi occurring—and when—and, having established that, how much Mi contributes to D. That is why doing something about climate change is a hotly debated issue while a global defense against a cataclysmic asteroid impact [which we know has happened and is nearly certain to recur] isn’t.
The evidence that people do NOT find the precautionary principle an irrational basis for making decisions abounds. The amount of discretionary insurance coverage people in similar circumstances choose to buy is one familiar example. Such decisions are made in part on logic, and in part on emotion [commonly fear or confidence, both of which, curiously, are themselves founded to some extent in ignorance].
O’Riordan has an accessible discussion of how the precautionary principle plays out in politics/government regulatory decisions; the preview available at
https://books.google.com/books?id=e4vjoqgeaIMC&pg gives one a sense.
Reading about the Italian earthquake that led to the prosecution of scientists who didn’t warn that it might be so severe illustrates how this principle can be, and is, applied in hindsight.
If one were to note the picture at the top, one might say,
“He’s doing a differential diagnosis for anterior hip capsule versus Ilio-Psoas / Femoral nerve and lumbar plexus.
But as he’s a whiskery gentleman, he’s got it all wrong and is probably going to charge the man for his services. You can’t test soft tissue with a man’s head stuffed into the floor.
The hand or the knee or even an elbow or hip is the customary apparatus to stabilise the hip, not the boot.”
Taleb has stated that intrusive medical interventions are suited for worst-case scenarios, where the expected outcome is worse than whatever risks there are in the medicine or medical procedure. Would he go for chemo? Maybe, but he also strikes me as the kind of person who might just choose to die rather than get sick from the treatment and still risk dying.
On Taleb and theory, how familiar is Briggs with Taleb? He’s far from someone who puts theory first. Most of the arguments in his catalog of books argue precisely the opposite: that theory is fragile. With global warming I’ve seen him propose some threshold of pollution or whatever; it involved some math and complex-systems analysis. He comes across as mostly dismissive of the issue as a whole. I’m not sure there is any reason to insult Taleb when I see very little disagreement between this blog and Taleb’s own arguments. There are differences, but none that couldn’t be held amicably. You’re fundamentally on the same side.
Mi is not an ‘event’, it is a model.
Briggs is not discussing the probability of the model, but the probability of the Doom.
That probability comes from applying the model, there being no probability absent the model.
A key question is whether the model is applicable to the case in hand.
And why not the post-precautionary principle?
Assuming there might be (which also means might not be) a reason to panic at the onset of a “crisis” (nothing like blindly submitting to our primate fight-or-flight instincts), isn’t there some point after crisis onset where there is sufficient data to know something and reevaluate the precautionary panic response?