Suppose we have a simple decision to make¹: implement a government takeover of all energy companies, so as to regulate with thoroughness their carbon budgets, or leave these entities as they are, semi-governmental entities engaged in a perpetual dance with the EPA, among other agencies.
The first course of action is deemed necessary to avert the horrors of global warming. The second option is fine if global warming turns out to be the product of the fervid imaginations of grant-receiving computer modelers.
This is a problem in decision analysis; as such, it is subject to quantification. Or so users of decision analysis say. Let’s see.
As stated, the problem is easy to write. Only one of these situations will occur:
- Energy companies socialized & Global warming cannot strike
- Energy companies not socialized & Global warming strikes
- Energy companies not socialized & Global warming does not strike
In our simplification, there will be costs in socializing energy, whereas we can consider it free to continue status quo. Global warming cannot strike if companies are socialized, but it might if companies are not socialized. There will be catastrophic costs if global warming strikes. There are no costs if it does not.
We need estimates of costs and of the probability GW strikes. Certain evidence will supply these estimates. Call all this evidence E: E consists in experts’ judgments, actual facts, probable fictions, model outputs, data observations, and so forth. The entire point of this brief post is that E can never be more than a wild guess; thus, even if the costs and probabilities derived from E are deduced without error, formal quantitative decisions made from them are more certain than they should be.
Write the probability GW strikes given E as Pr(GW | E). Let the cost of socializing energy be Cse|E, and let the costs of the horrors of GW be CGW|E, where both subscripts indicate the values were derived from E. Then we can write the outcomes:
- Pay Cse|E with probability 1
- Pay CGW|E with probability Pr(GW | E)
- Pay nothing with probability 1 – Pr(GW | E)
We need only one additional concept, that of “expected value.” Expected value (ignoring its strengths and many, many weaknesses) is easy to calculate: multiply each cost by its probability and sum across the different outcomes. The expected value of doing nothing, which is the cost we’d “expect” to pay if we do not socialize, is:
CGW|E x Pr(GW | E) + 0 x [1 – Pr(GW | E)] = CGW|E x Pr(GW | E),
since there is no cost of not socializing if GW does not strike. The expected value of socializing is
Cse|E x 1 = Cse|E.
Decision analysis says to take the path with the lower cost: here that is either (A) Cse|E (socialize) or (B) CGW|E x Pr(GW | E) (not socialize).
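The expected-value bookkeeping above can be sketched in a few lines. All numbers below are hypothetical placeholders, not estimates derived from any real E:

```python
# A minimal sketch of the expected-value decision rule described above.
# All input numbers are hypothetical, not estimates from any real evidence E.

def expected_cost_socialize(c_se: float) -> float:
    """(A): the cost Cse|E, paid with probability 1."""
    return c_se * 1.0

def expected_cost_status_quo(c_gw: float, pr_gw: float) -> float:
    """(B): CGW|E x Pr(GW | E) + 0 x [1 - Pr(GW | E)]."""
    return c_gw * pr_gw + 0.0 * (1.0 - pr_gw)

def decide(c_se: float, c_gw: float, pr_gw: float) -> str:
    """Take the path with the lower expected cost."""
    a = expected_cost_socialize(c_se)
    b = expected_cost_status_quo(c_gw, pr_gw)
    return "socialize" if a < b else "do not socialize"

# Hypothetical inputs: socializing costs 10 units; GW, if it strikes,
# costs 50 units; Pr(GW | E) = 0.1. Then (A) = 10 and (B) = 5.
print(decide(10.0, 50.0, 0.1))  # prints "do not socialize"
```

Notice the machinery is trivial; the entire decision turns on the three numbers fed in, which is exactly the point of what follows.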
A person advocating socializing will tend to minimize (A), saying the cost of acting is trivial, that socializing might even make money, and not just cost it (that is, Cse|E might be a negative number, indicating negative costs, i.e. profits).
That person may also exaggerate either or both CGW|E and Pr(GW | E), since an increase in either increases the expected value.
But there is less leeway with Pr(GW | E): no matter what the status of E, the probability GW strikes lives between 0 and 1. Thus a move from (say) 0.9 to 0.95—saying we are now 95% and no longer 90% sure GW will strike—will change (B), but not by very much.
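A quick check with a hypothetical cost figure shows how little the bounded probability can move (B):

```python
# The probability lives in [0, 1], so nudging it has a capped effect on (B).
# The cost figure here is a hypothetical placeholder.
c_gw = 100.0
b_old = c_gw * 0.90  # expected cost at 90% certainty
b_new = c_gw * 0.95  # expected cost at 95% certainty
print((b_new - b_old) / b_old)  # about 0.056: a modest 5.6% relative change
```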
It is thus much better for the advocate (activist?) to monkey with CGW|E, which can increase without bound (it lives on the real line; hence, it can take any value). It is the most trivial of mathematical problems to find a cost CGW|E such that
CGW|E x Pr(GW | E) > Cse|E.
The costs and the probability are fixed once E is, so what will happen is the advocate will toy with E, adding to it. It’s not just snakes which will thrive when GW strikes (and thus cause an increase in deadly snake bites), but killer bees will also blossom (and thus cause an increase in deadly bee stings). It’s not just corn which will wither on the vine, but wheat, too, and rice, barley, and the current favorite of the foodies, quinoa, all of which will suffer, thus increasing food costs.
The possibilities are limited only by the imagination, and nothing stokes our fantasy engines more than the end of the world. CGW|E can absolutely always be made as large as you like.
Understand, even if Pr(GW | E) is small—say as little as 10%—the advocate can still increase CGW|E at will. You will find him doing just that each time the estimate for Pr(GW | E) is lowered.
But the worst is yet to come. For in reality, nobody can say with any kind of certainty what Pr(GW | E), CGW|E, and Cse|E are. Their true values are “I don’t know”, “Could be anything”, and “Beats me.” Thus when it comes to calculating the expected value we really have the decision equation
“Could be anything” x “I don’t know” > “Beats me”.
To those who look at this strange result and say, “But that means we don’t know what to do!” I reply, “Yes, that’s right. What’s your point?”
I’m available to speak on this (and many other) topics. See the Contact Page.
Update Typo (what? me have a typo?) has been fixed in last equation. Thanks to Paul Mullen for bringing it to my attention.
Update The last equation is just as unsolvable if we substitute Pr(GW | E) = 0.99. Failing to understand that is what drives climatologists to their excesses. More on this later.
¹There are a myriad of ways to complicate this decision, all tending (you will agree after reading) to cause certainty to decrease.