The title sentence was spoken by Professor Stephan Lewandowsky from the School of Psychology, University of Western Australia. The psychologist Lewandowsky is concerned that many are not as concerned about climate change as he is.
Lewandowsky is not fretting about the psychological states of warmer citizens. No, sir. He is instead deeply interested in such things as climate feedback sensitivity, heat transfer equations, cloud opacity, and so forth; such states of nature we may describe, in deference to the good prof’s training, as schizophrenic greenhouse gases.
Given his passion, I am sure he knows of what he speaks on these matters, so let’s not question him about physics; rather let’s look at his probability statements and see what we can learn. This will be interesting, because Lewandowsky represents a common type of academic sensibility.
He begins his analysis with this quotation:
The Chairman of the Future Fund, David Murray, recently suggested on national TV with respect to climate change that “if we’re not certain that the problem’s there, then we don’t – we shouldn’t take actions which have a high severity the other way.”
From this, he reasons:
In a nutshell, the logic of this position can be condensed to “there is so much uncertainty that I am certain there isn’t a problem.” How logical is this position? Can we conclude from the existence of uncertainty that there certainly is no problem?
And this inference is false. That is, Lewandowsky has falsely derived B = “there is so much uncertainty that I am certain there isn’t a problem” from A = “if we’re not certain that the problem’s there we shouldn’t take actions which have a high severity.”
From his fallacious argument, Lewandowsky then says, C = “Uncertainty should make us worry more than certainty, because uncertainty means that things can be worse than our best guess.” And from this gross absurdity, the good prof derives yet another error, which is more complicated to explain and which we’ll come to next time.
Now, B is, as Lewandowsky intimates, false itself. If there is uncertainty in a proposition, such as D = “the climate will change”, then it is a fallacy to claim that, because we are uncertain of D, D is therefore false.
But Murray did not make any claim even close to this. Instead, Murray said A, which is that if the uncertainty in D is significant then acting on the possibility of D such that we incur high (i.e. severe) costs is unwarranted. And this is true. Of course, it could be that the penalty we pay if D obtains is so astronomical that no matter how unlikely D is it is well to pay costs of “high severity” now.
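To see that exception in miniature, here is a sketch in Python of the expected-loss comparison Murray’s position implies. Every number in it (the cost of acting, the penalty, the probability of the problem) is invented purely for illustration, and the function is my own, not anything from Murray or Lewandowsky.

```python
# Toy expected-loss comparison; all numbers are invented for illustration.

def expected_loss_of_inaction(p_problem, loss_if_problem):
    """Expected loss of doing nothing, given the probability the problem is real."""
    return p_problem * loss_if_problem

mitigation_cost = 10.0   # certain, "high severity" cost of acting now
p_problem = 0.05         # suppose we judge the problem quite unlikely

# Modest penalty: doing nothing beats paying the severe cost of acting.
print(expected_loss_of_inaction(p_problem, loss_if_problem=50.0))  # 2.5 < 10.0

# Astronomical penalty: even at 5% probability, acting now is cheaper.
print(expected_loss_of_inaction(p_problem, loss_if_problem=1e6))   # 50000.0 > 10.0
```

The structure is all that matters here: when the penalty is large enough, even a small probability of D dominates the certain cost of acting, which is exactly the exception just noted.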
But this is not the case with D, because climate change itself is of little interest. What is of importance is how things which matter to people change when the climate does. Let’s call these things E. An example might be E = “worldwide crop yields” (if you don’t like that, pick a horror from this list). It is thus the case that our uncertainty in E is larger than our uncertainty in D, assuming D influences E (which everybody does assume).
For example (a fictional, but reasonably close example): assume Pr(E | D true) = 0.9. It would be wrong to say, “The probability of E is 90%”. It would be right to say, “Assuming the climate changes, the probability of E is 90%.” To find the true uncertainty of E, we must first compute Pr(E | D false). Suppose this is 0.1 (or any other number less than 0.9). Now we need the probability D is true (given our knowledge of physics, etc.; I have suppressed this notation, but it is there). Suppose this is, as Lewandowsky frets, high; say 0.9 too. Then
Pr(E | physics) = Pr(E | D true, physics) x Pr(D true | physics) + Pr(E | D false, physics) x Pr(D false | physics)
Pr(E | physics) = 0.9 x 0.9 + 0.1 x 0.1 = 0.81 + 0.01 = 0.82
where I have “un-suppressed” the notation, and where “physics” means all we know of the physics (and biology, etc.) of climate and crops. The result is a number less than 0.9; i.e. we are less certain of E than of E given D is true.
The answer to this equation will always be less than Pr(E | D true, physics) as long as Pr(E | D false, physics) is smaller than Pr(E | D true, physics), and in all horror scenarios it always is: Pr(E | physics) is a weighted average of these two conditional probabilities, so it must fall strictly between them whenever Pr(D true | physics) is neither 0 nor 1. That is, we will always be less certain of E than of E given D is true, which is what we wanted to show. So far it is Murray 1, Lewandowsky 0.
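If you would rather check this numerically than algebraically, here is a minimal Python sketch of the total-probability computation above. The function name and the grid of values are mine, invented for illustration.

```python
# Law of total probability: Pr(E) is a weighted average of Pr(E | D true)
# and Pr(E | D false), so it lies strictly between them whenever
# 0 < Pr(D true) < 1.

def pr_e(pr_e_given_d, pr_e_given_not_d, pr_d):
    return pr_e_given_d * pr_d + pr_e_given_not_d * (1 - pr_d)

# The numbers from the example above.
print(pr_e(0.9, 0.1, 0.9))  # 0.82 (up to float rounding), less than 0.9

# Check the general claim over a grid of values for Pr(D true).
for pd in (0.1, 0.25, 0.5, 0.9, 0.99):
    p = pr_e(0.9, 0.1, pd)
    assert 0.1 < p < 0.9  # always strictly between the two conditionals
```

However high Pr(D true) is pushed (short of 1), the unconditional probability of E never reaches Pr(E | D true): uncertainty about D makes the horror E less certain, not more.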
Too much information? Well, I’m sorry about that, but without considering it at this level of detail we won’t be able to appreciate Lewandowsky’s final mistakes. And it is not just Lewandowsky who makes these errors: these fallacies are exceedingly common, so it is worth spending some time on them. We’ll do that in Part II.
I am somewhat nearer the computer, but will still be mostly away. It looks like it won’t be until Sunday that I return fully.