Today’s post is at The Stream: “Attack Of The Black Swans From Outer Space: Nassim Nicholas Taleb and his co-authors fail to show their precautionary principle provides any guidance in climate policy.”
Aliens from outer space utterly hostile to humanity might attack! They’ll know we’re here because of our electronic emissions, which continuously bathe the earth in a soft glow. If these aliens discover us and manage to get here, it’s obvious that mankind is kaput. As in wiped out. À la mort.
Solution? Hide! Cease immediately all use of anything and everything powered by electricity. Sure, this necessary action will cause some inconveniences such as the ruination of the world’s economy and maybe the odd mass starvation since food will become scarce. But, hey, we’re talking about the survival of the human race. Don’t you care what happens to people? You brute.
What’s the likelihood of an alien attack? It’s complicated, but all the best scientists say it’s not impossible. Anyway, what’s the difference? As long as the chance is non-zero and the costs of failing to act are near infinite, shutting down the world is the only sane move.
What’s that you say? The burden of proof is on me? There’s no evidence of a forthcoming invasion?
What are you, some kind of denier? …
I was asked on Twitter about the “falsification” of climate models. Models are falsified when they say, conditional on their innards, “The probability of X is 0” and X happens. Since any given model’s output for global average temperature is a point, or anyway that’s what we see, then unless the actual temperature matches that point exactly, the model is formally falsified.
The way around this is either to add some uncertainty to the point or to use an ensemble of models and interpret them probabilistically. The latter approach is more usual, but it isn’t done well. If the “envelope” of the ensemble doesn’t contain reality, the models are again falsified. Unless a statistical model is fit to the ensemble, as is sometimes done in weather forecasting, it’s very easy to falsify the models.
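The envelope check can be made concrete. Here is a minimal sketch, with entirely made-up numbers standing in for model runs and observations: the ensemble is interpreted as saying reality must fall between the coldest and warmest run in every year, and a single excursion outside that band falsifies the ensemble so interpreted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 20 model runs of 30 years of annual anomalies (°C).
# These numbers are invented for illustration only.
ensemble = 0.02 * np.arange(30) + rng.normal(0.0, 0.1, size=(20, 30))
observed = 0.015 * np.arange(30) + rng.normal(0.0, 0.05, size=30)

# The "envelope": per-year min and max across the ensemble members.
lo = ensemble.min(axis=0)
hi = ensemble.max(axis=0)

# Observation must sit inside the envelope in every year.
inside = (observed >= lo) & (observed <= hi)

print(f"Years inside the envelope: {inside.sum()} of {inside.size}")
print("Falsified (under this interpretation):", not inside.all())
```

Note the interpretation is doing all the work: widen the band with a fitted statistical model and falsification becomes much harder, which is the point made above.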
Mental “fuzz” is usually added to deterministic models so that, in the minds of their creators, the models aren’t falsified. And this isn’t crazy, because it’s rare in physics experiments to expect reality to match the predictions perfectly. But unless this is done formally, there can be disagreements.
This is why skill is often used in verification. Skill is a measured improvement over some simpler reference forecast, like persistence. Persistence says next year will be just like this year. Climate models can’t beat persistence, thus they have no predictive skill. I speak loosely here, and even more loosely in the Stream piece, but it’s obvious the models are busted.
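A skill score against persistence is easy to compute. This sketch uses invented anomaly numbers (chosen so the hypothetical “model” loses to persistence, matching the claim above); the score is the fractional reduction in mean squared error relative to the persistence forecast.

```python
import numpy as np

# Hypothetical annual temperature anomalies (°C); illustration only.
obs = np.array([0.10, 0.18, 0.12, 0.25, 0.30, 0.22, 0.35, 0.40])

# Persistence forecast: next year equals this year.
persist = obs[:-1]
target = obs[1:]

# A made-up "model" forecast for the same seven target years.
model = np.array([0.30, 0.05, 0.35, 0.15, 0.40, 0.15, 0.25])

mse_persist = np.mean((target - persist) ** 2)
mse_model = np.mean((target - model) ** 2)

# Skill score: positive means the model beats persistence;
# zero or negative means it has no skill over persistence.
skill = 1.0 - mse_model / mse_persist
print(f"Skill vs persistence: {skill:.2f}")
```

With these numbers the score comes out negative: the forecast would have done better by simply repeating last year’s value.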
Whether they have hindcasting skill is irrelevant. Hindcast skill is the ability to fit past data, which is no great feat.