Last time at Think Twice Science (video below for those who, embarrassingly, forgot to watch) we discussed bad models and the bad predictions which arise from them. Those bad models, which predicted mass starvation and un-survivable climate change and other such-like jollities, were the Global Cooling models of the 1970s (an era to which we are returning).
Not all models are bad, though. Even though it is tempting, especially in an era deep and steeped in scientism and rotten science, to believe so.
How can you tell a good model from a bad one? Well, it’s not so easy a lot of the time. Some of the time it is simplicity itself. I show how here.
You, too, can apply these methods to the models that best you in daily life. Like whether to get a mammogram or prostate exam. A million and one uses!
It is, of course, your sworn duty to pass this post on to those who would benefit from it most.
And don’t forget to watch (or re-watch, or to forward on) my first video in this new series: A history of failed climate change predictions.
This post may move you up on the Hague’s list.
If you were a betting man, would you bet the over or under on Crescent Dunes’ monthly performance this October?
https://en.wikipedia.org/wiki/Crescent_Dunes_Solar_Energy_Project#cite_note-37
The folks manning the wiki page must be understaffed, as they haven’t updated the plant’s performance lately.
In the spirit of “all models are wrong, but some are useful”, I’m willing to take a closer look at Finley’s model. Well, ok, not Finley’s model itself, but I’ll re-use the numbers (because Finley’s model as described on a noaa.gov page is way too complicated for my non-meteorological brain).
Instead of tornadoes, assume the model is predicting failures of widgets. More specifically, assume that a manufacturer of widgets is experiencing mysterious failures of the widgets around six months after sale. A statistician (call him PJ) uses a database of measurements made at the factory to devise a model intended to predict which widgets will fail. This model is tested, and the results match Finley’s numbers. Meanwhile, Marketing has their own model of the failures: they are all customer misuse. But, like tornadoes defiantly killing people despite their non-existence, the manufacturer is suffering high warranty costs and unhappy customers. So while Marketing’s model is more accurate, it’s totally useless to anyone assigned to fix the mysterious failures.
The engineer assigned to RCCA the failures, lacking any other insight into cause, takes a look at PJ’s model and must decide whether to act on its implications. In other words, the engineer wants to know if the model is useful. The 51 failures out of 2803 widgets sold is a failure rate of just under 2%; without any details or clues, this would be a very difficult problem to investigate. For just the widgets that PJ’s model predicts will fail, the failure rate is much higher, at 28%. The engineer can use PJ’s model to quarantine a couple dozen widgets and subject them to HALT testing (an expensive process all around). So PJ’s inaccurate model is still useful, I think.
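For anyone who wants to check that arithmetic, here is a minimal sketch in Python. It assumes the commenter re-used the classic Finley contingency counts (28 predicted-and-failed, 72 predicted-but-fine, 23 missed failures, 2680 correctly cleared, for 2803 widgets and 51 failures), and it treats a constant “no widget ever fails” forecast as a stand-in for Marketing’s model:

```python
# Sketch of the widget version of Finley's numbers (assumed cell counts).
hits, false_alarms = 28, 72           # flagged by PJ's model: failed / didn't fail
misses, correct_negatives = 23, 2680  # not flagged: failed / didn't fail

total = hits + false_alarms + misses + correct_negatives  # 2803 widgets sold
failures = hits + misses                                  # 51 failures
flagged = hits + false_alarms                             # widgets PJ's model flags

base_rate = failures / total       # just under 2% fail overall
flagged_rate = hits / flagged      # 28% of flagged widgets fail

# Simple accuracy: PJ's model vs. a model that never predicts a failure.
pj_accuracy = (hits + correct_negatives) / total  # roughly 96.6%
never_accuracy = (total - failures) / total       # roughly 98.2%, yet useless

print(f"Overall failure rate:       {base_rate:.1%}")
print(f"Failure rate among flagged: {flagged_rate:.1%}")
print(f"PJ's model accuracy:        {pj_accuracy:.1%}")
print(f"'Never fails' accuracy:     {never_accuracy:.1%}")
```

Which is the commenter’s point: the more “accurate” model is the one that tells the engineer nothing about which widgets to pull for HALT testing.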
How does it make sense to use a model that says, “Tornados do not occur”, when the data say they occurred 51 times?