Time for an incomplete mini-lesson on theory confirmation and disconfirmation.
Suppose you have a theory, or model, about how some thing works. That thing might be global warming, stock market prices, stimulating economic activity, psychic mind reading, and on and on.
There will be a set of historical data and facts that led to the creation of your theory. It is always easy to look at those historical data and say to yourself, “My, those data back my theory up pretty well. I am surely right about what drives stock prices, etc. I am happy.”
Call, for ease, your theory MY_THEORY.
It is usually true that if the thing you are interested in is complicated—like the global climate system or the stock market—somebody else will have a rival theory. There may be several rival theories, but let’s look at only one. Call it RIVAL_THEORY.
The creator of RIVAL_THEORY will say to himself, “My, the historical data back my theory up pretty well, too. I am surely right about what drives stock prices, etc. I am happy and the other theory is surely wrong.”
We have a dispute. You and your rival both claim to be correct, but you cannot both be right. At least one of you, and possibly both, is wrong.
As long as we are talking about historical data, experience and human nature show that the dispute is rarely settled. What happens, of course, is that the gap between the two theories actually widens, at least in the strength with which the theories are believed by the two sides.
This is because it is easy to manipulate, dismiss as irrelevant, recast, or interpret historical data so that it fits what your theory predicts. The more complex the thing of interest, the easier it is to do this, and so the more confidence people have in their theory. There is obviously much more that can be said about this, but common experience shows this is true.
What we need is a way to distinguish the accuracy of the two theories. Because the historical data won’t do, we need to look to data not yet seen, which is usually future data. That is, we need to ask for forecasts or predictions.
Here are some truths about forecasts and theories:
If MY_THEORY says X will happen and X does not happen, then MY_THEORY is wrong. It is false. MY_THEORY should be abandoned, forgotten, dismissed, disparaged, disputed, dumped. We can say that MY_THEORY has been falsified.
For example, if MY_THEORY is about global warming and it predicted X = “The global mean temperature in 2008 will be higher than in 2007,” and the 2008 temperature turned out not to be higher, then MY_THEORY is wrong and should be abandoned.
You might say that, “Yes, MY_THEORY said X would happen and it did not. But I do not have to abandon MY_THEORY. I will just adapt it.”
This can be fine, but the adapted theory is no longer MY_THEORY. MY_THEORY is MY_THEORY. The adapted, or changed, or modified theory is different. It is NEW_THEORY and it is not MY_THEORY, no matter how slight the adaptation. And NEW_THEORY has not made any new predictions. It has merely explained historical data (X is now historical data).
It might be that RIVAL_THEORY made the same prediction about X. Then both theories are wrong. But people have a defense mechanism that they invoke in such cases. They say to themselves, “I cannot think of any other theory besides MY_THEORY and RIVAL_THEORY, therefore one of these must be correct. I will therefore still believe MY_THEORY.”
This is the What Else Could It Be? mechanism, and it is pernicious. I should not have to point out that just because you, intelligent as you are, cannot think of an alternate explanation for X does not mean that one does not exist.
It might be that MY_THEORY predicted Y and Y happened. The good news is that we are now more confident that MY_THEORY is correct. But suppose it turned out that RIVAL_THEORY also predicted that Y would happen. The bad news is that you are now more confident that RIVAL_THEORY is correct, too. How can that be when the two theories are different?
It is a sad and inescapable fact that for any set of data, historical and future, there can exist an infinite number of theories that equally well explain and predict it. Unfortunately, just because MY_THEORY made a correct prediction does not imply that MY_THEORY is certainly correct: it just means that it is not certainly wrong. We must look outside this data to the constructs of our theory to say why we prefer MY_THEORY above the others. Obviously, much more can be said about this.
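The claim that many theories can fit the same data equally well is easy to demonstrate. Here is a minimal sketch, with invented data and invented “theories” (the functions and numbers are mine, not the author’s): two different models that agree exactly on every historical observation, yet disagree about the future.

```python
# Two "theories" that reproduce the same historical data exactly,
# yet make different forecasts. Data and functions are invented
# purely for illustration.

historical_x = [0, 1, 2, 3]

def my_theory(x):
    # A simple linear theory.
    return 2 * x

def rival_theory(x):
    # The extra term vanishes at x = 0, 1, 2, 3, so this theory
    # matches my_theory on every historical point -- but it
    # diverges on any new point.
    return 2 * x + x * (x - 1) * (x - 2) * (x - 3)

# Both theories "explain" history equally well...
assert all(my_theory(x) == rival_theory(x) for x in historical_x)

# ...but forecast differently for unseen data.
print(my_theory(4), rival_theory(4))  # 8 vs 32
```

Nothing in the historical data alone can tell you which of the two is the right one, which is exactly why we must look outside the data, to the theory’s construction, to prefer one over the other.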
It is often the case that a love affair develops between MY_THEORY and its creator. Love is truly blind. The creator will not accept any evidence against MY_THEORY. He will allow the forecast for X, but when X does not happen, he will say it was not that X did not happen, but the X I predicted was different. He will say that, if you look closely, MY_THEORY actually predicted X would not happen. Since this is usually too patently false, he will probably alter tactics and say instead that it was not a fair forecast as he did not say “time in”, or this or that changed during the time we were waiting for X, or X was measured incorrectly, or something intervened and made X miss its mark, or any of a number of things. The power of invention here is stronger than you might imagine. Creators will do anything but admit what is obvious because of the passion and the belief that MY_THEORY must be true.
Some theories are more subtle and do not speak in absolutes. For example, MY_THEORY might say “There is a 90% chance that X will happen.” When X does not happen, is MY_THEORY wrong?
Notice that MY_THEORY was careful to say that X might not happen. So is MY_THEORY correct? At this point it is neither right nor wrong.
It turns out that it is impossible to falsify theories that make probabilistic predictions. But it is also the case that, for most things, theories that make probabilistic predictions are better than those that do not (those that just say events like X certainly will or certainly will not happen).
If it wasn’t already, it begins to get complicated at this point. In order to say anything about the correctness of MY_THEORY, we now need to have several forecasts in hand. Each of these forecasts will have a probability (that “90% chance”) attached, and we will have to use special methods to match these probabilities with the actual outcomes.
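The text does not name one of these special methods, but a standard choice in forecast verification is the Brier score: the mean squared difference between the stated probability and what actually happened. A sketch, with an invented series of forecasts and outcomes:

```python
# Brier score: a standard way to match probabilistic forecasts to
# actual 0/1 outcomes. Lower is better; a perfect forecaster scores 0.
# The forecasts and outcomes below are invented for illustration.

def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

forecasts = [0.9, 0.9, 0.8, 0.7, 0.9]  # MY_THEORY's stated chances that X happens
outcomes  = [1,   0,   1,   1,   1]    # what actually happened (1 = X happened)

print(round(brier_score(forecasts, outcomes), 3))  # 0.192
```

Note that a single surprising outcome (the 90% forecast that failed) raises the score but does not falsify the theory; only the accumulated score over many forecasts tells us anything.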
It might be the case that MY_THEORY’s forecasts are never quite right, yet it is still useful to somebody who needs to make decisions about the thing MY_THEORY predicts. Measuring usefulness is even more complicated than measuring accuracy. If MY_THEORY is more often accurate, or more often useful, then we have more confidence that MY_THEORY is true, without ever knowing with certainty that it is true.
The best thing we can do is to compare MY_THEORY to other theories, like RIVAL_THEORY, or to theories that are much simpler in structure but are natural rivals. As mentioned above, this is because many theories might make the same predictions, so we have to look outside the theory to see how it fits in with what else we know. Simpler theories that make predictions just as accurate as those of complicated theories more often turn out to be correct (but not, obviously, always).
For example, if MY_THEORY is a theory of global warming that says there is an 80% chance that global average temperatures will increase each year, we need to find a simple, natural rival to this theory so that we can compare MY_THEORY against it. The SIMPLE_THEORY might state “there is a 50% chance that global average temperatures will increase each year.” Or LAST_YEAR’S_THEORY might state “this year’s temperatures will look like last year’s.”
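This comparison can be made concrete with the same kind of probability scoring. Here is a sketch, assuming the Brier score as the measure and an invented series of yearly outcomes; the “skill score” at the end is a standard summary of how much one forecaster improves on another:

```python
# Comparing MY_THEORY against its simple rival on the same outcomes.
# The outcome series is invented purely for illustration.

def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# 1 = temperature increased that year, 0 = it did not (made-up series).
increased = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]

my_theory_forecasts     = [0.8] * len(increased)  # "80% chance of an increase"
simple_theory_forecasts = [0.5] * len(increased)  # coin-flip rival

my_score     = brier_score(my_theory_forecasts, increased)
simple_score = brier_score(simple_theory_forecasts, increased)

# Skill score: fractional improvement of MY_THEORY over SIMPLE_THEORY.
# Positive means MY_THEORY did better; zero or negative means it added nothing.
skill = 1 - my_score / simple_score
print(round(my_score, 3), round(simple_score, 3), round(skill, 3))
```

On this made-up series MY_THEORY beats the coin-flip rival, but only modestly; it is exactly this margin over the simple rival, not the raw score alone, that tells you whether the complicated theory is earning its keep.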
Thus, especially in complex situations, when somebody is touting a theory we should always ask how well that theory makes predictions and how much better it is than its simpler, natural rivals. If the creator of the touted theory cannot answer these questions, you are wise to be suspicious of the theory and to wait until that evidence comes in.