How many pads of paper do I have on my desk right now? How many books are on my shelves this minute?
You don’t know the answer to any of these questions, just as you don’t know whether the Tigers will beat the Marlins tonight, whether IBM’s stock price will be higher at today’s closing bell, or whether the spin of an outermost electron in my pet ytterbium atom is up or down.
You might be able to guess—predict—correctly any or all of these things, but you do not know them. There is some information available that allows you to quantify your uncertainty.
For example, you know that I can have no pads, or one, or two, or some other discrete number, certainly well below infinity. The number is more likely to be small rather than large. And since we have never heard of a universal physical law of “Pads on Desks”, this is about as good as you can do.
In a severe abuse of physics language, we can say that, to you, there are exactly no pads, exactly one pad, exactly two pads, and so forth, where each possibility exists in a superposition until…what? Right: until you look.
I know how many pads of paper I have because, according to my sophisticated measurement, there are three. And now, according to your new information, the probability that there are three has collapsed to one. Given this observation—and accepting that it is without error, and granting our mental stability—the event is no longer random, but known.
The point of this tedious introduction is to prove to you that “randomness” merely means “unknown.” Probability, and its brother randomness, are measures of information. What can be random to you can be known to me.
An event could be random to both of us, but that does not mean that we have identical information that would lead us to quantify our probabilities identically. For example, the exact number of books I have on my shelf is unknown to me and you: the event is random to both of us. But I have different information because I can see the number of shelves and can gauge their crowdedness, whereas you cannot.
A marble dropping in a roulette wheel is random to most of us. But not to all. Given the initial conditions—speed of the wheel, spin and force on the marble, where the marble was released, the equations of motion, and so forth—where the marble rests can be predicted exactly. In other words, random to thee but not to me.
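The roulette claim can be made concrete with a toy model. This is only a sketch under made-up physics: a ball decelerating at a constant rate along a circular track, with the deceleration value and pocket count chosen for illustration, not taken from any real wheel.

```python
# Toy model of the roulette point: with perfect knowledge of the
# initial conditions, the resting pocket is fully determined.
# All numeric values here are illustrative assumptions.

def predict_pocket(position, speed, deceleration, n_pockets=38):
    """Where the ball stops under constant deceleration (toy physics).

    position: starting angle, in revolutions (0 <= position < 1)
    speed: initial speed, in revolutions per second
    deceleration: constant slowdown, in revolutions per second^2
    """
    distance = speed ** 2 / (2 * deceleration)  # revolutions until rest
    final_angle = (position + distance) % 1.0
    return int(final_angle * n_pockets)

# Identical initial conditions always yield the identical pocket:
assert predict_pocket(0.25, 3.0, 0.5) == predict_pocket(0.25, 3.0, 0.5)
print(predict_pocket(0.25, 3.0, 0.5))
```

Nothing in the model is random; the "randomness" of a real wheel is only our ignorance of the inputs.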
I am happy to say that Antonio Acin, of the Institute of Photonic Sciences in Barcelona, agrees with me. On NPR, he said, “If you are able to compute the initial position and the speed of the ball, and you have a perfect model for the roulette, then you can predict where the ball is going to finish — with certainty.” (My aunt Kayla sent me this story.)
The story continues: “[Acin] says everything that appears random in our world may just appear random because of lack of knowledge.” Amen, brother Antonio.
A Geiger counter measures ionizing radiation, such as might occur in a lump of uranium. That decay is said to be “random”, because we do not have precise information on the state of the lump: we don’t know where each atom is, where the protons and so forth are placed, etc. Thus, we cannot predict the exact times of the clicks on the counter.
But there’s a problem. “You can’t be certain that the box the counter is in doesn’t have a mechanical flaw…” In other words, information might exist that allows the clicks to be semi-predictable, in just the same way as the number of books on my shelves is to me but not to you.
So Acin and a colleague cobbled together ytterbium atoms to produce “true” randomness, by which they mean the results of an electron being “up” or “down” cannot be predicted skillfully using any information.
In their experiment, the information on the ytterbium atoms’ quantum (which means discrete!) state is not humanly accessible, so we can never do better than always guessing “up”¹.
It is misleading to say that they are “generating” randomness—you cannot generate “unknowness.” Instead, they have found a way to block information. Information is what separates the predictable from the unpredictable.
The difference is crucial: failing to appreciate it accounts for much of the nonsense written about randomness and discrete mechanics.
¹Brain teaser for advanced readers. Acin’s experiment generates an “up” or “down”, each occurring half the time unpredictably. Why is guessing “up” every time better than switching guesses between “up” and “down”?
Update: This is what happens when you write these things at 5 in the morning. The teaser is misspecified. It should read:
Acin’s experiment generates an “up” or “down”, each occurring as they may. When is guessing “up” (or “down” as the case might be) every time better than switching guesses between “up” and “down”?
You will see that I idiotically gave away the answer in my original, badly worded version.
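For readers who prefer to explore the teaser empirically, here is a small simulation sketch. The bias value `p` is an arbitrary assumption for illustration; try changing it (including to 0.5) and watch what happens to each strategy.

```python
import random

def score(guesses, outcomes):
    """Fraction of guesses that match the outcomes."""
    return sum(g == o for g, o in zip(guesses, outcomes)) / len(outcomes)

random.seed(1)
n = 100_000
p = 0.7  # assumption: suppose "up" occurs 70% of the time

outcomes = ["up" if random.random() < p else "down" for _ in range(n)]

constant = ["up"] * n                                           # always guess "up"
switching = ["up" if i % 2 == 0 else "down" for i in range(n)]  # alternate guesses

print(f"constant:  {score(constant, outcomes):.3f}")
print(f"switching: {score(switching, outcomes):.3f}")
```

Running it with different values of `p` makes the condition in the teaser apparent without my spoiling it in prose again.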