From Anon comes the major announcement: “Quside unveils the world’s first Randomness Processing Unit”.
Quside today unveils its vision for the Randomness Processing Unit (RPU), a device designed to accelerate the execution of intensive randomized workloads while reducing both energy consumption and execution time.
Many of the most relevant simulation, optimization, and prediction workloads rely on stochastic processes. In turn, they require an ever-increasing source of high-quality, high-speed random numbers and associated processing tasks. Current approaches using pseudo-randomness generation in high-performance environments often lead to significant energy consumption and performance inefficiencies, as well as potentially introducing artifacts and co-dependencies in the statistical results.
Who needs “random” numbers? Well, it depends on what random means. Random means unknown, unpredictable, a lack of knowledge of cause.
So where might we need numbers which are unknown, unpredictable, and where the lack of knowledge of their cause is important?
I can think of only three: casinos, cryptography, and conjuring.
Casinos are situations where preserving the lack of knowledge of the cause of numbers is paramount.
Very obviously, casinos do not want slot machines where the numbers are predictable—beyond a certain point. This is also why cards are shuffled, to conceal their order and make the sequence unpredictable—up to a point. Card counters seek ways around this randomness, and, if caught, are banned. (You are not meant to win.)
The “up to a point” means that the unpredictability is within known bounds. A deck of cards has a known constitution, and the randomness is limited to that. You will not draw a “72 of Crustaceans”, for there is no such card. The same goes for slot machines, which have only fixed outcomes, a set larger and more complex than a deck of cards, to be sure.
It therefore behooves casinos to have means of producing numbers that cannot be predicted except within the known bounds of the gambles (which they call “games”). It also behooves (say behooves three or four times) casinos to boot the odd fellow who hits upon the “randomization scheme”, i.e. who has figured out, at least partially, the cause of the numbers. Such as in card counting.
Casino-like activity is ubiquitous: lotteries, sports, some elections, etc. Here’s an example I used in Uncertainty. The original twelve apostles were reduced to eleven, Judas having gone missing, and they had two equal candidates to choose from to return to a dozen. Then this happened (Acts 1: 23–26; my emphasis):
And they proposed two: Joseph called Barsabas, who was surnamed Justus, and Matthias. And they prayed and said, “You, O Lord, who know the hearts of all, show which of these two You have chosen to take part in this ministry and apostleship from which Judas by transgression fell, that he might go to his own place.” And they cast their lots, and the lot fell on Matthias. And he was numbered with the eleven apostles.
You can call this pagan superstition if you wish, but the same casting of lots occurs each time a zebra-shirted man tosses a coin to see who receives the kickoff. The idea in both cases is the same: remove a known man-made cause from the choice. That is, remove the possibility of bias from the choice.
The referee is not blamed for the result, which was unpredictable; neither were any of the eleven blamed for going with Matthias over Barsabas. Of course, there are plenty of ways to juice coin tosses and lot throwing, which are ways of introducing back those unknown causes. But you have the idea.
Just as obviously as for casinos, messages encoded by secret key cannot have their keys predictable. If you can guess a numerical key, even partially, you can decode a message, at least part way.
Keys must be generated by means that are at least extremely difficult to predict, if not impossible. And there is nothing more unpredictable than certain quantum phenomena, the causes of which are known to be unknown.
Which is not to say they lack cause, just that the causes are not local, in the parlance of physics, and beyond our reach. Also just like casinos, the unpredictability is within known bounds.
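A minimal Python sketch (my illustration, nothing to do with Quside’s device): a cryptographic key should come from something like the operating system’s entropy pool, which many systems feed with physical noise, rather than from a seeded algorithm.

```python
import secrets

# Draw a 128-bit key from the OS entropy pool (os.urandom under the hood).
# There is no seed for an attacker to guess; the unpredictability comes
# from whatever physical noise sources the operating system mixes in.
key1 = secrets.token_bytes(16)
key2 = secrets.token_bytes(16)

print(key1.hex())
print(key2.hex())
# Two independent draws; a repeat is astronomically improbable.
```

The `secrets` module exists precisely because Python’s ordinary `random` module, being seeded and deterministic, is documented as unsuitable for security purposes.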
Hence devices like the RPU. Hardware random number generators aren’t new. Diodes near their breakdown voltage supply unpredictable voltages, which can be turned easily into numbers. Diodes, and other such devices, are not very efficient, though, and consume a lot of energy. Quside brags that they have found a newer and more efficient process, better than their competitors’, which relieves CPUs from the burden.
I don’t know whether their RPU works as advertised, but it is true that removing the job from CPUs is important. For CPUs mostly produce only what are called “pseudo-random” numbers, which are numbers produced by deterministic algorithms, and so are perfectly predictable, if you know their “seed”, i.e. their starting point.
These CPU-based generators are put through innumerable tests (a pun!)—except one—to see if their outputs are in any way predictable (inside the known bounds). The exception is the known algorithm which generated them. This is a weakness, because if your enemy can guess which generator you use, he only has to uncover the seed to lay all bare.
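The point about seeds takes only a few lines of Python to see, using the standard library’s generator (a Mersenne Twister, a typical pseudo-random algorithm):

```python
import random

# Two generators given the same seed produce the very same "random" stream.
# Uncover the seed and all is laid bare.
alice = random.Random(12345)
eve = random.Random(12345)

alice_stream = [alice.randint(1, 100) for _ in range(5)]
eve_stream = [eve.randint(1, 100) for _ in range(5)]

print(alice_stream)
print(eve_stream)
print(alice_stream == eve_stream)  # True: perfectly predictable
```

However many statistical tests the output passes, an enemy holding the seed and the algorithm reproduces the stream exactly.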
Which threat, again, is what hardware devices would alleviate, or, if they truly work as advertised, eliminate.
The third claimant is statistics and probability, which seek “random numbers” for simulations, and above all to approximate numerical integration.
The statistical quest exists because of two reasons: (1) lack of knowledge of straightforward analytical methods (see here and here and here), and (2) the false belief that “random” numbers have mysterious powers that replicate Nature, which some think, wrongly I say, “generates” “random” numbers which somehow, nobody knows how, guarantee the quality of statistical results.
Sometimes experiments are “randomized”—but only in situations where it is known, though not always acknowledged, there are unknown causes. This does not work. If all an observable’s causes were known, then all could be controlled, as in special physics experiments. But because all the causes aren’t known, the “randomization” cannot guarantee an equal partition of causes in “randomized” blocks of the experiments. Thus the “randomization” must be for another purpose.
The randomization should always be seen like referees casting lots: to remove the potential for human bias. To bar, or make less likely, cheating, even unconscious cheating. You don’t give the sicker patients the placebo, and so on. In some cases, the “randomization” takes part in the same statistical ritual as simulations, a sort of blessing on the results. This is wrong, and for the same reasons.
Approximating integrals using “random” numbers is sounder, but inefficient. And it only happens, as the links above show, because analytical methods aren’t yet known. I never tire of quoting Jaynes:
It appears to be a quite general principle that, whenever there is a randomized way of doing something, then there is a nonrandomized way that delivers better performance but requires more thought.
I go through examples in those links. The idea, that some have, that the “randomization” is needed for certain theoretical reasons, to ensure correctness of results, is wrong.
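Jaynes’s principle can be illustrated with a toy integral (my example, not drawn from those links): estimate the integral of x² from 0 to 1, whose exact value is 1/3, by averaging random draws, then compare with a plain deterministic midpoint rule using far fewer points.

```python
import random

random.seed(0)  # note: even the "random" method needs a seed to reproduce

# Monte Carlo: average f(x) at 100,000 uniform random points.
n = 100_000
mc = sum(random.random() ** 2 for _ in range(n)) / n

# Deterministic midpoint rule: only 100 evenly spaced points.
m = 100
mid = sum(((i + 0.5) / m) ** 2 for i in range(m)) / m

exact = 1 / 3
print(abs(mc - exact))   # error typically around 1e-3, despite 100,000 samples
print(abs(mid - exact))  # error around 1e-5 with only 100 points
```

The nonrandomized method required the small extra thought of spacing the points evenly, and repays it with a thousand-fold fewer function evaluations for better accuracy; Monte Carlo earns its keep only in high dimensions, where such grids become infeasible.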
I can’t think of other applications. Which is as far from a proof that there are none as can be. Perhaps you know of others that are not equivalent to casinos or crypto or conjuring—and aren’t misguided, like simulation. Let us know.
Buy my new book and learn to argue against the regime: Everything You Believe Is Wrong.