A lot of kids have alphabet blocks. Twenty-six wooden blocks with the letters of the alphabet on them.
One way to arrange them is just like in the song, “A B C D E F G…” You know the one. You’re humming it in your mind even now. But there are other ways to arrange the blocks, too. Like “B A C D E F G…” And “C A B D E F G…”
Now set your kid, or grand kid, or an NPR listener, down and have them make every possible arrangement that can be made with the blocks. As a guess—don’t do any math—how long do you think this might take? Suppose it takes one full second to make each arrangement.
Ten minutes? An hour? Maybe a full Saturday afternoon?
About 12 or 13 quintillion years, depending on snack and bathroom breaks. A quintillion is 10^18, or 1,000,000,000,000,000,000. Some say the universe is about 14 billion years old. That means it would take the lifetimes of a bit under 1 billion universes, each lasting 14 billion years, to make every possible arrangement.
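If you would rather check the arithmetic than take my word for it, here is a quick sketch in Python. The count of arrangements is just 26 factorial:

```python
# Number of ways to order 26 distinct alphabet blocks: 26!
import math

arrangements = math.factorial(26)        # about 4.03 x 10^26
seconds_per_year = 60 * 60 * 24 * 365    # ignoring leap years
years = arrangements / seconds_per_year  # at one arrangement per second

print(f"{arrangements:e} arrangements")
print(f"{years:e} years")                # about 1.3 x 10^19, i.e. ~13 quintillion
```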
All sizes are relative, and relative to any time in our experience, at least in this universe, this time is big.
You can make this big a bit smaller by speeding the process up. We did one arrangement each second. But why not two? Or even ten? Ten is speedy for a kid, but nothing for a computer. So we’ll imagine our kid is “AI”: a brute beast of a computer algorithm, much faster than any ordinary child. Let’s do a million arrangements each second. Literal computer child’s play! Now how long will it take to make all the arrangements?
The math is easy. A million is 6 zeros. All we have to do is subtract those 6 zeros from the 19 we had for one-per-second. That leaves us with 13. So 10,000,000,000,000, or ten trillion years. Still longer than the age of the universe, and by a fair measure.
What if we did a billion arrangements per second? That’s 9 zeros; subtracting gives us 10, which is ten billion years. That’s nicely in the range of the age of the universe.
So, if we did one billion-with-a-B arrangements each second, it would take about the lifetime of the universe to get through them all.
Now let’s add a twist. Suppose we take all these alphabet blocks and dump them in a big bin, and we shake it around for some time, and then dump the lot on the ground. Further suppose there is a machine with 26 square holes, into which fit one block per hole.
This machine is constructed to flash a green light if the blocks when in each of the holes line up as in the alphabet song, and it flashes a red light otherwise. With me?
Our next step is to get a Harvard Studies graduate and have her try various combinations to get the light to glow green. She won’t have any familiarity with the alphabet song, which is white supremacist—the letters on the blocks will appear to her only as strange scratchings—so she won’t know what order to insert the blocks. She will instead have to try combinations “at random” until she finds the one that works.
What is the probability that, on her first try, she gets the light to glow green? Which is to say, that she gets the right order? Well, I won’t bother you with the calculation, but it turns out to be about 2.5×10^-27. Which, if you like to stare at long numbers, is 0.0000000000000000000000000025.
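The calculation I spared you is nothing but one chance in 26!, which you can confirm in a line:

```python
# Probability that one uniformly random ordering matches the single target ordering.
import math

p = 1 / math.factorial(26)
print(f"{p:.2e}")   # about 2.48e-27
```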
What is the probability she gets it right on her second guess? That might be trickier, until we recall this is a Harvard Studies graduate. Meaning, we suppose, there is no possibility she will remember the order she tried first. So the probability is, again, 2.5×10^-27.
Even if she could remember, there is no clue in a failed sequence that she was “close”. For instance, suppose she got the first 24 letters right, and ended her first try “…U V W X Z Y”. The light glows red. She won’t know that all she had to do was swap the last two.
We already saw that it would take her a mighty long time to try every sequence, even if she could keep track of her previous guesses. Too long, even though she might get lucky and not have to try them all. But whatever. She will never make it.
If we let the alphabet blocks be certain chemicals, let the Harvard Studies graduate stand in for “random” arrangements of them, and let the machine’s green light indicate genes that function, then we have, in rough form, the neo-Darwinian theory of “random” mutations causing evolution.
Now don’t screw your fedora around your ears and start making hersterical noises like that mother on the phone in A Christmas Story. The analogy is, indeed, inexact, as all analogies are.
But it errs on the side of simplicity. Real so-called random mutations are vastly more complex, and there have been, the claim runs, untold trillions upon trillions upon trillions of them. You get the idea. Our block example represents just one new mutation of a dirt-simple gene.
We also see that the “random” theory cannot be believed. But don’t worry! Some kind of intelligent design, as long as we’re careful defining these words, must be true.
If you doubt any of this, try it yourself. Make it easy on yourself and only use the letters “A B C D E F G”—G for gene, you see.
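And if you would rather not buy blocks, here is one way to run that experiment on a machine: a small simulation of the memoryless guesser with just the seven blocks A through G. There are 7! = 5,040 orderings, so on average she needs about 5,040 tries before the light glows green. (The function name is mine, invented for the sketch, not anything standard.)

```python
# Simulate memoryless random guessing with 7 blocks: a green light
# means the shuffled order matches the target "alphabet song" order.
import random

target = list("ABCDEFG")

def tries_until_green(rng):
    """Shuffle with no memory of past guesses until the order matches."""
    blocks = target.copy()
    count = 0
    while True:
        count += 1
        rng.shuffle(blocks)
        if blocks == target:
            return count

rng = random.Random(0)                 # fixed seed so the run is repeatable
trials = [tries_until_green(rng) for _ in range(100)]
print(sum(trials) / len(trials))       # expectation is 7! = 5040 tries
```

With 26 blocks instead of 7, the same loop would still be running when the universe went dark.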
I got this idea from a post by Ann Barnhardt.
Subscribe or donate to support this site and its wholly independent host using credit card click here. Or use the paid subscription at Substack. Cash App: $WilliamMBriggs. For Zelle, use my email: firstname.lastname@example.org, and please include yours so I know who to thank.