This is the second post reviewing Sabine Hossenfelder’s new book Existential Physics. The first was my criticism of Many Worlds. If you are short of time, skip to the bottom today.
Entropy, some say, is a measure of disorder, though perhaps a metric of lack of order better describes what they mean. Others, such as Yours Truly, see entropy as a conditional measure of information. Let’s see what’s right.
Old Hoss gives the common view of entropy, the supposed tendency toward a decrease in order:
The biological processes involved in aging and exactly what causes them are still the subject of research, but loosely speaking, we age because our bodies accumulate errors that are likely to happen but unlikely to spontaneously reverse. Cell-repair mechanisms can’t correct these errors indefinitely and with perfect fidelity. Thus, slowly, bit by bit, our organs function a little less efficiently…
Perhaps this sounds right to you.
Then she says things like this, trying to sharpen the view of cause in vague phrases like “spontaneously reverse”: “A system can evolve toward a more likely state only if an earlier state was less likely.”
This has no unconditional meaning, because probability has no unconditional meaning. A state can only be “likely” with respect to a set of explicit conditions, or premises. Same with “less likely.” And, of course, no state evolves simply because the conditional probabilities at its start and finish carry different numbers. States “evolve”, or rather change, because of powers acting upon them. It has zero to do with probability.
Just so, our bodies don’t accumulate errors that are “likely” unless you specify the conditions of that likelihood and of the causes operating to change cells. There is no such thing, in a causal sense, as “spontaneously reverse”. That errors accumulate is a truth we all see, but the question is why. What causes them?
After pages of stuff like this, mixing up cause with knowledge of cause, we get a…spontaneous reversal! “Entropy thus is really a measure of our ignorance, not a measure for the actual state of the system.”
This is exactly right.
She gives an excellent example of this. There is a grid of cells. At first all the cells at the bottom are colored, the top all blank. This is a well recognized kind of order; maybe it even pleases your eye.
She then presents another grid with the colored cells placed helter skelter. If you want to play along, it’s a 10×10, standard (x,y) grid. The colored cells are (3,1), (4,1), (5,9), (2,6), (5,3), (5,8), (9,7), and there are some others. Try it. To the eye, the pleasing order has vanished, replaced, as some would say, by randomness.
Then comes her neat trick: “The super-nerds among you will immediately have recognized these sequences [there are two grids in her example] as the first twenty digits of π”.
Very prettily done! “Randomness” is in the eye of the beholder. It does not exist on its own. That seemingly chaotic distribution of squares is full of order—conditioned on certain knowledge. Random is just as dependent on conditions as probability is, and for the same reason: because they are the same, probability being the quantification (at times) of randomness.
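You can check the trick yourself. Here is a minimal sketch, on the assumption (which fits the cells listed above) that consecutive digits of π are read off in pairs as (x,y) coordinates on the grid:

```python
# Sketch: recover the "random" grid from the first 20 digits of pi.
# Assumption: consecutive digit pairs are read as (x, y) coordinates,
# matching the cells listed above: (3,1), (4,1), (5,9), ...

PI_DIGITS = "31415926535897932384"  # first 20 digits of pi

# Pair the digits up into coordinates.
cells = [(int(PI_DIGITS[i]), int(PI_DIGITS[i + 1]))
         for i in range(0, len(PI_DIGITS), 2)]

print(cells)
# The listed cells (3,1), (4,1), (5,9), (2,6), (5,3), (5,8), (9,7)
# all appear, in order. Knowledge of pi turns apparent chaos into order.
```

Conditioned on ignorance of π, the grid looks random; conditioned on knowledge of π, it is perfectly ordered. Same squares, different premises.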
Point is this: if randomness doesn’t exist, which it doesn’t, then neither does entropy. Both are measures of our ignorance of cause. Old Hoss gets this, and gets that entropy doesn’t apply as a cause or a “law”:
To me, that’s the major reason the second law of thermodynamics shouldn’t be trusted for conclusions about the fate of the universe. Our notion of entropy is based on how we currently perceive the universe; I don’t think it’s fundamentally correct.
To which I say, amen. And to which Old Hoss should also say amen, especially since she just said it. Alas, several pages later she forgot she said it. Going back to her example of the grid, and mixing up the colors of the squares, she says:
After you have maximized [entropy] and reached an equilibrium state, entropy can coincidentally decrease again. Small out-of-equilibrium fluctuations are likely; bigger ones, less likely.
As a statement of how some systems behave at some times, it’s fine, with our imaginations filling in premises for those “likely”s and times. But it’s just not true unconditionally. If you doubt that, I’ll give you an irrefutable proof of it. Rather, you will.
You are the proof. You are an increase in order, your conception and, if you escape the womb, your birth are highly ordered events, with respect to, say, a bag of chemicals. These events, of order increasing with respect to certain knowledge, happen everywhere, all the time. And they happen because of certain powers being exercised.
So we’re right back to the beginning. We need to understand cause, what powers exist in things, and how they operate or not.
SKIP TO HERE
Old Hoss says this, and she is far from alone in saying things like it:
In an infinite amount of time, anything that can happen will eventually happen—no matter how unlikely.
No. No no no no no. No.
Again, things are only “likely” or “unlikely” with respect to premises we pick. Probability is a measure of our knowledge, or rather its level of certainty. Events happen only because of some power being used. Probability isn’t a cause.
I’m guessing Old Hoss, and the legion of others who agree with her, have in mind things like universal roulette wheels, being spun over and over again, ad infinitum, where we wait for 30 to hit (or whatever). In this example, we have a model giving the probability of hitting 30, based on our knowledge of the causes involved, which include the material and functional and final causes, and not just efficient causes.
According to that model, we can calculate the probability of at least one 30 in whatever number of spins you care to name. The larger the number of spins, the higher this probability is. After an infinite number of spins, the probability is 1.
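The arithmetic of that model is simple. A sketch, assuming a standard American 38-pocket wheel (so each spin hits 30 with probability 1/38) and independent spins:

```python
# Under the model: P(at least one 30 in n spins) = 1 - (37/38)**n.
# Assumptions: American 38-pocket wheel, fair, spins independent.

def p_at_least_one_30(n: int) -> float:
    """Model probability of seeing 30 at least once in n spins."""
    return 1 - (37 / 38) ** n

for n in (1, 38, 100, 1000):
    print(n, round(p_at_least_one_30(n), 4))
# The probability climbs toward 1 as n grows without bound. But 1 is
# the model's limit, not a cause: a wheel missing its 30 never hits it.
```

Note the probability of 1 is a statement about the model's premises, not about any physical wheel.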
But that doesn’t mean a real roulette wheel will ever hit 30. That depends on the causes in play, like, for instance, formal causes: the wheel is missing its 30. Your model also assumes the croupier never tires: the efficient cause is always there. Will it be? Forever?
All right, who cares about roulette. Here’s the big error (which she herself doesn’t appear to fully buy): “With quantum fluctuations, low-entropy objects (brains!) can appear even out of vacuum—and then disappear again.”
By “low-entropy objects” she means highly ordered ones, like brains. What we see here, and will time and again in Old Hoss’s book, is the Deadly Sin of Reification. Believing your model is Reality.
You hear this kind of thing from Quantum Reifiers all the time. They give probability to things like brains and Buicks “spontaneously” coming into being from “fluctuations.” They do this because they can imagine it, and, according to their models, can’t rule it out. The model says it can happen, so given long enough, like the wheel, a copy of yourself “spontaneously” appears next to you and laughs.
This should signal to QRs that something has gone wrong with their model, and that alternate metaphysics of nature are in order (they do exist). Hand waving about causes, calling them mysterious “fluctuations”, does not prove you have understood them. Indeed, it proves the opposite.