Bringing Order To Disordered Views Of Entropy

This is the second post reviewing Sabine Hossenfelder’s new book Existential Physics. The first was my criticism of Many Worlds. If you are short of time, skip to the bottom today.

Entropy, some say, is a measure of disorder, though perhaps a metric of lack of order better describes what they mean. Others, such as Yours Truly, see entropy as a conditional measure of information. Let’s see what’s right.

Old Hoss gives the common view of entropy, the supposed tendency toward a decrease in order:

The biological processes involved in aging and exactly what causes them are still the subject of research, but loosely speaking, we age because our bodies accumulate errors that are likely to happen but unlikely to spontaneously reverse. Cell-repair mechanisms can’t correct these errors indefinitely and with perfect fidelity. Thus, slowly, bit by bit, our organs function a little less efficiently…

Perhaps this sounds right to you.

Then she says things like this, trying to sharpen the view of cause in vague phrases like “spontaneously reverse”: “A system can evolve toward a more likely state only if an earlier state was less likely.”

This has no unconditional meaning, because probability has no unconditional meaning. A state can only be “likely” with respect to a set of explicit conditions, or premises. Same with “less likely.” And, of course, no state evolves simply because the conditional probabilities at its start and finish are different numbers. States “evolve”, or rather change, because of powers acting upon them. It has zero to do with probability.

Just so, our bodies don’t accumulate errors that are “likely” unless you specify the conditions of that likelihood and of the causes operating to change cells. There is no such thing, in a causal sense, as “spontaneously reverse”. That errors accumulate is a truth we all see, but the question is why: what causes them?

After pages of stuff like this, mixing up cause with knowledge of cause, we get a…spontaneous reversal! “Entropy thus is really a measure of our ignorance, not a measure for the actual state of the system.”

This is exactly right.

She gives an excellent example of this. There is a grid of cells. At first all the cells at the bottom are colored, the top all blank. This is a well recognized kind of order; maybe it even pleases your eye.

She then presents another grid with the colored cells placed helter-skelter. If you want to play along, it’s a 10×10, standard (x,y) grid. The colored cells are (3,1), (4,1), (5,9), (2,6), (5,3), (5,8), (9,7), and there are some others. Try it. To the eye, the pleasing order has vanished, replaced, as some would say, by randomness.

Then comes her neat trick: “The super-nerds among you will immediately have recognized these sequences [there are two grids in her example] as the first twenty digits of π”.

Very prettily done! “Randomness” is in the eye of the beholder. It does not exist on its own. That seemingly chaotic distribution of squares is full of order—conditioned on certain knowledge. Randomness is just as dependent on conditions as probability is, and for the same reason: they are the same thing, probability being (at times) the quantification of randomness.
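If you want to see the trick for yourself, here is a minimal sketch in Python (my own reconstruction, not anything from the book; reading the digits of π in consecutive pairs as (x,y) coordinates is an assumption suggested by the cells listed above):

    # Reconstruction (an assumption): read the first twenty digits of pi in
    # consecutive pairs as (x, y) coordinates on a 10 x 10 grid.
    PI_DIGITS = "31415926535897932384"   # first twenty digits of pi
    coords = {(int(PI_DIGITS[i]), int(PI_DIGITS[i + 1]))
              for i in range(0, len(PI_DIGITS), 2)}
    # Print the grid: '#' for a colored cell, '.' for a blank one, top row first.
    for y in range(9, -1, -1):
        print("".join("#" if (x, y) in coords else "." for x in range(10)))

The printed grid looks like noise to the eye, yet every colored cell is completely determined, conditioned on knowing the rule.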

Point is this: if randomness doesn’t exist, which it doesn’t, then neither does entropy. Both are measures of our ignorance of cause. Old Hoss gets this, and gets that entropy doesn’t apply as a cause or a “law”:

To me, that’s the major reason the second law of thermodynamics shouldn’t be trusted for conclusions about the fate of the universe. Our notion of entropy is based on how we currently perceive the universe; I don’t think it’s fundamentally correct.

To which I say, amen. And to which Old Hoss should also say amen, especially since she just said it. Alas, several pages later she forgot she said it. Going back to her example of the grid, and mixing up the colors of the squares, she says:

After you have maximized [entropy] and reached an equilibrium state, entropy can coincidentally decrease again. Small out-of-equilibrium fluctuations are likely; bigger ones, less likely.

As a statement of how some systems behave at some times, it’s fine, with our imaginations filling in premises for those “likely”s and times. But it’s just not true unconditionally. If you doubt that, I’ll give you an irrefutable proof of it. Rather, you will.

You are the proof. You are an increase in order: your conception and, if you escape the womb, your birth are highly ordered events, with respect to, say, a bag of chemicals. These events, of order increasing with respect to certain knowledge, happen everywhere, all the time. And they happen because of certain powers being exercised.

So we’re right back to the beginning. We need to understand cause, what powers exist in things, and how they operate or not.

SKIP TO HERE

Old Hoss says this, and she is far from alone in saying things like it:

In an infinite amount of time, anything that can happen will eventually happen—no matter how unlikely.

No. No no no no no. No.

Again, things are only “likely” or “unlikely” with respect to premises we pick. Probability is a measure of our knowledge, or rather its level of certainty. Events happen only because of some power being used. Probability isn’t a cause.

I’m guessing Old Hoss, and the legion of others who agree with her, have in mind things like universal roulette wheels, being spun over and over again, ad infinitum, where we wait for 30 to hit (or whatever). In this example, we have a model, a probability of hitting 30, based on our knowledge of the causes involved, which include the material and formal and final causes, and not just efficient causes.

According to that model, we can calculate the probability of at least one 30 in whatever number of spins you care to name. The larger the number of spins, the higher this probability is. After an infinite number of spins, the probability is 1.
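To put a number on it, suppose for illustration a single-zero wheel with 37 pockets (the pocket count is my assumption; only the limit matters). The model says:

    Pr(at least one 30 in n spins) = 1 − (36/37)^n, which tends to 1 as n → ∞.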

But that doesn’t mean a real roulette wheel will ever hit 30. That depends on the causes in play, like for instance formal causes: the wheel is missing its 30. Your model also assumes the croupier never tires: the efficient cause is always there. Will it be? Forever?

All right, who cares about roulette. Here’s the big error (which she herself doesn’t appear to fully buy): “With quantum fluctuations, low-entropy objects (brains!) can appear even out of vacuum—and then disappear again.”

By “low-entropy objects” she means highly ordered ones, like brains. What we see here, and will see time and again in Old Hoss’s book, is the Deadly Sin of Reification. Believing your model is Reality.

You hear this kind of thing from Quantum Reifiers all the time. They give probability to things like brains and Buicks “spontaneously” coming into being from “fluctuations.” They do this because they can imagine it, and, according to their models, can’t rule it out. The model says it can happen, so given long enough, like the wheel, a copy of yourself “spontaneously” appears next to you and laughs.

This should signal to QRs that something has gone wrong with their model, and that alternate metaphysics of nature are in order (they do exist). Hand waving about causes by calling them mysterious “fluctuations” does not prove you have understood them. Indeed, it proves the opposite.


12 Comments

  1. Dieter Kief

    Thanks! Sabine Hossenfelder does have some influence in Germany, so: Thx. for making her flaws/ inconsistencies – – – explicit (Hegel & Analytical Philosophy scholar and John Rawls’ latest book foreword excelling Robert B. Brandom, hehe).

    Engineer-blogger and Dead Head Achmed E. Newman did the same as you did – it’s just that on his page the posts are best found by simply scrolling down – about a dozen posts – then there’ll pop up two Hossenfelder inquiries that might catch the attention of one or the other aficionado of the – idiosyncrasies of the climate change debate.
    Achmed E. Newman focuses on the energy transfer within the layers of our spheres and looks in detail at what Sabine Hossenfelder gets right (quite some, as it turns out) – and what not (yep, there’s this stuff too).
    https://www.peakstupidity.com/

  2. McChuck

    Entropy is much more easily understood as a measure of averageness. The more things are like all the other things around them (in every possible sense and meaning), the lower the entropy. Things/conditions/systems which contain wildly different states are high entropy.

    More work can be done in a higher entropy system, because there have to be differences for energy to flow. That’s why engines work more efficiently when fed cold air.

  3. will she-they realize what a brilliant mind has recognized her-their book (incl. copy & paste) and will she-they draw real life conclusions from what has been said here? e.g. write here? she-their free will rather opposes this.

  4. Cookie

    So the question can be reframed as one about our knowledge of a system, but didn’t Heisenberg prove that measuring a system alters it?

    So can we ever have total knowledge of any system?

    Maybe we can have knowledge at certain levels but not total knowledge.

    Anyhow I prefer to stick the label entropy on this question and stick it in my drawer.

  5. Bob Kurland

    Interesting article, Matt, thanks. I didn’t begin to understand anything about entropy until the second year I taught thermo, this despite having taken a grad course in thermo from Percy Bridgman, a Nobel Prize winner for his work in thermo. I believe the formulation that gives most insight into thermo is Shannon’s definition of entropy in information theory. Is this where you’re going, Briggs? A lot of confusion about entropy changes in the real world is that people don’t distinguish between forms of the Second Law for isolated and open systems. I don’t think it has been possible to derive the Second Law from statistical mechanics (Boltzmann did fail). The Second Law for the universe is a consequence of initial conditions on the Big Bang (see Chapter ??? in Penrose’s “Road to Reality”). Nevertheless, if it is the case that physical “laws” are derived from empirical evidence, then I agree with Einstein: “It is the only physical theory of universal content, which I am convinced, that within the framework of applicability of its basic concepts will never be overthrown.”

  6. Rudolph Harrier

    The statement “in an infinite amount of time, anything that can happen will eventually happen—no matter how unlikely” has caused no end of confusion for students. In a purely mathematical setting it is of course completely false.

    For example, consider an infinite sequence s_n where each entry is a 1 or a 0. Then if the term “possible” means anything at all, it means that if we do not know s_n, but know it is of this type, then s_n could be 1 or 0. But of course it is completely possible that we have a constant sequence of 0’s, meaning that we never have s_n = 1 even though in some way this is “possible.” Yet a student may say that there eventually MUST be a 1, in a sort of infinite gambler’s fallacy.

    In reality most students can accept constant sequences, but get tripped up in more complicated sets. For example, consider a system of linear equations whose solutions are an infinite, but proper, subset of the space of all possible values. Many students will see that there are infinitely many solutions and conclude that ANY set of values that they choose will be a solution, since in the infinite set of solutions we must surely get to SOME solution. But of course there is no need for this to happen. Similarly for approximations: there are infinitely many power series we could set up, so surely one of them must approximate our function well, but not so if our function is not analytic, etc.

    I’ve seen people try to justify this by saying if the probability goes to 1 then we can get anything. But of course that ignores the fact that events that have been given probability 1 routinely fail to happen. For example, we randomly select a number in the interval [0,1]. With the normal assumptions the probability of selecting a positive number is 1, but if we selected 0 that event would fail to happen. And more generally, for any x in [0,1] the probability of NOT selecting x would be 1, so there is always a probability 1 event that fails to happen.

    It seems like people believe that there are some events which just “technically” have probability 1, and thus could fail to occur, and ones which “really” have probability 1, and thus must occur. Kind of similar to the idea that .9 repeating can’t equal 1 because there must be an infinitesimal in between them, leading to the idea that .9 repeating might “technically” equal 1, but you can think of it as less than 1 because it doesn’t “really” equal 1.

  7. Milton Hathaway

    Steve Mould – A better description of entropy:

    https://www.youtube.com/watch?v=w2iTCm0xpDc

    It does seem like entropy is an overloaded word, used to describe analogous but fundamentally disparate things in different fields of study, inviting confusion and misapplication between said fields.

    My verdict on Sabine Hossenfelder is still out. I enjoy her skeptical approach to many topics, but then she’ll go and annoy me by throwing all skepticism out the window with other topics, like CAGW. I’m still giving her the benefit of the doubt, that she’s just choosing her battles carefully, to maximize her impact, not to mention her income.

  8. > After an infinite number of spins, the probability is 1.

    I agree with Rudolph Harrier on this. There is no requirement for an infinite number of roulette spins to eventually produce a 30. The probability certainly converges to 1, and 1 certainly is the limit, but that still doesn’t mean it gets to 1. Only if you add in an axiom akin to “every possible event can be found in an infinite sequence” does a 30 certainly happen.

  9. David Marwick

    I am happy to admit that I am a very pedestrian nobody stuck in a tangible physical reality which can only be made sense of by reference to a complementary metaphysical reality consisting of Life, Truth and a Generosity of Goodness.

    Perhaps I missed something in the “debate” between the “Hoss” and the “Boss” (Briggs) but they both seem to assume that the only “cause” of physical phenomena is “chance” or “probability”. Both seem to assume that, given enough time, Order will create itself and de-create itself by the same “random” process. Maybe that’s a reasonable supposition if the fundamental premise is that Nothing can, and does, turn itself into Everything with an infinity of hypothetical “multiverses”, “wormholes” and 14 bn years so that things that don’t (and can’t) happen in our physical reality have happened in an irrepressible “becoming” tending to some undefinable “Omega Point”.

    Call me some kind of throw-back to an “unenlightened pre Mediaeval time” if you like, but I will continue to be sure that every contingent thing or event must have a prior and sufficient cause… except for the First Cause. That First Cause I will continue to call God; the original Power (Life), Intellect and Will (the most generous giver of goodness).

    Concerning Entropy. The First Cause, having created time, space and all in it, said “it’s all good” in which it was a perpetuated perfection designed to last… until the main purpose of Creation said, effectually, “this is a good spot and we’ll run it without You, Mr God, good and true will be whatever suits us”. “Orrite,” says God, “but you’ll have to do it with effort and pain, things will wear out and fall down and you won’t have My light to guide you unless you ask for it and conform.”

    Tom Aquinas said that the whole of Creation was “denatured” by the Original Rebellion of humanity. That, in my opinion, was what became known as Entropy.

  10. David Marwick

    I forgot to mention that I think that “statistician” and “philosopher” are mutually exclusive concepts. A “statistician” assumes that statistical “information” is all that is needed in an irrational “effect” while a philosopher rightly assumes that there is no effect without a cause.

    All Modernism, both secular and “theological” is based on the assumption of a “reality” “becoming” according to the dialectic of the “old thesis” (the way things are) competing with a self generating “antithesis” to form a “synthesis that becomes the thesis” ad infinitum.

  11. Kevin T Kilty

    “…Entropy, some say, is a measure of disorder,…”

    I have dealt with the problem of having to describe entropy for 50 years, beginning with a chemistry lab section in the spring of 1974 all the way to the last time I taught engineering thermodynamics in a second tier engineering school. “Disorder” is just about the least useful explanation one can use.

    There are two better approaches in my view. The first is that entropy simply describes evolution toward more likely states. This is at the foundation of the “broken egg not being likely to reconstitute itself” description. It’s the time’s arrow argument.

    The other view stems from a general description of the first law of thermodynamics — changes to internal energy are equal to heat minus work. If it’s possible to explain that work in an example system is done on a pressure-volume path, by using an automobile engine perhaps as an example, then it may be possible to convince someone that there is such a thing as temperature-entropy passage of heat. In this way of looking at things entropy is simply a state function no different than volume. It certainly takes the mystique out of the concept.

  12. Jerry

    “In an infinite amount of time, anything that can happen will eventually happen—no matter how unlikely.” This is the same false reasoning behind the Infinite Monkey Theorem: “Given enough time, a monkey provided with a typewriter will produce the complete works of Shakespeare.” This is yet another example of a “thought experiment,” a concept popularized by Einstein.

    Some “computer scientists” claim to have done it: they programmed computers to generate random sequences of characters, in the manner of monkeys, cherry-picked sequences that match Shakespeare, and assembled them. 99.99% of Shakespeare’s complete works reproduced, they claim. Complete and utter nonsense on many levels.

    Some years back, another team of researchers placed keyboards in a cage of monkeys. The monkeys produced many pages of single characters by sitting on the keyboards, and showed great interest in urinating on them. No Shakespeare. But at least it was an actual experiment.
