The multiverse might be real. God, in His wisdom and love of completeness and true diversity and the joy of filling all possible potentials with actuality, might have created such a thing. Indeed, the wondrous complexity and size of the known universe is traditionally taken as an argument for the existence of God. The multiverse simply carries that idea to its limit—in the mathematical and philosophical sense.

Still, there is something appalling about the idea. The multiverse takes parsimony by the short hairs and kicks it in the ass. Talk about multiplying entities beyond necessity! An uncountable infinity of universes that cannot be seen might sound good on paper, but only because we have trouble grasping how truly large an uncountable infinity is.

What follows is not a proof against the multiverse, but an argument which casts doubt on the idea. It was inspired by Jeremy Butterfield’s review of Sabine Hossenfelder’s *Lost in Math: How Beauty Leads Physics Astray* (hat tip: Ed Feser; I haven’t read Hossenfelder’s book), a review we need to examine in depth first.

Her book “emphasizes supersymmetry, naturalness and the multiverse. She sees all three as wrong turns that physics has made; and as having a common motivation—the pursuit of mathematical beauty.” Regular readers have heard similar complaints here, especially about all those wonderful limit results for distributions of test statistics which give rise to p-values, which aren’t what people thought they were. Also time series. But never mind all that. On to physics!

“…Hossenfelder’s main criticism of supersymmetry is, in short, that it is advocated because of its beauty, but is unobserved. But even if supersymmetry is not realized in nature, one might well defend studying it as an invaluable tool for getting a better understanding of quantum field theories…A similar defence might well be given for studying string theory.”

How about the multiverse?

Here, Hossenfelder’s main criticism is, I think, not simply that the multiverse is unobservable: that is, the other pocket universes (domains) apart from our own are unobservable. That is, obviously, ‘built in’ to the proposal; and so can hardly count as a knock-down objection. The criticism is, rather, that we have very little idea how to confirm a theory postulating such a multiverse.

We discussed non-empirical confirmation of theories earlier in the week. We need to understand what is meant by fine-tuning, a crucial concept.

As to supersymmetry, which is a family of symmetries transposing fermions and bosons: the main point is not merely that it is unobserved. Rather, it is unobserved at the energies recently attained at the LHC at which—one should not say: ‘it was predicted to be observed’; but so to speak—‘we would have been pleased to see it’. This cautious choice of words reflects the connection to Hossenfelder’s second target: naturalness, or in another jargon, fine-tuning. More precisely, these labels are each other’s opposites: naturalness is, allegedly, a virtue: and fine-tuning is the vice of not being natural.

Butterfield says “naturalness” is “against coincidence”, “against difference”, and “for typicality.”

By against coincidence he means “There should be some explanation of the value of a fundamental physical parameter.” This is the key thought for us. There has to be a reason—a *cause*—of the value of the electron charge or fine structure constant; indeed *any* and *every* constant. Butterfield says “the value [of any constant] should not be a ‘brute fact’, or a ‘mere matter of happenstance’, or a ‘numerical coincidence’.”

The against difference concept is related to how parameters, i.e. constants, are estimated. And typicality means the value of the parameter must be assessed within a rigorously defined “theoretical framework.”

Namely: there should be a probability distribution over the possible values of the parameter, and the actual value should not have too low a probability. This connects of course with orthodox statistical inference. There, it is standard practice to say that if a probability distribution for some variable is hypothesized, then observing the value of a variable to lie ‘in the tail of the distribution’—to have ‘a low likelihood’ (i.e. low probability, conditional on the hypothesis that the distribution is correct)—disconfirms the hypothesis that the distribution is the correct one: i.e. the hypothesis that the distribution truly governs the variable. This scheme for understanding typicality seems to me, and surely most interested parties—be they physicists or philosophers—sensible, perhaps even mandatory, as part of scientific method. Agreed: questions remain about:

(a) how far under the tail of the distribution—how much of an outlier—an observation can be without disconfirming the hypothesis, i.e. without being ‘atypical’;

(b) how in general we should understand ‘confirm’ and ‘disconfirm’, e.g. whether in Bayesian or in traditional (Neyman-Pearson) terms; and relatedly

(c) whether the probability distribution is subjective or objective; or more generally, what probability really means.
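The tail-area logic in (a)–(c) can be made concrete with a toy calculation. The Normal hypothesis and the observed value below are invented purely for illustration, not taken from the review:

```python
import math

def normal_two_sided_p(x, mu=0.0, sigma=1.0):
    """Two-sided tail probability: P(|X - mu| >= |x - mu|)
    under the hypothesis X ~ Normal(mu, sigma)."""
    z = abs(x - mu) / sigma
    # erfc(z / sqrt(2)) equals 2 * (1 - Phi(z)) for a standard Normal
    return math.erfc(z / math.sqrt(2))

# Hypothesis: the parameter follows Normal(0, 1).
# An observed value of 3.5 sits far under the tail:
p = normal_two_sided_p(3.5)
print(f"p = {p:.5f}")  # roughly 0.00047
```

Whether a value like that “disconfirms” the hypothesis is exactly question (a): the cutoff is a convention, not a deduction.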

That “standard” statistical practice is now being jettisoned (follow the link above for details of this joyful news). Far better to assess the probability a proposition is true given explicitly stated evidence. For that is exactly what probability is: a measure of truth.

Again, never mind that. Let’s discuss cause. Butterfield says he follows Hume and his “constant conjunctions”, which is of course the modern way. But that way fails when thinking about what causes parameters. There are no conjunctions, constant or otherwise.

Ideally, what a physicist would love is a mathematical-like theorem with rigorous premises from which are deduced the value of each and every physical constant/parameter. That would provide the explanation for each constant, and an explanation is a lovely thing to have. But an explanation is not a cause, and knowing only an effect’s efficient cause might not tell you about its final cause, or reason for being.

Now in the multiverse (if it exists) sits our own universe, with its own set of constants with specific values which we can only estimate and which are, as should be clear, theory dependent. A different universe in the unimaginably infinite set could and would have different values for all or some of the constants.

An anthropic-type argument next enters which says we can see what we can see because we got lucky. Our universe had just the right values needed to produce beings like us—notice the implicit and enormous and unjustified assumption that only material things exist—beings that could posit such things as multiverses. But we had to get real lucky, since it appears that even minute deviations from the constants would produce universes where beings like us would not exist. We discussed before arguments against fine-tuning and parameter cause: here and here. Do read these.

Probability insinuates itself long about here. What is the probability of all this fine-tuning? It doesn’t exist. No thing has a probability. All probability is conditional on the premises assumed. And once we start on the premises of the multiverse we very quickly run into some deep kimchi. For one of these premises is, or appears to be (I ask for correction from physicists), uncountability. There is not just a countable infinity of universes, but an uncountable collection of them. This follows from the continuity assumption about the values of constants. They live on the real line; or, because there may be relations between them, the real hyper-cube.

Well, probability breaks down at infinities. We instead speak of limits, but that’s a strictly mathematical concept. What does it mean physically to have a probability approach a limit? I don’t know, but I suspect it has no meaning. Butterfield is aware of the problem.

“For all our understanding of probability derives from cases where there are many systems (coins, dice…or laboratory systems) that are, or are believed to be, suitably similar. And this puts us at a loss to say what ‘the probability of a cosmos’, or similar phrases like ‘the probability of a state of the universe’, or ‘the probability of a value of a fundamental parameter’ really mean” [ellipsis original].

I disagree, for all the reasons we’ve discussed many times. Probability is not a measure of propensity, though probability can be used to assess uncertainty about propensity, and to make predictions. Butterfield then rightly rejects naive frequentism. But he didn’t quite say he rejected it because counting multiverses is impossible. Such a thing can never be done. Still, probability as a measure of truth survives.

Back to fine-tuning and some words of Weinberg about fine-tuning quoted by Butterfield (all markings original):

We assumed the probability distribution was completely flat, that all values of the constant are equally likely. Then we said, ‘What we see is biased because it has to have a value that allows for the evolution of life. So what is the biased probability distribution?’ And we calculated the curve for the probability and asked ‘Where is the maximum? What is the most likely value?’ … [Hossenfelder adds: ‘the most likely value turned out to be quite close to the value of the cosmological constant which was measured a year later’.]…So you could say that if you had a fundamental theory that predicted a vast number of individual big bangs with varying values of the dark energy [i.e. cosmological constant] and an intrinsic probability distribution for the cosmological constant that is flat…then what living beings would expect to see is exactly what we see.
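Weinberg’s recipe, a flat intrinsic prior then biased by the requirement that life can evolve, can be sketched as a toy Monte Carlo. Everything below, including the unit interval and the invented selection function, is an assumption for illustration only, not Weinberg’s actual calculation:

```python
import math
import random

random.seed(0)

# Toy constant "Lambda" drawn flat on [0, 1] (arbitrary units).
# Hypothetical selection function: observers arise only when
# Lambda is small, so small values get heavy weight.
def life_weight(lam):
    return math.exp(-50.0 * lam)  # invented, purely illustrative

draws = [random.random() for _ in range(100_000)]
weights = [life_weight(lam) for lam in draws]

# Mean of the "biased" distribution: what observers would expect to see.
weighted_mean = sum(lam * w for lam, w in zip(draws, weights)) / sum(weights)
print(f"anthropically weighted mean Lambda ~ {weighted_mean:.3f}")
```

Note that the flat prior is a genuine probability here only because the range is bounded; stretched over the whole real line it stops being a probability at all.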

What premise allowed the idea of a “flat” prior on a constant’s value? Only improper probabilities, which is to say, not probabilities at all, result from this premise. Unless we want to speak of limits of distributions again—but where is the justification for that?
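The impropriety can be stated in one line. A “flat” density on the whole real line is a constant $c > 0$, but

$$\int_{-\infty}^{\infty} c \, dx = \infty \quad \text{for every } c > 0,$$

so no choice of $c$ normalizes it to one, and the “distribution” is not a probability distribution. Flatness is coherent only on a bounded interval, where $c = 1/(b-a)$ on $[a,b]$ integrates to one.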

All right. Here’s where we are. No physicist has any idea why the constants which are measured (or rather estimated) take the values they do. The values must have a reason: Butterfield and Hossenfelder agree. That is, they must have a cause.

Now if the multiverse exists (and here we recall our previous arguments against fine-tuning), our universe, even though it is one of an uncountable infinity, must have a reason why it has these values for its constants. You cannot say “Well, all values are represented somewhere in the multiverse. We have these.” That’s only a restatement of the multiverse premises. We have to say why *this* universe was *caused* to have these values, and, it follows, why others were caused to have other values.

Well, so much is not new. Here is what is (finally!).

You’ll grant that math is used to do the figurings of all this work. Math itself relies on its own constants, assumptions, and so on. Like the values of π and *e*. Something caused these constants to take the values they do, too (a longer argument about this is here). They cannot exist for no reason, and the reason cannot be “chance”, which is without power.

There is no hint, as far as I can discover, that multiverse theorists believe the values of these mathematical and logical constants differ, as do physical constants. That physical constants differ is only an assumption anyway. So why not assume math, logic, and truth differ? But if they do, then there is no saying *what* could happen in any multiverse. You can’t use the same math as in our universe to diagnose an imagined selection from the multiverse. You don’t know what math to use.

Everything is put into question. And we’re further from the idea of cause. That we run from it ought to tell us something important. The problem of cause is not solved by the multiverse. There has to be a reason each universe has its own parameter values, and there has to be a reason it has the values of mathematical constants. This might be the same reason; or again, it might not be. The cause has to be there, however. It cannot be absent.

It is of interest that we initially thought the physics might be variable but the math not. Math is deeper down, in an epistemological sense; so deep that we have forgotten to ask about cause. At any rate, because it seems preposterous to assume math changes from universe to universe, because math seems best understood as fixed and universal (to make a pun), there is reason to question whether physics changes, too.

Special and general relativity are direct consequences of the Pythagorean theorem, pi, and their joining in trigonometry. The laws of the universe only work if space is inherently flat, that is to say, Euclidean.

Most physicists have given up exploring cause, and now seek to build fairy castles of conjecture on the sands of probability.

In cosmology (and quantum mechanics) we are accustomed to our theories revealing a bigger reality. We have gone from existing on a stationary sphere about which the sun and planets orbit, to a tiny fragment on the edge of a spiral galaxy, in an infinite universe that exploded into existence 13.8 billion years ago.

Our best candidate theories about how big bangs occur, i.e. the underlying physical process, predict observable effects and unobservable consequences, and explain why these consequences are unobservable. Again, we are accustomed to such things already. We know that our universe extends beyond our event horizon, and why we will never interact with that region.

Why the constants of nature take their particular values is an open question, and the explanation for one or more may be anthropic. To complain that this messes up your interpretation of probability, so it cannot be so, doesn’t seem a particularly strong argument.

Theoretical physics is great—no one has to prove a thing and thousands of science fiction ideas (some sold as reality….) come from it. Of course, calling it philosophy makes these physicists angry, but that’s what it is. Philosophy has no truth. It’s all just mental gymnastics.

The multiverse has a use—thousands of science fiction writings based on it. Beyond that, useless. Catch 22, can never confirm, etc.

Doesn’t the Catholic religion teach that God had to supernaturally intervene in nature to produce us? If that’s the case, I don’t see how you can also use the fine-tuning argument with a straight face.

@ Swordfish-

“…I don’t see how…”

You can add this case to a long list of things you can’t (or won’t) see.

@Sheri

“The multiverse has a use”.

Yes indeed it has, well at least the Level III multiverse has. It is estimated that one-third of the US economy depends on quantum mechanics (which gives us Level III), and when quantum computers become available, we will be using it directly.

RE: “appalling … uncountable infinity of universes …”

That, and pretty much all that follows, illustrates how humans make god in our human image: we can pontificate at length about the omniscience & omnipotence of an Almighty deity, then anthropomorphize god as if such an all-powerful infinite deity sees things from a meager Earthly perspective.

If we truly believe god’s power is infinite, that s/he or it transcends time and so on & so forth, then for god to create one or any number up to an infinity of universes is really not that impressive, and certainly within s/he or its awesomeness & power. From our puny perspective that might seem profound & baffling, but given the awesome power we ascribe to god, a multiverse seems like just the sort of artistic development we ought to expect.

Isn’t it rather presumptuous for any human to critique what god might do?

That this happens is prima facie evidence that god made us for his amusement—watching how some mortals ponder, in all seriousness, a divinity beyond their capacity to comprehend. To god this is probably like our amusement at some cartoons, such as the Far Side toon about dogs trying to solve the doorknob mystery, or watching a dog chase a spot from a flashlight.

Tom, can you name any technology we would not have if scientists had not postulated the multiverse? Can you name any that cannot possibly work if the multiverse does not exist?

If not, then your claim is obviously and trivially false.