The Biggest Mistake In Science Applied To Universe As Simulation

If you’re an academic, you might think the biggest mistake in science is failing to win the grant, and thus losing the power to calm the Savage Powers that demand Overhead. However personal that calamity is, it is not the biggest philosophical error in science.

Which is mistaking the ontic for the epistemic. Confusing what is for what we know, or can know, of what is. In its worst form it becomes the Deadly Sin of Reification, in which the scientist comes to believe his model of Reality is Reality.

These thoughts help explain the curious debate about whether the world (or, as they like to say, the universe) is a “simulation”, in which the Biggest Mistake is rife.

The idea started formally a couple of decades ago when Bostrom wondered if he could peer back at himself from inside his Apple II. (Or whatever cool computer academics were using then.) Here’s the Abstract of his infamous paper:

This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.

And this was before weed was legal.

By “posthuman” he meant something like computer-beings. Which means he embraced, with obvious gusto, the machine metaphor of life and thought. He also implicitly assumes the myth of endless progress, a magical kind of belief which assumes that, one day, man (or aliens) will be able to control all creation.

These kinds of theories are like those that propose to solve biogenesis by insisting life first came from “outer space”. Well, okay, maybe. But then where did that life come from? Alien-origin of life merely pushes the problem back one step, and does not solve it.

Saying we are in a simulation created by clever aliens does not solve consciousness or rationality or life, and merely begs the question whether the aliens who simulated us are themselves simulations. And if not, why not. When do the simulations stop?

Minds cannot be simulated on computers, for the reasons we discussed before. Not if by “simulated” you mean with Bostrom “made into real versions but made of other materials”. If you instead mean modeled, i.e. reduced to simpler objects in order to gain understanding of the real thing, we can do that; but then, again, we cannot be simulations. We can only model minds.

And then you run into the problem Searle first introduced of defining what a computer is (see link above). What are its boundaries? We can define the world as a computer, because why not? After all, computable things happen in the world. But this is just silly, because if everything is a computer, then nothing is. We have merely relabeled the problem.

Enter the paper “Consequences of Undecidability in Physics on the Theory of Everything” by Mir Faizal and others (including Larry Krauss). This made a minor splash a week or two back, and is contra simulation. From the opening (with my emphasis):

General relativity treats spacetime as dynamical and exhibits its breakdown at singularities. This failure is interpreted as evidence that quantum gravity is not a theory formulated within spacetime; instead, it must explain the very emergence of spacetime from deeper quantum degrees of freedom, thereby resolving singularities. Quantum gravity is therefore envisaged as an axiomatic structure, and algorithmic calculations acting on these axioms are expected to generate spacetime. However, Gödel’s incompleteness theorems, Tarski’s undefinability theorem, and Chaitin’s information-theoretic incompleteness establish intrinsic limits on any such algorithmic programme. Together, these results imply that a wholly algorithmic “Theory of Everything” is impossible: certain facets of reality will remain computationally undecidable and can be accessed only through non-algorithmic understanding. 

Another simpler way to say those first two sentences is “Our model of gravity doesn’t seem to match our model of quantum mechanics, but we sure hope gravity turns out to be quantum, i.e. discrete.” After the many years of ardent search, one doubts.

Gödel, Tarski, and Chaitin all speak of the epistemic (see the Class). What we can know, and what we can believe. Not of what is. Of what we can know about what is. Gödel proved, in essence, that there are some true propositions we cannot prove but that we can accept. And though he did not say it this way, we accept by faith, by calling upon our powers of intellection using forms of induction. As I’ve pointed out many times, all mathematicians do this in forming, and founding subjects upon, axioms. Which can only be believed but not proved empirically or by the usual rules of “facts and logic.” Faith is at the bottom of everything.
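Gödel’s limit has a computational cousin in Turing’s halting problem, which the paper’s “algorithmic” undecidability leans on. The sketch below is my own toy illustration, not anything from the paper: `pretend_halts` is a hypothetical stand-in for a halting decider, and the diagonal trick shows that any such total decider, however clever, can be defeated by a program built to do the opposite of its prediction.

```python
def pretend_halts(src: str) -> bool:
    """A toy 'halting decider' (purely hypothetical): it predicts a
    program halts iff its source lacks the literal text 'while True'."""
    return "while True" not in src

# The diagonal program: consult the decider about itself,
# then do the opposite of whatever it predicts.
diag_src = (
    "if pretend_halts(diag_src):\n"
    "    while True: pass  # decider said 'halts', so loop forever\n"
)

# The decider sees 'while True' inside diag_src and predicts 'never halts'...
prediction = pretend_halts(diag_src)
print(prediction)  # False, i.e. 'does not halt'

# ...but running diag_src skips the loop and halts at once, refuting
# the prediction. The same trap defeats ANY total decider, toy or not.
exec(diag_src, {"pretend_halts": pretend_halts, "diag_src": diag_src})
print("diag_src halted")
```

The point is epistemic, as above: no algorithm can answer every question about algorithms, which says nothing about what is, only about what can be computed.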

But these thoughts, and limitations on thoughts, don’t make worlds.

Our authors insist, “Because any putative simulation of the universe would itself be algorithmic, this framework also implies that the universe cannot be a simulation.”

Though I agree the universe is not a simulation, on other grounds, their conclusion does not follow. It’s true that not all things can be proved using standard tools, which is the implication of Gödel et alia. But that true conclusion does not mean things cannot still be believed, taken on faith, or just plain assumed. There are no unicorns, but you can create a model, an algorithm, of a unicorn (and if you claim unicorns do exist, then put in any imaginary beast you like). All it requires is a definition, which is something you provide from your imagination. In your world you can invent as many “fundamental” particles as you like.

A better argument against simulation, defined as our world and us being the product of an algorithm (besides that it is goofy and in the end explains nothing, because of that regress), is that it requires infinite resources, or if not infinite, then resources at least the size of the world itself.

For several reasons. One being the possibility that space itself is absolutely continuous (see this). If that’s so, then an infinite power is required to keep the world in existence. You can write an equation that expresses absolute continuity trivially, but you cannot simulate it except discretely. You might envision creating an analog version of space for your world out of spare parts, but that would require something the size of the world in which to realize it. Even an alien Apple won’t do it.
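The claim that continuity can only be mimicked discretely is easy to demonstrate. The sketch below (plain Python, my illustration, not the authors’) approximates the area under x² on [0, 1], whose exact continuous value is 1/3, using finite midpoint sums. The error shrinks as the grid refines but is never zero at any finite resolution.

```python
def midpoint_sum(f, n):
    """Approximate the integral of f over [0, 1] with n midpoint slices."""
    h = 1.0 / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

exact = 1.0 / 3.0  # the true, continuous answer for f(x) = x**2
for n in (10, 100, 1000):
    err = abs(midpoint_sum(lambda x: x * x, n) - exact)
    print(n, err)  # error falls roughly as 1/n**2 but never hits zero
```

Each tenfold refinement buys roughly a hundredfold error reduction, at tenfold cost; driving the error all the way to zero would take infinitely many points, which is the resource problem in miniature.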

There are many who say quantum mechanics is only discrete (i.e. Planck scale etc.), and thus so is the world at base and not continuous. I have been tempted to that position myself, but it risks the Deadly Sin of Reification. And suppose it is so: then we should be able to solve the so-called measurement problem. It would be resolved by an algorithm, which physicists say is impossible.

Some proposed solutions of quantum mechanics involve extra-material parts of Reality (see Wolfgang Smith), and of course these cannot be an algorithm.

Another reason is the immateriality of the intellect and will. That which is not material cannot be an algorithm. Or cannot be implemented as one. See this proof (not simple). Our ability (rarely exercised, it’s true) to think rationally is itself proof against any finite algorithm.

The Science Daily article linked above summarizes their paper:

The team’s findings rest on the evolving understanding of what reality truly is. Physics has moved far beyond Isaac Newton’s view of solid objects moving through space. Einstein’s theory of relativity replaced that classical model, and quantum mechanics transformed it yet again. Now, at the forefront of theoretical physics, quantum gravity proposes that even space and time are not fundamental elements. Instead, they arise from something deeper — pure information.

This is idealism. The authors aren’t sympathetic to this “news”, though, because in the paper they conclude:

The arguments presented here suggest that neither ‘its’ nor ‘bits’ may be sufficient to describe reality. Rather, a deeper description, expressed not in terms of information but in terms of non-algorithmic understanding, is required for a complete and consistent theory of everything.

I agree with that, but again we mustn’t confuse our understanding of how the world is constructed with how the world is constructed. That we cannot understand how the world is built does not by itself mean that the world cannot be built by Someone who can understand. And again, you can create a world by making up the details. As long as there are no contradictions, you can even simulate it in Bostrom’s sense.

But if it’s going to be a world like ours, with all these infinities running around, you are not going to be able to build a machine to simulate it on.

Discover more from William M. Briggs
