Bob Kurland is a retired, cranky, old physicist, and convert to Catholicism. He shows that there is no contradiction between what science tells us about the world and our Catholic faith.
Perhaps the best argument in favour of the thesis that the Big Bang supports theism is the obvious unease with which it is greeted by some atheistic physicists. At times this has led to scientific ideas, such as continuous creation or an oscillating universe, being advanced with a tenacity which so exceeds their intrinsic worth that one can only suspect the operation of psychological forces lying very much deeper than the usual desire of a theorist to support his/her theory. (Emphasis added). Chris Isham1
We concluded the second post in this series with the observation that General Relativity must break down at some point close to the extrapolated t=0, near the Big Bang, and that, perforce, quantum mechanical models had to be used for a theory of creation. As Ellis, Isham and Grib point out, there are fundamental problems in doing so.
A major one is the so-called measurement problem, which is at the heart of difficulties in the interpretation of quantum mechanics. The quantum mechanical state function can be represented as a superposition of several possible states that could be measured; when the measurement is made and a particular state results, the superposition “collapses” into the state that is measured (cf. the Schrödinger’s cat paradox).
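For readers who want the collapse rule in concrete terms, here is a minimal toy sketch (plain Python; the amplitudes are made-up values chosen only for illustration) of a two-state superposition and the Born-rule probabilities governing which state a measurement yields:

```python
import math
import random

# Toy two-state superposition |psi> = a|0> + b|1>, with illustrative
# (made-up) amplitudes chosen so that a^2 + b^2 = 1.
a = 1 / math.sqrt(3)
b = math.sqrt(2 / 3)
assert abs(a * a + b * b - 1.0) < 1e-12  # normalization check

# Born rule: the probability of each outcome is the squared
# magnitude of its amplitude.
p0, p1 = a * a, b * b  # 1/3 and 2/3

def measure():
    """One measurement: the superposition 'collapses' to a single
    definite outcome, 0 or 1, with Born-rule probability."""
    return 0 if random.random() < p0 else 1
```

The “measurement problem” is precisely that nothing in the formalism says when, or by what physical process, `measure()` happens.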
An associated difficulty is the probability interpretation for measurement: the universe state function (wave function) gives probabilities that particular values of dynamical variables will be measured. What does probability mean in this context? Are there an infinite number of possible universes (corresponding to the various possible measurements), and who does the measurement? To quote Christopher Isham (referring to the measurement problem):
This poses the obvious problems of (i) when is an interaction between two systems to count as a measurement by one system of a property of the other? and (ii) what happens if there is an attempt to restore a degree of unity by describing the measurement process in quantum mechanical terms rather than the language of classical physics which is normally used? There is no universally accepted answer to either of these questions. (Emphasis added). Chris Isham2
That being said, the following quantum mechanical models have been proposed for the origin of the universe (the list is not exhaustive, and only general comments on each will be given; for more information please see the cited articles):
- Quantum fluctuations in the vacuum (Tryon, 1979)
- Tunneling from “superspace” into “real” space-time (Vilenkin, 1983)
- The Hartle-Hawking Block Universe: replacement of t by ti, where i = √(−1) (Hartle and Hawking, 1981)
- Chaotic Inflation (Linde, 1986)
- The Participatory Universe (Wheeler, 1990)
- Creation from non-Boolean to Boolean logic by an “observer” (Grib, 1990)
Note that in none of these (except possibly 3 or 5) was the creation “ex nihilo”: for 1, the vacuum pre-existed; for 2, the “superspace” (a hypothetical multi-dimensional space); for 4, previous universes from which a “bubble” universe emerged via inflation; for 6, a hypothetical space of quantum universe states.
Model 1, quantum fluctuations in the vacuum, is deficient in the following respect: there is nothing in the model to specify a unique time at which the fluctuations enabling creation should occur. Accordingly there might be creation of many universes, interacting with each other, but no such thing has been observed. And to emphasize again, a vacuum is not “nothing”: there is space, virtual particles, annihilation and creation operators, and occupied zero-point energy levels from which the fluctuations occur.
For 3, the Hartle-Hawking model, the replacement of t by ti gives a term t^2 instead of −t^2 in the quantum mechanical equation, which enables the equation to be solved without a singularity. The variable becomes space-like, rather than time-like, at very early values, and the space-like ti gradually becomes a time-like variable (goes back to t) as the value of t increases. An exact value for the time of origin thus becomes undefined (where does the Earth begin? At the South Pole?).
The diagram illustrates this (the vertical axis is increasing “t”). Note that there is no experimental justification for the replacement of t by ti; the justification is “aesthetic”, that is, the substitution removes the singularity at t=0. It is said that the coordinate ti “gradually changes” from space-like to time-like t. How is this gradual change effected? Is the universe a fraction f with ti and a fraction 1−f with t? I have never seen this explained.
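In equation form (a standard textbook statement of the substitution, not taken from Hartle and Hawking’s paper itself), replacing t by ti is a “Wick rotation”: writing τ = it, the minus sign in front of the time term of the line element flips, so the Lorentzian (time-like) geometry becomes a purely spatial, Euclidean one, and t=0 loses its special status:

```latex
ds^2 = -\,dt^2 + a(t)^2\, d\Sigma^2
\;\xrightarrow{\;\tau \,=\, i t\;}\;
ds^2 = +\,d\tau^2 + a(\tau)^2\, d\Sigma^2
```

This is the sense in which the text above speaks of a term t^2 appearing instead of −t^2.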
In order to understand the significance of models 5 (the Participatory Universe of John Wheeler) and 6 (the quantum logic model of Andrej Grib), a comment on an interpretation of quantum mechanics that links quantum mechanics to consciousness will be helpful. (A general discussion of the various interpretations of quantum mechanics is beyond the scope of this summary; see also references in my previous posts “Do quantum entities have free will…” and “Quantum Divine Action via God, the Berkeleyan Observer…“)
The Participatory Universe and Quantum Logic models stem from the interpretation, first set forth by Von Neumann, London and Wigner, that since measurement is done by an observer, the final step in the measurement process must be awareness of the measurement result by the consciousness of the observer, which therefore must be an intrinsic part of quantum mechanics.
Wheeler construes the basic relation to consciousness to imply a universe that is information (“It from Bit”), and that by looking back in time we create the past universe, as symbolized in this famous icon (click here).
Grib’s quantum logic model invokes a reality of non-Boolean logic that we (as observers) convert to Boolean logic situations, the only type of logic our minds can comprehend. Grib speculates that perhaps it was God who made the initial observation to create a “real” universe (one perceived according to Boolean logic). According to Grib, time is a framework (lattice) for arraying the non-Boolean events so that they can be scanned as Boolean, and quantum mechanics is the theory for converting the non-Boolean system to Boolean.
Although there are some recent preliminary results from B-mode measurements of the cosmic background radiation that support the existence of inflation (not necessarily chaotic inflation), it should be clear that none of these models can be confirmed or refuted by measurement. Thus they are outside the realm of science, and properly belong to the domain of mathematical metaphysics (my take). As in the Hartle-Hawking model, assumptions are made to remove the singularity at t=0, R=0. Such models without a singularity are, to many physicists, more aesthetically pleasing than those with one, because the absence of a singularity implies (to them) the absence of a Creator.
We’ll explore some implications of these models for theology in the next post in this series, Creatio ex nihilo: Theology vs. Physics.
1Chris Isham, “Creation of the Universe as Quantum Process” in Physics, Philosophy and Theology–A Common Quest for Understanding.
2Chris Isham, “Quantum Theories of the Creation of the Universe”; Andrej Grib, “Quantum Cosmology, the Role of the Observer, Quantum Logic” in Quantum Cosmology and the Laws of Nature–Scientific Perspectives on Divine Action (click on the book icon, and then on the article listed on the right).
“This poses the obvious problems of (i) when is an interaction between two systems to count as a measurement by one system of a property of the other? …”
That’s only a problem for those interpretations with a wavefunction collapse. The problem is that the collapse process is ad hoc, vaguely specified, and philosophically incoherent in conjunction with SR – but those are all general problems with collapse models, not specific to the question of the beginning of the universe.
“… and (ii) what happens if there is an attempt to restore a degree of unity by describing the measurement process in quantum mechanical terms rather than the language of classical physics which is normally used?”
You get Everett-Wheeler.
I’m afraid that the first Isham quote is a matter of wishful thinking. The theist notion is that God created this universe. But there is no logical necessity that the being that created this universe is God, if this universe is indeed created. God could have created a universe with beings capable of creating a universe on their own, and that universe could be our universe. Or a universe with beings capable of creating our universe.
The sole reason most people do not go that way is that there is no difference between having 1 intermediate universe, or 41.
I am not sure why the measurement paradox is such a problem. It is a problem only for the interpretations; they all use the same math.
The hypotheses, aren’t they just an attempt to start thinking about quantum gravity? The theoretical physicist’s version of software prototyping: just bouncing around a few ideas to see if something sticks.
@NIV: I agree, the relative state theory (Wheeler/Everett) does solve the “measurement problem”, but it runs into problems of its own. I’ve always wondered about the following with relative state theory. Perhaps you can enlighten me. Let’s consider a free particle (not altogether realistic)–no localization… if you measure the particle’s position with a definite value, are there created an infinite number of universes in which the particle has other values of position? That seems to me to violate Ockham’s razor. And what about the arbitrariness of choosing a basis state… if we measure the particle as having a definite position (localized), do we create an infinite number of universes in which there is a spread of momentum values?
There have been other solutions to the “measurement problem”… Bas van Fraassen has one (as I recall) in which there are two kinds of state functions, one evolving deterministically according to the Schrödinger equation, and one which is measured. Nancy Cartwright dispenses with the unitary condition to remove the measurement problem.
So, I think Isham’s judgment, “There is no generally accepted solution (to the measurement problem)” stands.
Personally, I’m partial to the consciousness thesis.
But of course, which interpretation is chosen depends on one’s point of view. So I think the statement “There is no generally accepted solution” holds.
I think you miss the point of the quote. It’s not whether God exists or not, but whether physicists put forth theories (not entirely sustainable scientifically) because they want to eliminate the possibility of creation at an instant in time, which would lend itself (though not necessarily) to theism. With respect to this last point there will be a post “Creatio ex nihilo: Theology versus (?) Physics.”
A further point: Fred Hoyle, who originated the term “Big Bang” as a derisive comment and developed (along with Gold and others) a steady-state theory of continuous creation, later came to believe in a Creative Intelligence after he had found the remarkable anthropic coincidence of an excited carbon-12 nuclear energy level, which enabled sufficient formation of carbon-12 nuclei for carbon-based life to exist. Here’s his quote:
“A common sense interpretation of the facts suggests that a super-intellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature. The numbers one calculates from the facts seem to me so overwhelming as to put this conclusion almost beyond question.”
“Let’s consider a free particle (not altogether realistic)–no localization… if you measure the particle’s position with a definite value are there created an infinite number of universes in which the particle has other values of position?”
No. No universes are created. That was just an analogy used by DeWitt to explain the idea to the layman.
You start off with the observer in a pure state of ignorance, and the particle given a wavefunction which can be considered a superposition of the particle at all points in space. So by merely accepting the wavefunction as a concept (which almost all physicists do), you have already implicitly accepted the infinity of possible states.
Now the observer interacts with the particle. It’s a fairly well-known bit of classical physics that if you allow two independent oscillators to briefly interact, the joint system will converge on a superposition of what are known as ‘normal modes’ of vibration. The differential equations describing their joint motion can be described using a matrix, the interaction introduces non-zero off-diagonal terms, and the resulting system is solved by correlated oscillations related to the eigenvectors of the matrix representing the interaction. When the interaction stops, if the correlated oscillations are both simultaneously also solutions of the independent equations, then the state of correlation persists.
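The coupled-oscillator picture can be made concrete with a small sketch (plain Python; the values w0 = 1 and k = 0.1 are arbitrary illustrative choices). For two identical oscillators joined by a weak spring, the interaction puts non-zero off-diagonal terms into the matrix, and the normal modes are its eigenvectors:

```python
import math

# Two identical oscillators (natural frequency w0) coupled by a weak
# spring k.  Equations of motion in matrix form:  x'' = -M x, with
#   M = [[w0^2 + k, -k],
#        [-k,       w0^2 + k]]
# The off-diagonal -k terms come from the interaction.
w0, k = 1.0, 0.1
M = [[w0**2 + k, -k],
     [-k,        w0**2 + k]]

# For a symmetric 2x2 matrix [[p, q], [q, p]] the eigenvalues are
# p + q and p - q, so the squared normal-mode frequencies are:
lam_sym  = M[0][0] + M[0][1]   # in-phase mode:     w0^2
lam_anti = M[0][0] - M[0][1]   # out-of-phase mode: w0^2 + 2k

# The corresponding eigenvectors are the correlated motions:
mode_sym  = (1 / math.sqrt(2),  1 / math.sqrt(2))   # x1 + x2 oscillates
mode_anti = (1 / math.sqrt(2), -1 / math.sqrt(2))   # x1 - x2 oscillates
```

Any joint motion is a superposition of these two correlated modes, which is the classical analogue of the quantum case described next.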
The same should happen with quantum oscillators. When two quantum systems interact, they enter into a joint state related to the eigenvectors of the interaction that correlates their states. If the correlated state also happens to be a solution of the non-interacting wave equation, then the correlation persists, the quantity in question is an ‘observable’, and an observation has been made.
So no universes are created. All we are doing is extending the spread in the wavefunction of the particle to a corresponding spread in the wavefunction of the observer, and the latter is no more a violation of Ockham’s razor than the former.
The reason we’re not aware of it is that QM is linear and the eigenvectors are orthogonal, which means they don’t interact. It’s like the way the ripples on a pond pass through one another without affecting one another. So the fact that the different eigenvectors of the observer superposition don’t interact is really no more controversial than the idea that the different parts of an electron’s wavefunction don’t electrostatically repel one another. If the electron flying through one slit repelled the same electron passing through the other, there would be a noticeable distortion in the interference pattern. There isn’t, because the electron passing through one slit cannot ‘see’ itself passing through the other. It is as if each was in its own separate universe, with all the others invisible.
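The linearity point can be checked in a toy calculation (a hypothetical 2x2 rotation standing in for unitary Schrödinger evolution): evolving a superposition gives exactly the superposition of the separately evolved branches, so the branches never feed into one another’s dynamics:

```python
import math

# A toy 2x2 unitary (rotation by an arbitrary angle), standing in
# for Schrodinger time evolution of a two-state system.
theta = 0.7
U = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

def evolve(state):
    """Apply U to a state given as amplitudes [c0, c1]."""
    return [U[0][0] * state[0] + U[0][1] * state[1],
            U[1][0] * state[0] + U[1][1] * state[1]]

# Two orthogonal 'branches' and a superposition of them.
psi, phi = [1.0, 0.0], [0.0, 1.0]
a, b = 0.6, 0.8  # arbitrary amplitudes with a^2 + b^2 = 1
superposed = [a * psi[0] + b * phi[0], a * psi[1] + b * phi[1]]

# Linearity: evolving the superposition equals superposing the
# separately evolved branches -- they never influence each other.
lhs = evolve(superposed)
rhs = [a * evolve(psi)[0] + b * evolve(phi)[0],
       a * evolve(psi)[1] + b * evolve(phi)[1]]
assert all(abs(l - r) < 1e-12 for l, r in zip(lhs, rhs))
```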
Another advantage to the theory is that it allows partial measurements. Take your example of measuring the position of a particle. If you actually did so, and the particle collapsed to a pure position state, its momentum would be randomly spread across the entire range, and it would shoot off at close to the speed of light. So actually you never do measure the position of the particle – you can narrow it down to a fairly tight peak, but even *after* the measurement there will still be some uncertainty/spread. It’s not clear how you can have a wavefunction collapse most of the way but stop short, but the interacting oscillators will only converge on the eigenvector states at a finite rate, and a finite interaction time will result in an approximate measurement.
The entire justification for the theory is Ockham’s razor. It takes the microscopic quantum theory that everyone already accepts for particles, and extends it to macroscopic observers, working out the consequences. Contrary to intuitive expectations, quantum theory predicts that observers will experience an approximately classical world. There’s no need for wavefunction collapse, it has no observable consequences, and so Everett-Wheeler rejects it as an ‘unnecessary entity’.
But since wavefunction collapse likewise has no observable consequences, you can’t actually prove it doesn’t happen. It’s metaphysics, and a matter more of aesthetics than scientific fact. Since wavefunction collapse is certainly much easier to work with practically, and is intuitively much easier to get one’s head around, and since all these are only mental models invented for the purposes of prediction/calculation, I don’t see that it matters much which you pick. A mental model is not the thing itself, and not uniquely defined. The map is not the territory.
Some physicist will have that notion, I’m sure, but theists aren’t the only people disagreeing with physicists.
Regarding the Hoyle comment, in my opinion everybody who believes that the universe is created for people to live in doesn’t have their facts straight. The universe is as good as uninhabitable for people.
Humans can only live as they are born on the African savanna. And if there are places they can walk to that look enough like the African savanna, they can live there too. Everywhere else, they need lots of technology. Ridiculous amounts of technology by the time you get out of the lower 5 kilometers of the atmosphere.
If there’s any place that is not fit for humans, it is this bloody universe.
But for some reason this seems to be the minority standpoint.
Thanks NIV for the extended reply. I’ll have to chew on what you said and digest it.
“I’m afraid that the first Isham quote is a matter of wishful thinking. The theist notion is that God created this universe.”
Isham was not asserting that in the quote. He was asserting that the gallimaufry of unsupported alternate theories was itself a matter of wishful thinking: the atheist notion that no-God not-created “this” universe.
Of course, Lemaître disposed of that a long time ago. The start-up of a space-time continuum is simply not the same kind of thing as the creation of the universe. Said creation does not happen simply at one point a long time ago, but at every moment in which stuff is maintained in existence. Even if the universe is eternal, as Aquinas assumed sec. arg., it must still be created.
“The entire justification for the theory is Ockham’s razor.”
One wishes folks would stop taking Ockham’s name in vain. Moderns don’t seem to “get” what he was saying. His was an argument against excessive complexity in models, not in the universe. See Copleston, A History of Philosophy.
I’ve decided to start picturing Bob as a Teutonic Knight!
I don’t like the way Ockham is thrown around these days either, YOS.
The Big Bang theory, and a couple others, in a single known-universe setting or even beyond, can fit well into theistic speculation of cosmogeny, but offers no clue as to whether a deity would be involved.
The Big Bang theory, and a couple others, in a single known-universe setting or even beyond, can fit well into theistic speculation of cosmogeny, but offers no clue as to whether a deity would be involved.
Of course not. God is not supposed as some sort of scientific hypothesis hauled up to explain how matter was changed from one form to another. Secondary causation is quite adequate and, if one’s interests are thereby satisfied, quite sufficient.
I get that, and that’s fine, but you have to have some reason to think that God would be necessary to something. Where is God necessary in any of this? God is not a satisfying answer. It’s really not an answer at all. And the God of Abraham is silly. Just plain silly on the face of it.
Where is God necessary in any of this?
There wouldn’t be any “this” in the first place. There wouldn’t be any “natures” whose regularities we could study. But all that is yet to come. Take it one step at a time.
Yes, the idea that scientific theories can only deal with secondary causes. That theory is based on the proposition that God created the universe (http://en.m.wikipedia.org/wiki/Secondary_causation).
If one subscribes to the notion that as long as a theory is falsifiable, it is scientific, whether that theory is about primary or secondary causes is not important.
JMJ–I think you’re crediting my ancestral roots too literally. I have an idea that when my grandparents/great-grandparents immigrated, the Ellis Island officer asked them what their name was and they thought he was asking where they were from (Kurland is a district–swampy–in southern Lithuania). There was a Bob Kurland, an All-American basketball player and member of the Basketball Hall of Fame, who may have been descended from the Teutonic Knights, but I don’t think there are any family ties.
NIV, thank you again for your detailed comment on why you believe in the Everett relative state theory. I understood your explanation (it bears some relation to the Gell-Mann / ??? Many Histories interpretation), but I wanted to check out other accounts and go back and look at my own books, Albert’s “Quantum Mechanics and Experience” and Lockwood’s “Mind, Brain and the Quantum”. There’s also a review by Jeffrey Barrett in the Stanford Encyclopedia of Philosophy.
I won’t attempt to summarize this other than to say that among the problems it explores with Everett’s formulation are two I think are important: 1) how do you choose the preferred basis system for the state function of the universe? 2) some of the interpretations developed from Everett’s formulation (e.g. DeWitt’s many worlds) are ontologically extravagant. With respect to the latter, to say that we can conceive of an infinity (or a very large number) of basis states is not equivalent to saying that all those states exist.
My own preference, and it has a religious bias, is for the many minds interpretation.
You’re welcome. One of the better ‘popular’ books that goes into the Everett-Wheeler interpretation in detail is David Deutsch’s ‘The Fabric of Reality’. Some of what I mentioned is based on that, but there’s a lot more in there on his work in quantum computation which is also very interesting. There’s also Everett’s dissertation, which is on the web, and worth reading (or at least, skimming – if only to see how conventional it all is).
“1) how do you choose the preferred basis system for the state function of the universe;”
It depends what you mean. Preferred by who, for what purpose?
You get ‘preferred’ basis states as the result of ‘measurements’ because of the nature of the physical interactions defining them. The preferred basis comes from the eigenstates of the joint interaction matrix (or the infinite-dimensional equivalent).
(Note that the usual formulation of matrix mechanics only considers the part of the matrix applied to the observed system, which often has degeneracies that make the answer ambiguous and has founded some objections, but really you need to include the effect on (and of) the observer as well.)
Why do certain sorts of interactions occur very often and others don’t? That’s a deeper question that’s not so easy to answer. I could have a go, but it doesn’t really have anything to do with the acceptability of Everett-Wheeler. The same question applies to all flavors of QM equally.
But generally, there is no preferred basis. The universe’s state is what it is. Any basis used to describe it is equally valid. It’s sort of like asking why the numbers used to describe the universe are all in base 10. The answer is that it is purely for the convenience of observers who are limited in the way they are constructed, to be able to easily see the world only in a certain subset of ways.
“2) some of the interpretations developed from Everett’s formulation – e.g. DeWitt’s many worlds – are ontologically extravagant.”
I agree. Ever since first understanding what the theory was really about, I have found the term ‘Many Worlds’ annoying out of all proportion; and likewise with ‘Parallel Universes’. They’ve been brilliant for popularizing the idea with the public and science fiction writers, but the misunderstandings they cause have driven physicists away in droves.
It’s just a straight subset of QM: it takes everything apart from wavefunction collapse. It’s simply about what an observer, regarded as a macroscopic quantum system, would actually experience according to the rules that everyone already knows and accepts for the microscopic world. And because it’s a subset of standard QM, any inconsistencies or errors in it must also be in QM too.
The effort to retain wavefunction collapse has led physicists to accept all sorts of strange things like non-locality, anti-realism, and indeterminism that in other circumstances would have been considered good reason to reject a hypothesis. It’s also non-linear, irreversible, and asymmetric, which while not impossible are widely regarded as undesirable, and have led to all sorts of paradoxes and peculiarities. On aesthetic and philosophical grounds, I don’t see that there’s any competition. But it’s definitely more mind-boggling to work with, and since collapse always gives the same answers, and makes things simpler, I’d accept Copenhagen in the same sort of spirit I accept Newtonian gravity, and no criticism to anyone who uses it. It works, even if it does have a few philosophical problems.