Quantum Mechanics
Quantum mechanics has the distinction of being considered both the most empirically successful and the most poorly understood theory in the history of physics.
To take an oft-cited example of the first point: The theoretically calculated value of the anomalous magnetic moment of the electron using quantum electrodynamics matches the observed value to twelve decimal places, arguably the best-confirmed empirical prediction ever made. To illustrate the second point, we have the equally oft-cited remarks of Niels Bohr, "Anyone who says that they can contemplate quantum mechanics without becoming dizzy has not understood the concept in the least," and of Richard Feynman, "[We] have always had (secret, secret, close the doors!) we always have had a great deal of difficulty in understanding the world view that quantum mechanics represents." How could both of these circumstances obtain?
For the purposes of making predictions, quantum theory consists in a mathematical apparatus and has clear enough rules of thumb about how to apply the mathematical apparatus in various experimental situations. If one is doing an experiment or observing something, one must first associate a mathematical quantum state or wave function with the system under observation. For example, if one prepares in the laboratory an electron beam with a fixed momentum, then the quantum state of each electron in the beam will be something like a sine wave. In the case of a single particle it is common to visualize this wave function as one would a water wave: as an object extended in space. Although this visualization works for a single particle, it does not work in general, so care must be taken. But for the moment, this simple visualization works. The wave function for the electron is "spread out" in space.
The second part of the mathematical apparatus is a dynamical equation that specifies how the quantum state changes with time so long as no observation or measurement is made on the system. These equations have names like the Schrödinger equation (for nonrelativistic quantum mechanics) and the Dirac equation (for relativistic quantum field theory). In the case of the electron mentioned earlier the dynamical equation is relevantly similar to the dynamical equation for water waves, so we can visualize the quantum state as a little plane water wave moving in a certain direction. If the electron is shot at a screen with two slits in it, then the quantum state will behave similarly to a water wave that hits such a barrier: circularly expanding waves will emerge from each slit, and there will be constructive and destructive interference where those waves overlap. If beyond the slits there is a fluorescent screen, we can easily calculate what the quantum state "at the screen" will look like: It will have the peaks and troughs characteristic of interfering water waves.
Finally comes the interaction with the screen. Here is where things get tricky. One would naively expect that the correct way to understand what happens when the electron wave function reaches the screen is to build a physical model of the screen and apply quantum mechanics to it. But that is not what is done. Instead, the screen is treated as a measuring device and the interaction with the screen as a measurement, and new rules are brought into play.
The new rules require that one first decide what property the measuring device measures. In the case of a fixed screen it is taken that the screen measures the position of a particle. If instead of a fixed screen we had an absorber on springs, whose recoil is recorded, then the device would measure the momentum of the particle. These determinations are typically made by relying on classical judgments: There is no algorithm for determining what a generic (physically specified) object "measures," or indeed whether it measures anything at all. But laboratory apparatus for measuring position and momentum have been familiar from before the advent of quantum theory, so this poses no real practical problem.
Next, the property measured gets associated with a mathematical object called a Hermitian operator. Again, there is no algorithm for this, but for familiar classical properties like position and momentum the association is established. For each Hermitian operator there is an associated set of wave functions called the eigenstates of the operator. It is purely a matter of mathematics to determine the eigenstates. Each eigenstate has associated with it an eigenvalue: The eigenvalues are supposed to correspond to the possible outcomes of a measurement of the associated property, such as the possible values of position, momentum, or energy. (Conversely, it is typically assumed that for every Hermitian operator, there corresponds a measurable property and possible laboratory operations that would measure it, although there is no general method for specifying these.)
The last step in the recipe for making predictions can now be taken. When a system is measured, the wave function for the system is first expressed as a sum of terms, each term being an eigenstate of the relevant Hermitian operator. Any wave function can be expressed as a sum of such terms, with each term given a weight, which is a complex number. For example, if an operator has only two eigenstates, call them |1> and |2>, then any wave function can be expressed in the form α|1> + β|2>, with α and β complex numbers such that |α|^{2} + |β|^{2} = 1. (This is the case, for example, when we measure the so-called spin of an electron in a given direction, and always get one of two results: spin up or spin down.) Recall that each eigenstate is associated with a possible outcome of the measurement: |1>, for example, could be associated with getting spin up, and |2> with getting spin down. The quantum mechanical prediction is now typically a probabilistic one: The chance of getting the result associated with |1> is |α|^{2}, and the chance of getting the result associated with |2> is |β|^{2}. In general, one writes out the wave function of the system in terms of the appropriate eigenstates, and then the chance of getting the result associated with some eigenstate is just the squared modulus of the complex number that weights that state.
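The probabilistic recipe just described can be sketched in a few lines of code. This is only an illustration of the squared-modulus rule; the particular amplitudes are arbitrary choices, not values drawn from any experiment.

```python
# Illustration of the quantum probability rule: the chance of each outcome
# is the squared modulus of the complex weight on the associated eigenstate.
# The amplitudes alpha and beta are arbitrary illustrative choices.

alpha = complex(3/5, 0)    # weight on eigenstate |1>
beta  = complex(0, 4/5)    # weight on eigenstate |2>

# Normalization: |alpha|^2 + |beta|^2 must equal 1.
assert abs(abs(alpha)**2 + abs(beta)**2 - 1.0) < 1e-12

p_1 = abs(alpha)**2    # chance of the outcome associated with |1>
p_2 = abs(beta)**2     # chance of the outcome associated with |2>

print(round(p_1, 2), round(p_2, 2))    # 0.36 0.64
```

Note that only the moduli of the weights matter for the probabilities; the phase of β (here purely imaginary) makes no difference to the predicted chances for this single measurement.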
We can now see how quantum theory makes empirical predictions: So long as one knows the initial quantum state of the system and the right Hermitian operator to associate with the measurement, the theory will allow one to make probabilistic predictions for the outcome. Those predictions turn out to be exquisitely accurate.
If a Hermitian operator has only a finite number of eigenstates, or the eigenvalues of the operator are discrete, then any associated measurement should have only a discrete set of possible outcomes. We have already seen this in the case of spin: For a spin-1/2 particle such as an electron, there are only two eigenstates for the spin in a given direction. Physically, this means that when we do an experiment to measure spin (which may involve shooting a particle through an inhomogeneous magnetic field) we will get only one of two results: Either the particle will be deflected up a given amount or down a given amount (hence spin up and spin down). In this case the physical quantity is quantized; it takes only a discrete set of values. But quantum theory does not require all physical magnitudes to be quantized in this way; the position, momentum, and energy of a free particle are not. So the heart of quantum theory is not a theory of discreteness; it is rather just the mathematical apparatus and the rules of application described earlier.
The Measurement Problem
Why, then, is the quantum theory so puzzling, or so much more obscure than, say, classical mechanics? One way that it differs from classical theory is that it provides only probabilistic predictions for experiments, and one might well wonder, as Albert Einstein famously did, whether this is because "God plays dice with the universe" (i.e., the physical world itself is not deterministic) or whether the probabilities merely reflect our incomplete knowledge of the physical situation. But even apart from the probabilities, the formulation of the theory is rather peculiar. Rules are given for representing the physical state of a system and for how that physical state evolves and interacts with other systems when no measurement takes place. This evolution is perfectly deterministic. A different set of rules is applied to derive predictions for the outcomes of experiments, and these rules are not deterministic. Still, an experiment in a laboratory is just a species of physical interaction, and ought to be treatable as such. There should be a way to describe the physical situation in the lab, and the interaction of the measured system with the measuring device, that relies only on applying, say, the Schrödinger equation to the physical state of the system plus the lab.
John S. Bell put this point succinctly, "If you make axioms, rather than definitions and theorems, about the 'measurement' of anything else, then you commit redundancy and risk inconsistency" (1987, p. 166). You commit redundancy because while the axioms about measurement specify what should happen in a measurement situation, the measurement situation, considered as a simple physical interaction, ought also to be covered by the general theory of such interactions. You risk inconsistency because the redundancy produces the possibility that the measurement axioms will contradict the results of the second sort of treatment. This is indeed what happens in the standard approaches to quantum mechanics. The result is called the measurement problem.
The measurement problem arises from a conflict in the standard approach between treating a laboratory operation as a normal physical interaction and treating it as a measurement. To display this conflict, we need some way to represent the laboratory apparatus as a physical device and the interaction between the device and the system as a physical interaction. Now this might seem to be a daunting task; a piece of laboratory apparatus is typically large and complicated, comprising astronomically large numbers of atoms. By contrast, exact wave functions are hard to come by for anything much more complicated than a single hydrogen atom. How can we hope to treat the laboratory operation at a fundamental level?
Fortunately, there is a way around this problem. Although we cannot write down, in detail, the physical state of a large piece of apparatus, there are conditions that we must assume if we are to regard the apparatus as a good measuring device. There are necessary conditions for being a good measuring device, and since we do regard certain apparatus as such devices, we must be assuming that they meet these conditions.
Take the case of spin. If we choose a direction in space, call it the x-direction, then there is a Hermitian operator that gets associated with the quantity x-spin. That operator has two eigenstates, which we can represent as |x-up>_{S} and |x-down>_{S}. The subscript S indicates that these are states of the system to be measured. We have pieces of laboratory equipment that can be regarded as good devices for measuring the x-spin of a particle. We can prepare such an apparatus in a state, call it the "ready" state, in which it will function as a good measuring device. Again, we do not know the exact physical details of this ready state, but we must assume such states exist and can be prepared. What physical characteristics must such a ready state have?
Besides the ready state, the apparatus must have two distinct indicator states, one of which corresponds to getting an "up" result of the measurement and the other of which corresponds to getting a "down" result. And the key point about the physics of the apparatus is this: It must be that if the device in its ready state interacts with a particle in the state |x-up>_{S}, it will evolve into the indicator state that is associated with the up result, and if it interacts with a particle in state |x-down>_{S}, it will evolve into the other indicator state.
This can be put in a formal notation. The ready state of the apparatus can be represented by |ready>_{A}, the up indicator state by |"up">_{A}, and the down indicator state by |"down">_{A}. If we feed an x-spin up particle into the device, the initial physical state of the system plus apparatus is represented by |x-up>_{S}|ready>_{A}; if we feed in an x-spin down particle, the initial state is |x-down>_{S}|ready>_{A}. If the apparatus is, in fact, a good x-spin measuring device, then the first initial state must evolve into a state in which the apparatus indicates up, that is, it must evolve into |x-up>_{S}|"up">_{A}, and the second initial state must evolve into a state that indicates down, that is, |x-down>_{S}|"down">_{A}. Using an arrow to represent the relevant time evolution, then, we have for any good x-spin measuring device
|x-up>_{S}|ready>_{A} → |x-up>_{S}|"up">_{A} and
|x-down>_{S}|ready>_{A} → |x-down>_{S}|"down">_{A}.
We have not done any real physics yet; we have just indicated how the physics must come out if there are to be items that count as good x-spin measuring devices, as we think there are.
The important part of the physics that generates the measurement problem is the arrow in the representations listed earlier, the physical evolution that takes one from the initial state of the system plus apparatus to the final state. Quantum theory provides laws of evolution for quantum states such as the Schrödinger and Dirac equations. These would be the equations one would use to model the evolution of the system plus apparatus as a normal physical evolution. And all these dynamical equations have a common mathematical feature; they are all linear equations. It is this feature of the quantum theory that generates the measurement problem, so we should pause over the notion of linearity.
The set of wave functions used in quantum theory forms a vector space. This means that one can take a weighted sum of any set of wave functions and get another wave function. (The weights in this case are complex numbers; hence it is a complex vector space.) This property was mentioned earlier when it was noted that any wave function can be expressed as a weighted sum of the eigenstates of an observable. An operator on a vector space is just an object that maps a vector as input to another vector as output. If the operator O maps the vector A to the vector B, we can write that as
O(A) = B.
A linear operator has the feature that you get the same result whether you operate on the sum of two vectors or first operate on each vector separately and then take the sum. That is, if O is a linear operator, then for all vectors A and B,
O(A + B) = O(A) + O(B).
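The linearity condition is easy to verify concretely: any matrix acting on column vectors is a linear operator. The following sketch checks the defining equation numerically, with a matrix and vectors chosen arbitrarily for illustration.

```python
import numpy as np

# An arbitrary operator on a two-dimensional complex vector space,
# represented as a matrix; matrices act linearly on vectors.
O = np.array([[0, 1],
              [1, 0]], dtype=complex)

A = np.array([1 + 2j, 3j])       # arbitrary illustrative vectors
B = np.array([0.5, -1 + 1j])

lhs = O @ (A + B)                # operate on the sum...
rhs = O @ A + O @ B              # ...or operate on each, then sum
assert np.allclose(lhs, rhs)     # O(A + B) = O(A) + O(B)
```

The same check passes for any matrix and any pair of vectors, which is just the statement that matrix multiplication distributes over vector addition.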
The dynamical equations evidently correspond to operators; they take as input the initial physical state and give as output the final state, after a specified period has elapsed. But further, the Schrödinger and Dirac equations correspond to linear operators. Why is this important?
We have already seen how the physical state of a good x-spin measuring device must evolve when fed a particle in the state |x-up>_{S} or the state |x-down>_{S}. But these are not the only spin states that the incoming particle can occupy. There is an infinitude of spin states, which correspond to all the wave functions that can be expressed as α|x-up>_{S} + β|x-down>_{S}, with α and β complex numbers such that |α|^{2} + |β|^{2} = 1. Correspondingly, there is an infinitude of possible directions in space in which one can orient a spin measuring device, and each of the directions is associated with a different Hermitian operator. For a direction at right angles to the x-direction, call it the y-direction, there are eigenstates |y-up>_{S} and |y-down>_{S}. These states can be expressed as weighted sums of the x-spin eigenstates; in the usual notation
|y-up>_{S} = 1/√2 |x-up>_{S} + 1/√2 |x-down>_{S} and
|y-down>_{S} = 1/√2 |x-up>_{S} − 1/√2 |x-down>_{S}.
So what happens if we feed a particle in the state |y-up>_{S} into the good x-spin measuring device?
Empirically, we know what happens: About half the time the apparatus ends up indicating "up" and about half the time it ends up indicating "down." There is nothing we are able to do to control the outcome: identically prepared y-spin up particles nonetheless yield different outcomes in this experiment.
If we use the usual predictive apparatus, we also get this result. The "up" result from the apparatus is associated with the eigenstate |x-up>_{S} and the "down" result with |x-down>_{S}. The general recipe tells us to express the incoming particle in terms of these eigenstates as 1/√2 |x-up>_{S} + 1/√2 |x-down>_{S}, and then to take the squared moduli of the weighting factors to get the probabilities of the results. This yields a probabilistic prediction of a 50 percent chance of "up" and a 50 percent chance of "down," which corresponds to what we see in the lab.
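This 50/50 prediction can be reproduced with a toy calculation. Here the x-spin eigenstates are encoded as basis column vectors (an illustrative convention, not anything fixed by the text), and the weighting factors of |y-up>_{S} are recovered as inner products with those eigenstates.

```python
import numpy as np

# Encode the x-spin eigenstates as basis vectors (illustrative convention).
x_up   = np.array([1, 0], dtype=complex)
x_down = np.array([0, 1], dtype=complex)

# |y-up> = 1/sqrt(2) |x-up> + 1/sqrt(2) |x-down>, as in the text.
y_up = (x_up + x_down) / np.sqrt(2)

# The weighting factors are inner products with the eigenstates.
alpha = np.vdot(x_up, y_up)     # vdot conjugates its first argument
beta  = np.vdot(x_down, y_up)

# Squared moduli give the predicted chances: 50 percent each.
p_up, p_down = abs(alpha)**2, abs(beta)**2
assert abs(p_up - 0.5) < 1e-12 and abs(p_down - 0.5) < 1e-12
```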
But if instead of the usual predictive apparatus we use the general account of physical interactions, we get into trouble. In that case, we would represent the initial state of the system plus apparatus as |y-up>_{S}|ready>_{A}. The dynamical equation can now be used to determine the physical state of the system plus apparatus at the end of the experiment.
But the linearity of the dynamical equations already determines what the answer must be. For
|y-up>_{S}|ready>_{A} = (1/√2 |x-up>_{S} + 1/√2 |x-down>_{S})|ready>_{A}
= 1/√2 |x-up>_{S}|ready>_{A} + 1/√2 |x-down>_{S}|ready>_{A}.
But we know how each of the two terms of this superposition must evolve, since the apparatus is a good x– spin measuring device. By linearity, this initial state must evolve into the final state
1/√2x– up>_{S}"up">_{A} + 1/√2x– down>_{S}"down">_{A}.
That is, the final state of the apparatus plus system must be a superposition of a state in which the apparatus yields the result "up" and a state in which the apparatus yields the result "down." That is what treating the measurement as a normal physical interaction must imply.
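This forced outcome can be demonstrated in a toy model. Below, the system is a two-level spin and the apparatus a three-level device (ready, "up," "down"); a permutation matrix stands in for the linear dynamics, implementing exactly the two evolutions required of a good measuring device. The particular encoding is an illustrative assumption, not physics drawn from the text.

```python
import numpy as np

def ket(dim, i):
    """Basis vector i in a dim-dimensional complex space."""
    v = np.zeros(dim, dtype=complex)
    v[i] = 1
    return v

# System basis: 0 = |x-up>, 1 = |x-down>.
x_up, x_down = ket(2, 0), ket(2, 1)
# Apparatus basis: 0 = |ready>, 1 = |"up">, 2 = |"down">.
ready, ind_up, ind_down = ket(3, 0), ket(3, 1), ket(3, 2)

# A permutation matrix (unitary, hence linear) implementing
#   |x-up>|ready>   -> |x-up>|"up">
#   |x-down>|ready> -> |x-down>|"down">
# (the remaining basis states are shuffled only to keep U invertible).
U = np.eye(6, dtype=complex)[:, [1, 0, 2, 5, 4, 3]]
assert np.allclose(U @ np.kron(x_up, ready), np.kron(x_up, ind_up))
assert np.allclose(U @ np.kron(x_down, ready), np.kron(x_down, ind_down))

# Feed in |y-up> = (|x-up> + |x-down>)/sqrt(2): linearity forces the
# superposition of "up"-indicating and "down"-indicating final states.
initial = np.kron((x_up + x_down) / np.sqrt(2), ready)
final = U @ initial
superposed = (np.kron(x_up, ind_up) + np.kron(x_down, ind_down)) / np.sqrt(2)
assert np.allclose(final, superposed)
```

No choice of linear dynamics can avoid this: once the two defining evolutions are fixed, the superposed input's fate is fixed by linearity alone.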
So by making axioms about measurements, we have both committed redundancy and achieved inconsistency. The axioms say that the outcome of the experiment is not determined by the initial state; each of two outcomes is possible, with a 50 percent chance of each. But the treatment of the measurement as a normal physical interaction implies that only one final physical state can occur. And furthermore, that final physical state is an extremely difficult one to understand. It appears to be neither a state in which the measuring apparatus is indicating "up" nor a state in which the apparatus is indicating "down," but some sort of symmetric combination of the two. If all the physical facts about the apparatus are somehow represented in its wave function, then it seems that at the end of the experiment the apparatus can neither be indicating up (and not down) nor down (and not up). But we always see one or the other when we do this experiment.
At this point our attention must clearly be turned to the mathematical object we have called the wave function. The wave function is supposed to represent the physical state of a system. The question is whether the wave function represents all of the physical features of a system, or whether systems represented by the same wave function could nevertheless be physically different. If one asserts the former, then one believes that the wave function is complete; if the latter, then the wave function is incomplete. The standard interpretations of the quantum formalism take the wave function to be complete; interpretations that take it to be incomplete are commonly called hidden variables theories (although that is a misleading name).
The wave function 1/√2 |x-up>_{S}|"up">_{A} + 1/√2 |x-down>_{S}|"down">_{A} does not represent the apparatus as indicating up (and not down) or as indicating down (and not up). So if the wave function is complete, the apparatus, at the end of the experiment, must be neither indicating up (and not down) nor indicating down (and not up). But that flatly contradicts our direct experience of such apparatus. This is the measurement problem. As Bell puts it, "Either the wave function, as given by the Schrödinger equation, is not everything, or it is not right" (1987, p. 201).
Collapse Interpretations
collapse tied to observation
What is one to do? From the beginning of discussions of these matters, Einstein held the argument to show that the wave function is not everything and hence that quantum mechanics is incomplete. The wave function might represent part of the physical state of a system, or it might represent some features of ensembles, or collections, of systems, but the wave function cannot be a complete representation of the physical state of an individual system, like the particular x-spin measuring device in the laboratory after a particular experiment is done. For after the experiment, the apparatus evidently either indicates "up" or it indicates "down," but the wave function does not represent it as doing so.
By contrast, the founders of the quantum theory, especially Bohr, insisted that the wave function is complete. And they did not want to deny that the measuring device ends up indicating one determinate outcome. So the only option left was to deny that the wave function, as given by the Schrödinger equation, is right. At some times, the wave function must evolve in a way that is not correctly described by the Schrödinger equation. The wave function must "collapse." The standard interpretation of quantum mechanics holds that the wave function evolves, at different times, in either of two different ways. This view was given its canonical formulation in John von Neumann's Mathematical Foundations of Quantum Mechanics (1955). Von Neumann believed (incorrectly, as we will see) that he had proven the impossibility of supplementing the wave function with hidden variables, so he thought the wave function must be complete. When he comes to discuss the time evolution of systems, Von Neumann says "[w]e therefore have two fundamentally different types of interventions which can occur in a system S . … First, the arbitrary [i.e., nondeterministic] changes by measurement. … Second, the automatic [i.e., deterministic] changes which occur with the passage of time" (p. 351). The second type of change is described by, for example, the Schrödinger equation, and the first by an indeterministic process of collapse.
What the collapse dynamics must be can be read off from the results we want together with the thesis that the wave function is complete. For example, in the x-spin measurement of the y-spin up electron, we want there to be a 50 percent chance that the apparatus indicates "up" and a 50 percent chance that it indicates "down." But the only wave function that represents an apparatus indicating "up" is |"up">_{A}, and the only wave function for an apparatus indicating "down" is |"down">_{A}. So instead of a deterministic transition to the final state
1/√2 |x-up>_{S}|"up">_{A} + 1/√2 |x-down>_{S}|"down">_{A}
we must postulate an indeterministic transition with a 50 percent chance of yielding |x-up>_{S}|"up">_{A} and a 50 percent chance of yielding |x-down>_{S}|"down">_{A}.
It is clear what the collapse dynamics must do. What is completely unclear, though, is when it must do it. All von Neumann's rules say is that we get collapses when measurements occur and deterministic evolutions "with the passage of time." But surely measurements also involve the passage of time; so under exactly what conditions does each type of evolution obtain? Collapse theories, which postulate two distinct and incompatible forms of evolution of the wave function, require some account of when each type of evolution occurs.
Historically, this line of inquiry was influenced by the association of the problem with "measurement" or "observation." If one begins with the thought that the nonlinear evolution happens only when a measurement or observation occurs, then the problem becomes one of specifying when a measurement or observation occurs. And this in turn suggests that we need a characterization of an observer who makes the observation. Pushing even further, one can arrive at the notion that observations require a conscious observer of a certain kind, folding the problem of consciousness into the mix. As Bell asks, "What exactly qualifies some physical systems to play the role of 'measurer'? Was the wave function of the world waiting to jump for thousands of millions of years until a single-celled living creature appeared? Or did it have to wait a little longer, for some better qualified system … with a Ph.D.?" (1987, p. 117).
This line of thought was discussed by Eugene Wigner, "This way out of the difficulty amounts to the postulate that the equations of motion of quantum mechanics cease to be linear, in fact that they are grossly nonlinear if conscious beings enter the picture" (1967, p. 183). Wigner suggests that the quantum measurement problem indicates "the effect of consciousness on physical phenomena," a possibility of almost incomprehensible implications (not the least of which: How could conscious beings evolve if there were no collapses, since the universe would surely be in a superposition of states with and without conscious beings!). In any case, Wigner's speculations never amounted to a physical theory, nor could they unless a physical characterization of a conscious system was forthcoming.
So if one adopts a collapse theory, and if the collapses are tied to measurements or observations, then one is left with the problem of giving a physical characterization of an observation or a measurement. Such physicists as Einstein and Bell were incredulous of the notion that conscious systems play such a central role in the physics of the universe.
spontaneous collapse theories
Nonetheless, precise theories of collapse do exist. The key to resolving the foregoing puzzle is to notice that although collapses must be of the right form to make the physical interactions called "observations" and "measurements" have determinate outcomes, there is no reason that the collapse dynamics itself need mention observation or measurement. The collapse dynamics merely must be of such a kind as to give outcomes in the right situations.
The most widely discussed theory of wave function collapse was developed by GianCarlo Ghirardi, Alberto Rimini, and Tullio Weber (1986) and is called the spontaneous localization theory or, more commonly, the GRW theory. The theory postulates an account of wave function collapse that makes no mention of observation, measurement, consciousness, or anything of the sort. Rather, it supplies a universal rule for both how and when the collapse occurs. The "how" of the collapse involves localization in space; when the collapse occurs, one takes a single particle and multiplies its wave function, expressed as a function of space, by a narrow Gaussian (bell curve). This has the effect of localizing the particle near the center of the Gaussian, in the sense that most of the wave function will be near the center. If the wave function before the collapse is widely spread out over space, after the collapse it is much more heavily weighted toward a particular region. The likelihood that a collapse will occur centered at a particular location depends on the squared amplitude of the precollapse wave function at that location. The collapses, unlike Schrödinger evolution, are fundamentally nondeterministic, chancy events.
The GRW collapse does not perfectly localize the wave function at a point. It could not do so for straightforward physical reasons: The localization process violates the conservation of energy, and the more narrowly the postcollapse wave function is confined, the more new energy is pumped into the system. If there were perfect localizations, the energy increase would be infinite, and immediately evident. (It follows from these same observations that even in the "standard" theory there are never collapses to perfectly precise positions, even after a so-called position measurement.)
Therefore, the GRW theory faces a decision: Exactly how localized should the localized wave function be? This corresponds to choosing a width for the Gaussian: The narrower the width, the more energy is added to the system on collapse. The choice of width is bounded in one direction by observation (the energy increase for the universe must stay below observed bounds, and particular processes, such as spontaneous ionization, should be rare) and in the other direction by the demand that the localization solve the measurement problem. As it happens, Ghirardi, Rimini, and Weber chose a value of about 10^{–5} centimeters for the width of the Gaussian. This is a new constant of nature.
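A single GRW localization can be sketched numerically: multiply a spread-out one-particle wave function by a narrow Gaussian and renormalize. The grid, the widths, and the collapse center below are illustrative choices, not the physical constants of the theory.

```python
import numpy as np

# One-dimensional position grid (illustrative units).
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

# A broad, spread-out wave function before the collapse.
psi = np.exp(-x**2 / (2 * 4.0**2)).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)     # normalize

# A GRW "hit": multiply by a narrow Gaussian centered at x0, renormalize.
x0, width = 1.5, 0.1           # illustrative values, not GRW's 1e-5 cm
hit = np.exp(-(x - x0)**2 / (4 * width**2))
collapsed = psi * hit
collapsed /= np.sqrt(np.sum(np.abs(collapsed)**2) * dx)

# After the hit, nearly all the amplitude is concentrated near x0.
weight_near_x0 = np.sum(np.abs(collapsed[np.abs(x - x0) < 1.0])**2) * dx
assert weight_near_x0 > 0.99
```

The postcollapse state is still spread over a small region rather than a point, which is the "only approximate localization" discussed below.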
Beside the "how" of the collapse, the GRW theory must specify the "when." It was here that we saw issues such as consciousness getting into the discussion: If collapses occur only when measurements or observations occur, then we must know when measurements or observations occur. The GRW theory slices through this problematic neatly; it simply postulates that the collapses take place at random, with a fixed probability per unit time. This introduces another new fundamental constant: the average time between collapses per particle. The value of that constant is also limited in two directions; on the one hand, we know from interference experiments that isolated individual particles almost never suffer collapses on the time scale of laboratory operations. On the other hand, the collapses must be frequent enough to resolve the measurement problem. The GRW theory employs a value of 10^{15} seconds, or about 100 million years, for this constant.
Clearly, the constant has been chosen large enough to solve one problem: Individual isolated particles will almost never suffer collapses in the laboratory. It is less clear, though, how it solves the measurement problem.
The key here is to note that actual experiments record their outcomes in the correlated positions of many, many particles. In our spin experiment we said that our spin measuring device must have two distinct indicator states: |"up">_{A} and |"down">_{A}. To be a useful measuring device, these indicator states must be macroscopically distinguishable. This is achieved with macroscopic objects (pointers, drops of ink, and so on) to indicate the outcome. And a macroscopic object will have on the order of 10^{23} particles.
So suppose the outcome |"up">_{A} corresponds to a pointer pointing to the right and the outcome |"down">_{A} corresponds to the pointer pointing to the left. If there are no collapses, the device will end up with the wave function 1/√2 |x-up>_{S}|"up">_{A} + 1/√2 |x-down>_{S}|"down">_{A}. Now although it is unlikely that any particular particle in the pointer will suffer a collapse on the time scale of the experiment, because there are so many particles in the pointer, it is overwhelmingly likely that some particle or other in the pointer will suffer a collapse quickly: within about 10^{–7} seconds. And (this is the key), since in the state 1/√2 |x-up>_{S}|"up">_{A} + 1/√2 |x-down>_{S}|"down">_{A} all the particle positions are correlated with one another, if the collapse localizes a single particle in the pointer, it localizes all of them. So, if having the wave functions of all the particles in the pointer highly concentrated on the right (or on the left) suffices to solve the measurement problem, the problem will be solved before 10^{–4} seconds have elapsed.
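The arithmetic behind this can be made explicit. The sketch below assumes the commonly quoted GRW constants, a mean time of 10^16 seconds between hits per particle and roughly 10^23 particles in a macroscopic pointer; treat the exact numbers as assumptions.

```python
# Expected waiting time for the first GRW hit somewhere in the pointer.
# Constants are the commonly quoted GRW choices (assumed, not derived here).
tau = 1e16          # mean seconds between hits for a single particle
n = 1e23            # particles in a macroscopic pointer

# n independent collapse processes, each of rate 1/tau, combine to a total
# rate of n/tau, so the expected time to the first hit anywhere is tau/n.
t_first_hit = tau / n
print(t_first_hit)  # on the order of 1e-7 seconds
```

This is why a single isolated particle can interfere for the duration of any laboratory experiment while a macroscopic pointer cannot remain in a superposition for even a millisecond.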
The original GRW theory has been subject to much discussion. In a technical direction there have been similar theories, by Ghirardi and Rimini and by Philip Pearle, that make the collapses continuous rather than discrete. More fundamentally, there have been two foundational questions: First, does the merely approximate nature of the "localization" vitiate its usefulness in solving the measurement problem? And second, does the theory require a physical ontology distinct from the wave function? Several suggestions for such an additional ontology have been put forward, including a mass density in spacetime and discrete events ("flashes") in spacetime.
The addition of such extra ontology, beyond the wave function, recalls the second horn of Bell's dilemma: Either the wave function as given by the Schrödinger equation is not right or it is not everything. The versions of the GRW theory that admit a mass density or flashes postulate that the wave function is not everything, but they do so in such a way that the exact state of the extra ontology can be recovered from the wave function. The more radical proposal is that there is extra ontology whose state cannot be read off the wave function. These are the so-called hidden variables theories.
Additional Variables Theories
According to an additional variables theory, the complete quantum state of the system after a measurement is indeed (1/√2)|x-up⟩_S|"up"⟩_A + (1/√2)|x-down⟩_S|"down"⟩_A. The outcome of the measurement cannot be read off that state because the outcome is realized in the state of the additional variables, not in the wave function. It immediately follows that for any such theory the additional ontology, the additional variables, had best not be "hidden": since the actual outcome is manifest, the additional variables had best be manifest. Indeed, on this approach the role of the wave function in the theory is to determine the evolution of the additional variables. The wave function, since it is made manifest only through this influence, is really the more "hidden" part of the ontology.
The best known and most intensively developed additional variables theory goes back to Louis de Broglie, but is most intimately associated with David Bohm. In its nonrelativistic particle version, Bohmian mechanics, physical objects are constituted of always-located point particles, just as was conceived in classical mechanics. At any given time, the physical state of a system comprises both the exact positions of the particles and a wave function. The wave function never collapses: it always obeys a linear dynamical equation like the Schrödinger equation. Nonetheless, at the end of the experiment the particles in the pointer will end up either all on the right or all on the left, thus solving the measurement problem. This is a consequence of the dynamics of the particles as determined by the wave function.
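For reference, the particle dynamics alluded to here is given by the de Broglie–Bohm guidance equation, which in its standard nonrelativistic form (not spelled out in the text above) reads:

```latex
\frac{dQ_k}{dt} \;=\; \frac{\hbar}{m_k}\,
\operatorname{Im}\!\left[\frac{\nabla_k \psi}{\psi}\right](Q_1,\dots,Q_N,t)
```

Here Q_k is the actual position of the k-th particle, m_k its mass, and ψ the wave function on configuration space, which obeys the Schrödinger equation at all times. The velocities of the particles are thus read off the wave function evaluated at the actual configuration.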
It happens that the particle dynamics in Bohmian mechanics is completely deterministic, although that is not fundamentally important to the theory, and indeterministic versions of Bohm's approach have been developed. More important, the dynamical equation used in Bohmian mechanics is the simplest equation one can write down on the assumptions that the particle trajectories are to be determined by the wave function and that various symmetries are to be respected. If one starts with the idea that there are particles, and that quantum theory should be a theory of the motion of those particles that reproduces the predictions of the standard mathematical recipe, Bohmian mechanics is the most direct outcome.
Since Bohmian mechanics is a deterministic theory, the outcome of any experiment is fixed by the initial state of the system. The probabilities derived from the standard mathematical recipe must therefore be interpreted purely epistemically: they reflect our lack of knowledge of the initial state. This lack of knowledge turns out to have a physical explanation in Bohmian mechanics: Once one models any interaction designed to acquire information about a system as a physical interaction between a system and an observer, it can be shown to follow that initial uncertainty about the state of the target system cannot be reduced below a certain bound, given by the Heisenberg uncertainty relations.
This illustrates the degree to which the ontological "morals" of quantum theory are held hostage to interpretations. In the standard interpretation, when the wave function of a particle is spread out, there is no further fact about exactly where the particle is. (Because of this, position measurements in the standard theory are not really measurements, i.e., they do not reveal preexisting facts about positions.) In Bohm's interpretation, when the wave function is spread out, there is a fact about exactly where the particle is, but it follows from physical analysis that one cannot find out more exactly where it is without thereby altering the wave function (more properly, without altering the effective wave function that we use to make predictions). Similarly, in the standard interpretation, when we do a position measurement on a spread-out particle, there is an indeterministic collapse that localizes the particle—it gives it an approximate location. According to Bohm's theory the same interaction really is a measurement: It reveals the location that the particle already had. So it is a fool's errand to ask after "the ontological implications of quantum theory": the account of the physical world one gets depends critically on the interpretation of the formalism.
Bohm's approach has been adapted to other choices for the additional variables. In particular, interpretations of field theory have been pursued in two different ways: with field variables that evolve indeterministically, and with the addition to Bohmian mechanics of the possibility of creating and annihilating particles in an indeterministic way. Each of these provides the wherewithal to treat standard field theory.
There have been extensive examinations of other ways to add additional variables to a noncollapse interpretation, largely under the rubric of modal interpretations. Both rules for specifying what the additional variables are and rules for the dynamics of the new variables have been investigated.
A Third Way?
There are also some rather radical attempts to reject each of Bell's two options and to maintain both that the wave function, as given by the Schrödinger equation, is right and that it is everything—that is, it is descriptively complete. Since a wave function such as (1/√2)|x-up⟩_S|"up"⟩_A + (1/√2)|x-down⟩_S|"down"⟩_A does not indicate that one outcome rather than the other occurred, this requires maintaining that it is not the case that one outcome rather than the other occurred.
This denial can come in two flavors. One is to maintain that neither outcome occurred, or even seemed to occur, and that one is only somehow under the illusion that one did. David Z. Albert (1992) investigated this option under the rubric of the bare theory. Ultimately, the bare theory is insupportable, since any coherent account must at least allow that the quantum mechanical predictions appear to be correct.
The more famous attempt in this direction contends that, in some sense, both outcomes occur, albeit in different "worlds." Evidently, the wave function (1/√2)|x-up⟩_S|"up"⟩_A + (1/√2)|x-down⟩_S|"down"⟩_A can be written as the mathematical sum of two pieces, one of which corresponds to a situation with the apparatus indicating "up" and the other to a situation with the apparatus indicating "down." The many worlds theory attempts to interpret this as a single physical state, which somehow contains or supports two separate "worlds," one with each outcome.
The many worlds interpretation confronts several technical and interpretive hurdles. The first technical hurdle arises because any wave function can be written as the sum of other wave functions in an infinitude of ways. For example, consider the apparatus state (1/√2)|"up"⟩_A + (1/√2)|"down"⟩_A. Intuitively, this state does not represent the apparatus as having fired one way or the other. Call this state |D_1⟩_A. Similarly, let |D_2⟩_A be the state (1/√2)|"up"⟩_A − (1/√2)|"down"⟩_A, which also does not correspond to an apparatus with a definite outcome. The state (1/√2)|x-up⟩_S|"up"⟩_A + (1/√2)|x-down⟩_S|"down"⟩_A, which seems to consist in two "worlds," one with each outcome, can be written just as well as (1/√2)|y-up⟩_S|D_1⟩_A + (1/√2)|y-down⟩_S|D_2⟩_A. Written in this way, the state seems to comprise two worlds: one in which the electron has y-spin up and the apparatus is not in a definite indicator state, the other in which the electron has y-spin down and the apparatus is in a distinct physical state that is equally not a definite indicator state. If these are the "two worlds," then the measurement problem has not been solved but merely traded: a single world without a definite outcome has been exchanged for a pair of worlds, neither of which has a definite outcome.
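That these really are two decompositions of one and the same vector can be checked directly. The sketch below assumes the usual conventions |y-up⟩ = (|x-up⟩ + |x-down⟩)/√2, |y-down⟩ = (|x-up⟩ − |x-down⟩)/√2, with the |D⟩ states defined as in the text:

```python
from math import sqrt

# Represent the two-dimensional spin/indicator spaces as plain lists:
# |up> = [1, 0], |down> = [0, 1] (an illustrative basis choice).
up, down = [1.0, 0.0], [0.0, 1.0]
s = 1 / sqrt(2)

def add(a, b):   return [x + y for x, y in zip(a, b)]
def scale(c, a): return [c * x for x in a]
def kron(a, b):  return [x * y for x in a for y in b]  # tensor product

# Rotated electron states and the indefinite apparatus states |D1>, |D2>.
y_up   = scale(s, add(up, down))
y_down = scale(s, add(up, scale(-1, down)))
D1     = scale(s, add(up, down))
D2     = scale(s, add(up, scale(-1, down)))

# (1/sqrt 2)(|x-up>|"up"> + |x-down>|"down">) ...
psi_x = scale(s, add(kron(up, up), kron(down, down)))
# ... versus (1/sqrt 2)(|y-up>|D1> + |y-down>|D2>)
psi_y = scale(s, add(kron(y_up, D1), kron(y_down, D2)))

same = all(abs(a - b) < 1e-12 for a, b in zip(psi_x, psi_y))
print(same)  # True: one vector, two candidate ways of carving out "worlds"
```

The same vector thus supports both carvings, which is why some principled rule for picking the decomposition is needed.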
So the many worlds theory would first have to maintain that there is a preferred way to decompose the global wave function into "worlds." This is known as the preferred basis problem.
A more fundamental difficulty arises when one tries to understand the status of the probabilities in the many worlds theory. In a collapse theory the probabilities are probabilities for collapses to occur one way rather than another, and there is a physical fact about how the collapses occur, and therefore about frequencies of outcomes. In an additional variables theory the probabilities are about which values the additional variables take, and there is a physical fact about the values they take and therefore about frequencies of outcomes. But in the many worlds theory, whenever one does an experiment like the spin measurement described earlier, the world splits: There is no frequency with which one outcome occurs as opposed to the other. And more critically, that the world "splits" has nothing to do with the amplitude assigned to the two daughter worlds.
Suppose, for example, that instead of feeding a y-spin up electron into our x-spin measuring device, we feed in an electron whose state is (1/2)|x-up⟩_S + (√3/2)|x-down⟩_S. By linearity, at the end of the experiment the state of the system plus apparatus is (1/2)|x-up⟩_S|"up"⟩_A + (√3/2)|x-down⟩_S|"down"⟩_A. Even if we have solved the preferred basis problem and can assert that there are now two worlds, one with each outcome, notice that we are evidently in exactly the same situation as in the original experiment: Whenever we do the experiment, the universe "splits." But the quantum formalism counsels us to have different expectations in the two cases: in the first case we should expect to get an "up" outcome 50 percent of the time, in the second case only 25 percent of the time. It is unclear, in the many worlds theory, what the expectations are for, and why they should be different.
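The two different expectations come from the usual Born rule (outcome probabilities are squared amplitudes), which is exactly what the many worlds theory must somehow recover. A minimal check of the figures quoted above:

```python
from math import sqrt

# Amplitudes for the two experiments discussed in the text.
equal_case   = [1 / sqrt(2), 1 / sqrt(2)]  # original experiment
unequal_case = [1 / 2, sqrt(3) / 2]        # (1/2)|x-up> + (sqrt(3)/2)|x-down>

def born_probabilities(amplitudes):
    """Born rule: the probability of each outcome is its squared amplitude."""
    return [a ** 2 for a in amplitudes]

# "up" comes out with probability 1/2 in the first case ...
print(born_probabilities(equal_case))
# ... but only 1/4 in the second, even though both states "split" into
# exactly two branches.
print(born_probabilities(unequal_case))
```

In both cases there are just two branches, so branch-counting alone cannot distinguish them; the amplitudes must somehow matter, and saying how is the probability problem for many worlds.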
Another interpretation of the quantum formalism that has been considered is the many minds theory of Barry Loewer and Albert. Despite the name, the many minds theory is not allied in spirit with the many worlds theory: It is rather an additional variables theory in which the additional variables are purely mental subjective states. This is somewhat akin to Wigner's appeal to consciousness to solve the measurement problem, but where Wigner's minds affect the development of the wave function, the minds in this theory (as is typical for additional variables theories) do not. The physical measurement apparatus in the problematic case does not end up in a definite indicator state, but a mind is so constituted that it will, in this situation, have the subjective experience of seeing a particular indicator state. Which mental state the mind evolves into is indeterministic. The preferred basis problem is addressed by stipulating that there is an objectively preferred basis of physical states that are associated with distinct mental states.
The difference between the many worlds and the many minds approaches is made most vivid by noting that the latter theory does not need more than one mind to solve the measurement problem, where the problem is now understood as explaining the determinate nature of our experience. A multiplicity of minds is added to Loewer and Albert's theory only to recover a weak form of mind-body supervenience: Although the experiential state of an individual mind does not supervene on the physical state of the body with which it is associated, if one associates every body with an infinitude of minds, the distribution of their mental states can supervene on the physical state of the body.
A final attempt to address the problems of quantum mechanics deserves brief mention. Some maintain that the reason quantum mechanics is so confusing is not because the mathematical apparatus requires emendation (e.g., by explicitly adding a collapse or additional variables) or an interpretation (i.e., an account of exactly which mathematical objects represent physical facts), but because we reason about the quantum world in the wrong way. Classical logic, it is said, is what is leading us astray. We merely need to replace our patterns of inference with quantum logic.
There is a perfectly good mathematical subject that sometimes goes by the name quantum logic, which is the study, for example, of relations between subspaces of Hilbert space. These studies, like all mathematics, employ classical logic. There is, however, no sense in which these studies, by themselves, afford a solution to the measurement problem or explain how it is that experiments like those described earlier have unique, determinate outcomes.
The Wave Function, Entanglement, EPR, and Non-Locality
For the purposes of this discussion, the wave function has been treated as if it were something like the electromagnetic field: a field defined on space. Although this is not too misleading when discussing a single particle, it is entirely inadequate when considering collections of particles. The wave function for N particles is a function not on physical space but on the 3N-dimensional configuration space, each point of which specifies the exact locations of all N particles. This allows for the existence of entangled wave functions, in which the physical characteristics of even widely separated particles cannot be specified independently of one another.
Consider R and L, a pair of widely separated particles. Among the wave functions available for this pair is one that ascribes x-spin up to R and x-spin down to L, written |x-up⟩_R|x-down⟩_L, and one that ascribes x-spin down to R and x-spin up to L: |x-down⟩_R|x-up⟩_L. These are called product states, and all predictions from these states about how R will respond to a measurement are independent of what happens to L, and vice versa.
But besides these product states there are entangled states, like the singlet state: (1/√2)|x-up⟩_R|x-down⟩_L − (1/√2)|x-down⟩_R|x-up⟩_L. In this state the x-spins of the two particles are said to be anticorrelated, since a measurement of their x-spins will yield either up for R and down for L or down for R and up for L (with a 50 percent chance of each outcome). Even so, if the wave function is complete, then neither particle in the singlet state has a determinate x-spin: the state is evidently symmetrical between spin up and spin down for each particle considered individually.
How can the x-spins of the particles be anticorrelated if neither particle has an x-spin? The standard answer must appeal to dispositions: although in the singlet state neither particle is disposed to display a particular x-spin on measurement, the pair is jointly disposed to display opposite x-spins if both are measured. Put another way, on the standard interpretation, before either particle is measured neither has a determinate x-spin, but after one of them is measured, and, say, displays x-spin up, the other acquires a sure-fire disposition to display x-spin down. And this change occurs simultaneously, even if the particles happen to be millions of miles apart.
Einstein found this to be a fundamentally objectionable feature of the standard interpretation of the wave function. In a paper coauthored with Boris Podolsky and Nathan Rosen (EPR 1935), Einstein pointed out this mysterious, instantaneous "spooky action-at-a-distance" built into the standard approach to quantum theory. It is uncontroversial that an x-spin measurement carried out on L with, say, an "up" outcome will result in a change of the wave function assigned to R: It will now be assigned the state |x-down⟩_R. If the wave function is complete, then this must reflect a physical change in the state of R because of the measurement carried out on L, even though there is no physical process that connects the two particles. What EPR pointed out (using particle positions rather than spin, but to the same effect) was that the correlations could easily be explained without postulating any such action-at-a-distance. The natural suggestion is that when we assign a particular pair of particles the state (1/√2)|x-up⟩_R|x-down⟩_L − (1/√2)|x-down⟩_R|x-up⟩_L, this is a consequence of our ignorance of the real physical state of the pair: The pair is either in the product state |x-up⟩_R|x-down⟩_L or in the product state |x-down⟩_R|x-up⟩_L, with a 50 percent chance of each. This simple expedient predicts the same perfect anticorrelations without any need to invoke a real physical change of one particle consequent to the measurement of the other.
So matters stood until 1964, when Bell published his famous theorem. Bell showed that Einstein's approach could not possibly recover the full range of quantum mechanical predictions. That is, no theory can make the same predictions as quantum mechanics if it postulates (1) that distant particles, such as R and L, each have their own physical state definable independently of the other, and (2) that measurements made on either particle have no physical effect on the other. Entanglement of states turns out to be an essential feature—arguably the central feature—of quantum mechanics. And entanglement between widely separated particles implies nonlocality: The physics of either particle cannot be specified without reference to the state and career of the other.
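Bell's result is standardly made quantitative through the CHSH inequality (not derived in the text above): for any theory satisfying (1) and (2), a certain combination of correlations is bounded in magnitude by 2, whereas quantum mechanics, using the standard singlet prediction E(a, b) = −cos(a − b) for spin measurements along directions at angles a and b, reaches 2√2. A sketch:

```python
from math import cos, pi, sqrt

def E(a, b):
    """Quantum prediction for the singlet state: correlation of the two
    spin outcomes when R is measured along angle a and L along angle b."""
    return -cos(a - b)

# A standard choice of measurement angles for the CHSH combination.
a, a2 = 0.0, pi / 2        # two settings for the measurement on R
b, b2 = pi / 4, 3 * pi / 4  # two settings for the measurement on L

# CHSH combination: bounded by 2 for any theory meeting Bell's
# assumptions (1) and (2), but not by quantum mechanics.
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(abs(S))      # 2*sqrt(2), about 2.828
print(abs(S) > 2)  # True: the local bound is violated
```

The violation of the bound is what has been confirmed in the laboratory, which is why Einstein's ignorance-based explanation of the correlations cannot be sustained.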
The spooky action-at-a-distance that Einstein noted is not just an artifact of an interpretation of the quantum formalism; it is an inherent feature of physical phenomena that can be verified in the laboratory. A fundamental problem is that the physical connection between the particles is not just spooky (unmediated by any continuous spacetime process); it is superluminal. It remains unclear to this day how to reconcile this with the theory of relativity.
See also Bohm, David; Bohmian Mechanics; Many Worlds/Many Minds Interpretation of Quantum Mechanics; Modal Interpretation of Quantum Mechanics; Nonlocality; Philosophy of Physics; Quantum Logic and Probability.
Bibliography
Albert, David Z. Quantum Mechanics and Experience. Cambridge, MA: Harvard University Press, 1992.
Bell, John S. Speakable and Unspeakable in Quantum Mechanics: Collected Papers on Quantum Philosophy. Cambridge, U.K.: Cambridge University Press, 1987.
Dürr, Detlef, Sheldon Goldstein, and Nino Zanghì. "Quantum Equilibrium and the Origin of Absolute Uncertainty." Journal of Statistical Physics 67 (1992): 843–907.
Ghirardi, GianCarlo, Alberto Rimini, and Tullio Weber. "Unified Dynamics for Microscopic and Macroscopic Systems." Physical Review D 34 (2) (1986): 470–491.
Maudlin, Tim. Quantum Nonlocality and Relativity: Metaphysical Intimations of Modern Physics. Malden, MA: Blackwell, 2002.
Von Neumann, John. Mathematical Foundations of Quantum Mechanics. Translated by Robert T. Beyer. Princeton, NJ: Princeton University Press, 1955.
Wheeler, John Archibald, and Wojciech Hubert Zurek, eds. Quantum Theory and Measurement. Princeton, NJ: Princeton University Press, 1983.
Wigner, Eugene. Symmetries and Reflections. Westport, CT: Greenwood Press, 1967.
Tim Maudlin (2005)
"Quantum Mechanics." Encyclopedia of Philosophy. . Encyclopedia.com. 19 Aug. 2018 <http://www.encyclopedia.com>.
"Quantum Mechanics." Encyclopedia of Philosophy. . Encyclopedia.com. (August 19, 2018). http://www.encyclopedia.com/humanities/encyclopediasalmanacstranscriptsandmaps/quantummechanics
"Quantum Mechanics." Encyclopedia of Philosophy. . Retrieved August 19, 2018 from Encyclopedia.com: http://www.encyclopedia.com/humanities/encyclopediasalmanacstranscriptsandmaps/quantummechanics
Citation styles
Encyclopedia.com gives you the ability to cite reference entries and articles according to common styles from the Modern Language Association (MLA), The Chicago Manual of Style, and the American Psychological Association (APA).
Within the “Cite this article” tool, pick a style to see how all available information looks when formatted according to that style. Then, copy and paste the text into your bibliography or works cited list.
Because each style has its own formatting nuances that evolve over time and not all information is available for every reference entry or article, Encyclopedia.com cannot guarantee each citation it generates. Therefore, it’s best to use Encyclopedia.com citations as a starting point before checking the style against your school or publication’s requirements and the mostrecent information available at these sites:
Modern Language Association
The Chicago Manual of Style
http://www.chicagomanualofstyle.org/tools_citationguide.html
American Psychological Association
Notes:
 Most online reference entries and articles do not have page numbers. Therefore, that information is unavailable for most Encyclopedia.com content. However, the date of retrieval is often important. Refer to each style’s convention regarding the best way to format page numbers and retrieval dates.
 In addition to the MLA, Chicago, and APA styles, your school, university, publication, or institution may have its own requirements for citations. Therefore, be sure to refer to those guidelines when editing your bibliography or works cited list.