The Problem
The measurement problem becomes a problem only when we neglect to specify the nature of the observer’s Hilbert space. Postulates I (systems are described by vectors in a Hilbert space) and II (time evolution occurs via some given Hamiltonian for a particular system) are fine in that regard. These two postulates deal only with the description of a quantum system. It is with the third postulate (measurement leads to collapse of the state vector to an eigenstate) that the problem arises.
A measurement is said to occur whenever one quantum system – the “observer” – described by a Hilbert space ( $ H_{O} $ ) interacts with another system described by a Hilbert space ($H_{S}$). The complete Hilbert space of the system (“observer” and the “observed”) is given by:
$$ H_{O+S} = H_{O} \otimes H_{S} $$
To actually realize the dichotomy between an “internal” and “external” observer, the size of the observer’s Hilbert space, given by its dimension ($dim(H_O)$), must be comparable to ($dim(H_S)$) – the dimension of the Hilbert space corresponding to the system under observation. Instead, what we generally encounter is ($dim(H_O) \gg dim(H_S)$) as is the case for, say, an apparatus with a vacuum chamber and other paraphernalia which is being used to study an atomic scale sample.
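The dimension counting above can be made concrete with a minimal numerical sketch. The dimensions and states below are purely illustrative (a 4-dimensional "observer" and a qubit "system"); the point is only that the composite space $H_{O+S} = H_O \otimes H_S$ has dimension $dim(H_O) \cdot dim(H_S)$, which NumPy's Kronecker product realizes directly:

```python
import numpy as np

# Hypothetical dimensions: a small "observer" space and a qubit "system".
dim_O, dim_S = 4, 2

psi_O = np.zeros(dim_O)
psi_O[0] = 1.0                              # observer in a "ready"-like basis state
psi_S = np.array([1.0, 1.0]) / np.sqrt(2)   # system in an equal superposition

# The composite state lives in H_O (x) H_S; np.kron implements the tensor product.
psi_total = np.kron(psi_O, psi_S)

print(psi_total.shape)  # dimensions multiply: dim_O * dim_S = 8
```

For a realistic apparatus $dim(H_O)$ is astronomically large, which is exactly why the $dim(H_O) \gg dim(H_S)$ regime is the generic one.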
In this case the apparatus is not described by the three states ($\{\ket{ready}, \ket{up}, \ket{down}\}$), but by the larger family of states ($\{\ket{ready;\alpha}, \ket{up;\alpha}, \ket{down;\alpha}\}$), where ($\alpha$) parametrizes the “helper” degrees of freedom of the apparatus which are not directly involved in generating the final output, but are nevertheless present in any interaction. Examples of such degrees of freedom are the states of the electrons in the wiring which transmits data between the apparatus and the system.
The initial state of the complete system is of the form:
$$\ket{\psi_i} = \ket{ready;\alpha} (\mu \ket{1} + \nu \ket{0} )$$
When $H_O$ interacts with $H_S$ in such a way that a measurement is said to have occurred, the final state of the composite system can be written as:
$$\ket{\psi_f} = \ket{up;\alpha} (\mu_{up} \ket{1} + \nu_{up} \ket{0}) + \ket{down;\alpha} (\mu_{down} \ket{1} + \nu_{down} \ket{0})$$
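This entangling step can be sketched numerically. The toy model below is an assumption-laden simplification: the apparatus is collapsed to a three-level system ($\ket{ready}, \ket{up}, \ket{down}$ with no helper label $\alpha$), the measurement interaction is modeled by an illustrative permutation unitary, and the ideal case $\mu_{up} = \mu$, $\nu_{down} = \nu$, $\nu_{up} = \mu_{down} = 0$ is taken so the cross terms vanish:

```python
import numpy as np

# Toy premeasurement: apparatus levels {|ready>,|up>,|down>} = 0,1,2;
# system qubit {|0>,|1>}. Composite basis index = 2*apparatus + system.
mu, nu = 0.6, 0.8                    # illustrative amplitudes, |mu|^2 + |nu|^2 = 1

ready = np.array([1.0, 0.0, 0.0])
qubit = np.array([nu, mu])           # nu|0> + mu|1>
psi_i = np.kron(ready, qubit)        # product (unentangled) initial state

# Permutation unitary: |ready,1> <-> |up,1>, |ready,0> <-> |down,0>,
# all other basis states untouched. A permutation matrix is unitary.
U = np.eye(6)[[4, 3, 2, 1, 0, 5]]

psi_f = U @ psi_i
# psi_f = mu|up>|1> + nu|down>|0>: apparatus and system are now entangled.
print(np.round(psi_f, 3))
```

Note that the evolution is entirely unitary: the "measurement" here is nothing but an interaction that correlates pointer states of the apparatus with states of the system, which is exactly why postulates I and II alone do not explain why only one branch is ever observed.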
In a complete self-consistent theory, one would hope that all paradoxes regarding measurement could be resolved by understanding unitary evolution on the full Hilbert space ($H_{O+S}$). This is not quite the case. Consider the case when the system being observed is a spin-1/2 object with a two-dimensional Hilbert space ($H_S$), a basis for which can be written as ($\{ \ket{0}, \ket{1} \}$). The Hilbert space of the observing apparatus ($H_O$) is large enough to describe all the possible positions of dials, meters and probes on the apparatus. Let us assume that ($H_O$) can itself be written as a tensor product:

$$H_O = H_{pointer} \otimes H_{res}$$

For some poorly understood reason, when ($dim(H_O) \rightarrow \infty$), an interaction between the two systems – observer and subject – causes the state of the subject to “collapse” to one of the eigenstates of the operator (or “property”) of the subject being measured ($\ket{\psi_S} \rightarrow \ket{\phi^i_S}$).
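One can at least see numerically why the entangled post-measurement state looks like a classical mixture to anyone who probes only the system: tracing out the apparatus kills the off-diagonal (interference) terms whenever the pointer states are orthogonal. The sketch below assumes the same toy setup as before (qutrit apparatus, qubit system, ideal correlations with illustrative amplitudes $\mu, \nu$):

```python
import numpy as np

mu, nu = 0.6, 0.8   # illustrative amplitudes

# Entangled post-measurement state mu|up>|1> + nu|down>|0>, with the
# apparatus modeled as a qutrit (ready=0, up=1, down=2); index = 2*a + s.
psi = np.zeros(6)
psi[1 * 2 + 1] = mu   # |up,1>
psi[2 * 2 + 0] = nu   # |down,0>

rho = np.outer(psi, psi)   # pure-state density matrix of the composite

# Partial trace over the apparatus: rho_S[s,s'] = sum_a rho[(a,s),(a,s')]
rho_S = rho.reshape(3, 2, 3, 2).trace(axis1=0, axis2=2)
print(np.round(rho_S, 3))  # diag(|nu|^2, |mu|^2); off-diagonals vanish
```

The reduced state is diagonal in the measured basis, i.e. indistinguishable from a classical probabilistic mixture, even though the global evolution was unitary throughout. What this does not explain is why a single outcome is ever registered, which is the gap the interpretations below try to fill.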
When QM was first invented, it was understood that the measuring apparatus is a classical system requiring an infinite number of degrees of freedom for its complete description. Thus the “collapse” that occurs is due to something that happens at the interface of the classical measuring apparatus and the quantum system being observed. This ad-hoc separation of the classical from the quantum came to be known as the “Heisenberg cut” (or “Bohr cut”, depending on your reading of history). Since the quantum description of systems with even a few degrees of freedom appeared to be a great technical feat in those early days, physicists didn’t have much reason to worry about systems with large-dimensional ($N \gg 1$) Hilbert spaces.
Mechanisms for State Vector Collapse
To address the lack of understanding of state vector collapse in QM, and to get a grasp on the description of systems with large Hilbert spaces, first the many-worlds interpretation (MWI) and later the consistent histories or decoherence framework was constructed. The MWI sheds little direct insight into the practical question of state vector collapse, and seemingly compounds the problem by introducing the additional question of the “reality” of the many different possibilities allowed in the many-worlds framework, and of how the “reality” we love and observe can emerge out of this multitude of possibilities. As noted by Zurek [RMP, 2003]:
In essence, the many-worlds interpretation does not address, but only postpones, the key question. The quantum-classical boundary is pushed all the way towards the observer, right against the border between the material universe and the consciousness, leaving it at a very uncomfortable place to do physics. The many-worlds interpretation is incomplete: it does not explain what is effectively classical and why.
The consistent histories (decoherence) framework, on the other hand, skirted the question of measurement and focused more on the question of the emergence of the classical from the quantum.
There have been attempts at addressing the question of state vector collapse which involve ad-hoc modification of the quantum dynamics, such as those which make non-linear modifications to the superposition principle [Ghirardi, Rimini, Weber, 1985]. While interesting in their own right, such models do not appear to have observational support and only lead to a new dilemma, wherein one must explain the origin of the non-linearities in superpositions of quantum states.
Penrose [Penrose, 1996] has suggested that these non-linearities arise when one tries to superpose two quantum states which live in different spacetimes and thus do not share the same time-translation operator. In other words, his suggestion is that gravitation plays an important role in understanding state vector collapse. This is a tempting notion. If true, it would imply that the wave-function of a system massive enough to exert a gravitational pull on external objects cannot be arbitrarily spread out. This would explain why (sufficiently) massive systems always appear to be extremely well-localized, but it would also imply that one will be unable to construct stable superpositions of (sufficiently) massive macroscopic objects in the laboratory. Experiments are presently underway to test the truth of this statement.
To be contd …