Foundations independent of measurements


Traditional foundations of quantum mechanics all depend far too heavily on the concept of (hypothetical, idealized) experiments. This is one of the reasons why these foundations are still unsettled, 85 years after the discovery of the basic equations for modern quantum mechanics. No other theory has such shaky foundations.
The main reason is that the measurement idealization that enters the usual interpretations as their hard core (where it is taken very seriously) is in reality only a didactical device to make the formal definitions of quantum mechanics a bit easier to swallow for the newcomer. Except in a few simple cases, it is too far removed from experimental practice to tell much about real experiments, and hence about how quantum mechanics is used in real applications.

In experimental physics, measurement is a very complex thing - far more complex than Born's rule (the usual starting point) suggests. Measuring the distance between two galaxies, the mass of the top quark, or the Lamb shift - to mention just three basic examples - cannot be captured by the idealized measurement concept used there, nor by any of its refinements discussed in the literature.
In each of the three cases mentioned, one assembles a lot of auxiliary information and ultimately calculates the measurement result from a best fit of a model to the data. Clearly the theory must already be in place in order to do that. We don't even know what a top quark whose mass we are measuring should be, unless we have a theory that tells us this!
And the Lamb shift (one of the most famous real observables in the history of quantum mechanics) is not even an observable in the traditional sense!
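To make the role of model fitting concrete, here is a minimal numerical sketch (the resonance model, parameter values and variable names are purely illustrative assumptions, not tied to any real experiment): the reported ''measurement result'' is the best-fit value of a model parameter, extracted from noisy raw data, together with an error bar taken from the fit.

# Hypothetical sketch: a 'measurement result' obtained as the best fit of a model
# to raw data, in the spirit of extracting a particle mass from a resonance scan.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def resonance(E, m, gamma, A):
    # Simple Breit-Wigner-like model; m plays the role of the quantity being 'measured'.
    return A / ((E - m)**2 + gamma**2 / 4)

E = np.linspace(80.0, 100.0, 200)              # hypothetical energy scan
true_m, true_gamma, true_A = 91.2, 2.5, 50.0   # illustrative values only
counts = resonance(E, true_m, true_gamma, true_A) + rng.normal(0.0, 0.5, E.size)

popt, pcov = curve_fit(resonance, E, counts, p0=[90.0, 2.0, 40.0])
m_fit, m_err = popt[0], np.sqrt(pcov[0, 0])
print(f"measured m = {m_fit:.3f} +/- {m_err:.3f}")

The number reported as the measurement exists only relative to the model being fitted; without the theory behind the model there is nothing to report.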

But to define quantum mechanics (or any other physical theory) properly, we only need to define the corresponding calculus and then say how to relate the quantities that can be calculated from quantum mechanical models (or models of the theory considered) to the stuff experimental physicists talk about.
We may then proceed as in the modern account of the oldest of the physical sciences: Euclidean geometry, where (on laboratory scales) there is consensus about how theory and reality correspond:
We develop a theory that simply gives a precise formal meaning to the concepts physicists talk about. This is pure math, in the case of geometry consisting of textbook linear algebra and analytic geometry. The identification with real life is done after the theory is in place (though the theory and the nomenclature were _developed_ with the goal of enabling this identification in a way consistent with tradition):
For geometry, by declaring anything in real life resembling an ideal point, line, plane, circle, etc., to be a point, line, plane, circle, etc., if and only if it can be assigned, in an approximate way (determined by the heuristics of traditional measurement protocols, whatever these are), the properties that the ideal point, line, plane, circle, etc., has, consistent to the assumed accuracy with the deductions from the theory. If the match is not good enough, we can explore whether an improvement can be obtained by modifying the measurement protocols (devising more accurate instruments, more elaborate error-reducing calculation schemes, etc.) or by modifying the theory (to a non-Euclidean geometry, say, which uses the same concepts but assumes slightly different properties relating them).
For quantum mechanics, by declaring anything in real life resembling an ideal photon, electron, atom, molecule, crystal, ideal gas, etc., to be a photon, electron, atom, molecule, crystal, ideal gas, etc., if and only if it can be assigned, in an approximate way (determined by the heuristics of traditional measurement protocols, whatever these are), the properties that the ideal photon, electron, atom, molecule, crystal, ideal gas, etc., has, consistent to the assumed accuracy with the deductions from the theory.
This is precisely the way Callen, one of the great expositors of thermodynamics, justifies phenomenological equilibrium thermodynamics in his famous textbook (Thermodynamics and an Introduction to Thermostatistics).

He writes on p.15: ''Operationally, a system is in an equilibrium state if its properties are consistently described by thermodynamic theory.''

This identification process is fairly independent of the way measurements are done, as long as they are capable of producing the required accuracy for the matching, and hence poses no serious philosophical difficulties.
We need the theory already to define precisely what it is that we observe.
On the other hand, the theory must be crafted in such a way that it actually applies to reality - otherwise the observed properties cannot match the theoretical description.
As a result, theoretical concepts and experimental techniques complement each other: as a theory reaches maturity, it has developed its concepts to the point where they are a good match to reality. We then say that something in real life ''is'' an instance of the theoretical concept if it matches the theoretical description to our satisfaction.
It is not difficult to check that this holds not only in physics but everywhere where we have clear concepts about some aspect of reality.

If the match between theory and observation is not good enough, we can explore whether an improvement can be obtained by modifying measurement protocols (devising more accurate instruments or more elaborate error-reducing calculation schemes, etc.) or by modifying the theory (to a hyper quantum mechanics, say, which uses the same concepts but assumes slightly different properties relating them).

Then, having established informally that the theory is an appropriate model for the physical aspects of reality, one can study the measurement problem rigorously on this basis:
One declares that a real instrument (in the sense of a complete experimental arrangement, including the numerical postprocessing of the raw results that yields the final result) performs a real measurement of an ideal quantity if and only if the following holds: modeling the real instrument as a macroscopic quantum system (with the properties assigned to it by statistical mechanics/thermodynamics) predicts raw measurements such that, within the model, the numerical postprocessing of these raw results is in sufficient agreement with the value of the ideal quantity in the model.
Thus measurement analysis is now a scientific activity like any other rather than a philosophical prerequisite for setting up a consistently interpreted quantum mechanics.

The procedure outlined above is the basis of my thermal interpretation of quantum mechanics. The thermal interpretation is superior to the interpretations found in the literature, since it

  • acknowledges that there is only one world;
  • is observer-independent and hence free from subjective elements;
  • satisfies the principles of locality and Poincare invariance, as defined in relativistic quantum field theory;
  • is by design compatible with the classical ontology of ordinary thermodynamics;
  • has no split between classical and quantum mechanics;
  • applies both to single quantum objects (like a quantum dot, the sun or the universe) and to statistical ensembles;
  • allows one to derive Born's rule in the limit of a perfect von Neumann measurement (the only case where Born's rule has empirical content);
  • has no collapse (except approximately in non-isolated subsystems);
  • uses no concepts beyond what is taught in every quantum mechanics course.
No other interpretation combines these merits. The thermal interpretation leads to a gain in clarity of thought, which results in saving a lot of time otherwise spent in the contemplation of meaningless or irrelevant aspects arising in poor interpretations.
This interpretation is called the thermal interpretation since it agrees with how one does measurements in thermodynamics (the macroscopic part of quantum mechanics, derived via statistical mechanics), and therefore naturally explains the classical properties of our quantum world. It is outlined in my slides and described in detail in Chapter 10 of version 2 (or Chapter 7 of version 1) of my book. The key is a generalization of the notion of observable that matches actual measurement practice.
Progressing from the Born rule to so-called positive operator valued measures (POVMs), commonly used in quantum optics and quantum information theory, is already a big improvement. But POVMs are adequate only for measurements in the form of clicks, flashes, or events (particle tracks) in scattering experiments.
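As a minimal sketch of how POVM statistics work (the qubit state and the detector efficiency eta below are purely illustrative assumptions): the outcome probabilities are p_k = Tr(rho E_k), where the elements E_k are positive and sum to the identity; Born's rule for an ideal projective measurement is the special case in which the E_k are orthogonal projectors.

# Minimal sketch of POVM statistics for a qubit (all numbers are illustrative only).
import numpy as np

# Density matrix of the system (a slightly mixed qubit state).
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)

# An unsharp 'detector' POVM: each E_k is positive and E0 + E1 = identity.
eta = 0.8   # hypothetical detector efficiency
E0 = eta * np.diag([1.0, 0.0]) + (1 - eta) / 2 * np.eye(2)
E1 = eta * np.diag([0.0, 1.0]) + (1 - eta) / 2 * np.eye(2)

assert np.allclose(E0 + E1, np.eye(2))              # completeness
probs = [np.trace(rho @ E).real for E in (E0, E1)]  # p_k = Tr(rho E_k)
print(probs, sum(probs))                            # the probabilities sum to 1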
Even POVMs, however, do not cover measurements of, say, the Lamb shift, or of particle form factors. One needs to be even more general:
Any function of the model parameters defining a quantum system (i.e., the Hamiltonian and the state) may be an observable. Clearly, anything computable in quantum mechanics belongs there.
Indeed, whenever we are able to compute something from raw measurements according to the rules of the theory, and it agrees with something derivable from quantum mechanics, we call the result of that computation a measurement of the latter. This correctly describes the practice of measurement.
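As a small sketch of this broader notion of observable (the Hamiltonian below is an arbitrary random symmetric matrix, chosen only for illustration): an energy-level splitting, in the spirit of the Lamb shift, is a computable function of the Hamiltonian defining the model, not the eigenvalue of a single operator read off a pointer.

# Sketch: a 'generalized observable' as a function of the model parameters (here: H).
# A level splitting is computed from the model; it is not itself the expectation or
# eigenvalue of one measured Hermitian operator.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
H = (A + A.T) / 2                   # arbitrary illustrative Hamiltonian (symmetric)

levels = np.linalg.eigvalsh(H)      # spectrum of the model
splitting = levels[2] - levels[1]   # a function of H and nothing else
print(f"level splitting = {splitting:.4f}")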
Adhering to this interpretation, the foundational problems that were introduced into quantum mechanics by basing the foundations on an ill-defined and unrealistic measurement concept are gone.

The thermal interpretation is based on the observation that quantum mechanics does much more than predict probabilities for the possible results of experiments done by Alice and Bob. In particular, it quantitatively predicts the whole of classical thermodynamics.
For example, it is used to predict the color of molecules, their response to external electromagnetic fields, the behavior of materials made of these molecules under changes of pressure or temperature, the production of energy from nuclear reactions, the behavior of transistors in the chips on which your computer runs, and a lot more.
The thermal interpretation therefore takes as its ontological basis the states occurring in statistical mechanics for the description of thermodynamics (Gibbs states), rather than the pure states figuring in a quantum mechanics built on top of the concept of a wave function. This has the advantage that the complete state of a system completely and deterministically determines the complete state of every subsystem - a basic requirement that a sound, observer-independent interpretation of quantum mechanics should satisfy.
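Here is a minimal sketch of this requirement (the two-qubit state is only an illustrative example): the density matrix of a composite system determines the density matrix of each subsystem uniquely via the partial trace, whereas an entangled pure state assigns no pure state to its subsystems.

# Sketch: the state (density matrix) of a bipartite system fixes the state of each
# subsystem via the partial trace.
import numpy as np

# Entangled two-qubit pure state (|00> + |11>)/sqrt(2), written as a density matrix.
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)

# Partial trace over the second qubit: rho_A[i, j] = sum_k rho[(i, k), (j, k)].
rho_A = np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))
print(rho_A)   # diag(1/2, 1/2): the subsystem state is mixed, but uniquely determined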
The axioms for the formal core of quantum mechanics are those specified in the entry ''Postulates for the formal core of quantum mechanics'' in Chapter A4 of my theoretical physics FAQ.

There, only the minimal statistical interpretation agreed on by everyone is discussed. The thermal interpretation goes far beyond that, assigning states and an interpretation for them to individual quantum systems, in such a way that large quantum systems are naturally described by essentially classical observables (without the need to invoke decoherence or collapse). The new approach is consistent with assigning a well-defined (though largely unknown) state to the whole universe, whose properties account for everything observable within this universe.
The fundamental mathematical description of reality is taken to be standard quantum field theory. It doesn't matter for the thermal interpretation whether or not there is a deeper underlying deterministic level.

In the thermal interpretation of quantum physics, the directly observable (and hence obviously ''real'') features of a macroscopic system are the expectation values of the most important fields Phi(x,t) at position x and time t, as they are described by statistical thermodynamics. If it were not so, thermodynamics would not provide the good macroscopic description it does.
However, the expectation values have only a limited accuracy; as discovered by Heisenberg, quantum mechanics predicts its own uncertainty. This means that the field value Phi(x,t) is objectively real only to an accuracy of order 1/sqrt(V), where V is the volume occupied by the mesoscopic cell containing x, assumed to be homogeneous and in local equilibrium. This is the standard assumption for deriving hydrodynamical equations and the like from first principles. It means that the interpretation of a field gets more fuzzy as one decreases the size of the coarse graining - until at some point the local equilibrium hypothesis is no longer valid.
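A toy numerical illustration of this scaling (the microscopic contributions are modeled here simply as independent unit-variance random numbers, an assumption made only for the sketch): the fluctuation of a cell average decreases like 1/sqrt(N) when the cell contains N ~ V microscopic degrees of freedom.

# Sketch: fluctuations of a coarse-grained cell average scale like 1/sqrt(N),
# with N proportional to the cell volume V (all numbers are illustrative only).
import numpy as np

rng = np.random.default_rng(2)
reps = 400
for N in (10**2, 10**3, 10**4):
    cell_means = rng.normal(size=(reps, N)).mean(axis=1)
    print(f"N = {N:6d}   std of cell mean = {cell_means.std():.4f}   1/sqrt(N) = {1/np.sqrt(N):.4f}")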
The picture just described defines the surface ontology of the thermal interpretation. There is also a deeper ontology concerning the reality of inferred entities: the thermal interpretation declares as real, though not directly observable, the expectations of all operators with a space-time dependence that satisfy Poincare invariance and causal commutation relations. These are distributions that produce measurable numbers when integrated over sufficiently smooth localized test functions.

Deterministic chaos is an emergent feature of the thermal interpretation of quantum mechanics, obtained in a suitable approximation. Approximating a multiparticle system in a semiclassical way (mean field theory or a little beyond) gives an approximate deterministic system governing the dynamics of these expectation values. This system is highly chaotic at high resolution. This chaoticity seems enough to account for the probabilistic behavior of the measurement apparatus. Neither an underlying exact deterministic dynamics nor an explicit dynamical collapse needs to be postulated.
The same system can be studied at different levels of resolution. When we model a dynamical system classically at high enough resolution, it must be modeled stochastically, since the quantum uncertainties must be taken into account. But at a lower resolution, one can often neglect the stochastic part, and the system becomes deterministic. If it were not so, we could not use any deterministic model at all in physics; but we often do, with excellent success.
This also holds when the resulting deterministic system is chaotic. Indeed, all deterministic chaotic systems studied in practice are approximate only, because of quantum mechanics. If it were not so, we could not use any chaotic model at all in physics; but we often do, with excellent success.
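A minimal illustration of such sensitivity in a deterministic system (the Chirikov standard map is used here purely as a generic example of deterministic chaos; it is not derived from any particular quantum model): two trajectories whose initial conditions differ by 10^-10 separate to order one within a few dozen steps.

# Sketch: sensitive dependence on initial conditions in a simple deterministic map
# (the Chirikov standard map), illustrating the kind of chaoticity a deterministic
# semiclassical approximation can exhibit.
import numpy as np

def standard_map(q, p, K=6.0, steps=50):
    qs = []
    for _ in range(steps):
        p = (p + K * np.sin(q)) % (2 * np.pi)
        q = (q + p) % (2 * np.pi)
        qs.append(q)
    return np.array(qs)

q1 = standard_map(2.0, 1.0)
q2 = standard_map(2.0 + 1e-10, 1.0)   # tiny change in the initial condition
print(np.abs(q1 - q2)[::10])          # the separation grows to order one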

The chaoticity of the semiclassical approximation is enough to explain the randomness inherent in the measurement of quantum observables.
This gives an ensemble interpretation to the Heisenberg uncertainty relation. The Heisenberg uncertainty principle states that the product of the variances of p and q is bounded below by (hbar/2)^2. It doesn't say what the variances represent. The general consensus is that the variance represents an ensemble average - i.e., the result of statistics over many independent measurements on identically prepared systems.
(In order to take the variance as a time average, one would need to invoke an ergodic theorem stating that the time average equals the ensemble average. However, such an ergodic theorem makes sense only semiclassically, and is valid only for very simple systems. Most systems are far from ergodic. Thus the interpretation of the variance as a time average is usually not warranted. This also means that - in contrast to what usually happens in the popular literature - the so-called vacuum fluctuations cannot be interpreted as fluctuations in time.)
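A small sketch of this ensemble reading (units with hbar = 1; the two Gaussian sampling distributions are the ones Born's rule assigns to a minimum-uncertainty wave packet of position spread sigma): position statistics and momentum statistics are collected on separately prepared copies of the same state, and the product of the two sample standard deviations comes out close to hbar/2.

# Sketch of the ensemble reading of the uncertainty relation (units with hbar = 1).
# For a minimum-uncertainty Gaussian wave packet of position spread sigma, Born's rule
# gives q ~ N(q0, sigma^2) and p ~ N(p0, (hbar/(2*sigma))^2); the two statistics refer
# to separately prepared copies of the same state.
import numpy as np

rng = np.random.default_rng(3)
hbar, sigma, n = 1.0, 0.7, 100_000

q_results = rng.normal(0.0, sigma, n)               # ensemble of position measurements
p_results = rng.normal(0.0, hbar / (2 * sigma), n)  # ensemble of momentum measurements

print(q_results.std() * p_results.std(), hbar / 2)  # product of spreads ~ hbar/2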


Arnold Neumaier (Arnold.Neumaier@univie.ac.at)
A theoretical physics FAQ
Thermal Interpretation FAQ