
The Quest for Laws and Structure

Jürg Fröhlich

Abstract. The purpose of this paper is to illustrate, on some concrete examples, the quest of theoretical physicists for new laws of Nature and for appropriate mathematical structures that enable them to formulate and analyze new laws in as simple terms as possible and to derive consequences therefrom. The examples are taken from thermodynamics, atomism and quantum theory.¹

Contents

1 Introduction: Laws of Nature and Mathematical Structure
2 The Second Law of Thermodynamics
3 Atomism and Quantization
4 The Structure of Quantum Theory
5 Appendix on Entropy

1. Introduction: Laws of Nature and Mathematical Structure

“Truth is ever to be found in the simplicity, and not in the multiplicity and confusion of

things.” – Isaac Newton

The editor of this book has asked us to contribute texts that can be understood by

readers without much formal training in mathematics and the natural sciences. Somewhat

against my natural inclinations I have therefore attempted to write an essay that does

not contain very many heavy formulae or mathematical derivations that are essential for

an understanding of the main message I would like to convey. Actually, the reader can

understand essential elements of that message without studying the formulae. I hope that

Newton was right and that this little essay is worth my efforts.

Ever since the times of Leucippus (of Miletus, 5th Century BC) and Democritus (of

Abdera, Thrace, born around 460 BC) – if not already before – human beings have strived

for the discovery of universal laws according to which simple natural processes evolve.

Leucippus and Democritus are the originators of the following remarkable ideas about

how Nature might work:

¹ One might want to add to the title: “. . . and for Unification” – but that would oblige us to look farther afield than we can in this essay.


(1) Atomism (matter consists of various species of smallest, indivisible building blocks)

(2) Nature evolves according to eternal Laws (processes in Nature can be described

mathematically, their description being derived from laws of Nature)

(3) The Law of Causation (every event is the consequence of some cause)

Atomism is an idea that was only fully confirmed, empirically, in the early 20th Century. Atomism and Quantum Theory turn out to be Siamese twins, as I will indicate in a little more detail later on. The idea that the evolution of Nature can be described by precise mathematical laws is central to all of modern science. It has been reiterated by different people in different epochs – well known are the sayings of Galileo Galilei² and Eugene P. Wigner³. The overwhelming success of this idea is quite miraculous; it will be the main theme of this essay. The law of causation has been a fundamental building block of classical physics⁴; but after the advent of quantum theory it is no longer thought to apply to the microcosm.

In modern times, the idea of universal natural laws appears in Newton’s Law of Universal

Gravitation, which says that the trajectories of soccer balls and gun bullets and the mo-

tion of the moon around Earth and of the planets around the sun all have the same cause,

namely the gravitational force, that is thought to be universal and to be described in the

form of a precise mathematical law, Newton’s celebrated “1/r²-law”. The gravitational

force is believed to satisfy the “equivalence principle”, which says that, locally, gravita-

tional forces can be removed by passing to an accelerated frame, (i.e., locally one cannot

distinguish between the action of a gravitational force and acceleration). This principle

played an important role in Einstein’s intellectual journey to the General Theory of Rela-

tivity, whose 100th anniversary physicists are celebrating this year. Incidentally, the 1/r²-law of gravitation explains why a mechanics of point particles, which represents a concrete implementation of the idea of “atoms”, is so successful in describing the motion of

extended bodies, such as the planets orbiting the sun. The point is that the gravitational

attraction emanating from a spherically symmetric distribution of matter is identical to

the force emanating from a point source with the same total mass located at the center of

gravity of that distribution. This fact is called “Newton’s theorem”. It is reported that it

took Newton a rather long time to understand and prove it. (I recommend the proof of

this beautiful theorem as an exercise to the reader.) Newton’s theorem also applies to sys-

tems of particles with electrostatic Coulomb interactions. In this context it has played an

important role in the construction of thermodynamics for systems of nuclei and electrons

presented in [1].

But rather than meditating on Newton’s law of universal gravitation, I propose to consider the Theory of Heat and meditate on the Second Law of Thermodynamics; (see sect. 2, and

[2]). This will serve to illustrate the assertion that discovering a Law of Nature is a mir-

acle far deeper and more exciting than cooking up some shaky model that depends on

² “Philosophy is written in that great book which ever lies before our eyes – I mean the Universe – but we cannot understand it if we do not first learn the language and grasp the symbols, in which it is written. This book is written in the mathematical language.”

³ “The Unreasonable Effectiveness of Mathematics in the Natural Sciences,” in: Communications on Pure and Applied Mathematics, vol. 13, no. 1 (1960).

⁴ “Alle Naturwissenschaft ist auf die Voraussetzung der vollständigen kausalen Verknüpfung jeglichen Geschehens begründet.” (“All natural science is founded on the presupposition of the complete causal connection of all events.”) – Albert Einstein, (talk at the Physical Society in Zurich, 1910)


numerous parameters and can be put on a computer, with the purpose of fitting an elephant; (a

rather dubious activity that has become much too fashionable). Our presentation will also

illustrate the claim that discovering and formulating a law of Nature and deriving conse-

quences therefrom can only be achieved once the right mathematical structure has been

found within which that law can be formulated precisely and further analyzed. This will

also be a key point of our discussion in section 4, which, however, is considerably more

abstract and demanding than the one in section 2.

New theories or frameworks in physics can often be viewed as “deformations” of pre-

cursor theories/frameworks. This point of view has been proposed and developed in [3]

and references given there. As an example, the framework of quantum mechanics can be

understood as a deformation of the framework of Hamiltonian mechanics. The Poincaré

symmetry of the special theory of relativity can be understood as a deformation of the

Galilei symmetry of non-relativistic physics; (conversely, the Galilei group can be ob-

tained as a “contraction” of the Poincaré group). Essential elements of the mathematics

needed to understand how to implement such deformations have been developed in [4]

and, more recently, in [5]. In section 3, we illustrate this point of view by showing that

atomistic theories of matter can be obtained by deformation/quantization of theories treat-

ing matter as a continuous medium. This is a relatively recent observation made in [6]

– perhaps more an amusing curiosity than a deep insight. It leads to the realization that

the Hamiltonian mechanics of systems of identical point particles can be viewed as the

quantization of a theory of dust described as a continuous medium (Vlasov theory).

In section 4, we sketch a novel approach (called “ETH approach”) to the foundations of

quantum mechanics. We will only treat non-relativistic quantum mechanics, which is a

theory with a globally deﬁned time. (But a relativistic incarnation of our approach appears

to be feasible, too.) Most people, including grown-up professors of theoretical physics,

appear to have rather confused ideas about a theory of events and experiments in quantum

mechanics. Given that quantum mechanics was created more than ninety years ago and that it may be considered to be the most basic and successful theory of physics, the

confusion surrounding its interpretation may be perceived as something of an intellectual

scandal. In section 4 we describe ideas that have a chance to lead to progress on the way

towards a clear and logically consistent interpretation of quantum mechanics. For those

readers who are able to follow our thought process, the presentation in section 4 will show,

I hope convincingly, how important the quest for (or search for) an appropriate mathematical structure is when one attempts to formulate and then understand and use new theories

in physics. It will lead us into territory where the air is rather thin and considerable ab-

straction cannot be avoided. Clearly, neither the mathematical, nor the physical details of

this story, which is subtle and lengthy, can be explained in this essay. But I believe it is

sufﬁciently important to warrant the sketch contained in section 4. Readers not familiar

with the standard formulation of basic quantum mechanics and some functional analysis

may want to stop reading this essay at the end of section 3.


2. The Second Law of Thermodynamics

“The thermal agency by which a mechanical effect may be obtained is the transference of

heat from one body to another at a lower temperature.” – Sadi Carnot

Nicolas Léonard Sadi Carnot was a French engineer who, after a faltering military career, developed an interest in Physics. He was born in 1796 and died young of

cholera in 1832. In his only publication, “Réﬂexions sur la puissance motrice du feu et

sur les machines propres à développer cette puissance”, of 1824, Carnot presented a very

general law governing heat engines (and steam locomotives): Let T₁ denote the absolute temperature of the boiler of a steam engine with a time-periodic work cycle, and let T₂ < T₁ be the absolute temperature of the environment which the engine is immersed in (coupled to). Carnot argued that the “degree of efficiency”, η, of the engine – namely the amount of work, W, delivered by the engine during one work cycle, divided by the amount of heat energy, Q, needed during one work cycle to heat the boiler and keep it at its (constant) temperature T₁ – is always smaller than or equal to 1 − (T₂/T₁), i.e.,

η := W/Q ≤ 1 − T₂/T₁,   (1)

a quantity always smaller than 1 – so, some of the energy used to heat the boiler is apparently always “wasted”, in the sense that it cannot be converted into mechanical work

but is released into the environment! Carnot’s law can also be read in reverse: Unless

the environment, which a heat engine is immersed in, has a temperature strictly smaller

than the “internal temperature” of that heat engine (i.e., the temperature of its boiler), it is

impossible to extract any mechanical work from the engine.
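As a quick numerical illustration of the bound (1) – a sketch of my own, with the function name and the temperatures invented for the example – consider a boiler at 500 K in a 300 K environment:

```python
def carnot_bound(T_hot: float, T_cold: float) -> float:
    """Carnot's upper bound 1 - T2/T1 on the degree of efficiency of any
    heat engine operating between the absolute temperature T_hot of its
    boiler and the absolute temperature T_cold < T_hot of its environment."""
    if not (T_hot > T_cold > 0):
        raise ValueError("need T_hot > T_cold > 0 (absolute temperatures)")
    return 1.0 - T_cold / T_hot

# At most 40% of the heat Q taken from a 500 K boiler can come out
# as mechanical work W when the environment sits at 300 K:
print(carnot_bound(500.0, 300.0))  # 0.4
```

The bound tends to zero as T_cold approaches T_hot – which is exactly the “reverse reading” of Carnot’s law described above: without a temperature difference, no mechanical work can be extracted.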

Carnot’s law is unbelievably simple and unbelievably interesting, because it is universally

applicable and because it has spectacular consequences. For example, it says that one can-

not generate mechanical work simply by cooling a heat bath, such as the Atlantic Ocean,

at roughly the same temperature as that of the atmosphere. In other words, it is impos-

sible to extract heat energy from a heat bath in thermal equilibrium and convert it into

mechanical work without transmitting some part of that heat energy into a heat bath at a

lower temperature. One says that it is impossible to construct a “perpetuum mobile” of

the second kind. This is very sad, because if “perpetua mobilia” of the second kind existed

we would never face any energy crisis, and the climate catastrophe would not threaten us.

Carnot’s discovery gave birth to the theory of heat, Thermodynamics, and his law later

led to the introduction of a quantity called Entropy, which I introduce and discuss below.

This quantity is not only fundamental for thermodynamics and statistical mechanics, but,

somewhat surprisingly, has come to play a crucial role in information theory and has ap-

plications in biology. Scientists have studied it until the present time and keep discovering

new aspects and applications of entropy.

Actually, entropy was originally deﬁned and introduced into thermodynamics by Rudolf

Julius Emanuel Clausius, born Rudolf Gottlieb (1822 – 1888), who was one of the central

ﬁgures in founding the theory of heat. He realized that the main consequences deduced

from the so-called “Carnot cycle” (a mathematical description of the work cycle of a heat

engine, alluded to above, leading to the law expressed in Eq. (1)) can also be derived from

the following general principle: Consider two macroscopically large heat baths, one at temperature T₁ (the boiler) and the other one (the refrigerator) at temperature T₂ < T₁.


Imagine that the two heat baths are connected by a thermal contact (e.g., a copper wire

hooked up to the boiler at one end and to the refrigerator at the other end). Then – if

there isn’t any heat pump connected to the system that consumes mechanical work – heat

energy always ﬂows from the boiler to the refrigerator. This assertion has become known

as the 2nd Law of Thermodynamics in the formulation of Clausius.

It led him to discover entropy. Once one understands what entropy is and what properties

it has, one can derive Clausius’ law – at least for sufﬁciently simple heat baths – in the

following precise form: Let P_i(t) be the amount of heat energy released per second by heat bath/reservoir i at time t, with i = 1, 2. Then, for sufficiently simple models of heat baths, one can show that

P₁(t) + P₂(t) → 0, as t → ∞,   (2)

and that P₁(t) has a limit, denoted P∞, as time t → ∞. (This last claim is the really difficult one to understand; see [7] and references given there.) The 2nd Law of Thermodynamics in the formulation of Clausius then says that

P∞ (1/T₂ − 1/T₁) ≥ 0,   (3)

where the factor 1/T₂ − 1/T₁ is strictly positive, because T₂ < T₁; i.e., after having waited for a sufficiently long time until the total system has reached a stationary state, heat bath 1 (the boiler) releases a positive amount of heat energy per second, P∞ > 0, while heat bath 2 (the refrigerator) absorbs/swallows an equal amount of heat energy.

Before I deﬁne entropy and present some remarks explaining what Carnot’s Law

(1) and Clausius’ Law (3) have to do with entropy (-production), I would like to draw

some general lessons. It has become somewhat fashionable among scientists not properly

trained in mathematics and physics to try to export physical or chemical laws, such as

Carnot’s law (1), to other ﬁelds; e.g., to the social sciences. So, for example, inspired

by Carnot’s law, one might speculate that creative activities will be almost entirely ab-

sent in a completely just and harmonious society that is in perfect equilibrium (W = 0 if all temperature differences vanish, i.e., T₁ = T₂). This might then be advanced as an

argument against striving for social justice and harmony. One may go on and speculate

that Carnot’s law explains why the degree of efﬁciency of society’s investment in various

human endeavors, such as science, tends to be smaller than right-wing politicians would

like it to be. Encouraged by such “insights”, one starts to construct models describing the

yield of society’s investment in science that depend on hundreds of parameters and involve

some “non-linear equations”. Of course, these models turn out to be too complicated to

be studied analytically, but are believed to describe “chaotic behavior”. So they are put

on a computer, which can produce misleading data if the models really describe chaotic

behavior. But, after adjusting sufﬁciently many of those parameters, the models appear to

describe reality, and they are then used to determine the allotment of funding to different

groups of researchers. – And so on. Well, let me pause to warn against frivolous trans-

plantations of concepts, such as entropy, non-linear dynamics, chaos, catastrophe theory,

etc. from the context that has given rise to them, to entirely different contexts. Without

the necessary caution this may lead to bad mistakes! For example, the degree of efﬁciency


of society’s investment in science and engineering has been much, much higher than one

might reasonably and naïvely expect – Carnot’s Law simply does not apply here!

I think that, in doing honest and serious science, one should be humble. Heat engines

are highly complicated pieces of mechanical engineering. I admire the engineers who

were able to see through the intricacies involved in designing such machines. That there is

a universal law as simple as Carnot’s Law (1) that applies to all of them should be viewed

as a miracle. And, although this law is very, very simple, to discover it and understand

why it applies to all heat engines, independently of their mechanical complexity, is a

highly non-trivial accomplishment! Carnot’s discovery was not published in ’Science’

or ’Nature’, and his h-index⁵ equals 1. But the impact of his discovery has been truly

enormous. The point I wish to make here is that the discovery of a reliable and universal

Law of Nature, even of a very simple one, such as Carnot’s, is a miracle that happens only

relatively rarely. Physics is concerned with the study of phenomena that are so simple that

one may hope to discover precise mathematical laws governing some of these phenomena

– and, yet, the discovery of such laws is a rather rare event, and it is advisable not to expect

that an interesting one is found every second year.

I now turn to some remarks about entropy and how one of its properties enables us to

understand the origin of Carnot’s and Clausius’ laws; (see [8] for more details).

Let us consider a boiler at temperature T₁ and a refrigerator at temperature T₂ connected by a thermal contact. The quantity

−P₁(t)/T₁ − P₂(t)/T₂

is an expression for the amount of “entropy production” per second at time t. (Recall that P_i(t) is the heat energy released per second by bath i, so this expression is the rate at which the combined entropy of the two baths increases.) If entropy production per second has a limit, as t → ∞, then this limit is always non-negative! I will try to explain this a little later in this section. If the state of the system consisting of the boiler, the refrigerator and the thermal contact approaches a stationary state, as t → ∞⁶, then the “heat flows” P_i(t), i = 1, 2, have limits, as t tends to ∞. Together with the simple fact (2), this implies the 2nd Law in the formulation of Eq. (3)!
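These sign conventions can be checked on a caricature of Clausius’ setup – entirely my own toy model, far simpler than the infinitely extended heat baths treated in [7]: two finite baths exchanging heat through a contact obeying Newton-type conduction.

```python
# Toy model (my own construction): two finite heat baths with equal heat
# capacity C exchange heat through a contact of conductance kappa.
# P_i is the heat energy RELEASED per second by bath i, so the entropy
# production per second is -P1/T1 - P2/T2.
C, kappa, dt = 10.0, 0.5, 0.01
T1, T2 = 400.0, 300.0                # boiler hotter than refrigerator

for _ in range(20000):
    flow = kappa * (T1 - T2)         # heat per second from bath 1 to bath 2
    P1, P2 = flow, -flow             # bath 1 releases heat, bath 2 absorbs it
    sigma = -P1 / T1 - P2 / T2       # entropy production per second
    assert sigma >= 0                # heat never flows uphill (Clausius)
    T1 -= flow * dt / C              # bath 1 cools down ...
    T2 += flow * dt / C              # ... while bath 2 warms up

print(round(T1, 4), round(T2, 4))   # both temperatures approach 350.0
```

In this finite toy model the two baths simply equilibrate at the common temperature 350 K, so the heat flows tend to zero as in (2), and the entropy production is non-negative at every instant.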

Next I turn to Carnot’s Law (1). Let ΔQ₁(n) denote the amount of heat energy lost by the boiler of a heat engine, i.e., heat bath 1, in the nth work cycle, and let ΔQ₂(n) be the amount of heat energy absorbed by the refrigerator, heat bath 2, during the nth cycle. By energy conservation, the amount of mechanical work, W(n), produced by the heat engine in the nth cycle is then given by

W(n) = ΔQ₁(n) − ΔQ₂(n).

It turns out that the “entropy production” in the nth cycle is given by

σ(n) := −ΔQ₁(n)/T₁ + ΔQ₂(n)/T₂.   (4)

If the state of the total system, consisting of the boiler 1 and the refrigerator 2 connected to one another by the heat engine, approaches a time-periodic state, as n → ∞, then

⁵ Definition of the h-index – for “Hirsch index”: Suppose a scientist has written n+m papers of which n have been quoted (by other people) at least n times, while m have been quoted less than n times. Then the h-index of this scientist is h = n.

⁶ a property that tends to be very difficult to prove and is understood only for rather simple examples; see [7]


the entropy production, σ(n), per cycle approaches a non-negative limit, σ∞. Then (4) implies that

η = W/Q ≡ lim_{n→∞} W(n)/ΔQ₁(n) = lim_{n→∞} [ΔQ₁(n) − ΔQ₂(n)]/ΔQ₁(n) ≤ 1 − T₂/T₁ ≡ η_Carnot,   (5)

which is Carnot’s law (1).
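Rearranging (4) shows that the efficiency deficit is precisely the entropy production per cycle: η = η_Carnot − T₂ σ(n)/ΔQ₁(n). A small numerical check of this identity (the concrete numbers are invented for illustration):

```python
def cycle(T1, T2, dQ1, dQ2):
    """Per-cycle work, efficiency and entropy production, as in Eqs. (4)-(5);
    dQ1 is the heat lost by the boiler, dQ2 the heat absorbed by the
    refrigerator, per work cycle."""
    W = dQ1 - dQ2                    # energy conservation
    eta = W / dQ1                    # degree of efficiency
    sigma = -dQ1 / T1 + dQ2 / T2     # entropy production, Eq. (4)
    return W, eta, sigma

T1, T2 = 500.0, 300.0
W, eta, sigma = cycle(T1, T2, dQ1=1000.0, dQ2=700.0)
eta_carnot = 1.0 - T2 / T1
# eta falls short of eta_carnot by exactly T2 * sigma / dQ1:
print(eta, eta_carnot, abs(eta_carnot - eta - T2 * sigma / 1000.0) < 1e-9)
```

So Carnot’s bound is saturated exactly when no entropy is produced, and every bit of produced entropy shows up as lost efficiency.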

What is difﬁcult to understand (and is only proven for simple, idealized model sys-

tems) is that, in the example considered by Clausius, the state of the total system ap-

proaches a stationary state, as time tends to ∞, while in the example of the heat engine

considered by Carnot, the state of the total system approaches a time-periodic state, as the

number of completed work cycles approaches ∞. In fact, these properties can only be es-

tablished rigorously for inﬁnitely extended heat baths of a very simple kind [9]; although

they are expected to hold in quite general models of heat baths in the thermodynamic

limit. Real heat baths are ﬁnite, but macroscopically large. Then the laws of Clausius and

Carnot are only valid typically, i.e., in most cases observed in the lab.

Next, we attempt to explain how entropy is deﬁned in statistical mechanics. This may

serve as a ﬁrst illustration of the importance of the quest for (mathematical) structure

in the natural sciences. For fun, and because, in section 4, I will review a few facts

about quantum mechanics, I choose to explain this within quantum statistical mechanics.

However, the deﬁnitions and the reasoning are similar in classical statistical mechanics.

Let S be a finitely extended physical system described quantum-mechanically. In standard quantum mechanics, states of S are described by so-called density matrices, ρ, acting on some Hilbert space H. A density matrix ρ is a positive linear operator acting on H that has a finite trace, i.e.,

tr(ρ) := Σ_{n=1}^{∞} ⟨e_n, ρ e_n⟩ < ∞,

where {e_n}_{n=1}^{∞} is a complete system of mutually orthogonal unit vectors in H (i.e., an orthonormal basis in H), ⟨φ, ψ⟩ is the scalar product of two vectors, φ and ψ, in the Hilbert space H, and ρψ is the mathematical expression for the vector in H obtained by applying the linear operator ρ to the vector ψ ∈ H. In fact, for a density matrix, the trace is normalized to 1,

tr(ρ) = 1.

So-called pure states of S are described by orthogonal projections, P_ψ, onto vectors ψ ∈ H. (Obviously, such projections are special cases of density matrices.)

The von Neumann entropy, S(ρ), of a state ρ of S is defined by⁷

S(ρ) := −tr(ρ ln ρ).   (6)

We note that S(ρ) is non-negative, for all density matrices ρ, (because 0 < ρ < 1), and vanishes if and only if ρ is a pure state. Moreover, it is a concave functional on the space

⁷ For a strictly positive operator ρ, the operator ln ρ is well defined – one can use the so-called spectral theorem for self-adjoint operators to verify this claim.


of density matrices. (Finally, it is subadditive and “strongly subadditive” [10], a deep

property with interesting applications in statistical mechanics and (quantum) information

theory.) Von Neumann entropy plays an important role in statistical mechanics. However,

in many applications, and, in particular, in thermodynamics, another notion of entropy is

more important: relative entropy! This is a functional that depends on two states, ρ₁ and ρ₂, of S. The relative entropy of ρ₁, given ρ₂, is defined by

S(ρ₁ ∥ ρ₂) := tr(ρ₁ (ln ρ₁ − ln ρ₂)),   (7)

(assuming that ρ₁ vanishes on all vectors in H on which ρ₂ vanishes). Relative entropy has the following properties:

(i) Positivity: S(ρ₁ ∥ ρ₂) ≥ 0.

(ii) Convexity: S(ρ₁ ∥ ρ₂) is jointly convex in ρ₁ and ρ₂.

(iii) Monotonicity: Let T be a trace-preserving, “completely positive” map on the convex set of density matrices on H. Then

S(ρ₁ ∥ ρ₂) ≥ S(T(ρ₁) ∥ T(ρ₂)).

See, e.g., [11] for precise deﬁnitions and a proof of property (iii). I don’t think that it is

important that all readers understand what is being written here. I hope those who don’t

may now feel motivated to learn a little more about entropy. To get them started, I include

an appendix where property (i) – positivity of relative entropy – is derived. I think it is

interesting to see how relative entropy and, in particular, the fact that it is positive can be

applied to understand inequalities (1) (Carnot) and (3) (Clausius).
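For readers who like to compute, here is a small numerical sketch of definitions (6) and (7) and of property (i) – my own illustration on 2×2 density matrices, with the matrix logarithm evaluated via the spectral theorem mentioned in footnote 7:

```python
import numpy as np

def matrix_log(rho):
    """ln(rho) for a strictly positive hermitian matrix, via the spectral
    theorem: rho = U diag(p) U*  implies  ln(rho) = U diag(ln p) U*."""
    p, U = np.linalg.eigh(rho)
    return (U * np.log(p)) @ U.conj().T

def von_neumann_entropy(rho):
    """S(rho) = -tr(rho ln rho), Eq. (6), computed from the eigenvalues."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]                 # convention: 0 ln 0 = 0
    return float(-(p * np.log(p)).sum())

def relative_entropy(rho1, rho2):
    """S(rho1 || rho2) = tr(rho1 (ln rho1 - ln rho2)), Eq. (7)."""
    diff = matrix_log(rho1) - matrix_log(rho2)
    return float(np.real(np.trace(rho1 @ diff)))

pure = np.diag([1.0, 0.0])           # a pure state has entropy 0
mixed = np.diag([0.5, 0.5])          # the maximally mixed state on C^2
print(von_neumann_entropy(mixed))    # ln 2 = 0.693...
print(relative_entropy(np.diag([0.7, 0.3]), mixed) >= 0)  # property (i): True
```

The pure state has vanishing entropy, the maximally mixed state the maximal value ln 2, and the relative entropy of any pair of states comes out non-negative, as property (i) demands.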

Let us start with (3). Let ρ_t be the true state at time t of the total system consisting of the two heat baths, 1 and 2, joined by a thermal contact; and let ρ_eq denote the state describing perfect thermal equilibrium of the heat baths 1 and 2 at temperatures T₁ and T₂, respectively, before they are coupled by a thermal contact; (the state of the thermal contact decoupled from the heat baths is unimportant in this argument). Then a rather straightforward calculation shows that

(d/dt) S(ρ_t ∥ ρ_eq) = −P₁(t)/T₁ − P₂(t)/T₂.   (8)

Now, if the state ρ_t of the total system approaches a stationary state, ρ∞, as t → ∞, then the right side of Eq. (8) has a limit, as t tends to ∞, and hence the time derivative, dS(ρ_t ∥ ρ_eq)/dt, of the relative entropy S(ρ_t ∥ ρ_eq) has a limit, denoted σ∞, as t tends to ∞. Since S(ρ_t ∥ ρ_eq) is non-negative, by property (i) above, σ∞ must be non-negative; and this proves inequality (3)!

Next we turn to the proof of (1). Let ρ_n denote the state at the beginning of the nth work cycle of a system consisting of the two heat baths 1 and 2 connected to one another by a heat engine that exhibits a time-periodic work cycle, and let ρ_eq be the state of the system with the heat engine removed (meaning that the heat engine is not connected to the heat baths and is in a state of very high temperature), which describes thermal equilibrium of


the heat baths 1 and 2 at temperatures T₁ and T₂, respectively. It is quite simple to show that

σ(n) := S(ρ_{n+1} ∥ ρ_eq) − S(ρ_n ∥ ρ_eq) = −ΔQ₁(n)/T₁ + ΔQ₂(n)/T₂.

If the total system approaches a time-periodic state, as n → ∞, then the right side of this equation approaches a well-defined limit, as n tends to ∞, and hence

lim_{n→∞} σ(n) =: σ∞

exists, too. Since S(ρ_{n+1} ∥ ρ_eq) is non-negative, for all n = 1, 2, 3, . . . , by property (i) above, σ∞ must be non-negative, too. This proves (5)!

Note that, apparently, the difference between the degree of efﬁciency, η, of a heat engine

and the Carnot degree, η_Carnot = 1 − T₂/T₁, can be expressed in terms of the amount of

entropy that is produced per work cycle.

A deﬁnition and a few important properties of (relative) entropy can be found in the

appendix.

To summarize the message I have intended to convey in this section, let me ﬁrst repeat

my claim that the discovery of precise and universally applicable laws of Nature, such

as Carnot’s or Clausius’ laws, is a miracle that only happens quite rarely. Second, we

have just learned on these examples that a deeper understanding of the origin of laws of

Nature emerges only once one has found the right mathematical structure within which

to formulate and analyze them. In our examples, the key structure is the one of states

of physical systems and their time evolution, and of a functional deﬁned on the space of

(pairs of) states, namely relative entropy.

3. Atomism and Quantization

“The crucial step was to write down elements in terms of their atoms...I don’t know how

they could do chemistry beforehand, it didn’t make any sense.” – Sir Harry Kroto

“Hier (namely in Quantum theory) liegt der Schlüssel der Situation, der Schlüssel nicht

nur zur Strahlungstheorie, sondern auch zur molekularen Konstitution der Materie.” 8–

Arnold Sommerfeld

Let me recall that, almost 500 years BCE, Leucippus and Democritus proposed the

idea that matter is composed of “atoms”. Although their idea played an essential role

in the birth of modern chemistry – brought forward by John Dalton (1766-1844) and his

followers – and in the work of James Clerk Maxwell (1831-1879) on the theory of gases,

the existence of atoms was only unambiguously conﬁrmed experimentally at the begin-

ning of the 20th Century by Jean Perrin (1870-1942)⁹. From the point of view of the

mechanics known to scientists towards the end of the 19th Century, it must have looked

⁸ “Quantum Theory is the key not only for the theory of radiation but also for an understanding of the atomistic constitution of matter”, in: “Das Plancksche Wirkungsquantum und seine allgemeine Bedeutung für die Molekularphysik”.

⁹ As one finds in Wikipedia: In 1895, Perrin showed that cathode rays were negatively charged. He then determined Avogadro’s number by several different methods. He also explained the source of solar energy as


appropriate to describe matter as a continuous medium – as originally envisaged for ﬂuid

dynamics by Daniel Bernoulli (1700-1782) and Leonhard Euler (1707-1783), the famous

mathematicians and mathematical physicists from Basel. The atomistic structure of the

Newtonian mechanics of point particles could have appeared as merely an artefact well

adapted to Newton’s 1/r²-law of gravitation, as already mentioned above. The most

elegant and versatile formulation of classical mechanics known towards the end of the

19th Century was the one discovered by William Rowan Hamilton (1805-1865). In this

formulation, physical quantities pertinent to a mechanical system are described as real-

valued continuous functions on a space of pure states, Γ, the so-called “phase space” of

the system, thought to be what mathematicians call a “symplectic manifold”. The reader

does not need to know what symplectic manifolds are. It is enough to believe me that if

the space of pure states of a physical system has the structure of a symplectic manifold

then the physical quantities of the system (i.e., the real-valued continuous functions on

Γ) determine so-called Hamiltonian vector ﬁelds, which are generators of one-parameter

groups of flows on Γ. As such, they form a Lie algebra: To every pair, F and G, of real-valued, continuously differentiable functions representing two physical quantities of the system one can associate a real-valued continuous function, {F, G}, the so-called Poisson bracket of F and G. If F = H is the Hamilton function of the system, whose associated vector field generates the time evolution of the system, and if G is such that {H, G} = 0, then G is conserved under the time evolution determined by H – one says that G is a “conservation law”. Furthermore, G gives rise to a flow on Γ that commutes with the time evolution on Γ; i.e., the vector field associated with G generates a one-parameter group of symmetries of the system – this is the connection between symmetries and conservation laws.
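The connection just described can be made concrete with a short symbolic computation – my own illustration, not from the text: for any rotation-invariant Hamiltonian H = (p₁² + p₂²)/2 + V(q₁² + q₂²) on a four-dimensional phase space, the angular momentum G = q₁p₂ − q₂p₁ satisfies {H, G} = 0 and is therefore a conservation law.

```python
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2', real=True)

def poisson(F, G, qs, ps):
    """Poisson bracket {F, G} = sum_i (dF/dq_i dG/dp_i - dF/dp_i dG/dq_i)."""
    return sum(sp.diff(F, q) * sp.diff(G, p) - sp.diff(F, p) * sp.diff(G, q)
               for q, p in zip(qs, ps))

V = sp.Function('V')                        # an arbitrary smooth potential
H = (p1**2 + p2**2) / 2 + V(q1**2 + q2**2)  # rotation-invariant Hamiltonian
G = q1 * p2 - q2 * p1                       # angular momentum in the plane

print(sp.simplify(poisson(H, G, (q1, q2), (p1, p2))))  # 0, so G is conserved
```

The vanishing bracket says at once that G is constant along the Hamiltonian flow of H and that the rotations generated by G are symmetries of the system.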

If one starts from a model of matter as a continuous medium and attempts to describe

it as an instance of Hamiltonian mechanics one is necessarily led to consider inﬁnite-

dimensional Hamiltonian mechanics, or Hamiltonian ﬁeld theory. Examples of Hamilto-

nian ﬁeld theories are the Vlasov theory of material dust (such as large clusters of stars)

often used in astrophysics and cosmology, Euler’s description of incompressible ﬂuids

such as water, and Maxwell’s theory of the electromagnetic ﬁeld, including wave optics.

In 1925, Heisenberg¹⁰ and, soon after, Dirac¹¹ discovered how one can pass from

the classical Hamiltonian mechanics of a fairly general class of physical systems to the

quantum mechanics of these systems. Their discoveries are paradigmatic examples of the

importance of ﬁnding the natural mathematical structure that enables one to formulate a

new law of Nature.

Heisenberg’s 1925 paper on quantum-theoretical “Umdeutung” contains the revolution-

ary idea to associate with each physical quantity of a Hamiltonian mechanical system

represented by a real-valued continuous function F on the phase space Γ of the system

⁹ (continued) thermonuclear reactions of hydrogen. After Albert Einstein had published his explanation of Brownian motion of a “test particle” as due to collisions with atoms in a liquid, Perrin did experimental work to verify Einstein’s predictions, thereby settling a century-long dispute about John Dalton’s hypothesis concerning the existence of atoms.

¹⁰ Werner Heisenberg (1901-1976): “Über quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen”, Zeitschrift für Physik 33 (1925), 879-893.

11Paul Adrien Maurice Dirac (1902-1984): “On the Theory of Quantum Mechanics”, Proc. Royal Soc. (1926),

661- 677

The Quest for Laws and Structure 11

a “symmetric matrix” (more precisely, a self-adjoint linear operator), b

F, representing the

same physical quantity – but in a quantum-mechanical description of the system! Since

matrix multiplication is non-commutative, two operators, b

Fand b

G, representing physi-

cal quantities of a quantum-mechanical system do generally not commute with one an-

other. Dirac then recognized that one should replace the Poisson bracket, {F, G}, of two

functions on phase space by i

~×the commutator,[b

F , b

G], of the corresponding matrices,

where ~is Planck’s constant. Thus, the Heisenberg-Dirac recipe for the “quantization” of

a Hamiltonian system reads as follows:

F(real function on Γ) 7→ b

F(self-adjoint linear operator)

{F, G}(Poisson bracket)7→ i

~[b

F , b

G] (commutator)(9)

Remarks:

• The commutator, [A, B], between two matrices or linear operators A and B is defined by

    [A, B] := A·B − B·A.

• Planck's constant ħ is sometimes replaced by another so-called "deformation parameter", such as Newton's constant G_N, or some other "coupling constant", etc.

• The operators F̂ are usually thought to act on a separable Hilbert space H.
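The recipe (9) can be illustrated concretely by realizing x̂ and p̂ as finite matrices in a truncated harmonic-oscillator basis and checking the canonical commutator [x̂, p̂] = iħ·1. The truncation size N and the units ħ = 1 below are illustrative assumptions; the relation fails only in the last diagonal entry, a well-known artifact of truncating infinite matrices, not of the recipe itself:

```python
# Truncated ladder operators a, a† on an N-dimensional oscillator basis;
# x = sqrt(ħ/2)(a + a†), p = i sqrt(ħ/2)(a† − a), and [x, p] ≈ iħ·1.
import math

hbar, N = 1.0, 6  # illustrative units and truncation size

a = [[0j] * N for _ in range(N)]      # annihilation: a|n> = sqrt(n)|n-1>
for n in range(1, N):
    a[n - 1][n] = math.sqrt(n)
adag = [[a[j][i].conjugate() for j in range(N)] for i in range(N)]  # creation

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def lincomb(c1, A, c2, B):
    return [[c1 * A[i][j] + c2 * B[i][j] for j in range(N)] for i in range(N)]

s = math.sqrt(hbar / 2)
x = lincomb(s, a, s, adag)             # x̂ = sqrt(ħ/2)(a + a†)
p = lincomb(1j * s, adag, -1j * s, a)  # p̂ = i sqrt(ħ/2)(a† − a)

comm = lincomb(1, mul(x, p), -1, mul(p, x))  # [x̂, p̂]
print(comm[0][0], comm[1][1])  # each ≈ iħ = 1j away from the truncation edge
```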

The map

    ̂ : F ↦ F̂

is not an algebra homomorphism, because the real-valued continuous functions on Γ form an abelian (commutative) algebra under pointwise multiplication, whereas matrix multiplication is non-commutative; moreover, the product, F̂·Ĝ, of two self-adjoint operators, F̂ and Ĝ, is not a self-adjoint operator, unless the operators F̂ and Ĝ commute.
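The last point is easy to see already on 2×2 matrices. A small sketch (the matrices are illustrative choices, not taken from the text):

```python
# For Hermitian F, G the product F·G is Hermitian only when F and G commute.
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def hermitian(A):
    D = [[complex(A[j][i]).conjugate() for j in range(2)] for i in range(2)]
    return all(abs(A[i][j] - D[i][j]) < 1e-12 for i in range(2) for j in range(2))

sx = [[0, 1], [1, 0]]   # Hermitian
sz = [[1, 0], [0, -1]]  # Hermitian, does not commute with sx
dm = [[2, 0], [0, 3]]   # Hermitian diagonal, commutes with sz

print(hermitian(mul(sx, sz)))  # False: non-commuting factors
print(hermitian(mul(sz, dm)))  # True: commuting factors
```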

Let me briefly digress into somewhat more technical territory. Readers not familiar with the notions discussed in the following paragraph are advised to pass to Eq. (13). For the purposes of a general discussion, one can always assume that the functions F on Γ are bounded and that the operators F̂ are bounded operators on H. In the analysis of systems with infinitely many degrees of freedom, such as the electromagnetic field, it is actually convenient to use a more abstract formulation, interpreting the operators F̂ as elements of a C*-algebra, C, that plays the role, in quantum mechanics, that the algebra, C(Γ), of bounded continuous functions on phase space Γ plays in classical mechanics.

In classical mechanics, states are given by probability measures on phase space Γ. This is equivalent to saying that states are given by positive normalized linear functionals on the algebra C(Γ) of continuous functions on Γ. This definition of states can immediately be carried over to quantum mechanics: a state of a quantum system whose physical quantities are represented by the self-adjoint operators in a C*-algebra C is a positive normalized linear functional on C.

Definition: A positive normalized linear functional, ρ, on a C*-algebra A containing an identity element 1¹² (for example, A = C(Γ), where Γ is a compact topological space, or A = C) is a ℂ-linear map,

    ρ : A ∋ X ↦ ρ(X) ∈ ℂ,        (10)

with the properties that

    ρ(X) ≥ 0, for every positive operator X ∈ A,   and   ρ(1) = 1.        (11)

So-called pure states on A are states that cannot be written as convex combinations of at least two distinct states. In the example where A = C(Γ), a pure state is a Dirac delta function supported on a point of Γ. This means that pure states of a Hamiltonian mechanical system can be identified with points in phase space Γ (and, hence, the space of pure states of such a system usually does not have any relationship to a linear space, as would be the case in standard quantum mechanics).

From a pair, (A, ρ), consisting of a C*-algebra A and a state ρ on A, one can always reconstruct a Hilbert space, H_ρ, and a representation, π_ρ, of A on H_ρ. This is the content of the so-called Gel'fand-Naimark-Segal construction. (See, e.g., [12] for definitions, basic results and proofs.)

We recall that if the operators F̂ and Ĝ representing two physical quantities of some system do not commute, then they cannot be measured simultaneously: if the system is prepared in a state ρ, the uncertainties, Δ_ρF̂ and Δ_ρĜ, in a simultaneous measurement of the quantities represented by the operators F̂ and Ĝ satisfy the celebrated Heisenberg Uncertainty Relation

    Δ_ρF̂ · Δ_ρĜ ≥ (1/2) |ρ([F̂, Ĝ])|.        (12)

As a special case we mention that if x denotes the position of a particle moving on the real line ℝ and p denotes its momentum, then

    Δx · Δp ≥ ħ/2,        (13)

in an arbitrary state of the system.
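The uncertainty relation (12) can be tested in the simplest non-commutative setting, 2×2 spin matrices. In the sketch below (the state and the matrices are illustrative choices) the bound happens to be saturated:

```python
# Check Δ_ρF·Δ_ρG ≥ (1/2)|ρ([F,G])| for spin matrices in the state ψ = (1, 0).
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expect(A, psi):  # <psi| A |psi>
    return sum(psi[i].conjugate() * A[i][j] * psi[j]
               for i in range(2) for j in range(2))

def uncertainty(A, psi):
    return (expect(mul(A, A), psi) - expect(A, psi) ** 2).real ** 0.5

sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
psi = [1 + 0j, 0j]

comm = [[mul(sx, sy)[i][j] - mul(sy, sx)[i][j] for j in range(2)]
        for i in range(2)]
lhs = uncertainty(sx, psi) * uncertainty(sy, psi)
rhs = 0.5 * abs(expect(comm, psi))
print(lhs, ">=", rhs)  # here the bound is saturated: 1.0 >= 1.0
```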

The Heisenberg-Dirac recipe expressed in Eq. (9) can be applied to Vlasov theory¹³ and Maxwell's theory¹⁴, and to many other interesting examples of Hamiltonian systems or Hamiltonian field theories. In these examples, atomism always arises as a consequence of quantization.

In the following, I propose to sketch the example of Vlasov theory. This is a theory describing the mechanics of (star) dust viewed as a continuous material medium. A state of dust at time t is described by the density, f_t(x, v), of dust with velocity v ∈ ℝ³ observed at the point x in physical space E³. Clearly f_t(x, v) is non-negative, and

    ∫d³x ∫d³v f_t(x, v) = ν,

where ν is the number of moles of dust. This quantity is conserved, i.e., independent of time t. The density of dust at the point x ∈ E³ at time t is given by

    n_t(x) := ∫d³v f_t(x, v).

¹² This will always be assumed in what follows.

¹³ First proposed by Anatoly Alexandrovich Vlasov (1908–1975) in 1938.

¹⁴ Named after the eminent Scottish mathematical physicist James Clerk Maxwell (1831–1879).

The equation of motion of the state f_t is given by the so-called Vlasov (collision-free Boltzmann) equation:

    ∂f_t(x, v)/∂t + v·∇_x f_t(x, v) − ∇V_eff[f_t](x)·∇_v f_t(x, v) = 0,        (14)

where

    V_eff[f_t](x) := V(x) + ∫d³y φ(x − y) n_t(y).        (15)

In this expression, V(x) is the potential of an external force acting on the dust at the point x ∈ E³, and φ(x − y) is a two-body potential describing the force between dust at point x and dust at point y in physical space.
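The transport structure of (14) can be made concrete in a stripped-down case: in one space dimension with V = 0 and φ = 0, the Vlasov equation reduces to ∂_t f + v ∂_x f = 0, solved by free streaming, f_t(x, v) = f_0(x − vt, v). The following sketch (the initial density, the step size and the sample point are illustrative choices) checks the equation by central differences:

```python
# Verify that free streaming solves the force-free 1D Vlasov equation.
import math

def f0(x, v):  # smooth initial dust density (illustrative Gaussian)
    return math.exp(-x**2 - (v - 1.0)**2)

def f(t, x, v):  # free-streaming solution f_t(x, v) = f_0(x - v t, v)
    return f0(x - v * t, v)

h = 1e-5
t, x, v = 0.4, 0.3, 1.2
df_dt = (f(t + h, x, v) - f(t - h, x, v)) / (2 * h)
df_dx = (f(t, x + h, v) - f(t, x - h, v)) / (2 * h)
residual = df_dt + v * df_dx
print(abs(residual))  # ~0: the transport equation is satisfied
```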

In the following, I sketch how Vlasov theory can be quantized by applying the Heisenberg-Dirac recipe. Since my exposition is somewhat more technical than the rest of this essay, I want to disclose what the result of this exercise is: the quantization of Vlasov theory is nothing but the Newtonian mechanics of an arbitrary number of identical point particles moving in physical space under the influence of an external force with potential given by the function V and interacting with each other through two-body forces whose potential is given by N⁻¹φ, where N⁻¹ is a "deformation parameter" that plays the role of Planck's constant ħ in the Heisenberg-Dirac recipe. – Readers not familiar with infinite-dimensional Hamiltonian systems or not very interested in mathematical considerations are encouraged to proceed to the material after Eq. (33).

Since f_t(x, v) ≥ 0, it can be written as a product (factorized),

    f_t(x, v) = α_t(x, v) · ᾱ_t(x, v),        (16)

where α_t(x, v) is a complex-valued function of (x, v), with

    |α_t(x, v)| = √(f_t(x, v)).

Clearly, the phase, α_t/|α_t|, of α_t is not observable. Perhaps surprisingly, it appears to be a good idea to encode the time evolution of the density f_t into a dynamical law for α_t. Here is how this can be done: let Γ₁ := E³ × ℝ³ denote the "one-particle phase space" of pairs, (x, v), of points in physical space and velocities. By Γ_∞ := H¹(Γ₁) we denote the complex Sobolev space of index 1 over Γ₁. This space can be interpreted as an ∞-dimensional affine phase space. Functions α ∈ Γ_∞ and their complex conjugates, ᾱ, serve as complex coordinates for Γ_∞. The symplectic structure of Γ_∞ can be encoded into the Poisson brackets:

    {α(x, v), α(x′, v′)} = {ᾱ(x, v), ᾱ(x′, v′)} = 0,
    {α(x, v), ᾱ(x′, v′)} = −i δ(x − x′) δ(v − v′).        (17)


We introduce a Hamilton functional on Γ_∞:

    H(α, ᾱ) := i ∫∫ d³x d³v ᾱ(x, v) [v·∇_x − (∇V)(x)·∇_v] α(x, v)
             − (i/2) ∫∫ d³x d³v ᾱ(x, v) ∇_x[∫∫ d³x′ d³v′ φ(x − x′) |α(x′, v′)|²] · ∇_v α(x, v).        (18)

Hamilton's equations of motion are given by

    α̇_t(x, v) = {H, α_t(x, v)},   ᾱ̇_t(x, v) = {H, ᾱ_t(x, v)}.        (19)

It is a straightforward exercise [6] to show that these equations imply the Vlasov equation for the density f_t(x, v) = |α_t(x, v)|².

Note that this theory has a huge group of local symmetry transformations: the "gauge transformations"

    α_t(x, v) ↦ α_t(x, v) e^{iθ_t(x,v)},        (20)

where the phase θ_t(x, v) is an arbitrary real-valued, smooth function on Γ₁, are symmetries of the theory. The global gauge transformations obtained by setting θ_t(x, v) =: θ ∈ ℝ form a continuous group of symmetries, ≅ U(1), that gives rise to a conservation law,

    ‖α_t‖₂² = ∫d³x ∫d³v f_t(x, v) = const. in time t,        (21)

in accordance with Noether's theorem.

Next, we propose to quantize Vlasov theory by applying the Heisenberg-Dirac recipe (9) to the variables α, ᾱ; i.e., we replace α and ᾱ by operators,

    α ↦ α̂ =: a,   ᾱ ↦ ᾱ̂ =: a*,        (22)

and trade the Poisson brackets in (17) for commutators:

    [a(x, v), a(x′, v′)] = [a*(x, v), a*(x′, v′)] = 0,

and

    [a(x, v), a*(x′, v′)] = N⁻¹ · δ(x − x′) δ(v − v′),        (23)

where the dimensionless number Nν is proportional to the number of "atoms" present in the system; i.e., the role of Planck's constant ħ is played by N⁻¹. The creation and annihilation operators, a* and a, act on Fock space, F:

    F := ⊕_{n=0}^∞ F_n,        (24)

where

    F_0 := ℂ|0⟩,  with  a(x, v)|0⟩ = 0, ∀ x, v,

and

    F_n := ⟨ ∫···∫ φ_n(x₁, v₁, ..., x_n, v_n) ∏_{i=1}^n a*(x_i, v_i) d³x_i d³v_i |0⟩ ⟩,        (25)

where ⟨·⟩ indicates that the (linear) span is taken.

The physical interpretation of the "n-particle wave functions" φ_n is that

    f_n(x₁, v₁, ..., x_n, v_n) := |φ_n(x₁, v₁, ..., x_n, v_n)|²        (26)

is the state density on the n-particle phase space

    Γ_n := Γ₁^{×n}

for n identical classical particles moving in physical space. (The state of the system is obtained by multiplying the densities f_n by the Liouville measures ∏_{i=1}^n d³x_i d³v_i.)

The "Hamilton operator" generating the dynamical evolution of the states of the quantized theory is obtained by replacing the functions α and ᾱ in the Hamilton functional H(α, ᾱ) introduced in (18) by the operators a* and a, respectively, and writing all creation operators a* to the left of all annihilation operators a ("Wick ordering"). The time-dependent Schrödinger equation for the evolution of vectors in F then implies the Liouville equations for the densities defined in (26),

    ḟ_t(x₁, v₁, ..., x_n, v_n) = − Σ_{i=1}^n [v_i·∇_{x_i} + F(x_i)·∇_{v_i}] f_t(x₁, v₁, ..., x_n, v_n),        (27)

where

    F(x_i) := −∇_{x_i}V(x_i) − N⁻¹ Σ_{j≠i} ∇_{x_i}φ(x_i − x_j)

is the total force acting on the i-th particle, which is equal to the external force, −(∇V)(x_i), plus the sum of the forces exerted on particle i by the other particles in the system, the strength of the interaction between two particles being proportional to N⁻¹. The equations (27) are equivalent to Newton's equations of motion for n identical particles with two-body interactions moving in physical space (which are Hamiltonian equations of motion).

"Observables" of this theory are operators on Fock space F that are invariant under the symmetry transformations given by

    a(x, v) ↦ a(x, v) e^{iθ_t(x,v)},   a*(x, v) ↦ a*(x, v) e^{−iθ_t(x,v)},        (28)

corresponding to the symmetries (20); (they are the elements of an infinite-dimensional group of local gauge transformations). These symmetries imply that the particle number operator

    N̂ := ∫d³x ∫d³v a*(x, v) a(x, v)

is conserved under the time evolution, and that (in the absence of an affine connection that gives rise to a non-trivial notion of parallel transport of "wave functions", φ_n) the observables of the theory are described by operators that are functionals of the densities a*(x, v) a(x, v). These operators generate an abelian (i.e., commutative) algebra. Together with the equations of motion (27), this means that the structure of observables and the predictions of this "quantum theory" are classical, in the sense that all observables can be diagonalized simultaneously and hence have objective values, and the time evolution of the system is deterministic. In fact, this theory is just a reformulation of the Newtonian mechanics of systems of arbitrarily many identical non-relativistic particles moving in physical space E³ under the influence of an external potential force and interacting with each other through two-body potential forces.

Thus, what we have sketched here is the perhaps somewhat remarkable observation (see [6], and references given there) that the classical Newtonian mechanics of the particle systems studied above, which treats matter as atomistic, can be viewed as the quantization of Vlasov theory, which treats matter as a continuous medium of dust. Conversely, Vlasov theory can be viewed as the "classical limit" of the Newtonian mechanics of systems of O(N) identical particles with two-body interactions of strength ∝ N⁻¹, which is reached when N → ∞. This has been shown (using different concepts) in [13]. Apparently, the parameter N⁻¹ plays the role of Planck's constant ħ.

To express these findings in words: it appears that a mechanics taking into account the atomistic structure of matter arises as the result of quantization of a mechanics that treats matter as a continuous medium.

Mathematical digression on "pre-quantization" of the one-particle phase space and on the passage to the quantum theory of systems of an arbitrary number of identical non-relativistic particles (bosons) with two-body interactions

Obviously, the one-particle phase space Γ₁ carries a symplectic structure given by the symplectic 2-form

    ω := dx ∧ dv.

One-particle "wave functions", α(x, v), can be viewed as sections of a complex line bundle over Γ₁ associated to a principal U(1)-bundle. We equip this bundle with a connection, A = A_x dx + A_v dv (i.e., a gauge field, namely a mathematical object analogous to the well-known vector potential in electrodynamics), whose curvature, i.e., the field tensor associated with A, is given by

    dA = ω.

In these formulae, dx and dv are differentials, and "d" denotes exterior differentiation. The connection A introduces a notion of parallel transport on the line bundle of "wave functions" α. The symmetries (20) can then always be obeyed by replacing ordinary partial derivatives by covariant derivatives, i.e.,

    (∇_x, ∇_v) ↦ (∇_x − iA_x, ∇_v − iA_v),

and products

    ᾱ(x, v) α(x, v) ↦ ᾱ(x, v) U_γ(A) α(x′, v′),        (29)

where U_γ(A) is a complex phase factor describing parallel transport along a path γ from the point (x′, v′) ∈ Γ₁ to the point (x, v) ∈ Γ₁. These replacements lead us to the theory of "pre-quantization" of one-particle mechanics formulated over the one-particle phase space Γ₁. By applying the Heisenberg-Dirac recipe (22), (23) and then using the connection A to define parallel transport of creation and annihilation operators, a*, a, and of n-particle wave functions φ_n, we arrive at what is called "pre-quantization" of the mechanics of arbitrary n-particle systems.


In principle, the introduction of a connection A on the line bundle of one-particle "wave functions" α would allow one to consider vast generalizations of Vlasov dynamics, based on using (29), and, subsequently, of the quantized theory resulting from the replacements (22), (23). Some of these generalizations could be understood as Vlasov theories on a "non-commutative phase space", namely the non-commutative phase space obtained by applying the Heisenberg-Dirac recipe (9) to the Poisson brackets

    {x^i, x^j} = 0 = {p_i, p_j},

and

    {x^i, p_j} = −δ^i_j,   i, j = 1, 2, 3.

This leads us to the question whether standard quantum mechanics of systems of arbitrarily many identical non-relativistic particles could be rediscovered by appropriately extending the ideas discussed so far. One approach to answering this question is to pass from pre-quantization, as sketched above, to genuine quantization, by following the recipes of geometric quantization à la Kostant and Souriau; see, e.g., [14]. (An alternative is to consider "deformation quantization", see [15], which, however, is usually inadequate to deal with concrete problems of physics.) We cannot go into explaining how this is done, as this would take us too far away from our main theme. Instead, we return to Vlasov theory, whose states are represented by densities f(x, v) on the one-particle phase space Γ₁.

We propose to replace the factorization (16) of f(x, v) by the Wigner factorization

    f_ħ(x, v) = (1/(2π)³) ∫ e^{−iv·y} ψ(x − ħy/2) ψ̄(x + ħy/2) d³y,        (30)

where the "Schrödinger wave function" ψ is an arbitrary function in L²(ℝ³). Assuming that the time-dependent Schrödinger wave function ψ_t solves the so-called Hartree equation

    iħ ∂_t ψ_t = [−(ħ²/2)Δ + V + |ψ_t|² ∗ φ] ψ_t,        (31)

one finds that f_{ħ,t} solves the Vlasov equation in the limit where ħ tends to 0.

To understand and prove this claim, it is advisable to interpret f_ħ(x, v) as the Wigner transform of a general one-particle density matrix, ρ, i.e.,

    f_ħ(x, v) = (1/(2π)³) ∫ e^{−iv·y} ρ(x − ħy/2, x + ħy/2) d³y.

Expression (30) is the special case where ρ(x, y) = ψ(x)ψ̄(y) is the pure state corresponding to the wave function ψ. The equation of motion for the density f_ħ is derived from the Liouville-von Neumann equation of motion for the density matrix ρ,

    ħ ρ̇ = −i [H_eff, ρ],        (32)

corresponding to the effective Hamiltonian

    H_eff := −(ħ²/2)Δ + V + n ∗ φ,        (33)

where n(x) = ρ(x, x) = ∫ f_ħ(x, v) d³v is the particle density, and (n ∗ φ)(x) := ∫ n(y) φ(x − y) d³y. It is then not hard to see that, formally, the Liouville-von Neumann equation of motion (32), with H_eff as in (33), implies the Vlasov equation for f_ħ, as ħ approaches 0.
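The Wigner transform (30) can be probed numerically in a one-dimensional analogue, with ħ = 1 and the factor (2π)³ replaced by 2π. For the Gaussian ψ(x) = π^{−1/4} e^{−x²/2}, the Wigner function is known in closed form, f(x, v) = π^{−1} e^{−x²−v²}, which the quadrature below reproduces. The grid parameters are illustrative choices, and since this ψ is real, the complex conjugation in (30) is trivial:

```python
# Midpoint-rule evaluation of the 1D Wigner transform of a Gaussian.
import math, cmath

hbar = 1.0

def psi(x):  # real ground-state Gaussian, L2-normalized
    return math.pi ** -0.25 * math.exp(-x * x / 2)

def wigner(x, v, ymax=10.0, n=2000):
    dy = 2 * ymax / n
    s = 0j
    for k in range(n):
        y = -ymax + (k + 0.5) * dy
        s += cmath.exp(-1j * v * y) * psi(x - hbar * y / 2) * psi(x + hbar * y / 2) * dy
    return s.real / (2 * math.pi)

x, v = 0.5, -0.7
numeric = wigner(x, v)
exact = math.exp(-x * x - v * v) / math.pi  # closed-form Wigner function
print(numeric, exact)  # agree to quadrature accuracy
```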

The Hartree equation (31) for the Schrödinger wave function ψ turns out to be a Hamiltonian evolution equation on an infinite-dimensional phase space Γ̂_∞ with complex coordinates given by the Schrödinger wave functions ψ and their complex conjugates ψ̄. Applying the Heisenberg-Dirac recipe to quantize Hartree theory (with the same deformation parameter, N⁻¹, as in Vlasov theory), one arrives at the theory of gases of non-relativistic Bose atoms moving in an external potential landscape described by the potential V and with two-body interactions given by the potential N⁻¹φ. This is an example of a quantum-mechanical many-body theory. In the limiting regime where N → ∞, i.e., in the so-called mean-field (or classical) limit, one recovers Hartree theory. Details of this story can be found in [6].

Vlasov theory has many interesting applications in cosmology and in plasma physics. As an example, I mention the rather subtle analysis of Landau damping in plasmas presented in [16]. Hartree theory is often used to describe Bose gases in the limiting regime of high density and very weak two-body interactions, corresponding to N → ∞. Another, somewhat more subtle limiting regime (low density, strong interactions of very short range) is the Gross-Pitaevskii limit considered in [17]. Hartree theory with smooth, attractive two-body interactions of short range features solitary-wave solutions. In a regime where the two-body interactions are strong, the dynamics of multi-soliton configurations is well approximated by the Newtonian mechanics of point particles of varying mass moving in an external potential V and with two-body interactions ∝ φ. However, whenever the motion of the solitons is not inertial, they experience friction. This has been discussed in some detail in [18]. This observation may have interesting applications in cosmology, as first suggested in [19].

To conclude this section, we mention that the atomistic nature of the electromagnetic field, which becomes manifest in the quanta of light, or photons, can be understood by applying the Heisenberg-Dirac recipe to Maxwell's classical theory of the electromagnetic field (the deformation parameter being Planck's constant ħ). Historically, this was the first example of a quantum theory. Its contours became visible in Planck's law of black-body radiation and Einstein's discovery of the quanta of radiation, the photons.

4. The structure of Quantum Theory

"... Thus, the fixed pressure of natural causality disappears and there remains, irrespective of the validity of the natural laws, space for autonomous and causally absolutely independent decisions; I consider the elementary quanta of matter to be the place of these 'decisions'." – Hermann Weyl, 1920.

In Section 3, we have seen that the atomistic constitution of matter may be understood as resulting from Heisenberg-Dirac quantization of a "classical" Hamiltonian theory that treats matter as a continuous medium, such as Vlasov theory. In the following, we propose to sketch some fundamental features of quantum mechanics proper. It turns out that the deeply puzzling features of quantum mechanics arise from the non-commutativity of the algebra generated by the linear operators that represent physical quantities/properties characterizing a physical system. This non-commutativity turns out to be intimately related to the atomistic constitution of matter! In a sense, Hartree theory is a quantum theory – Planck's constant ħ appears explicitly in the Hartree equation that describes the time evolution of physical quantities of the theory. Hartree theory describes matter (more precisely, interacting quantum gases) as a continuous medium. As a result, the algebra of physical quantities of this theory is abelian (commutative). When it is quantized according to the Heisenberg-Dirac recipe – as indicated in Section 3 – one arrives at a theory (namely non-relativistic quantum-mechanical many-body theory) providing an atomistic description of matter, and the algebra of operators representing physical quantities becomes non-commutative.

The purpose of this section is to sketch some general features of non-relativistic quantum mechanics related to its probabilistic nature and its fundamental irreversibility. Our analysis is intended to apply to a large class of physical systems, and it is based on the assumption that the linear operators providing a quantum-mechanical description of physical quantities and events of a typical physical system, S, generate a non-abelian (non-commutative) algebra. An example of an important consequence of this assumption is the phenomenon of entanglement (see below), which does not appear in classical physics.

In classical physics, the operators representing physical quantities always generate an abelian (commutative) algebra, E^c, over the complex numbers, invariant under taking the adjoint of operators and closed in the operator norm. By a theorem due to I. M. Gel'fand (see, e.g., [12]), such an algebra is isomorphic to the algebra of complex-valued continuous functions over a compact topological (Hausdorff) space, Γ,¹⁵ i.e.,

    E^c ≅ C(Γ).        (34)

The operator norm, ‖F‖, of an element F ∈ E^c is the sup norm of the function on Γ corresponding to F, which we also denote by F. The physical quantities of the system are described by the real-valued continuous functions on Γ, which are the self-adjoint elements of E^c. States of the system are given by probability measures on Γ; pure states correspond to atomic measures, i.e., Dirac δ-functions, supported on points, ξ, of Γ. Thus, the pure states are "characters" of the algebra E^c, i.e., positive, normalized linear functionals, δ_ξ, with the property that

    δ_ξ(F·G) = δ_ξ(F)·δ_ξ(G).
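The multiplicative property of characters is easy to see concretely: the evaluation functional δ_ξ(F) = F(ξ) is linear, positive, normalized and multiplicative, whereas a genuinely mixed state (an average over Γ) is not. A small illustration, in which the point ξ, the grid standing in for Γ, and the test functions are all illustrative choices:

```python
# Evaluation at a point is a character of the function algebra; averaging is not.
import math

xi = 0.3
def delta_xi(F):            # pure state: evaluation at the point xi
    return F(xi)

grid = [k / 100 for k in range(101)]
def average(F):             # mixed state: uniform average over a grid in Γ
    return sum(F(x) for x in grid) / len(grid)

F, G = math.sin, math.cos
prod = lambda x: F(x) * G(x)

print(delta_xi(prod) - delta_xi(F) * delta_xi(G))  # 0.0: multiplicative
print(average(prod) - average(F) * average(G))     # nonzero: not a character
```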

Passing to a subsystem of the system described by the algebra E^c amounts to selecting some subalgebra, E^c_0, of the algebra E^c invariant under taking adjoints and closed in the operator norm. Characters of E^c obviously determine characters of E^c_0; i.e., pure states of the system remain pure when one passes to a subsystem. This implies that the phenomenon of entanglement is completely absent in classical physics.

¹⁵ Of course, Γ is usually not a symplectic manifold – it is symplectic, i.e., a "phase space", only if the system is Hamiltonian.


Time evolution of physical quantities from time s to time t is described by *automorphisms, τ_{t,s}, of E^c, which form a one-parameter groupoid. This may sound curiously abstract, but it turns out that any such groupoid is described by flow maps,

    γ_{t,s} : Γ → Γ,   Γ ∋ ξ(s) ↦ ξ(t) = γ_{t,s}(ξ(s)) ∈ Γ.

Under fairly general hypotheses on the properties of the maps γ_{t,s}, they are generated by (generally time-dependent) vector fields, X_t, on Γ; i.e., the trajectory ξ(t) := γ_{t,s}(ξ) of a point ξ ∈ Γ is the solution of a differential equation,

    ξ̇(t) = X_t(ξ(t)),  with  ξ(s) = ξ ∈ Γ.

These properties of time evolution are preserved when one passes from the description of a physical system to that of a subsystem. Since all this may sound too abstract and quite incomprehensible, I summarize the main features of classical physics in words:

(A) The physical quantities of a classical system are represented by self-adjoint operators that all commute with one another. They correspond to the bounded, real-valued, continuous functions on a "state space" Γ.

(B) Pure states of the system can be identified with points in its state space Γ.

(C) All physical quantities have objective and unique values in every pure state of the system. Conversely, the values of all physical quantities of a system usually determine its state uniquely. Thus, pure states have an "ontological meaning": they contain complete information on all properties of the system at a given instant of time.

(D) Mixed states are given by probability measures on Γ. Probabilities associated with such mixed states are expressions of ignorance, i.e., of a lack of complete knowledge of the true state of the system at a given instant of time.¹⁶

(E) Time evolution of physical quantities and states is completely determined by flow maps, γ_{t,s}, from the state space Γ to itself, specifying which pure states, ξ(t), at time t correspond to initial states, ξ(s), chosen at time s. Thus, the "Law of Causation" holds (as formulated originally by Leucippus and Democritus), and there is perfect determinism (disregarding the possibly huge problems of computing the dynamics of chaotic systems).

(F) All these properties of a classical description of physical systems are preserved upon passing to the description of a subsystem (one that may interact strongly with its complement).

¹⁶ One should add that, pragmatically, mixed states play an enormously important role in that they often enable us to make concrete predictions about quantities that are defined as time-averages along trajectories of true states, which one expects to be identical to ensemble averages. Often, only the ensemble averages are accessible to concrete calculation, using measures describing certain mixed states, such as thermal equilibrium states, while time-averages along trajectories of true states remain inaccessible to quantitative evaluation.


Well, for better or worse, these wonderful features of classical physics all disappear when one passes to a quantum-mechanical description of reality! One of the first problems one encounters when analyzing general features of a quantum-mechanical description of reality is that one does not know how to describe the time evolution of physical quantities of a system unless that system has interactions with the rest of the universe that are so tiny that they can be neglected over long stretches of time. Such a system is called "isolated". In this section, we limit our discussion to isolated systems; (but see, e.g., [20, 21]).

Here is a pedestrian definition of an isolated physical system – according to quantum mechanics. Let S be an isolated physical system that we wish to describe quantum-mechanically.

(1) The physical quantities/properties of S are represented by bounded self-adjoint operators. They generate a C*-algebra E, i.e., an algebra of operators invariant under taking adjoints and closed in an operator norm with certain properties; (see, e.g., [12]). For simplicity, we suppose that the spectra of all the operators corresponding to physical quantities of S are finite point spectra. Then every such operator A = A* has a spectral decomposition,

    A = Σ_{α ∈ σ(A)} α Π_α,        (35)

where σ(A) is the spectrum of A, i.e., the set of all its eigenvalues, and Π_α is the spectral projection of A corresponding to the eigenvalue α (i.e., the orthogonal projection onto the eigenspace of A associated with the eigenvalue α, in case A is made to act on a Hilbert space).

(2) An event possibly detectable in S corresponds to an orthogonal projection Π = Π* in the algebra E. But not all orthogonal projections in E represent events. Typically, a projection Π corresponding to an event possibly detectable in S is a spectral projection of an operator in E that represents a physical quantity of S.

(3) So far, time has not yet appeared in our characterization of physical systems. Time is considered to be a real parameter, t ∈ ℝ. All physical quantities of S possibly observable during the interval [s, t] ⊂ ℝ of times generate an algebra denoted by E_{[s,t]}.¹⁷ It is natural to assume that if [s′, t′] ⊆ [s, t] (i.e., s ≤ s′ and t′ ≤ t), then

    E_{[s′,t′]} ⊆ E_{[s,t]} ⊆ E.        (36)

Events possibly detectable during the time interval [s, t] are represented by certain self-adjoint (orthogonal) projections in the algebra E_{[s,t]}.

(4) Instruments: An "instrument", I_S[s, t], serving to detect certain events in S during the time interval [s, t], is given by a family of mutually orthogonal (commuting) projections, {Π_α}_{α ∈ I_S[s,t]} ⊂ E_{[s,t]}. Typically, these projections will be spectral projections of commuting self-adjoint operators representing certain physical quantities of S that may be observable/measurable in the time interval [s, t]. For the quantum mechanics describing a physical system S to make concrete predictions, it is necessary to specify its list of instruments {I_S^{(i)}[s_i, t_i]}_{i ∈ L_S}, where L_S labels all instruments of S. It should be noted that instruments located in different intervals of time may be related to each other by the time evolution of S. (Thus, for autonomous systems, it suffices to specify all instruments I_S^{(i)}[0, ∞), i = 1, 2, 3, ... All other instruments of S are conjugated to the ones in this list by time translation. Luckily, we do not need to go into all these details here.) We emphasize that the operators belonging to different instruments, all of which are located in the same interval of times, do in general not commute with each other. For example, one instrument may measure the position of a particle at some time belonging to an interval I ⊂ ℝ, while another instrument may measure its momentum at some time in I.

¹⁷ Technically speaking, this algebra is taken to be a von Neumann algebra, which has the advantage that, with an operator A ∈ E_{[s,t]}, all its spectral projections also belong to E_{[s,t]}.

Remark: For most quantum systems, the set of instruments tends to be very sparse. There are many very interesting examples of idealized mesoscopic systems for which the set of instruments serving to detect events at time t consists of the spectral projections of a single self-adjoint operator X(t), with

    X(t) = U_S(s, t) X(s) U_S(t, s),

where U_S(t, s) is the unitary propagator of the system S, describing time translations of operators representing physical properties of S observable at time s to operators representing the same physical quantities at time t; (we use the Heisenberg picture – as one should always do).

The notion of an "instrument" is not intrinsic to the theory and may depend on the "observer", but only in the sense that the amount of information available on a given physical system depends on our abilities to retrieve information about it (which may change with time). The situation is similar to the one encountered in a description of the time evolution of systems in terms of stochastic processes.

Definition. We define the algebras

  E_{≥t} := ( ⋁_{t' : t < t' < ∞} E_{[t,t']} )‾ ,  for t ∈ R,   (37)

where (·)‾ denotes completion in the operator norm of E. The algebra E_{≥t} is the algebra of all events possibly detectable at times ≥ t, i.e., happening in the future of time t.^18 By property (36) we have that

  E ⊇ E_{≥t} ⊇ E_{≥t'} ⊇ E_{[t',t'']},   (38)

whenever t < t' ≤ t''.

^18 Since we are interested in projections representing events possibly detectable at times ≥ t, it may be advantageous to assume that the algebras E_{≥t} are actually von Neumann algebras; see, e.g., [12].


Next, we describe the key idea underlying our approach to quantum mechanics: a necessary condition for a physical system S to feature events that may be detectable around or after some time t₀ (= the present), using suitable instruments I_S[t₀, ∞), is that

  E_{≥t} ⊋ E_{≥t₀},  for some past time t < t₀.   (39)

Property (39) expresses a fundamental loss of access to information concerning the past (in (39): before time t₀, but after time t) that occurs in systems featuring detectable events. A property similar to (39), but appropriate for local relativistic quantum theory, has been established for quantum electrodynamics (QED), formulated in the language of algebraic quantum field theory, by Detlev Buchholz and the late John Roberts in [22]. It is a consequence of Huygens' Principle^19 for theories with massless modes or particles, such as the photons of QED. It should be emphasized that a property perfectly analogous to (39) can also be derived for classical relativistic field theories obeying Huygens' Principle. Simple models of non-autonomous systems for which property (39) can be proven for certain (discrete) times t₀ have been discussed in [23].

We must ask why property (39) may actually represent a fundamental property (an "axiom", if you will) of the quantum theory of events and experiments. Our explanation is based on exploiting the phenomenon of entanglement. Suppose that the system S has been prepared in a state ρ at some time t₀. (How a system can be prepared in a specific state at approximately a fixed time is a question that we cannot answer in this essay; but see [24], where it is discussed at length.) The state ρ may be a pure state on the algebra E. We define a state ρ_t on the algebra E_{≥t} by setting

  ρ_t := ρ|_{E_{≥t}},  ρ_t(A) = ρ(A),  ∀A ∈ E_{≥t}.   (40)

Because of Eq. (39), the state ρ_t may be a mixed state on the algebra E_{≥t} even if it is a pure state on the algebra E, assuming that these algebras are non-commutative. This is what entanglement is all about! Furthermore, because of the loss of access to information expressed in (39), the states ρ_t "evolve" in time. This means that, at certain times (which one can predict), one may be able to use an "instrument", in the sense of item 4 above, to detect an event, in the sense of item 2 above, of which there were no signs at earlier times. Indeed, it is precisely the fundamental property of "loss of access to information", as expressed in (39), that makes it possible to gain information about a system by detecting events happening in it! One may want to call this fact the "Second Law of quantum measurement theory". Here is a rough indication of how to understand these things somewhat more precisely:

Given that a system S has been prepared in a state ρ at some time t₀, it may happen that, around some later time t, the state ρ_t is an incoherent superposition of eigenstates of a family of commuting self-adjoint projections belonging to the algebra E_{≥t} and representing events detectable at time t or later; see item (2) above. These projections may be those of an instrument I_S[t, ∞), in the sense of item (4) above. Mathematically, this

^19 after the celebrated scientist Christiaan Huygens (1629-1695), who explained many phenomena related to the wave properties of light with the help of the idea of light spheres emanating from all points in physical space already reached by light


means that

  ρ_t(A) = Σ_{α ∈ I_S[t,∞)} ρ(Π_α A Π_α) + ρ(Π_⊥ A Π_⊥),   Σ_{α ∈ I_S[t,∞)} Π_α = 1 − Π_⊥,   (41)

where {Π_α}_{α ∈ I_S[t,∞)} = I_S[t, ∞) ⊂ E_{≥t} is an instrument, and Π_⊥ projects onto whatever is not identifiable by this instrument.
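The claim below Eq. (40), that a pure state on E can restrict to a mixed state on a subalgebra, and the incoherent decomposition (41), can both be illustrated numerically. In the following sketch the Bell state and the projections are illustrative choices, and the role of the subalgebra is played by the operators a ⊗ 1 acting on the first tensor factor only:

```python
import numpy as np

# A pure entangled (Bell) state on C^2 tensor C^2.
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)           # (|00> + |11>)/sqrt(2)
rho = np.outer(bell, bell.conj())            # pure state: rho^2 = rho

# Restriction to the subalgebra {a tensor 1}: given by the partial trace
# over the second factor.
rho_restricted = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# The restriction is maximally mixed, although rho itself is pure.
assert np.allclose(rho_restricted, np.eye(2) / 2)

# It is an incoherent sum over the commuting projections P0, P1
# (an "instrument" in the loose sense of Eq. (41); labels illustrative).
P0 = np.diag([1.0, 0.0]); P1 = np.diag([0.0, 1.0])
incoherent = P0 @ rho_restricted @ P0 + P1 @ rho_restricted @ P1
assert np.allclose(incoherent, rho_restricted)
```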

Well, things are a little more subtle than that, as we will explain presently. Given a (C*- or von Neumann) algebra M and a state ρ on M, we define the adjoint action of an operator A ∈ M on the state ρ to be the bounded linear functional ad_A(ρ) defined as follows:

  ad_A(ρ)(B) := ρ([A, B]),  ∀B ∈ M.   (42)

We define the "centralizer" of the state ρ to be the subalgebra

  C_ρ := {A ∈ M : ad_A(ρ) = 0}   (43)

of the algebra M.^20 Furthermore, let Z_ρ denote the center of C_ρ.^21

Given a state ρ on the algebra E, we define C_{ρ_t} to be the centralizer of the state ρ_t on the algebra E_{≥t}, and we denote the center of C_{ρ_t} by Z_{ρ_t}.
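The exercise of footnote 20 can be made concrete in finite dimensions, where a state is given by a density matrix D via ρ(A) = tr(DA): the condition ad_A(ρ) = 0 is then equivalent to [A, D] = 0, and ρ is indeed tracial on operators commuting with D. A minimal numerical sketch (all matrices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# A density matrix D representing the state rho(A) = tr(D A).
D = np.diag([0.5, 0.3, 0.2]).astype(complex)

def rho(A):
    return np.trace(D @ A)

# ad_A(rho)(B) = rho([A, B]); A lies in the centralizer C_rho iff this
# vanishes for all B, which for rho = tr(D .) means [A, D] = 0.
A = np.diag([1.0, 2.0, 3.0]).astype(complex)     # commutes with D
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
assert abs(rho(A @ B - B @ A)) < 1e-12

# rho is a trace on C_rho: rho(A1 A2) = rho(A2 A1) for A1, A2 commuting with D.
A1 = np.diag(rng.standard_normal(3)).astype(complex)
A2 = np.diag(rng.standard_normal(3)).astype(complex)
assert abs(rho(A1 @ A2) - rho(A2 @ A1)) < 1e-12
```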

We are now prepared to say what it means, quantum-mechanically, that an event detectable by an instrument I_S[t, ∞) happens at a certain time, given that we know the state the system has been prepared in.

Axiom concerning events in quantum mechanics:

(I) Given that the system has been prepared in the state ρ, the first event after the preparation of the system, detectable by some instrument, I_S[t, ∞), of S, happens as soon as equation (41) holds true, provided all the projections Π_α ∈ I_S[t, ∞) and the projection Π_⊥ belong to the center Z_{ρ_t} of the centralizer C_{ρ_t} of the state ρ_t.

(II) The probability to detect the event Π_α ∈ I_S[t, ∞) is given by Born's Rule:

  Prob{Π_α happens} = ρ(Π_α),   (44)

and ρ(Π_⊥) is the probability that the instrument does not detect anything it can identify.

(III) If the event corresponding to the projection Π_α is detected then the state to be used for predictions after time t must be taken to be

  ρ_{t,α}(A) := ρ(Π_α A Π_α) / ρ(Π_α),  ∀A ∈ E_{≥t},   (45)

^20 It is an easy exercise, which I recommend to the reader, to show that C_ρ is an algebra contained in M and that ρ is a trace on C_ρ.

^21 The center, Z, of an algebra N consists of all operators in N that commute with all operators in N. Note that Z is an abelian subalgebra of N.


and if the instrument does not detect anything it can identify then the state

  ρ_t^⊥(A) := ρ(Π_⊥ A Π_⊥) / ρ(Π_⊥),  ∀A ∈ E_{≥t},   (46)

must be used.

Item (III) of the Axiom is sometimes called the "collapse of the wave function", a terrible expression, because the "collapse" involved here is not a physical process, but the passage to a conditional expectation.
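Born's Rule (44) and the conditional states (45)-(46) are easy to exhibit in finite dimensions. The state and the "instrument" {Π₀, Π₁}, together with Π_⊥, in the following sketch are illustrative, not taken from any specific physical model:

```python
import numpy as np

# State the system was prepared in (illustrative pure state on C^3).
psi = np.array([0.6, 0.48, 0.64], dtype=complex)
rho = np.outer(psi, psi.conj())

# A toy "instrument" {P0, P1} and the complementary projection P_perp.
P = [np.diag([1., 0., 0.]).astype(complex), np.diag([0., 1., 0.]).astype(complex)]
P_perp = np.eye(3, dtype=complex) - sum(P)

# Born's Rule, Eq. (44): Prob{P_a happens} = rho(P_a).
probs = [np.trace(rho @ Pa).real for Pa in P]
p_perp = np.trace(rho @ P_perp).real             # nothing identifiable detected
assert abs(sum(probs) + p_perp - 1.0) < 1e-12

# "Collapse", Eq. (45): conditional state after detecting the event P0.
a = 0
rho_cond = P[a] @ rho @ P[a] / probs[a]
assert abs(np.trace(rho_cond) - 1.0) < 1e-12
```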

The formulation of the basic "Axiom concerning events" given above lacks certain elements of precision that cannot be provided here, because they involve concepts – such as conditional expectations defined on non-abelian algebras, etc. – and mathematical subtleties that one cannot explain on a page or two; (see, however, [25]). A precise formulation of this axiom shows that the approximate time (≈ s_{i₀}) at which the first event is detected after the preparation of the state of the system^22 and the instrument, I_S^(i₀)[s_{i₀}, t_{i₀}], for some i₀ ∈ L_S, that detects this first event can be predicted if one knows the state the system has been prepared in; see [25].

Loss of access to information, as formulated in property (39), together with items (II) and (III) of the basic Axiom, are fundamental expressions of the probabilistic nature of quantum mechanics (i.e., of its indeterminism) and of its fundamental irreversibility. Whenever an event happens, in the sense of item (I) of the basic Axiom, we should pass to the corresponding conditional state given in Eq. (45) to make predictions of the future evolution of the system, whereas if the instrument does not detect any event it can identify then the state in Eq. (46) must be used to predict the future. The passage from the state ρ_t to one of the states in (45) and (46) is obviously not a linear process and cannot be derived from the solution of any Schrödinger equation. The statements that the time evolution of states in quantum mechanics is described by a Schrödinger equation and that the Heisenberg picture and the Schrödinger picture are equivalent are not tenable when one studies physical systems featuring events – and, ultimately, only such systems are interesting for physics.

"I leave to several futures (not to all) my garden of forking paths" – Jorge Luis Borges^23

To summarize our findings, one may say that the time evolution of states of physical systems featuring events is described, in quantum mechanics, by a generalized "branching process". At every fork of the process, an event detectable by some instrument of the system happens, or an event not identifiable by that instrument happens – as formulated in the basic Axiom. The probabilities of the different outcomes are given by Born's Rule. If one takes notice of the particular event happening at the fork, one is advised to use the corresponding state, as given in (45) and (46), for improved predictions of the future. This is a new initial state, and one then studies whether the system will feature another event in the future, in the sense of the basic Axiom, when prepared in this new initial state, etc. The

^22 i.e., the approximate time at which "a detector clicks"

^23 in: "El jardín de senderos que se bifurcan," Editorial Sur, 1941. – I thank P. F. Rodriguez for having drawn my attention to this story.


different possibilities form a tree-like structure (a little like the different descendants of a parent in population dynamics – but with the difference that, in quantum mechanics, only one "descendant", among all possible "descendants", is real), and the actual trajectory of the system corresponds to a path on this tree-like structure, called a "history". This has motivated me to call our approach to quantum mechanics the "ETH approach" – for "Events", "Trees", and "Histories". In quantum mechanics, the "ontology" of a system S lies in its possible "histories", (the probabilities or "frequencies"^24 of which are predicted by the theory).

It should be emphasized that, in quantum mechanics, the notion of "conserved quantities", such as energy, momentum and angular momentum, becomes somewhat fuzzy in systems featuring events, because such quantities are actually not strictly conserved along "histories": if the instrument involved in the detection of an event does not commute with the operator corresponding to a conserved quantity, this quantity is not conserved when the event is detected. This follows from the "collapse rules" (45) and (46).

I conclude this essay by drawing an analogy between quantum mechanics and the standard theory of stochastic (or branching) processes: the filtration of algebras {E_{≥t}}_{t∈R} in quantum mechanics is the analogue of a filtration of abelian algebras, {E^c_{≥t}}_{t∈R}, of functions defined on the path space Ξ of a stochastic process with state space X, where the functions belonging to E^c_{≥t} only depend on the part ξ_{≥t}(·) := {ξ(t') ∈ X : t' > t} of the trajectory ξ(·) ∈ Ξ of the process at times t or later. Quantum-mechanical events are somewhat analogous to events featured by a stochastic process (for example, the event that a trajectory ξ(·) of a stochastic process visits a certain measurable subset Ω of Ξ whose definition only depends on the part ξ_{≥t} of the trajectory). In the case of standard stochastic processes, all possible events generate an abelian algebra, and one can therefore assume that the "true" state of the system at time t corresponds to a point ξ(t) ∈ X, for all times t. In quantum mechanics, this is not the case! It tends to be rare that an "event" detectable by some "instrument" happens. This is a consequence of the non-commutativity of the algebras E_{≥t}, t ∈ R.
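The classical side of this analogy is easy to simulate: for a random walk, an event whose indicator depends only on the future part ξ_{≥t} of the trajectory generates, together with all similar events, a commutative algebra, and its probability can be estimated by sampling. A small sketch (the walk and the event Ω are illustrative choices):

```python
import random

random.seed(1)

def trajectory(T=50):
    """A simple random walk xi(0), ..., xi(T) on the integers."""
    x, path = 0, [0]
    for _ in range(T):
        x += random.choice([-1, 1])
        path.append(x)
    return path

t = 10

def event(path):
    """Indicator of Omega: the walk reaches level 3 at some time t' >= t.
    It depends only on path[t:], i.e., on the future part xi_{>=t}."""
    return any(x >= 3 for x in path[t:])

# Estimate Prob(Omega) by sampling trajectories.
freq = sum(event(trajectory()) for _ in range(2000)) / 2000
assert 0.0 <= freq <= 1.0
```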

In contrast to the situation in classical theories, the state of a system does not have an ontological significance in quantum mechanics; (the word "state" may therefore be considered to be a misnomer). It merely enables us to place plausible bets on possible events that may (or may not) happen in the future. In quantum mechanics, the "ontology" lies in the "histories of events" of a system, every event giving rise to a new initial state in the range of the projection that corresponds to the event, as expressed in item (III) – the "collapse postulate" – of the basic Axiom.

Acknowledgements. I am very grateful to numerous former PhD students of mine and colleagues for discussions and collaboration on various results presented in this essay. Their names can be inferred from the bibliography attached to this essay. Among them, I gratefully mention B. Schubnel, who was my companion on my journey through the landscape sketched in section 4. Part of this paper was written while I was visiting the School of Mathematics of the Institute for Advanced Study at Princeton. I wish to acknowledge the financial support from the 'Giorgio and Elena Petronio Fellowship Fund',

^24 a notion due to Jacob Bernoulli (1655-1705), a member of the famous Bernoulli family of Basel


and I warmly thank my colleague and friend Thomas C. Spencer for generous hospitality

at the Institute and many very enjoyable discussions.

5. Appendix on Entropy

In this appendix I recall the definition of the von Neumann entropy of a density matrix and the definition of the relative entropy for a pair of density matrices. I then state the most important properties of relative entropy and derive its positivity from an inequality due to O. Klein.

The von Neumann entropy of a density matrix ρ is defined by

  S(ρ) := −tr(ρ ln ρ).   (47)

It is obviously non-negative, and it vanishes if and only if ρ is a pure state. It has various important properties, among which one should mention that it is concave, subadditive and strongly subadditive; see [11].
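Numerically, S(ρ) is computed from the eigenvalues of ρ (with the convention 0 ln 0 = 0). The following sketch also checks concavity on an illustrative pair of density matrices:

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy S(rho) = -tr(rho ln rho), via the eigenvalues,
    with the convention 0 ln 0 = 0."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

# S vanishes on pure states ...
psi = np.array([1.0, 0.0], dtype=complex)
pure = np.outer(psi, psi.conj())
assert abs(vn_entropy(pure)) < 1e-10

# ... and is concave: S(p rho1 + (1-p) rho2) >= p S(rho1) + (1-p) S(rho2).
rho1 = np.diag([0.9, 0.1]); rho2 = np.diag([0.2, 0.8]); p = 0.4
mix = p * rho1 + (1 - p) * rho2
assert vn_entropy(mix) >= p * vn_entropy(rho1) + (1 - p) * vn_entropy(rho2)
```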

More important for our considerations in section 2 is another functional, called "relative entropy", defined on pairs of density matrices. Let ρ and σ be density matrices on H; the relative entropy of ρ given σ is introduced as follows:

  S(ρ‖σ) := tr(ρ(ln ρ − ln σ)),   (48)

where it is assumed that ker(σ) ⊆ ker(ρ). Important properties of relative entropy are:

• Positivity:

  S(ρ‖σ) ≥ 0,  with "=" iff ρ = σ on ker(ρ)^⊥.   (49)

• Convexity: S(ρ‖σ) is jointly convex in ρ and σ.

For the material in section 2, positivity and joint convexity of relative entropy are the crucial properties.

Next, we state and prove a general inequality, due to O. Klein,^25 which turns out to imply the positivity of relative entropy. Let f be a real-valued, strictly convex function on the real line, and let A and B be self-adjoint operators on H. Then

  tr(f(B)) ≥ tr(f(A)) + tr(f′(A)·(B − A)),   (50)

with "=" only if A = B.

Proof of inequality (50): Let {ψ_j}_{j=0}^∞ be a complete orthonormal system (CONS) of eigenvectors of B corresponding to eigenvalues β_j, j = 0, 1, 2, ... Let ψ be a unit vector in H, and c_j := ⟨ψ_j, ψ⟩. Then

  ⟨ψ, f(B)ψ⟩ = Σ_j |c_j|² f(β_j) ≥ f(Σ_j |c_j|² β_j) = f(⟨ψ, Bψ⟩),   (51)

^25 Oskar Benjamin Klein (1894-1977) was an eminent Swedish theorist. For example, independently of Kaluza, he invented the Kaluza-Klein unification of gravitation and electromagnetism involving a compact fifth dimension of space-time, and, in 1938, he was the first to propose a non-abelian gauge theory of weak interactions.


by convexity of f, which, moreover, also implies that

  f(⟨ψ, Bψ⟩) ≥ f(⟨ψ, Aψ⟩) + f′(⟨ψ, Aψ⟩)·⟨ψ, (B − A)ψ⟩.

If ψ is an eigenvector of A then the right side equals

  ⟨ψ, [f(A) + f′(A)·(B − A)]ψ⟩.   (52)

Eq. (50) follows by summing Eqs. (51) and (52) over a CONS of eigenvectors of A. □

As an application we set f(x) = x ln(x). Then

  f′(x) = ln(x) + 1,  and  f″(x) = 1/x > 0, for x > 0,

i.e., f is convex on R₊. We set A := σ and B := ρ. Then A and B are positive operators, and hence, by the convexity of f on R₊, Klein's inequality (50) implies that

  tr(ρ ln(ρ)) = tr(f(B))
   ≥ tr(f(A)) + tr(f′(A)·(B − A))
   = tr(σ ln(σ)) + tr([ln(σ) + 1](ρ − σ))
   = tr(ρ ln(σ)),   (53)

where we have used the fact that tr(ρ) = tr(σ) (= 1), and the cyclicity of the trace. This proves the positivity of relative entropy. □
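The positivity just proved can also be checked numerically for generic full-rank density matrices, computing the matrix logarithms by spectral calculus (the random matrices below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

def matlog(A):
    """Logarithm of a positive definite matrix via its spectral decomposition."""
    lam, V = np.linalg.eigh(A)
    return V @ np.diag(np.log(lam)) @ V.conj().T

def rel_entropy(rho, sigma):
    """S(rho || sigma) = tr(rho (ln rho - ln sigma)), Eq. (48)."""
    return float(np.trace(rho @ (matlog(rho) - matlog(sigma))).real)

def random_density(n=3):
    """A random full-rank density matrix (illustrative)."""
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    R = M @ M.conj().T + 1e-3 * np.eye(n)      # positive definite
    return R / np.trace(R).real

rho, sigma = random_density(), random_density()
assert rel_entropy(rho, sigma) >= 0.0          # positivity, Eq. (49)
assert abs(rel_entropy(rho, rho)) < 1e-10      # "=" when rho = sigma
```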

The joint convexity of the relative entropy S(ρ‖σ) in ρ and σ is a fairly deep property that we do not prove here. Instead, we show that the von Neumann entropy S(ρ) is a concave functional of ρ. Let ρ = pρ₁ + (1 − p)ρ₂. We apply Klein's inequality (50) twice, with the following choices:

• B₁ := ρ₁, A := ρ

• B₂ := ρ₂, A := ρ

Taking a convex combination of the two resulting inequalities, we find that

  p tr(ρ₁ ln(ρ₁)) + (1 − p) tr(ρ₂ ln(ρ₂))
   ≥ tr(ρ ln(ρ)) + p(1 − p) tr((ρ₁ − ρ₂)[ln(ρ) + 1]) + (1 − p)p tr((ρ₂ − ρ₁)[ln(ρ) + 1])
   = tr(ρ ln(ρ)),   (54)

which completes the proof of concavity of S(ρ) = −tr(ρ ln(ρ)). □

For deep and sophisticated entropy inequalities we refer the reader to [10, 11].


References

[1] E. H. Lieb and J. L. Lebowitz, The Constitution of Matter: Existence of Thermodynamics for Systems Composed of Electrons and Nuclei, Adv. Math. (1972), 316–398.

[2] R. Jost, Das Carnotsche Prinzip, die absolute Temperatur und die Entropie, in: "Das Märchen vom Elfenbeinernen Turm – Reden und Aufsätze", pp. 183–188, K. Hepp, W. Hunziker and W. Kohn (editors), Berlin, Heidelberg: Springer-Verlag 1995.

[3] M. Flato, A. Lichnérowicz and D. Sternheimer, Déformations 1-différentiables des algèbres de Lie attachées à une variété symplectique ou de contact, Compositio Math. 31 (1975), 47–82; F. Bayen, M. Flato, C. Fronsdal, A. Lichnérowicz and D. Sternheimer, Deformation Theory and Quantization I–II, Annals of Physics (NY) 111 (1978), 61–110 and 111–151.

[4] M. Gerstenhaber, The cohomology structure of an associative ring, Ann. Math. 78 (1963), 267–288.

[5] M. Kontsevich, Deformation Quantization of Poisson Manifolds, Lett. Math. Phys. 66 (2003), 157–216; (see also ref. [15]).

[6] J. Fröhlich, A. Knowles and A. Pizzo, Atomism and Quantization, J. Phys. A 40 (2006), 3033–3045; J. Fröhlich, A. Knowles and S. Schwarz, On the Mean-Field Limit of Bosons with Coulomb Two-Body Interaction, Commun. Math. Phys. 288 (2009), 1023–1059.

[7] J. Fröhlich, M. Merkli and D. Ueltschi, Dissipative Transport: Thermal Contacts and Tunneling Junctions, Ann. H. Poincaré 4 (2003), 897–945; C.-A. Pillet and V. Jakšić, Non-equilibrium Steady States of Finite Quantum Systems Coupled to Thermal Reservoirs, Commun. Math. Phys. 226 (2002), 131–162.

[8] W. K. Abou Salem and J. Fröhlich, Status of the Fundamental Laws of Thermodynamics, J. Stat. Phys. 126 (2007), 1045–1068.

[9] J. Fröhlich, M. Merkli, S. Schwarz and D. Ueltschi, Statistical Mechanics of Thermodynamic Processes, in: "A Garden of Quanta" (Essays in honor of Hiroshi Ezawa), J. Arafune et al. (editors), London, Singapore and Hong Kong: World Scientific Publ. 2003; W. K. Abou Salem and J. Fröhlich, Cyclic Thermodynamic Processes and Entropy Production, J. Stat. Phys. 126 (2006), 431–466.

[10] E. H. Lieb and M. B. Ruskai, A Fundamental Property of Quantum-Mechanical Entropy, Phys. Rev. Lett. 30 (1973), 434–436; Proof of Strong Subadditivity of Quantum-Mechanical Entropy, J. Math. Phys. 14 (1973), 1938–1941.

[11] D. Sutter, M. Berta and M. Tomamichel, Multivariate Trace Inequalities, arXiv:1604.03023v2 [math-ph], 30 Apr 2016.

[12] O. Bratteli and D. W. Robinson, Operator Algebras and Quantum Statistical Mechanics I, New York: Springer-Verlag 1979; Operator Algebras and Quantum Statistical Mechanics II, New York: Springer-Verlag 1981.

[13] W. Braun and K. Hepp, The Vlasov dynamics and its fluctuations in the 1/N limit of interacting classical particles, Commun. Math. Phys. 56 (1977), 101–113.

[14] N. M. J. Woodhouse, Geometric Quantization, Oxford Mathematical Monographs, Oxford: Clarendon Press 1991; A. A. Kirillov, Geometric Quantization, in: "Dynamical Systems IV: Symplectic Geometry and its Applications", V. I. Arnol'd and S. P. Novikov (editors), Berlin, Heidelberg: Springer-Verlag 1990.

[15] A. S. Cattaneo and G. Felder, A Path Integral Approach to the Kontsevich Quantization Formula, Commun. Math. Phys. 212 (2000), 591–611, and references given there.

[16] C. Mouhot and C. Villani, On Landau Damping, J. Math. Phys. 51 (2010), 015204.

[17] L. Erdős, B. Schlein and H.-T. Yau, Derivation of the cubic non-linear Schrödinger equation from quantum dynamics of many-body systems, Inventiones Math. 167 (2007), 515–614; Derivation of the Gross-Pitaevskii equation for the dynamics of Bose-Einstein condensates, Ann. Math. (2) 172 (2010), 291–370; P. Pickl, Derivation of the Time Dependent Gross-Pitaevskii Equation with External Fields, Rev. Math. Phys. 27 (2015), 1550003; A. Knowles and P. Pickl, Mean-Field Dynamics: Singular Potentials and Rate of Convergence, Commun. Math. Phys. 298 (2010), 101–139.

[18] J. Fröhlich, T.-P. Tsai and H.-T. Yau, On a Classical Limit of Quantum Theory and the Non-Linear Hartree Equation, GAFA – Special Volume GAFA 2000 (2000), 57–78; On the Point-Particle (Newtonian) Limit of the Non-Linear Hartree Equation, Commun. Math. Phys. 225 (2002), 223–274; J. Fröhlich, B. L. G. Jonsson and E. Lenzmann, Effective Dynamics for Boson Stars, Nonlinearity 32 (2007), 1031–1075; W. K. Abou Salem, J. Fröhlich and I. M. Sigal, Colliding Solitons for the Nonlinear Schrödinger Equation, Commun. Math. Phys. 291 (2009), 151–176.

[19] W. H. Aschbacher, Large systems of non-relativistic Bosons and the Hartree equation, Diss. ETH Zürich No. 14135, Zürich, April 2001; W. H. Aschbacher, Fully discrete Galerkin schemes for the nonlinear and nonlocal Hartree equation, Electron. J. Diff. Eqns. 12 (2009), 1–22.

[20] E. B. Davies, Quantum Theory of Open Systems, New York: Academic Press 1976.

[21] W. De Roeck and J. Fröhlich, Diffusion of a Massive Quantum Particle Coupled to a Quasi-Free Thermal Medium, Commun. Math. Phys. 303 (2011), 613–707.

[22] D. Buchholz and J. E. Roberts, New Light on Infrared Problems: Sectors, Statistics and Spectrum, Commun. Math. Phys. 330 (2014), 935–972; D. Buchholz, Collision Theory for Massless Bosons, Commun. Math. Phys. 52 (1977), 147–173.

[23] B. Schubnel, Mathematical Results on the Foundations of Quantum Mechanics, Diss. ETH Zürich No. 22382, Zürich, December 2014.

[24] J. Fröhlich and B. Schubnel, The Preparation of States in Quantum Mechanics, J. Math. Phys. 57 (2016), 042101.

[25] J. Fröhlich and B. Schubnel, Quantum Probability Theory and the Foundations of Quantum Mechanics, in: "The Message of Quantum Science – Attempts Towards a Synthesis", Ph. Blanchard and J. Fröhlich (editors), Lecture Notes in Physics 899, Berlin, Heidelberg, New York: Springer-Verlag 2015; Ph. Blanchard, J. Fröhlich and B. Schubnel, A "Garden of Forking Paths" – the Quantum Mechanics of Histories of Events, Nuclear Physics B (in press), available online 12 April 2016; J. Fröhlich, paper in preparation.

Jürg Fröhlich, Theoretical Physics, HIT K42.3, ETH Zurich, CH-8093 Zurich, Switzerland

E-mail: juerg@phys.ethz.ch