Feynman's Time Machine
Jean Louis Van Belle, Drs, MAEc, BAEc, BPhil
30 June 2020
This paper discusses Feynman’s famous derivation of the Hamiltonian matrix in his equally famous
Caltech Lectures on Quantum Mechanics. We use Feynman’s argument because it is very illustrative of
the mainstream interpretation of what probability amplitudes may or may not represent. We refer to
the argument as Feynman’s Time Machine argument because the “apparatus” that is considered in the
derivation is, effectively, the mere passage of time.
We show Feynman's argument is ingenious but, at the same time, very deceptive. Indeed, the
substitution (for what Feynman refers to as "historical and other reasons") of real-valued coefficients
(K) by pure imaginary numbers (−iH/ħ) effectively introduces the periodic functions (complex-valued
exponentials) that are needed to obtain sensible probability functions. The division by Planck’s quantum
of action also amounts to an insertion of the Planck-Einstein relation through the backdoor. The
argument is, therefore, typical of similar quantum-mechanical arguments: one only gets out what was
already implicit or explicit in the assumptions. The implication is that two-state systems can be
described perfectly well using classical mechanics, i.e. without using the concepts of state vectors and
probability amplitudes.
This paper, therefore, complements earlier logical deconstructions of some of Feynman’s arguments,
most notably his argument on 720-degree symmetries (which we referred to as "the double life of −1")
as well as the reasoning behind the establishment of the boson-fermion dichotomy. This paper may,
therefore, conclude our classical or realist interpretation of quantum mechanics.
Table of Contents
Introduction
The maser as a two-state system
    The two states of the ammonia molecule
    The state concept
    What is the reference frame?
    Potential wells and tunneling
    Modeling uncertainty
Feynman's Time Machine
What is it that we want to calculate?
Conclusions
Annex: Amplitude math rules explained
Introduction

To explain what a probability amplitude might actually be, one has to get into the specifics of the
situation: explaining how a maser or a laser might work as opposed to, say, having a look at the
polarization states of a photon are two very different endeavors. However, despite the very different
physicality of these systems, they allow for a similar approach in terms of their quantum-mechanical
description. The question is: why is that so?
The answer is: both phenomena involve periodicity and regularity (some oscillation, in other words)
which can be logically represented by the same mathematical functions: a sinusoid or (what Nature
seems to prefer) a combination of a sine and a cosine, i.e. an oscillation in two dimensions rather than
one only, which is Euler's a·e^(iθ) = a·(cosθ + i·sinθ) function.
The frequency of these oscillations is given by the Planck-Einstein relation: f = E/h. We should note the
Planck-Einstein relation also gives us what we refer to as the natural unit for the system: its period T =
1/f = h/E.
When analyzing a maser or a laser, the energy E will be some energy difference between two states. This
energy difference is measured with reference to some average energy (E0) and is, therefore, usually
written as 2A. The period of the oscillation is, therefore, given by:

T = 1/f = h/E = h/2A
In order to ensure the probabilities slosh back and forth the way they are supposed to (which is as
continuous functions ranging between 0 and 1) we can represent the probabilities of being in one or the
other state (P1 and P2) as squared cosine and sine functions: P1 = cos²(A·t/ħ) and P2 = sin²(A·t/ħ). The
periodicity of these functions is effectively equal to π when measuring time in units of ħ/A (Figure 1)
and they also respect the normalization condition (0 ≤ P ≤ 1). Most importantly, Pythagoras' Theorem
(or basic trigonometry, we would say) also ensures they respect the rule that the probabilities must
always add up to 1:

P1 + P2 = cos²(A·t/ħ) + sin²(A·t/ħ) = 1
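For the reader who wants to verify these conditions numerically, here is a minimal sketch in Python (our own illustration, not part of Feynman's or the paper's argument), with time measured in the ħ/A unit discussed in the footnotes below:

```python
import numpy as np

# Time measured in units of hbar/A (the "angular time period" discussed below),
# so the argument of the cosine and sine is simply t.
t = np.linspace(0.0, 2.0 * np.pi, 1001)
P1 = np.cos(t) ** 2   # probability of finding the system in state |1>
P2 = np.sin(t) ** 2   # probability of finding the system in state |2>

assert np.allclose(P1 + P2, 1.0)                 # normalization: P1 + P2 = 1
assert np.allclose(np.cos(t + np.pi) ** 2, P1)   # periodicity: the period is pi
```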
The examples here (laser/maser and polarization of photons) are examples of two-state systems. However, our
analysis will be valid for, or generalizable to, n-state systems. A priori, the analysis should, therefore, also be valid
for n → ∞ (i.e. for wavefunctions).
The sine and cosine are the same function but with a phase difference of 90 degrees. We, therefore,
may think of some kind of perpetuum mobile: two oscillations working in tandem and transferring (potential
and/or kinetic) energy to and from each other. We developed this metaphor in one of our very first papers which, if
only because of its naïve simplicity, we may still recommend.
The concept of an angular time period (1/ω = ħ/A = T/2π), i.e. the time per radian of the oscillation, is not in
common use but would actually be useful here: we will, in fact, use it as the time unit in the graph of the probabilities.
The probability functions oscillate at an angular frequency ω = 2A/ħ, which is where the factor 2 comes from: cos²(A·t/ħ) = (1 + cos(2A·t/ħ))/2 = (1 + cos(ω·t))/2.
The ħ/A time unit is an angular time period (1/ω = ħ/A = T/2π): see footnote 3.
Figure 1: The probability functions for a two-state system
In fact, we do not see any other functional forms which would respect the above-mentioned conditions
for meaningfulness in the context of defining probabilities.
We will now show how Feynman smuggles, so to speak, all of these functions and conditions into his
argument when introducing the concept of probability amplitudes and constructing the Hamiltonian matrix.
The maser as a two-state system
The two states of the ammonia molecule
While Feynman presents a general argument, he uses the maser as an example so as to focus ideas. We
will, therefore, do the same.
The ammonia maser is one of the very first practical applications of the theory of quantum mechanics. It
was built in the early 1950s (remember Feynman wrote his Lectures in the early 1960s) and its inventor,
Charles Townes, wanted the m in maser to refer to molecular. The mechanism is similar to that of a
laser, which was invented a few years later: the a, s, e, r in maser effectively refer to the same as in laser
(amplification by stimulated emission of radiation).
However, instead of electromagnetic waves in the frequency spectrum of (visible) light, a maser
produces microwave, radiowave or infrared frequencies. The latter are associated with lower energies,
which correspond to the smaller differences between the energies that are associated with the position
of the nitrogen atom in the ammonia (NH3) molecule. The idea of the state may, therefore, be identified
with the idea of the position of the nitrogen atom in the ammonia molecule (Figure 2).
In case you wonder what an electric field actually is: we mean an electrostatic field, which originates from static
charges, as opposed to a magnetic field, which originates from moving charges.
Figure 2: Ammonia molecules with opposite dipole moments in an electrostatic field
The state concept
Figure 2 clearly shows position states |1⟩ and |2⟩ have nothing to do with the spin state of the molecule
as a whole: that is the same in the right- and left-hand side illustrations, as shown by the rotation arrow
around the symmetry axis of this molecule. There is no spin flip here or anything similar, and one
should also not think that this NH3 molecule goes from state |1⟩ to |2⟩, or vice versa, by flipping over as a
whole, i.e. by changing its orientation in space.
No! What happens here is that the nitrogen atom (N), somehow, manages to tunnel through the plane
that is formed by the three hydrogen atoms (H3). We will come back to this. Before we do so, we should
note that we have not introduced much quantum-mechanical symbolism yet, so let us quickly do so here.
The |1⟩ and |2⟩ notations represent physical base states here. The |ϕ⟩ notation is known as the ket in
Dirac's bra-ket notation and always refers to some initial state that may or may not change. In contrast,
the ⟨χ| notation is a bra-state and refers to some final state. These initial and final states are separated
by time (states may change as the clock keeps ticking without us intervening in any way) or,
alternatively, because we put the particle through some apparatus, process, or force field, which we
may denote by A or S. We may, therefore, say some apparatus or process will operate on some (initial)
state |ϕ⟩ to produce some (end) state ⟨χ|. This is written in a way that will be familiar to the reader:

⟨χ|A|ϕ⟩
We gratefully acknowledge the online edition of Feynman’s Lectures for this illustration.
As Feynman puts it, we assume all vibrational and rotational modes are exactly the same in the two states.
These changes are, in fact, at the heart of Feynman's argument, as we will see in a moment.
Note one needs to read this from right to left, like Arabic or Hebrew. We are not sure why Dirac chose this
reverse order. It adds to the magic, of course, but it is a convention only.
Because this looks quite formidable, we should give a practical example as part of our discussion of the
ammonia maser: if the electric field (the Ɛ in the illustration) is very strong or if it is being applied
long enough, then a molecule in the |1⟩ state will go into the |2⟩ state so as to ensure the electric dipole
moment of the ammonia molecule (μ) is aligned with the electric field. This is all quite logical because
the energy of the ammonia molecule as a whole will be lower if and when it can align its dipole moment
with the field.
What is the reference frame?
We should note that the notion of an energy difference between the two states can only be defined with
reference to some external field: we can say that the NH3 molecule has more energy in state |1⟩ than in
state |2⟩ because its polarity in state |1⟩ opposes the field. We may, therefore, say that the external field
establishes the frame of reference: what is up or down, left or right, and back or front can, effectively,
only be defined with reference to this externally applied field. This may seem to be a trivial
philosophical remark, but physicists sometimes seem to lose sight of this when doing more complicated
abstract mathematical calculations.
We also need to make another philosophical remark here: are we talking about the dipole moment of the
molecule or of the nitrogen atom? It is an electric dipole moment, so it must be the dipole moment of the
molecule, right? Atoms may have a magnetic moment, but they would not have an electric moment.

The answer is: yes, and no. Something must cause the ammonia molecule to be polar and that
something is the configuration of the system: nitrogen has 7 electrons, which are shared with the
hydrogen nuclei in covalent bonds. A covalent bond blurs the idea of an electron belonging to one atom
only. One may think of it like this: the valence electrons allow the hydrogen atoms to also (partly) fill their
shells with paired electrons.
We usually use E for an electric field but we use the Ɛ symbol here so as to ensure there is no confusion with the
E that is used to denote energy.
Notation is tricky once again because we use the same symbol to refer to a magnetic moment in another
context. However, we trust the reader is smart enough to know what is what here.
The reader may think this electric field has the same axis of symmetry as the NH3 molecule and that we may,
therefore, not be able to distinguish left from right or vice versa. However, this problem is solved because it is
assumed we have knowledge of the spin direction (see the rotation arrow in Figure 2). We also know what is back
and front because we are doing this experiment and we, therefore, have some idea of our own relative position
vis-à-vis the electric field and the ammonia molecule. In short, we may say that the experiment as a whole comes
with the relevant frame of reference for the measurement of position, energy and whatever other physical
property or quantity we would want to observe here.
All atoms with an odd number of electrons have a magnetic moment because electrons in a pair (remember
the standard configuration of an electron orbital has two electrons) will have opposite spins. The silver atoms which
Otto Stern and Walther Gerlach sent through their apparatus in 1922, for example, have 47 electrons. It is
interesting to note that a similar line-up happens if we consider the nucleus alone: when applying an external
magnetic field, pairs of nucleons will line up so as to lower the joint energy of the system.
Figure 3: The charge distribution in an ammonia molecule
We will let the reader google more details of the structure of this system. At this point, the reader
should just note that an analysis in terms of individual atoms is not all that useful: the ideas of positively
charged nuclei and electron densities are far more relevant than the idea of an individual nitrogen atom
flipping through some potential barrier, although the latter idea is what we are going to be talking
about, of course!
We will not dwell on this. Just remember this when you are getting confused or if we would happen to
be using non-specific language ourselves: we are talking about the state of the ammonia molecule (or the
molecular system, we should say) but this state (in this discussion, at least) is determined by the
relative position of the nitrogen atom.
Potential wells and tunneling
If there is an energy difference between state |1⟩ and state |2⟩, then how can we explain that the nitrogen
atom tends to stay where it is? How is that possible? The reader will be familiar with the concept of a
potential well (if not, google it) and should, therefore, note that the potential energy of the
N atom will effectively be higher in state |1⟩ than in state |2⟩ but, because of the energy barrier (the wall
of the potential well), it will tend to stay where it is, as opposed to lowering its energy by shifting to the
other position, which is a potential well itself!
Of course, one needs to read all of the above carefully: we wrote that the nitrogen atom will tend to
stay where it is. From time to time, it does tunnel through. The question now becomes: when and how
does it do that? That is a bit of a mystery, but one should think of it in terms of dynamics. We modeled
particles as charges in motion.
Hence, we think of an atom as a dynamic system consisting of a bunch
of elementary (electric) charges. These atoms, therefore, generate an equally dynamic electromagnetic
We gratefully acknowledge the source of this illustration: the virtual Elmhurst College Chemistry Book, Charles H.
Ophardt, 2003.
There are various ways to look at it. The Chembook illustration shows a lone electron pair, but you should note
the nitrogen atom also wants fully filled (sub)shells. Its 1s and 2s subshells have two electrons each, but the three
2p orbitals each lack one electron, and the 1s orbitals of the three hydrogen atoms lack one too. We, therefore, have five
valence electrons. The nitty-gritty of the charge distribution is, therefore, quite complicated.
This inevitably happens when getting into quantum-mechanical descriptions so we will not apologize for it.
See our previous papers.
field structure. We, therefore, have some lattice structure that arises not from the mere presence of the
charges inside alone but also from their pattern of motion.
Can we model this? Feynman did not think this was possible. In contrast, we believe recent work on
this is rather promising, but we must admit it has not been done yet: it is, effectively, a rather
complicated matter and, as mentioned, work on this has actually just started! We will, therefore, not
dwell on this either: you should do your PhD on it!
The point is this: one should take a dynamic view of the fields surrounding charged particles. Potential
barriers (and their corollary: potential wells) should, therefore, not be thought of as static fields: they
vary in time. They result from two or more charges that are moving around and thereby create some joint or
superposed field which varies in time. Hence, a particle breaking through a 'potential wall' or coming out
of a potential 'well' is just using some temporary opening corresponding to a very classical trajectory in
space and in time.
There is, therefore, no need to invoke some metaphysical Uncertainty Principle: we may not know the
detail of what is going on, but we should be able to model it using classical mechanics!
Modeling uncertainty
The reader should, once again, note that the spin state (or angular momentum state) is the same in the
|1⟩ and |2⟩ states. Hence, the only uncertainty we have here is in regard to the position of the nitrogen
atom (N) vis-à-vis the plane that is formed by the three hydrogen atoms (H3). As long as we do not
actually investigate, we cannot know in what state this nitrogen atom, or the molecule as a whole,
actually is.
Paraphrasing Wittgenstein, we can say our theory can only tell us what might be the case: it is only
some measurement that can establish what actually is the case. We can, of course, also prepare the
NH3 molecule by polarizing it in a strong-enough electric field. However, in either case, we will, of
course, disturb the system and, by doing so, put it in some new state.
We do not want to do that. Instead, we will try to model our uncertainty in regard to the position of the
You should also do some thinking on the concept of charge densities here: the different charge densities inside
of the ammonia molecule do not result from a static charge distribution but arise because the negative charges inside
(pointlike or not) spend more time here than there, or vice versa.
We will soon quote his remarks on this, verbatim, so be patient for the time being!
In case you would want to have an idea of the kind of mathematical techniques that are needed for this, we
hereby refer you to a recent book on what is referred to as nuclear lattice effective field theory (NLEFT).
We refer to Wittgenstein's theses in his Tractatus Logico-Philosophicus, which our reader may or (more likely)
may not be familiar with.
Of course, investigation may be useless because our measurement methods may disturb the system and,
therefore, force it into one of the two states. We are probing one of the smallest of small things here, so not
disturbing it will not be easy: measurement may, therefore, not be feasible from a practical point of view!
nitrogen atom, in the absence of a measurement or polarization, by thinking of it in very much the same
way as we think of the proverbial cat in the equally proverbial Schrödinger box: because we do not know
if it is dead or alive, we can only associate some abstract logical state with it: a combination of being
dead and alive which exists in our mind only.
Fortunately, the state of the ammonia molecule is much less dramatic or critical than that of Schrödinger's
cat, and we will simply write it as:

|ϕ⟩ = C1·|1⟩ + C2·|2⟩
This looks like a very simple formula, but what we are doing here is actually quite revolutionary.
1. The |1⟩ and |2⟩ states are (logical) representations of what we think of as a physical state: they are
possible realities, or real possibilities, whatever term one would want to invent for it. When using them
in a mathematical equation like this, we will think of them as state vectors.
There is a lot of mathematical magic here, and so one should wonder: what kind of vectors are we
talking about? Mathematicians refer to them as Hilbert vectors, and Figure 4 shows why Schrödinger
liked them so much: whatever they might represent, we can effectively add and multiply them.
Figure 4: Adding cats or states? Adding them dead, alive, or in-between?
Some kind of quantum leap, we might say, but that would probably confuse the reader.
This is actually incorrect: they are referred to as being vectors in a Hilbert space. It depends on what you think of
as being special: we think it is the vectors, rather than the space, so we add Hilbert's name to the vectors rather
than the space. In case you wonder: David Hilbert was not English. He was German. He died in 1943 and his tomb
carries these words: Wir müssen wissen. Wir werden wissen, which we may translate as: "We must know. We will know."
We saw this cartoon on MathExchange, which references AbstruseGoose as the source. The date on this cartoon
(1935) is somewhat weird: Paul A.M. Dirac published the first edition of his Principles of Quantum Mechanics in
1930. It may also be mentioned that, while the cat seems to be Schrödinger's alright (the man who puts the cat in
the box wears Schrödinger's glasses), the bra-ket notation was invented by Dirac. Schrödinger's seminal paper for
the 1927 Solvay Conference (La Mécanique des Ondes) makes use of wave functions only. One of the reasons we
like Feynman's Lectures on Quantum Mechanics is his going from discrete states (mostly two-state systems) to
then generalizing to an infinite number of discrete states, what, in practice, amounts to continuous states, which are
It is really like adding apples and oranges. What do you get when you do that? Some fruit, right? So
we will talk about fruit, but we should not forget the fruit consists of apples and oranges: that is the fruit
menu of today, in any case (we might get grapes and bananas tomorrow).
The point is this: logic or logical states may be fuzzy, but physical states are not: the fruit is an apple or
an orange, not something in-between. Likewise, the nitrogen nucleus is either here or there, not
somewhere in-between.
2. So where are we? Yes. We were talking about physical states. We multiply these with C1 and C2 in the |ϕ⟩ =
C1·|1⟩ + C2·|2⟩ formula: C1 and C2 are complex numbers (or complex functions, to be precise). Of course,
because we are multiplying them with these state vectors, one may want to think of them as vectors
too. That is not so difficult: complex numbers have a direction and a magnitude, so it is easy to think of
them as vectors alright!
So what happens when we multiply apples or oranges with some number? We get two apples, or half an
orange. It depends on the fruit and the number. But here we multiply with some complex number.
That is hard to visualize: we know a complex number includes the idea of an orientation in space (a
complex number is defined by its length and its direction in space) but this idea does not help us very
much here. What does help is to think about what we are doing here, logically speaking, that is: we are
using two discrete physical states to produce some new logical state which is defined by two complex-
valued coefficients or, to be more precise, complex-valued functions. These functions will be well-
behaved continuous functions.
Functions of what? Functions of time! To be precise, we will equate both of them with a complex-valued
exponential function whose general shape is C = a·e^(i·ω·t).
One should note that all of these assumptions, which Feynman introduces rather casually, are not
innocent: at this point, we are swapping the physics of the situation for some mathematical or logical
representation of what might or might not be going on. If that is uncertainty, then it is our
uncertainty, not Nature's! Hence, this |ϕ⟩ state, which is the sum of the C1·|1⟩ and C2·|2⟩ states, is not
a physical but a logical state: it exists in our mind only. Why in our mind only? Because we are not
trying to measure anything, so we are in a state of uncertainty ourselves: we think of some fruit but we
are not being specific: we are not talking apples or oranges here.
modeled by wave mechanics, as opposed to matrix mechanics. It, therefore, bridges the two approaches, which
complement each other, of course!
At least not in a time interval that would be sufficiently large to be relevant! One should think of the time in-
between states as being too short to measure!
We prefer such visualization or conceptualization to the idea of complex numbers being two-dimensional
numbers. That is correct too, of course, but perhaps not so easy to visualize.
You may think we should distinguish a third physical state: the state of our nitrogen atom while it is moving from
position 1 to position 2 or vice versa. However, we assume this happens so quickly that the time that is spent in
this state is negligible. We think the state itself is, therefore, negligible.
We are not talking about an apple-orange smoothie either!
Let us stop the philosophy here: let us now present Feynman's derivation of the Hamiltonian, which we
refer to as his Time Machine argument, for reasons which will soon be clear.
Feynman’s Time Machine
The objective of Feynman's rather convoluted argument is to calculate those C1 and C2 coefficients in
the |ϕ⟩ = C1·|1⟩ + C2·|2⟩ formula. These coefficients are all that matters now: we no longer care about
how we can possibly represent the physical base states.
As mentioned above, we will equate both C1 and C2 with a complex-valued exponential function whose
general shape is C = a·e^(i·ω·t). Feynman calls them 'trial' solutions to the set of differential equations he will
develop, but that should not mask the ruse: Feynman imposes these functional shapes in his argument.
Why does he do that? It is because the derivative of a complex exponential (and of a real-valued
exponential too, of course!) is an exponential function itself! So they make sense. Of course they do:
there are actually no other solutions to the set of Hamiltonian equations we will derive, so it all comes
as a package! Let us show how it works.
We write those coefficients C1 and C2 as functions of time, so we write them as C1(t) and C2(t). We will
also have time derivatives dC1(t)/dt and dC2(t)/dt. So far, so good. Now we get to the meat of the
matter: Feynman's lecture on how states change with time makes for a great but rather
complicated abstract logical argument, which involves time as an apparatus. Feynman sums this up as follows:
We have already talked about how we can represent a situation in which we put something
through an apparatus. Now one convenient, delightful “apparatus” to consider is merely a wait
of a few minutes; that is, you prepare a state ϕ, and then before you analyze it, you just let it sit.
Perhaps you let it sit in some particular electric or magnetic fieldit depends on the physical
circumstances in the world. At any rate, whatever the conditions are, you let the object sit from
time t1 to time t2.
After this introduction follow one or two pages of theory, in which Feynman introduces Uij = ⟨i|U|j⟩
coefficients to describe the system (he does it for an n-state system, so we have states i or j = 1, 2, 3, …, n).
These represent Feynman's 'time apparatus': the state may remain the same or go into another state as
time passes by, and so that is what the n×n matrix, operator, process, or whatever one would want to call
it, with the coefficients Uij, describes.
Now, we have all of the coefficients Ci that describe the amplitude to be in state i. These are functions of
Feynman makes the point quite explicitly by moving to another set of base states, which he denotes as I and II,
as opposed to 1 and 2. These new base states are pure logical base states: they are also orthogonal and also
observe other mathematical conditions so as to make sure we get the same well-behaved probability functions we
want to get.
We significantly abbreviate the argument here because we think Feynman makes it longer than it should be:
Feynman’s Lectures on Quantum Mechanics, Chapter 8, section 4. The extra whistles and bells in Feynman’s
argument probably serve to divert the reader’s attention away from the various deus ex machina moves which, in
sharp contrast to the sidekicks, remain largely unexplained.
time, and so we should think of their time derivatives. Feynman thinks of the time derivatives in terms
of (infinitesimally small) differentials and, hence, writing something like this effectively makes sense:

Ci(t + Δt) = ∑j Uij(t + Δt, t)·Cj(t)
The Uij(t + Δt, t) element is a differential itself, and it is, obviously, a function of both t and Δt:

1. If Δt is equal to 0, no time passes by and the system will just be in the same state: the state is
just the same state as the previous state. Why? Because there is no previous state here, really:
the previous and the current state are just the same.

2. If Δt is very small but non-zero, then there is some chance that the system may go from state i
to state j. Feynman models this by writing:
Uij(t + Δt, t) = Kij(t)·Δt (for i ≠ j)
Feynman introduces yet another coefficient here: Kij. Make no mistake about it: Kij is a real-valued
proportionality coefficient. It is just as real-valued as Δt and, therefore, as Uij!
Of course, we should, somehow, incorporate the fact that, for very small Δt, the system is more likely to
remain in the same state than to change. Feynman models this by introducing the Kronecker delta
function. This all sounds and looks formidable but you will (hopefully) see the logic if you think about it
for a while:

Uij(t + Δt, t) = δij + Kij(t)·Δt
The idea behind this formula is pretty much the same as that of using the first-order derivative for a
linear (first-order) approximation of the value of a function f(x0 + Δx):

f(x0 + Δx) ≈ f(x0) + f′(x0)·Δx
This is illustrated below (Figure 5). Feynman obviously uses Kronecker's δij function to substitute for the
function f in the formulas above, and so we should relate this to the probabilities. Indeed, the system is
much more likely to have stayed in the same state (as opposed to going through a state change) if Δt is
very small (probability close to 1), but more likely to change if more and more time goes by, so the
probability to stay in the same state then goes down.
The differential equations are, obviously, right around the corner now.
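Before moving on, a minimal numerical sketch (our own, not Feynman's) may help to see why the real- or imaginary-valued character of these Kij coefficients matters so much. We step a single state forward with C(t + Δt) = (1 + K·Δt)·C(t), once with a real K and once with Feynman's K = −iH/ħ (units such that ħ = H = 1):

```python
# Forward-Euler stepping C(t + dt) = (1 + K*dt)*C(t) for a single state,
# contrasting a real-valued K with Feynman's substitution K = -i*H/hbar
# (units such that hbar = 1 and H = 1; an illustration only).
dt, n = 0.001, 10_000
C_real = 1.0 + 0j   # evolves with a real coefficient K = -1
C_imag = 1.0 + 0j   # evolves with an imaginary coefficient K = -i
for _ in range(n):
    C_real *= (1.0 - dt)       # real K: plain exponential decay, ~exp(-t)
    C_imag *= (1.0 - 1j * dt)  # imaginary K: a rotation in the complex plane, ~exp(-i*t)

print(abs(C_real) ** 2)  # ~2e-9: decays to zero, useless as a probability
print(abs(C_imag) ** 2)  # ~1: a periodic phase factor of constant magnitude
```

A real-valued K can only produce exponential growth or decay; it is the −i/ħ substitution, discussed below, that turns the solutions into periodic functions of constant magnitude.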
Feynman carefully avoids any discussion as to whether we should think of the Uij coefficients as being real- or
complex-valued, and for good reasons: there is effectively no reason whatsoever to assume they should be complex-valued.
Figure 5: A first-order approximation of a function
We will not dwell too much on this, not because we do not want to but because you have to think all of
this through for yourself in order to understand what we are writing here. Just think about that
proportionality with time:

Uii(t + Δt, t) = 1 + Kii·Δt (i = j)
Uij(t + Δt, t) = 0 + Kij·Δt = Kij·Δt (i ≠ j)
The question this triggers is really this: what are the relevant units here? We measure these 0 and 1
values in what unit, exactly? That question is answered by Feynman's grand deus ex machina move, and
that is to replace these Kij coefficients (simple real-valued proportionality coefficients) by taking a factor
−i/ħ "out of these coefficients": Kij = −(i/ħ)·Hij.
He writes he does so for "historical and other reasons" but, of course, this is the point at which he
actually uses the Planck-Einstein relation: why suddenly divide by ħ otherwise?
It is surely not an innocent operation: not only does it introduce Planck's constant (totally out of the
blue!) but it also inserts the imaginary unit (i) in equations which, by replacing the linear
approximation with proper functions, will turn into a set of differential equations. As mentioned, the
We did a few blog posts on this, but we should probably rewrite these to incorporate the more recent ideas we
develop in this paper here.
Kronecker's δij function: δij is equal to 1 if i = j, and equal to 0 if i ≠ j.
We quote from the above-mentioned lecture (chapter 8 of Volume III of Feynman's Lectures, which have been
made available online by Caltech).
He just says we should, of course, not confuse the imaginary unit i here with the index i. Jokes like this remind me
of one of the books that was written on him: "Surely You're Joking, Mr. Feynman!"
A sneak peek at the final solutions for our two-state system (the maser) tells us H11 = H22 = E0 and H12 = H21 = −A.
Needless to say, if we take the −i/ħ factor out of the Kij coefficients, we should also take it out of the 0 and 1
terms. Also note that E0 can be set to zero. It is just a matter of the reference point for the (potential) energy.
Mathematically, it amounts to shifting the origin of the energy axis. Just substitute and see what makes sense (or
not). One thing is for sure: there is a lot of hocus-pocus here, a lot of things that are implicit but are surely not
innocent or merely 'historical reasons' only.
implicit insertion of the Planck-Einstein relation also fixes the (time) unit, which is just the reciprocal of
the (angular) frequency A/ħ.
It, therefore, totally changes the character of their solutions: we will get the periodic functions we need,
so it works (of course it does), but it is plain illegal from a logical point of view. Again, we will not dwell
too long on this because we want the reader to think this through for himself. Hence, to make this
rather long story short, we just note that Feynman re-writes the above as:
Uij(t + Δt, t) = δij − (i/ħ)·Hij(t)·Δt
Re-inserting this expression into the very first equation, with some more hocus-pocus and re-arranging, then gives
the set of differential equations with the Hamiltonian coefficients that you were probably waiting for:
iħ·dCi(t)/dt = ∑j Hij(t)·Cj(t)
This is the set of differential equations Feynman then uses for the two-state system representing the
maser too. Indeed, for a two-state system, this is a set of two equations only:
iħ·dC1/dt = H11·C1 + H12·C2
iħ·dC2/dt = H21·C1 + H22·C2
These equations basically define the Hamiltonian coefficients Hij in terms of the average energy E0 and
the energy difference between the two states and this average:
H11 = H22 = E0
H12 = H21 = −A
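For the reader who wants to check this numerically, a small forward-Euler integration of these two equations (our own sketch, with ħ = 1, E0 = 0 and A = 1 as illustrative values) reproduces the cos²/sin² probabilities of Figure 1:

```python
import numpy as np

# Integrate i*hbar*dC/dt = H @ C for the two-state system, with
# hbar = 1, E0 = 0, A = 1 (illustrative values) and C1(0) = 1, C2(0) = 0.
hbar, E0, A = 1.0, 0.0, 1.0
H = np.array([[E0, -A], [-A, E0]], dtype=complex)
dt, n = 1e-4, 31_416                      # integrate up to t ~ pi
C = np.array([1.0, 0.0], dtype=complex)
for _ in range(n):
    C = C + dt * (-1j / hbar) * (H @ C)   # dC/dt = (-i/hbar)*H*C (forward Euler)

t = n * dt
print(abs(C[0]) ** 2, np.cos(A * t / hbar) ** 2)  # P1 ~ cos^2(A*t/hbar)
print(abs(C[1]) ** 2, np.sin(A * t / hbar) ** 2)  # P2 ~ sin^2(A*t/hbar)
```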
What this energy E0 (note that this average energy can be set to zero) and the energy difference A
actually mean in the context of the particular system which Feynman used as an example (the maser)
is illustrated below (Figure 6). It shows what happens to these energy levels in the presence of an
external electric field (Ɛ).
See footnote 3.
The hocus-pocus here is, however, significantly less suspicious than the deus ex machina move when doing the
mentioned substitution of coefficients!
See footnote 39.
Figure 6: Separation of energy states when applying an external field
Figure 6 shows we can actually not talk of separate energy states if no external field is being applied: the
energy of the ammonia molecule is just E0 and there is no such thing as a higher or a lower energy state.
In contrast, when an external field is being applied, we will have a higher or lower energy state
depending on the position of the nitrogen atom and, therefore, on its position state.
There is another thing we should mention here, something Feynman does not make very explicit
either: when the external field becomes somewhat stronger, the nitrogen atom will no longer equally
divide its time over positions 1 and 2: if possible at all, it will want to lower its energy permanently by
staying in the lower energy state. This is, effectively, how we can polarize the ammonia molecules in a
maser. Hence, the illustrations above are valid only for very small values of Ɛ0: if we apply a stronger
field, all ammonia molecules will align their dipole moment and stay aligned.
In any case, assuming we are applying a small enough field only (or no field at all) we can solve the
equations and calculate C1 and C2 as follows:

C1(t) = e^(−(i/ħ)·E0·t)·cos(A·t/ħ)
C2(t) = i·e^(−(i/ħ)·E0·t)·sin(A·t/ħ)
How did we calculate that? We did not: we refer to Feynman here. He introduces the mentioned so-
called 'trial' solutions, which are the solution, of course! The point is this: we can now take the
absolute square of these amplitudes to get the probabilities:
P1 = |C1|² = cos²(A·t/ħ) and P2 = |C2|² = sin²(A·t/ħ)
We gratefully acknowledge the online edition of Feynman’s Lectures for this illustration too.
Reference above: Feynman’s Lectures, Volume III, Chapter 8, pages 8-11 to 8-14.
Those are the probabilities shown in Figure 1. The probability of being in state 1 starts at one (as it
should), goes down to zero, and then oscillates back and forth between zero and one, as shown in the
P1 curve, and the P2 curve mirrors the P1 curve, so to speak. As mentioned also, it is quite obvious they
also respect the requirement that the sum of all probabilities must add up to 1: cos²θ + sin²θ = 1, always.
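Spelling out the absolute square for C1 makes this explicit (note how the E0 phase factor drops out, which is also why E0 can be set to zero without changing the probabilities):

```latex
P_1(t) = |C_1(t)|^2
       = \left[e^{-(i/\hbar)E_0 t}\cos(At/\hbar)\right]\cdot\left[e^{+(i/\hbar)E_0 t}\cos(At/\hbar)\right]
       = \cos^2(At/\hbar)
```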
Is that it? Yes. We have gone on long enough already and so we must conclude our paper here. We will do so by
asking the question we should have started with.
What is it that we want to calculate?
We wanted to calculate that cycle time πħ/A (or the related frequency), and so we did that. And then we
did not, of course, because all of the above uses an A = μƐ0 equation. We talked about the dipole
moment (μ), but not about Ɛ0. So how do we get Ɛ0? How do we calculate it?
The answer is: we do not calculate it. No one does. Its value must be related to the strength of the
external field Ɛ, but what field are we (or should we be) applying here? Feynman is rather vague about that,
but we get some kind of answer in his next lecture.
It turns out that, when actually operating an
ammonia maser, we will apply an electric field that varies sinusoidally with a frequency that is equal, or
very near, to the so-called resonant frequency of the molecular transition between the two states. That
field is:

Ɛ = 2Ɛ0·cos(ωt) = Ɛ0·(e^(iωt) + e^(−iωt)), with ω = ω0 = 2A/ħ
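As an order-of-magnitude illustration (our own; the splitting 2A is an experimental input here, not a calculated one), the Planck-Einstein relation converts the measured inversion splitting of ammonia (about 10⁻⁴ eV) into the well-known 24 GHz microwave line:

```python
# Resonant frequency f0 = 2A/h from the measured inversion splitting of NH3.
h  = 6.62607015e-34        # Planck's constant (J*s)
eV = 1.602176634e-19       # joule per electronvolt
two_A = 9.87e-5 * eV       # measured splitting of the ammonia ground state (~1e-4 eV)

f0 = two_A / h             # Planck-Einstein relation: f = E/h
print(f"f0 = {f0 / 1e9:.1f} GHz")  # ~23.9 GHz: the ammonia maser line
```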
The question now becomes: what is that resonant frequency? This is, effectively, a circular argument:
we define A in terms of μ and Ɛ0, and vice versa! In fact, we need to ask ourselves this: what determines
E0? There is no conclusive theoretical answer to that question: it is, apparently, just something we
measure experimentally. Indeed, at the very end of his argument, Feynman writes this
In the discussion up to this point, we have assumed values of E0 and A without knowing how to
calculate them. According to the correct physical theory, it should be possible to calculate these
constants in terms of the positions and motions of all the nuclei and electrons. But nobody has
ever done it. Such a system involves ten electrons and four nuclei and that’s just too
complicated a problem. As a matter of fact, there is no one who knows much more about this
molecule than we do. All anyone can say is that when there is an electric field, the energy of the
two states is different, the difference being proportional to the electric field. We have called the
coefficient of proportionality 2μ, but its value must be determined experimentally. We can also
say that the molecule has the amplitude A to flip over, but this will have to be measured
Chapter 9 of Vol. III, which deals with the ammonia maser specifically, as opposed to just mentioning what is
needed for this heuristic derivation of the Hamiltonian matrix (Chapter 8).
To be truthful, it is not at the very end of his exposé, but just quite late in the game (section 9-2), and what
follows does not give us anything more in terms of first principles.
experimentally. Nobody can give us accurate theoretical values of μ and A, because the
calculations are too complicated to do in detail.
This, then, amounts to admitting defeat: we cannot calculate what we wanted to calculate based on first
principles. Not a great success!
Conclusions

We solved many mysteries in this paper by highlighting the circularity (and/or plain deceit) in
Feynman's quantum-mechanical arguments, but we are still left with one question: why do we need to
take the (absolute) square of some complex-valued amplitude to get a probability?
Frankly, we would reverse that question: why and how can we calculate amplitudes by taking the square
root of the probabilities? Why does it all work out? Why is it that the amplitude math mirrors the
probability math? Why can we relate them through these squares or square roots when going from one
representation to another?
The answer to this question is buried in the math too, but it is based on simple arithmetic. Note, for
example, that, when insisting base states or state vectors should be orthogonal, we actually demand
that their squared sum is equal to the sum of their squares:

(a + b)² = a² + b² ⇔ a² + b² + 2a·b = a² + b² ⇔ a·b = 0
This is a logical or arithmetic condition which represents the physical condition: two physical states must
be discrete states. They do not overlap: it is either this or that. We can then add or multiply these
physical states (mix them) so as to produce logical states, which express the uncertainty in our mind
(not in Nature!) because these base states are, effectively, independent. That is why we can use them
to construct another set of (logical) base vectors, which will be (linearly) independent too! It is only
because of the physics behind them.
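The arithmetic is easy to verify with two base vectors standing in for the |1⟩ and |2⟩ base states (a minimal sketch):

```python
import numpy as np

a = np.array([1.0, 0.0])    # stands in for base state |1>
b = np.array([0.0, 1.0])    # stands in for base state |2>
assert np.dot(a, b) == 0.0  # orthogonality: a.b = 0

# Pythagoras: the squared sum equals the sum of the squares iff a.b = 0.
assert np.isclose(np.linalg.norm(a + b) ** 2,
                  np.linalg.norm(a) ** 2 + np.linalg.norm(b) ** 2)
```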
The more fundamental point is this, however: we can spare ourselves the trouble of calculating
amplitudes! We can, just as well, say that we are looking at some classical oscillation here and that, as
usual, we can use the Planck-Einstein relation to determine its frequency. The relevant energy to be
used is an energy difference, and the situation, therefore, resembles the energy difference between, say,
two electron orbitals in the Rutherford-Bohr model of an atom. The following equation is, therefore,
quite self-evident:
f = E/h = 2A/h
Such a simpler classical description does not need any ill-defined concepts such as state vectors and
probability amplitudes. Nor does it need convoluted arguments to calculate functions that have no real
physical meaning.

If there is something you remember from vector algebra, it should be this: one has to choose an unambiguous origin
for the vector space. The physicality of the situation we are modeling has a similar significance here. We elaborate
this point in the Annex to this paper.
Annex: Amplitude math rules explained
The most important point that we tried to make in this paper is that we need to be aware of the switch
that is made from discrete physical states to continuous logical states in the quantum-mechanical
description of phenomena. Such awareness, then, explains the quantum-mathematical rules for
probabilities and amplitudes. Following Richard Feynman, we may represent these rules as two related
or complementary sets. The first set of rules is more definitional or procedural than the other one,
although both are intimately related:
(i) The probability (P) is the square of the absolute value of the amplitude (φ): P = |φ|²

(ii) In quantum mechanics, we add or multiply probability amplitudes rather than probabilities:
P = |φ1 + φ2|² or, for successive events, P = |φ1·φ2|²
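A minimal numerical illustration of rule (ii), with two arbitrary amplitudes of our own choosing: adding amplitudes is not the same as adding probabilities, because the cross (interference) term survives:

```python
import numpy as np

phi1 = np.exp(0.3j) / np.sqrt(2)   # two arbitrary amplitudes of equal magnitude
phi2 = np.exp(1.1j) / np.sqrt(2)

P_quantum   = abs(phi1 + phi2) ** 2            # rule (ii): add the amplitudes
P_classical = abs(phi1) ** 2 + abs(phi2) ** 2  # adding the probabilities instead
print(P_quantum, P_classical)  # differ by the interference term 2*Re(phi1*conj(phi2))
```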
Probability amplitudes are complex-valued functions of time and involve the idea of a particle or a
system going from one state (i) to another (j). We write:

φ = ⟨j|i⟩
The latter notation is used to write down the second set of quantum-mechanical rules:
I. ⟨j|i⟩ = δij
II. ⟨χ|φ⟩ = ∑all i ⟨χ|i⟩⟨i|φ⟩
III. ⟨φ|χ⟩ = ⟨χ|φ⟩*
You probably know these rules from your physics course(s). You should not think of them as being
obscure. Here is the common-sense explanation, starting from the bottom up:
1. Rule III shows what happens when we reverse time two times: we go from state χ to φ (instead of
going from φ to χ) and we also take the complex conjugate, so we put a minus sign in front of the
imaginary unit, which amounts to putting a minus sign in front of the time variable in the argument.
We reverse time two times and, therefore, are describing the same process.
2. Rule II just says what we wrote in the first set of rules: we have to add amplitudes when there are
several ways to go from state φ to state χ.
3. Rule I is the trickiest one. It involves those base states (i and j instead of φ or χ), and it specifies that
condition of orthogonality. How can we interpret it? We can do so by taking the absolute square,
using rule III:
Richard Feynman, Lectures on Quantum Mechanics, sections III-1-7 (p. 1-10) and III-5-5 (p. 5-12).
The square of the absolute value (aka modulus) is a bit of a lengthy expression, so we refer to it as the absolute
square. It may, but should not, confuse the reader.
Note we also use the mathematical rule which says that the square of the modulus (absolute value) of a complex
number is equal to the product of the same number and its complex conjugate.
|⟨i|i⟩|² = ⟨i|i⟩·⟨i|i⟩* = 1 = P(i → i) (i = j)
|⟨j|i⟩|² = ⟨j|i⟩·⟨j|i⟩* = 0 = P(i → j) (i ≠ j)
The logic may not be immediately self-evident, so you should probably look at this for a while. If you do,
you should understand that the orthogonality condition amounts to a logical tautology: if a system is in
state i, then it is in state i and not in some different state j. This is what is expressed in the |⟨i|i⟩|² = P(i → i) = 1
and |⟨j|i⟩|² = |⟨i|j⟩|² = P(i → j) = 0 conditions.
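These three rules can be checked mechanically with ordinary linear algebra (a sketch; the bra-ket is just a conjugated dot product here):

```python
import numpy as np

e1 = np.array([1, 0], dtype=complex)   # base state |1>
e2 = np.array([0, 1], dtype=complex)   # base state |2>
phi = (e1 + 1j * e2) / np.sqrt(2)      # an arbitrary normalized state
chi = (2 * e1 - e2) / np.sqrt(5)       # another one

def braket(bra, ket):
    return np.vdot(bra, ket)           # <bra|ket>: conjugates the bra

# Rule I: <j|i> = delta_ij
assert np.isclose(braket(e1, e1), 1) and np.isclose(braket(e2, e1), 0)
# Rule II: <chi|phi> = sum over all base states i of <chi|i><i|phi>
assert np.isclose(braket(chi, phi),
                  sum(braket(chi, e) * braket(e, phi) for e in (e1, e2)))
# Rule III: <phi|chi> = <chi|phi>*
assert np.isclose(braket(phi, chi), np.conj(braket(chi, phi)))
```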
Is it that simple? Yes. Or at least that is what we think.