Lectures on Physics Chapter VI: All of Quantum Math
Jean Louis Van Belle, Drs, MAEc, BAEc, BPhil
23 April 2021 (revised on 16 September 2022)
Abstract
The special problem we try to get at with these lectures is to maintain the interest of the very enthusiastic and rather smart people trying to understand physics. They have heard a lot about how interesting and exciting physics is (the theory of relativity, quantum mechanics, and other modern ideas) and spend many years studying textbooks or following online courses. Many are discouraged because there are really very few grand, new, modern ideas presented to them. Also, when they ask too many questions, they are usually told no one really understands or, worse, to just shut up and calculate. Hence, we were wondering whether or not we can make a course which would save them by maintaining their enthusiasm. This paper is the sixth chapter of such a (draft) course.[1]
Contents
Introduction
The nature of mass and energy
Matter, antimatter, dark matter, and CPT-symmetry
The charge/mass ratio and quantum field theory
The use of matrices and matrix algebra in physics
    The Hamiltonian matrix
    An example of a two-state system: the maser
    State vector rules
    The scattering matrix
    Hermiticity in quantum physics
    Form factors and the nature of quarks
Perturbation theory
Final notes
Annex: Applying the concepts to Dirac's wave equation
References
[1] This abstract paraphrases Feynman's introduction to his Lectures on Physics. All is intended for fun, of course! For those who track the history and versions of this paper, we revised the body of this paper on 12 September but, in addition, we added an annex on Dirac's energy and wave equation, which should help the eager amateur physicist to apply the rather basic concepts in this paper while doing his own research.
Introduction
As an amateur physicist, or an amateur philosopher of Nature, you need to feel comfortable with the
jargon of academic physicists to understand whatever it is they are trying to say. This paper is intended
to help you with that. We must warn you, though: some (probably most) of the interpretations of the
math that we will be presenting here are unconventional and contradict grand mainstream
generalizations to a smaller or larger extent.
One example of such a generalization is Bell's 'No-Go Theorem', which tells us there are no hidden variables that can explain quantum-mechanical weirdness in some kind of classical way, but we think of Einstein's reply to younger physicists who would point out that this or that thought experiment violated this or that axiom or theorem in quantum mechanics: "Das ist mir Wurscht."[2] We, too, take Bell's
Theorem for what it is: a mathematical theorem. Remind yourself of what a theorem is: it is a logical
argument that uses inference rules to go from assumptions (other axioms or theorems) to some
conclusion. You cannot disprove a theorem, unless you can disprove the assumptions[3], and you can only
disprove the assumptions if you can show the conclusions do not make sense.
Mathematical theorems, therefore, respect the GIGO principle: garbage in, garbage out. We will be like
mountaineers who try to climb an impossible mountain: we will go to places that, according to
conventional wisdom, are inaccessible. We note that Bell himself did not like his own ‘proof’ and
thought that some "radical conceptual renewal" might, one day, disprove his conclusions.[4] He, therefore, kept exploring alternative theories himself (including Bohm's pilot wave theory, which is a hidden variables theory) until his untimely demise. Hence, unlike other mainstream introductory courses on quantum mechanics, I will not say anything at all about Bell's Theorem.[5]
Also, when you go through this paper, you will see I leave out (almost) all math related to base
transformations: the transformation rules for amplitudes or symmetric or asymmetric wavefunctions
depending on the symmetry or asymmetry of the quantum-mechanical system that is being considered.
It is a bit hard to explain briefly why we do so, but the gist of the matter is this: it is not because 3D
space has a 360-degree symmetry in each and every direction, that a quantum-mechanical system must
have such symmetry in order to be understood without resorting to base transformations. Think of the precession of atomic magnets or (if you want to get rid of the idea of a nucleus or center) any electron in orbital or circular motion in a magnetic field. The precession frequency will be very different from the orbital frequency and, hence, the system as such will, obviously, not have a 360-degree symmetry. Hence, we think a lot of the math (if not all of it) related to base transformations results from physicists being unable to develop a mathematical representation of the system in terms of rotations
[2] This apocryphal quote is attributed to Einstein, but we have not found an original reference.
[3] Or its logic, of course, but we are not aware of any successful challenge to its logic.
[4] John Stewart Bell, Speakable and unspeakable in quantum mechanics, pp. 169-172, Cambridge University Press, 1987, referenced in Wikipedia.
[5] As a non-mainstream thinker in the weird world of quantum physics, I have often been asked to, somehow, disprove Bell's Theorem to prove my point of view. I consider my argument on what a theorem can and cannot do sufficient reason to not entertain such questions.
within rotations, which is why quaternion math comes in handy: we do not know why it did not break
through, but things are what they are.[6]
We will also not talk about wave equations in this paper. That is because we have a fundamentally
different view of what wavefunctions and wavepackets actually are. The paper that gets the most
downloads[7] is my paper on the nature of the de Broglie frequencies, which we interpret as orbital rather than linear frequencies. We recommend a reading of it (perhaps simultaneously with this one), because it does use some of the formalism which we do not explain in the said paper, but which we do work out in this one here. We must also refer to that paper for the reader who would want a brief explanation of what we think of as the sense and nonsense of wave equations: our interpretation of wavefunctions is radically different and, hence, the interpretation of what wave equations can and cannot do is very different too. We explain that briefly in the mentioned paper and, hence, we do not dwell on it here again.
We are getting ahead of ourselves here. We should not anticipate what you will learn in this and my
other papers. The only thing I should say is that my papers are inspired by a desire to make you truly
understand all things quantum-mechanical. The quest for a proper explanation (as opposed to a mere calculation, or further adding to what I think of as an incomplete theory (a complete theory should respect Occam's Razor principle) or, worse, a theory that gets off on the wrong foot, or (somewhat less scathing as a critique) mere heuristic arguments) is a noble personal objective. The journey is challenging, however: it drove not only Paul Ehrenfest[8] but other geniuses too (think of the later years in Dirac's scientific career, for example) close to madness and/or depression. We, therefore, feel that we are in good company. However, unlike the mentioned examples, we share Einstein's happy intuition: the world (including the quantum-mechanical world) can be understood in terms of space and time, albeit relativistic spacetime.[9]
Get up, and grab your crampons, harness, slings, and carabiners. Let us go for the summit push. The
weather is good. The window of opportunity is there. We have been waiting long enough now.
[6] We dedicated a separate (albeit rather short) paper to quaternion math in the context of the forces within a proton, which we think of as an oscillation in three dimensions or, to put it differently, a combination of two planar oscillations, which amounts to two simultaneous rotations in 3D space. These two rotations involve the proton charge and, hence, the rotation frequency should be the same, but the 3D orientation of these two rotations is different, which may or may not be a reason why the system may or may not have a simple 360-degree symmetry. While not using quaternion math, we will use multiple imaginary units (i and j) as rotation operators.
[7] I have only published papers as working papers and, hence, I can boast few or no citations at all. However, at the date of revision of this paper (12 September 2022), the paper had 8100+ downloads and a fair number of recommendations, as a result of which the RG stats show its RI (research interest) score was higher than 89% of all RG research items. That score is likely to get better because its RI score is also higher than 96% of research items published in 2020. That is in line with my general RI score which, as for now, remains higher than 99% of RG members who first published in 2020, even if I all but stopped publishing since April 2021. Hence, I do think my unconventional or non-mainstream interpretation of quantum physics makes sense to a lot of people.
[8] See our blog post on Ehrenfest's suicide.
[9] We all know Minkowski's famous 1908 comment on Einstein's special relativity theory, a few years after it was published as part of his Annus Mirabilis papers (1905): "Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality." We do not think of a true union of space and time (3D space is 3D space, and time is time) but, yes, we do see the 'independent reality' of objects in space and time now.
The nature of mass and energy
We explained the nature of mass in previous papers on elementary physics.[10] This is a paper on
advanced mathematical concepts and the referenced papers are prerequisites: we do not want to
repeat ourselves all of the time. However, we must and will summarize some basics here.
At the macro-level, mass appears as inertia to a change in the state of linear motion of an object or
particle. That is how it appears in Newton's force law which, in its relativistically correct form, is written as F = dp/dt = d(m·v)/dt.[11] Now, the idea of a particle is a philosophical or ontological concept and we will, therefore, avoid it (to some extent, at least) and prefer to speak of things we can measure, such as charge and, yes, mass. We will also speak of physical laws, because these are based on measurements too.
From the Planck-Einstein and mass-energy equivalence relations, we get the following fundamental
equation for a frequency per unit mass (f/m or, expressing frequency in radians per second rather than
cycles per second, ω/m):

f/m = E/(h·m) = c²/h ≈ 1.3564×10⁵⁰ Hz per kg (or, equivalently, ω/m = c²/ħ ≈ 8.5226×10⁵⁰ rad/s per kg)
This humongous value[12] is an exact value since the 2019 redefinition of SI units, which fixed the value of ħ, and just like c and ħ, you may think of it as some God-given number but, just like the fine-structure constant[13], it is just a number which we derived from a more limited number of fundamental constants of Nature.
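For the reader who wants to see the number itself, here is a minimal Python check of the ratio, using the values fixed by the 2019 SI redefinition (the electron mass below is the CODATA value and is the only measured input):

```python
# Quick numerical check of the f/m = c^2/h ratio quoted above.
# c and h are exact SI values; m_e is the CODATA 2018 electron mass.
c = 299_792_458.0          # speed of light, m/s (exact)
h = 6.626_070_15e-34       # Planck's constant, J·s (exact)
m_e = 9.109_383_7015e-31   # electron mass, kg (measured)

f_per_mass = c**2 / h      # Hz per kg
print(f"f/m = c^2/h = {f_per_mass:.4e} Hz/kg")           # ~1.3564e50 Hz/kg
print(f"electron frequency f_e = {f_per_mass * m_e:.4e} Hz")  # ~1.24e20 Hz
```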
It reflects the true nature of mass at the micro-level. You must appreciate that this is quite different from mass being, at the macro-level, a measure of inertia. At the most fundamental level, matter is nothing but charge in motion. Such an interpretation is not mainstream, but it is consistent with Wheeler's 'mass without mass' ideas and, more importantly probably, with the 2019 revision of the system of SI units, in which mass also appears as a derived unit from more fundamental constants now, most notably Planck's constant.
This f/m ratio is, of course, valid for all matter or, let us be precise, for all (stable) elementary particles.[14] However, it is important to note that, while the f/m ratio is the same for both the electron and the proton mass, the q/mₑ and q/mₚ ratios are, obviously, very different. We, therefore, do associate two very different charge oscillations with them: we think of the electron and proton as a two-
[10] See our papers in our K-12 level physics series.
[11] The formula is relativistically correct because both m and v are not constant: they are functions varying in time as well, and that is why we cannot easily take them out of the d()/dt brackets.
[12] A number with 50 zeros would be referred to as one hundred quindecillion (using the Anglo-Saxon short scale) or one hundred octillions (using the non-English long scale of naming such astronomic numbers).
[13] The fine-structure constant pops up in electromagnetic theory, and is co-defined with the electric and magnetic constants. Their CODATA values are related as follows: α = qₑ²/(4π·ε₀·ħ·c), with ε₀·μ₀ = 1/c².
[14] Note that the electron and proton (and their anti-matter counterparts) are stable, but the neutron (as a free particle, i.e., outside of a nucleus) is not, even if its average lifetime (almost 15 minutes) is very large as compared to other non-stable particles.
and three-dimensional ring current, respectively. Hence, while these specific oscillator equations are,
theoretically and mathematically, compatible with any mass number, we do not think of the electron
and proton energies as variables but as constants of Nature themselves.
In short, we must think of the electron and the proton mass as fundamental constants too because, as
far as we know, these are the only two stable constituents of matter, and they also incorporate the
negative and positive elementary charge, respectively.[15] The f/m = c²/h formula above holds for both and, combined with Newton's force law (m = F/a: mass as inertia to a change of (a state of) motion), we conclude that the mass idea is one single concept but that we should, at the very minimum, distinguish between electron and proton mass. Of course, Einstein's mass-energy relation tells us it might be better to just talk about two fundamental energy levels (Eₑ and Eₚ), and to re-write the f/m = c²/h expression above as the Planck-Einstein relation applied to two (different) oscillations:

ωₑ = Eₑ/ħ = mₑ·c²/ħ and ωₚ = Eₚ/ħ = mₚ·c²/ħ
As mentioned above, in the realist interpretation we have been pursuing, we effectively think of the two oscillations as a planar and a spherical oscillation, respectively, which is reflected in the wavefunction which we use to represent the electron and proton, respectively. Indeed, the effective radius of a free electron follows directly from the orbital velocity formula v = c = ωr = ωa and the Planck-Einstein relation[16]:

a = c/ω = c·ħ/E = ħ/(mₑ·c) ≈ 0.386×10⁻¹² m

Hence, we write the wavefunction of an electron as:

ψₑ = a·e^(±iθ) = a·e^(±i·E·t/ħ)
This notation introduces the imaginary unit, which serves as a rotation operator and, therefore, denotes
the plane of oscillation. The sign of the imaginary unit (±i) indicates the direction of spin and,
[15] As mentioned above, the neutron is only stable inside of the nucleus, and we think of it as a combination of a positive and negative charge. It is, therefore, reducible and, as such, not truly elementary. However, such view is, obviously, part of another speculative model of ours and, hence, should not be a concern to the reader here.
[16] We write this as a vector cross-product, and assume an idealized circular orbital when writing the position vector r as a wavefunction r = ψ = a·e^(±iθ) = a·[cos(±θ) + i·sin(±θ)]. The magnitude |r| is, obviously, equal to |a·e^(±iθ)| = a. This is a variant of Wheeler's mass-without-mass model because the model assumes a pointlike (but not necessarily infinitesimally small or zero-dimensional) charge, whose rest mass is zero and which, therefore, moves at lightspeed and acquires relativistic mass only. As such, it is photon-like, but photons (light-particles) carry no charge. The a = r notation may be somewhat confusing because a is also used to denote acceleration, an entirely different concept, of course!
interpreting 1 and −1 as complex numbers (cf. the boldface notation), we do not treat π as a common phase factor.[17] As mentioned several times already, we think of the proton oscillation as an orbital oscillation in three rather than just two dimensions. We, therefore, have two (perpendicular) orbital oscillations, with the frequency of each of the oscillators given by ω = E/2ħ = mc²/2ħ (energy equipartition theorem), and with each of the two perpendicular oscillations packing one half-unit of ħ only.[18] Such a spherical view of a proton fits with packing models for nucleons and yields the experimentally measured radius of a proton:

rₚ = 4·ħ/(mₚ·c) ≈ 0.84×10⁻¹⁵ m
The 4 factor here is the one distinguishing the formula for the surface of a sphere (A = 4πr²) from the surface of a disc (A = πr²).[19]
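As a quick sanity check on the two radii quoted above, here is a short Python sketch; the constants are CODATA values, and the comparison values (the reduced Compton wavelength of the electron and the CODATA proton charge radius) are given only for reference:

```python
# Minimal numerical check of the two radii used above:
#   electron: a   = ħ/(m_e·c)   (reduced Compton wavelength)
#   proton:   r_p = 4·ħ/(m_p·c)
# Constants are CODATA values; this is an illustrative sketch only.
hbar = 1.054_571_817e-34     # J·s
c    = 299_792_458.0         # m/s (exact)
m_e  = 9.109_383_7015e-31    # kg
m_p  = 1.672_621_923_69e-27  # kg

a_e = hbar / (m_e * c)       # ~0.386e-12 m
r_p = 4 * hbar / (m_p * c)   # ~0.841e-15 m
print(f"electron radius a  = {a_e:.4e} m")
print(f"proton radius  r_p = {r_p:.4e} m (CODATA charge radius ~0.84e-15 m)")
```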
We may now write the proton wavefunction as a combination of two
elementary wavefunctions:

ψₚ = a·(e^(±i·E·t/2ħ) + e^(±j·E·t/2ħ))
While the electron and proton oscillation are very different, the calculations of their magnetic moment
based on a ring current model (with a 2 correction to take the spherical nature of the proton into
account) strongly suggest the nature of both oscillations and, therefore, the nature of all mass, is
electromagnetic. However, we may refer to the electron and proton mass as electromagnetic and
nuclear mass respectively because protons (and neutrons) make up most of the mass of atomic nuclei[20], while electrons explain the electromagnetic interaction(s) between atoms and, therefore, explain molecular shapes and other physical phenomena.
Finally, the two oscillations may be associated with the two lightlike particles we find in Nature: photons
and neutrinos. These lightlike particles carry energy (but no charge) but are traditionally associated with
electromagnetic and nuclear reactions respectively (emission and/or absorption of photons/neutrinos,
respectively), which also explains why referring to the three-dimensional proton oscillation as a nuclear
oscillation makes sense.[21]
[17] See our paper on Euler's wavefunction and the double life of −1, October 2018. This paper is one of our very early papers (a time during which we developed early intuitions) and we were not publishing on RG then. We basically take Feynman's argument on base transformations apart. The logic is valid, but we should probably review and rewrite the paper in light of the more precise intuitions and arguments we developed since then, even if, as mentioned, I have no doubt as to the validity of the argument.
[18] Such half-units of ħ for linearly polarized waves also explain the results of Mach-Zehnder one-photon interference experiments. There is no mystery here.
[19] We also have the same 1/4 factor in the formula for the electric constant, and for exactly the same reason (Gauss' law).
[20] Binding energy (also electromagnetic in nature) makes up for the rest.
[21] See our paper on the nuclear force hypothesis, April 2021.
Matter, antimatter, dark matter, and CPT-symmetry
When analyzing ring currents and electromagnetic fields, we will usually distinguish the electric and
magnetic field vectors E and B, and we can also describe these field vectors in terms of the i- and j-
planes and (dynamic) combinations thereof, which requires the use of quaternion rather than complex
algebra.[22] We will use the E and B vectors in a short while. Let us first further introduce the topic of antimatter by making a few more introductory remarks.
We should, first, note that the electron and proton masses or radii cannot be related: the two energy
levels and, therefore, the magnitude of the two forces, are very different, even if both the electron and the proton can be described in terms of elementary oscillator math. An anti-proton is, therefore, not a
three-dimensional variant of the electron, and a positron is not a two-dimensional variant of an anti-
proton. We have a fundamental asymmetry in Nature here which cannot be further reduced.
As for the distinction between matter and anti-matter, we may apply Occam’s Razor Principle to match
all mathematical possibilities in the expression with experimentally verified (physical) realities, and we
may, therefore, think of the sign of the (elementary) wavefunction coefficient (A) as modeling matter/antimatter, while the sign of the complex exponent (±i·E·t/ħ) captures the spin direction of matter/antimatter particles. We thus write the wavefunction of stable (two-dimensional) particles as:

ψ = ±A·e^(±i·E·t/ħ)

The wavefunction of unstable particles (transients) involves an additional decay factor e^(−λ·t):

ψ = ±A·e^(−λ·t)·e^(±i·E·t/ħ)
Finally, we repeat that light- or field-particles differ from (stable) matter-particles because they carry no
charge. Their oscillation (if photons are electromagnetic oscillations, then neutrinos must be nuclear
oscillations) is, therefore, not local: they effectively travel at the speed of light.[23]
We know antimatter should not only have opposite charge, but also opposite parity. What does that
mean? Let us write out the wavefunctions, abstracting away from the coefficient A (which is just a matter of unit normalization):

- electron (matter), spin up: e^(iθ) = cosθ + i·sinθ
- electron (matter), spin down: e^(−iθ) = cos(−θ) + i·sin(−θ) = cosθ − i·sinθ
- positron (antimatter), spin up: −e^(iθ) = −cosθ − i·sinθ = cos(θ−π) + i·sin(θ−π) = e^(i(θ−π))
- positron (antimatter), spin down: −e^(−iθ) = −cos(−θ) − i·sin(−θ) = −cosθ + i·sinθ = cos(θ−π) − i·sin(θ−π) = e^(−i(θ−π))
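A two-line numerical check of the identities listed above (an illustrative sketch only; the sample angles are arbitrary):

```python
# Numerical check of the wavefunction identities above: the spin-up positron,
# -exp(i·θ), equals exp(i·(θ - π)) for any θ, and likewise for spin down.
import numpy as np

theta = np.linspace(0, 2 * np.pi, 7)
print(np.allclose(-np.exp(1j * theta),  np.exp(1j * (theta - np.pi))))   # True
print(np.allclose(-np.exp(-1j * theta), np.exp(-1j * (theta - np.pi))))  # True
```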
[22] See our paper on the nuclear force and quaternion math, April 2021. Needless to say, an alternative and perhaps deeper level of description is possible using the scalar and vector potentials, which are effectively often used in more conventional treatments of the basics of quantum physics. However, we feel the formalism of four-vectors is far more abstract and may, therefore, obscure what is actually going on. In any case, the equivalence of both descriptions is well established, and it should, therefore, not matter in the end.
[23] For a more historical perspective on these questions, see our critical discussion of de Broglie's matter-wave concept, May 2020.
We can see that a spin-up positron and a spin-up electron have the same wavefunction, except for a phase shift equal to π (180 degrees). Likewise, we have the same wavefunction for a spin-down positron and a spin-down electron, except for the same phase shift (π). As mentioned above, we do not think of π as a common phase factor: e^(iπ) ≠ e^(−iπ) in quantum physics. When you go from +1 to −1 in the complex space, it matters how you get there. So this should be OK, then? All mathematical possibilities correspond to different physical realities?
Maybe. Maybe not. There is one more asymmetry in physical reality, as well as in the mathematical
model: depending on whether you apply a left- or right-hand rule, the electric and magnetic field vectors
will also be related by a rotation operator.[24] We write:

B = j·E/c
We believe this signature, metric, or parity effectively distinguishes matter from antimatter. We,
therefore, do believe all cases are covered in a description that respects Occam’s Razor Principle: all
mathematical possibilities correspond to physical realities, and there is no redundancy in the
description. It makes a lot of sense to us because:
- It would explain the dark matter in the Universe as antimatter: antimatter oscillations can, obviously, be associated with antiphotons and antineutrinos, which do not normally interact with matter. Hence, this would explain the huge amount of dark matter (and also of dark energy) in the Universe[25], and there is, therefore, no need to hypothesize new particles.[26]
- It would explain why electron-positron annihilation (or matter-antimatter pair annihilation/creation in general) involves (1) two photons (we think one is an anti-photon) and (2) the presence of a nucleus: we believe the nucleus is not there to absorb excess kinetic energy, but electromagnetic energy of opposite signature.[27]
- This alternative concept of parity would explain why classical (or mainstream) CP-symmetry is not always present in nuclear reactions.
[24] To be precise, we must combine the classical (physical) right-hand rule for E and B, the right-hand rule for the coordinate system, and the convention that multiplication with the imaginary unit amounts to a counterclockwise rotation by 90 degrees.
[25] According to research quoted by NASA, roughly 68% of the Universe would be dark energy, while dark matter makes up another 27%. Hence, all normal matter (including our Earth and all we observe as normal matter outside of it) would add up to less than 5% of it. Hence, NASA rightly notes we should, perhaps, not refer to 'normal' matter as 'normal' matter at all, since it is such a small fraction of the universe!
[26] For our thoughts on cosmology, see our paper on the finite Universe, in which we make a case for considering gravitation as a pseudo-force resulting from the curvature of spacetime, in line with Einstein's general relativity theory. We think this is the only possible explanation for the accelerating expansion of our Universe: there must be other Universes out there, beyond our time horizon. These can only tear our Universe apart, so to speak, if the gravitational effect is instantaneous: it must, therefore, be perpetuated as an instantaneous change in the curvature of spacetime rather than as an energy-carrying wave. We offer some more thoughts on that in an Annex to a paper (which may interest the reader as well) which was written in parallel to this one.
[27] For more background, see our paper on pair production/annihilation as a nuclear process.
We suggest the idea of CP-symmetry is incomplete: the above-mentioned conceptualization of matter
versus anti-matter should ensure complete symmetry or, at the very least, explain asymmetries in
nuclear reactions by much more mundane factors than deus ex machina quantum-mechanical
asymmetry principles. Plain entropy or statistical analysis should do.
The charge/mass ratio and quantum field theory
Mass-without-mass models and charge-without-charge models cannot explain measurable reality[28] separately, which is probably the reason why John Wheeler wanted his geometrodynamics research
program to be based on the combination of three separate strands: (i) mass-without-mass, (ii) charge-
without-charge, (iii) fields-without-fields. Mass and charge are both very flexible unitary (or unifying)
concepts. Zero-dimension mass or charge are nonsensical (nothing is nothing), but one-dimensional
(linear), two-dimensional (planar), and three-dimensional (spherical) oscillations make sense, because
they can be used to define space (distance and rotation) and time as mental constructs.
The concept of a field injects the concept of physical space: mathematical space is Cartesian (cogito ergo
sum, which I translate as: I am, therefore, space is). Physical space must be curved because the concept
of infinity is a mathematical idealization. Relativity theory relates the concepts of mathematical space
and physical space. Relativity theory tells us the centripetal force in an elementary particle is modelled
by the Lorentz force (F), which we may divide by the mass factor so as to relate (unit) charge and (unit)
mass:

F = q·(E + v×B) ⟹ F/m = (q/m)·(E + v×B), with B = j·E/c
We use a different rotation operator (imaginary unit) here (j instead of i) because the plane in which the
magnetic field vector B is rotating differs from the E-plane.[29]
The gyromagnetic ratio is defined as the
factor which ensures the equality of (1) the ratio between the magnetic moment (which is generated by
the ring current) and the angular momentum and (2) the charge/mass ratio:
The angular momentum is measured in units of ħ, and the Planck-Einstein relation tells us it must be
equal to one unit of ħ. In contrast, the magnetic moment is usually measured in terms of the Bohr
magneton, which involves an additional ½ factor (μB = qħ/2m). The gyromagnetic ratio is not an artificial
[28] We use this term to avoid ontological/epistemological discussions. We had our share of those, and they lead to nothing.
[29] Whether we consider the magnetic field vector to lag or lead the electric field vector depends on us using a right- or left-hand rule. Using a consistent convention here, we believe the magnetic field vector will lead the electric field vector when considering antimatter. Antimatter may, therefore, be modeled by putting a minus sign in front of the wavefunction (see our paper on the Zitterbewegung hypothesis and the scattering matrix). We believe the dark matter in the Universe to consist of antimatter, and it is dark because the antiphotons and antineutrinos they emit and absorb are hard to detect with matter-made equipment. In regard to cosmology, we must add we think gravity is not a force but a geometric feature of physical space (i.e. space with actual matter/energy in it). We believe this is a rational belief grounded in the accelerating expansion of our Universe: the horizon of our Universe is given by the speed of light. Our Universe may, therefore, be torn apart by other Universes that are beyond observation, but this is only possible when such gravitational effects are instant (i.e. part of the geometry of physical spacetime).
construct, but it is misunderstood: it is rooted in the concept of the effective (relativistic) mass of the
pointlike charge.
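To make the ½ factor mentioned above concrete, here is a minimal Python check; the constants are CODATA values, and the point is simply that the Bohr magneton divided by one unit of angular momentum (ħ) equals half of the electron's charge/mass ratio:

```python
# Illustrative check of the 1/2 factor: mu_B = q·ħ/(2m), so mu_B/ħ = (q/m)/2.
# CODATA values; sketch only.
q    = 1.602_176_634e-19    # elementary charge, C (exact)
hbar = 1.054_571_817e-34    # J·s
m_e  = 9.109_383_7015e-31   # kg

mu_B = q * hbar / (2 * m_e)            # ~9.274e-24 J/T
print(f"mu_B        = {mu_B:.4e} J/T")
print(f"mu_B / ħ    = {mu_B / hbar:.4e} C/kg")
print(f"(q/m_e) / 2 = {q / (2 * m_e):.4e} C/kg")   # same number
```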
Fields are quantized too, of course, but it might help you to think of the association with the
philosophical/ontological concept of a particle (I am referring to the standard academic portrayal of
quantum field theory now[30]) as a historical error. Such thinking will help you to reduce complicated theoretical problems to more mundane reflections about clear-cut empirical measurements.[31]
When everything is said and done, quantum math models two very different types of situations:
equilibrium and non-equilibrium systems. Matrix algebra and perturbation theory are relevant to both, as we will show in the next sections of this paper.
Indeed, we consider all of the above as prolegomena to what follows. Hence, the reader may want to
take a break before reading through the rest.
The use of matrices and matrix algebra in physics
Two obvious examples of the straightforward use of matrix algebra in quantum physics are the use of the Hamiltonian matrix when modeling discrete n-state systems (the states are usually reduced to simple energy states) and the use of the scattering matrix when analyzing particle reactions. We wrote two rather extensive papers on this[32] and shall, therefore, only very briefly present the main conclusions here.
The Hamiltonian matrix
The Hamiltonian equations (and matrix) usually model a system which oscillates between two or more
energy states, which are referred to as base states. The (unknown) state of the system is, therefore,
logically described by the following state vector equation:
|ϕ⟩ = C1·|1⟩ + C2·|2⟩

The |1⟩ and the |2⟩ states are (logical) representations of what we think of as a physical state: they are possible realities, or real possibilities, whatever term one would want to invent for it. When using them in a mathematical equation like this, we will think of them as state vectors. In contrast, the |ϕ⟩ state (which is the sum of the C1·|1⟩ and C2·|2⟩ states) is not a physical but a logical state: it exists in our mind only, and the system itself is always in some (base) state (e.g. up, or down).
C1 and C2 are complex numbers (or complex functions, to be precise). Of course, because we are
multiplying them with these state vectors one may want to think of them as vectors too. That is not so
difficult: complex numbers have a direction and a magnitude, so it is easy to think of them as vectors
alright![33] However, they are not only vectors but cyclical functions too. In short, the C1 and C2 coefficients
[30] A very typical mainstream textbook is Ian J.R. Aitchison, Anthony J.G. Hey, Gauge Theories in Particle Physics, 2013 (4th edition).
[31] See our K-12 level paper on the concept of a field, October 2020.
[32] We explained the Hamiltonian matrix and equations in our paper on what we refer to as Feynman's time machine, and scattering matrices are explained in our paper on the Zitterbewegung hypothesis and the scattering matrix.
[33] We prefer such visualization or conceptualization to the idea of complex numbers being two-dimensional numbers. That is correct too, of course, but perhaps not so easy to visualize.
are complex-valued wavefunctions.
We assumed only two possible states so far. When generalizing to an n-state system, we may introduce Uij = ⟨ i | U | j ⟩ coefficients to describe the system.[34] So we have n states i or j = 1, 2, 3,…, n, and, for simplicity, we will assume it is just time which makes the system stay in the same state or, else, go into another state. Now, we have all of the coefficients Ci that describe what is referred to as a quantum-mechanical amplitude to be in state i. These amplitudes are functions of time and we will, therefore, want to think of their time derivatives. We can do so in terms of (infinitesimally small) differentials and, hence, writing something like this effectively makes sense:

Ci(t + Δt) = Σj Uij(t + Δt, t)·Cj(t)
The Uij(t+Δt, t) element is a differential itself, and it is, obviously, a function of both t and Δt:
1. If Δt is equal to 0, no time passes by and the system will just be in the same state: the state is just the same state as the previous state. Why? Because there is no previous state here, really: the previous and the current state are just the same.
2. If Δt is very small but non-zero, then there is some chance that the system may go from state i to state j. We can model this by writing:

Uij(t + Δt, t) = Kij·Δt
Feynman introduces yet another coefficient here: Kij. Now, this all makes sense when thinking of Kij as a real-valued proportionality coefficient (just as real-valued as Δt) but, soon enough, we will replace these Kij coefficients (simple real-valued proportionality coefficients) by complex numbers. To be precise, we will write Kij as Kij = −i·Hij/ħ. It is a great trick: the insertion of the imaginary unit (i) ensures we get cyclical functions rather than linear ones, and because the Hamiltonian coefficients Hij are energy levels, we also smuggle in the Planck-Einstein relation here, which, as an added benefit, also fixes the (time) unit, which is just the reciprocal of the (angular) frequency A/ħ: see the referenced paper[35] for the detail. Let us wrap this up by just jotting down the relevant substitutions and equations.
Of course, we should, somehow, incorporate the fact that, for very small Δt, the system is more likely to remain in the same state than to change. Feynman models this by introducing the Kronecker delta function. This all sounds and looks formidable but you will (hopefully) see the logic if you think about it for a while:

Uij(t + Δt, t) = δij + Kij·Δt
The idea behind this formula is pretty much the same as that of using the first-order derivative for a linear (first-order) approximation of the value of a function f(x₀ + Δx). The above amounts to modeling
[34] We closely follow Feynman's argument here.
[35] We know this probably sounds like Chinese, but the reader may turn to the referenced paper (see footnote 32) for all of the detail. Of course, we recommend reading Feynman's lecture on it right next to it.
the proportionality with time[36] like this:

Uii(t + Δt, t) = 1 + Kii·Δt (i = j)
Uij(t + Δt, t) = 0 + Kij·Δt = Kij·Δt (i ≠ j)
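Purely as bookkeeping, here is a minimal numpy sketch of one small time step Ci(t+Δt) = Σj Uij·Cj(t) with Uij = δij + Kij·Δt; the K-matrix values are made up for illustration and carry no physical meaning:

```python
# Bookkeeping sketch of C_i(t+Δt) = Σ_j U_ij(t+Δt, t)·C_j(t), with U_ij = δ_ij + K_ij·Δt.
# The K-matrix below is an arbitrary (real-valued) illustration, as in the text so far.
import numpy as np

dt = 0.01
K = np.array([[-0.5,  0.5],
              [ 0.5, -0.5]])            # made-up proportionality coefficients K_ij
U = np.eye(2) + K * dt                  # U_ij = δ_ij + K_ij·Δt
C = np.array([1.0, 0.0])                # amplitudes: system starts in state 1

C_next = U @ C                          # one small time step
print(C_next)
```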
This, of course, triggers the question: what are the relevant units here? We measure these 0 and 1 values in what unit, exactly? That question is answered by Feynman's grand deus ex machina move, and that is to replace these Kij coefficients (simple real-valued proportionality coefficients) "by taking the factor −i/ħ out of these coefficients."[37] He writes he does so for historical and other reasons[38] but, of course, this is the point at which he inserts the Planck-Einstein relation and cyclical functions. It totally changes the character of the solutions to the equations: we will get the periodic functions we need, so it works (of course, it does), but you are entitled to think of this as a grand trick, indeed. To make this rather long story short, we just note that Feynman re-writes the above as:

Uij(t + Δt, t) = δij − (i/ħ)·Hij(t)·Δt
Re-inserting this expression in the very first equation, doing some more hocus-pocus[39] and re-arranging then gives the set of differential equations with the Hamiltonian coefficients that you were waiting for:

iħ·dCi(t)/dt = Σj Hij(t)·Cj(t)
For a two-state system, this is a set of two equations only:

iħ·dC1/dt = H11·C1 + H12·C2
iħ·dC2/dt = H21·C1 + H22·C2

We can then define the Hamiltonian coefficients Hij in terms of the average energy E0 and the energy difference A between the two states and this average energy E0:

H11 = H22 = E0 and H12 = H21 = −A
What this energy E0 (note that this average energy can be set to zero) and the energy difference A actually mean in the context of a particular system depends on the system, of course!
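As a numerical cross-check, the sketch below solves these two equations for the Hamiltonian just defined (ħ = 1, with arbitrary E0 and A) and compares the resulting probabilities with the cos²/sin² behaviour discussed further below; the numbers are illustrative only:

```python
# Sketch: solve iħ·dC/dt = H·C for H = [[E0, -A], [-A, E0]] by diagonalizing H,
# and check that, starting from state 1, P1(t) = cos²(A·t/ħ) and P2(t) = sin²(A·t/ħ).
# Units and numbers are arbitrary (ħ = 1); illustrative sketch only.
import numpy as np

hbar, E0, A = 1.0, 2.0, 0.5
H = np.array([[E0, -A], [-A, E0]], dtype=complex)
evals, evecs = np.linalg.eigh(H)               # stationary states, energies E0 ∓ A

def C(t, C0=np.array([1.0, 0.0], dtype=complex)):
    # exact solution: expand C0 in stationary states and attach exp(-i·E·t/ħ) phases
    return evecs @ (np.exp(-1j * evals * t / hbar) * (evecs.conj().T @ C0))

for t in (0.0, 1.0, 2.0, 3.0):
    C1, C2 = C(t)
    print(t, abs(C1)**2, np.cos(A*t/hbar)**2, abs(C2)**2, np.sin(A*t/hbar)**2)
```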
[36] As mentioned, in this rather simple example, we assume it is just time which is driving the change of the system from one state to another.
[37] We quote from the above-mentioned lecture (chapter 8 of Volume III of Feynman's Lectures, which have been made available online by Caltech).
[38] He just says we should, of course, not confuse the imaginary unit i here with the index i. Jokes like this remind me of one of the books that was written on him: "Surely You're Joking, Mr. Feynman!"
[39] The hocus-pocus here is, however, significantly less suspicious than the deus ex machina move when doing the mentioned substitution of coefficients!
We refer the reader to Feynman's example (the maser[40]) to check how things actually work out. When analyzing a maser or a laser, the energy A will be some energy difference between two states which, as mentioned above, will effectively be measured with reference to the average energy (E0) and is, therefore, written as 2A. The period of the oscillation is, therefore, given by[41]:

T = 1/f = h/(2A) = πħ/A
The cyclical complex exponentials (which Feynman refers to as trial solutions but, of course, they are the solutions!) ensure that the probabilities (calculated from the absolute square[42] of these functions) slosh back and forth the way they are supposed to, which is as continuous functions ranging between 0 and 1. Of course, these probabilities always need to add up to one, so they must be squared sine and cosine functions: P1 = cos²(A·t/ħ) and P2 = sin²(A·t/ħ). The periodicity of these functions is effectively equal to π when measuring time in units of ħ/A (Figure 1), and they also respect the normalization condition (0 ≤ P ≤ 1). Most importantly, Pythagoras' Theorem (or basic trigonometry, we would say) also ensures they respect the rule that the probabilities must always add up to 1:

P1 + P2 = cos²(A·t/ħ) + sin²(A·t/ħ) = 1
Figure 1: The probability functions for a two-state system
The ħ/A time unit is an angular time unit (1/ω = ħ/A = T/2π): time is measured in radians. We can,
indeed, use the radian as a unit of time as well as a unit of distance, as illustrated below (Figure 2).
Figure 2: The radian as unit of distance and of time
Perhaps you will find it easier to think of the radian as an equivalent time and distance unit when playing with the associated derivatives, and taking their ratios[43]:
[40] See: Feynman's Lectures, Vol. III, Chapter 9 (the ammonia maser) and 10 (other two-state systems). We also warmly recommend reading Chapter 11 (on the Pauli spin matrices).
[41] The concept of an angular time period (1/ω = ħ/A = T/2π), i.e. the time per radian of the oscillation, is not in use but would actually be useful here: we will, in fact, use it as the time unit in the graph of the probabilities.
[42] We should say: the absolute value of the square. But that is a bit lengthy.
[43] We must make an important remark here: our playing with differentials here assumes a normalized concept of velocity: we can only use the radian simultaneously as a time and distance unit when defining time and distance units such that v = λ/T = 1. The λ in this equation is the circumference of the circle (think of it as a circular wavelength), and the (reduced) cycle time is T/2π.

ds/dθ = a and dt/dθ = 1/ω ⟹ v = ds/dt = (ds/dθ)/(dt/dθ) = a·ω = c
As for the arrow of time, we talk about that in our other (elementary) paper on math, which also deals
with wavefunctions but talks about them from a slightly different perspective.[44]
An example of a two-state system: the maser
For the lazy reader who does not want to go back and forth between this and our other papers, we will
briefly present how we get those solutions for a maser-like system. What is calculated is the flip-flop
amplitude for the nitrogen atom (N) in an ammonia molecule (NH3).
The system is shown below (Figure 3). Do not worry too much about the nature of the dipole moment.[45]
Nitrogen has 7 electrons, which are shared with the hydrogen nuclei in covalent bonds. A covalent bond
blurs the idea of an electron belonging to one atom only and, in this particular case, causes the ammonia molecule to be polar.
Figure 3: Ammonia molecules with opposite dipole moments in an electrostatic field[46]
We should mention one thing here, which Feynman does not make very explicit: when there is an
external field, the nitrogen atom will no longer equally divide its time over position 1 and 2: if possible,
at all, it will want to lower its energy permanently by staying in the lower energy state. This is,
effectively, how we can polarize the ammonia molecules in a maser. Hence, the illustrations above are
valid only for very small values of Ɛ0: if we apply a stronger field, all ammonia molecules will align their
dipole moment and stay aligned. In any case, assuming we are applying a small enough field only (or no field at all), we can solve the equations and then we get this for the C1 and C2 functions:
[44] See: The language of math, April 2021.
[45] It is an electric dipole moment, not magnetic. But the logical framework and analysis works for magnetic dipole moments as well.
[46] We gratefully acknowledge the online edition of Feynman's Lectures for this illustration.

C1(t) = e^(−(i/ħ)·E0·t)·cos(A·t/ħ)
C2(t) = i·e^(−(i/ħ)·E0·t)·sin(A·t/ħ)
How did we calculate that? We did not: we refer to Feynman here.[47] He introduces the mentioned so-called 'trial' solutions, which are the solutions, of course! The point is this: we can now take the absolute square of these amplitudes to get the probabilities:

P1(t) = |C1(t)|² = cos²(A·t/ħ)
P2(t) = |C2(t)|² = sin²(A·t/ħ)
Those are the probabilities shown in Figure 1. The probability of being in state 1 starts at one (as it
should), goes down to zero, and then oscillates back and forth between zero and one, as shown in that
P1 curve, and the P2 curve mirrors the P1 curve, so to speak. As mentioned also, it is quite obvious they also respect the requirement that the sum of all probabilities must add up to 1: cos²θ + sin²θ = 1, always.
Is that it? Yes. That is all there is to it. We may, perhaps, just add some remarks on quantum-mechanical
tunneling. The plane of the hydrogen atoms is an energy barrier: think of it as the wall of the potential
well. So the nitrogen atom should tend to stay where it is, as opposed to shifting to the other position, which is a potential well itself! But the reality is that, from time to time, it does tunnel through. The question now becomes: when and how does it do that? That is a bit of a mystery, but one should think of it in terms of dynamics. We modeled particles as charges in motion.[48] Hence, we think of an atom as a dynamic system consisting of a bunch of elementary (electric) charges. These atoms, therefore, generate an equally dynamic electromagnetic field structure. We, therefore, have some lattice structure that does not arise from the mere presence of charges inside alone, but also from their pattern of motion.[49]
Can we model this? Feynman did not think this was possible. Indeed, he wrote this about it[50]:
“In the discussion up to this point, we have assumed values of E0 and A without knowing how to calculate
them. According to the correct physical theory, it should be possible to calculate these constants in terms
of the positions and motions of all the nuclei and electrons. But nobody has ever done it. Such a system
involves ten electrons and four nuclei and that is just too complicated a problem. As a matter of fact,
there is no one who knows much more about this molecule than we do. All anyone can say is that when
there is an electric field, the energy of the two states is different, the difference being proportional to the
electric field. We have called the coefficient of proportionality 2μ, but its value must be determined
experimentally. We can also say that the molecule has the amplitude A to flip over, but this will have to be
[47] Reference above: Feynman's Lectures, Volume III, Chapter 8, pages 8-11 to 8-14.
[48] See our previous papers.
[49] You should also do some thinking on the concept of charge densities here: the different charge densities inside of the ammonia molecule do not result from a static charge distribution but because the negative charges inside (pointlike or not) spend more time here than there, or vice versa.
[50] To be truthful, it is not at the very end of his exposé, but just quite late in the game (section 9-2), and what follows does not give us anything more in terms of first principles.
measured experimentally. Nobody can give us accurate theoretical values of μ and A, because the
calculations are too complicated to do in detail.”
In contrast, we believe recent work on this is rather promising, but we must admit it has not been done yet: it is, effectively, a rather complicated matter and, as mentioned, work on this has actually just started![51] We will, therefore, not dwell on this either: you should do your PhD on it!
The point is this: one should take a dynamic view of the fields surrounding charged particles. Potential barriers (and their corollary: potential wells) should, therefore, not be thought of as static fields: they vary in time. They result from two or more charges that are moving around and thereby create some joint or superposed field which varies in time. Hence, a particle breaking through a 'potential wall' or coming out of a potential 'well' is just using some temporary opening corresponding to a very classical trajectory in space and in time.[52] There is, therefore, no need to invoke some metaphysical Uncertainty Principle: we may not know the detail of what is going on, but we should be able to model it using classical mechanics!
Let us get back to the topic we were talking about: the use of matrices and matrix algebra in quantum
physics. Before we do so, we will quickly want to say a few words about how one uses these energy or
other quantum-mechanical states in calculations. Indeed, it will be much easier for you to not consider
them as something mysterious now that you have a specific example of a system in mind. And we are
actually still talking matrices because vectors are (1-by-n or n-by-1) matrices too, so we feel that we are not digressing too much here!
State vector rules
The most important point that you need to understand is that this state vector business continually switches from discrete physical states to continuous logical states in the quantum-mechanical description of phenomena, and then back again once we have a wavefunction for those logical states. That is actually all you need to know to understand the quantum-mathematical rules for probabilities and amplitudes. Following Richard Feynman[53], we may represent these rules as two related or complementary sets. The first set of rules is more definitional or procedural than the other one, although both are intimately related:
(i) The probability (P) is the square of the absolute value of the amplitude (ψ)[54]: P = |ψ|²
(ii) In quantum mechanics, we add or multiply probability amplitudes rather than probabilities: P = |ψ1 + ψ2|² or, for successive events, P = |ψ1·ψ2|²
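A small numerical illustration of rule (ii), with two made-up amplitudes: adding the amplitudes before squaring is not the same as adding the squared magnitudes, and the difference is the interference term.

```python
# Rule (ii) illustrated with two arbitrary (made-up) amplitudes:
# |ψ1 + ψ2|² is generally not |ψ1|² + |ψ2|² — the difference is the interference term.
import cmath

psi1 = 0.6 * cmath.exp(1j * 0.3)
psi2 = 0.8 * cmath.exp(1j * 2.1)

p_interfering     = abs(psi1 + psi2) ** 2       # amplitudes added first, then squared
p_no_interference = abs(psi1) ** 2 + abs(psi2) ** 2
p_successive      = abs(psi1 * psi2) ** 2       # successive events: multiply amplitudes

print(p_interfering, p_no_interference, p_successive)
```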
[51] In case you would want to have an idea of the kind of mathematical techniques that are needed for this, we hereby refer you to a recent book on what is referred to as nuclear lattice effective field theory (NLEFT).
[52] You should do some thinking on the concept of charge densities here: the different charge densities inside of the ammonia molecule do not result from a static charge distribution but because the negative charges inside (pointlike or not) spend more time here than there, or vice versa.
[53] Richard Feynman, Lectures on Quantum Mechanics, sections III-1-7 (p. 1-10) and III-5-5 (p. 5-12).
[54] The square of the absolute value (aka modulus) is a bit of a lengthy expression, so we refer to it as the absolute square. It may but should not confuse the reader.
As we have shown above for the maser system (or for any n-state system, really), probability amplitudes are complex-valued functions of time and involve the idea of a particle or a system going from one state (i) to another (j). We write:

ψ = ⟨ j | i ⟩

The latter notation is used to write down the second set of quantum-mechanical rules:
I. ⟨ j | i ⟩ = δij
II. ⟨ χ | φ ⟩ = Σ over all i of ⟨ χ | i ⟩⟨ i | φ ⟩
III. ⟨ φ | χ ⟩ = ⟨ χ | φ ⟩*
You probably know these rules but you never quite understood them, right? You should not think of them as being obscure. Here is the common-sense explanation, starting from the bottom up:
1. Rule III shows what happens when we reverse time two times: we go from state φ to χ (instead of going from χ to φ) and we also take the complex conjugate, so we put a minus sign in front of the imaginary unit, which amounts to putting a minus sign in front of the time variable in the argument. We reverse time two times and, therefore, are describing the same process.
2. Rule II just says what we wrote in the first set of rules: we have to add amplitudes when there are several ways to go from state φ to χ.
3. Rule I is the trickiest one. It involves those base states (i and j instead of φ or χ), and it specifies that condition of orthogonality. How can we interpret it? We can do so by taking the absolute square[55] and using rule III:

|⟨ i | i ⟩|² = ⟨ i | i ⟩·⟨ i | i ⟩* = δii² = 1 = P(i = i)  (i = j)
|⟨ j | i ⟩|² = ⟨ j | i ⟩·⟨ j | i ⟩* = δij² = 0 = P(i = j)  (i ≠ j)

The logic may not be immediately self-evident so you should probably look at this for a while. If you do, you should understand that the orthogonality condition amounts to a logical tautology: if a system is in state i, then it is in state i and not in some different state j. This is what is expressed in the |⟨ i | i ⟩|² = P(i = i) = 1 and |⟨ j | i ⟩|² = |⟨ i | j ⟩|² = P(i = j) = 0 condition.
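The three rules can also be checked mechanically by representing the base states as orthonormal column vectors and ⟨a|b⟩ as the inner product a†·b; the states φ and χ below are arbitrary examples, and the whole block is only an illustrative sketch:

```python
# Rules I-III checked with explicit vectors: base states as orthonormal columns,
# ⟨a|b⟩ = a†·b. The states phi and chi are arbitrary examples.
import numpy as np

basis = np.eye(3, dtype=complex)                    # base states |1>, |2>, |3>
braket = lambda a, b: np.vdot(a, b)                 # ⟨a|b⟩ = a†·b

# Rule I: ⟨j|i⟩ = δ_ij
print(np.allclose([[braket(basis[j], basis[i]) for i in range(3)] for j in range(3)],
                  np.eye(3)))                       # True

phi = np.array([0.2 + 0.1j, 0.5 - 0.3j, 0.7 + 0.2j])
chi = np.array([0.4 - 0.2j, 0.1 + 0.6j, 0.3 + 0.0j])

# Rule II: ⟨χ|φ⟩ = Σ_i ⟨χ|i⟩⟨i|φ⟩ (summing over the base states)
lhs = braket(chi, phi)
rhs = sum(braket(chi, basis[i]) * braket(basis[i], phi) for i in range(3))
print(np.isclose(lhs, rhs))                         # True

# Rule III: ⟨φ|χ⟩ = ⟨χ|φ⟩*
print(np.isclose(braket(phi, chi), np.conj(braket(chi, phi))))   # True
```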
Is it that simple? Yes. Or at least that is what we think. Let us now, finally, go back to the business at hand: matrix algebra in the context of quantum physics.
[55] Note we also use the mathematical rule which says that the square of the modulus (absolute value) of a complex number is equal to the product of the same number and its complex conjugate.
The scattering matrix[56]
While the Hamiltonian is used (mainly) to model stable systems, another type of matrix (the scattering matrix) will be useful to analyze an entirely different problem: particle reactions. Let us take the example of a rather typical K̄0 + p → Λ0 + π+ decay reaction and write it as follows[57]:

| ψ_Λ |   | S11 S12 |   | ψ_K̄ |
| ψ_π | = | S21 S22 | · | ψ_p |
Using wavefunction math, we may represent the (neutral) K-meson (kaon), proton, lambda-particle, and
pion by wavefunctions and insert them into the two equations:

S11·(−A_K̄·e^(i·E_K̄·t/ħ)) + S12·A_p·e^(i·E_p·t/ħ) = A_Λ·e^(i·E_Λ·t/ħ)
S21·(−A_K̄·e^(i·E_K̄·t/ħ)) + S22·A_p·e^(i·E_p·t/ħ) = A_π·e^(i·E_π·t/ħ)
The minus sign of the coefficient of the antikaon wavefunction reflects the point we made above: matter and antimatter are each other's opposite, and quite literally so: the wavefunctions −A·e^(iE·t/ħ) and +A·e^(iE·t/ħ) add up to zero, and they correspond to opposite forces and different energies too![58] To be precise, the magnetic field vector is perpendicular to the electric field vector but, instead of lagging the electric field vector by 90 degrees (matter), it will precede it (also by 90 degrees) for antimatter, and the nuclear equivalent of the electric and magnetic field vectors should do the same (we have no reason to assume something else).[59]
Indeed, the minus sign of the wavefunction coefficient (A) reverses both
the real as well as the imaginary part of the wavefunction.
However, it is immediately obvious that the equations above can only be a rather symbolic rendering of
what might be the case. First, we cannot model the proton by an A·e^(iE·t/ħ) wavefunction because we think of it as a 3D oscillation. We must, therefore, use two rather than just one imaginary unit to model two oscillations. This may be solved by distinguishing i from j and thinking of them as representing rotations in mutually perpendicular planes. Hence, we should probably write the proton as[60]:
[56] This is a summary of a summary of the above-referenced paper on the Zitterbewegung hypothesis and the scattering matrix.
[57] Of course, there are further decay reactions, first and foremost the Λ0 → p + π− reaction. We chose the example of the K̄0 + p reaction (neutral K-mesons) because Feynman uses it prominently in his discussion of high-energy reactions (Feynman, III-11-5).
[58] See our previous remarks on the lag or precession of the phase factor of the components of the wavefunction. Needless to say, masses and, therefore, energies are positive, always, but the nature of matter and antimatter is quite different.
[59] We think this explains dark matter/energy as antimatter: the lightlike particles they emit must be antiphotons/antineutrinos too, and it is, therefore, hard to detect any radiation from antimatter. See our paper on cosmology.
[60] We use an ordinary plus sign, but the two complex exponentials are not additive in an obvious way (i ≠ j). Note that t is the proper time of the particle. The argument of the (elementary) wavefunction a·e^(iθ) is invariant and, therefore, incorporates both special as well as general relativity theory (see our paper on the language of physics).

ψₚ = a·(e^(±i·E·t/2ħ) + e^(±j·E·t/2ħ))
In addition, the antikaon may combine an electromagnetic (2D) and a nuclear (3D) oscillation and we
may, therefore, have to distinguish more than two planes of oscillation. Last but not least, we should
note that the math becomes even more complicated because the planes of oscillation of the antikaon
and the proton are likely to not coincide. We, therefore, think some modified version of Hamilton’s
quaternion approach may be applicable, in which case we have i, j and k rotations. Furthermore, each of
these rotations will be specific to each of the particles that go in and come out of the reactions, so we
must distinguish, say, the iK, jK, kK rotations from the iΛ, jΛ, kΛ rotations. [61]
The j and k rotations may be reserved for the two perpendicular (nuclear) rotations, while Euler's imaginary unit (i) would model the electromagnetic oscillation (not necessarily perpendicular to any of
the two components of the nuclear oscillation). In addition, we must note these planes of rotations are
likely to rotate in space themselves: the angular frequency of the orbital rotations has a magnitude and
a direction. If an external field or potential is present, then the planes of oscillation will follow the
regular motion of precession. In the absence thereof, the angular rotation will be given by the initial
orbital angular momentum (as opposed to the spin angular momentum).
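To give the reader a feel for what such i, j and k rotations do, the small Python sketch below implements plain quaternion (Hamilton) multiplication. It is an illustration only, not our particle model: it merely shows that e^(i·θ) and e^(j·θ) represent rotations in mutually perpendicular planes and that the order in which they are applied matters.

import numpy as np

def qmul(a, b):
    # Hamilton product of two quaternions written as (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qexp_i(theta):
    # e^(i*theta) as a quaternion: a rotation in the (1, i) plane
    return np.array([np.cos(theta), np.sin(theta), 0.0, 0.0])

def qexp_j(theta):
    # e^(j*theta) as a quaternion: a rotation in the (1, j) plane
    return np.array([np.cos(theta), 0.0, np.sin(theta), 0.0])

theta = 0.7
print(qmul(qexp_i(theta), qexp_j(theta)))   # i-rotation followed by j-rotation
print(qmul(qexp_j(theta), qexp_i(theta)))   # j-rotation followed by i-rotation: not the same

The two products differ (the k-component flips sign), which is why the bookkeeping of the various planes of oscillation is not trivial.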
We must make one more remark: while the math above probably looks daunting enough already, the charges in composite particles (stable or unstable [62]) may not follow nice circular orbitals: they may be
elliptical, so we should probably inject more quantum numbers in the equations (besides the principal
quantum number, which gives us the energy levels). As you can see, things become quite complicated,
even when we are modelling something that is supposed to be quite simple here (two particles that
decay into another pair of particles).
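As a small aside on the elliptical orbitals we just mentioned: the sum of two counter-rotating phase factors traces an ellipse, which is one simple way of picturing a non-circular orbital. The sketch below uses arbitrary values and is purely illustrative.

import numpy as np

a, b = 1.0, 0.4                                # arbitrary amplitudes
theta = np.linspace(0.0, 2 * np.pi, 9)
z = a * np.exp(1j * theta) + b * np.exp(-1j * theta)
print(np.round(z.real, 3))                     # x = (a + b)*cos(theta)
print(np.round(z.imag, 3))                     # y = (a - b)*sin(theta): an ellipse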
Hermiticity in quantum physics
As you can see, the remarks above amount to a substantial number of constraints on the matrix and/or vector equation(s). We refer to Bombardelli for a further discussion of these. [63] Here we only note one obvious constraint: the hermiticity of the matrix, which models physical reversibility. This is explained as follows.
We can think of the S-matrix (or its inverse, as we need to reverse the order of the state vector and S-matrix to get an equation like the one below) as an operator. Let us write it (its inverse) as A [64], and let us denote the state it operates on as |ψ⟩, so we have this rather simple expression:

A|ψ⟩
[61] The K and Λ subscripts denote the (neutral) antikaon and the lambda-particle, respectively. We use an underbar instead of an overbar to denote antimatter in standard script (i.e. when not using the formula editor).
[62] Unstable particles are modelled by the addition of a transient factor in the wavefunction, which we should do (and have done above) to all particle wavefunctions in the matrix equation. Fortunately, we can measure decay times, so this does not add any unknowns to the equations.
[63] D. Bombardelli, Lectures on S-matrices and integrability, 2016.
[64] We should use the hat symbol because the symbol without the hat is reserved for the matrix that does the operation and, therefore, already assumes a representation, i.e. some chosen set of base states. However, let us skip the niceties here.
Now, we can then think of some (probability) amplitude that this operation produces some other state |ϕ⟩, which we would write as: ⟨ϕ|A|ψ⟩

We can now take the complex conjugate:

⟨ϕ|A|ψ⟩* = ⟨ψ|A†|ϕ⟩

A† is, of course, the conjugate transpose of A: (A†)ij = (Aji)*, and we will call the operator (and the matrix) Hermitian if the conjugate transpose of this operator (or the matrix) gives us the same operator or matrix, so that is if A† = A. Many operators are Hermitian. Why? Well… What is the meaning of ⟨ϕ|A|ψ⟩* = ⟨ψ|A†|ϕ⟩ = ⟨ψ|A|ϕ⟩? Well… In the ⟨ϕ|A|ψ⟩ expression, we go from some state |ψ⟩ to some other state ⟨ϕ|. Conversely, the ⟨ψ|A|ϕ⟩ expression tells us we were in state |ϕ⟩ but now we are in the state ⟨ψ|.

So, is there some meaning to the complex conjugate of an amplitude like ⟨ϕ|A|ψ⟩? We say: yes, there is! Read up on time reversal and CPT symmetry! Based on the above and your reading-up on CPT symmetry, we would think it is fair to say we should interpret the Hermiticity condition as a physical reversibility condition.
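The relation between an amplitude and its complex conjugate is easy to check numerically. The sketch below (arbitrary numbers, two base states only) verifies that ⟨ϕ|A|ψ⟩* = ⟨ψ|A†|ϕ⟩ holds for any matrix A, and that the dagger can be dropped when A is Hermitian.

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))   # an arbitrary matrix
A_H = (A + A.conj().T) / 2                                   # a Hermitian matrix built from it

psi = rng.normal(size=2) + 1j * rng.normal(size=2)
phi = rng.normal(size=2) + 1j * rng.normal(size=2)

lhs = np.conj(phi.conj() @ A @ psi)        # <phi|A|psi>*
rhs = psi.conj() @ A.conj().T @ phi        # <psi|A_dagger|phi>
print(np.isclose(lhs, rhs))                # True for any A

# If A is Hermitian, the dagger can be dropped: <phi|A|psi>* = <psi|A|phi>
print(np.isclose(np.conj(phi.conj() @ A_H @ psi), psi.conj() @ A_H @ phi))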
Note that we are not talking mere time symmetry here: reversing a physical process is like playing a movie backwards and, hence, we are actually talking CPT symmetry here, but with the interpretation of antimatter and the related concept of parity as presented above.
Form factors and the nature of quarks
All that is left is to wonder what the S-matrix and the coefficients s11, s12, s21, and s22 actually represent. We think of them as numbers: complex or quaternion numbers, perhaps, but sheer numbers (i.e. mathematical quantities rather than ontological/physical realities) nevertheless. This raises a fundamental question in regard to the quark hypothesis. We do not, of course, question the usefulness of the quark hypothesis to help classify the rather enormous zoo of unstable particles, nor do we question the massive investment to arrive at the precise measurements involved in the study of high-energy reactions (as synthesized in the Annual Reviews of the Particle Data Group). However, we do think the award of the Nobel Prize in Physics to CERN researchers Carlo Rubbia and Simon Van der Meer (1984) or, in the case of the Higgs particle, to Englert and Higgs (2013), would seem to have rewarded 'smoking gun physics' only, as opposed to providing any ontological proof for the reality of virtual particles. [65]
In this regard, we should also note Richard Feynman's discussion of reactions involving kaons, in which he (writing in the early 1960s and much aware of the new law of conservation of strangeness as presented by Gell-Mann, Pais and Nishijima) also seems to favor a mathematical concept of strangeness or, at best, considers strangeness to be a composite property of particles rather than an existential/ontological concept. [66]
[65] The rest mass of the Higgs particle, for example, is calculated to be equal to 125 GeV/c². Even at the speed of light (which such a massive particle cannot aspire to attain), it could not travel more than a few tenths of a femtometer: about 0.3×10⁻¹⁵ m, to be precise. That is not something which can be legitimately associated with the idea of a physical particle: a resonance in particle physics has the same lifetime. We could mention many other examples. We wrote a rather controversial position paper on this, which we recommend you read, if only to try to define your own ideas and position about these matters.
In fact, Feynman's parton model [67] seems to bridge both conceptions at
first, but closer examination reveals the two positions (quarks/partons as physical realities versus mathematical form factors) are mutually exclusive. We think the reinvigorated S-matrix program, which goes back to Wheeler and Heisenberg [68], is promising because, unlike Feynman's parton theory, it does not make use of perturbation theory or other mathematically flawed procedures (cf. Dirac's criticism of QFT in the latter half of his life).
Perturbation theory
We must mention one more tool in the quantum-mechanical toolbox: perturbation theory. Perturbation
theory was designed to model disequilibrium states, such as transitions. Think of an electron moving
from one electron orbital (a shell or a subshell) to another. Perturbation theory thinks of this transition
as being caused by an external perturbation: an external field that messes with the (stable) field of the
particle itself and, therefore, causes instability.
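The sketch below illustrates that basic idea numerically rather than formally: a small oscillating off-diagonal term added to a two-state Hamiltonian (with the energy levels E0 + A and E0 − A used earlier in this paper) slowly transfers probability from one state to the other. All numbers are hypothetical, and we use a crude integration step rather than the actual perturbation-series machinery.

import numpy as np

hbar = 1.0
E0, A, eps = 1.0, 0.1, 0.02                      # hypothetical energy parameters
H0 = np.array([[E0 + A, 0.0], [0.0, E0 - A]])    # unperturbed Hamiltonian (two energy levels)

psi = np.array([1.0 + 0j, 0.0 + 0j])             # start in the upper state
dt, steps = 0.002, 50000
omega = 2 * A / hbar                             # drive at the transition frequency

for n in range(steps):
    t = n * dt
    V = eps * np.cos(omega * t) * np.array([[0.0, 1.0], [1.0, 0.0]])
    H = H0 + V                                   # perturbed Hamiltonian
    psi = psi - 1j * dt / hbar * (H @ psi)       # crude Euler step of i*hbar*dpsi/dt = H*psi
    psi = psi / np.linalg.norm(psi)              # renormalize to tame the integration error

print(np.abs(psi) ** 2)                          # occupation probabilities of the two states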
The math here is rather convoluted, and we will, therefore, refer the reader to a good basic introduction to mathematical physics. We love the way Mathews and Walker (Mathematical Methods of Physics, 1970) deal with it. They worked very closely with Richard Feynman at Caltech, and so their treatment of the matter is a rather seamless complement to his Lectures (1963).
Unfortunately, perturbation theory was used improperly by the second and third generations of quantum physicists, who thought perturbations must be quantum field particles, or virtual particles as they are also referred to. We refer the reader to our papers with more historical background on this, in particular our paper on the concept of de Broglie's matter-wave. [69]
Final notes
I quickly revised this paper because I had some time on my hands and I noted the paper did get some reads. The revision mainly involved the introductory sections. I did not delve into the scattering matrix again, and I also realize I left out quite a lot of tools and concepts that are (also) routinely used in academic approaches to the topic at hand (quantum mechanics). One of these is the Pauli matrices and, very much related, the matrix algebra of rotations. However, there is only so much one can do and, in my post scriptum to my RG papers, I distil a rather long list of issues which I should further examine. Let us see whether I find the necessary time and energy for that in the coming years. I am afraid my day job, family and other adventures in life may make that difficult.
Brussels, 12 September 2022
[66] See: Feynman's Lectures, III-11-5.
[67] See, for example: W.-Y. P. Hwang, Toward Understanding the Quark Parton Model of Feynman, 1992.
[68] See D. Bombardelli, Lectures on S-matrices and integrability, 2016. We opened a discussion thread on ResearchGate on the question.
[69] See, for example, our interpretation of the discussions between Oppenheimer and Dirac at the occasion of the 1948 Solvay Conference (de Broglie's matter-wave: concept and issues, p. 6-7).
Annex: Applying the concepts to Dirac's wave equation
You will, perhaps, want to apply all of the formalism that you have learnt here. Let us do so by explaining Dirac's wave equation for a free particle, so you can understand Dirac's development in, say, his Nobel Prize Lecture. We do so by walking through the relevant sections in his Principles of Quantum Mechanics. [70] Do not worry too much, because we will keep things quite simple. Let us go for it.
Dirac starts by writing down the Hamiltonian which, he notes, is just the (non-relativistic) kinetic energy of the (free) particle. Think of a free electron moving as part of a beam, or being fired one by one, or just floating around. We introduced the Hamiltonian in this paper for a two-state system: we had two (discrete) energy levels there, whose energy was written with reference to some average energy E0: E0 + A and E0 − A, respectively. The Hamiltonian was a matrix consisting of four Hamiltonian coefficients:

\[ \mathbf{H} = \begin{bmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{bmatrix} = \begin{bmatrix} E_0 & -A \\ -A & E_0 \end{bmatrix} \]
This Hamiltonian matrix was then used in a set of two differential equations (two states, two equations), from which we could then derive two wavefunctions whose absolute square gave us the probabilities, as a function of time, of being in one or the other state. Back to Dirac now. His Hamiltonian for what is the simplest of systems (a free particle) is just one real or scalar number:

\[ H = \frac{1}{2m}\left(p_x^2 + p_y^2 + p_z^2\right) \]
The following remarks are probably useful:
• The px, py, and pz are the momenta in the x-, y-, and z-direction in 3D space, respectively. They make up the momentum vector p = (px, py, pz), whose square p·p = p² = px² + py² + pz² = m²vx² + m²vy² + m²vz² is, quite simply, the squared magnitude of the (linear) momentum of our particle (p).
• The momentum is, of course, equal to p = m·v or, as a vector, p = m·v.
• The other factor in Dirac's Hamiltonian is 1/2m. Let us forget about the 1/2 (we will say something more about that below) and just note that we can combine the 1/m factor with the p² factor: p²/m = m·vx² + m·vy² + m·vz² = m·v².
• Hence, the Hamiltonian appears to be the classical non-relativistic kinetic energy KE = m·v²/2 (see the quick numerical check below). We must note that, because we are talking a particle moving in free space here, there is no potential energy. Hence, the kinetic energy is, effectively, the only energy that matters.
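This is the quick numerical check announced in the last bullet above; the mass and velocity values are arbitrary.

import numpy as np

m = 9.109e-31                            # electron mass (kg)
v = np.array([1.0e5, 2.0e5, -0.5e5])     # some hypothetical velocity (m/s)
p = m * v                                # momentum vector
print(np.dot(p, p) / (2 * m))            # p·p/(2m) ...
print(0.5 * m * np.dot(v, v))            # ... is the same number as m·v²/2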
Now, our story would become terribly long if we tried to explain the 1/2m factor, so we must refer the reader here to our treatment of Schrödinger's wave equation for an electron in free space. [71] At the same time, we must note the following points from our interpretation of Schrödinger's wave equation:
[70] He first published these in 1930, but we refer to the fourth edition (May 1957). We refer to sections 30 (the free particle) and 67 (the wave equation for the electron), mainly.
[71] This treatment can be found in the annex to our paper on de Broglie's matter-wave, as well as in other papers where we discuss Schrödinger's equation, such as our paper on electron propagation in a lattice.
• The 1/2m factor, which Dirac just copies from Schrödinger's wave equation, is a 1/2meff factor, with meff being the effective mass of an electron. Our electron model reveals this effective mass is the relativistic mass of the pointlike charge inside of the electron, which it acquires because of its motion.
• The above probably sounds cryptic but it fits with Schrödinger's Zitterbewegung model of an electron. The point to note is that the effective mass is half of the total mass of the electron, as Feynman convincingly demonstrates in his treatment of an electron moving in free space. [72]
While we do not want to elaborate our electron model once again, the point made above is important enough to repeat, and we want to quote Richard Feynman on it. He does not even bother to talk about Dirac's wave equation, and sticks to Schrödinger's wave equation for an electron in free space, which he writes as:

\[ \frac{\partial \psi}{\partial t} = i\,\frac{\hbar}{2m_{\text{eff}}}\,\nabla^2\psi \]

To be precise, he writes it as:

\[ i\hbar\,\frac{\partial \psi(x,t)}{\partial t} = -\frac{\hbar^2}{2m_{\text{eff}}}\,\frac{\partial^2 \psi(x,t)}{\partial x^2} \]

Noting that 1/i = −i and, yes, that the former expression generalizes from a one-dimensional linear x coordinate to 3D x = (x, y, z) position vectors, you will appreciate that both expressions are the same.
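As a sanity check, one can verify symbolically that a plane wave solves the free-space equation above when the usual dispersion relation holds. The sketch below keeps m_eff as a symbol; it is an illustration of the equation we just wrote down, not a derivation of it.

import sympy as sp

x, t, k, A, hbar, m_eff = sp.symbols('x t k A hbar m_eff', positive=True)
w = hbar * k**2 / (2 * m_eff)                     # dispersion relation: hbar*w = hbar^2*k^2/(2*m_eff)
psi = A * sp.exp(sp.I * (k * x - w * t))          # plane-wave trial solution

lhs = sp.I * hbar * sp.diff(psi, t)               # i*hbar*d(psi)/dt
rhs = -(hbar**2 / (2 * m_eff)) * sp.diff(psi, x, 2)
print(sp.simplify(lhs - rhs))                     # -> 0: the plane wave satisfies the equation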
You may also think there is another difference between Schrödinger’s and Dirac’s Hamiltonian: that
imaginary unit. It is there in Schrödinger’s equation, but not in Dirac’s Hamiltonian. Why not? The
answer is that we just gave you Dirac’s Hamiltonian. Not his wave equation. He will also bring in the
imaginary unit. Hence, Schrödinger's and Dirac's Hamiltonians for their wave equations are essentially the same, till now, that is. Do not worry. But this raises an interesting question: why is the imaginary unit there, in all equations, both for linear as well as for orbital motion? Indeed, we noted that the imaginary unit
serves as a rotation operator but, yes, this is an equation for linear motion (as opposed to Schrödinger’s
full-blown equation for the motion of an electron in atomic orbitals), so why is it there?
The answer is this: yes, the imaginary unit i is a rotation operator but, when linear motion is involved, it
brings in that cyclicity or periodicity of the wavefunction that we are seeking: the wavefunctions that
come out as solutions to the wave equation (any wave equation, really) are complex-valued functions,
always. We said a few things about that in this paper already but, for a full-blown development, see our
paper on the math behind what we refer to as Feynman’s time machine.
Let us move on. The next step in Dirac's development is that he replaces the so-called classical or non-relativistic energy concept by a relativistically correct formula for the kinetic energy. Now that is probably the most crucial mistake. We think it is a common mistake to consider Schrödinger's equation as essentially incorrect because it is, supposedly, not relativistically correct.
[72] We refer here to Feynman's Lectures on Physics, III-16, sections 1, 2 and 5.
Indeed, at no point in Feynman’s development of Schrödinger’s equation do we see a dependence on the
classical concept of kinetic energy. We should probably ask Feynman himself but, as he is now dead, we
can only quote him:
We do not intend to have you think we have derived the Schrödinger equation but only wish to
show you one way of thinking about it. When Schrödinger first wrote it down, he gave a kind of
derivation based on some heuristic arguments and some brilliant intuitive guesses. Some of the
arguments he used were even false, but that does not matter: the only important thing is that
the ultimate equation gives a correct description of nature.
There is no if or hesitation here, and the equation he refers to is the one which has meff in it. Not the one
with m or me in it. Indeed, it is the relation between meff and me that Feynman did not manage to
solve. Our electron model does do the trick: meff = m/2. Half of the electron energy is potential, and the
other half is kinetic, and so that is the correct energy concept to be used in the wave equation. We think
there is no need to invoke classical or relativistic formulas.
Let us quickly go to the next logical question. How did Feynman and all those other great scientists get it wrong, then? Are we sure about what we write? Why that factor of two: why 2 times meff? Again, we wrote about that before, in this paper and in others: Richard Feynman himself, and all textbooks based on his treatment of the matter, rather shamelessly substitute me (rather than me/2) for the effective mass of an electron, simply noting or assuming that the effective mass of an electron becomes the free-space mass of an electron outside of the lattice. [73] That is plain wrong.
Now you will say: if it is a mistake to do that, then Schrödinger’s equation would not work, and it does.
My answer here is this: it does because of another mistake or imperfection. Schrödinger’s equation does
not incorporate electron spin and, hence, models orbitals for electron pairs: two electrons instead of
one. One with spin up, and the other with spin down. You may want to argue with that, but we need to
get on with Dirac’s wave equation, so that is what we will do now, and we will leave it to you to think
about all of the above.
So, back to Dirac's Hamiltonian. We argued that Dirac did not quite know what Schrödinger's wave equation actually modeled and that he got confused by the combination of the 1/2meff factor and the p·p = p² factor: he thinks of it as classical energy and thinks it should be replaced by a relativistic kinetic energy concept. Let us quote him:

For a rapidly moving particle, such as we often have to deal with in atomic theory, [p²/2m = m·v²/2] should be replaced by the relativistic formula c·(m²c² + px² + py² + pz²)^(1/2). [74]
We said there was no need to introduce some kind of relativistic energy concept here but, now that Dirac has done so, we must explain the formula, of course: why and how would the c·(m²c² + px² + py² + pz²)^(1/2) formula correspond to a relativistic energy concept?
[73] See Feynman, III-16, equations 16.12 and 16.13.
[74] We already gave you the reference above, but it is good to be precise here. Here, we quote from Dirac's Principles of Quantum Mechanics (fourth edition), section 30: it is the paragraph which introduces equation (23) in his development.
Dirac himself notes that the constant term mc² corresponds to “the rest-energy of the particle in the theory of relativity” and that “it has no influence on the equations of motion.” In other words, we must probably think of the m²c² as m0²c². That is rather confusing notation, to say the least. However, his remark indicates that he may have gotten this from looking at one or both of these relativistically correct equations [75] below:

\[ E = mc^2 = \gamma m_0 c^2 = \frac{m_0 c^2}{\sqrt{1 - v^2/c^2}} \qquad\text{and}\qquad E^2 - p^2c^2 = m_0^2c^4 \]

But how, exactly? If the m in the m²c² is actually equal to m0 (as Dirac seems to suggest), then the c·(m0²c² + px² + py² + pz²)^(1/2) factor works out like this:

\[ c\sqrt{m_0^2c^2 + p^2} = c\sqrt{m_0^2c^2 + \gamma^2 m_0^2 v^2} = m_0 c^2\sqrt{\frac{1 - v^2/c^2 + v^2/c^2}{1 - v^2/c^2}} = \gamma m_0 c^2 = mc^2 = E \]
We note that the squared momentum m²v² is relativistically correct. We must only make sure we get the Lorentz factor in when switching from relativistic to rest mass. It is rather obvious but, in light of Dirac's rather sloppy treatment of m and m0, we want to make sure you have it all in front of you:

\[ p = mv = \gamma m_0 v \;\Longleftrightarrow\; p^2 = m^2v^2 = \gamma^2 m_0^2 v^2, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}} \]
The point is, we can now understand Dirac's Hamiltonian somewhat better:

\[ H = c\sqrt{m_0^2c^2 + p_x^2 + p_y^2 + p_z^2} = \gamma m_0 c^2 = mc^2 = E \]
Really? Dirac's m for m0 substitution was fishy, at best, and plain wrong, at worst. In any case, we know now that the m in the c·(m²c² + px² + py² + pz²)^(1/2) expression must be the electron's rest mass m0, but it is very tricky to keep track of things like that in such rather complicated developments, so we are quite suspicious of the consistency of Dirac's argument, surely because we feel he started off on the wrong foot from the start!
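The chain of equalities above is easy to verify numerically. The sketch below uses arbitrary values (SI units) and merely confirms the algebra.

import numpy as np

c = 299792458.0                          # speed of light (m/s)
m0 = 9.109e-31                           # electron rest mass (kg)
v = 0.6 * c                              # some hypothetical speed
gamma = 1.0 / np.sqrt(1.0 - (v / c) ** 2)
m = gamma * m0                           # relativistic mass
p = m * v                                # relativistic momentum p = m*v = gamma*m0*v

lhs = c * np.sqrt(m0**2 * c**2 + p**2)   # c*sqrt(m0^2*c^2 + p^2)
rhs = m * c**2                           # m*c^2 = gamma*m0*c^2 = E
print(np.isclose(lhs, rhs))              # -> True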
Let us make a jump now from Dirac's Principles of Quantum Mechanics to his Nobel Prize Lecture, in which he says his wave equation is based on this energy equation:

\[ \frac{W^2}{c^2} - p_1^2 - p_2^2 - p_3^2 - m^2c^2 = 0 \]
Where does that come from? Dirac immediately states that W should be interpreted as the kinetic energy and, as we would expect, that the pr are the components of the (linear) momentum vector (r = 1, 2, 3). To be frank, it is immediately clear that Dirac is equally confused about energy concepts here: this time he equates total energy to kinetic energy. The implicit or explicit argument is that we are talking about a free particle and, hence, that there is no potential energy. We strongly dispute that, not only because it is obvious that any wave equation resulting from this equation would be of little use in real life (space is filled with potentials, and free space, therefore, is a theoretical concept only) but, more importantly, because our interpretation of wave-particles suggests kinetic and potential energy each constitute half of the total mass or energy of the elementary particle!
[75] For the derivation of these formulas, which are somewhat less straightforward than they may look at first, see Feynman, I-15-9, equation (15.18). We may also want to look back at the derivation of formulas like E² − p²c² = m0²c⁴, which one can find in Feynman, I-16-5, equation (16.13).
In any case, let us give you the formula which Dirac uses. It is this:

\[ E^2 - p^2c^2 = m_0^2c^4 \]

It is just one of the many relativistically correct formulas involving mass, momentum and energy, and this one, in particular, you can find in Feynman I-16-5. It's equation (16.13), to be precise. All you need to do is substitute W for E = mc² and then divide all by c²:

\[ W^2 - p^2c^2 = m_0^2c^4 \;\Longleftrightarrow\; \frac{W^2}{c^2} - p^2 - m_0^2c^2 = 0 \]
So here you are. All the rest is the usual hocus-pocus: we substitute classical variables by operators, we let them operate on a wavefunction, and then we have a complicated differential equation to solve. As we made abundantly clear in this and other papers [76], when you do that, you will find nonsensical solutions, except for the one that Schrödinger pointed out: the Zitterbewegung electron, which we believe corresponds to the real-life electron.
[76] One of our papers you may want to check here is our brief history of quantum-mechanical ideas. We had a lot of fun writing that one, and it is not technical at all.
References
The reference list below is limited to the classics we actively used, and publications of researchers whom
we have been personally in touch with:
Richard Feynman, Robert Leighton, Matthew Sands, The Feynman Lectures on Physics, 1963
Albert Einstein, Zur Elektrodynamik bewegter Körper, Annalen der Physik, 1905
Paul Dirac, Principles of Quantum Mechanics, 1958 (4th edition)
Conseils Internationaux de Physique Solvay, 1911, 1913, 1921, 1924, 1927, 1930, 1933, 1948
(Digithèque des Bibliothèques de l'ULB)
Jon Mathews and R.L. Walker, Mathematical Methods of Physics, 1970 (2nd edition)
Patrick R. LeClair, Compton Scattering (PH253), February 2019
Herman Batelaan, Controlled double-slit electron diffraction, 2012
Ian J.R. Aitchison, Anthony J.G. Hey, Gauge Theories in Particle Physics, 2013 (4th edition)
Timo A. Lähde and Ulf-G. Meissner, Nuclear Lattice Effective Field Theory, 2019
Giorgio Vassallo and Antonino Oscar Di Tommaso, various papers (ResearchGate)
Diego Bombardelli, Lectures on S-matrices and integrability, 2016
Andrew Meulenberg and Jean-Luc Paillet, Highly relativistic deep electrons, and the Dirac equation,
2020
Ashot Gasparian, Jefferson Lab, PRad Collaboration (proton radius measurement)
Randolf Pohl, Max Planck Institute of Quantum Optics, member of the CODATA Task Group on
Fundamental Physical Constants
David Hestenes, Zitterbewegung interpretation of quantum mechanics and spacetime algebra (STA),
various papers
Alexander Burinskii, Kerr-Newman geometries (electron model), various papers
Ludwig Wittgenstein, Tractatus Logico-Philosophicus (1922) and Philosophical Investigations
(posthumous)
Immanuel Kant, Kritik der reinen Vernunft, 1781