arXiv:1311.5470v1 [physics.gen-ph] 8 Nov 2013
Elements of physics for the 21st century
Werner A. Hofer
Department of Physics, University of Liverpool, L69 3BX Liverpool, Britain
Given the experimental precision in condensed matter physics – positions are measured with errors
of less than 0.1pm, energies with about 0.1meV, and temperature levels are below 20mK – it can be
inferred that standard quantum mechanics, with its inherent uncertainties, is a model at the end of
its natural lifetime. In this presentation I explore the elements of a future deterministic framework
based on the synthesis of wave mechanics and density functional theory at the single-electron level.
The paper describes research presented at the
EmQM 13 conference. It gives an overview of work on
quantum mechanics through about fifteen years, from the
first paper on extended electrons and photons published
in 1998 [1], to the last paper on quantum nonlocality
and Bell-type experiments in 2012 [2]. A final section
contains the first steps towards a density functional the-
ory of atomic nuclei, presented for the first time at the
conference in Vienna. The publications on quantum
mechanics which I published in this period show a gap
from about 2002 to 2010. This was due to the realization
on my part that I could not account for a simple fact: I
could not explain how the electron changes its wavelength
when it changes its velocity. I felt at the time that not
understanding this probably meant that I could not
understand the electron. Hence I only continued the
development of this framework after, prompted by a
student of mine, I had found a solution which seemed to
make sense. For that
I have to thank this particular student. I also have to
thank Gerhard Grössing and Jan Walleczek for organiz-
ing this great conference, and the Fetzer Franklin Fund
for very generous financial support.
I think we can say today that we actually do under-
stand quantum mechanics. Maybe not in the last details,
and maybe not in its full depth, but in the broad work-
ings of the mathematical formalism, the basic physics
which it describes, and the deep flaws buried within its
seemingly indisputable axioms and theorems. In that,
we differ from Richard Feynman, who famously thought
that nobody could actually understand it. However, this
was said before two of the most important inventions
for science in the twentieth century became available to
researchers: high-performance computers, and scanning
probe microscopes. Computers changed the way science
is conducted. Not only do they allow for exquisite ex-
perimental control and an extensive numerical analysis
of all experiments, they also serve as a predictive tool,
if the models include all aspects of a physical system.
This, in turn, means that successful theory and successful
quantitative predictions, based on local quantities, make
it increasingly implausible that processes exist which
operate outside space and time. Then the solution to the
often paradoxical theoretical predictions and sometimes
incomprehensible experimental outcomes cannot lie in
yet another mathematical framework even more remote
from everyday experiences than quantum mechanics, but
in the rebuilding of a model in microphysics which is
both rooted in space and time and allows for a
description of single events at the atomic scale. This
paper aims at delivering the first building blocks of such
a comprehensive model.

FIG. 1: Top frames, clockwise: development of scanning probe
microscopy over the last 30 years. Au terraces 1982 [4], Au
atoms 1987 [5], atomic species 1993 [6], electron spin 1998 [8],
atomic vibrations 1999 [9], spin-flip excitations 2007 [11],
forces on single atoms 2008 [3].
That we can say today that we do understand quantum
mechanics is to a large extent the merit of thousands
of condensed matter physicists and physical chemists,
who have in the last thirty years painstakingly removed
layer after layer of complexity from their experimental
and theoretical research until they could measure and
analyze single processes on individual atoms and even
electrons. Today we can measure and simulate the forces
necessary to push one atom across a smooth metal surface
[3], vibrations created by single electrons impinging on
molecules [9], torques on molecules created by ruptures
of single molecular bonds [10], or single spin-flip excita-
tions on individual atoms [11]. See Fig. 1 for the devel-
opment of experiments over the last thirty years. These
experiments have pushed precision to whole new levels.
Today, distances can be measured with an accuracy of
0.05 pm [12], which is about 1/4000th of an atomic di-
ameter, and energies with a precision of 0.1meV, which
is about 1/20000th of the energy of a chemical bond [11].
Given these successes and this accuracy, of which
physicists could only dream at the time of Einstein,
Heisenberg, Schrödinger, or Dirac, it would be
intellectually deeply unsatisfying if we were today still
limited to the somewhat crude theoretical framework of
standard quantum mechanics, developed at the beginning
of the last century.
The lesson I have learned from my work as a condensed
matter theorist, trying to make sense of results my
experimental colleagues threw at me, is this: a physical
process has to be thoroughly understood before a suitable
theoretical model can be constructed. It is probably one
of the more self-defeating features of Physics in the 20th
century that new developments mostly took the opposite
route: equations came first, processes and physical effects
a distant second. This, I think, is about to change again
in the 21st century, as mathematical guidance without
physical understanding has led Physics thoroughly astray.
The main problem faced by theorists today is the pre-
cision of experiments at the atomic scale, because it ex-
ceeds by far the limit encoded in the uncertainty rela-
tions. This has been the subject of debate for some time
now, following the publication of Ozawa’s paper in 1988
[13], which demonstrated that the limit can be broken
by certain measurements. An even larger violation can
be observed in measurements of scanning probe instru-
ments [14]. If the instrument measures, via its tunneling
current, the variation of the electron density across a sur-
face, then a statistical analysis of such a measurement is
straightforward. In the conventional model electrons are
assumed to be point particles. The same assumption is
made in quantum mechanics, when the formalism is in-
troduced via Hamiltonian mechanics and Poisson brack-
ets. It is also the conventional wisdom in high energy
collision experiments, where one finds that the radius of
the electron should be less than 10⁻¹⁸ m. If this is
correct, then the density is a statistical quantity derived
from the probability of a point-like electron being found
at a certain location. This has two consequences:
1. A measurement of a certain distance with a certain
precision for a particular point on the surface can
only be distinguished from the measurement at a
neighboring point if the standard deviation is lower
than a certain value.
2. A certain energy limit allows only a certain lower
limit for the standard deviation in these measurements.
One can now show quite easily [14] that the standard de-
viation at realistic energy limits (in case of a silver surface
the band energy) is about two orders of magnitude larger
than the possible value for state-of-the-art measurements
today. The allowed limit for the standard deviation in
the experiments is about 3pm, while the standard devi-
ation from the band energy limit is about 300pm. The
consequence for the standard framework of quantum me-
chanics is quite devastating: the uncertainty principle,
and by association the whole framework of operator me-
chanics, becomes untenable, because it is contradicted by
experiments. It is precisely this contradiction, which has
been claimed by theoretical physicists to be impossible.
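The order of magnitude of this discrepancy can be sketched numerically. The following is not the calculation of Ref. [14] itself, only a back-of-the-envelope estimate: it assumes the bound Δx ≥ ħ/(2Δp), with the momentum spread fixed by an energy limit via Δp = √(2mE); the energy scale of 0.1 eV is an assumption chosen here because it reproduces the ~300 pm figure quoted above.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # one electronvolt, J

def min_std_dev(energy_ev):
    """Lower bound on the position standard deviation from the
    uncertainty relation, dx >= hbar / (2 dp), with the momentum
    spread estimated from an energy limit E as dp = sqrt(2 m E)."""
    dp = math.sqrt(2.0 * M_E * energy_ev * EV)
    return HBAR / (2.0 * dp)

# An energy scale of ~0.1 eV reproduces the ~300 pm bound quoted
# in the text (the precise value used in [14] may differ)
dx = min_std_dev(0.1)
print(f"minimum standard deviation: {dx * 1e12:.0f} pm")
```

Even energy limits of several eV leave the bound at tens of picometres, still far above the ~3 pm standard deviation consistent with state-of-the-art measurements.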
It also has one consequence, which can be seen as the
one principle of the following:

The density of electron charge is a real physical property.

The density of electron charge has the same ontological
status as electromagnetic fields or macroscopic mass or
charge distributions. The only difference, and the ori-
gin of many of the complications arising in atomic scale
physics is that the density not only interacts with exter-
nal energy sources, but it also interacts with an electron’s
internal spin density.
The theoretical framework combines two separate
models. Both of them are due to physicists born in Vi-
enna, so the location of a workshop on emergent quantum
mechanics, from my personal perspective, could not have
been better chosen. The first of these physicists is Erwin
Schrödinger, born in Vienna in 1887, the second one is
Walter Kohn, born in Vienna in 1923. The fundamental
statements, underlying these two separate models, are
the following:
A system is fully described by its wavefunction
(Schrödinger).

A system is fully described by its density of electron
charge (Kohn).
I have been asked, at this workshop, whether the viola-
tion of the uncertainty relations could be accounted for
by a reduced limit of the constant, e.g. somewhat smaller
than ℏ/2, a solution which was proposed by Ozawa for
the violations detected in the free-mass measurements
[13]. While this seems, at least for the time being, a
possible solution, it disregards the ultimate origin of the
uncertainty relations. They are based, conceptually, on
the assumption that electrons are point particles (this is
the link to classical mechanics and Poisson brackets), and
the obligation to account for wave properties of electrons.
If wave properties are real, a view taken in the current
framework, then there will be no theoretical limit to the
precision in their description. A remedy along the lines
sketched above then becomes untenable.
If the density of electron charge is a real physical
property, then a common framework must be developed
which allows one to map the density onto wavefunctions
in the Schrödinger theory. Wavefunctions famously do not
have physical reality in the conventional model. However,
their square does, according to the Born rule. Here, we
want to demonstrate that this is correct to some extent
also within the new model, but with one important limi-
tation: even though wavefunctions do not have the same
reality as mass or spin densities, they can be assembled
from these two - physically real - properties.
A. Single electrons
1. Density and energy
It has been recognized by some of the greatest physi-
cists in the 20th century, among them Albert Einstein,
that electrons play a key role in modern physics. Indeed,
one could argue that all of the physical sciences at the
atomic and molecular level, Physics, Chemistry, and
Biology, are concerned with only one topic: the behavior and proper-
ties of electrons. This is also reflected in the celebrated
theorem of Walter Kohn: all properties of a physical sys-
tem, composed of atoms, are defined once the distribu-
tion of electron charge within the system is determined
[15]. The solution to the problem of electron density dis-
tribution is formulated in density functional theory in a
Schrödinger-type equation. The spin density is, in this
framework, denoted as an isotropic spin-up or spin-down
component of the total charge density, the energy related
to this spin-density is computed with the help of Pauli-
type equations.
However, the framework does not provide physical in-
sights into either spin-densities at the single electron
level, or how spin-densities will change in external mag-
netic fields. What was missing, so far, was a clear connec-
tion between the density of electron charge, on the one
hand, and the spin density on the other hand. A connec-
tion, which should explain the physical origins of wave-
functions in the standard model. It should also explain,
how density distributions may change as a consequence
of changes to the electron velocity, thus underpinning the
wave properties of electrons, found in all experiments.
It turns out to be surprisingly simple to construct such
a model. Once it is accepted that electrons must be ex-
tended, the wave features must be part of the density
distributions of free electrons themselves. In this case
the density of charge must also be wavelike. This poses a
problem for both, standard quantum theory and density
functional theory, because in both cases free electrons are
described by plane waves:

\psi(\mathbf{r}) = \frac{1}{\sqrt{V}} \exp(i\,\mathbf{k}\cdot\mathbf{r}) \qquad (1)
In this case the Born rule gives a constant value for
the probability density, for the mass density and for the
charge density: free electrons, then, do not have any dis-
tinctive property related to their velocity. But the - now
physically real - wave properties of mass and spin densi-
ties can be recovered by first assigning a wave-like behav-
ior to the density of electron mass moving in the
z-direction:

\rho(z,t) = \frac{\rho_0}{2}\left[1 + \cos\left(\frac{4\pi}{\lambda} z - 4\pi\nu t\right)\right] \qquad (2)

where ρ0 is the inertial mass density of the electron, and λ
and ν depend on the momentum and frequency according
to the de Broglie and Planck relations. At zero frequency
and infinite wavelength, describing an electron at rest,
the mass density is equal to the inertial mass density.
However, if the electron moves, then the density is
periodic in z and
t. This requires the existence of an additional energy
reservoir to account for the variation in kinetic energy
density. We next introduce the spin density as the
geometric product of two field vectors, E and H, which
are perpendicular to the direction of motion. These
fields are:

\mathbf{E}(z,t) = \mathbf{e}_1 E_0 \sin\left(\frac{2\pi}{\lambda} z - 2\pi\nu t\right), \qquad \mathbf{H}(z,t) = \mathbf{e}_2 H_0 \sin\left(\frac{2\pi}{\lambda} z - 2\pi\nu t\right) \qquad (3)

Spin, in this picture, is the geometric product of the two
vector components. It is thus a chiral (and for the free
electron imaginary) field vector, which is either parallel
or anti-parallel to the direction of motion. The total
energy density is constant and equal to the inertial
energy density if we impose a condition on the spin
amplitude:

E_{\rm kin}(z,t) = \frac{1}{2}\rho_{\rm el} v^2 \cos^2\left(\frac{2\pi}{\lambda} z - 2\pi\nu t\right)

E_{\rm spin}(z,t) = \frac{1}{2} S_0 \sin^2\left(\frac{2\pi}{\lambda} z - 2\pi\nu t\right) =: \frac{1}{2}\rho_{\rm el} v^2 \sin^2\left(\frac{2\pi}{\lambda} z - 2\pi\nu t\right) \qquad (4)

E_{\rm tot} = E_{\rm kin}(z,t) + E_{\rm spin}(z,t) = \frac{1}{2}\rho_{\rm el} v^2
It should be noted that not only the frequency, but also
the intensity of the spin component depends on the veloc-
ity of the electron. This is in marked contrast to classical
electrodynamics, where the energy of a field only depends
on the intensity but not on the frequency. Here, it is a
necessary consequence of the principle that the electron
density is a real physical variable and it establishes a
link between the quantum behavior of electrons and the
quantum behavior of electromagnetic fields.
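The bookkeeping behind Eqs. (2)-(4), namely that the oscillating mass density and the spin density always add up to the constant inertial density, can be verified in a few lines of numerics; the parameter values below are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

lam, nu, rho0 = 2.0, 3.0, 1.0         # wavelength, frequency, rest density (arbitrary)
z = np.linspace(0.0, 10.0, 1001)
t = 0.37                               # any instant

phase = 2 * np.pi * z / lam - 2 * np.pi * nu * t

# Eq. (2): mass density (1/2) rho0 [1 + cos(2*phase)], which is
# identically rho0 * cos^2(phase)
rho = 0.5 * rho0 * (1 + np.cos(2 * phase))

# Spin density chosen so that the total equals the inertial density
S = rho0 * np.sin(phase) ** 2

# The total density, and hence the total energy density, is constant
print("rho + S is constant:", np.allclose(rho + S, rho0))
```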
This behavior gives a much more precise explanation
for the validity of Planck’s derivation of black body radi-
ation. If every electromagnetic field, due to emission or
absorption of energy by electrons, must follow the same
characteristic, then every energy exchange must also be
proportional to the frequency of the field. Then Planck's
assumption that E = hν is nothing but a statement of
this fact. However, that the intensity also follows the
same rule has been unknown so far. In our view this
could be the fundamental principle for a general frame-
work of a non-relativistic quantum electrodynamics to be
developed in the future.
It should also be noted that the electrostatic repulsion
of such an extended electron has to be accounted for,
as it is in density functional theory (DFT), by a negative
cohesive energy of the electron of -8.16 eV. In DFT this
energy component is known as the self-interaction.
2. Wavefunctions
It is straightforward to assemble wavefunctions from
mass and spin density components, following this route.
Wavefunctions are in our framework multivectors con-
taining the even elements of geometric algebra in three
dimensional space [17]. The even elements are real num-
bers and bivectors (product of two vectors), the 4πsym-
metry, which is the basis of Fermi statistics in the conven-
tional framework, follows from the symmetry properties
of multivectors under rotations in space. The real part
ψmof a general wavefunction can be written as a scalar
part, equal to the square root of the number density:
\psi_m = \sqrt{\rho_0}\,\cos\left(\frac{2\pi}{\lambda} z - 2\pi\nu t\right) \qquad (5)
In geometric algebra, this is the scalar component of a
general multivector. The bivector component ψsis the
square root of the spin component, times the unit vec-
tor in the direction of electron propagation, times the
imaginary unit. It is thus:
\psi_s = i\,\mathbf{e}_3\,\sqrt{S_0}\,\sin\left(\frac{2\pi}{\lambda} z - 2\pi\nu t\right) \qquad (6)
The scalar component and the bivector component for
an electron are equal to the inertial number density:
\rho_0 = S_0, \qquad \rho + S = \rho_0 = \text{constant} \qquad (7)
The same result can be reached by applying the Born
rule, for the wavefunction defined as:
\psi^\dagger \psi = \rho + S = \rho_0 = \text{constant} \qquad (8)
The difference to the conventional formulation is that the
wavefunction is a multivector, not a complex scalar. It
also makes the spin component a chiral vector, which is
important for the understanding of spin measurements.
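A minimal numerical sketch of this assembly, under the simplifying assumption that for a fixed propagation direction the bivector i e3 acts like the complex imaginary unit (which suffices for the free electron considered here), recovers the Born-rule result with the constraint ρ0 = S0:

```python
import numpy as np

lam, nu, rho0 = 2.0, 3.0, 1.0          # illustrative values
S0 = rho0                               # constraint: rho0 = S0
z = np.linspace(0.0, 10.0, 1001)
t = 0.11
phase = 2 * np.pi * z / lam - 2 * np.pi * nu * t

# Scalar (mass) part and bivector (spin) part of the wavefunction;
# the bivector i*e3 is represented by the complex imaginary unit
psi_m = np.sqrt(rho0) * np.cos(phase)
psi_s = np.sqrt(S0) * np.sin(phase)
psi = psi_m + 1j * psi_s

# Born rule: psi^dagger psi = rho + S = rho0 = constant
density = (psi.conj() * psi).real
print("constant density:", np.allclose(density, rho0))
```

The real and imaginary parts squared separately reproduce the mass density ρ and the spin density S, so the wavefunction is assembled entirely from the two physically real components.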
Formally, we can recover the standard equations of
wave mechanics, if we define the Schr¨odinger wavefunc-
tion as a complex scalar, retaining the direction of the
spin component as a hidden variable. The wavefunction
for a free electron then reads:
\psi_S = \sqrt{\rho_0}\,\exp\left[i\left(\frac{2\pi}{\lambda} z - 2\pi\nu t\right)\right] \qquad (9)
In the conventional framework the dependency of the
wavefunction and the Schr¨odinger equation on external
scalar or vector potentials is usually justified with ar-
guments from classical mechanics and energy conserva-
tion. In our approach, the justification is the changed
frequency and wavevector of electrons if they are subject
to external fields. If we assume that the frequency of
the electron varies from the value inferred from the de
Broglie and Planck relations:
i\hbar\,\frac{\partial \psi_S}{\partial t} = h\nu\,\psi_S \ne -\frac{\hbar^2}{2m}\nabla^2 \psi_S \qquad (10)
then the difference, which is observed in the photoelectric
effect, can be accounted for by an additional term in
the equation which is linear with the measured scalar
potential. Then:
i\hbar\,\frac{\partial \psi_S}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2 \psi_S + V \psi_S \qquad (11)
The second situation, where this can be the case, ob-
served for example in Aharonov-Bohm effects, is when
the wavelength does not comply with the wavelength in-
ferred from the frequency and the Planck and de Broglie
relations. In this case one can account for the observation
by including the vector potential in the differential term
of the equation to arrive at the general equation [16]:
i\hbar\,\frac{\partial \psi_S}{\partial t} = \frac{1}{2m}\left(-i\hbar\nabla - e\mathbf{A}\right)^2 \psi_S + V \psi_S \qquad (12)
The important difference, as will be seen presently, is that
all these effects occur at a local level and can therefore
be analyzed locally: a philosophy, which also forms the
core of the local density approximation in DFT.
B. Many-electron systems
In a many-electron system motion of electrons is cor-
related throughout the system and mediated by crystal
fields within the material. If the spin component in gen-
eral is a bivector, and if it is subject to interactions with
other electrons in the system, then the general, scalar
Schr¨odinger equation will not describe the whole physics
of the system. Simply accounting for all interactions by
a scalar effective potential veff would recover the Kohn-
Sham equations of DFT, if exchange and correlation were
included. It would do so, however, for both density
components and spin components, since:

\left[-\frac{\hbar^2}{2m}\nabla^2 + v_{\rm eff} - \mu\right]\left(\rho^{1/2} + i\,\mathbf{e}_3 S^{1/2}\right) = 0, \quad\text{i.e.}\quad \left[-\frac{\hbar^2}{2m}\nabla^2 + v_{\rm eff} - \mu\right]\rho^{1/2} = 0, \qquad \left[-\frac{\hbar^2}{2m}\nabla^2 + v_{\rm eff} - \mu\right] S^{1/2} = 0 \qquad (13)
In this case the solutions of the equation, single Kohn-
Sham states, would exist throughout the system and not
lend themselves to a local analysis of physical events.
More importantly, such a model would not include an in-
dependent spin component in the theoretical description.
We therefore propose a different framework for a many
electron system, which scales linearly with the number
of electrons and remains local. Such a model can be
achieved by including a bivector potential into a gener-
alized Schrödinger equation in the following way:

\left[-\frac{\hbar^2}{2m}\nabla^2 + v_{\rm eff} + i\,\mathbf{e}_v v_b - \mu\right]\left(\rho^{1/2} + i\,\mathbf{e}_s S^{1/2}\right) = 0 \qquad (14)

where we have changed the spin component to describe
a general spin direction e_s. The geometric product of
two vectors is the sum of a real scalar and an imaginary
vector (a bivector). The equation of motion for a general
many-electron system then reads:

\left[-\frac{\hbar^2}{2m}\nabla^2 + v_{\rm eff} - \mu\right]\rho^{1/2} = \mathbf{e}_v \cdot \mathbf{e}_s\, v_b\, S^{1/2} \qquad (15)

\left[-\frac{\hbar^2}{2m}\nabla^2 + v_{\rm eff} - \mu\right]\mathbf{e}_s S^{1/2} + \mathbf{e}_v v_b\, \rho^{1/2} = 0 \qquad (16)
If e_v = 0, we recover Eq. (13). As inspection shows, the
coupled equations only have a solution if the direction
of the bivector potential is equal to the direction of spin
(e_v = e_s), which reduces the problem to:

\left[-\frac{\hbar^2}{2m}\nabla^2 + v_{\rm eff} - \mu\right]\left(\rho^{1/2} + S^{1/2}\right) = v_b\left(\rho^{1/2} + S^{1/2}\right) \qquad (17)

With the transformation \tilde\rho^{1/2} = \rho^{1/2} + S^{1/2} and for v_b = 0
this equation is identical to the Levy-Perdew-Sahni
equation derived for orbital-free DFT in the 1980s [18]:

\left[-\frac{\hbar^2}{2m}\nabla^2 + v_{\rm eff} - \mu\right]\tilde\rho^{1/2} = 0 \qquad (18)
One can reduce the expression to the conventional
Schr¨odinger equation for the hydrogen atom by setting
veff =vn, the Coulomb potential of the central nucleus.
The equation then has two groundstate solutions, both
radially symmetric:
\rho^{1/2} = \frac{C}{\sqrt{2}}\, e^{-\alpha r}, \qquad S^{1/2} = \pm\frac{C}{\sqrt{2}}\, e^{-\alpha r} \qquad (19)

where C is a constant and α is the inverse Bohr radius.
The vector esis the radial unit vector erand the two
spin directions are inward and outward. The same so-
lution will apply to all s-like charge distributions, also,
therefore, to the valence electron of silver (see the discus-
sion of Stern-Gerlach experiments below).
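That the radially symmetric exponential of Eq. (19) solves the hydrogen equation can be checked in a few lines. The sketch below works in ordinary atomic units (ħ = m = e = 1, an assumption for convenience) and evaluates the local energy [-½∇² - 1/r]ψ / ψ analytically for ψ = e^(-αr):

```python
import math

def local_energy(r, alpha=1.0):
    """For psi = exp(-alpha r), evaluate [-1/2 lap(psi) - psi/r] / psi
    in atomic units. The radial Laplacian of a spherically symmetric
    function is psi'' + (2/r) psi'."""
    psi = math.exp(-alpha * r)
    d1 = -alpha * psi            # first radial derivative
    d2 = alpha ** 2 * psi        # second radial derivative
    lap = d2 + 2.0 / r * d1
    return (-0.5 * lap - psi / r) / psi

# With alpha = 1 (the inverse Bohr radius) the local energy is the
# same constant E = -1/2 at every radius, i.e. the exponential is
# an exact ground-state solution
print([round(local_energy(r), 10) for r in (0.5, 1.0, 2.0, 5.0)])
```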
The great advantage of the formulation is its simplicity
and the reduced number of variables. Both ρ and S are
scalar variables. In addition, we have to find the
directions of the unit vectors e_s = e_v for every point of
the system. This reduces the time-independent problem
to a problem of finding five scalar components in real
space. Compared to the standard formulation of many-
body physics, where one has to find a wavefunction of
3N variables, where N is the number of electrons, or to
standard DFT, which scales cubically with N, the
approach is much simpler.
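A toy count of degrees of freedom on a hypothetical real-space grid illustrates the scaling argument; the grid size and electron number below are invented for illustration only.

```python
# Degrees of freedom on a real-space grid with M points per axis,
# comparing a full many-body wavefunction of 3N variables with the
# five scalar fields (rho, S, and a unit vector) of this model
M = 50          # grid points per axis (hypothetical)
N = 10          # number of electrons (hypothetical)

many_body = M ** (3 * N)    # wavefunction values on the 3N-dim grid
this_model = 5 * M ** 3     # five scalar components per grid point

print(f"many-body wavefunction values: {many_body:.2e}")
print(f"five scalar fields:            {this_model:.2e}")
```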
However, the effective potential veff and the bivector
potential vbin this model are generally not known and
have to be determined for every system. In standard
DFT this is done by calculating the exchange-correlation
functional for simpler systems, or for very small systems
with high-precision methods. The same route will have
to be taken for this new model of many-body physics.
Judging from the development of standard DFT this pro-
cess will probably take at least ten years of development
before reliable methods can be routinely used in simula-
tions. But we think that this method and this approach
to many-body physics will also be an element of physics
in the 21st century.
As stated in the introduction, we consider the fact that
quantum mechanics does not allow for a detailed analysis
of single events a major drawback of the theory. How-
ever, a theoretically more advanced model will have to
pass the test that it can actually deliver these insights.
This value statement, i.e. that a theoretical framework is
superior not because it obtains higher precision in the nu-
merical predictions, but it is superior because it provides
causal insight into physical processes, is somewhat alien
to the current debate about quantum mechanics. The
tacit agreement seems to be that no theory can provide
such an insight. This is one of the fundamental assump-
tions of the Copenhagen school. There, it is stated that
no theoretical model can be more than a coherent frame-
work for obtaining numbers in experimental trials. But
we do not actually know that this is true, because the
assumption that it is true contains an assumption about
reality: namely, that reality cannot in principle be
subjected to an analysis in terms of cause and effect in
physical processes. The argument is thus not even
logically consistent with its own belief system.
Here, we want to show that the analysis of single events
in terms of cause and effect is possible also at the atomic
scale. This, we think, demonstrates more than anything
else the problems of the standard framework. To an un-
biased observer it appears sometimes as if the mathe-
matical tools had, over the last century, acquired a life
of their own, so that they are seen as a separate reality,
which exists independently of space and time. Hilbert
space seems such a concept, and the inability of the stan-
dard framework to actually locate a trajectory in space
is then seen as proof of the reality of infinitely many tra-
jectories in Hilbert space. This is a logical fallacy, as we
shall demonstrate by an analysis of crucial experiments
in the following.
A. Acceleration of electrons
We are quite used to the fact that the wavelength of
an electron is inversely proportional to its momentum. It
is thus also quite normal to write a wavefunction of a
particular free electron, which contains a variation of its
amplitude according to this momentum. However, when
an electron is accelerated, then standard theory is refer-
ring us to the Ehrenfest theorem [19]. Incidentally, also
Paul Ehrenfest was born in Vienna, in 1880. But his the-
orem only describes the change of an expectation value
in a system. It does not allow us to understand how the
wavefunction changes its wavelength, or how the
frequency of the wave increases when it interacts with
an accelerating potential. Within the present model this
is described exactly, at the local level, by a new
equation, which we call the local Ehrenfest theorem. Its
mathematical expression is:

\mathbf{f} = -\left(\psi^\dagger\psi\right)\nabla\phi = m\left(\psi^\dagger\psi\right)\frac{d\mathbf{v}}{dt} \qquad (20)
It states that the force (density) at a particular point of
the electron is exactly equal to the gradient of an external
potential φ, and that it is described by its classical formu-
lation, the acceleration of its inertial mass. The reason
that it is described by this equation is that the number
density or the mass density (here we use the two notions
interchangeably) is complemented by the spin density to
yield a constant:
\rho + S = \rho_0 = \text{constant} \qquad (21)
The same applies to the square of the wavefunction,
which is:
\psi^\dagger\psi = \rho + S = \rho_0 = \text{constant} \qquad (22)
The time differential of momentum density at a particu-
lar point is therefore:
\frac{d}{dt}\left(m\,\psi^\dagger\psi\,\mathbf{v}\right) = m\left(\psi^\dagger\psi\right)\frac{d\mathbf{v}}{dt} \qquad (23)
However, what is hidden in the classical expression is
the shift of energy from the mass component to the spin
component as the electron accelerates.
Here we find the reason for the change of wavelength in
an acceleration process: the spin component increases in
amplitude, and as gradually more energy is shifted into
this component the wavelength becomes shorter and the
frequency increases. A process, which so far has remained
buried underneath the mathematical formalism and is
now open to analysis.
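The mechanism can be illustrated numerically: as the velocity grows, the de Broglie wavelength shortens while the spin amplitude, fixed by the condition in Eq. (4), increases. The velocities and the unit density below are illustrative values, not taken from the paper.

```python
import numpy as np

H = 6.62607015e-34      # Planck constant, J s
M_E = 9.1093837015e-31  # electron mass, kg

v = np.array([1e5, 2e5, 4e5])     # electron velocities, m/s (illustrative)

# de Broglie: the wavelength shortens as the electron accelerates
lam = H / (M_E * v)

# Eq. (4): the spin-energy amplitude matches the kinetic amplitude
# (1/2) rho_el v^2, so energy shifts into the spin component as v
# grows (rho_el set to 1 for illustration)
spin_amplitude = 0.5 * 1.0 * v ** 2

for vi, li, si in zip(v, lam, spin_amplitude):
    print(f"v = {vi:.0e} m/s  lambda = {li * 1e9:.2f} nm  spin amplitude = {si:.1e}")
```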
B. Stern-Gerlach experiments
An inhomogeneous magnetic field leads to deflection of
atoms, if they possess a magnetic moment. This effect
was used, in the ground breaking experiments on silver
by Gerlach and Stern in 1922 [20], to demonstrate that
the classically expected result, i.e. a statistical distribu-
tion around a central impact, is not in line with exper-
imental outcomes. Moreover, the assumption that the
orbital moment would cause the deflection was also un-
tenable, because in this case one would observe an odd
number of impact locations and not, as in the actual ex-
periments, exactly two. Within the new model the effect
is easy to understand. Above, we derived the solution
for the electron mass and spin density of a hydrogen-like
atom. Assuming that the valence electron of silver can
be described in a similar model, we find two different spin
directions: one, parallel to the radial vector and directed
outward, the other, parallel to the radial vector and di-
rected inward (see Fig. 2, left images). The induced spin
densities Si(see Fig. 2, centre) as the atoms enter the
field, are due to the changes of the spin orientation in
a time-dependent magnetic field, which comply with a
Landau-Lifshitz like equation [16]:
\frac{d\mathbf{S}_i}{dt} = \text{const}\cdot\,\mathbf{e}_S \times \mathbf{v} \times \frac{d\mathbf{B}}{dt} \qquad (25)
Then the induced spin densities will lead to a precession
around the magnetic field B in two directions, which will
give rise to induced magnetic moments parallel or anti-
parallel to the field. In an inhomogeneous field the force
of deflection is then directed either parallel or antiparallel
to the field gradient, leading to two deflection spots on
the screen, exactly as seen in the experiments (Fig. 2,
right). While therefore in the standard model, which
assumes that:
1. Spin is isotropic.
2. A measurement breaks the symmetry of the spin.
no process exists, which could actually explain the sym-
metry breaking of the initially isotropic spin, the situa-
tion is completely different in the new model. Here the
process is described by:
1. Spin is isotropic.
2. The measurement induces spins aligned with the
magnetic field.
FIG. 2: Spin measurement of a hydrogen-like atom. Left: the
spin densities S0are parallel to the radial vector. Center: the
direction of the induced spin densities Siis parallel or antipar-
allel to the magnetic field. Right: due to the inhomogeneous
field the atoms are deflected upward or downward.
3. The induced spins lead to positive or negative de-
flections in a field gradient.
The description is fully deterministic, since the initial
direction of spin densities determines the experimental
outcome. Statistics only enter the picture, if the initial
spin densities are unknown, which they are in practice.
Again, we see that the new model actually describes pro-
cesses at the level of single events, and that probabilities
arise due to unknown initial conditions, but that they are
not fundamental to a comprehensive model.
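The deterministic character of the measurement can be mimicked by a toy simulation in which the outcome is a fixed function of the initial spin direction, and only the initial directions are drawn at random; all names and numbers below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def deflection(initial_spin_sign, grad_B=1.0):
    """Deterministic outcome: the sign of the induced magnetic
    moment, and hence of the force in the field gradient, is fixed
    by the initial (inward or outward) spin direction."""
    return initial_spin_sign * grad_B

# Unknown initial conditions: spins inward (-1) or outward (+1)
spins = rng.choice([-1, 1], size=1000)
hits = np.array([deflection(s) for s in spins])

# Exactly two impact regions, as in the Gerlach-Stern experiment;
# the near-50/50 statistics reflect only the unknown initial spins
outcomes = sorted({float(h) for h in hits})
print("impact regions:", outcomes)
```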
C. Interference experiments
Double slit experiments are notoriously difficult to un-
derstand in the framework of standard quantum mechan-
ics. So difficult, in fact, that Richard Feynman called
them "a phenomenon which is impossible, absolutely im-
possible, to explain in any classical way, and which has
in it the heart of quantum mechanics. In reality, it con-
tains the only mystery" [21]. The work done recently,
aimed at shedding light on this mystery, is already quite
convincing: whether it is with Bohm-type trajectories,
fluctuating fields [22], or whether it is by establishing
the trajectories with weak measurements [23], the re-
sult always seems to be that one particle passes through
one particular opening. Mathematically, the interference
phenomena in the standard framework are calculated,
e.g., with the help of Feynman path integrals.

FIG. 3: Double slit interferometry, Feynman path integrals.
A single particle is assumed to split into virtual particles
prior to the interferometer. After the interferometer all
particles recombine, the acquired phases along their paths
determining the interference amplitude. A single particle is
detected at the detector screen.

FIG. 4: Double slit interferometry, real picture. Left: a
single particle passing through an opening of the
interferometer acquires a discrete lateral momentum due to
interactions with the discrete interaction spectrum of atomic
scale systems. The interference pattern is a series of sharp
impact regions. Right: due to the thermal energy of the slit
environment and interactions with molecules in air the
impact regions broaden with a Gaussian until they resemble
the wave-like interference pattern in an optical
interferometer.

The process described in this mathematical framework is
shown in Fig. 3. A single particle, upon entering the
vicinity of the
interferometer, is assumed to split into a number of vir-
tual particles. Each virtual particle passes exactly one
opening of the interferometer, where it acquires a char-
acteristic phase. After the interferometer, all particles
are again recombined interfering in a particular way due
to their acquired phases. A single impact is observed on
the detector screen.
It is quite clear, and is conceded also in the standard
framework, that this has nothing to do with real events.
However, this insight does not solve the problem of what
actually happens: how single entities (electrons or pho-
tons) acquire certain deflections in an interferometer,
and why these deflections bear such an uncanny resem-
blance to the interference patterns of light. In our view,
this problem could actually have been
solved a long time ago by Duane [24]; a solution which
was later taken up by Lande [25], and which contains no
mystery at all. The key observation for their model is
that every atomic scale system has a discrete interaction
spectrum. This means that every interaction of such a
system with a single photon or electron can only cause ob-
servable changes in the particle’s dynamics, if a discrete
amount of energy is exchanged, typically corresponding
to the excitation of single lattice vibrations. Given this
fact, it is impossible that the particle acquires a contin-
uous lateral momentum. Consequently, it also cannot
be detected in intermediate regions, unless its trajectory
is additionally determined by thermal broadening of the
actual interaction.
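The Duane-Lande rule can be checked numerically: if a periodic structure of period d can only exchange lateral momentum in multiples of h/d, the resulting deflection angles coincide exactly with the wave-picture diffraction maxima sin(theta) = n*lambda/d. The electron velocity and grating period below are illustrative assumptions, not values from the text:

```python
# Duane-Lande momentum quantization: a periodic structure of period d can
# only exchange lateral momentum in units of h/d. The resulting deflection
# angles coincide with the wave-diffraction maxima sin(theta) = n*lambda/d.
# Velocity and grating period are assumed, illustrative values.
import math

h = 6.62607015e-34        # Planck constant, J s
m_e = 9.1093837015e-31    # electron mass, kg

v = 1.0e7                 # electron velocity, m/s (assumed, non-relativistic)
d = 100e-9                # grating period, m (assumed)

p0 = m_e * v              # forward momentum of the electron
lam = h / p0              # de Broglie wavelength of the same electron

for n in range(4):
    p_lat = n * h / d                        # quantized lateral momentum transfer
    theta_duane = math.asin(p_lat / p0)      # particle picture: discrete kicks
    theta_wave = math.asin(n * lam / d)      # wave picture: diffraction maximum
    assert math.isclose(theta_duane, theta_wave)
    print(f"n={n}: deflection {theta_duane*1e6:.3f} microrad")
```

The two pictures agree term by term because p_lat/p0 = n h/(d m v) = n lambda/d; no wave superposition is needed to obtain the discrete angles.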
This model of the process can be experimentally ver-
ified. The key to such a verification is the separation of
the individual effects changing the particle’s trajectory
(see Fig. 4). In a liquid helium environment the thermal
motion of atoms is frozen. In addition, in an ultrahigh
vacuum environment, no interactions with molecules are
possible. In a low temperature interferometer the im-
pacts will be sharply defined images of the particle beam
deflected by interactions with the atomic environment,
while a gradual increase of the temperature of the in-
terferometer should lead to a gradual broadening of the
impact regions. This broadening, moreover, should re-
flect the thermal energy range of the slit environment.
Performing such a controlled experiment seems entirely
feasible today, and in our view it will establish that in-
deed the interaction with the atomic environment, and
not some fictitious splitting and recombination process, is
at the bottom of this hundred-year-old mystery.
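The predicted temperature dependence can be sketched as a convolution: sharp impacts at the discrete deflection points broaden into Gaussians whose width grows with the thermal energy of the slit environment, until the pattern resembles continuous fringes. Peak positions and widths below are purely illustrative assumptions:

```python
# Sketch of the predicted temperature dependence: sharp impact regions at
# discrete deflection points broaden into Gaussians of growing width until
# the pattern looks like continuous interference fringes. All numbers are
# illustrative, not taken from any experiment.
import math

def pattern(x, peaks, sigma):
    """Sum of Gaussians of width sigma centred on the discrete impact points."""
    return sum(math.exp(-0.5 * ((x - p) / sigma) ** 2) for p in peaks)

peaks = [-2.0, -1.0, 0.0, 1.0, 2.0]   # discrete impact positions (arbitrary units)

for label, sigma in [("cold (liquid He)", 0.05), ("warm", 0.2), ("hot", 0.5)]:
    # contrast between an impact point and the midpoint between two peaks:
    # large for sharp peaks, near unity once the pattern is washed out
    contrast = pattern(0.0, peaks, sigma) / pattern(0.5, peaks, sigma)
    print(f"{label:18s} sigma={sigma:.2f}  peak/midpoint contrast = {contrast:.3g}")
```

In the cold, ultrahigh-vacuum limit the contrast is enormous (sharply defined impact regions), while thermal broadening drives it towards unity, as the text predicts.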
1. Interference of large molecules
It has been claimed, in a number of high-impact pub-
lications since 1999, that large molecules can be made
to interfere on gold gratings, and that these experiments
show both the coherence of the molecules over macro-
scopic trajectories (range of cm), and that the "wave-
length" of these molecules is equal to the de Broglie
wavelength of their inertial mass. This is highly naive
and manifestly incorrect, as we show in the following.
As the exemplar of the misguided interpretations
we use in the following the first experiments on C60
molecules, which were published in the journal Nature
[26]. Due to the interest of the Chemistry community
in these molecules, their properties have been extremely
well researched in the past. Theorists routinely calcu-
late their electronic properties, their phonon spectrum,
and their light absorption and emission spectrum. They
have been adsorbed on surfaces and their charge density
distribution has been compared to the results of STM ex-
periments, which verified the theoretical results in great
detail. As every Chemist will know, phonon or vibra-
tional modes of organic molecules are varied and range
from a few meV (breathing modes, torsion) to a few hun-
dred meV (stretch modes). This particular molecule con-
tains 60 carbon atoms, it thus has 180 modes of vibration
which cover the whole energy range.
Experiments are performed in such a way that the
molecules are heated with laser light, reaching velocities
of a few hundred meters per second, and then passed
through a grating with a width of about 50 nm, and
a depth of 100 nm. All molecules presented in this
type of experiments so far are polarizable, that is they
can possess a dipole moment. No control experiments
with molecules which are not polarizable have been per-
formed to date. After the grating it is observed that the
molecules do not impinge on the screen in a continuous
fashion, but that their impact count shows a variation,
which is taken as proof that the molecules possess a de
Broglie wavelength and interfere as coherent waves.
This is fundamentally wrong on several counts. First,
it is well known that the electron density fully charac-
terizes a many-electron system. A de Broglie wave-
length, which does make sense for free electrons, does not
exist in such a structure. Second, it is also well known
that internal degrees of freedom of molecular systems
start mixing after very short timescales, in the range of
femtoseconds. That a molecule is heated with a laser -
most likely leading to excitation of electronic transitions
- and then spends microseconds preserving a fictitious
state vector related to its translational motion, while
shaking rapidly due to vibrational excitations is not cred-
ible. Third, it is even less credible that such a molecule,
with its time dependent dipole moment, will not induce
dipole moments in the slit itself, which then interact with
the molecule’s dipole to alter its trajectory. And fourth,
the fictitious state vector of this molecule, which does
not exist, is supposed to interfere with another fictitious
state vector which went through a different slit, a pro-
cess, which is completely impossible, unless one assumes
that the molecule, during its trajectory, will split into
several individual molecules. How this could be possible,
given that such a creation of additional molecules violates
the energy principle by several MeV, has never been ex-
plained and can safely be regarded as pure fiction. In
summary, the model is wrong in so many ways, that one
is alarmed by the lack of knowledge in basic Chemistry
and solid state Physics of its authors and, presumably,
the journal’s editors.
So how does it really work? Most likely in the way
sketched in the previous section. A polarizable molecule
is excited by laser light so that most of its low lying vi-
brational excitations are activated. This molecule enters
the interferometer with a time-dependent dipole moment
in lateral direction. As the molecule interacts with the
atomic environment of the interferometer, it induces elec-
tric dipoles into the slit system. These time-dependent
dipole moments interact with the molecular dipole mo-
ments until the molecule has passed the interferometer.
Due to the interaction the molecules acquire a distinct
lateral momentum. The momentum leads to a deflec-
tion on the detector screen. The deflection is interpreted
as the result of a de Broglie wave, because the distance
from the point of no deflection to the point of impact is
inversely proportional to the velocity of the molecule. Why
is it inversely proportional to the velocity? Because the
interaction duration is the time the molecule spends in
the slit environment of constant depth. A faster molecule
will spend less time there, therefore acquire less lateral
momentum, and therefore end up closer to the point of
no deflection. This, again, has nothing to do with a de
Broglie wave, and everything to do with the constant
distance from entry to exit of the interferometer (100 nm).
This whole scenario
should be relatively easy to simulate with modern elec-
tronic structure methods. One could also try to pin down
the actual effect by using non-polarizable molecules. The
prediction here is that no periodic variation on the screen
will be observed in this case.
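The interaction-time argument can be made explicit in a few lines: for a slit of fixed depth D the transit time is tau = D/v, so the acquired lateral momentum falls off as 1/v, which is the same velocity scaling a de Broglie analysis attributes to the fringe positions. The average lateral force F below is an assumed placeholder, not a measured value:

```python
# Sketch of the interaction-time argument: a molecule crossing a slit of
# fixed depth D interacts with the induced dipoles for tau = D/v, so the
# lateral momentum it picks up scales as 1/v. The deflection therefore
# shrinks for faster molecules, mimicking the 1/v scaling usually ascribed
# to a de Broglie wavelength. F is an assumed placeholder force.
D = 100e-9       # slit depth, m (value from the text)
F = 1e-15        # assumed average lateral force during transit, N

for v in (100.0, 200.0, 400.0):           # molecular velocities, m/s
    tau = D / v                            # time spent inside the slit
    dp = F * tau                           # acquired lateral momentum, kg m/s
    print(f"v = {v:5.0f} m/s  tau = {tau:.2e} s  dp = {dp:.2e} kg m/s")
```

Doubling the velocity halves the acquired lateral momentum, so the impact point moves closer to the point of no deflection, exactly as described above.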
D. Aspect-type experiments
These experiments have been puzzling physicists for
at least thirty years. The height of the confusion was
probably reached with Aspect’s review paper in the jour-
nal Nature in 1999, where he stated: "The violation of
Bell’s inequality, with strict relativistic separation be-
tween the chosen measurements, means that it is impos-
sible to maintain the image a la Einstein where corre-
lations are explained by common properties determined
at the common source and subsequently carried along by
each photon. We must conclude that an entangled EPR
photon pair is a non-separable object; that is, it is impos-
sible to assign individual local properties (local physical
reality) to each photon. In some sense, both photons
keep in contact through space and time" [27].
We shall show in the following that exactly such a
model a la Einstein can explain all experimental data
and that the confusion arises from a fundamental techni-
cal error in Bell’s derivations. To explain the experiments
in detail at the single photon level, let us start with set-
ting up a system composed of a source of photons at the
point z = 0, and two polarization measurements at ar-
bitrary points z = A and z = B. We assume that the
polarization measurements contain rotations in the plane
parallel to z. We also assume that the two photons are
emitted from the source with an arbitrary angle of polar-
ization ϕ_0. It is irrelevant for the following whether the
field vectors of the two photons rotate during propaga-
tion. If they do, this will show up only as an additional
angle ∆ between their polarization measurements at A
or B. The setup of the experiment is shown in Figure
5. A single measurement at A consists of two separate
processes: First, the polarization angle is altered by an
angle ϕ_A. Mathematically, this is a rotation in three-
dimensional space, in the plane perpendicular to the
direction of motion, which can be described by a rotator
in this plane (a geometric product ϕ_A e_1 e_2) acting on
the photon's field vector S, which is parallel to e_3. To
take care of normalization, we describe such a rotation as:

R(A) = exp[(ϕ_A + ϕ_0) e_1 e_2] = e^{i(ϕ_A + ϕ_0)}    (26)
Then, the photon is detected if the probability p, which
depends on the angle of rotation and the initial angle of
polarization, is larger than a certain threshold:

p[R(A)] = [Re R(A)]^2 = cos^2(ϕ_A + ϕ_0)    (27)
FIG. 5: Aspect-type experiment. Two photons are emitted
from a common source with an initial unknown polarization
angle ϕ_0. Their polarization is then measured at points A
and B. (From Ref. [2]).
Depending on how we define our threshold, which is a
function of the measurement equipment, an impact at
a certain angle of measurement ϕ_A and a certain initial
angle ϕ_0 is fully determined by the knowledge of these
two angles. The single event is thus fully accounted for.
However, in the actual experiments the angle ϕ_0 is un-
known, and it is randomly distributed over the whole
interval [0, 2π]. A set of N experiments will thus lead to
a random value for the probability, covering the whole
interval [0, 1]. The single measurement is thus random.
The same is true for a measurement at point B. Also
here the polarization measurement is described by a ro-
tation, with a different and fully independent angle ϕ_B.
The probability of detection is, along the same lines:

p[R(B)] = [Re R(B)]^2 = cos^2(ϕ_B + ϕ_0)    (28)

Also in this case the single event is fully accounted for
if the initial angle ϕ_0 and the angle of polarization ϕ_B
are known. Again, a set of N experiments will lead to
a random value for the probability, covering the whole
interval [0, 1].
Naively, one could now assume that the correlation
probability is the product of the two measurement prob-
abilities at points A and B, respectively. This is exactly
what Bell assumed in the derivation of his inequalities,
when he wrote [28]:

P(a, b) = ∫ dλ ρ(λ) A(a, λ) B(b, λ)    (29)

Here, λ has the same meaning as the initial angle ϕ_0,
and the crucial error lies in the assumption that the cor-
relation probability is the product of individual probabil-
ities. This is manifestly incorrect, because it disregards
the mathematical properties of rotations. Two separate
rotations at A and B have to be accounted for by a
product of individual rotations, thus:

R(A) · R(B) = exp[(ϕ_A + ϕ_0) e_1 e_2]
            · exp[−(ϕ_B + ϕ_0) e_1 e_2]
            = exp[i (ϕ_A − ϕ_B)]    (30)
It is impossible, from these two rotations, to derive a
probability which is the product of two positive numbers.
Furthermore, the hidden variable ϕ_0, which is present in
the probability of individual polarization measurements,
is canceled out in the correlation derived from two sep-
arate rotations. The correct form of the probability for
the correlation derived from the two rotations will be:

p[R(A), R(B)] = [Re (R(A) · R(B))]^2 = cos^2(ϕ_A − ϕ_B)    (31)

These probabilities are equal to the correlation probabil-
ities derived in the Clauser-Horne-Shimony-Holt formal-
ism [29]:

C_++ = C_−− = cos^2(ϕ_A − ϕ_B)
C_+− = C_−+ = 1 − cos^2(ϕ_A − ϕ_B)    (32)
They lead to the standard expectation values measured
in Aspect-type experiments:

E(ϕ_A, ϕ_B) = cos[2(ϕ_A − ϕ_B)]    (33)

And they violate the Bell inequalities in the exact same
way as found in the experiments:

S(ϕ_A, ϕ'_A, ϕ_B, ϕ'_B) = E(ϕ_A, ϕ_B) − E(ϕ_A, ϕ'_B)
    + E(ϕ'_A, ϕ_B) + E(ϕ'_A, ϕ'_B) = 2√2    (34)

if ϕ_A = 0, ϕ'_A = 45°, ϕ_B = 22.5°, ϕ'_B = 67.5°. To repeat the
findings: a model based on polarizations and rotations
in space recovers all experimental results. It allows for
a cause-effect description of every single measurement.
It also violates the Bell inequalities, not because it is
a non-local model, but because Bell made a fundamen-
tal error in the derivation of his inequalities. It is thus,
paraphrasing Aspect’s words, not a proof that a model a
la Einstein is impossible, but rather a proof that many
quantum theorists do not understand geometry.
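The single-photon probabilities and the correlation of Eqs. (27)-(34) can be verified numerically: the single rates depend on the hidden angle ϕ_0 and average to 1/2 over a random source, while the CHSH combination of expectation values reaches 2√2 at the standard angle settings. A minimal check:

```python
# Numerical check of the rotation model: single detection probabilities
# depend on the hidden polarization angle phi0 (Eqs. 27/28) and average to
# 1/2, while the correlation built from the product of rotations depends
# only on phiA - phiB and yields the CHSH value 2*sqrt(2) (Eqs. 30-34).
import math, random

def p_single(phi, phi0):
    return math.cos(phi + phi0) ** 2          # Eqs. (27)/(28)

def E(phiA, phiB):
    # expectation value from the correlation probabilities, Eqs. (32)/(33)
    c = math.cos(phiA - phiB) ** 2
    return c - (1 - c)                        # = cos(2*(phiA - phiB))

deg = math.pi / 180
a, a2, b, b2 = 0 * deg, 45 * deg, 22.5 * deg, 67.5 * deg
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"S = {S:.6f}")                         # prints S = 2.828427, i.e. 2*sqrt(2)

# averaged over the random source angle phi0, the single detection rate is 1/2
rng = random.Random(0)
avg = sum(p_single(a, rng.uniform(0, 2 * math.pi)) for _ in range(100000)) / 100000
print(f"<p_single> = {avg:.3f}")
```

The hidden angle ϕ_0 cancels in E(ϕ_A, ϕ_B), so the correlation is deterministic even though each single measurement looks random, which is the point made in the text.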
A. Electrons and neutrons
At the end of this presentation I would like to report on
some work in progress. It is quite natural, if one consid-
ers the electron an extended particle, to ask what shape
and form it might have apart from the atomic environ-
ment. We know from DFT that its density, consequently
its shape, will depend on the potential environment. Af-
ter all, we find much higher densities of electron charge
in heavier atoms with a higher number of central charges
than we find in hydrogen. So one may also ask, in what
shape and form an electron exists, for example, in a neu-
tron. We know that the neutron decays outside an atomic
nucleus in about 880 seconds to a proton and an electron,
with an excess energy of 785 keV, which is mostly con-
verted into X-ray radiation.
n^0 → p^+ + e^− + 785 keV    (35)
FIG. 6: Neutron scattering experiments by Littauer et al.
[30]. A neutron consists of a positive core and a negative
shell. The radius of the neutron is about 1.38fm.
We also know, from scattering experiments (see Fig. 6),
that a neutron contains a core of positive charge, which
one could tentatively call the "proton", and a shell of
negative charge, which one could in first instance iden-
tify as the "electron". If the electron exists in such a
high density phase, then one could also seek its eigen-
states with the help of a Schr¨odinger equation adapted
to the much smaller lengthscales and much higher energy
scales. However, for such an assumption to make sense, it
first has to be determined where the additional mass of
the neutron compared to isolated protons and electrons
comes from. Here, it has to be remembered that the ra-
dius of a neutron is much smaller than the radius of a
hydrogen atom. Therefore, the electrostatic field of an
electron outside hydrogen has a very low energy of about
11 eV, while this field has a large energy of close to 1
MeV for an electron with a radius of 1.38 fm:
W = ε_0 ∫ |E|^2 dV = e^2 / (4πε_0 r_n) ≈ 1040 keV    (36)
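The two energy scales can be checked with the same expression: evaluated at the Bohr radius it gives the familiar few-tens-of-eV scale of hydrogen, while at r_n = 1.38 fm it gives roughly 1040 keV, the value quoted in Eq. (36):

```python
# Quick check of the electrostatic field energy e^2/(4*pi*eps0*r) of a
# point-like charge outside radius r: the eV scale for the hydrogen (Bohr)
# radius versus the MeV scale for the neutron radius r_n = 1.38 fm.
e2_4pieps0 = 1.439964e-9   # e^2/(4*pi*eps0) in eV*m
a0 = 5.29177e-11           # Bohr radius, m
r_n = 1.38e-15             # neutron radius from scattering experiments, m

W_H = e2_4pieps0 / a0      # a few tens of eV for hydrogen
W_n = e2_4pieps0 / r_n     # close to 1 MeV for the neutron radius
print(f"W(a0)  = {W_H:.1f} eV")      # prints W(a0)  = 27.2 eV
print(f"W(r_n) = {W_n/1e3:.0f} keV") # about 1040 keV, matching Eq. (36)
```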
Here, one finds that the electrostatic energy alone, con-
sidering mass equivalents, can account for the excess
mass. Next, it is necessary to analyze nuclear units. We
know from atomic physics that atomic units are defined
from fundamental constants and determine the solution
of the hydrogen problem with the Schr¨odinger equation.
Let me just remind the reader that an exponentially de-
caying wavefunction ψ(r) = ρ_0^{1/2} exp(−αr) leads to the
following characteristic equation and the solution for α:

[ −(ħ^2/2m) ∇^2 − e^2/(4πε_0 r) ] ψ(r) = E ψ(r)

( ħ^2 α/m − e^2/(4πε_0) ) (1/r) = 0
  ⇒  α = m e^2/(4πε_0 ħ^2) = 1/a_0 ≈ 1.89 × 10^10 m^−1    (37)
If a similar solution exists for the neutron, then the de-
cay constant must be different. We account for this hy-
pothesis by rescaling the Planck constant in a nuclear
environment, so that:

ħ_n = x ħ    ⇒    α_n = m e^2/(4πε_0 ħ_n^2) = α/x^2    (38)

The Schrödinger equation in a nuclear environment then
reads:

[ −(ħ_n^2/2m) ∇^2 − e^2/(4πε_0 r) ] ψ_n(r) = E_n ψ_n(r)    (39)
The total energy is the sum of the positive energy of the
electrostatic field and the negative energy of the eigen-
value; it is known to be 785 keV. It depends, ultimately,
on only two values: the radius of the neutron, which
is known from scattering experiments, and the scale x.
With a_0 the Bohr radius we get:

W_n = (a_0/r_n) × 27.211 [eV] − (1/(2x^2)) × 27.211 [eV]    (40)

The scale x can therefore be calculated from experimental
values. With r_n = 1.38 fm and W_n = 785 keV we get for
the scale x and the energy scale E_n:

x = 18779^{−1/2} ≈ α_f,    E_n = 27.211 eV/x^2 = 511 keV = m_e c^2    (41)
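Solving the relation W_n = (a_0/r_n)·27.211 eV − 27.211 eV/(2x²) for x with the two experimental inputs reproduces these numbers to within about one percent; the small residuals presumably come from rounding of the input values:

```python
# Re-derivation of the nuclear scale x from the two experimental inputs,
# the neutron radius r_n = 1.38 fm and the decay energy W_n = 785 keV:
#   W_n = (a0/r_n)*27.211 eV - 27.211 eV/(2*x^2)
# 1/x comes out close to 137 (the inverse fine structure constant) and
# E_n = 27.211 eV/x^2 close to 511 keV = m_e c^2, within about one percent.
a0 = 5.29177e-11          # Bohr radius, m
hartree = 27.211          # eV
r_n = 1.38e-15            # neutron radius, m
W_n = 785e3               # decay energy, eV

x2 = hartree / (2 * ((a0 / r_n) * hartree - W_n))
inv_x = x2 ** -0.5
E_n = hartree / x2
print(f"1/x = {inv_x:.1f}   E_n = {E_n/1e3:.0f} keV")
```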
Both of these values are very fundamental. In the stan-
dard model the fine structure constant α_f describes the
difference in coupling between nuclear forces and electro-
static forces, while the rest energy of the electron E_n is
one of the fundamental constants in high energy physics.
At present, we do not have a clear indication of the sig-
nificance of this finding. It is quite improbable that this
result should be a mere coincidence. After all, the iden-
tity relies on two experimental values, the radius of the
neutron and the mass of the neutron. Had these values
been different, the fine structure constant or the rest en-
ergy of the electron would not have been the result of
this derivation. We expect that a nuclear model on the
basis of high-density electrons, which we also tentatively
assume to be an element of physics in the 21st century,
will be able to answer this important question.
B. Magic nuclei
It is known that certain numbers of nucleons, assumed
to be protons and neutrons in the conventional model,
lead to increased stability of atomic nuclei. If high-
density electrons are the glue that holds protons together,
then protons in a nucleus will be in a regular arrange-
ment. In this case the problem of nuclear organization
becomes to first instance a problem of three dimensional
geometry. Starting from a single proton, and adding one
proton after the other, always under the condition that
FIG. 7: Closed shells of atomic nuclei for up to 136 protons.
The shell model is only based on geometry and does not in-
clude detailed interactions at this point.
the distances between protons are constant, will auto-
matically lead to a shell model of atomic nuclei, where a
certain number of protons corresponds to closed shells. In
Fig. 7 we show the first seven closed shells. In particular
the first four, with 4, 16, 28, and 40 protons, correspond
to magic nuclei in nuclear physics. Larger shells do not
necessarily do so, but it has to be considered that we do
not yet have a comprehensive model of interactions within
an atomic nucleus, which could account for the observed
nuclear masses. Compared to DFT the additional com-
plication within a nucleus is the relatively large volume
of protons, which probably cannot be taken into account
with a model of point charges, and the unknown role
of nuclear forces. Also, it is quite unclear at present if
the electrostatic interactions within the nucleus have the
same intensity as in a vacuum, how screening works, and
what role the energy of electrostatic fields will play in
the overall picture. The first steps towards such a model
are therefore highly tentative and it is to be expected
that a fully quantitative model of atomic nuclei is still
a long time in the future. However, such a model could
provide a unified basis for discussions in nuclear physics,
which connects it seamlessly to other fields of Physics:
something, which is manifestly not the case at present.
In this presentation I have emphasized six results ob-
tained within a theoretical framework which seamlessly
combines wave mechanics and density functional theory.
These six results are:
1. The uncertainty relations are violated by up to two
orders of magnitude in thousands of experiments
every single day.
2. Wavefunctions themselves are not real, but their
components, mass and spin densities, are real.
3. Rotations in space generate complex numbers,
which are not described in a Gibbs vector algebra.
4. Double slit interference experiments show two fea-
tures: a discrete interaction spectrum with the slit
system and a thermal broadening due to environ-
mental conditions.
5. The fine structure constant and the electron rest
mass describe the nuclear energy scale.
6. Closed shell nuclei are due to the geometrical ar-
rangement of nuclear protons.
On a personal note I think that fundamental Physics
has entered a new stage of development, after the near
inertia in the last thirty years. This is largely the merit of
scientists working outside their core disciplines and moti-
vated by nothing else but curiosity about how things
really work. Finally, future developments in physics, based on
this framework, could include the following elements:
A non-relativistic theory of quantum electrody-
namics making use of the constraint found for elec-
tromagnetic fields that the intensity as well as the
frequency of the field must be linear with the en-
ergy of emission or absorption.
A linear scaling many-electron theory for con-
densed matter making use of the result that many
body effects can be encoded in a chiral optical po-
tential.
A density functional theory of atomic nuclei using
a high-density phase of electrons in the nuclear en-
vironment.
Helpful discussions with Krisztian Palotas are grate-
fully acknowledged. The work was supported by the
Royal Society London and the Canadian Institute for Ad-
vanced Research (CIFAR).
[1] WA Hofer, Physica A 256, 178-196 (1998).
[2] WA Hofer, Front. Phys. 7, 504-508 (2012).
[3] M Ternes et al., Science 319, 1066-1069 (2008).
[4] G. Binnig and H. Rohrer, Surf. Sci. 126, 236-244 (1983).
[5] VM Hallmark et al., Phys. Rev. Lett. 59, 2879-2882 (1987).
[6] PT Wouda et al., Surf. Sci. 359, 17-22 (1996).
[7] MF Crommie, CP Lutz, DM Eigler, Science 262, 218-220 (1993).
[8] M Bode, Rep. Progr. Phys. 66, 523 (2003).
[9] BC Stipe, MA Rezaei, W Ho, Phys. Rev. Lett. 82, 1724-
1727 (1999).
[10] KR Harikumar et al., Nat. Chem. 3, 400-408 (2011).
[11] CF Hirjibehedin et al., Science 317, 1199-1202 (2007).
[12] H Gawronski, M Mehldorn, K Morgenstern, Science 319,
930-933 (2008).
[13] M Ozawa, Phys. Rev. Lett. 60, 385-388 (1988).
[14] WA Hofer, Front. Phys. 7, 218-222 (2012).
[15] P Hohenberg, W Kohn, Phys. Rev. 136, B864-B871 (1964).
[16] WA Hofer, Found. Phys. 41, 754-791 (2011).
[17] C Doran, A Lasenby, Geometric Algebra for Physicists,
Cambridge University Press, Cambridge (2002).
[18] M Levy, JP Perdew, V Sahni, Phys. Rev. A 30, 2745-
2748 (1984).
[19] P Ehrenfest, Z. Phys. 45, 455-457 (1927).
[20] W Gerlach, O Stern, Z. Phys. 9, 353-355 (1922).
[21] RP Feynman, The Feynman Lectures Vol. III, pp. 1-1,
Addison Wesley, Reading, Mass. (1965).
[22] G Groessing, S Fussy, J Mesa Pascasio, H. Schwabl,
Physica A 389, 4473-4484 (2010).
[23] S. Kocsis et al., Science 332, 1170-1173 (2011).
[24] W. Duane, Proc. Nat. Acad. Sci. 9, 158 (1923).
[25] A Lande, From Dualism to Unity in Quantum Physics,
Cambridge University Press, Cambridge UK (1960).
[26] M Arndt et al., Nature 401, 680-683 (1999).
[27] A Aspect, Nature 398, 189-190 (1999).
[28] JS Bell, Physics 1, 195-199 (1964).
[29] F Clauser, MA Horne, A Shimony, RA Holt, Phys. Rev.
Lett. 23, 880-883 (1969).
[30] Littauer et al., Phys. Rev. Lett. 7, 144-147 (1961).
... Looking forward, a particular challenge is to verify the Hofer eect, the expected tendency for all spins of a localized particle to align radially either inward or outward (spin up/down), as predicted in Hofer [19]. In that work, there is an explanation of how magnetic eects emerge from symmetry breaking of this spherical pattern, thereby supporting the Stern-Gerlach experiment. ...
Full-text available
All fundamental Planck scale symmetries are restored on a global level when a new charge is postulated in a nite, closed, Euclidean discrete space. The known fundamental particles are formed by a myriad of even more fundamental entities called bubbles. The four known forces of physics appear in an inseparable combination. Gravity, in particular, emerges as a residual eect of the electromagnetic force in this scenario, resulting in a deterministic toy universe driven by a single input parameter. The model is developed using a constructive approach, leading to a universal cellular automaton. This is not an interpretation of Quantum Mechanics, but a deeper attempt to describe nature.
... The Hofer effect is the expected tendency for all spins of a packet to align radially either inward or outward (spin up/down), as predicted in Hofer [20]. In that work, there is an explanation of how magnetic effects emerge from symmetry breaking of this spherical pattern, thereby supporting the Stern-Gerlach experiment. ...
Full-text available
All fundamental Planck scale symmetries are restored on a global level when a new charge is postulated in a finite, closed, Euclidean discrete space. Gravity emerges as a residual effect of the electromagnetic force in this scenario, resulting in a deterministic toy universe driven by a single input parameter. The model is developed using a constructive approach. Randomness is identified using a Chaintin argument. Aleph0 definite value is tied to the size of the universe. This is not an interpretation of Quantum Mechanics, but a deeper attempt to describe nature.
... The Hofer effect is the expected tendency for all spins of a packet to align radially either inward or outward (spin up/down), as predicted in Hofer [20]. In that work, there is an explanation of how magnetic effects emerge from symmetry breaking of this spherical pattern, thereby supporting the Stern-Gerlach experiment. ...
All fundamental Planck scale symmetries are restored on a global level when a new charge is postulated in a finite, closed, Euclidean discrete space. Gravity emerges as a residual effect of the electromagnetic force in this scenario, resulting in a deterministic toy universe driven by a single input parameter. Randomness is identified using a Chaintin argument. Aleph0 definite value is tied to the size of the universe. This is not an interpretation of Quantum Mechanics, but a deeper attempt to describe nature.
... The Hofer effect is the expected tendency for all spins of a packet to align radially either inward or outward (spin up/down), as predicted in Hofer [18]. In that work, there is an explanation of how magnetic effects emerge from symmetry breaking of this spherical pattern. ...
Full-text available
A new charge is postulated in a finite, closed, Euclidean discrete space to restore all fundamental symmetries on a global level. Gravity emerges as a residual effect of the electromagnetic force in this scenario, resulting in a deterministic toy universe driven by a single input parameter. Randomness is identified using a Chaintin argument. Aleph0 definite value is tied to the size of the universe. This is not an interpretation of Quantum Mechanics, but a deeper attempt to describe nature.
Dynamical equations of nonlinear electrodynamics minimally coupled to gravity (NED-GR), admit the class of regular solutions, asymptotically Kerr-Newman for a distant observer, which describe regular electrically charged rotating black holes and spinning electromagnetic solitons with the angular momentum ma and the gyromagnetic ratio g = 2. Their basic generic feature is the existence of the interior de Sitter equatorial disk of the radius a with the equation-of-state p = -ρ in the co-rotating frame, with the properties of a perfect conductor and ideal diamagnetic, and with the superconducting ring current along the edge of the disk, which replaces the ring singularity of the Kerr-Newman geometry and provides the nondissipative source of electromagnetic fields and the origin of an intrinsic magnetic momentum for an electrically charged regular object described by NED-GR. Generic features of the electromagnetic soliton with the parameters of the electron, mea = h/2,g = 2, suggest that the intrinsic origin of the electron magnetic momentum can be a superconducting ring current evaluated as jΦ = 79.277 A.
Full-text available
An extended electron model fully recovers many of the experimental results of quantum mechanics while it avoids many of the pitfalls and remains generally free of paradoxes. The formulation of the many-body electronic problem here resembles the Kohn-Sham formulation of standard density functional theory. However, rather than referring electronic properties to a large set of single electron orbitals, the extended electron model uses only mass density and field components, leading to a substantial increase in computational efficiency. To date, the Hohenberg-Kohn theorems have not been proved for a model of this type, nor has a universal energy functional been presented. In this paper, we address these problems and show that the Hohenberg-Kohn theorems do also hold for a density model of this type. We then present a proof-of-concept practical implementation of this method and show that it reproduces the accuracy of more widely used methods on a test-set of small atomic systems, thus paving the way for the development of fast, efficient and accurate codes on this basis.
Full-text available
It has been found that a model of extended electrons is more suited to describe theoretical simulations and experimental results obtained via scanning tunnelling microscopes, but while the dynamic properties are easily incorporated, magnetic properties, and in particular electron spin properties pose a problem due to their conceived isotropy in the absence of measurement. The spin of an electron reacts with a magnetic field and thus has the properties of a vector. However, electron spin is also isotropic, suggesting that it does not have the properties of a vector. This central conflict in the description of an electron’s spin, we believe, is the root of many of the paradoxical properties measured and postulated for quantum spin particles. Exploiting a model in which the electron spin is described consistently in real three-dimensional space–an extended electron model–we demonstrate that spin may be described by a vector and still maintain its isotropy. In this framework, we re-evaluate the Stern–Gerlach experiments, the Einstein–Podolsky–Rosen experiments, and the effect of consecutive measurements and find in all cases a fairly intuitive explanation.
Full-text available
The orientation of individual C 2HD molecules adsorbed on the Cu(100) surface at 8 K was determined from inelastic tunneling images obtained with a scanning tunneling microscope. The simultaneously recorded constant-current images showed that the deuterium end of the molecule appears 0.006 Å lower than the hydrogen end, which was further substantiated by quantitative measures of the molecular rotation. Extension of the study to C 2HD on Ni(100) revealed that the orientation of the molecule relative to the molecule's shape in the constant-current images contrasts sharply between the two surfaces.
Full-text available
Quantum superposition lies at the heart of quantum mechanics and gives rise to many of its paradoxes. Superposition of de Broglie matter waves has been observed for massive particles such as electrons, atoms and dimers, small van der Waals clusters, and neutrons. But matter wave interferometry with larger objects has remained experimentally challenging, despite the development of powerful atom interferometric techniques for experiments in fundamental quantum mechanics, metrology and lithography. Here we report the observation of de Broglie wave interference of C60 molecules by diffraction at a material absorption grating. This molecule is the most massive and complex object in which wave behaviour has been observed. Of particular interest is the fact that C60 is almost a classical body, because of its many excited internal degrees of freedom and their possible couplings to the environment. Such couplings are essential for the appearance of decoherence, suggesting that interference experiments with large molecules should facilitate detailed studies of this process.
The recent experimental progress in spin-polarized scanning tunnelling microscopy (SP-STM)—a magnetically sensitive imaging technique with ultra-high resolution—is reviewed. The basics of spin-polarized electron tunnelling are introduced as they have been investigated in planar tunnel junctions for different electrode materials, i.e. superconductors, optically excited GaAs, and ferromagnets. It is shown that ferromagnets and antiferromagnets are suitable tip materials for the realization of SP-STM. Possible tip designs and modes of operations are discussed for both classes of materials. The results of recent spatially resolved measurements as performed with different magnetic probe tips and using different modes of operation are reviewed and discussed in terms of applicability to surfaces, thin films, and nanoparticles. The limits of spatial resolution, and the impact of an external magnetic field on the imaging process are debated.
A theorem of Bell, proving that certain predictions of quantum mechanics are inconsistent with the entire family of local hidden-variable theories, is generalized so as to apply to realizable experiments. A proposed extension of the experiment of Kocher and Commins, on the polarization correlation of a pair of optical photons, will provide a decisive test between quantum mechanics and local hidden-variable theories.
The experimental violation of Bell's inequalities confirms that a pair of entangled photons separated by hundreds of metres must be considered a single non-separable object: it is impossible to assign local physical reality to each photon.
Geometric algebra is a powerful mathematical language with applications across a range of subjects in physics and engineering. This book is a complete guide to the current state of the subject with early chapters providing a self-contained introduction to geometric algebra. Topics covered include new techniques for handling rotations in arbitrary dimensions, and the links between rotations, bivectors and the structure of the Lie groups. Following chapters extend the concept of a complex analytic function theory to arbitrary dimensions, with applications in quantum theory and electromagnetism. Later chapters cover advanced topics such as non-Euclidean geometry, quantum entanglement, and gauge theories. Applications such as black holes and cosmic strings are also explored. It can be used as a graduate text for courses on the physical applications of geometric algebra and is also suitable for researchers working in the fields of relativity and quantum theory.
This paper deals with the ground state of an interacting electron gas in an external potential v(r). It is proved that there exists a universal functional of the density, F[n(r)], independent of v(r), such that the expression E ≡ ∫ v(r) n(r) dr + F[n(r)] has as its minimum value the correct ground-state energy associated with v(r). The functional F[n(r)] is then discussed for two situations: (1) n(r) = n₀ + ñ(r), with ñ/n₀ ≪ 1, and (2) n(r) = φ(r/r₀) with φ arbitrary and r₀ → ∞. In both cases F can be expressed entirely in terms of the correlation energy and linear and higher order electronic polarizabilities of a uniform electron gas. This approach also sheds some light on generalized Thomas–Fermi methods and their limitations. Some new extensions of these methods are presented.
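For clarity, the variational principle sketched in the abstract above can be restated in standard density-functional notation (the bracketed functional notation and the explicit minimum condition are a reconstruction from the surrounding text, not a quotation):

```latex
E_v[n] \;\equiv\; \int v(\mathbf{r})\, n(\mathbf{r})\, d\mathbf{r} \;+\; F[n],
\qquad
E_v[n] \;\geq\; E_0 ,
```

with equality holding when n(r) is the true ground-state density n₀(r) associated with v(r); F[n] is universal in the sense that the same functional applies for every external potential v(r).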