
arXiv:1311.5470v1 [physics.gen-ph] 8 Nov 2013

Elements of physics for the 21st century

Werner A. Hofer

Department of Physics, University of Liverpool, L69 3BX Liverpool, Britain

Given the experimental precision in condensed matter physics – positions are measured with errors of less than 0.1 pm, energies with a precision of about 0.1 meV, and temperature levels are below 20 mK – it can be inferred that standard quantum mechanics, with its inherent uncertainties, is a model at the end of its natural lifetime. In this presentation I explore the elements of a future deterministic framework based on the synthesis of wave mechanics and density functional theory at the single-electron level.

I. INTRODUCTION

The paper describes research presented at the EmQM 13 conference. It gives an overview of work on quantum mechanics over about fifteen years, from the first paper on extended electrons and photons, published in 1998 [1], to the most recent paper on quantum nonlocality and Bell-type experiments in 2012 [2]. A final section contains the first steps towards a density functional theory of atomic nuclei, presented for the first time at the conference in Vienna. The publications on quantum mechanics in this period show a gap from about 2002 to 2010. This was due to my realization that I could not account for a simple fact: I could not explain how the electron changes its wavelength when it changes its velocity. I felt at the time that not understanding this fact probably meant that I could not understand the electron. I therefore only continued the development of this framework after, prompted by a student of mine, I had found a solution which seemed to make sense. For that I have to thank this particular student. I also have to thank Gerhard Grössing and Jan Walleczek for organizing this great conference, and the Fetzer Franklin Fund for very generous financial support.

I think we can say today that we actually do understand quantum mechanics. Maybe not in the last details, and maybe not in its full depth, but in the broad workings of the mathematical formalism, the basic physics it describes, and the deep flaws buried within its seemingly indisputable axioms and theorems. In that, we differ from Richard Feynman, who famously thought that nobody could actually understand it. However, this was said before two of the most important inventions for science in the twentieth century became available to researchers: high-performance computers and scanning probe microscopes. Computers changed the way science is conducted. Not only do they allow for exquisite experimental control and an extensive numerical analysis of all experiments, they also serve as a predictive tool, provided the models include all aspects of a physical system. This, in turn, means that successful theory and successful quantitative predictions, based on local quantities, make it increasingly implausible that processes exist which operate outside space and time. The solution to the often paradoxical theoretical predictions and sometimes incomprehensible experimental outcomes can then not lie in yet another mathematical framework even more remote from everyday experience than quantum mechanics, but in the rebuilding of a model in microphysics which is both rooted in space and time and allows for a description of single events at the atomic scale. This paper aims at delivering the first building blocks of such a comprehensive model.

FIG. 1: Top frames, clockwise: development of scanning probe microscopy over the last 30 years. Au terraces 1982 [4], Au atoms 1987 [5], atomic species 1993 [6], electron spin 1998 [8], atomic vibrations 1999 [9], spin-flip excitations 2007 [11], forces on single atoms 2008 [3].

That we can say today that we do understand quantum mechanics is to a large extent the merit of thousands of condensed matter physicists and physical chemists, who have over the last thirty years painstakingly removed layer after layer of complexity from their experimental and theoretical research until they could measure and analyze single processes on individual atoms and even electrons. Today we can measure and simulate the forces necessary to push one atom across a smooth metal surface [3], vibrations created by single electrons impinging on molecules [9], torques on molecules created by ruptures of single molecular bonds [10], or single spin-flip excitations on individual atoms [11]. See Fig. 1 for the development of experiments over the last thirty years. These experiments have pushed precision to whole new levels. Today, distances can be measured with an accuracy of 0.05 pm [12], which is about 1/4000th of an atomic diameter, and energies with a precision of 0.1 meV, which is about 1/20000th of the energy of a chemical bond [11]. Given these successes and this accuracy, of which physicists could only dream at the time of Einstein, Heisenberg, Schrödinger, or Dirac, it would be intellectually deeply unsatisfying if we were today still limited to the somewhat crude theoretical framework of standard quantum mechanics, developed at the beginning of the last century.
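The two ratios quoted above can be checked with a quick back-of-the-envelope calculation; the reference values for the atomic diameter (about 200 pm) and the bond energy (about 2 eV) are assumed typical magnitudes, not values taken from the paper:

```python
# Sanity check of the quoted precision ratios.
# Assumed typical reference values (not from the paper):
atomic_diameter_pm = 200.0   # ~2 Angstrom atomic diameter
bond_energy_meV = 2000.0     # ~2 eV chemical bond energy

distance_accuracy_pm = 0.05  # measurement accuracy quoted from [12]
energy_accuracy_meV = 0.1    # energy precision quoted from [11]

print(atomic_diameter_pm / distance_accuracy_pm)  # 4000.0 -> "1/4000th"
print(bond_energy_meV / energy_accuracy_meV)      # 20000.0 -> "1/20000th"
```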

The lesson I have learned from my work as a condensed matter theorist, trying to make sense of the results my experimental colleagues threw at me, is this: a physical process has to be thoroughly understood before a suitable theoretical model can be constructed. It is probably one of the more self-defeating features of physics in the 20th century that new developments mostly took the opposite route: equations came first, processes and physical effects a distant second. This, I think, is about to change again in the 21st century, as mathematical guidance without physical understanding has led physics thoroughly astray.

II. THE MAIN PROBLEM

The main problem faced by theorists today is the precision of experiments at the atomic scale, because it exceeds by far the limit encoded in the uncertainty relations. This has been the subject of debate for some time now, following the publication of Ozawa's paper in 1988 [13], which demonstrated that the limit can be broken by certain measurements. An even larger violation can be observed in measurements with scanning probe instruments [14]. If the instrument measures, via its tunneling current, the variation of the electron density across a surface, then a statistical analysis of such a measurement is straightforward. In the conventional model electrons are assumed to be point particles. The same assumption is made in quantum mechanics, when the formalism is introduced via Hamiltonian mechanics and Poisson brackets. It is also the conventional wisdom in high energy collision experiments, where one finds that the radius of the electron should be less than $10^{-18}$ m. If this is correct, then the density is a statistical quantity derived from the probability of a point-like electron being found at a certain location. This has two consequences:

1. A measurement of a certain distance with a certain precision for a particular point on the surface can only be distinguished from the measurement at a neighboring point if the standard deviation is lower than a certain value.

2. A certain energy limit allows only a certain lower limit for the standard deviation in these measurements.

One can now show quite easily [14] that the standard deviation at realistic energy limits (in the case of a silver surface, the band energy) is about two orders of magnitude larger than the possible value for state-of-the-art measurements today. The allowed limit for the standard deviation in the experiments is about 3 pm, while the standard deviation from the band energy limit is about 300 pm. The consequence for the standard framework of quantum mechanics is quite devastating: the uncertainty principle, and by association the whole framework of operator mechanics, becomes untenable, because it is contradicted by experiments. It is precisely this contradiction which theoretical physicists have claimed to be impossible. It also has one consequence, which can be seen as the one principle of the following:

• The density of electron charge is a real physical quantity.

The density of electron charge has the same ontological status as electromagnetic fields or macroscopic mass or charge distributions. The only difference, and the origin of many of the complications arising in atomic scale physics, is that the density not only interacts with external energy sources, but also with an electron's internal spin density.
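The order of magnitude of this argument can be sketched numerically. A minimal estimate, assuming an illustrative energy limit of 0.1 eV (the paper's band-energy value for silver is not restated here), converts the energy limit into a momentum spread and then into the minimum position spread allowed by $\Delta x\,\Delta p \geq \hbar/2$:

```python
import math

hbar = 1.054571817e-34    # J s
m_e = 9.1093837015e-31    # electron mass, kg
eV = 1.602176634e-19      # J

E = 0.1 * eV                     # assumed illustrative energy limit
dp = math.sqrt(2.0 * m_e * E)    # momentum scale from the energy limit
dx = hbar / (2.0 * dp)           # minimum position spread, dx*dp >= hbar/2

print(round(dx * 1e12))          # 309 (pm), roughly two orders of magnitude
                                 # above the ~3 pm experimental limit
```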

The theoretical framework combines two separate models. Both of them are due to physicists born in Vienna, so the location of a workshop on emergent quantum mechanics, from my personal perspective, could not have been better chosen. The first of these physicists is Erwin Schrödinger, born in Vienna in 1887; the second is Walter Kohn, born in Vienna in 1923. The fundamental statements underlying these two separate models are the following:

• A system is fully described by its wavefunction (Schrödinger).

• A system is fully described by its density of electron charge (Kohn).

I have been asked, at this workshop, whether the violation of the uncertainty relations could be accounted for by a reduced limit of the constant, e.g. somewhat smaller than $\hbar/2$, a solution which was proposed by Ozawa for the violations detected in the free-mass measurements [13]. While this seems, at least for the time being, a possible solution, it disregards the ultimate origin of the uncertainty relations. They are based, conceptually, on the assumption that electrons are point particles (this is the link to classical mechanics and Poisson brackets), and on the obligation to account for the wave properties of electrons. If wave properties are real, a view taken in the current framework, then there is no theoretical limit to the precision of their description. A remedy along the lines sketched above then becomes untenable.

III. WAVEFUNCTIONS AND CHARGE DENSITY

If the density of electron charge is a real physical property, then a common framework must be developed which allows the density to be mapped onto wavefunctions in the Schrödinger theory. Wavefunctions famously do not have physical reality in the conventional model. However, their square does, according to the Born rule. Here, we want to demonstrate that this is correct to some extent also within the new model, but with one important limitation: even though wavefunctions do not have the same reality as mass or spin densities, they can be assembled from these two – physically real – properties.

A. Single electrons

1. Density and energy

It has been recognized by some of the greatest physicists of the 20th century, among them Albert Einstein, that electrons play a key role in modern physics. Indeed, one could argue that all physical science at the atomic and molecular level – physics, chemistry, and biology – is concerned with only one topic: the behavior and properties of electrons. This is also reflected in the celebrated theorem of Walter Kohn: all properties of a physical system composed of atoms are defined once the distribution of electron charge within the system is determined [15]. The solution to the problem of the electron density distribution is formulated in density functional theory in a Schrödinger-type equation. The spin density is, in this framework, denoted as an isotropic spin-up or spin-down component of the total charge density; the energy related to this spin density is computed with the help of Pauli-type equations.

However, the framework does not provide physical insights into either spin densities at the single-electron level, or how spin densities will change in external magnetic fields. What was missing, so far, was a clear connection between the density of electron charge on the one hand and the spin density on the other: a connection which should explain the physical origin of wavefunctions in the standard model. It should also explain how density distributions may change as a consequence of changes to the electron velocity, thus underpinning the wave properties of electrons found in all experiments.

It turns out to be surprisingly simple to construct such a model. Once it is accepted that electrons must be extended, the wave features must be part of the density distributions of free electrons themselves. In this case the density of charge must also be wavelike. This poses a problem for both standard quantum theory and density functional theory, because in both cases free electrons are described by plane waves:

$$\psi(\mathbf{r}) = \frac{1}{\sqrt{V}}\exp\left(i\mathbf{k}\cdot\mathbf{r}\right) \qquad (1)$$

In this case the Born rule gives a constant value for the probability density, for the mass density, and for the charge density: free electrons, then, do not have any distinctive property related to their velocity. But the – now physically real – wave properties of mass and spin densities can be recovered by first assigning a wave-like behavior to the density of electron mass moving in the z-direction:

$$\rho(z,t) = \frac{\rho_0}{2}\left[1 + \cos\left(\frac{4\pi}{\lambda}z - 4\pi\nu t\right)\right] \qquad (2)$$

where $\rho_0$ is the inertial mass density of the electron, and $\lambda$ and $\nu$ depend on the momentum and frequency according to the de Broglie and Planck rules. At zero frequency and infinite wavelength, describing an electron at rest, the mass density is equal to the inertial mass density. However, if the electron moves, then the density is periodic in $z$ and $t$. This requires the existence of an additional energy reservoir to account for the variation in kinetic energy density. We next introduce the spin density as the geometric product of two field vectors, $\mathbf{E}$ and $\mathbf{H}$, which are perpendicular to the direction of motion. These fields are:

$$\mathbf{E}(z,t) = \mathbf{e}_1 E_0 \sin\left(\frac{2\pi}{\lambda}z - 2\pi\nu t\right), \qquad \mathbf{H}(z,t) = \mathbf{e}_2 H_0 \sin\left(\frac{2\pi}{\lambda}z - 2\pi\nu t\right) \qquad (3)$$

Spin, in this picture, is the geometric product of the two vector components. It is thus a chiral (and for the free electron imaginary) field vector, which is either parallel or antiparallel to the direction of motion. The total energy density is constant and equal to the inertial energy density if we impose a condition on the spin amplitude [16]:

$$E_{\rm kin}(z,t) = \frac{1}{2}\rho_0 v_{\rm el}^2 \cos^2\left(\frac{2\pi}{\lambda}z - 2\pi\nu t\right)$$
$$E_{\rm spin}(z,t) = \left(\frac{1}{2}\epsilon_0 E_0^2 + \frac{1}{2}\mu_0 H_0^2\right)\sin^2\left(\frac{2\pi}{\lambda}z - 2\pi\nu t\right)$$
$$\frac{1}{2}\epsilon_0 E_0^2 + \frac{1}{2}\mu_0 H_0^2 =: \frac{1}{2}\rho_0 v_{\rm el}^2 \qquad (4)$$
$$\Rightarrow\ E_{\rm tot} = E_{\rm kin}(z,t) + E_{\rm spin}(z,t) = \frac{1}{2}\rho_0 v_{\rm el}^2$$

It should be noted that not only the frequency but also the intensity of the spin component depends on the velocity of the electron. This is in marked contrast to classical electrodynamics, where the energy of a field depends only on the intensity and not on the frequency. Here, it is a necessary consequence of the principle that the electron density is a real physical variable, and it establishes a link between the quantum behavior of electrons and the quantum behavior of electromagnetic fields.
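The partition in Eq. (4) can be verified numerically. The sketch below uses arbitrary unit values for $\rho_0$, $v_{\rm el}$, $\lambda$ and $\nu$ (hypothetical, for illustration only) and checks that, once the amplitude condition ties the spin prefactor to $\frac{1}{2}\rho_0 v_{\rm el}^2$, the total energy density is constant in $z$ and $t$:

```python
import math

rho0, v_el = 1.0, 1.0        # arbitrary units (illustrative values)
lam, nu = 1.0, 1.0

def phase(z, t):
    return 2.0 * math.pi * z / lam - 2.0 * math.pi * nu * t

# Amplitude condition of Eq. (4): the spin-energy prefactor
# (1/2)eps0*E0^2 + (1/2)mu0*H0^2 equals the kinetic one (1/2)rho0*v^2.
E0_total = 0.5 * rho0 * v_el**2

for z, t in [(0.0, 0.0), (0.3, 0.1), (0.7, 0.45)]:
    e_kin = 0.5 * rho0 * v_el**2 * math.cos(phase(z, t))**2
    e_spin = E0_total * math.sin(phase(z, t))**2
    assert abs(e_kin + e_spin - E0_total) < 1e-12  # constant total, Eq. (4)
print("total energy density constant:", E0_total)
```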

This behavior gives a much more precise explanation for the validity of Planck's derivation of black body radiation. If every electromagnetic field, due to emission or absorption of energy by electrons, must follow the same characteristic, then every energy exchange must also be proportional to the frequency of the field. Planck's assumption that $E = h\nu$ is then nothing but a statement of this fact. However, that the intensity also follows the same rule has been unknown so far. In our view this could be the fundamental principle for a general framework of a non-relativistic quantum electrodynamics to be developed in the future.

It should also be noted that the electrostatic repulsion of such an extended electron has to be accounted for, as it is in density functional theory (DFT), by a negative cohesive energy of the electron of −8.16 eV. In DFT this energy component is known as the self-interaction correction.

2. Wavefunctions

It is straightforward to assemble wavefunctions from mass and spin density components, following this route. Wavefunctions are, in our framework, multivectors containing the even elements of geometric algebra in three-dimensional space [17]. The even elements are real numbers and bivectors (products of two vectors); the $4\pi$ symmetry, which is the basis of Fermi statistics in the conventional framework, follows from the symmetry properties of multivectors under rotations in space. The real part $\psi_m$ of a general wavefunction can be written as a scalar part, equal to the square root of the number density:

$$\psi_m = \rho^{1/2} = \rho_0^{1/2}\cos\left(\frac{2\pi}{\lambda}z - 2\pi\nu t\right) \qquad (5)$$

In geometric algebra, this is the scalar component of a general multivector. The bivector component $\psi_s$ is the square root of the spin component, times the unit vector in the direction of electron propagation, times the imaginary unit. It is thus:

$$\psi_s = i\mathbf{e}_3 S^{1/2} = i\mathbf{e}_3 S_0^{1/2}\sin\left(\frac{2\pi}{\lambda}z - 2\pi\nu t\right) \qquad (6)$$

The scalar component and the bivector component for an electron are equal to the inertial number density:

$$\rho_0 = S_0 \;\Rightarrow\; \rho + S = \rho_0 = \text{constant} \qquad (7)$$

The same result can be reached by applying the Born rule, for the wavefunction defined as:

$$\psi = \rho^{1/2} + i\mathbf{e}_3 S^{1/2}, \qquad \psi^{\dagger} = \rho^{1/2} - i\mathbf{e}_3 S^{1/2} \qquad (8)$$
$$\psi^{\dagger}\psi = \rho + S = \rho_0 = \text{constant}$$

The difference to the conventional formulation is that the wavefunction is a multivector, not a complex scalar. It also makes the spin component a chiral vector, which is important for the understanding of spin measurements.

Formally, we can recover the standard equations of wave mechanics if we define the Schrödinger wavefunction as a complex scalar, retaining the direction of the spin component as a hidden variable. The wavefunction for a free electron then reads:

$$\psi_S = \rho^{1/2} + iS^{1/2} = \rho_0^{1/2}\exp\left[i\left(\frac{2\pi}{\lambda}z - 2\pi\nu t\right)\right] \qquad (9)$$

In the conventional framework the dependency of the wavefunction and the Schrödinger equation on external scalar or vector potentials is usually justified with arguments from classical mechanics and energy conservation. In our approach, the justification is the changed frequency and wavevector of electrons if they are subject to external fields. If we assume that the frequency of the electron varies from the value inferred from the de Broglie and Planck relations,

$$i\hbar\frac{\partial\psi_S}{\partial t} = h\nu\,\psi_S \neq -\frac{\hbar^2}{2m}\nabla^2\psi_S = \frac{p^2}{2m}\psi_S, \qquad (10)$$

then the difference, which is observed in the photoelectric effect, can be accounted for by an additional term in the equation which is linear in the measured scalar potential. Then:

$$i\hbar\frac{\partial\psi_S}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\psi_S + V\psi_S \qquad (11)$$

The second situation where this can be the case, observed for example in Aharonov-Bohm effects, is when the wavelength does not comply with the wavelength inferred from the frequency and the Planck and de Broglie relations. In this case one can account for the observation by including the vector potential in the differential term of the equation, to arrive at the general equation [16]:

$$i\hbar\frac{\partial\psi_S}{\partial t} = \frac{1}{2m}\left(i\hbar\nabla - e\mathbf{A}\right)^2\psi_S + V\psi_S \qquad (12)$$

The important difference, as will be seen presently, is that all these effects occur at a local level and can therefore be analyzed locally: a philosophy which also forms the core of the local density approximation in DFT.

B. Many-electron systems

In a many-electron system the motion of electrons is correlated throughout the system and mediated by crystal fields within the material. If the spin component in general is a bivector, and if it is subject to interactions with other electrons in the system, then the general, scalar Schrödinger equation will not describe the whole physics of the system. Simply accounting for all interactions by a scalar effective potential $v_{\rm eff}$ would recover the Kohn-Sham equations of DFT, if exchange and correlation were included. It would do so, however, for both density components and spin components, since:

$$\left(-\frac{\hbar^2}{2m}\nabla^2 + v_{\rm eff}\right)\left(\rho^{1/2} + i\mathbf{e}_3 S^{1/2}\right) = \mu\left(\rho^{1/2} + i\mathbf{e}_3 S^{1/2}\right)$$
$$\left(-\frac{\hbar^2}{2m}\nabla^2 + v_{\rm eff} - \mu\right)\rho^{1/2} = 0 \qquad (13)$$
$$\left(-\frac{\hbar^2}{2m}\nabla^2 + v_{\rm eff} - \mu\right)S^{1/2} = 0$$

In this case the solutions of the equation, single Kohn-Sham states, would exist throughout the system and would not lend themselves to a local analysis of physical events. More importantly, such a model would not include an independent spin component in the theoretical description. We therefore propose a different framework for a many-electron system, which scales linearly with the number of electrons and remains local. Such a model can be achieved by including a bivector potential in a generalized Schrödinger equation in the following way:

$$\left(-\frac{\hbar^2}{2m}\nabla^2 + v_{\rm eff} + i\mathbf{e}_v v_b\right)\left(\rho^{1/2} + i\mathbf{e}_s S^{1/2}\right) = \mu\left(\rho^{1/2} + i\mathbf{e}_s S^{1/2}\right) \qquad (14)$$

where we have changed the spin component to describe a general spin direction $\mathbf{e}_s$. The geometric product of two vectors is the sum of a real scalar and an imaginary vector:

$$\mathbf{e}_v\mathbf{e}_s = \mathbf{e}_v\cdot\mathbf{e}_s - i\,\mathbf{e}_v\times\mathbf{e}_s \qquad (15)$$

The equation of motion for a general many-electron system then reads:

$$\left(-\frac{\hbar^2}{2m}\nabla^2 + v_{\rm eff} - \mu\right)\rho^{1/2} = \mathbf{e}_v\cdot\mathbf{e}_s\, v_b\, S^{1/2}$$
$$\left(-\frac{\hbar^2}{2m}\nabla^2 + v_{\rm eff} - \mu\right)\mathbf{e}_s S^{1/2} + \mathbf{e}_v v_b\,\rho^{1/2} = -\,\mathbf{e}_v\times\mathbf{e}_s\, v_b\, S^{1/2} \qquad (16)$$

If $\mathbf{e}_v = 0$, we recover Eq. (13). As inspection shows, the coupled equations only have a solution if the direction of the bivector potential is equal to the direction of spin ($\mathbf{e}_v = \mathbf{e}_s$), which reduces the problem to:

$$\left(-\frac{\hbar^2}{2m}\nabla^2 + v_{\rm eff} - \mu\right)\left(\rho^{1/2} - S^{1/2}\right) = v_b\left(\rho^{1/2} + S^{1/2}\right) \qquad (17)$$

With the transformation $\tilde{\rho}^{1/2} = \rho^{1/2} - S^{1/2}$ and for $v_b = 0$ this equation is identical to the Levy-Perdew-Sahni equation derived for orbital-free DFT in the 1980s [18]:

$$\left(-\frac{\hbar^2}{2m}\nabla^2 + v_{\rm eff} - \mu\right)\tilde{\rho}^{1/2} = 0 \qquad (18)$$

One can reduce the expression to the conventional Schrödinger equation for the hydrogen atom by setting $v_{\rm eff} = v_n$, the Coulomb potential of the central nucleus. The equation then has two groundstate solutions, both radially symmetric:

$$\rho^{1/2} = \pm\frac{C}{2}e^{-\alpha r}, \qquad S^{1/2} = \pm\frac{C}{2}e^{-\alpha r} \qquad (19)$$

where $C$ is a constant and $\alpha$ is the inverse Bohr radius. The vector $\mathbf{e}_s$ is the radial unit vector $\mathbf{e}_r$, and the two spin directions are inward and outward. The same solution will apply to all s-like charge distributions, and therefore also to the valence electron of silver (see the discussion of Stern-Gerlach experiments below).
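The exponential form of Eq. (19) can be checked directly against the hydrogen groundstate equation. A minimal sketch in atomic units ($\hbar = m = e^2/4\pi\epsilon_0 = 1$, an assumed convention for convenience) uses the radial Laplacian of $e^{-\alpha r}$ and the Coulomb potential $v_n = -1/r$, with $\alpha = 1$ (inverse Bohr radius) and eigenvalue $\mu = -1/2$:

```python
import math

# Atomic units: hbar = m = e^2/(4*pi*eps0) = 1 (assumed convention).
alpha, mu = 1.0, -0.5   # inverse Bohr radius and groundstate eigenvalue

def psi(r):
    return math.exp(-alpha * r)

def laplacian(r):
    # Radial Laplacian of exp(-alpha*r): psi'' + (2/r) psi'
    return (alpha**2 - 2.0 * alpha / r) * psi(r)

for r in [0.5, 1.0, 2.0, 5.0]:
    lhs = -0.5 * laplacian(r) - psi(r) / r   # (-1/2)lap(psi) + v_n*psi, v_n = -1/r
    assert abs(lhs - mu * psi(r)) < 1e-12
print("exp(-alpha*r) solves the radial equation with mu = -1/2")
```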

The great advantage of the formulation is its simplicity and the reduced number of variables. Both $\rho$ and $S$ are scalar variables. In addition, we have to find the directions of the unit vectors $\mathbf{e}_s = \mathbf{e}_v$ for every point of the system. This reduces the time-independent problem to one of finding five scalar components in real space. Compared to the standard formulation of many-body physics, where one has to find a wavefunction of $3N$ variables, where $N$ is the number of electrons, or to standard DFT, which scales cubically with $N$, the approach is much simpler.
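The scaling claim can be illustrated with simple bookkeeping (the grid sizes below are hypothetical, chosen only for orientation): a many-body wavefunction needs one amplitude per point of a $3N$-dimensional grid, while the present formulation needs five real fields on a single three-dimensional grid:

```python
# Rough count of unknowns in each formulation (illustrative sizes only).
g, N = 50, 10          # hypothetical: g grid points per axis, N electrons
M = g ** 3             # real-space grid points

many_body_wavefn = g ** (3 * N)  # one amplitude per point of 3N-dim space
new_model = 5 * M                # rho^(1/2), S^(1/2), and e_s = e_v (3 comps)

print(new_model)                     # 625000 real unknowns
print(many_body_wavefn > 10 ** 50)   # True: exponential blow-up of the
                                     # many-body wavefunction
```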

However, the effective potential $v_{\rm eff}$ and the bivector potential $v_b$ in this model are generally not known and have to be determined for every system. In standard DFT this is done by calculating the exchange-correlation functional for simpler systems, or for very small systems with high-precision methods. The same route will have to be taken for this new model of many-body physics. Judging from the development of standard DFT, this process will probably take at least ten years before reliable methods can be routinely used in simulations. But we think that this method and this approach to many-body physics will also be an element of physics in the 21st century.

IV. EXPERIMENTS

As stated in the introduction, we consider the fact that quantum mechanics does not allow for a detailed analysis of single events a major drawback of the theory. However, a theoretically more advanced model will have to pass the test that it can actually deliver these insights. This value statement – that a theoretical framework is superior not because it obtains higher precision in its numerical predictions, but because it provides causal insight into physical processes – is somewhat alien to the current debate about quantum mechanics. The tacit agreement seems to be that no theory can provide such an insight. This is one of the fundamental assumptions of the Copenhagen school, where it is stated that no theoretical model can be more than a coherent framework for obtaining numbers in experimental trials. But we do not actually know that this is true, because the assumption that it is true itself contains an assumption about reality: the assumption that reality cannot in principle be subjected to an analysis in terms of cause and effect in physical processes. The argument is thus not even logically consistent with its own belief system.

Here, we want to show that the analysis of single events in terms of cause and effect is possible also at the atomic scale. This, we think, demonstrates more than anything else the problems of the standard framework. To an unbiased observer it appears sometimes as if the mathematical tools had, over the last century, acquired a life of their own, so that they are seen as a separate reality which exists independently of space and time. Hilbert space seems such a concept, and the inability of the standard framework to actually locate a trajectory in space is then seen as proof of the reality of infinitely many trajectories in Hilbert space. This is a logical fallacy, as we shall demonstrate by an analysis of crucial experiments in the following.

A. Acceleration of electrons

We are quite used to the fact that the wavelength of an electron is inversely proportional to its momentum. It is thus also quite normal to write a wavefunction of a particular free electron which contains a variation of its amplitude according to this momentum. However, when an electron is accelerated, standard theory refers us to the Ehrenfest theorem [19]. Incidentally, Paul Ehrenfest was also born in Vienna, in 1880. But his theorem only describes the change of an expectation value in a system. It does not allow us to understand how the wavefunction changes its wavelength, or how the frequency of the wave increases when it interacts with an accelerating potential. Within the present model, this is exactly described at the local level by a new equation, which we call the local Ehrenfest theorem. Its mathematical expression is:

$$\mathbf{f} = -\nabla\phi = \rho_0\frac{d\mathbf{v}}{dt} \qquad (20)$$

It states that the force (density) at a particular point of the electron is exactly equal to the gradient of an external potential $\phi$, and that it is described by its classical formulation, the acceleration of its inertial mass. The reason that it is described by this equation is that the number density or the mass density (here we use the two notions interchangeably) is complemented by the spin density to yield a constant:

$$\rho + S = \rho_0 = \text{constant} \qquad (21)$$

The same applies to the square of the wavefunction, which is:

$$\psi^{*}\psi = \rho + S = \rho_0 = \text{constant} \qquad (22)$$

The time differential of the momentum density at a particular point is therefore:

$$\frac{d}{dt}\left(m\,\psi^{*}\psi\,\mathbf{v}\right) = m\left(\psi^{*}\psi\right)\frac{d\mathbf{v}}{dt} \qquad (23)$$

However, what is hidden in the classical expression is the shift of energy from the mass component to the spin component as the electron accelerates:

$$\dot{S} = -\dot{\rho} \qquad (24)$$

Here we find the reason for the change of wavelength in an acceleration process: the spin component increases in amplitude, and as gradually more energy is shifted into this component, the wavelength becomes shorter and the frequency increases. A process which so far has remained buried underneath the mathematical formalism is now open to analysis.
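For the free-electron densities of Eqs. (2) and (4), the balance $\dot{S} = -\dot{\rho}$ follows directly from $\rho + S = \rho_0$. A short numerical sketch (arbitrary unit parameters) checks both statements with a central finite difference:

```python
import math

rho0, lam, nu = 1.0, 1.0, 1.0   # arbitrary units (illustrative values)

def rho(z, t):   # mass density of Eq. (2), written as rho0*cos^2
    return rho0 * math.cos(2*math.pi*z/lam - 2*math.pi*nu*t)**2

def S(z, t):     # complementary spin density, rho0*sin^2
    return rho0 * math.sin(2*math.pi*z/lam - 2*math.pi*nu*t)**2

z, t, h = 0.3, 0.2, 1e-6
rho_dot = (rho(z, t+h) - rho(z, t-h)) / (2*h)   # central difference in t
S_dot = (S(z, t+h) - S(z, t-h)) / (2*h)

assert abs(rho(z, t) + S(z, t) - rho0) < 1e-12  # Eq. (21): constant sum
assert abs(S_dot + rho_dot) < 1e-6              # Eq. (24): S-dot = -rho-dot
print("rho + S constant; energy shifts between mass and spin components")
```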

B. Stern-Gerlach experiments

An inhomogeneous magnetic field leads to a deflection of atoms if they possess a magnetic moment. This effect was used in the groundbreaking experiments on silver by Gerlach and Stern in 1922 [20] to demonstrate that the classically expected result, i.e. a statistical distribution around a central impact, is not in line with experimental outcomes. Moreover, the assumption that the orbital moment would cause the deflection was also untenable, because in this case one would observe an odd number of impact locations and not, as in the actual experiments, exactly two. Within the new model the effect is easy to understand. Above, we derived the solution for the electron mass and spin density of a hydrogen-like atom. Assuming that the valence electron of silver can be described by a similar model, we find two different spin directions: one parallel to the radial vector and directed outward, the other parallel to the radial vector and directed inward (see Fig. 2, left images). The induced spin densities $S_i$ (see Fig. 2, centre) as the atoms enter the field are due to the changes of the spin orientation in a time-dependent magnetic field, which comply with a Landau-Lifshitz-like equation [16]:

$$\mathbf{S} = \mathbf{e}_S\, S, \qquad \frac{d\mathbf{e}_S}{dt} = \text{constant}\cdot\mathbf{e}_S\times\left(\mathbf{v}\times\frac{d\mathbf{B}}{dt}\right) \qquad (25)$$

Then the induced spin densities will lead to a precession around the magnetic field $\mathbf{B}$ in two directions, which will give rise to induced magnetic moments parallel or antiparallel to the field. In an inhomogeneous field the force of deflection is then directed either parallel or antiparallel to the field gradient, leading to two deflection spots on the screen, exactly as seen in the experiments (Fig. 2, right). In the standard model, which assumes that:

1. Spin is isotropic.

2. A measurement breaks the symmetry of the spin.

no process exists which could actually explain the symmetry breaking of the initially isotropic spin. The situation is completely different in the new model. Here the process is described by:

1. Spin is isotropic.

2. The measurement induces spins aligned with the magnetic field.

3. The induced spins lead to positive or negative deflections in a field gradient.

FIG. 2: Spin measurement of a hydrogen-like atom. Left: the spin densities $S_0$ are parallel to the radial vector. Center: the direction of the induced spin densities $S_i$ is parallel or antiparallel to the magnetic field. Right: due to the inhomogeneous field the atoms are deflected upward or downward.

The description is fully deterministic, since the initial direction of the spin densities determines the experimental outcome. Statistics only enter the picture if the initial spin densities are unknown, which they are in practice. Again, we see that the new model actually describes processes at the level of single events, and that probabilities arise due to unknown initial conditions, but are not fundamental to a comprehensive model.
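The deterministic account above can be caricatured in a few lines (a toy illustration, not the paper's computation): each atom carries a definite initial spin orientation along $\pm\mathbf{e}_r$, this orientation fixes the sign of the induced moment and hence the deflection, and statistics appear only because the initial orientations are unknown:

```python
import random

# Toy model: a definite initial spin orientation (outward or inward along
# e_r) deterministically fixes the deflection of each individual atom.
def deflect(spin_outward):
    return +1 if spin_outward else -1   # upper or lower spot on the screen

random.seed(0)
atoms = [random.random() < 0.5 for _ in range(10_000)]  # unknown initial spins
spots = [deflect(s) for s in atoms]

print(sorted(set(spots)))   # [-1, 1]: exactly two impact regions
# The ~50/50 split arises only from the unknown initial conditions:
print(abs(spots.count(1) / len(spots) - 0.5) < 0.05)   # True
```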

C. Interference experiments

Double slit experiments are notoriously diﬃcult to un-

derstand in the framework of standard quantum mechan-

ics. So diﬃcult, in fact, that Richard Feynman called

them ”a phenomenon which is impossible, absolutely im-

possible, to explain in any classical way, and which has

in it the heart of quantum mechanics. In reality, it con-

tains the only mystery” [21]. The work done recently,

aimed at shedding light on this mystery, is already quite

convincing: whether it is with Bohm-type trajectories,

ﬂuctuating ﬁelds [22], or whether it is by establishing

the trajectories with weak measurements [23], the re-

sult always seems to be that one particle passes through

one particular opening. Mathematically, the interference

phenomena in the standard framework are calculated e.g.

FIG. 3: Double slit interferometry, Feynman path integrals.

A single particle is assumed to split into virtual particles prior

to the interferometer. After the interferometer all particles

recombine, the acquired phases along their path determining

the interference amplitude. A single particle is detected at

the detector screen.


FIG. 4: Double slit interferometry, real picture. Left: A sin-

gle particle passing through an opening of the interferome-

ter acquires a discrete lateral momentum due to interactions

with the discrete interaction spectrum of atomic scale sys-

tems. The interference pattern is a series of sharp impact

regions. Right: due to the thermal energy of the slit environment and interactions with molecules in air, the impact regions broaden into Gaussians until they resemble the wave-like interference pattern of an optical interferometer.

with the help of Feynman path integrals. The process de-

scribed in this mathematical framework is shown in Fig.

3. A single particle, upon entering the vicinity of the

interferometer, is assumed to split into a number of vir-

tual particles. Each virtual particle passes exactly one

opening of the interferometer, where it acquires a characteristic phase. After the interferometer, all particles are recombined, interfering in a particular way due to their acquired phases. A single impact is observed on

the detector screen.

It is quite clear, and is conceded also in the standard

framework, that this has nothing to do with real events.

However, this insight does not solve the problem of what actually happens so that single entities (electrons or photons) acquire certain deflections in an interferometer, and why these deflections bear an uncanny resemblance to the interference patterns of light in an interferometer. In our view, this problem could actually have been

solved a long time ago by Duane [24]; a solution which

was later taken up by Lande [25], and which contains no

mystery at all. The key observation for their model is

that every atomic scale system has a discrete interaction

spectrum. This means that every interaction of such a

system with a single photon or electron can only cause ob-

servable changes in the particle’s dynamics, if a discrete

amount of energy is exchanged, typically corresponding


to the excitation of single lattice vibrations. Given this

fact, it is impossible that the particle acquires a contin-

uous lateral momentum. Consequently, it also cannot

be detected in intermediate regions, unless its trajectory

is additionally determined by thermal broadening of the

actual interaction.
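Duane's rule [24] states that a structure with spatial period d can exchange lateral momentum only in quanta h/d. A minimal numerical sketch, in which the slit period and electron energy are assumed values for illustration only:

```python
import math

h = 6.62607015e-34        # Planck constant, J s
m_e = 9.1093837015e-31    # electron mass, kg
e_ch = 1.602176634e-19    # elementary charge, C

def lateral_momenta(d, n_max):
    """Allowed lateral momentum transfers p_n = n h / d for a structure
    of spatial period d (Duane's quantum rule)."""
    return [n * h / d for n in range(-n_max, n_max + 1)]

# Assumed illustration values: a 100 nm period and a 1 keV electron.
d = 100e-9
v = math.sqrt(2.0 * 1e3 * e_ch / m_e)       # electron speed at 1 keV
p_forward = m_e * v
angles = [p / p_forward for p in lateral_momenta(d, 3)]  # small-angle deflections
print(angles[3], angles[4])    # zeroth- and first-order deflection angles
```

Note that the first-order angle (h/d)/p equals lambda/d with lambda = h/p, i.e. exactly the first diffraction angle of the wave picture; this is the core of the Duane-Lande argument that the discrete momentum exchange alone reproduces the positions of the interference maxima.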

This model of the process can be experimentally ver-

iﬁed. The key to such a veriﬁcation is the separation of

the individual eﬀects changing the particle’s trajectory

(see Fig. 4). In a liquid helium environment the thermal

motion of atoms is frozen. In addition, in an ultrahigh

vacuum environment, no interactions with molecules are

possible. In a low temperature interferometer the im-

pacts will be sharply deﬁned images of the particle beam

deﬂected by interactions with the atomic environment,

while a gradual increase of the temperature of the in-

terferometer should lead to a gradual broadening of the

impact regions. This broadening, moreover, should re-

ﬂect the thermal energy range of the slit environment.

Performing such a controlled experiment seems entirely

feasible today, and in our view it will establish that in-

deed the interaction with the atomic environment, and

not some ﬁctitious splitting and recombination process is

at the bottom of this hundred-year-old mystery.
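The predicted temperature dependence can be illustrated by a toy simulation, in which all positions and widths are assumed, dimensionless values: discrete impact regions convolved with a Gaussian whose width grows with the thermal energy of the slit environment.

```python
import math

xs = [i * 0.005 - 3.0 for i in range(1201)]   # detector coordinate grid
peaks = [-2.0, -1.0, 0.0, 1.0, 2.0]           # discrete impact positions (assumed)

def pattern(sigma):
    """Sum of Gaussian-broadened impact regions; sigma models the
    thermal broadening of the discrete deflections."""
    y = [sum(math.exp(-(x - p) ** 2 / (2.0 * sigma ** 2)) for p in peaks) for x in xs]
    top = max(y)
    return [v / top for v in y]

cold = pattern(0.05)   # low temperature: sharp, well separated impact regions
warm = pattern(0.50)   # higher temperature: regions merge into a smooth pattern

# Intensity midway between two peaks: essentially zero when cold, large when warm.
mid = min(range(len(xs)), key=lambda i: abs(xs[i] - 0.5))
print(cold[mid], warm[mid])
```

Raising sigma gradually, as a temperature sweep would, carries the pattern continuously from sharp spots to the familiar wave-like fringes.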

1. Interference of large molecules

It has been claimed, in a number of high-impact pub-

lications since 1999, that large molecules can be made

to interfere on gold gratings, and that these experiments

show both the coherence of the molecules over macroscopic trajectories (centimeter range), and that the ”wavelength” of these molecules is equal to the de Broglie wavelength of their inertial mass. This is highly naive

and manifestly incorrect, as we show in the following.

As the exemplar of these misguided interpretations we take the first experiments on C60 molecules, which were published in the journal Nature

[26]. Due to the interest of the Chemistry community

in these molecules, their properties have been extremely

well researched in the past. Theorists routinely calcu-

late their electronic properties, their phonon spectrum,

and their light absorption and emission spectrum. They

have been adsorbed on surfaces and their charge density

distribution has been compared to the results of STM ex-

periments, which veriﬁed the theoretical results in great

detail. As every Chemist will know, phonon or vibra-

tional modes of organic molecules are varied and range

from a few meV (breathing modes, torsion) to a few hundred meV (stretch modes). This particular molecule contains 60 carbon atoms; it thus has 180 modes of vibration, which cover the whole energy range.

Experiments are performed in such a way that the

molecules are heated with laser light, reaching velocities

of a few hundred meters per second, and then passed

through a grating with a width of about 50 nm, and

a depth of 100 nm. All molecules presented in this

type of experiments so far are polarizable, that is they

can possess a dipole moment. No control experiments

with molecules which are not polarizable have been per-

formed to date. After the grating it is observed that the

molecules do not impinge on the screen in a continuous

fashion, but that their impact count shows a variation,

which is taken as proof that the molecules possess a de

Broglie wavelength and interfere as coherent waves.

This is fundamentally wrong on several counts. First,

it is well known that the electronic density fully characterizes a many-electron system. A de Broglie wavelength, which does make sense for free electrons, does not

exist in such a structure. Second, it is also well known

that internal degrees of freedom of molecular systems

start mixing after very short timescales, in the range of

femtoseconds. That a molecule is heated with a laser -

most likely leading to excitation of electronic transitions

- and then spends microseconds preserving a ﬁctitious

state vector related to its translational motion, while

shaking rapidly due to vibrational excitations, is not credible. Third, it is even less credible that such a molecule,

with its time dependent dipole moment, will not induce

dipole moments in the slit itself, which then interact with

the molecule’s dipole to alter its trajectory. And fourth,

the ﬁctitious state vector of this molecule, which does

not exist, is supposed to interfere with another ﬁctitious

state vector which went through a diﬀerent slit, a pro-

cess, which is completely impossible, unless one assumes

that the molecule, during its trajectory, will split into

several individual molecules. How this could be possible,

given that such a creation of additional molecules violates

the energy principle by several MeV, has never been ex-

plained and can safely be regarded as pure ﬁction. In

summary, the model is wrong in so many ways that one is alarmed by the lack of knowledge in basic Chemistry

and solid state Physics of its authors and, presumably,

the journal’s editors.

So how does it really work? Most likely in the way

sketched in the previous section. A polarizable molecule

is excited by laser light so that most of its low lying vi-

brational excitations are activated. This molecule enters

the interferometer with a time-dependent dipole moment

in lateral direction. As the molecule interacts with the

atomic environment of the interferometer, it induces elec-

tric dipoles into the slit system. These time-dependent

dipole moments interact with the molecular dipole mo-

ments until the molecule has passed the interferometer.

Due to the interaction the molecules acquire a distinct

lateral momentum. The momentum leads to a deﬂec-

tion on the detector screen. The deﬂection is interpreted

as the result of a de Broglie wave, because the distance

from the point of no deﬂection to the point of impact is

inversely proportional to the velocity of the molecule. Why is it inversely proportional to the velocity? Because the duration of the interaction depends on the time the molecule spends in a slit environment of constant depth. A faster molecule will spend less time in the slit, therefore acquire less lateral momentum, and therefore end up closer to the point of no deflection. This, again, has nothing to do with a de Broglie wave, and everything to do with the constant distance from the entry to the exit of the interferometer (100 nm). This whole scenario

should be relatively easy to simulate with modern elec-

tronic structure methods. One could also try to pin down

the actual eﬀect by using non-polarizable molecules. The

prediction here is that no periodic variation on the screen

will be observed in this case.
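The inverse-velocity argument can be condensed into a few lines; the force and slit depth below are assumed values, since only the scaling matters: a lateral force acting during the transit time L/v of a slit of fixed depth L yields a momentum transfer proportional to 1/v.

```python
def lateral_momentum(F, L, v):
    """Constant lateral force F acting during the transit time t = L / v
    of a slit of depth L; the acquired lateral momentum is F * t."""
    t = L / v
    return F * t

L = 100e-9     # slit depth of 100 nm, as quoted in the text
F = 1e-20      # assumed interaction force, N (illustration only)

p_slow = lateral_momentum(F, L, v=100.0)
p_fast = lateral_momentum(F, L, v=200.0)
print(p_slow / p_fast)   # doubling the velocity halves the momentum transfer
```

The deflection on the screen therefore scales as 1/v, mimicking a de Broglie wavelength without any wave hypothesis.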

D. Aspect-type experiments

These experiments have been puzzling physicists for

at least thirty years. The height of the confusion was

probably reached with Aspect’s review paper in the jour-

nal Nature in 1999, where he stated: ”The violation of

Bell’s inequality, with strict relativistic separation be-

tween the chosen measurements, means that it is impos-

sible to maintain the image a la Einstein where corre-

lations are explained by common properties determined

at the common source and subsequently carried along by

each photon. We must conclude that an entangled EPR

photon pair is a non-separable object; that is, it is impos-

sible to assign individual local properties (local physical

reality) to each photon. In some sense, both photons

keep in contact through space and time” [27].

We shall show in the following that exactly such a

model a la Einstein can explain all experimental data

and that the confusion arises from a fundamental techni-

cal error in Bell’s derivations. To explain the experiments

in detail at the single photon level, let us start with set-

ting up a system composed of a source of photons at the

point z = 0, and two polarization measurements at arbitrary points z = A and z = −B. We assume that the polarization measurements contain rotations in the plane parallel to z. We also assume that the two photons are emitted from the source with an arbitrary angle of polarization ϕ0. It is irrelevant for the following whether the

ﬁeld vectors of the two photons rotate during propaga-

tion. If they do, this will show up only as an additional

angle ∆ between their polarization measurements at A

or −B. The setup of the experiment is shown in Figure

5. A single measurement at A consists of two separate processes. First, the polarization angle is altered by an angle ϕA. Mathematically, this is a rotation in three-dimensional space, in the plane perpendicular to the direction of motion, which can be described by a rotor in this plane (a geometric product) ϕA e1e2 acting on the photon’s field vector S, which is parallel to e3. To take care of normalization, we describe such a rotation as:

R(A) = \exp\left[(\varphi_A + \varphi_0)\,\mathbf{e}_1\mathbf{e}_2(\mathbf{e}_3)\right] = e^{i(\varphi_A + \varphi_0)}    (26)

Then the photon is detected if the probability p, which depends on the angle of rotation and the initial angle of polarization, is larger than a certain threshold:

p[R(A)] = \left[\Re\big(R(A)\big)\right]^2 = \cos^2(\varphi_A + \varphi_0)    (27)


FIG. 5: Aspect-type experiment. Two photons are emitted

from a common source with an initial unknown polarization

angle ϕ0. Their polarization is then measured at points A

and B. (From Ref. [2]).

Depending on how we deﬁne our threshold, which is a

function of the measurement equipment, an impact at

a certain angle of measurement ϕA and a certain initial angle ϕ0 is fully determined by the knowledge of these two angles. The single event is thus fully accounted for. However, in the actual experiments the angle ϕ0 is unknown, and it is randomly distributed over the whole interval [0, 2π]. A set of N experiments will thus lead to a random value for the probability, covering the whole interval [0, 1]. The single measurement is thus random.

The same is true for a measurement at point −B. Also

here the polarization measurement is described by a ro-

tation, with a diﬀerent and fully independent angle ϕB.

The probability of detection is, along the same lines:

p[R(B)] = \left[\Re\big(R(B)\big)\right]^2 = \cos^2(\varphi_B + \varphi_0)    (28)

Also in this case the single event is fully accounted for if the initial angle ϕ0 and the angle of polarization ϕB are known. Again, a set of N experiments will lead to a random value for the probability, covering the whole interval [0, 1].

Naively, one could now assume that the correlation

probability is the product of the two measurement probabilities at points A and −B, respectively. This is exactly what Bell assumed in the derivation of his inequalities, when he wrote [28]:

P(a, b) = \int d\lambda\, \rho(\lambda)\, A(a, \lambda)\, B(b, \lambda)    (29)

Here, λ has the same meaning as the initial angle ϕ0, and the crucial error lies in the assumption that the correlation probability is the product of individual probabilities. This is manifestly incorrect, because it disregards the mathematical properties of rotations. Two separate rotations at A and −B have to be accounted for by a product of individual rotations, thus:

R(A) \cdot R(B) = \exp\left[(\varphi_A + \varphi_0)\,\mathbf{e}_1\mathbf{e}_2(\mathbf{e}_3)\right] \cdot \exp\left[(-\varphi_B - \varphi_0)\,\mathbf{e}_1\mathbf{e}_2(\mathbf{e}_3)\right] = \exp\left[i(\varphi_A - \varphi_B)\right]    (30)

It is impossible, from these two rotations, to derive a probability which is the product of two positive numbers.


Furthermore, the hidden variable ϕ0, which is present in

the probability of individual polarization measurements,

is canceled out in the correlation derived from two sep-

arate rotations. The correct form of the probability for

the correlation derived from the two rotations will be:

p[R(A), R(B)] = \left[\Re\big(R(A) \cdot R(B)\big)\right]^2 = \cos^2(\varphi_A - \varphi_B)    (31)

These probabilities are equal to the correlation probabil-

ities derived in the Clauser-Horne-Shimony-Holt formal-

ism [29]:

C_{++} = C_{--} = \cos^2(\varphi_A - \varphi_B)
C_{+-} = C_{-+} = 1 - \cos^2(\varphi_A - \varphi_B)    (32)

They lead to the standard expectation values measured

in Aspect-type experiments:

E(\varphi_A, \varphi_B) = \cos\left[2(\varphi_A - \varphi_B)\right]    (33)

And they violate the Bell inequalities in the exact same

way as found in the experiments:

S(\varphi_A, \varphi_A', \varphi_B, \varphi_B') = E(\varphi_A, \varphi_B) - E(\varphi_A, \varphi_B') + E(\varphi_A', \varphi_B) + E(\varphi_A', \varphi_B') = 2\sqrt{2}    (34)

if ϕA = 0°, ϕ′A = 45°, ϕB = 22.5°, ϕ′B = 67.5°. To repeat the

ﬁndings: a model based on polarizations and rotations

in space recovers all experimental results. It allows for

a cause-eﬀect description of every single measurement.

It also violates the Bell inequalities: not because it is a non-local model, but because Bell made a fundamental error in the derivation of his inequalities. It is thus,

paraphrasing Aspect’s words, not a proof that a model a

la Einstein is impossible, but rather a proof that many

quantum theorists do not understand geometry.
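The algebra of Eqs. (26)-(34) is easy to check numerically; a minimal sketch in which the rotors are represented by complex phases (the analyzer angles used for the ϕ0-cancellation demonstration are assumed values):

```python
import cmath
import math
import random

def single_prob(phi_meas, phi0):
    """Detection probability at one station, Eqs. (27)/(28):
    squared real part of the rotor exp(i(phi_meas + phi0))."""
    return cmath.exp(1j * (phi_meas + phi0)).real ** 2

def correlation_prob(phi_a, phi_b, phi0):
    """Correlation from the composed rotations, Eqs. (30)/(31):
    the initial angle phi0 cancels in the product R(A) * R(B)."""
    r = cmath.exp(1j * (phi_a + phi0)) * cmath.exp(1j * (-phi_b - phi0))
    return r.real ** 2

random.seed(1)
phi_a, phi_b = 0.3, 1.1          # assumed analyzer angles, radians
probs = [correlation_prob(phi_a, phi_b, random.uniform(0.0, 2.0 * math.pi))
         for _ in range(1000)]
print(max(probs) - min(probs))   # ~0: the correlation is phi0-independent
print(math.cos(phi_a - phi_b) ** 2)

# CHSH combination of Eqs. (33)/(34) with E = cos(2(a - b)):
E = lambda a, b: math.cos(2.0 * (a - b))
a, a2, b, b2 = (math.radians(deg) for deg in (0.0, 45.0, 22.5, 67.5))
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(S)                         # 2*sqrt(2), as in Eq. (34)
```

Single-station probabilities depend on the random ϕ0 and so appear random, while the composed-rotation correlation does not, and the CHSH combination reaches 2√2 at the standard angles.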

V. TOWARDS A DENSITY MODEL OF

ATOMIC NUCLEI

A. Electrons and neutrons

At the end of this presentation I would like to report on

some work in progress. It is quite natural, if one considers the electron an extended particle, to ask what shape and form it might have apart from the atomic environment. We know from DFT that its density, and consequently

its shape, will depend on the potential environment. Af-

ter all, we ﬁnd much higher densities of electron charge

in heavier atoms with a higher number of central charges

than we find in hydrogen. So one may also ask in what

shape and form an electron exists, for example, in a neu-

tron. We know that the neutron decays outside an atomic

nucleus in about 880 seconds to a proton and an electron,

with an excess energy of 785 keV, which is mostly con-

verted into X-ray radiation.

n^0 \rightarrow p^+ + e^- + 785\,\mathrm{keV}    (35)
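The quoted excess energy can be checked against tabulated rest masses (standard values, not taken from the text):

```python
# Rest energies in MeV (standard tabulated values):
m_n = 939.56542   # neutron
m_p = 938.27209   # proton
m_e = 0.51100     # electron

excess_keV = (m_n - m_p - m_e) * 1000.0
print(f"{excess_keV:.0f} keV")   # roughly 780 keV
```

The mass difference is consistent with the roughly 785 keV used in the text.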


FIG. 6: Neutron scattering experiments by Littauer et al.

[30]. A neutron consists of a positive core and a negative

shell. The radius of the neutron is about 1.38fm.

We also know, from scattering experiments (see Fig. 6),

that a neutron contains a core of positive charge, which

one could tentatively call the ”proton” and a shell of

negative charge, which one could to a first approximation identify as the ”electron”. If the electron exists in such a

high density phase, then one could also seek its eigen-

states with the help of a Schr¨odinger equation adapted

to the much smaller lengthscales and much higher energy

scales. However, for such an assumption to make sense it

first has to be determined where the additional mass of

the neutron compared to isolated protons and electrons

comes from. Here, it has to be remembered that the ra-

dius of a neutron is much smaller than the radius of a

hydrogen atom. Therefore, the electrostatic ﬁeld of an

electron outside hydrogen has a very low energy of about

11 eV, while this ﬁeld has a large energy of close to 1

MeV for an electron with a radius of 1.38 fm:

W_0^e = \frac{1}{2} \int_{r_e}^{\infty} \epsilon_0 |\mathbf{E}|^2\, dV = \frac{1}{4\pi\epsilon_0} \frac{e^2}{r_e} \approx 11\,\mathrm{eV}
W_n^e = \frac{1}{4\pi\epsilon_0} \frac{e^2}{r_n} \approx 1040\,\mathrm{keV}    (36)
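The second line of Eq. (36) can be verified with standard constants; a minimal sketch:

```python
import math

e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m

def coulomb_energy_eV(r):
    """e^2 / (4 pi eps0 r) expressed in eV, as in Eq. (36)."""
    return e / (4.0 * math.pi * eps0 * r)   # dividing by e once converts J -> eV

r_n = 1.38e-15             # neutron radius from scattering, m
print(coulomb_energy_eV(r_n) / 1e3)   # close to 1040 keV
```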

Here, one ﬁnds that the electrostatic energy alone, con-

sidering mass equivalents, can account for the excess

mass. Next, it is necessary to analyze nuclear units. We

know from atomic physics that atomic units are deﬁned

from fundamental constants and determine the solution

of the hydrogen problem with the Schr¨odinger equation.

Let me just remind the reader that an exponentially decaying wavefunction \psi(r) = \rho_0^{1/2} \exp(-\alpha r) leads to the following characteristic equation and the solution for α:

\left[-\frac{\hbar^2 \alpha^2}{2m} + \frac{2\hbar^2 \alpha}{2mr} - \frac{e^2}{4\pi\epsilon_0 r}\right] \psi(r) = E\, \psi(r)
\frac{2\hbar^2 \alpha}{2mr} - \frac{e^2}{4\pi\epsilon_0 r} = 0 \;\rightarrow\; \alpha = \frac{m e^2}{4\pi\epsilon_0 \hbar^2}    (37)
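The characteristic equation can be verified numerically in atomic units; a short finite-difference sketch (the grid step is an assumption of the check, not part of the model):

```python
import math

def local_energy(alpha, r, h=1e-4):
    """Check of Eq. (37) in atomic units (hbar = m = e^2/(4 pi eps0) = 1):
    apply H = -(1/2) Laplacian - 1/r to psi(r) = exp(-alpha r) by finite
    differences and divide by psi. For the correct alpha this 'local
    energy' is the same constant E at every radius r."""
    psi = lambda x: math.exp(-alpha * x)
    d1 = (psi(r + h) - psi(r - h)) / (2.0 * h)
    d2 = (psi(r + h) - 2.0 * psi(r) + psi(r - h)) / h ** 2
    lap = d2 + 2.0 * d1 / r            # radial Laplacian of a spherical function
    return (-0.5 * lap - psi(r) / r) / psi(r)

# alpha = 1 (i.e. 1/a0) gives E = -1/2 Hartree independent of r:
print([round(local_energy(1.0, r), 6) for r in (0.5, 1.0, 2.0)])
```

For any other decay constant the 1/r terms fail to cancel and the local energy depends on r, which is exactly the content of the characteristic equation.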


If a similar solution exists for the neutron, then the de-

cay constant must be diﬀerent. We account for this hy-

pothesis by rescaling the Planck constant in a nuclear

environment so that:

\hbar_n = x\hbar \qquad \alpha_n = \frac{1.89 \times 10^{10}\,\mathrm{m}^{-1}}{x^2} \qquad E_n = \frac{E_H}{x^2}    (38)

The Schr¨odinger equation in a nuclear environment then

reads:

\left[-\frac{1}{2} \nabla^2 - \frac{1}{r}\right] \psi_n(r) = E_n\, \psi_n(r)    (39)

The total energy is the sum of the positive energy of the electrostatic field and the negative energy of the eigenvalue; it is known to be 785 keV. It depends, ultimately, on only two values: the radius of the neutron, which is known from scattering experiments, and the scale x. With a0 the Bohr radius we get:

W_n = \frac{e^2}{4\pi\epsilon_0 a_0} \left(\frac{a_0}{r_n} - \frac{1}{2x^2}\right) = \left(\frac{a_0}{r_n} - \frac{1}{2x^2}\right) \times 27.211\,\mathrm{eV}    (40)

The scale x can therefore be calculated from experimental values. With rn = 1.38 fm and Wn = 785 keV we get for the scale x and the energy scale En:

x = \frac{1}{18779^{1/2}} = \alpha_f \qquad E_n = 511\,\mathrm{keV} = m_e c^2    (41)
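Inverting Eq. (40) for x with standard values of a0 and EH reproduces numbers close to those quoted; small differences reflect the exact constants used:

```python
import math

a0 = 5.29177210903e-11    # Bohr radius, m
E_H = 27.211386           # Hartree energy, eV
r_n = 1.38e-15            # neutron radius, m
W_n = 785.0e3             # neutron excess energy, eV

# Invert Eq. (40):  W_n = (a0/r_n - 1/(2 x^2)) * E_H   for the scale x.
inv_2x2 = a0 / r_n - W_n / E_H
x = 1.0 / math.sqrt(2.0 * inv_2x2)

print(1.0 / x)              # close to 1/alpha_f = 137
print(E_H / x ** 2 / 1e3)   # energy scale in keV, close to m_e c^2 = 511 keV
```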

Both of these values are very fundamental. In the standard model the fine structure constant αf describes the difference in coupling between nuclear forces and electrostatic forces, while the rest energy of the electron En is one of the fundamental constants in high energy physics. At present, we do not have a clear indication of the significance of this finding. It is quite improbable that this result should be a mere coincidence. After all, the identity relies on two experimental values, the radius of the

neutron and the mass of the neutron. Had these values

been diﬀerent, the ﬁne structure constant or the rest en-

ergy of the electron would not have been the result of

this derivation. We expect that a nuclear model on the

basis of high-density electrons, which we also tentatively

assume to be an element of physics in the 21st century,

will be able to answer this important question.

B. Magic nuclei

It is known that certain numbers of nucleons, assumed

to be protons and neutrons in the conventional model,

lead to increased stability of atomic nuclei. If high-

density electrons are the glue that holds protons together,

then protons in a nucleus will be in a regular arrange-

ment. In this case the problem of nuclear organization

becomes to a first approximation a problem of three-dimensional geometry. Starting from a single proton, and adding one

proton after the other, always under the condition that

FIG. 7: Closed shells of atomic nuclei for up to 136 protons.

The shell model is only based on geometry and does not in-

clude detailed interactions at this point.

the distances between protons are constant, will auto-

matically lead to a shell model of atomic nuclei, where a

certain number of protons corresponds to closed shells. In

Fig. 7 we show the ﬁrst seven closed shells. In particular

the ﬁrst four, with 4, 16, 28, and 40 protons, correspond

to magic nuclei in nuclear physics. Larger shells do not necessarily correspond, but it has to be considered that we do not

yet have a comprehensive model of interactions within

an atomic nucleus, which could account for the observed

nuclear masses. Compared to DFT the additional com-

plication within a nucleus is the relatively large volume

of protons, which probably cannot be taken into account

with a model of point charges, and the unknown role

of nuclear forces. Also, it is quite unclear at present if

the electrostatic interactions within the nucleus have the

same intensity as in a vacuum, how screening works, and

what role the energy of electrostatic ﬁelds will play in

the overall picture. The ﬁrst steps towards such a model

are therefore highly tentative and it is to be expected

that a fully quantitative model of atomic nuclei is still

a long time in the future. However, such a model could

provide a uniﬁed basis for discussions in nuclear physics,

which connects it seamlessly to other ﬁelds of Physics:

something, which is manifestly not the case at present.

VI. SUMMARY

In this presentation I have emphasized six results ob-

tained within a theoretical framework which seamlessly

combines wave mechanics and density functional theory.

These six results are:

1. The uncertainty relations are violated by up to two

orders of magnitude in thousands of experiments

every single day.

2. Wavefunctions themselves are not real, but their

components, mass and spin densities, are real.

3. Rotations in space generate complex numbers,

which are not described in a Gibbs vector algebra.


4. Double slit interference experiments show two fea-

tures: a discrete interaction spectrum with the slit

system and a thermal broadening due to environ-

mental conditions.

5. The ﬁne structure constant and the electron rest

mass describe the nuclear energy scale.

6. Closed shell nuclei are due to the geometrical ar-

rangement of nuclear protons.

On a personal note I think that fundamental Physics

has entered a new stage of development, after the near

inertia in the last thirty years. This is largely the merit of

scientists working outside their core disciplines, motivated by nothing but curiosity about how things really work. Finally, future developments in physics, based on

this framework, could include the following elements:

•A non-relativistic theory of quantum electrody-

namics making use of the constraint found for elec-

tromagnetic ﬁelds that the intensity as well as the

frequency of the field must be linear in the energy of emission or absorption.

•A linear scaling many-electron theory for con-

densed matter making use of the result that many

body eﬀects can be encoded in a chiral optical po-

tential.

•A density functional theory of atomic nuclei using

a high-density phase of electrons in the nuclear en-

vironment.

Acknowledgements

Helpful discussions with Krisztian Palotas are grate-

fully acknowledged. The work was supported by the

Royal Society London and the Canadian Institute for Ad-

vanced Research (CIFAR).

[1] WA Hofer, Physica A, 178-196 (1998).

[2] WA Hofer, Front. Phys. 7, 504-508 (2012).

[3] M Ternes et al., Science 319, 1066-1069 (2008).

[4] G. Binnig and H. Rohrer, Surf. Sci. 126, 236-244 (1983).

[5] VM Hallmark et al., Phys. Rev. Lett. 59, 2879-2882

(1987).

[6] PT Wouda et al., Surf. Sci. 359, 17-22 (1996).

[7] MF Crommie, CP Lutz, DM Eigler, Science 262, 218-220

(1993).

[8] M Bode, Rep. Progr. Phys. 66, 523- (2003).

[9] BC Stipe, MA Rezaei, W Ho, Phys. Rev. Lett. 82, 1724-

1727 (1999).

[10] KR Harikumar et al., Nat. Chem. 3, 400-408 (2011).

[11] CF Hirjibehedin et al., Science 317, 1199-1202 (2007).

[12] H Gawronski, M Mehldorn, K Morgenstern, Science 319,

930-933 (2008).

[13] M Ozawa, Phys. Rev. Lett. 60, 385-388 (1988).

[14] WA Hofer, Front. Phys. 7, 218-222 (2012).

[15] P Hohenberg, W Kohn, Phys. Rev. B136, 864-871 (1964).

[16] WA Hofer, Found. Phys. 41, 754-791 (2011).

[17] C Doran, A Lasenby, Geometric Algebra for Physicists,

Cambridge University Press, Cambridge (2002).

[18] M Levy, JP Perdew, V Sahni, Phys. Rev. A 30, 2745-

2748 (1984).

[19] P Ehrenfest, Z. Phys. 45, 455-457 (1927).

[20] W Gerlach, O Stern, Z. Phys. 9, 353-355 (1927).

[21] RP Feynman, The Feynman Lectures Vol. III, pp. 1-1,

Addison Wesley, Reading, Mass. (1965).

[22] G Groessing, S Fussy, J Mesa Pascasio, H. Schwabl,

Physica A 389, 4473-4484 (2010).

[23] S. Kocsis et al., Science 322, 1170-1173 (2011).

[24] W. Duane, Proc. Nat. Acad. Sci. 9, 158 (1923).

[25] A Lande, From Dualism to Unity in Quantum Physics,

Cambridge University Press, Cambridge UK (1960).

[26] M Arndt et al., Nature 401, 680-683 (1999).

[27] A Aspect, Nature 398, 189-190 (1999).

[28] JS Bell, Physics 1, 195-199 (1964).

[29] F Clauser, MA Horne, A Shimony, RA Holt, Phys. Rev.

Lett. 23, 880-883 (1969).

[30] Littauer et al., Phys. Rev. Lett. 7, 144-147 (1961).