
Wavefunctions and dimensional analysis

Jean Louis Van Belle, Drs, MAEc, BAEc, BPhil

8 April 2021 (revised on 1 October 2022) [i]

Email: jeanlouisvanbelle@outlook.com

Contents

Introduction
Cyclical functions and complex numbers
Derivations of cyclical functions
Dimensional analysis
Real-valued wave equations
Quantum-mechanical wave equations
    The quantum-mechanical wavefunction
    Dimensional analysis of Schrödinger’s wave equation
    The trivial solution to Schrödinger’s wave equation: Schrödinger’s electron
    The acceleration vector
Conclusion
Annex: musing and exercises
    Schrödinger’s electron
    Radians and measurement units
    The arrow of time
    Solving the dimensions of the wave equation once again
    What is relative and absolute?
    The nuclear oscillation and quaternion math

[i] We took this paper offline for a while. Its style is very loose and what we write in it overlaps with lots of other papers. However, we think our dimensional analysis of Schrödinger’s wave equation and the wavefunction itself is rather solid and a good introduction to some of the more advanced analysis we do in papers such as, say, our presentation of quaternion math or our paper on scattering matrices and other more high-brow quantum-mathematical topics. We changed the title of the original paper (The Language of Math), which was rather pretentious. The new title is a better flag to cover the load. Still, we feel the flow of this paper is quite sloppy. Also, there is definitely some repetition and overlap between various sections. However, we think some repetition is not bad, and we also do not have sufficient time or energy to substantially rewrite it.


Introduction

In the epilogue to his Lectures, Feynman writes the following:

“The main purpose of my teaching has not been to prepare you for some examination—it was not even to prepare you to serve industry or the military. I wanted most to give you some appreciation of the wonderful world and the physicist’s way of looking at it, which, I believe, is a major part of the true culture of modern times. (There are probably professors of other subjects who would object, but they are completely wrong.) Perhaps you will not only have some appreciation of this culture; it is even possible that you may want to join in the greatest adventure that the human mind has ever begun.”

This paper – which aims to offer a very basic introduction to the mathematical concepts that you will need, and how they relate to (quantum) physics – may or may not encourage you to effectively start exploring things yourself, so it can become part of your culture too!

The audience of this paper is the smart K-12 level student. As such, this paper is probably most representative of what I refer to as my K-12 level physics project.

Cyclical functions and complex numbers

Where would I start when explaining the basics of math to a K-12 level student? I would probably remind him of the basic geometry of a circle. The Pythagorean theorem and all that. A K-12 level student knows a sine from a cosine, but does he appreciate that these two cyclical functions are, in fact, the same⎯except for a phase shift of 90 degrees (π/2 radians)? I would ask him to use a graphing tool (e.g., Desmos) to make sure he sees and, thereby, understands why the two functions are basically the same. We write:

cosθ = sin(θ + π/2)
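The student can verify this identity numerically as well as graphically. A quick illustrative Python check (not part of the original paper):

```python
import math

# cos(theta) equals sin(theta + pi/2) for every angle theta:
# the two cyclical functions differ only by a 90-degree phase shift.
for theta in [0.0, 0.3, 1.0, 2.5, -1.2]:
    assert abs(math.cos(theta) - math.sin(theta + math.pi / 2)) < 1e-12
```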

And then they are not the same, of course: a phase shift is a phase shift. So, I would ask him to draw a circle and point out the sine and cosine of an angle. I would then try to make him appreciate that the sine is just a rotation of the cosine over 90 degrees, and tell him we can represent a physical rotation of some point by a mathematical operator. That operator is referred to as the imaginary unit, but it is anything but imaginary. The imaginary unit (i) is, effectively, an operator that does what it does: it rotates axes, or vectors⎯anything that has a direction. So, yes, we can think of cosθ as a vector too (a vector with length cosθ pointing in the positive or negative x-direction), and rotate it by 90 degrees to get sinθ. We write vectors in boldface, so let us do that here:

sinθ = i·cosθ

You will probably not have seen a sinθ and cosθ written in boldface before, but think of it: we can effectively think of them as point or line vectors, like a or b. And, yes, think about the power of that little symbol in front of cosθ: i times something. It is not a multiplication of, say, three times five. That is flat one-dimensional logic. Line logic. We travel left and right of one axis when we add, subtract, multiply or divide real numbers. The imaginary unit takes us from the line to a two-dimensional plane.

Now, we might think of dividing i into smaller units: radians. The imaginary unit (i) corresponds to π/2 radians, right? So, 1 rad would, then, correspond to 2i/π or whatever else we use to measure an angle. But, no, then we would not need this new imaginary unit. We do need it, however. We cannot substitute it for π/2 or some other angular unit. Why not?
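The rotation interpretation of i is easy to play with in code, since most languages support complex numbers directly. A small sketch (Python writes the imaginary unit as `1j`); this example is mine, not the paper's:

```python
# Multiplying by i rotates a two-dimensional 'vector' by 90 degrees
# (pi/2 radians) counterclockwise, without changing its length.
z = 3 + 4j                         # a vector of length 5
rotated = 1j * z                   # one rotation by i
assert rotated == -4 + 3j          # (3, 4) rotated to (-4, 3)
assert abs(rotated) == abs(z)      # the length is preserved
# Four successive rotations bring us back to where we started (i**4 = 1),
# which is why identities like '4i = 0' would wreck ordinary arithmetic.
assert 1j * 1j * 1j * 1j * z == z
```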

The student can only appreciate this by thinking some more about the idea of an operation. At this point, I would probably ask him to explain to me the difference between the vector F and its magnitude F, and ask him how one calculates the magnitude of a vector. He should remember this from one of his K-12 classes, and it would help him to understand the geometry of the Pythagorean theorem and use the sine and cosine function to calculate lengths. And then I would ask him what this weird number π, or π/2, actually means, and how it differs from i or 2i. Can we get rid of π or π/2 by substituting them for 2i or i?

We can – and probably should – get rid of the old degrees when talking about angles, because that is of no use, really. It goes back to the base-60 numerical system of the Mesopotamians, which also informs our system of dividing hours into 60 minutes and minutes into 60 seconds. But what about π? Can we get rid of it by defining some other system of numbers? I would let him think about that, and I would hope he would find the answer for himself: π is a natural unit for expressing angles because of the 2πr and πr² formulas for the circumference and surface area of a circle, respectively. In other words, we do need it and we cannot get rid of it.

We cannot redefine the unit for angles in terms of a fraction or multiple of i: +i, −2i, i/100 or, as mentioned above, 2i/π. Why not? The 2πr and πr² formulas would then be written as 4ir and 2ir², and the reduced form of Planck’s quantum of action ħ = h/2π would then be written as ħ = h/4i. That is OK, isn’t it?

No. It is not OK. We get in trouble because 4 rotations by i bring us back to the zero point and, hence, we would have weird identities such as 4i = 0 and, therefore, 4ir = 0, and a lot of our calculations would stop making sense. Hence, it is preferable to keep π to denote a length and i to denote a rotation: we cannot get rid of these two ‘numbers’. The i and π symbols both serve a purpose: π is an arc length, and i is a rotational operator. The two symbols cannot be mixed or substituted for each other. [ii]

Now we must go one step further. I must try to explain Euler’s function, and the mathematical properties of Euler’s number (e), and how that number also relates to the circle, and talk about how weird that all is: we have two so-called irrational numbers (π and e), both numbers with an infinite number of decimals (that is why we call them irrational: we cannot reduce them to a ratio), but they are used in expressions which relate very finite distances, surfaces, and – when introducing rotations in the two other planes that make up 3D space [iii] – volumes to each other.

[ii] Let us make our first deep philosophical or ontological remark here: this π and i symbolism is rooted in Occam’s Razor Principle. That principle says each and every symbol must correspond to a physical reality, but in the most parsimonious way possible.

[iii] If we have an xy-plane, then we must think of the yz- and xz-planes too. That is why the 19th-century mathematician William Rowan Hamilton invented quaternion algebra. We think his own intuition about it – that, as he put it, “we must admit, in some sense, a fourth dimension of space for the purpose of calculating with triples” – is not to the point:


On to the next. One of the easiest and, at the same time, most difficult things to understand is this: the e^(−iπ) = −1 and e^(+iπ) = −1 expressions do not have the same meaning. When going from 1 to −1 in the two-dimensional number world, it matters how you get there⎯as illustrated below. [iv] Indeed, complex numbers are, basically, two-dimensional numbers, so we should also write 1 and −1 as vectors or complex numbers: 1 = (1, 0) and −1 = (−1, 0). We think some physicists made big mistakes because they did not appreciate the multi-dimensional nature of the problem that they were looking at!

Figure 1: e^(+iπ) versus e^(−iπ): two different rotations from 1 to −1

And then I would have to start talking about derivatives and integrals, and I would probably introduce the concept of linear and local or circular waves (linear and orbital oscillations), and talk about motion, and frequencies, and how we could measure both time as well as distance in radians or other natural units. And I would show how each and every mathematical concept can be grounded in our intuitive understanding of right/left, up/down, back/front, and our intuitive understanding of time going in one direction only.

Would I have lost them by then? Maybe. Maybe not. Did I lose you, just now?

Derivations of cyclical functions

Cyclical functions have a property that is very handy in both classical as well as quantum mechanics: their derivative is a cyclical function too. In fact, after two or more derivations, one may or may not get the same function again. Let us show this first for the sine and cosine components of the wavefunction. Because a lot of physics is really about oscillations, we will immediately introduce the wavefunction argument, which is usually written as θ = ω·t.

θ is usually referred to as the phase and – while the clock ticks – the phase goes around and around with time: θ = ω·t. Imagine it going from 0 to π/2, and then to π and then back to where it started: 2π.

[iii, continued] we still talk 3D space (that is all we can imagine), and the three imaginary units which he introduced (i, j and k) reflect 3D space rather than some four-dimensional reality. We will say some more about this in the Annex to this paper. To be concise but complete, we should, of course, mention time: you will know we use time as a sort of fourth dimension in four-vector algebra. But it is and remains a separate beast altogether.

[iv] This may seem self-evident, but it sounds like horror to many mathematicians (and too many physicists too). The quantum-mechanical argument is technical, and so I will not reproduce it here. I encourage the reader to glance through it, though. See: Euler’s Wavefunction: The Double Life of −1 and Feynman’s Time Machine. If you are an amateur physicist, you should be excited: it is, effectively, the secret key to unlocking the so-called mystery of quantum mechanics. Remember Aquinas’ warning: quia parvus error in principio magnus est in fine. A small error in the beginning can lead to great errors in the conclusions, and we think of this as a rather serious error in the beginning of many standard physics textbooks! It gives rise to so-called 720-degree symmetries, which do not exist in real (physical) space. Of course, we can define mathematical spaces in which everything is possible. A mathematical object which has 720-degree symmetry is likely to be a rotation within a rotation. Jason Hise visualizes such objects nicely.


And then it goes around for another cycle: 4π, 6π, 8π, etcetera. That is why we write the frequency as an angular or radial frequency. Indeed, f is the frequency that you are used to: it is the inverse of the cycle time T = 1/f. [v] We might say that the use of the angular frequency is a way to express our time unit in radians: ω = 2π·f and, hence, θ = ω·t = 2π·f·t. Note that this expression shows you that the phase θ has no physical dimension: the second in the time variable and the 1/s dimension of f cancel each other.

We should not dwell on this. Here are the derivatives of the sine and cosine functions of time:

d[sin(ω·t)]/dt = ω·cos(ω·t) and d[cos(ω·t)]/dt = −ω·sin(ω·t)
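The rule d[sin(ω·t)]/dt = ω·cos(ω·t) is easy to check with a central finite difference. A small illustrative sketch (the frequency value is arbitrary, chosen by me for the example):

```python
import math

# Check d[sin(w*t)]/dt = w*cos(w*t) numerically with a central difference.
w = 2 * math.pi * 3.0          # angular frequency for f = 3 Hz
t, h = 0.4, 1e-6               # sample time and step size
numeric = (math.sin(w * (t + h)) - math.sin(w * (t - h))) / (2 * h)
analytic = w * math.cos(w * t)
assert abs(numeric - analytic) < 1e-4
```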

How does this work for Euler’s function? We vaguely introduced Euler’s function [vi] above already, but let us do it explicitly now:

Figure 2: Euler’s formula: e^(iθ) = cosθ + i·sinθ

We will need to take the derivative of Euler’s function. That is not difficult. The rather special thing we should note is that the natural exponential function e^x is its own derivative: d(e^x)/dx = e^x. When we move from the real function e^x to Euler’s function, the imaginary unit works just like any other coefficient in front of a function: d(e^(ix))/dx = e^(ix)·d(i·x)/dx = i·e^(ix). You can work it out:

d(e^(ix))/dx = d(cosx + i·sinx)/dx = −sinx + i·cosx = i·(cosx + i·sinx) = i·e^(ix)

You can google other ways to get that derivation [vii] but, again, we cannot dwell on this. We must move on!
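The same finite-difference trick works for Euler's function, using complex arithmetic. An illustrative sketch of mine, not from the paper:

```python
import cmath

# Check d[e^(ix)]/dx = i*e^(ix) numerically: the imaginary unit behaves
# like any other coefficient when differentiating the exponential.
x, h = 0.7, 1e-6
numeric = (cmath.exp(1j * (x + h)) - cmath.exp(1j * (x - h))) / (2 * h)
analytic = 1j * cmath.exp(1j * x)
assert abs(numeric - analytic) < 1e-9
```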

[v] Do not be afraid to understand this with easy examples: if the cycle time is 3 seconds, for example, then the frequency will be equal to f = 1/T = 1/3 hertz. Hertz (Hz) is just a very honorable term for 1/s. A frequency is a number expressed per second. The frequency is, obviously, inversely proportional to the cycle time: high frequencies make for very short cycle times.

[vi] The literature will talk about Euler’s formula rather than Euler’s function. Euler invented many great things and, hence, Euler’s function often refers to one of his other formulas. We will let the reader google this.

[vii] See the Wikipedia article or Khan Academy on it, for example.


Dimensional analysis

Preliminary note (1 October 2022): When rereading the section that follows, we feel it does not read easily. We should probably have taken another example of equations to play with dimensional analysis. In fact, the dimensional analysis of Schrödinger’s wave equation is the more interesting bit in this paper, and that comes only later. However, it is what it is, and we will not rewrite or restructure this paper. If the reader is bored, he can skip and go straight to the next section (real-valued wavefunctions).

Dimensional analysis is probably one of the easiest ways to get an intuitive understanding of equations, and also a very easy way to quickly check if some new high-brow equation in some letter to a journal (or in an article) makes sense. [In case you wonder if non-sensical equations ever make it to high-brow journals, it is, sadly, the case: the mathematization of physics has, unfortunately, led to an ‘anything goes’ attitude and a desire to grab attention no matter what it takes.]

Let us give an example. Below, we have two equations which model an electromagnetic and a nuclear oscillation, respectively. To be precise, the equations give us the (orbital) energy per unit mass. Do not worry if you do not understand the terms of the equations: one is related to the kinetic energy, and the other to the potential energy⎯but, as mentioned, do not worry about that, right now. Just check the physical dimensions.

The C and N subscripts stand for Coulomb and nuclear, respectively, so the equations above imply there is such a thing as Coulomb and nuclear mass, respectively. But let us focus on the dimensional analysis:

Energy is expressed in joule, which is newton-meter. Velocity is something in meter per second. Mass is the inertia to a change in motion caused by a force, so Newton’s force law [viii] (F = m·a) tells us that 1 kg = 1 N·s²/m. So, for E/m, we have (N·m)/(N·s²/m) = m²/s². That is fine, and you should note that it is the dimension not only of v²/2, but also of c²: if Einstein’s mass-energy equivalence relation (E = mc²) is correct, then E/m should, somehow, be equal to c². We might come back to that. [ix] Let us, as for now,

[viii] The relativistically correct view of Newton’s force law defines a force as that which causes a change in momentum, which is the product of velocity and mass, so we should write F = dp/dt, but the dimensions work out the same.
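The unit bookkeeping behind 1 kg = 1 N·s²/m and [E/m] = m²/s² can be automated in a few lines: represent a dimension as a tuple of exponents of (newton, meter, second), and multiply or divide quantities by adding or subtracting exponents. This is an illustrative sketch of mine, not standard library functionality:

```python
# Represent a physical dimension as exponents of (newton, meter, second):
# multiplying quantities adds the exponents, dividing subtracts them.
def mul(a, b):
    return tuple(x + y for x, y in zip(a, b))

def div(a, b):
    return tuple(x - y for x, y in zip(a, b))

newton, meter, second = (1, 0, 0), (0, 1, 0), (0, 0, 1)
joule = mul(newton, meter)                         # energy: N*m
kg = div(mul(newton, mul(second, second)), meter)  # 1 kg = 1 N*s^2/m (F = m*a)
E_over_m = div(joule, kg)                          # (N*m)/(N*s^2/m)
assert E_over_m == (0, 2, -2)                      # m^2/s^2: the dimension of c^2
```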

[ix] E/m is not equal to c² when considering gravitational orbitals. The orbital energy equation follows from Kepler’s laws for the motion of the planets:

The kinetic and potential energy (per unit mass) add up to zero here, instead of c² (nuclear and electromagnetic orbitals), which is why a geometric approach to gravity makes eminent sense: massive objects simply follow a geodesic in space, and there is no (gravitational) force in such a geometric approach. We can compare force magnitudes by defining a standard parameter. In practice, this means using the same mass – and charge! – in the equations (we take the electron in the equation below) and, when considering the nuclear force, equating r to a:


just look at the physical dimension of the second term. The physical dimension of the Coulomb constant (ke = 1/4πε0 [x]) is N·m²·C⁻², so we combine that with the C² from the qe² factor, the N·s²/m for the mass and, let us not forget, the m⁻¹ for the 1/r factor:

[ke·qe²/(m·r)] = (N·m²·C⁻²)·(C²)·(m/N·s²)·(1/m) = m²/s²

You can now see why we need a range parameter (a) for the nuclear force: a 1/r² potential may or may not exist (a hot topic for discussion in physics), but without a range parameter (expressed as a distance), the equation would not make sense at all! [xi]
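The same exponent bookkeeping shows why the range parameter is dimensionally necessary: with the coulomb added as a fourth base unit, the Coulomb term reduces to m²/s², while a bare 1/r² term would come out one meter short. An illustrative sketch of mine:

```python
# Dimensions as exponents of (newton, meter, second, coulomb).
def mul(a, b):
    return tuple(x + y for x, y in zip(a, b))

def div(a, b):
    return tuple(x - y for x, y in zip(a, b))

N, m, s, C = (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
ke = div(mul(N, mul(m, m)), mul(C, C))    # Coulomb constant: N*m^2/C^2
q2 = mul(C, C)                            # q_e squared
mass = div(mul(N, mul(s, s)), m)          # 1 kg = 1 N*s^2/m

# The ke*q^2/(mass*r) term reduces to m^2/s^2, as required for E/m.
coulomb_term = div(div(mul(ke, q2), mass), m)
assert coulomb_term == (0, 2, -2, 0)

# A 1/r^2 term would come out one meter short (m/s^2), which is why a
# range parameter a (a distance) is needed: a/r^2 restores the meter.
r2_term = div(coulomb_term, m)
assert r2_term == (0, 1, -2, 0)
```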

The nuclear potential is weird because, at first, it does not seem to respect the so-called inverse-square law. This force, therefore, does not seem to respect the energy conservation principle. We can fix this by adding a unit vector n (same direction as the force but with magnitude 1) in the nuclear potential formula⎯so we should probably write something like this:

The vector dot product n·a = n·a·cosθ = a·cosθ (the cosθ factor should be positive, so n must be suitably defined so as to ensure −π/2 < θ < π/2 [xii]) introduces a spatial asymmetry (think of an oblate spheroid instead of a sphere here), which should ensure energy is conserved in the absence of an inverse-square law.

Is this an ad hoc solution? Yes, so you might want to think about a better theory. [xiii] Perhaps we should use a vector cross-product n×a = ‖n‖·‖a‖·sinθ·n = a·sinθ·n? In light of our sinθ = i·cosθ equation, that should amount to the same (think of n·sinθ as a vector, with sinθ modulating the magnitude of the vector n), but

[ix, continued] Hence, the force of gravity – if considered a force – is about 10⁴² times weaker than the two forces we know (electromagnetic and nuclear). What if we compare the electromagnetic and nuclear force? We get this:

We will let you think about this result. The nature of the two forces is very different, of course. However, because we defined the range parameter here as the distance r = a for which the magnitude of the two forces (whose direction is opposite) is the same, we get unity for their ratio.

[x] Check your old physics course or just google. ε0 is, of course, the electric constant.

[xi] This is why Yukawa’s nuclear potential function (a 1/r instead of a 1/r² potential) does not make sense. It is just one example of a historical blunder: no scientist ever wondered why the physical dimensions of Yukawa’s equation did not make sense. Why? Probably because no one likes to challenge a Nobel Prize award.

[xii] Defining a such that it broadly points in the same direction as the line along which we want to measure the force F should take care of this. Or perhaps we should introduce a cosθ or cos²θ factor. The point is this: we need to integrate over a volume and ensure that the nuclear potential respects the energy conservation law.

[xiii] We think we offer one in the Annex to this paper.


you may want to practice your newly acquired operator skills here and, for example, think about the identity that follows from sinθ = i·cosθ: −i·sinθ = (1/i)·sinθ = −i·(i·cosθ) = −i²·cosθ = cosθ.

Real-valued wave equations

An exception to the general principle that both sides of a physical equation must be expressed in the same SI units (or need to reduce to the same (combination of) SI units) may be wave equations. Wave equations are mathematical conditions: they tell us what shape a wave (the wavefunction) must have in order to be possible at all. In short, it boils down to this: the wave is that which perpetuates itself in space, and the wave equation is a clever way of expressing all of the physical laws that apply to it.

The derivation of a wave equation is a lot of work, but you usually get a delightfully elegant result⎯but elegant is not necessarily immediately intelligible. On the contrary! It takes a lot of past knowledge and practice to appreciate what elegant equations (including equations such as E = mc² or ħ = E·T = p·λ) actually mean. [xiv] Take a look at Feynman’s derivation of the wave equation for sound waves, for example. The derivation itself consists of two dense pages, but it takes a lot of previously acquired knowledge to understand each and every step of it. Anyway, the result is this wave equation:

∂²ψ/∂t² = κ·∂²ψ/∂x²

Feynman uses a different symbol for the wavefunction, but we intentionally use the same symbol (psi) as the one that is (mostly) used for the quantum-mechanical wavefunction, even if we do not have any complex numbers here. So, this wave equation is very beautiful, but it looks completely mysterious⎯at first, that is. Let us try to demystify it. First, note the equation relates a time derivative (a second-order derivative, to be precise) to a derivative with respect to a spatial direction (the second-order derivative with respect to x, to be precise). So, we have the 1/s² dimension of the ∂²/∂t² operator on the left side, and the 1/m² dimension of the ∂²/∂x² operator on the right side. And then we have a physical proportionality constant (κ) and, last but not least, the physical dimension of the wavefunction itself.

A physical proportionality constant is, basically, a mathematical proportionality coefficient but, unlike a purely mathematical proportionality constant, a physical proportionality constant has a physical dimension (some combination of SI units) which makes the dimensional analysis come out all right.

The wave equation, and its physical proportionality coefficient in particular, usually describes the relevant properties of the medium: that which makes wave propagation possible. In this case, the derivation shows that κ must be equal to:

κ = dP/dρ

[xiv] The second (set of) equation(s) is the Planck-Einstein law (E = h·f = h/T), which you may want to think of as the law that expresses the quantization of Nature. Planck’s quantum of action – whose physical dimension is force times distance times time – can, effectively, be expressed as either (i) energy (E) times a (cycle) time (T), or (ii) momentum (p) times a (wave)length (λ). The wavelength may be linear or non-linear (think of circular/elliptical orbitals here). Both the ħ = E·T and ħ = p·λ relations imply a small, non-finite space over which the physical action is expended. Planck's quantum of (physical) action, therefore, effectively quantizes space⎯and energy, momentum, and whatever other related physical variables. So, the quantization of space (or spacetime, if you want⎯but no one really knows what the latter term actually means) is a 'variable geometry' (I am using a term coined by a French President here).


What is this? P is the pressure, and ρ is the mass density of the gas (think of sound propagating in air or some other gas), so dP/dρ tells us how the pressure of the gas changes when its mass density changes. We cannot dwell on this, but you will probably accept that pressure is measured as force per unit area [xv] (N/m²), and that mass density must be measured in kg/m³, which is equivalent to (N·s²/m)/m³ = N·s²/m⁴. Hence, the physical dimension of κ might be (something like) this [xvi]:

[κ] = [dP/dρ] = (N/m²)/(N·s²/m⁴) = m²/s²

Strange, we get the physical dimension of a squared velocity once more. It must be coincidence, of course. Or not? Of course not! There is no such thing as coincidence in physics. One can easily show [xvii] that the speed of wave propagation is equal to the square root of κ:

c = √κ = √(dP/dρ)
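A traveling wave moving at that speed does satisfy the wave equation, and this can be checked numerically. An illustrative sketch of mine (the sample points and the wave shape sin(x − c·t) are my choices, not the paper's):

```python
import math

# A traveling wave psi(x, t) = sin(x - c*t) satisfies the wave equation
# d2(psi)/dt2 = c^2 * d2(psi)/dx2: check both sides with finite differences.
c = 343.0          # propagation speed, e.g. sound in air, in m/s

def psi(x, t):
    return math.sin(x - c * t)

x, t, h = 1.0, 0.002, 1e-5
d2_dt2 = (psi(x, t + h) - 2 * psi(x, t) + psi(x, t - h)) / h**2
d2_dx2 = (psi(x + h, t) - 2 * psi(x, t) + psi(x - h, t)) / h**2
assert abs(d2_dt2 - c**2 * d2_dx2) < 1e-2 * abs(d2_dt2)
```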

Let us see what is left to explain by writing this:

[∂²ψ/∂t²] = [κ]·[∂²ψ/∂x²] ⇔ [ψ]/s² = (m²/s²)·[ψ]/m²

We are fine! We do not need to associate a physical dimension to the wavefunction ψ. We could, if we would wish to do so (what about the dirac, or the einstein? [xviii]), but we do not have to, and so we will not! Note that the wavefunction is a pure mathematical function. It has no physical dimension: it projects the position and time variables x and t (which do have a physical dimension, of course: meter and second, respectively) onto a purely mathematical space.

On to the next. The quantum-mechanical wave equations. Note that I use a plural (equations) because there are several candidates (Schrödinger, Dirac, Klein-Gordon), and these candidates also look different depending on what it is that we are trying to model (electron orbitals, nuclear oscillations, two-state systems, etcetera).

Do not worry about it. We will try to guide you through and, remember, we are only talking about doing dimensional analysis right now, so you do not need to worry too much about what the equations actually represent. Nobody really knows anyway because Schrödinger did not leave any notes on his derivation. Feynman writes this about the origin of Schrödinger’s wave equation:

“Where did we get that? Nowhere. It is not possible to derive it from anything you know. It

[xv] Feynman refers to atm or bar, but you should always convert to SI units.

[xvi] Square brackets can mean many things, but here we use them as an instruction (think of it as another operator): take the physical dimension of the thing between the brackets.

[xvii] See the reference above (Feynman, Vol. I, Chapter 47).

[xviii] The einstein actually exists: just google it. The einstein is defined as one mole (6.022×10²³) of photons. As for the dirac, we initially thought there might be a separate nuclear charge⎯something different from the electric charge: a nucleon charge. We did a (s)crap paper on that. You may want to read it if you are interested in how trial and error might help you to make sense of things.


came out of the mind of Schrödinger, invented in his struggle to find an understanding of the experimental observations of the real world.” (Feynman, III-16-5)

We are sure the notes must be somewhere⎯in some unexplored archive, perhaps. If there are Holy Grails to be found in the history of physics, then these notes are surely one of them. [xix]

Quantum-mechanical wave equations

We introduced a real wave above: the sound wave, and we found that we could represent it by (or associate it with [xx]) a simple real-valued wavefunction which does not necessarily have to have any physical dimension. The wavefunction may be a mathematical function only. Of course, the function does depend on physical variables: position (x) and time (t), respectively. So, the soundwave function projects physical variables onto a purely mathematical space only: we associate each x and t with a purely mathematical value ψ(x, t). Let us not think about this too much, and just move on.

Now, I have a theory about the quantum-mechanical wavefunction: I think it does have some physical dimension! And so I want to test it by doing a dimensional analysis of wave equations. If the dimensions come out all right, I might be right, right? So let me present my theory first, and then I will present one or more wave equations and see if my theory makes sense. So let us do a sub-section on my theory, and then another one with the dimensional analysis of some wave equation.

The quantum-mechanical wavefunction

So I think the quantum-mechanical wavefunction may describe either the position (r = a·e^(−iθ)) of a pointlike charge on its orbit – in terms of its coordinates x = (x, 0) = (cosθ, 0) on the real axis and y = (0, y) = (0, sinθ) on the imaginary axis (y = i·x) – or, alternatively, the force F = Fx + Fy which keeps the pointlike charge in place (see Figure 3). The force is a centripetal force, so it must be proportional to −r.

Of course, the orbit may not be perfectly circular. In fact, it most likely is not. It can be elliptical, or have some other strange form. Perhaps it is chaotic, but then it must have some regularity because otherwise we would not be able to associate a regular frequency with it. In short, the amplitude a will itself be a function of x and t too! But, in a first approach, we will consider a to be some constant. To be precise, our ring current model tells us a must be equal to the Compton radius of the particle: a = ħ/mc.
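For the electron, a = ħ/mc is a number you can compute in one line. A back-of-the-envelope check of mine, using rounded CODATA values (the constants are not quoted in the paper itself):

```python
# Compute a = hbar/(m*c) for the electron, using rounded CODATA values.
hbar = 1.054571817e-34    # reduced Planck constant, J*s
m_e = 9.1093837015e-31    # electron mass, kg
c = 2.99792458e8          # speed of light, m/s

a = hbar / (m_e * c)      # the (reduced) Compton radius
assert abs(a - 3.8616e-13) < 1e-17   # about 0.386 picometer
```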

[xix] MIT published about everything they have about Feynman. Perhaps it is somewhere there. There is a book about a mysterious woman, who might have inspired Schrödinger, but I have not read it: it is on my to-read list, but that list is too long.

[xx] You can imagine philosophers spend quite some time debating such statements. We will not amuse ourselves with that. We are just trying to enlighten you a bit about the language of physics (and math). We do not want to get into ontological discussions. We do that in (some of) our more advanced papers (not K-12 level, that is).


Figure 3: The ring current model of an elementary matter-particle [xxi]

Of course, this is quite a mouthful, and we do not expect you to understand much at the moment. Here we are interested in physical dimensions only: we just want to give you a feeling of what keeps physicists busy (or what keeps me busy, at least). If I say the wavefunction describes a position vector, then its physical dimension must be expressed in distance units: meter, that is. If I say it is a force, then its dimension must be newton or newton per unit area, perhaps (N/m²).

So, what is it? I do not know. Perhaps it is either, but the second possibility is appealing. Why? Wave equations usually also incorporate the energy conservation law, so I note that, if I were to express the force as a force per unit area (I find it hard to imagine a force grabbing onto an infinitesimally small point), then this force-per-unit-area dimension equals an energy density (energy/volume): N/m2 = N·m/m3 = J/m3.

Now, you can see that I also put a momentum vector p = mc in Figure 3. The initial point of the position

vector r = e−iθ is the zero point of the reference frame, while the initial point of the momentum vector p

(i.e., its point of application) coincides with the (moving) terminal point of the position vector. Denoting

vectors in the negative x- and y-direction as −x and −y respectively, we can now easily relate the two

components of the momentum vector to the x and y components of the position vector:

px = −iy and py = −ix

We can, therefore, effectively consider the wavefunction to describe the position r of the pointlike

xxi

This model is also referred to as the Zitterbewegung model. Erwin Schrödinger stumbled upon it, and identified

it as a trivial solution to Dirac’s wave equation. Zitter refers to a rapid trembling or shaking motion in German.

Dirac highlighted the significance of Schrödinger’s model at the occasion of his Nobel Prize lecture:

“It is found that an electron which seems to us to be moving slowly, must actually have a very high

frequency oscillatory motion of small amplitude superposed on the regular motion which appears to us.

As a result of this oscillatory motion, the velocity of the electron at any time equals the velocity of light.

This is a prediction which cannot be directly verified by experiment, since the frequency of the oscillatory

motion is so high, and its amplitude is so small. But one must believe in this consequence of the theory,

since other consequences of the theory which are inseparably bound up with this one, such as the law of

scattering of light by an electron, are confirmed by experiment.” (Paul A.M. Dirac, Theory of Electrons and

Positrons, Nobel Lecture, December 12, 1933)


charge, while its time derivative describes the momentum vector. We, therefore, write:

r = e−iθ

p = −ie−iθ

Note that it is tempting to write the imaginary unit as a vector quantity too: it has a magnitude (a rotation by 90 degrees or π/2 radians) and, as a rotation, a direction too (clockwise or counterclockwise). However, its direction depends on the plane of oscillation and we, therefore, write it in lowercase italics (i) rather than as a boldface vector (i). That is also the reason we wrote sinθ = i·cosθ with a scalar i: with a vector i, we would have a scalar on one side of the equation and a vector on the other (unless we read the product as a vector dot product, which yields a scalar).

Any case, we are pushing our theory already. Let us just calculate derivatives. They may or may not

come in handy later, and they will give you some firsthand experience of calculating derivatives of

complex-valued functions (which will be useful when we will be discussing quantum-mechanical

operators, which we may or may not do in this paper).

If we stick to the ‘position interpretation’ of the wavefunction, then we can take the derivative of the position vector r = a·e−iωt with respect to time to get the velocity vector v, whose magnitude is equal to c:

v = dr/dt = −iω·a·e−iωt ⇒ |v| = ω·a = (E/ħ)·(ħ/mc) = c

So, what about the dimensional analysis? The dimension of c is, of course, the velocity dimension m/s, so that is all right.
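The derivative can be checked numerically too. A minimal sketch of ours, using a finite difference; the constants are the standard CODATA values for the electron, and the time step dt is an assumption, chosen to be much smaller than one cycle:

```python
import cmath

hbar = 1.054571817e-34   # reduced Planck constant, J·s
m    = 9.1093837015e-31  # electron mass, kg
c    = 299792458.0       # speed of light, m/s

a     = hbar / (m * c)   # Compton radius, m
omega = m * c**2 / hbar  # angular frequency ω = E/ħ, rad/s

def r(t):
    # position of the pointlike charge: r = a·e^(-iωt)
    return a * cmath.exp(-1j * omega * t)

dt = 1e-28               # one cycle takes 2π/ω ≈ 8e-21 s, so this is tiny
v = (r(dt) - r(0)) / dt  # finite-difference approximation of dr/dt

# the orbital speed |v| = ω·a should come out as c
assert abs(abs(v) - c) / c < 1e-6
```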

Now, we also have the imaginary unit here, of course, which is nothing but a rotation operator. It also

rotates coordinate axes: the x-axis rotates onto the y-axis, and the y-axis becomes the new x-axis. So, if

the x-axis is position (m) and the y-axis is time (s), then we might associate i with the s/m dimension.

This sounds rather fuzzy, of course, but think about the directions of the electric and magnetic field

vectors: we can write the magnetic field vector B as B = iE/c or −iE/c (depending on your orientation

vis-à-vis these fields

xxii

), and the physical dimension of B is (N/C)(s/m): the dimension of E multiplied by

s/m. Hence, if we think of a multiplication with the imaginary unit as a multiplication by s/m, then the

m/s and s/m dimension cancel out! Just like the sound wave. We should be happy, right?

Maybe. Maybe not. We need an m/s dimension for a velocity, and so this does not quite cut it. What if we go back to the force idea and associate the N/m2 dimension with that a·e−iωt wavefunction? As we pointed out already, it is quite appealing because the N/m2 also amounts to an energy density

xxiii

, and Feynman

talked about wave equations as modeling energy diffusion.

xxiv

But let us just leave this idea with you, and

xxii

The plus or minus sign of i determines whether or not you have to change the direction of one of the two axes.

xxiii

We leave considerations of plus or minus signs for energies out for the time being. Those have to do with

conventions (or perspectives, cf. left- or right-hand rules) and, when talking energies, the point of reference for the

U = 0 point, which we can choose to be infinity or, preferably, the center of the reference frame. When considering

multiple charges orbiting around each other (e.g. when building a neutron model (n = p + e) or a deuteron model

(d = p + p + e), the center-of-mass (barycenter) of the various oscillations becomes an important point of reference.

xxiv

See Feynman’s Lectures, Vol. III, Chapter 16: “We can think of the [wave equation] as describing the diffusion of

a probability amplitude from one point to the next along the line. That is, if an electron has a certain amplitude to

be at one point, it will, a little time later, have some amplitude to be at neighboring points. In fact, the equation


talk about something else again.

Let us think about the v = c identity. How does that work? If we write the circumference of the orbital as λ and the cycle time as T, then it is rather obvious that λ will be equal to cT. Now, the Planck-Einstein relation tells us the cycle time will be equal to T = ħ/E.

xxv

Now, E = mc2: all of the mass of our particle

(that is what the elementary wavefunction represents) is in the oscillation of the pointlike charge.

xxvi

So

we can write:

ħ = ET = mc2T ⇔ ħ/m = c2T = c·cT = c·λ

This gives us a m2/s dimension for the ħ/m factor in Schrödinger’s equation: [ħ/m] = [c·λ] = (m/s)·m = m2/s. So, what is the deal here? We have more than just a dimensional analysis here: because we now used the Planck-Einstein and mass-energy equivalence relations, we know why the dimension of ħ/m must be m2/s. We write:

[ħ/m] = [c]·[λ] = (m/s)·m = m2/s
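The ħ/m = c·λ identity is easy to verify numerically. A minimal sketch of ours, with the standard CODATA values for the electron:

```python
hbar = 1.054571817e-34   # reduced Planck constant, J·s
m    = 9.1093837015e-31  # electron mass, kg
c    = 299792458.0       # speed of light, m/s

E   = m * c**2           # mass-energy equivalence
T   = hbar / E           # Planck-Einstein cycle time (in radians), s
lam = c * T              # distance covered in one radian of the cycle, m

# ħ/m = c²·T = c·λ, with dimension (m/s)·m = m²/s
assert abs(hbar / m - c * lam) < 1e-18
print(hbar / m)          # ≈ 1.16e-4 m²/s
```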

We are getting a bit ahead of ourselves here: we introduced the ħ/m factor – the physical

proportionality coefficient which we see in Schrödinger’s equation – but we did not introduce

Schrödinger’s wave equation yet! Let us do that now. It will be Schrödinger’s wave equation in the so-

called free space. Free space means we have no potential (electromagnetic or nuclear). Hence, our

pointlike charge (and the particle⎯or should we call it a wavicle?) can just move freely around. How

exactly does that work? Why does it do so? That is what the wave equation should tell us.

Dimensional analysis of Schrödinger’s wave equation

Schrödinger’s wave equation in free space is this

xxvii

:

∂ψ/∂t = i·(ħ/m)·∇2ψ

The ∇2 term looks frightening, but it is just the same as the ∂2/∂x2 derivative in our soundwave equation: it is the second-order derivative with respect to position. The only difference is that we are applying it to a position vector x or r = (x, y, z).

xxviii

looks something like the diffusion equations which we have used in Volume I. But there is one main difference: the

imaginary coefficient in front of the time derivative makes the behavior completely different from the ordinary

diffusion such as you would have for a gas spreading out along a thin tube. Ordinary diffusion gives rise to real

exponential solutions, whereas the solutions of [the wave equation] are complex waves.”

xxv

Because we are using the reduced Planck constant here, time is measured in radians here. We would usually write this: E = hf = h/T ⇔ T = h/E. We might have used a different symbol for a cycle time expressed in radians, such as τ, but that symbol is usually reserved to denote the lifetime of non-stable particles (transients).

xxvi

For electron orbitals in an atom, the energy to be used will be of the order of the Rydberg energy.

xxvii

You will usually see it with a ½ factor, but that has to do with the enigmatic concept of effective mass. We will not go into that. If you want to know, see our paper on the matter-wave.

xxviii

We could also use spherical coordinates (r, θ, φ) instead of Cartesian coordinates. In fact, that is what is usually

done when working with wave equations because it makes the calculations easier. However, we do not want to

confuse the reader too much here.


So, what about the dimensional analysis? Let us do it for the two sides of the equation

xxix

:

This looks good: we have the 1/s dimension on both sides, but what is that square of the physical dimension of ψ in the second equation? Good question. It does not make sense

xxx

:

So, it is all OK. Of course, you may wonder, do we not need some functional form here? We do not have one, yet⎯we only have that theory about circular motion, but so that is not a proof. In fact, the equation above suggests our wavefunction can have any physical dimension: whatever it is, Schrödinger’s equation will make sense, physically, that is. Its functional form and its physical dimension can be whatever, and our ‘bracket operator’ – [X] – will work: the ∂/∂t and ∂2/∂x2 will bring out the 1/s and 1/m2 dimension respectively, but whatever other dimension is in ψ (N, J, C, etcetera, or any combination thereof⎯literally whatever!) will still be there.

xxxi

So, in short, we can just write this:

What about [i] or [1/i]?

xxxii

What should we do with that? We are not sure. Perhaps you will find the deeper meaning of this one day, once you also understand how the amplitude a varies as a function of x (or r) and t! If you do, let us know. All vectors – v, a, p, F, etcetera – then become variables, and all depend on each other, in a very complicated set of equations: we are talking about a system here, in other words. All we wanted to do here is to explore our interpretation of the wavefunction mathematically⎯in a very first and, therefore, rather rough approach, and so we do that with a dimensional analysis.

Let us do something else with second-order derivatives. Let us calculate the second-order derivative with respect to time: this should give us an acceleration. Think of it as a warm-up for further thinking on that second-order derivative with respect to position (x or x).

xxix

Note that we use the square brackets as a sort of operator too here. They say this: take the physical dimension

of what is between the brackets. It can be quite confusing because square brackets are used as… Well… Just plain

square brackets in a few other places of this paper. We hope we do not confuse the reader too much!

xxx

This is one of the many places where we thought we should rewrite and restructure the paper a bit, but we do want you to think everything through for yourself, and so we will not explain where we went wrong and why and how we corrected ourselves: you should really work out this dimensional analysis for yourself!

xxxi

It has nothing to do with Dirac’s ‘bra-ket’ operators, of course!

xxxii

We can, effectively, move i to the other side of the equation, but it should not matter: 1/i = −i and, hence, 1/i

must do the same to the physical dimensions: it amounts to a multiplication by second/meter (s/m) which,

incidentally, is the physical dimension of 1/c.


The trivial solution to Schrödinger’s wave equation: Schrödinger’s electron

If you google proper textbooks (we think of Feynman’s Lectures here

xxxiii

), then you will find that

Schrödinger’s wave equation for an electron in free space (i.e., in the absence of any potential energy

term) has an extra 1/2 factor. It is written like this:

We must make a few remarks here about things which we believe to be the case. However, you must

make up your own mind about our remarks below:

⎯ Mainstream physicists consider this equation to be not relativistically correct. We think that is unfortunate and unjustified.

⎯ The reader should also note that the concept of the effective mass (meff) of an electron in this equation emerges from an analysis of the motion of an electron through a crystal lattice (or, to be very precise, its motion in a linear array or a line of atoms). However, Richard Feynman, and all academics who produced textbooks based on his, then rather shamelessly substitute the effective mass (meff) by me rather than by me/2. They do so by noting, without any explanation at all

xxxiv

, that the effective mass of an electron becomes the free-space mass of an electron outside of the lattice. We think this is unwarranted too.

⎯ The ring current model explains the ½ factor by distinguishing between (1) the effective mass of the

pointlike charge inside of the electron (while its rest mass is zero, it acquires a relativistic mass equal

to half of the total mass of the electron) and (2) the total (rest) mass of the electron, which consists

of two parts: the (kinetic) energy of the pointlike charge and the (potential) energy in the field that

sustains that motion.

xxxv

⎯ Of course, now you will say that Schrödinger’s equation works in the context of electron orbitals for

the hydrogen atom. Hence, that factor ½ or 2 times m must be correct, right? Wrong. Schrödinger’s

wave equation for electron orbitals with a potential term basically models the orbitals of two

electrons rather than just one: it abstracts away from spin and we, therefore, think the orbitals are

orbitals of electron pairs. That is why the factor 2 pops up and, yes, it is correct and must be there in

the context of that model (electron orbitals in a hydrogen atom). We think it is not correct to use it

in Schrödinger’s equation for electron motion in free space.

⎯ In short, in our not-so-humble view, Schrödinger’s wave equation for a charged particle in free

space, which he wrote down in 1926, which Feynman describes – with the hyperbole we all love – as

“the great historical moment marking the birth of the quantum mechanical description of matter

occurred when Schrödinger first wrote down his equation in 1926”, effectively reduces to this:

xxxiii

We must be precise: we refer to Feynman’s derivation of Schrödinger’s wave equation in a lattice.

xxxiv

See: Richard Feynman’s move from equation 16.12 to 16.13 in his Lecture on the dependence of amplitudes on

position.

xxxv

We make this point quite forcefully because it is one of the key differences between our interpretation of

Schrödinger’s Zitterbewegung electron and that of an author like David Hestenes.


We think this is the right wave equation because it produces a sensible dispersion relation: one that

does not lead to the dissipation of the particles that it is supposed to describe. The Nobel Prize

committee should have given Schrödinger all of the 1933 Nobel Prize, rather than splitting it half-half

between him and Paul Dirac. We are really not sure why physicists did not think of the Zitterbewegung of a charge or some ring current model and, therefore, dumped Schrödinger’s equation for something fancier. We talk about that in other papers, so we will not repeat ourselves here.

xxxvi

However, we still owe it to you to show that our electron model – and the wavefunction that comes with it – is, effectively, a very trivial solution to the wave equation ∂ψ/∂t = i·(ħ/m)·∇2ψ. So let us do that here now.

xxxvii

Our Zitterbewegung model of an electron yields the following elementary wavefunction for the electron:

ψ = a·e−i(E/ħ)t = (ħc/E)·e−i(E/ħ)t

It is just the general wavefunction ψ = a·e−iθ = a·e−iωt, but substituting ħ/mc = ħc/E for a, and ħc2/E for ħ/m. Now, we must prove that this is, indeed, a solution to Schrödinger’s equation above. We can prove this by writing it all out:

Now, this all looks very formidable, but it works out surprisingly well. Take the left side first:

∂ψ/∂t = −i·(E/ħ)·a·e−i(E/ħ)t

Now the right side: there we have time as a variable, but we want to take the (second-order) derivative with respect to position⎯so how does that work, then? We can write x as x = ct or as x = −ct, perhaps

xxxviii

, and, therefore, substitute t for x/c, perhaps? Let us see what we get:

xxxvi

See our paper on de Broglie’s matter-wave and, quite complementary, our papers on the history of quantum-

mechanical ideas or on the meaning of uncertainty.

xxxvii

This section got inserted in the October revision of this paper. We basically copied it from a section that comes

much later in this paper. We thought the reader might not get there and, hence, it is probably better to put this

key result here.

xxxviii

We must define a convention here for the plus/minus sign of the velocity vector, associating one or the other

with a clock- and counterclockwise rotation, respectively. It is a fine matter (because we must take the minus sign

of the i3 = −i factor into account), but we will not worry about it.


Bingo! All is OK. This is a very significant result. In fact, we talked about lost notes and the Holy Grails of

quantum physics. This might be it: we do not exclude that Schrödinger might have worked backwards.

Would it not be logical to first jot down a wavefunction, and then see in what wave equation it might fit

as a solution?
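The substitution check can also be done numerically. A minimal sketch of ours, assuming the free-space equation without the ½ factor (∂ψ/∂t = i·(ħ/m)·∂2ψ/∂x2, as argued above) and a finite-difference second derivative; the step size h is our own choice, picked to balance truncation against rounding error:

```python
import cmath

hbar = 1.054571817e-34   # reduced Planck constant, J·s
m    = 9.1093837015e-31  # electron mass, kg
c    = 299792458.0       # speed of light, m/s
E    = m * c**2          # rest energy, J
a    = hbar / (m * c)    # Compton radius, m

def psi_t(t):
    # elementary wavefunction ψ = a·e^(-i(E/ħ)t)
    return a * cmath.exp(-1j * (E / hbar) * t)

def psi_x(x):
    # the same wavefunction, with t substituted by x/c
    return psi_t(x / c)

# left side: ∂ψ/∂t = −i(E/ħ)ψ, evaluated analytically at t = 0
lhs = -1j * (E / hbar) * psi_t(0)

# right side: i·(ħ/m)·∂²ψ/∂x², with a central finite difference
h = 1e-14                # step in x; one cycle spans 2πħc/E ≈ 2.4e-12 m
d2 = (psi_x(h) - 2 * psi_x(0) + psi_x(-h)) / h**2
rhs = 1j * (hbar / m) * d2

assert abs(lhs - rhs) / abs(lhs) < 1e-3
```

The two sides agree to well within the finite-difference error, which is what the analytical substitution predicts.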

The acceleration vector

Let us have some more fun now. Let us calculate the acceleration vector a (do not confuse this with the amplitude a or the radius vector r

xxxix

):

a = dv/dt = d2r/dt2 = −ω2·a·e−iωt ⇒ |a| = ω2·a = (E/ħ)2·(ħ/mc) = mc3/ħ

We find that the magnitude of the (centripetal) acceleration is constant and equal to mc3/ħ.

xl

This is a

nice result⎯because its physical dimension works out: [mc3/ħ] = m/s2, so that is an acceleration all right.
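A quick numerical check of this magnitude (the sketch is ours; the constants are the standard CODATA values for the electron):

```python
hbar = 1.054571817e-34   # reduced Planck constant, J·s
m    = 9.1093837015e-31  # electron mass, kg
c    = 299792458.0       # speed of light, m/s

a_c = m * c**3 / hbar    # centripetal acceleration, m/s²
r_C = hbar / (m * c)     # Compton radius, m

# circular motion at speed c on a circle of radius r_C: a = c²/r_C
assert abs(a_c - c**2 / r_C) / a_c < 1e-12
print(a_c)               # ≈ 2.3e29 m/s²
```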

Let us go beyond our electron model now. Let us see if all this works for something we know: Bohr-Rutherford electron orbitals, for example. The radius of Bohr-Rutherford orbitals is of the order of the Bohr radius rB = rC/α, and their energy is of the order of the Rydberg energy ER = α2mc2, with α the fine-structure constant.

xli

The velocity and acceleration are, therefore, equal to:

v = αc and |a| = v2/rB = α3·mc3/ħ = α3·ωc

We get the classical orbital velocity v = αc, while the magnitude of the acceleration equals α3·ωc. The ωc factor has the right physical dimension (1/s)·(m/s) = m/s2 and so, yes, all looks good.
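These Bohr-orbital values are easy to check numerically. A small sketch of ours, again with the standard CODATA values:

```python
alpha = 7.2973525693e-3  # fine-structure constant
hbar  = 1.054571817e-34  # reduced Planck constant, J·s
m     = 9.1093837015e-31 # electron mass, kg
c     = 299792458.0      # speed of light, m/s

v   = alpha * c                    # orbital velocity of the first Bohr orbit
r_B = hbar / (m * c * alpha)       # Bohr radius r_B = r_C/α, m
a_B = alpha**3 * m * c**3 / hbar   # orbital acceleration α³·mc³/ħ, m/s²

assert 2.18e6 < v < 2.19e6         # the textbook Bohr velocity ≈ 2.19e6 m/s
assert abs(a_B - v**2 / r_B) / a_B < 1e-12
```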

However, we need to get further into the grind. We have an easy explanation now of the second-order derivative with respect to time (∂2/∂t2), but we do not have such an easy interpretation for ∇2. Do you see one?

[…]

It is and remains all very mysterious but, at the very least, you can already appreciate that there is nothing magical or mysterious about quantum-mechanical operators: ∂/∂t and ∂2/∂t2 are quantum-mechanical operators too!

xxxix

It is like using m for mass and m for meter. Textbooks will usually take care to differentiate symbols, but we think that does not make you any smarter.

xl

The minus sign is there because its direction is opposite to that of the radius vector r.

xli

If the principal quantum number is larger than 1 (n = 2, 3,…), an extra n2 or 1/n2 factor comes into play. We refer

to Chapter VII (the wavefunction and the atom) of our manuscript for these formulas.


Conclusion

We hope we have succeeded in giving you a feel for the real mystery of quantum physics: the wave

equation. We are sure some higher mind will, one day, be able to reconstruct Schrödinger’s derivation

of his wave equation, in very much the same way as Feynman gave us a derivation of the soundwave

equation. Perhaps it will be you!

Brussels, 8 April 2021


Annex: musing and exercises

Schrödinger’s electron

We gave you the solution to Schrödinger’s wave equation in free space. It is a complex exponential with a (real-valued) coefficient

xlii

, so we should not write it as r = e−iθ but as r = a·e−iθ, and this coefficient is, effectively, the Compton radius of our particle. For an electron, we can easily calculate it as follows:

a = ħ/mc = (1.0546×10−34 J·s)/(9.1094×10−31 kg × 2.9979×108 m/s) ≈ 0.386×10−12 m

Paraphrasing Prof. Dr. Patrick LeClair

xliii

, we understand this distance as “the scale above which the electron can be localized in a particle-like sense”, and it also clarifies what Dirac referred to as “the law of (elastic or inelastic) scattering of light by an electron”⎯Compton’s law, in other words.

xliv

So, you can check the physical dimension of a: [ħ/mc] = (J·s)/(kg·m/s) = (kg·m2/s)/(kg·m/s) = m.
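Dimensional bookkeeping of this kind can even be automated. A small sketch of ours that tracks exponents of the base units kg, m and s (the helper functions are our own invention):

```python
# a physical dimension represented as a dict of exponents of kg, m, s
def mul(a, b):
    return {u: a.get(u, 0) + b.get(u, 0) for u in set(a) | set(b)}

def div(a, b):
    return mul(a, {u: -e for u, e in b.items()})

def clean(d):
    return {u: e for u, e in d.items() if e != 0}

hbar_dim = {"kg": 1, "m": 2, "s": -1}   # [ħ] = J·s = kg·m²/s
mass_dim = {"kg": 1}                    # [m] = kg
c_dim    = {"m": 1, "s": -1}            # [c] = m/s

a_dim = div(hbar_dim, mul(mass_dim, c_dim))   # [ħ/mc]
assert clean(a_dim) == {"m": 1}               # a length: meter
```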

Now, when we take the derivatives of the wavefunction, we can treat this coefficient like a constant

xlv

and the

dimensional analysis of Schrödinger’s equation of the left and right side respectively can then be written

as

xlvi

:

xlii

Physics textbooks will tell you the coefficient may be complex-valued, but when you multiply everything through

in practical examples, you will see you can always write the whole thing as a real-valued coefficient times a

complex exponential.

xliii

See: http://pleclair.ua.edu/PH253/Notes/compton.pdf, p. 10.

xliv

Compton scattering may be explained conceptually by accepting the incoming and outgoing photon are

different photons (they have different wavelengths so it should not be too difficult to accept this as a logical

statement: the wavelength pretty much defines the photon⎯so if it is different, you have a different photon). This,

then, leads us to think of an excited electron state, which briefly combines the energy of the stationary electron

and the photon it has just absorbed. The electron then returns to its equilibrium state by emitting a new photon.

The energy difference between the incoming and outgoing photon then gets added to the kinetic energy of the

electron through Compton’s law: λ′ − λ = (h/mec)·(1 − cosθ).

This physical law can be easily derived from first principles (see, for example, Patrick R. Le Clair, 2019): the energy

and momentum conservation laws, to be precise. More importantly, however, it has been confirmed

experimentally.

xlv

We can only do that because we model circular orbitals here. For elliptical orbits (or whatever other complicated shape), the coefficient itself will vary as a function of the position (r) and time (t), so we must apply the product rule, and the derivation becomes quite complicated: generalizing the model to encompass all possible orbitals is no easy task.

xlvi

In the second equation (righthand side of Schrödinger’s equation), we make use of the [i] = s/m result, which we derived above (the rotation operator swaps axes and, incidentally, has the same dimension as the 1/c factor). We wrote a bit about that in a very early paper of ours, in which we explore Feynman’s suggestion to

think of it all as some energy diffusion or energy propagation mechanism. That paper has visual illustrations which

you might want to explore.


So, we are good⎯once more! We now have an m/s dimension, but it is OK because it is the same on both sides of the equation. And, yes, now that we have a functional form for the wavefunction itself (ψ), we could do an even better job at writing it all out, but you can work that out for yourself, right? Try it as an exercise and, no worries, we will do it for you at the end of this paper.

Radians and measurement units

Let us get back to pure math and explain some more about radians. There is something very special

about them. We can not only measure distance in radians but time as well. Read this again: we can

measure time in radians (a distance unit) rather than seconds. Because Schrödinger’s wave equation has

the reduced Planck constant in it (ħ, not h), this is sort of logical.

xlvii

But it may come across as a mystery

to you. In fact, if there is any mystery in quantum physics, then this might be it: I do not see any other

mysteries.

The point to be appreciated here is that (circular) motion is associated with a change in the phase angle

θ, which we will write as a differential Δθ. Now there are two viewpoints:

1. For a given (small) interval in time (Δt), the distance traveled (Δx) will be equal to the radius vector times Δθ (approximately, at least

xlviii

), so we can write: Δx ≈ rΔθ.

2. For a given (small) interval in space (Δx or, in 3D space, Δx), the time needed to cover that distance (Δt) will be equal to the same: Δt ≈ rΔθ.

This shows we can use the radian as a unit of time as well as a unit of distance, as illustrated below

(Figure 4).

Figure 4: The radian as unit of distance and of time
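The small-angle approximation behind the Δx ≈ rΔθ relation (see footnote xlviii) is easy to check numerically; a minimal sketch of ours:

```python
import math

# small-angle approximation: sin(Δθ) ≈ Δθ, with absolute error of order Δθ³/6
for dtheta in (0.1, 0.01, 0.001):
    rel_err = abs(math.sin(dtheta) - dtheta) / dtheta
    print(dtheta, rel_err)

assert abs(math.sin(0.001) - 0.001) / 0.001 < 1e-6
```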

The use of the radian as an equivalent time and distance unit can also be illustrated by playing with the

associated derivatives, and taking their ratios

xlix

:

xlvii

The reduced quantum (ħ) is equal to h/2π. This division by 2π distinguishes a cycle time measured in cycles from time measured in radians, which corresponds to the difference between frequency (f = 1/T) and angular frequency (ω = 1/τ). The latter symbol (τ) is, unfortunately, not commonly used.

xlviii

We have two approximations here: (1) the length of the hypotenuse and the adjacent side of the triangle are

equated to the radius and the arc length; (2) we use the small-angle approximation to equate sin(Δθ) to Δθ.

xlix

We must make an important remark here: our playing with differentials here assumes a normalized concept of

velocity: we can only use the radian simultaneously as a time and distance unit when defining time and distance


This is as far as we can go in terms of understanding the nature of space and time⎯philosophically,

mathematically, and physically. Or… Perhaps not. Let us say something more about the so-called

arrow of time.

The arrow of time

Spacetime trajectories – or, to put it more simply, motion – need to be described by well-defined

functions. That means that for every value of t (time), we should have one, and only one, value of x

(space).

l

The reverse, of course, is not true: a particle can travel back to where it was (or, if there is no

motion, just stay where it is).

This is illustrated below: a pointlike particle which moves like what is shown on the right-hand side

cannot exist because there are a few occasions here where the particle occupies multiple positions in

space at the same point in time. Now, some physicists may believe that should actually be possible, but

we do not want to entertain such ideas, really.

Figure 5: A well- and a not-well behaved trajectory in spacetime

This shows that time must go in one direction only. We can play a movie backwards, but we cannot

reverse time. Think of this: a movie in which two like charges (say, two electrons⎯or two protons)

would attract rather than repel each other does not make sense. We would, therefore, know this is a

movie which was being played backwards, and we would say it is impossible: time cannot be reversed.

This intuition contrasts with the erroneous suggestion of Richard Feynman that we should, perhaps,

think of antimatter-particles as particles that travel back in time. It is nonsense: the plus/minus sign of

the argument of the wavefunction gives us the spin direction of a particle. It has nothing to do with time

going in this or that direction. As for antimatter, we believe it can be modelled by the plus/minus sign of

units such that v = λ/τ = 1. The λ in this equation is the circumference of the circle (think of it as a circular wavelength), and τ = T/2π is the (reduced) cycle time. See footnote xlvii.

l

We can generalize to two- or three-dimensional space, of course. The x in the illustration then becomes a vector

in a three-dimensional vector space: x = (x, y, z). We should note there is no such thing as four-dimensional

physical space. Mathematical spaces may have any number of dimensions, but the notion of physical space is a

category of our mind, and it is three-dimensional: left or right, up or down, front or back. You can try to invent

something else, but it will always be some combination of these innate notions. Time and space are surely related

(through special and general relativity theory, to be precise) but they are not the same. Nor are they similar. We

do, therefore, not think that some ‘kind of union of the two’ will replace the separate concepts of space and time

any time soon, despite Minkowski’s stated expectations in this regard back in 1908.


the coefficient of the wavefunction.

li

Solving the dimensions of the wave equation once again

We are now ready to tell you how we make sense of the world. We measure distance and time as arc lengths. The full wavelength λ corresponds to the circumference of the circle, and the natural time unit is one full cycle. We can then normalize velocities by defining the orbital velocity v as v = λ/1, which amounts to normalizing the length of the radius vector: a = 1. The magnitude of the velocity vector is then expressed in radians (per unit time): 2π radians, to be precise, and v = λ = 2π rad.

This, in turn, allows us to choose a force unit such that this force unit times λ times T (the cycle time) equals Planck’s quantum of action: λTF = h. We can write this in reduced form by dividing λ and h by 2π so as to get the reduced form of Planck’s constant and write it as a product of energy (force times a distance) and time:

ħ = FaT = (Fa)T = ET

Multiplying with 2π once more gives us the second de Broglie relation, which gives us Planck’s constant written in terms of a momentum (a force times a time interval) and a length:

h = FT·2πa = (FT)·λ = p·λ

We can normalize the force unit too by setting ħ = 1 and then choosing the force unit such that F = 1.

Of course, we can think of force and momentum as vectors, which turns the equation into a vector

equation:

h = ET = p∙λ

Alternatively, we can also think of the radius as a radius vector and write

lii

:

ħ = F∙aT = (F∙a)T = (Fa)T = ET

Now we can define a mass unit using Newton’s force law F = dp/dt (the relativistically correct

expression) or F = mac (non-relativistic). Note that we have a centripetal acceleration vector a

here⎯and, yes, we added the subscript c so as to distinguish the centripetal acceleration vector ac from

the radius vector a.

liii

And on and on it goes. We can now introduce derivatives and complex notation and introduce the wavefunction ψ = a·e−iθ = a·e−iωt = a·e−i(E/ħ)t for the electron, substituting ħ/mc = ħc/E for a, and ħc2/E for ħ/m, and, therefore, write Schrödinger’s equation as:

li

See our paper on the Zitterbewegung hypothesis and the scattering matrix.

lii

The radius and force vector have opposite direction, so the vector dot product F∙a reduces to F∙a = F·a·cosφ = F·a·(−1) = −F·a (we leave the minus sign aside here, as before).

liii

You should not confuse the c from centripetal with the c of lightspeed!


Now, this all looks very formidable, but it works out surprisingly well. Take the left side first:

∂ψ/∂t = −i·(E/ħ)·a·e−i(E/ħ)t

Now the right side: there we have time as a variable, but we want to take the (second-order) derivative with respect to position⎯so how does that work, then? We can write x as x = ct or as x = −ct, perhaps

liv

, and, therefore, substitute t for x/c, perhaps? Let us see what we get:

Bingo! All is OK. We did not prove Schrödinger’s equation, but we did show it is dimensionally

consistent!

Of course, now that you’ve got this, the real work starts: you must think about variabilizing the radius

and consider a particle that is not at rest. And you need to understand all about potentials, learn about

four-vectors, etcetera. For that, we refer you to our other K-12 level papers.

If you do not want to go that far, you can continue thinking about the (possible) physical dimension of the wavefunction. If it is an energy density, then the dimension (m) of its coefficient (the radius or amplitude of the oscillation a) combines with the F/m2 or E/m3 dimension of the complex exponential e−iθ (it is just a cyclical function⎯nothing to be mystified about: just a combination of two sinusoidally varying orthogonal vectors), and so the whole a·e−i(E/ħ)t expression then gets an F/m or an E/m2 dimension. That makes a lot of sense when we think of the interpretation of the wavefunction in terms of probabilities of actually finding the particle (or the pointlike charge?) at position x at time t!

Indeed, if we take the absolute square of the wavefunction^lv, and normalize that value by dividing it by the (squared) energy density of the whole volume, we should get a probability: some pure (scalar) number which varies between 0 and 1.

Of course, you may wonder: why this squaring business? The logic here is just the same as that which we apply in statistics: plus and minus signs would cancel each other out and, hence, to calculate a mean, we should take a root mean square (also known as a quadratic mean) approach so as to get a meaningful result.
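A small numerical illustration of that statistical logic (ours): over one full cycle of a cosine, the plain mean vanishes, while the root mean square gives the meaningful 1/√2 value:

```python
import math

N = 100000
samples = [math.cos(2 * math.pi * k / N) for k in range(N)]

mean = sum(samples) / N                           # plus and minus cancel: ~0
rms = math.sqrt(sum(s * s for s in samples) / N)  # root mean square: ~1/sqrt(2)

print(mean, rms)
```

The mean comes out at (numerically) zero, while the RMS comes out at about 0.7071, i.e. 1/√2.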

Is the above absolute truth? It surely is not. There are alternative ways of looking at the wavefunction:

we may think of it as modeling a field vector, for example. We then can analyze energies using the

Poynting vector or other ways of modeling (field) energy, which (also) involves the squaring of field

magnitudes. We say a few words on that in the Annex to this paper, so you may want to read on.

liv

We must define a convention here for the plus/minus sign of the velocity vector, associating one or the other

with a clock- and counterclockwise rotation, respectively. It is a fine matter (because we must take the minus sign

of the i3 = −i factor into account), but we will not worry about it.

lv

The correct term is: the absolute value of the square, but we prefer the shorthand term ‘absolute square’.


What is relative and absolute?

After all of this, you may wonder: what is real, and what is not? That is a philosophical question to which

there are (almost) as many answers as (great) philosophers. What we know is that we have a complete

and consistent description or representation of what we think of as reality. Now, that description

describes some things which are relative (relative in the sense as used in special or general relativity

theory) and some things which are not: the constants of Nature (think of the elementary charge,

lightspeed, and Planck’s quantum of action here). In addition, we have reduced all possible physical

dimensions to 7 base units (see the 2019 revision of the international system of units), and we also

devised a consistent set of concepts and operators (vectors, rotations, derivatives, integrals, etcetera).

That allows us to write all laws of physics as some combination of these constants and measurements.

As an example, we may combine Einstein’s mass-energy equivalence relation (E = mc²) with the Planck-Einstein relation (E = hf) to get a combined or synthetic relation:

m·c² = h·f ⟺ m/f = h/c²

The m/f = h/c2 equation tells us that the ratio of the mass and the frequency (as given by the Planck-

Einstein relation) of any (elementary) particle must be equal to the ratio of Planck’s quantum of action

and the squared lightspeed. Do we understand this relation? Yes and no: to understand it, we must

analyze this equation in terms of the two fundamental relations which give us this equation. Hence, the

language of math, combined with the laws of physics, gives us a representation which makes sense. Think of it as some kind of story: call it the Book(let) of Nature or its mode d’emploi, if you want.
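A quick numerical check of this synthetic relation for the electron (our sketch, rounded CODATA values):

```python
# m/f = h/c^2 should hold for any particle; check it for the electron.
h = 6.62607015e-34      # J*s
c = 2.99792458e8        # m/s
m_e = 9.1093837015e-31  # kg

f_e = m_e * c**2 / h    # Planck-Einstein frequency of the electron, ~1.24e20 Hz
print(m_e / f_e)        # ~7.372e-51 kg*s
print(h / c**2)         # same number

assert abs(m_e / f_e - h / c**2) / (h / c**2) < 1e-12
```

The ratio m/f is, indeed, a constant of Nature: h/c² ≈ 7.37×10⁻⁵¹ kg·s.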

Space and time do remain somewhat special: these two concepts (categories of the mind, as Immanuel

Kant referred to them) link all of the physical concepts to our thinking about it, so we can think of the

wavefunction (and the wave equation) as some kind of link function (another concept from statistics,

which you may (or not) want to study further). Now, we could make this story – this paper – much

longer, but we do not want to do that, so we will just put in a diagram which reflects what we wrote

above and that will be it.


The nuclear oscillation and quaternion math^lvi

In this paper, we talked a lot about the Zitterbewegung model of an electron, which is a model which

allows us to think of the elementary wavefunction as representing a radius or position vector. We write:

ψ = r = a·e^±iθ = a·[cos(±θ) + i·sin(±θ)]

It is just an application of Parson’s ring current or magneton model of an electron. Note that we use boldface to denote vectors, and that we think of the sine and cosine here as vectors too! You should

note that the sine and cosine are the same function: they differ only because of a 90-degree phase shift:

cosθ = sin(θ + π/2). Alternatively, we can use the imaginary unit (i) as a rotation operator and use the

vector notation to write: sinθ = i·cosθ.
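Both identities are easy to verify numerically with Python’s complex arithmetic (our illustration):

```python
import cmath
import math

theta = 0.7  # any angle will do

# cos(theta) = sin(theta + pi/2): same function, 90-degree phase shift
assert math.isclose(math.cos(theta), math.sin(theta + math.pi / 2))

# Multiplying by i rotates a point in the complex plane by 90 degrees:
z = cmath.exp(1j * theta)  # a point on the unit circle
rotated = 1j * z
assert cmath.isclose(rotated, cmath.exp(1j * (theta + math.pi / 2)))

print("i acts as a rotation operator")
```

Multiplication by i is, quite literally, a rotation by 90 degrees.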

We also showed how and why this all works like a charm: when we take the derivative with respect to

time, we get the (orbital or tangential) velocity (dr/dt = v), and the second-order derivative gives us the

(centripetal) acceleration vector (d2r/dt2 = a). The plus/minus sign of the argument of the wavefunction

gives us the direction of spin, and we may, perhaps, add a plus/minus sign to the wavefunction as a

whole to model matter and antimatter, respectively (the latter assertion remains very speculative

though).

One orbital cycle packs Planck’s quantum of (physical) action, which we can write either as the product

of the energy (E) and the cycle time (T), or the momentum (p) of the charge times the distance travelled,

which is the circumference of the loop λ in the inertial frame of reference (we can always add a classical

linear velocity component when considering an electron in motion, and we may want to write Planck’s quantum of action as an angular momentum vector (h or ħ) to explain what the Uncertainty Principle is all about (statistical uncertainty, nothing ontological), but let us keep things simple for now):

h = E·T = p·λ
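For an electron at rest, T = h/E and λ = h/mc (the Compton wavelength), so both products indeed give back h. A numerical check (ours, rounded constants):

```python
h = 6.62607015e-34      # J*s
c = 2.99792458e8        # m/s
m_e = 9.1093837015e-31  # kg (electron)

E = m_e * c**2          # rest energy
T = h / E               # cycle time, ~8.09e-21 s
p = m_e * c             # momentum of the pointlike charge
lam = h / (m_e * c)     # Compton wavelength, ~2.43e-12 m

print(E * T, p * lam)   # both equal h

assert abs(E * T - h) / h < 1e-12
assert abs(p * lam - h) / h < 1e-12
```

One cycle packs exactly one unit of h, whichever way we slice the product.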

It is important to distinguish between the electron and the charge, which we think of as being pointlike: the electron is charge in motion. Charge is just charge: it explains everything, and its nature is, therefore,

quite mysterious: is it really a pointlike thing, or is there some fractal structure? Of these things, we

know very little, but the small anomaly in the magnetic moment of an electron suggests its structure

might be fractal. Think of the fine-structure constant here, as the factor which distinguishes the classical,

Compton and Bohr radii of the electron: we associate the classical electron radius with the radius of the pointlike charge, but perhaps we can drill down further.

We also showed how the physical dimensions work out in Schrödinger’s wave equation. Let us jot it

down to appreciate what it might model, and appreciate once again why complex numbers come in

handy:

This is, of course, Schrödinger’s equation in free space, which means there are no other charges around

and we, therefore, have no potential energy terms here. The rather enigmatic concept of the effective

lvi This Annex is a copy of one of our blog posts on the same topic (math and physics), and there may be, therefore, some repetition with the main body of the paper.


mass (which is half the total mass of the electron) is just the relativistic mass of the pointlike charge as it

whizzes around at lightspeed, so that is the motion which Schrödinger referred to as

its Zitterbewegung (Dirac confused it with some motion of the electron itself, further compounding

what we think of as de Broglie’s mistaken interpretation of the matter-wave as a linear oscillation: think

of it as an orbital oscillation). The 1/2 factor is there in Schrödinger’s wave equation for electron

orbitals, but he replaced the effective mass rather subtly (or not-so-subtly, I should say) by the total

mass of the electron because the wave equation models the orbitals of an electron pair (two electrons

with opposite spin). So, we might say he was lucky: the two mistakes together (not accounting for spin,

and adding the effective mass of two electrons to get a mass factor) make things come out all right.

However, we will not say more about Schrödinger’s equation for the time being (we will come back to

it): just note the imaginary unit, which does operate like a rotation operator here. Schrödinger’s wave

equation, therefore, must model (planar) orbitals. Of course, the plane of the orbital itself may be

rotating itself, and most probably is because that is what gives us those wonderful shapes of electron

orbitals (subshells). Also note the physical dimension of ħ/m: it is a factor which is expressed in m²/s, but when you combine that with the 1/m² dimension of the ∇² operator, then you get the 1/s dimension on both sides of Schrödinger’s equation. [The ∇² operator is just the generalization of the d²/dx² operator to three dimensions, so x becomes a vector (x), and we apply the operator to the three spatial coordinates and get another vector, which is why we call ∇² a vector operator. Let us move on because we cannot explain each and every detail here, of course!]
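This dimensional bookkeeping can be mechanized with a tiny exponent-vector scheme (our sketch: a dimension is a tuple of (kg, m, s) exponents):

```python
# Represent a physical dimension as exponents of (kg, m, s).
def mul(d1, d2):
    return tuple(a + b for a, b in zip(d1, d2))

def div(d1, d2):
    return tuple(a - b for a, b in zip(d1, d2))

HBAR = (1, 2, -1)   # J*s = kg*m^2/s
MASS = (1, 0, 0)    # kg
LAPL = (0, -2, 0)   # the nabla^2 operator carries 1/m^2

hbar_over_m = div(HBAR, MASS)  # m^2/s
rhs = mul(hbar_over_m, LAPL)   # 1/s
print(hbar_over_m, rhs)

assert hbar_over_m == (0, 2, -1)  # m^2/s, as stated in the text
assert rhs == (0, 0, -1)          # 1/s on both sides of the equation
```

The ħ/m factor carries m²/s, and combining it with the 1/m² of ∇² leaves 1/s, matching the ∂/∂t on the other side.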

We need to talk forces and fields now. This ring current model assumes an electromagnetic field which

keeps the pointlike charge in its orbit. This centripetal force must be equal to the Lorentz force (F),

which we can write in terms of the electric and magnetic field vectors E and B (fields are just forces per

unit charge, so the two concepts are very intimately related):

F = q·(E + v×B) = q·(E + c×iE/c) = q·(E + 1×iE) = q·(E + j·E) = (1+ j)·q·E

We use a different imaginary unit here (j instead of i) because the plane in which the magnetic field

vector B is going round and round is orthogonal to the plane in which E is going round and round, so let

us call these planes the xy- and xz-planes, respectively. Of course, you will ask: why is the B-plane not the yz-plane? We might be mistaken, but the magnetic field vector lags the electric field vector, so it is either of the two, and so now you can check for yourself whether what we wrote above is actually correct.

Also note that we write 1 as a vector (1) or a complex number: 1 = 1 + i·0. [It is also possible to write

this: 1 = 1 + i·0 or 1 = 1 + i·0. As long as we think of these things as vectors – something with a

magnitude and a direction – it is OK.]

You may be lost in the math already, so we should visualize this. Unfortunately, that is not easy. You may want to google for animations of circularly polarized electromagnetic waves, but these usually show the electric field vector only, and animations which show both E and B usually depict linearly polarized waves.

Let me reproduce the simplest of images: imagine the electric field vector E going round and round.

Now imagine the field vector B being orthogonal to it, but also going round and round (because

its phase follows the phase of E). So, yes, it must be going around in the xz- or yz-plane (as mentioned

above, we let you figure out how the various right-hand rules work together here).


You should now appreciate that the E and B vectors – taken together – will also form a plane. This

plane is not static: it is not the xy-, yz– or xz-plane, nor is it some static combination of two of these. No!

We cannot describe it with reference to our classical Cartesian axes because it changes all the time as a

result of the rotation of both the E and B vectors. So how can we describe that plane mathematically?

The Irish mathematician William Rowan Hamilton – who is also known for many other mathematical

concepts – found a great way to do just that, and we will use his notation. We could say the plane

formed by the E and B vectors is the E-B plane but, in line with Hamilton’s quaternion algebra, we will refer to it as the k-plane. How is it related to what we referred to as the i- and j-planes, or the xy- and xz-planes as we used to say? At this point, we should introduce Hamilton’s notation:

he did write i and j in boldface (we do not like that, but you may want to think of it as just a minor

change in notation because we are using these imaginary units in a new mathematical space: the

quaternion number space), and he referred to them as basic quaternions in what you should think of as

an extension of the complex number system. More specifically, he wrote this on a now rather famous

bridge in Dublin:

i² = −1

j² = −1

k² = −1

i·j = k

j·i = −k

The first three rules are the ones you know from complex number math: two successive rotations by 90 degrees will bring you from 1 to −1. The order of multiplication in the other two rules (i·j = k and j·i = −k) gives us not only the k-plane but also the spin direction. All other rules in regard to quaternions (we can write, for example, i·j·k = −1, and you will find the other products in the Wikipedia article on quaternions) can be derived from these, but we will not go into them here.
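These rules, and derived products such as i·j·k = −1, can be verified with a minimal Hamilton-product implementation (our sketch, representing a quaternion as a (w, x, y, z) tuple):

```python
# Quaternions as (w, x, y, z) tuples; qmul implements the Hamilton product.
def qmul(p, q):
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    )

def neg(q):
    return tuple(-component for component in q)

ONE = (1, 0, 0, 0)
I = (0, 1, 0, 0)
J = (0, 0, 1, 0)
K = (0, 0, 0, 1)

assert qmul(I, I) == qmul(J, J) == qmul(K, K) == neg(ONE)  # i2 = j2 = k2 = -1
assert qmul(I, J) == K and qmul(J, I) == neg(K)            # i*j = k, j*i = -k
assert qmul(qmul(I, J), K) == neg(ONE)                     # i*j*k = -1
print("Hamilton's bridge rules check out")
```

Note how the non-commutativity (i·j ≠ j·i) falls straight out of the product formula: that is what encodes the spin direction.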

Now, you will say, we do not really need that k, do we? Just distinguishing between i and j should do,

right? The answer to that question is: yes, but only when you are dealing with electromagnetic

oscillations! But it is a resounding no when you are trying to model nuclear oscillations! That is, in fact,

exactly why we need this quaternion math in quantum physics!

Let us think about this nuclear oscillation. Particle physics experiments – especially high-energy physics

experiments – effectively provide evidence for the presence of a nuclear force. To explain the proton

radius, one can effectively think of a nuclear oscillation as an orbital oscillation in three rather than just


two dimensions. The oscillation is, therefore, driven by two (perpendicular) forces rather than just one,

with the frequency of each of the oscillators being equal to ω = E/2ħ = mc²/2ħ.

Each of the two perpendicular oscillations would, therefore, pack one half-unit of ħ only. The ω =

E/2ħ formula also incorporates the energy equipartition theorem, according to which each of the two

oscillations should pack half of the total energy of the nuclear particle (so that is the proton, in this

case). This spherical view of a proton fits nicely with packing models for nucleons and yields the

experimentally measured radius of a proton:

Of course, you can immediately see that the factor 4 is the same factor 4 as the one appearing in the formula for the surface area of a sphere (A = 4πr²), as opposed to that for the surface of a disc (A = πr²).
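A numerical check, assuming the radius formula is r = 4ħ/(m_p·c) (our reading of the factor-4 remark; treat it as an assumption):

```python
hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s
m_p = 1.67262192369e-27  # kg (proton mass)

# Assumed formula (our reading of the text): r = 4*hbar/(m_p*c)
r = 4 * hbar / (m_p * c)
print(r * 1e15)  # ~0.841 femtometer

# CODATA proton charge radius: ~0.8414 fm
assert abs(r - 0.8414e-15) / 0.8414e-15 < 0.01
```

The number lands within a fraction of a percent of the measured ~0.84 fm proton charge radius.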

And now you should be able to appreciate that we should probably represent a proton by

a combination of two wavefunctions. Something like this:

What about a wave equation for nuclear oscillations? Do we need one? We sure do. Perhaps we do not

need one to model a neutron as some nuclear dance of a negative and a positive charge. Indeed, think

of a combination of a proton and what we will refer to as a deep electron here, just to distinguish it from

an electron in Schrödinger’s atomic electron orbitals. But we might need it when we are modeling

something more complicated, such as the different energy states of, say, a deuteron nucleus, which

combines a proton and a neutron and, therefore, two positive charges and one deep electron.

According to some, the deep electron may also appear in other energy states and may, therefore, give

rise to a different kind of hydrogen (they are referred to as hydrinos). What do I think of those? I think

these things do not exist and, if they do, they cannot be stable. These researchers need to produce a

wave equation for them in order to be credible and, in light of what we wrote about the complications

in regard to the various rotational planes, that wave equation will probably have all of Hamilton’s basic

quaternions in it. [But so, as mentioned above, I am waiting for them to come up with something that

makes sense and matches what we can actually observe in Nature: those hydrinos should have a

specific spectrum, and we do not see such a spectrum from, say, the Sun, where there is so much going on that, if hydrinos exist, the Sun should produce them, right? So, yes, I am rather skeptical here: I

do think we know everything now and physics, as a science, is sort of complete and, therefore, dead as a

science: all that is left now is engineering!]

But, yes, quaternion algebra is a very necessary part of our toolkit. It completes our description of

everything!


Let us now go back to our electron wavefunction and do some more calculations. We already said that a

field is a force per unit charge, so if we have the force, we can calculate the field strength. Now, we refer

to previous papers^lvii for those force calculations. Here, we will just present the result for the electron:

If we think in terms of some force holding the pointlike charge in its orbit, then we calculate this force

for the electron as being equal to about 0.106 N. This is the formula (the ½ factor has something to do

with effective mass (half of the mass of the electron is kinetic and the other half is field energy), but we

will not bother you with that right now^lviii):

That is a huge force at the sub-atomic scale: it is equivalent to a force that gives a mass of about 106 gram (1 g = 10⁻³ kg) an acceleration of 1 m/s per second! However, if you think it might be too huge to

make sense, think again: this is an electromagnetic force, and the nuclear force inside a muon-electron

and the proton is much stronger. Anyway, let us calculate the field strength now:

To help you appreciate how humongous this value is, you should note that the most powerful man-made accelerators may only reach field strengths of the order of 10⁹ N/C (1 GV/m). So, does this make

any sense? We think it does, but you should, of course, always think for yourself.
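A numerical check (ours) of both numbers, using the order-of-magnitude formula from footnote lviii with ħ rather than h and the ½ effective-mass factor:

```python
hbar = 1.054571817e-34  # J*s
c = 2.99792458e8        # m/s
m_e = 9.1093837015e-31  # kg (electron)
q_e = 1.602176634e-19   # C (elementary charge)

# Force with the 1/2 effective-mass factor: F = (1/2) * m^2 * c^3 / hbar
F = 0.5 * m_e**2 * c**3 / hbar
print(F)  # ~0.106 N

# Field is force per unit charge:
E_field = F / q_e
print(E_field)  # ~6.6e17 N/C, versus ~1e9 N/C for accelerators

assert abs(F - 0.106) < 0.001
assert 6e17 < E_field < 7e17
```

So the ~0.106 N force translates into a field strength of the order of 10¹⁷ N/C, some eight orders of magnitude beyond what man-made accelerators reach.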

[…]

Oh – what about that asymmetric potential for the nuclear force and energy conservation? Think about

it: the energy conservation law should hold because, for the nuclear force also, we will have the

equivalent of an electric and a magnetic field component, and so the associated energy sloshes back and

forth between them.

The only weird thing is that the inverse-square law does not seem to hold for the nuclear potential, but

that is – perhaps – because the nuclear force is so humongous (think of the massive proton versus the

volatile electron here) that it might, effectively, curve spacetime. It is not a very elegant solution to the

(field) energy conservation problem, but we do not see any other one, unfortunately. Perhaps you will

manage to show that gravity is, somehow, some residual force: a sort of Van der Waals force, that which is left of the fundamental forces at the atomic or molecular scale. We should warn you,

however: many have tried their hand at this (or have trained their brain on it, I should say), but no one has ever managed to do that.

lvii See, for example, our lecture on quantum behavior. The idea is this: energy is force over a distance, so we get the force from using the Planck-Einstein and mass-energy equivalence relations, and substituting the distance by the circumference of the loop.

lviii The formula itself is derived from the definition of energy as a force over a distance: E = F·λ (if the force is not constant, then you need to calculate an integral). So, we can write F = E/λ = (mc²)/(h/mc) = m²c³/h. This calculation may be off by a factor 2 or a factor ½, so you should just think of it as an order of magnitude.