Content uploaded by Sergei Viznyuk

Author content

All content in this area was uploaded by Sergei Viznyuk on Apr 08, 2018

Content may be subject to copyright.


New QM framework

Sergei Viznyuk

Abstract

I propose a model wherein a system is represented by a finite sequence of natural numbers. These numbers are thought of as population numbers in a statistical ensemble formed as a sample with replacement of entities (microstates) from some abstract set. I derive the concepts of energy and of temperature. I show an analogy between energy spectra computed from the model and energy spectra of some known constructs, such as the particle in a box and the quantum harmonic oscillator. The presented model replaces the concept of the wave function with a knowledge vector. I derive a Schrödinger-type equation for the knowledge vector and discuss principal differences with the Schrödinger equation. The model retains major QM hallmarks, such as wave-particle duality, violation of Bell’s inequalities, the quantum Zeno effect, and uncertainty relations, while avoiding the controversial concept of wave function collapse. Unlike standard QM and Newtonian mechanics, the presented model has the Second Law of Thermodynamics built in; in particular, it is not invariant with respect to time reversal.

As for prophecies, they will pass away; as for tongues,

they will cease; as for knowledge, it will pass away.

1 Corinthians 13:8

1. PREAMBLE

Physical properties, such as temperature, energy, entropy, and pressure, and phenomena such as Bose–Einstein condensation are exhibited not just by “real” physical systems, but also by virtual entities such as binary or character strings [1, 2, 3], the World Wide Web [4], business and citation networks [5, 6], and the economy [7, 8, 9]. A characteristic quantum mechanical behavior has been observed in entities as different as electrons, electromagnetic waves, and nanomechanical oscillators [10].

There must be an underlying mechanism which accounts for the grand commonality in

observed behavior of vastly different entities. Scientists have recently discovered that various

complex systems have an underlying architecture governed by shared organizing principles [6].

“…present-day quantum mechanics is a limiting case of some more unified scheme… Such a theory would have to provide, as an appropriate limit, something equivalent to a unitarily evolving state vector” [11].

There are two factors present in all theories. One is the all-pervading time, and the other is the observer’s mind. A successful grand-commonality model must explain the nature of time, specify the mechanism by which physical reality projects onto the mind of the observer, and relate time to that projection. A grand-commonality theory may not contain fundamental physical constants; any model containing such constants is deemed incomplete.

I call physical reality (i.e. the “real” world) the underlying system. The underlying system is represented by its state vector. The measuring device defines the basis. The knowledge vector is the state vector represented in the basis. The concept of the knowledge vector may seem similar to the wave function in the Niels Bohr interpretation, wherein the wave function is “not to be taken seriously as describing a quantum-level physical reality, but is to be regarded as merely referring to our (maximal) ‘knowledge’ of a physical system…” [11]. In the presented framework, the action of the measuring device does not affect the state of the underlying system, only the knowledge vector. The model does not exhibit the so-called measurement problem [12].

The state vector has an associated value of proper time [13], which serves as the ordering parameter for different states of the underlying system. The state vector is completely defined by a finite sequence of natural numbers, which I call the population numbers of microstates from some set. I do not speculate on what a microstate, or the set, is, leaving them as abstract notions. I think of the sequence as a sample with replacement of microstates. I call such a sample the statistical ensemble. Henceforth the notion of physical reality is reduced to a sequence of natural numbers, which do not have to be physicalized in any way.

The proper time has been previously defined [13] as the ordering parameter for

the states of statistical ensemble:

, where

(1)

The proper time is quantized with time quantum . I combine this definition of

time with the following rule on time increments:

The positive direction of time change is when:

, where all are non-negative.

(2)

The negative direction of time change is when:

, where all are non-positive.

(3)

If the increments are such that some are positive and some are negative, then the state vectors are not connected by a timeline. There can be multiple timelines (histories) connecting two state vectors, as well as none.
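Rules (2)-(3) amount to a sign check on the population-number increments. A minimal sketch (the function name and example modes are illustrative, not from the paper):

```python
def timeline_direction(n_a, n_b):
    """Compare two modes (sequences of population numbers) under rules (2)-(3).

    Returns +1 if all increments are non-negative (forward in time),
    -1 if all are non-positive (backward), and None if increments have
    mixed signs, i.e. the state vectors are not connected by a timeline.
    """
    deltas = [b - a for a, b in zip(n_a, n_b)]
    if all(d >= 0 for d in deltas):
        return +1
    if all(d <= 0 for d in deltas):
        return -1
    return None

print(timeline_direction([1, 2, 0], [2, 2, 1]))  # prints 1
print(timeline_direction([2, 2, 1], [1, 2, 0]))  # prints -1
print(timeline_direction([1, 2, 0], [2, 1, 1]))  # prints None
```

Note that two modes may also be connected through intermediate modes along several distinct timelines, or through none at all, as stated above.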

For the given state vector, a choice of observation basis defines the knowledge vector. An

observation basis associated with the measuring apparatus is the preferred basis, much discussed

recently [14].

Commonly, the [Schrödinger] equation is solved in a time-forward manner: given a known state, find conditional probabilities of [future] measurement outcomes. Similarly, if the same equation is solved backward in time, one finds the past, too, is only defined in terms of conditional probabilities, expressed as the modulus square of a correlation function of two state vectors.

Any known fact from the past is deemed an artifact of the present state. What the observer thinks of as the past, the present, or the future is represented by knowledge vectors in the present. This stance aligns with empirical evidence, e.g. from delayed-choice experiments [15], suggesting the history is determined by the setup at present. Thus, there is only the present. A statistical correlation between knowledge vectors, with a certain correlation distance, is perceived as time evolution.

To further develop the model, I derive the notions of energy in Section 2 and of temperature in Section 3. In Section 4 I derive the equation of motion for the knowledge vector and discuss its similarities to and differences from the Schrödinger equation. I define the concept of measurement. I touch upon the notions of open/closed systems; conservation of energy; Bell’s inequalities; Haag’s theorem; the quantum Zeno effect; and the notion of memory. I show that the condition for quantum vs. classical behavior is the existence of a predictable phase relationship between knowledge vectors.

2. ENERGY

The base tenet of the model is that the sequence of population numbers completely defines the underlying system. I call this sequence the mode. Since a mode is formed as a sample with replacement, the [unconditional] probability of finding the underlying system in a particular mode (n₁, …, n_M) is given by the multinomial probability mass function:

P(n₁, …, n_M) = N! ∏ᵢ pᵢ^nᵢ / nᵢ!        (4)

, where pᵢ is the probability of sampling microstate i from the set. Within the context of the model

pᵢ = 1/M        (5)

, where M is the cardinality of the set. I introduce functions as follows:

(6)

(7)

(8)

(9)

(10)

, where is gamma function,

is Shannon’s [16] entropy, and

is equilibrium microstate entropy [17]. With (7-9), I rewrite (4) as

(11)

From (11) the probability of observing statistical ensemble of N microstates in a particular mode

is determined solely by the value of . If I’m to use as a single independent

variable, I can write the probability mass function in domain as:

(12)

Here is the multiplicity (degeneracy) of the given value¹, i.e. the number of ways the same value of is realized by different modes with the given parameters. There is no analytic expression for it; however, it is numerically computable. Table 1 contains ,

values calculated for several sets of parameters. Figures 1-2 show distinct

values of in increasing order for several values of parameter and probabilities (5) calculated

from (9), using algorithm [18] for finding partitions of integer into parts [19]. The

sum of the multiplicities over all distinct values is the total number of modes. It is equal to the number of ways to distribute N indistinguishable balls into M distinguishable cells:

(N + M − 1)! / (N! (M − 1)!)        (13)

, where the sum is over all distinct values. Figure 3 shows the total number of

distinguishable states of statistical ensemble, and the total number of distinct values as

¹ For a case of statistical ensemble with microstate probabilities (5), the multiplicity of a value is the multiplicity of the value of the multinomial coefficient in (4) [80]

functions of for two sets of probabilities (5), calculated from (13) and (9) using algorithm [18].
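The mode counts behind these figures can be reproduced for small parameters. A sketch, assuming equal microstate probabilities pᵢ = 1/M as in (5); the function names are mine:

```python
from collections import Counter
from itertools import combinations_with_replacement
from math import comb, factorial

def modes(N, M):
    """All modes: tuples (n_1, ..., n_M) of population numbers summing to N."""
    result = []
    for picks in combinations_with_replacement(range(M), N):
        result.append(tuple(picks.count(i) for i in range(M)))
    return result

def prob(n):
    """Multinomial probability (4) of mode n, with equal p_i = 1/M as in (5)."""
    N, M = sum(n), len(n)
    coeff = factorial(N)
    for ni in n:
        coeff //= factorial(ni)
    return coeff / M ** N

N, M = 6, 3
ms = modes(N, M)
assert len(ms) == comb(N + M - 1, M - 1)          # total mode count, eq. (13)
assert abs(sum(prob(n) for n in ms) - 1) < 1e-12  # probabilities sum to 1

# degeneracy: how many modes share the same probability value (cf. Table 1)
degeneracy = Counter(prob(n) for n in ms)
print(len(degeneracy), "distinct probability values among", len(ms), "modes")
```

For N = 6, M = 3 this enumerates 28 modes sharing 7 distinct probability values, illustrating the multiplicity described in footnote 1.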

The graphs demonstrate the following:

• For probabilities (5), the average degeneracy of levels approaches a limiting value. This statement can be expressed as:

(14)

Here the sum represents the number of distinct values for the given parameters. As the degeneracy is not a smooth function (see Table 1), there can be no true probability density in this domain. However, I shall derive a pseudo probability density to be used in expressions involving integration in the thermodynamic limit. To be able to use analytical math, I have to extend (7-11) from discrete variables to the continuous domain.

• The thermodynamic limit is the approximation of large population numbers:

(15)

In the thermodynamic limit, I shall use Stirling’s approximation for factorials:

ln n! ≈ n ln n − n + ½ ln(2πn)        (16)

With (5) it allows rewriting of (7-9) as

(17)

(18)
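The quality of Stirling’s approximation (16) can be checked directly against the exact log-factorial (a quick sketch; the function names are mine):

```python
from math import lgamma, log, pi

def ln_factorial(n):
    """Exact ln n! via the log-gamma function: ln n! = lgamma(n + 1)."""
    return lgamma(n + 1)

def stirling(n):
    """Stirling's approximation, as in (16)."""
    return n * log(n) - n + 0.5 * log(2 * pi * n)

for n in (10, 100, 1000):
    exact, approx = ln_factorial(n), stirling(n)
    print(n, exact, approx, exact - approx)
# the absolute error falls off roughly as 1/(12 n)
```

Already at n = 10 the absolute error is below 10⁻², which is why the large-population approximation (15) makes (17)-(18) accurate.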

Figure 4 demonstrates function calculated for two sets of parameters using exact

expression (7) and approximate formula (17). In thermodynamic limit, is a smooth function of

approximated by positive semi-definite quadratic form of in the vicinity of its

minimum (10):

(19)

Knowing the covariance matrix [20] of multinomial distribution (4) allows reduction of (19) to

diagonal form. The covariance matrix, divided by is:

, where

;

(20)

The rank of is . If is a diagonal form of , the eigenvalues of are :

;

;

;

(21)

For equal probabilities (5),

. I transform to new discrete variables:

;

(22)

, where is matrix with columns as unit eigenvectors of corresponding to eigenvalues (21).

In case of and probabilities (5)

(23)

The eigenvector corresponding to eigenvalue is perpendicular to hyper-plane (1)

defined by in M-dimensional space of coordinates, while vector is

parallel to the hyper-plane. Therefore, in (22). I rewrite (19) in terms of new variables

as

(24)

I call the canonical variables of statistical ensemble, and the canonical state vector.
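The structure claimed in (20)-(23) can be verified numerically for equal probabilities (5): the covariance matrix has one zero eigenvalue, whose eigenvector is normal to the hyperplane of constant total population, and M − 1 equal eigenvalues 1/M. A sketch (numpy variable names are mine):

```python
import numpy as np

M = 4
p = np.full(M, 1.0 / M)                  # equal probabilities (5)
cov = np.diag(p) - np.outer(p, p)        # covariance matrix (20), divided by N

w, V = np.linalg.eigh(cov)               # eigenvalues in ascending order
# one zero eigenvalue; the remaining M-1 equal 1/M for equal probabilities
assert abs(w[0]) < 1e-12
assert np.allclose(w[1:], 1.0 / M)

# the zero-eigenvalue eigenvector is perpendicular to the hyperplane
# sum(n_i) = N, i.e. parallel to the all-ones direction
ones = np.ones(M) / np.sqrt(M)
assert abs(abs(V[:, 0] @ ones) - 1.0) < 1e-9
```

This is consistent with the statement above that the component along the all-ones direction drops out of the canonical variables (22).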

I call parameter the energy of statistical ensemble. If statistical ensemble of microstates is

divided into sub-ensembles of microstates as

(25)

, then, from (22,25), the relation between canonical vector of the larger system and vectors

of subsystems is:

(26)

I may also call the canonical momentum of the system. Relation (26) states the total momentum

of the system equals the sum of momenta of constituent parts. The expressions (22,24) lead to

the following time evolution laws for , with time defined as proper time (1):

;

;

(27)

;

, where designates the expectation value of . The canonical vector constitutes the knowledge

vector in the basis of eigenvectors of . As the basis is associated with an observer, basis vectors

may differ from eigenvectors of . If the basis is obtained from eigenvectors of via an

orthogonal transformation, linear form (26), and quadratic form (24) are preserved. Hence, I state

the conservation of energy law as follows: the energy of the system is conserved under orthogonal

transformations of the basis. In layman’s terms, it means the energy may change from one form

to another (e.g. from potential into kinetic), while total energy of the system is conserved under

such transformation. The conservation of energy law in this form differs from the common one

(Conservation of Energy, Wikipedia) which “… states that the total energy of an isolated system

remains … conserved over time”. From (27) it follows the expectation value of energy is

not conserved over time. However, in scenarios drawn from classical physics, the exponent in (27)

has much longer characteristic timescale than orthogonal transformation of observation basis. This

condition is equivalent to , i.e. to thermodynamic limit case.

Figure 5 demonstrates function calculated for two sets of parameters using

exact expression (9) and approximations (18), and (24). I plotted instead of to show

asymptotic behavior of (9) and (18) in comparison with quadratic form (24). Using (17) and (24)

I obtain multivariate normal approximation [20] to multinomial distribution (4) as

(28)

Figure 6 shows graphs of as a function of calculated for and four

sets of probabilities, using exact formula (4), and multivariate normal approximation (28).

In order to derive pseudo probability density in domain, I note that:

• In thermodynamic limit, the number of distinguishable states of statistical

ensemble having is proportional to the volume of –dimensional sphere of

radius. This statement can be expressed as

(29)

The sum in (29) is over all distinct values which are less than or equal to it. The function

is determined from normalization requirement:

(30)

In order to convert from sums to integrals over continuous variable I define pseudo density

of distinguishable states of statistical ensemble as

(31)

The corresponding pseudo probability density is given by (12). The normalization

requirement for these functions becomes:

(32)

The value is obtained from (9) by having microstate with lowest probability

acquire maximum population: . From (9) as :

(33)

For probabilities (5):

(34)

From (33) as . That allows replacing in the upper limit of integral in (32)

with . I get [20] the expression for function in

(29) as:

(35)

Using (35) and (17) allows rewriting (29) as

(36)

, where

is the volume of –dimensional

sphere of radius .

The number of distinct values of in limit can be estimated from (36) and (14) as

(37)

From (37) one can approximately enumerate distinct energy levels by “quantum number” :

(38)

From (31) the pseudo density of distinguishable states of statistical ensemble is

(39)

I use condition (13) to define effective

value:

(40)

Figure 7 shows calculated from exact expressions (4), (9), and from formula (36).

From (11), (17), and (39), the pseudo probability density function of statistical ensemble in

thermodynamic limit is

(41)

, where is the probability density function of gamma [20] distribution with scale

parameter, and shape parameter

. I calculate moments of:

Mean:

(42)

Variance:

(43)

Moment about the mean:

(44)

The sums in (42-44) are over all combinations of satisfying (1), i.e. over all partitions of N.

Expression (41) allows explicit calculation of all moments of in thermodynamic limit. From (41)

the mean value , the variance, and the third moment are:

(45)

(46)

(47)
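The gamma-distribution moments quoted in (45)-(47) follow from the standard closed forms: mean kθ, variance kθ², and third central moment 2kθ³ for shape k and scale θ. A Monte Carlo sanity check (k and θ here are placeholder values, not the model's parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
k, theta = 2.5, 1.3                     # placeholder shape and scale
x = rng.gamma(shape=k, scale=theta, size=1_000_000)

# closed-form moments of the gamma distribution:
mean = k * theta                         # cf. (45)
var = k * theta**2                       # cf. (46)
third = 2 * k * theta**3                 # third central moment, cf. (47)

assert abs(x.mean() - mean) < 0.02
assert abs(x.var() - var) < 0.05
assert abs(np.mean((x - x.mean())**3) - third) < 0.5
```

The sampling tolerances are generous; with 10⁶ draws the estimators sit well inside them.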

Figure 8 shows calculations of mean value, the variance, and the third moment from the

exact expressions (42-44) for the moments, with (4) as probability mass function. It demonstrates

how these values asymptotically approach thermodynamic limit values (45-47) as, i.e.

as where is the proper time (1).

I shall demonstrate how the presented model correlates with some known constructs. Consider

one-dimensional quantum harmonic oscillator. Its energy levels [21] are given by:

E_k = ħω(k + 1/2);  k = 0, 1, 2, …        (48)

, where ω is the base frequency. Energy levels (48) are equally spaced. In my

model, similar pattern is exhibited by energy levels of statistical ensemble of cardinality ,

as shown on Figure 1. From (18) in thermodynamic limit approximation, the energy of statistical

ensemble can be written as:

(49)

, where

;

;

(50)

From above, the energy levels of statistical ensemble of cardinality are:

(51)

, where are Loeschian numbers (Sloane, OEIS sequence A003136). With (48) and (51), I can

write the comparison table of the first few energy levels of quantum harmonic oscillator in units

of , and of statistical ensemble of cardinality in units

:

quantum harmonic oscillator:
1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39, 41, 43

statistical ensemble of cardinality:
0, 1, 3, 4, 7, 9, 12, 13, 16, 19, 21, 25, 27, 28, 31, 36, 37, 39, 43

, where black boxes designate missing energy levels. In the second row, the energy levels shown in shaded boxes are only realized for modes with ; and energy levels shown in white boxes are realized for modes with . Here is the remainder of division of by .

Consider another classic quantum mechanical example: a particle of mass m in a box of size L. Its energy levels [21] are given by:

E_k = π²ħ²k² / (2mL²);  k = 1, 2, 3, …        (52)

In my model, similar energy spectrum is exhibited by statistical ensemble of cardinality ,

as shown on Figure 2. From (49), the energy levels of statistical ensemble of cardinality ,

in thermodynamic limit approximation, are:

;

(53)

, where

is to be considered as the effective mass of the particle.

(54)

Energy levels (53) with even are only possible when is even, and energy levels with odd are

only possible when is odd. With ½ probability the lowest energy level is , and with

½ probability it is , in units of .
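The parity constraint stated above can be seen by enumerating the modes of a two-microstate ensemble. A sketch, under my assumption (consistent with (22) for M = 2) that the level index is k = |n₁ − n₂|:

```python
def levels_M2(N):
    """Level indices k = |n1 - n2| over all modes (n1, n2) with n1 + n2 = N."""
    return sorted({abs(n1 - (N - n1)) for n1 in range(N + 1)})

# even k occur only for even N, odd k only for odd N:
assert levels_M2(10) == [0, 2, 4, 6, 8, 10]
assert levels_M2(11) == [1, 3, 5, 7, 9, 11]
```

Since N changes by one per time quantum (1), the ensemble alternates between the even-k and odd-k spectra, matching the ½/½ statement about the lowest level.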

3. THERMODYNAMIC ENSEMBLE

The statistical ensemble considered in previous section represents a single copy of

underlying system, with mode uniquely identifying the state. In this section, I consider

observation as a random pick of underlying system from a collection of systems, each represented

by its own statistical ensemble of cardinality . I call such collection of systems thermodynamic

ensemble. I designate the set of modes a system may occupy; the total number of systems,

and the number of systems in mode :

(55)

I designate the probability for a system to be in any mode with total population of

microstates . Then, from (11), the probability for a system to be in a mode is:

(56)

I consider systems in the same mode indistinguishable to the observer. Then the probability mass

function of distribution of modes among systems is

(57)

The objective is to find the most probable distribution . For standalone systems, the most

probable distribution is the one which maximizes (57), i.e.

(58)

Let’s consider systems to be part of some bigger system in a certain state. That imposes conditions

on distribution of modes among systems, so relations (45-47), (58) may no longer hold. I consider

one of the possible conditions and show how it leads to the notion of temperature. Let the state of

the bigger system be such that the average energy of the systems in thermodynamic ensemble is

, which may be different from the average energy of a standalone system given by (45). Then:

(59)

To find the most probable distribution of modes , I shall maximize logarithm of (57) using

method of Lagrange multipliers [22, 23] with conditions (55) and (59):

(60)

From (60), (59), (55) I obtain the following equation involving Lagrange multipliers and :

(61)

, where is digamma function, and and are to be determined by solving (61) for :

(62)

, and by plugging from (62) into (59) and (55). In (62), is the inverse digamma function,

and

. The parameter is commonly known as temperature.

Since the number of systems in mode cannot be negative, expression (62) effectively

limits modes which can be present in most probable distribution to those satisfying

(63)

, where is Euler–Mascheroni constant. Using approximation [24]:

, I rewrite (62) as:

(64)

Presence of the extra term in (64) leads to a computationally horrendous task of calculating the Lagrange multipliers, because the summation in (55) and (59) has to be performed only for modes satisfying (63). I shall leave the exact computation to a separate exercise and take a shortcut by ignoring that term in (64). This approximation is equivalent to Boltzmann’s postulate² [25] that the number of systems in a mode is proportional to the Boltzmann factor. The shortcut allows calculation of the Lagrange multiplier from (55):

, where

(65)

Using expression (39), the partition function in (65) can be evaluated as:

(66)

The equation (59) then becomes

(67)

Eq. (67) is the familiar relation [23] between average per-particle energy and temperature in

-dimensional ideal Maxwell-Boltzmann gas. Thermodynamic entropy can be evaluated

as:

(68)

, where

(69)

With expression (17) for , in thermodynamic limit, I rewrite (68) as

(70)

, where

(71)

² While widely used, this postulate has the rather unphysical consequence that there is a non-zero probability of finding a system in a mode with arbitrarily large energy. Another consequence is the divergence of the partition function for some constructs, e.g. hydrogen electronic levels [59].

To calculate it I have to make an assumption about the distribution. As a possible example, I shall assume the number of microstates for a system in the thermodynamic ensemble is Poisson-distributed around the mean value. Therefore, for the corresponding term, I can use the expression for the entropy of the Poisson distribution [26]:

(72)

I also use the following:

(73)

With (72), (73) I finally obtain:

(74)

In the case of degrees of freedom, the expression (74) turns into the equivalent of the Sackur–Tetrode equation [27] for the entropy of an ideal gas. For the thermodynamic entropy

of a standalone system, instead of (68-74) from (17) and (45) I have:

(75)

Thermodynamic entropy (74) per system in thermodynamic ensemble is larger than entropy (75)

of a standalone system by term (72) plus the temperature-related term. The increase in entropy by

happens because of the spread in values of , i.e. in age of the systems. The increase in entropy

by temperature-related term

is due to the spread in energies of the systems. The non-zero

thermodynamic entropy of a standalone system implies its state is unknown prior to observation,

for each observation. Using (1) I rewrite (75) in terms of proper time as:

, where

(76)

The expression for in (76) was derived in the thermodynamic limit. By comparing the two functions (Figure 9), I see they are fairly close except when the argument is large enough, in which case the thermodynamic limit approximation becomes less valid anyhow. Therefore, I can replace one with the other in (76) and

obtain thermodynamic entropy of a standalone system as:

(77)

The [linear] relation between thermodynamic entropy and proper time (77) is the manifestation

of the Second Law of Thermodynamics (SLT). Previously, SLT has been demonstrated in the

context of time model [13] using numeric calculation of microstate entropy.

The expression for the partition function in (66) has been derived in the thermodynamic limit approximation. It means there must be a large number of energy levels included in the sum (65), i.e. the temperature cannot be too small. Therefore, the expressions (66-67) are only valid for ,

where is the characteristic difference between adjacent energy levels.

For statistical ensemble of cardinality the approximately evenly-spaced energy levels

(Figure 1) allow for more accurate expression for partition function. From (38) the characteristic

difference between energy levels in the limit is:

(78)

Figure 10 shows numeric calculation of the difference between adjacent energy levels

averaged over distinct states of statistical ensemble with the given value of , and .

If , the first 17 energy levels in units of and their degeneracy:

level:       0   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16
energy:      0   3   9  12  21  21  27  36  39  39  48  57  57  63  63  75  81
degeneracy:  1   6   6   6   6   6   6   6   6   6   6   6   6   6   6   6   6

If , the first 16 energy levels and their degeneracy:

level:       1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16
energy:      1   4   7  13  16  19  25  28  31  37  43  49  52  61  64  67
degeneracy:  3   3   6   6   3   6   3   6   6   6   6   9   6   6   3   6

The combined energy levels are thus given by (51). I can use (48) as approximation for the

combined energy levels (51) in expression for partition function (65) with degeneracy of each level

=6, and obtain the mean energy of modes for subsystems with given as [28]:

;

(79)

The relation

is commonly referred to as the average number of photons in a mode [29].

In my model, the notion of a photon is meaningless. The quantized energy levels (9,48,51) make

transitions between modes appear as absorption or emission of particles.
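For equally spaced levels (48) with constant per-level degeneracy, the Boltzmann-weighted mean energy reduces to the Planck form quoted in (79). A numerical sketch (eps is the level spacing and T the temperature, in the same energy units; names are mine):

```python
from math import exp

def mean_energy(eps, T, kmax=10_000):
    """Mean of E_k = eps*(k + 1/2) under Boltzmann weights exp(-E_k / T).

    A constant per-level degeneracy cancels between numerator and denominator.
    """
    w = [exp(-eps * (k + 0.5) / T) for k in range(kmax)]
    return sum(eps * (k + 0.5) * wk for k, wk in enumerate(w)) / sum(w)

eps, T = 1.0, 2.0
closed_form = eps / (exp(eps / T) - 1) + eps / 2   # cf. (79)
assert abs(mean_energy(eps, T) - closed_form) < 1e-9
```

The term eps/(exp(eps/T) − 1) is the geometric-series result conventionally read as the mean occupation number times the level spacing.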

Formula (79) has been obtained using the linear dependence (48) of energy levels on the quantum number. Figure 1 shows approximation (48) holds reasonably well when the level index is not too large. From (14), the linearity (48) breaks down for higher levels. Therefore, the typical black-body spectrum can only be exhibited by relatively low-temperature systems. A good example is the cosmic microwave background. The higher the temperature, the more the spectrum will differ from that of a black body, especially in the high-frequency region, where spectral intensity falls off more steeply than black-body radiation. Such deviation from black-body radiation is already obvious in the solar spectrum.

The zero-point energy term in (79) is the subject of a hundred-year controversy [30, 31]. It leads to an infinite energy density of the field in any volume of space, as there is no upper limit on the mode number in conventional theory. In the presented model, contrary to the conventional theory, the zero-point term cannot contribute more than a bounded amount to the average energy (79), i.e. its contribution is within the standard deviation (46). The problem of infinite zero-point energy [30] does not exist within the context of the model.

4. THE DYNAMICS OF KNOWLEDGE VECTOR

In this section, I discuss the dynamics of the knowledge vector in the context of time model (1). A knowledge vector is defined in an observation basis associated with the measuring device. Of special interest is the knowledge vector as a projection of the canonical vector (22):

(80)

Transformation (80) preserves quadratic form (24), which means the components have the relation to energy familiar from classical mechanics. The orthogonal transformation in (80) represents the measuring device. The state of the measuring device has to reflect the state of the underlying system. Therefore, the transformation depends on the state vector and, possibly, on proper time. Any such matrix can be expressed [32] as the matrix exponential of a real skew-symmetric matrix:

, where

(81)

Any real skew-symmetric matrix can be reduced [32, 33] to a block-diagonal form by

transformation:

(82)

, where

, and are real

(83)

Plugging (82) into (81) I obtain from (80):

(84)

, where

(85)

I re-write (84) as

(86)

, where I introduced the new knowledge vector and the new state vector. The new state vector is the state vector represented in the eigenbasis of the measuring device. The action of the operator

on vector in (86) constitutes transformation by the measuring device. I define eigenspaces of the

measuring device as 2D subspaces formed by pairs of eigenvectors corresponding to eigenvalues

in (83). In eigenbasis, the action of the measuring device is reduced to rotations in 2D

orthogonal eigenspaces, as evident from (85).
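The claims in (81)-(85) can be checked numerically: the matrix exponential of a real skew-symmetric matrix is orthogonal, and a 2×2 skew-symmetric block exponentiates to a plane rotation. A sketch using a truncated Taylor series for the exponential (all names mine):

```python
import numpy as np

def expm(A, terms=60):
    """Matrix exponential via truncated Taylor series (fine for small norms)."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

rng = np.random.default_rng(1)
B = rng.normal(size=(4, 4))
A = B - B.T                          # real skew-symmetric, as in (81)
Q = expm(A)

# the exponential of a skew-symmetric matrix is orthogonal: Q^T Q = I
assert np.allclose(Q.T @ Q, np.eye(4), atol=1e-10)

# a 2x2 skew-symmetric block exponentiates to a plane rotation, as in (85)
theta = 0.7
R = expm(np.array([[0.0, -theta], [theta, 0.0]]))
assert np.allclose(R, [[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
```

This is why, in the eigenbasis, the device action decomposes into independent rotations of the 2D eigenspaces.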

As I mentioned earlier, the rank of the matrices and of the vectors is . Therefore, transformation (86) takes an especially simple form in the case of :

, and

(87)

When the transformation is trivial: if then ; if then .

Transformation (87) can be expressed in complex notation using real components :

, where

, and

(88)

In the case of , one can convert to complex notation by combining pairs of real components corresponding to the 2D eigenspaces in (85) into complex numbers, as in (88). In the case of even , there will be one real-only component left untransformed. In complex notation, (85)

can be written as a Lie group unitary matrix :

for even :

, where

(89)

, and for odd

(90)

Transformation (86) can now be expressed in complex notation as:

(91)

In complex notation, matrix (85) takes diagonal form and becomes

matrix (89-90). Operating with unitary matrices in diagonal form is easier than with orthogonal

matrix (85). The complex notation is a technique to make some math easier to handle, not to create

new physics out of thin air.

Consider a scenario where the eigenspaces of the measuring device do not depend on time or on the state vector of the underlying system, i.e. in (86). That is a valid expectation if the measuring device is to provide consistent results. It means, e.g., the orientation of a polarizer does

not depend on time, or on the polarization of incident light. Then, from (27),

.

Assuming analytic :

(92)

, where, for even :

(93)

, and for odd :

(94)

, where

(95)

If eigenspace component , the underlying system is in a state characterized by the

symmetry with respect to rotations within 2D eigenspace of the measuring device. Therefore,

the eigenspace component will not rotate if , i.e. null vector has no phase. Thus,

. In the vicinity of, is approximated by a positive quadratic form

on . The only such form is the eigenspace component of energy:

, where

, and

(96)

, and is a constant of proportionality which I’m tempted to call Planck’s constant. No model is

complete if it contains underived physical constants. To obtain an expression for , I note that in

a case of and , there are canonical vectors possible, from (22):

(97)

The phase difference between these vectors is

. Thus, for underlying system with and

, the proper time (1) increment corresponds to a phase increment

. The energy (9) of underlying system with and is

.

Thus, in proper time scale of the system with , the value of the Planck’s constant is:

;

(98)

A classical measuring device, such as a wall clock, is coupled to the environment via various interactions: electromagnetic, gravitational, etc. Effectively, it is part of the whole universe. Its proper time is the universe’s proper time, defined by the total population number (1) of microstates in the statistical ensemble representing the universe. In the universe’s proper-time scale, (98) becomes:

;

(99)

From (99) it follows that Planck’s constant ought to decrease with time. Although, given the number involved is likely to be very large, the decrease might not be detectable.

Since the time increments also decrease, the product stays the same. The decrease in the value of Planck’s constant is negated by the slowing of time. It is not clear if such a decrease can be detected at all, given that all measurements of Planck’s constant [34] are, in effect, measurements of a product. The dimensionless value (99) of Planck’s constant does not by itself determine the observable timescales, which are to be determined from empirical evidence.

Unlike (91), the equation (92) with (93-95) contains a reference only to the knowledge vector, and no reference to the state vector of the underlying system. The linearity of (92) means the expected

past and the future of knowledge vector are unambiguously defined in the present. The expectation

that the current state contains information about the past gives rise to the concept of memory. The

past or the future are the knowledge vectors defined in the present and related by transformation:

(100)

The measurement device performing transformation (100) maintains phase relationship

(coherence) between knowledge vectors. I call it a quantum device.

Equation (92) is similar to the Schrödinger equation with H-matrix , where are the characteristic frequencies. The notable difference is that (92) incorporates the measurement apparatus, as it describes the expected evolution of the knowledge vector as rotations within the 2D eigenspaces of the measuring device. The Schrödinger equation describes the evolution of the wave function as if it existed as some sort of physical reality outside of any measurement apparatus.

Schrödinger himself believed the wave function to be a physical reality, a density wave [35]. It is rather common in the physics community [36] and beyond (see e.g. Tangled up in Entanglement by L.M. Krauss and the response by D. Chopra) to assume that the evolution of the wave function according to the Schrödinger equation is independent of the observer, i.e. independent of representation.

The expectation of the equivalence of different representations is born of the assumption of realism, i.e. of the observed system existing and possessing properties independently of the observer. If different representations are not equivalent, physicists are confronted with the choice of a preferred basis [14]. The assumption of realism has led to a number of theories alternative to the Copenhagen Interpretation, such as many worlds [37] and pilot wave [38]. The Copenhagen Interpretation maintains that the underlying system does not have definite properties prior to being measured. In different representations, objects may demonstrate contradictory behavior, as in interference experiments [15], where objects behave either as waves or as classical particles depending on the configuration of the measuring apparatus. If the device maintains the phase relationship between knowledge vectors, the observed object exhibits wave-like behavior. Without the phase relationship between knowledge vectors, classical behavior is observed, as I show below. The corresponding experimental setups provide examples of [unitarily] non-equivalent representations. It is trivial to prove that there can be no unitary transformation from quantum to classical behavior. Consider a system in a pure quantum state. Its density matrix ρ satisfies ρ² = ρ. If a unitary transformation U, such as the time propagator, is applied to ρ, then the density matrix in the new basis is ρ′ = UρU†. It is easy to see that ρ′² = ρ′, i.e. the system remains in a pure quantum state. This also holds in the case of a time-dependent U, i.e. no [coherent] coupling of external fields can force a pure quantum system to transform into a classical, or mixed, state. The existence of non-equivalent representations has been proven by Haag [39], but its significance is still not fully appreciated.
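The purity argument above can be verified numerically. The sketch below is my own illustration, not part of the paper's derivation; the 4-dimensional space, the random state, and the random unitary are arbitrary choices. It checks that ρ² = ρ survives any unitary change of basis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random normalized state vector |psi> in a 4-dimensional space
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())          # pure-state density matrix, rho^2 = rho

# Random unitary U from the QR decomposition of a complex Gaussian matrix
q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
rho_new = q @ rho @ q.conj().T           # density matrix in the new basis

print(np.allclose(rho @ rho, rho))              # pure before the transformation
print(np.allclose(rho_new @ rho_new, rho_new))  # still pure after it
```

The second check succeeds for any unitary q, since ρ′² = UρU†UρU† = Uρ²U† = ρ′.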

The measurement result is [usually] a finite scalar value. It is natural to assume it is an analytic scalar function of the knowledge vector, with a minimum at = 0. Due to the above-mentioned symmetry of the state, in the vicinity of , is approximated by a positive semi-definite quadratic form on , diagonal in the device eigenbasis:

(101)

, where index spans the eigenspaces of the measuring device; is the operator matrix of the observable; are the eigenvalues of ; are the eigenspace components of energy (96); are device calibration constants. For any observable, there exists a device eigenbasis in which the operator matrix of the observable is diagonal. In real-number notation, matrix must be positive semi-definite in order for the observed values to be real non-negative numbers. In complex notation (91), the eigenvalues of come in complex-conjugate pairs; in this case, in (101) is the real part of the eigenvalue. In the canonical basis (22), the device is defined by a symmetric matrix , where matrix defines the eigenspaces, as in (85), and diagonal matrix defines the eigenvalues.

The quadratic-form approximation is invalid at higher energies. However, the rightmost side of (101) is expressed in terms of energy eigenspace components, not as a quadratic form on the knowledge vector. It is conceivable that the rightmost side of (101) is not an approximation: it may be valid over the whole range of energy values, including the region of higher energies where the linear dependence of on the quantum number breaks down (Figure 1).

If the operator matrix of observable is diagonal in the eigenbasis of device , the operator matrix of observable is diagonal in the eigenbasis of device , and devices and have different eigenspaces, then the matrices and do not commute. Hence, the generalized uncertainty principle [21] applies if observables and are measured in the eigenbasis of some third device :

(102)
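The premise can be checked numerically. The sketch below is my own construction, not the paper's; the dimension and the random matrices are arbitrary. It builds two observables diagonal in different eigenbases, confirms they do not commute, and verifies the standard Robertson form of the generalized uncertainty relation, σ_A σ_B ≥ |⟨[A,B]⟩|/2, for a random state:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# A is diagonal in the computational basis; B is diagonal in a rotated basis.
A = np.diag(rng.normal(size=n))
q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
B = q @ np.diag(rng.normal(size=n)) @ q.conj().T

# Different eigenbases -> the commutator [A, B] is nonzero
comm = A @ B - B @ A
print(np.allclose(comm, 0))   # False: A and B do not commute

# Random normalized state in which both observables are measured
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)

def var(M, psi):
    """Variance <M^2> - <M>^2 of a Hermitian observable M in state psi."""
    mean = np.vdot(psi, M @ psi).real
    return np.vdot(psi, M @ M @ psi).real - mean**2

lhs = np.sqrt(var(A, psi) * var(B, psi))        # sigma_A * sigma_B
rhs = 0.5 * abs(np.vdot(psi, comm @ psi))       # |<[A,B]>| / 2
print(lhs >= rhs)                               # uncertainty bound holds
```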

The canonical state vector can be expressed as a sum (26) of canonical state vectors. A similar decomposition of the knowledge vector can be used as input to the quadratic form (101):

(103)

, where are the eigenspace phases (89) of vectors ; ; . The decomposition is selected from the possible decompositions of by the measuring device, through entanglement with the underlying system. The device acts as a filter, entangling only with modes which match the device modes. In the device eigenbasis, such modes are the knowledge vectors . I will also call them medium oscillators.

Depending on the context, the knowledge vectors in (103) may bear different meanings. In a time-correlation measurement, vector is viewed as the state vector before the transition, and vector as the state vector after the transition. In the case of spatial correlation, vector may be viewed as observer Alice, and vector as observer Bob. Expression (103) simplifies for , with :

, where

(104)

The variance of the measurement scalar (104) is:

(105)

Signal (104) does not include transitions to or from . Therefore, it is impossible to harvest zero-point energy or detect a zero-energy state with a scalar measurement. A narrow-band device would have , where is the number of knowledge vectors in the superposition . If a single transition does not significantly change the energy of a given mode, I can also assume , with . These assumptions are equivalent to the thermodynamic-limit approximation. I then rewrite (104-105) as:

(106)

(107)

Given ; , I evaluate in (106) via a linear expansion near the initial values ; :

(108)

In the case of temporal correlations, ; . With (92-96), I rewrite (108) in the vicinity of the initial state as:

(109)

, where is the transition time and is the complex gradient of the scalar phase:

, where

(110)

For a device to detect a transition , signal must vary by more than its standard deviation . The detection fidelity is characterized by the change from the statistical mean, expressed as a number of standard deviations. In [40] the half-life threshold was used: the decrease of the signal by half from its initial value . For the superposition in (106-107), it corresponds to . The detection fidelity in this case is , i.e. the probability that the change was due to the transition is 84%, and the probability that it happened due to statistical error is 16%.
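The 84%/16% split is the cumulative probability of a normal distribution at one standard deviation, Φ(1) ≈ 0.8413. A quick check via the standard normal CDF expressed through the error function (no paper-specific quantities involved):

```python
import math

# Standard normal CDF at x = 1 standard deviation:
# Phi(1) = (1 + erf(1/sqrt(2))) / 2
fidelity = 0.5 * (1 + math.erf(1 / math.sqrt(2)))
print(round(fidelity, 4))          # 0.8413 -> 84% detection fidelity
print(round(1 - fidelity, 4))      # 0.1587 -> ~16% statistical-error probability
```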

For ; ; from (106-107):

(111)

(112)

Signal (111) decreases by half from its initial value = , when . Therefore, the half-life detection threshold satisfies the energy-time uncertainty relation in the form of the Mandelstam-Tamm bound [41]:

(113)

I call the superposition of knowledge vectors coherent within the interval , if the difference is an analytic function of in that interval. I call the transition rate. For a coherent superposition of knowledge vectors, with , the transition rate . This result is referred to as the Zeno paradox [42]. It is the characteristic feature of measurement with a quantum device.

The non-zero transition rate arises from the de-coherence of knowledge vectors through random phase dispersion, caused by various mechanisms, e.g.:

1. Rayleigh scattering [43, 44]

2. Brownian motion [45, 46]

3. Dispersive media [47, 48]

4. Recombination of electron-hole pairs in semiconductors [49]

A transition changes energy (48) by with equal probability in either direction. The case of , where , is equivalent to consecutive transitions in the same direction. I call a device which undergoes multiple random transitions within any given interval a classical device. In time the phases of the knowledge vectors in (106) undergo a number of positive and negative increments with equal probability. The resultant increments are binomially distributed, with mean and variance . Here is the mean free time between transitions, i.e. the de-coherence time. Between transitions, the phase difference changes according to (109). This gives rise to the variance in phase and to the variance in phase difference .

Figure 11 shows a numeric calculation of (106) with binomially distributed phases . The calculation established the following:

(114)

The exponential decay (114) is the characteristic feature of a classical process. From (114), the transition rate in the limit is:

(115)
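The exponential decay of the ensemble-averaged signal under binomially distributed phases can be reproduced with a toy Monte Carlo. This is my own sketch, not the paper's Figure 11 calculation; the ensemble size, kick magnitude, and step count are arbitrary. Phases accumulating random ± increments are binomially distributed, and the magnitude of the averaged phase factor decays as exp(−variance/2):

```python
import numpy as np

rng = np.random.default_rng(2)
n_osc = 50_000        # ensemble of medium-oscillator phases
delta = 0.05          # phase kick per transition [rad], assumed small
n_steps = 800         # number of transitions (time steps)

phases = np.zeros(n_osc)
signal = []
for _ in range(n_steps):
    # each oscillator phase gets a random +delta or -delta increment
    phases += delta * rng.choice([-1.0, 1.0], size=n_osc)
    signal.append(abs(np.exp(1j * phases).mean()))

signal = np.array(signal)
k = np.arange(1, n_steps + 1)
# after k steps the phase variance is k*delta^2, so the coherent signal
# decays as exp(-variance/2) -- the exponential law of a classical process
predicted = np.exp(-0.5 * delta**2 * k)
print(np.max(np.abs(signal - predicted)) < 0.02)   # close agreement
```

The exact ensemble average is (cos δ)^k, which for small δ coincides with exp(−kδ²/2), matching the exponential form (114).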

The half-life threshold (113) in this case changes to:

, given

(116)

From (116), , where is the object's decay time. Expression (116) establishes a relation between de-coherence and decay times. Quantum de-coherence has received extensive coverage in recent decades. A link between de-coherence and decay has been suggested [50]. De-coherence between knowledge vectors is not unlike the spontaneous collapse of the wave function, the subject of a number of collapse theories [51].

Consider a photodetector measuring the intensity of incident radiation. A knowledge vector (a medium oscillator) is formed when a set of elements (e.g. electron-hole pairs) on the surface of the photodetector entangle through some medium, e.g. through the electromagnetic field. An analog of such entanglement is a Cooper pair in a superconductor, mediated by the phonon interaction. The surface area which encloses a set of elements in an entangled state is limited by the coherence radius , where is the speed of light and is the refractive index of the material. If is the number of entangled elements per unit area of the detector, is the dimensionless scattering rate, and is the scattering frequency within the [rad/s] spectral width, then the de-coherence time is evaluated as:

(117)

Combining (117) with (115), I obtain the transition rate:

(118)

In equilibrium, the loss of a number of oscillators in a particular mode is compensated by the radiation-stimulated induction of the same number of oscillators into the mode. The energy-balance equation is:

(119)

, where is the spectral radiance of the incident radiation; is the efficiency of conversion of the incident radiation into oscillator energy; and is the number of medium oscillators per unit surface area of the detector. I have to subtract the zero-point energy term from the ensemble-average energy in (119), because an oscillator cannot lose energy in the ground state. From (79,119):

(120)

The term in square brackets can be considered as pertaining to the incident radiation, and the parameters outside the brackets as properties of the detector. Then (120) can be split into a formula for the spectral radiance and a formula for the detector efficiency:

(121)

(122)

, where can be interpreted as the number of entangled elements making up one oscillator. Planck's formula for the spectral energy density readily follows from (121):

(123)
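In the conventional notation (an assumption on my part, since the paper's own symbols in (121-123) are not reproduced here), Planck's spectral energy density per unit angular frequency reads u(ω) = ħω³/(π²c³) · 1/(e^{ħω/kT} − 1). A minimal numerical sanity check of this standard form:

```python
import math

# CODATA constants (conventional values, not from the paper)
hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s
kB = 1.380649e-23        # J/K

def planck_u(w, T):
    """Planck spectral energy density [J*s/m^3] per unit angular frequency w [rad/s]."""
    return hbar * w**3 / (math.pi**2 * c**3) / math.expm1(hbar * w / (kB * T))

# Wien-like check: the per-angular-frequency spectrum x^3/(e^x - 1)
# peaks near x = hbar*w/(kB*T) ~ 2.821
T = 5772.0                       # roughly the solar surface temperature
w_max = 2.821 * kB * T / hbar
print(planck_u(w_max, T) > planck_u(0.5 * w_max, T))   # True: peak beats lower w
print(planck_u(w_max, T) > planck_u(2.0 * w_max, T))   # True: peak beats higher w
```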

5. DISCUSSION

The presented model bears familiar hallmarks of quantum physics. Consider, e.g., wave-particle duality. The particle properties result from the discreteness of the probability mass function (4), of the energy spectrum (Table 1), of the proper time (1), and of the measurement scalar (101,103,106). The discrete values of the measurement scalar may be associated with observable states of the underlying system. The choice of observation basis (i.e. the experimental setup) can make quantum leaps between observable values look like the emission and absorption of particles.

Wave properties result from the superposition of knowledge vectors in the measurement scalar (103,106). The superposition should rather be called a decomposition, as the knowledge vector of the underlying system is decomposed into a sum of eigenvectors of the measuring device. For the simple case , the expression for the measurement scalar (111) is identical to the intensity distribution of the interference pattern in a double-slit experiment.

Consider another QM hallmark: the violation of Bell's inequalities [52]. The violation of Bell's inequalities can be understood without invoking the concept of wave-function collapse. In a typical experiment [53, 54], two entangled particles represent the same underlying system, which is observed via two spatially separated devices and . Observer Alice is attached to device , and observer Bob to device . If Alice and Bob did not communicate via a conventional channel, neither of them would know the other's result. The statistical correlation can only be detected when the knowledge vectors and converge to form the resultant observation (111). The target of experiments on the violation of Bell's inequalities is the function in (111). This shows that the double-slit experiment is about as good an experiment on the violation of Bell's inequalities as any other. Confusion of statistical correlation with causality in this context has led some minds to bewilderment about spooky action at a distance [55].

Consider the concept of measurement in conventional QM theory. If a measurement has been performed at time and the result is , the expectation value at time is given by:

, where ; is the Hamiltonian,

(124)

, and is the state of the system at . Is the system considered closed or open? Conventional theory would imply the system is closed, as only a closed system can be described by a state vector. If the system is closed, it has to be in an energy eigenstate. If is also an energy eigenstate, then from (124), , i.e. a closed system ought to be static. The conventional theory handles this paradox by considering the system quasi-closed, i.e. initially described by a state vector but with an -matrix having off-diagonal terms. Then is not a true Hamiltonian of the system but a so-called interaction Hamiltonian, and is not an eigenstate of .

According to the Heisenberg picture of the QM formalism, the measurement is obtained from the measurement via a unitary transformation of the observation basis. Since the result of the measurement at is one of the eigenvalues of operator , the state of the system at has to be one of the eigenstates of . Attempts to understand this fact have needlessly led the Copenhagen School to the concept of wave-function collapse. If are the eigenvectors of corresponding to eigenvalues , then

(125)

, where I have converted to the eigenbasis of the -matrix. This allows rewriting (125) as

(126)

, where are the eigenvalues of . The coefficients are all real because their transpose in indices equals their adjoint. Hence, I rewrite (126) as:

(127)

From (127) it is clear that [42, 56], which can also be demonstrated by expanding (127) in a power series in near :

(128)

, where are the commutator brackets. If the system was in state at , then the probability of finding it in state at is:

, where

(129)

From (127-129) it follows that ; , leading to what is perceived as the quantum Zeno effect.
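The quantum Zeno effect can be demonstrated with a small numerical sketch. This is my own construction, assuming ħ = 1 and a random Hermitian H-matrix rather than the paper's specific operators: the short-time survival probability is quadratic in t, so N frequent projective measurements over a fixed interval keep the system in its initial state with probability approaching 1:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4

# Random Hermitian "H-matrix" (hbar = 1); evolution via its eigendecomposition
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (M + M.conj().T) / 2
evals, V = np.linalg.eigh(H)

def survival(psi, t):
    """Survival probability |<psi| exp(-i H t) |psi>|^2."""
    amp = np.vdot(psi, V @ (np.exp(-1j * evals * t) * (V.conj().T @ psi)))
    return abs(amp) ** 2

# Random normalized initial state
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)

T = 1.0
for N in (1, 10, 100, 1000):
    # N equally spaced projective measurements over [0, T]: the probability
    # of staying in |psi> the whole time is survival(T/N) ** N
    p_stay = survival(psi, T / N) ** N
    print(N, round(p_stay, 4))
# p_stay grows toward 1 with N: frequent observation freezes the evolution
```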

The eigenvalues of the -matrix are not the true energy levels of the system, because the -matrix is not a true Hamiltonian. They can also be defined only up to an arbitrary constant, as only the difference matters for the dynamics of the system. The subtraction of a constant from the eigenvalues of the -matrix in (127-129) does not change the expectation value . This technique is called re-normalization. Re-normalization [30] is used in conventional theory to "resolve" various ultraviolet catastrophes, including the problem of infinite zero-point energy density resulting from the term in (79). The problem with that approach is that the term in (48,79) already refers to the difference between energy levels; therefore, renormalization cannot possibly help here. Another problem with conventional theory is that it does not comply with the Second Law of Thermodynamics (SLT), as von Neumann's entropy [57] is invariant under unitary transformations. The SLT trumps any other law in physics, so any theory or model which is not compliant with the SLT is faulty.

In the presented model, the canonical state vector (22) of the underlying system has an associated value of time (1). The multiple possible histories of the underlying system are represented by sequences of statistical ensembles arranged by the time-progression rule (2,3). The progression rule (2,3) results in an increase of entropy (77) with time. In the thermodynamic limit, the time progression of the underlying system is approximated by a rather featureless exponential decline (27). The observational diversity is rooted not in the dynamics of the underlying system, but in the dynamics (91-92) of the observation basis.

This work has been motivated, in part, by:

• a perception of a common mechanism which might account for similar traits in vastly different

entities

• an apprehension that modern science mostly operates within the confines of a primitive realism, which presents the world as a collection of solids of various shapes, liquids, gases, and progressively more exotic objects, all the way to the not-quite-defined notions of dark energy and dark matter, purported to make up 95% of "reality", with all those things existing somewhere "out there" beyond the tips of our noses

• a sense of contradiction between the assumption of causal relationships which modern physics strives to establish, and the probabilistic nature of observation outcomes, implying there is no such thing as causality, just correlation