Content uploaded by Sergei Viznyuk on Jan 14, 2020.
From QM to KM
Sergei Viznyuk
Abstract
Understanding of physical reality is rooted in the knowledge obtained from
observations. The knowledge is encoded in a variety of forms, from sequences of letters
in a book, to neural circuits in a brain. At the core, any encoded knowledge is a sample of
correlated events (symbols). I show that event samples bear attributes of physical
reality: energy, temperature, momentum, mass. I show that treating measurement as
event sampling is consistent with predictions of quantum mechanics (QM). I discuss
QM basics: wave function, Born rule, and Schrödinger equation, emphasizing their true
meaning, which is rarely, if ever, mentioned in textbooks. I derive similar expressions
using the event sample as the base construct, demonstrating the connection between QM and
the presented model. I explain the mechanics of observation, and the role of the observer. I
show how the model extends to include dispersion, decoherence, and the transition from quantum
to classical state. I prove that decoherence is a key factor in Fermi's golden rule, in Planck's
radiation law, and in the emergence of time. The controversial aspects of QM, such as wave
function collapse and the measurement problem, do not appear in the presented framework,
which I call knowledge mechanics (KM).
As for prophecies, they will pass away; as for tongues,
they will cease; as for knowledge, it will pass away.
1 Corinthians 13:8
1. PREAMBLE
Physical properties, such as temperature, energy, entropy, pressure, and phenomena such as
Bose-Einstein condensation are exhibited not just by "real" physical systems, but also by virtual
entities such as binary or character strings [1, 2, 3], the world wide web [4], business and citation
networks [5], and the economy [6, 7]. Quantum mechanical behavior has been observed in objects as
different as electrons, electromagnetic waves, and nanomechanical oscillators [8]. There must be a
mechanism which accounts for the grand commonality in the observed behavior of vastly different
entities. Scientists have recently discovered that various complex systems have an underlying
architecture governed by shared organizing principles [9]. "…present-day quantum mechanics
is a limiting case of some more unified scheme… Such a theory would have to provide, as an
appropriate limit, something equivalent to a unitarily evolving state vector" [10].
There are two factors present in all theories. One is the all-pervading time, and the other is the
observer's mind. A successful grand commonality model must explain the nature of time, specify
the mechanism by which physical reality projects onto the mind of the observer, and relate time to that
projection. Conventional theory routinely manipulates notions such as energy, distance, and electric
charge without providing their definitions, treating them as God-given things. A self-contained
model must define any notion it operates with. The definition must be confined within the model.
A complete theory may not contain underived fundamental physical constants. Conventional
QM provides accurate predictions, yet without a clear model of the system [11]. This work describes
a model wherein the state vector formalism for correlated event samples is similar to the mathematical
apparatus of conventional QM. I expound some notions left obscure by the conventional theory,
e.g. the notions of time, of observer, of measurement.
In this preamble I outline the model in defined terms. In Section 2 I develop the concepts of
energy, of knowledge, and of knowledge confidence. I calculate energy spectra for some event
samples and show their similarity to known QM constructs: the particle in a box, and the quantum
oscillator. In Section 3, I consider an ensemble of uncorrelated particles. I derive the notion of
temperature, and the First Law of Thermodynamics. In Section 4, I discuss the conventional QM
approach to measurement. I apply the state vector formalism to an event sample to obtain the Born rule
and the Schrödinger equation. I state the equivalence of the Born rule and the coefficient of determination. I
extend the formalism to include numeric methods of handling dispersion and decoherence. I show that
decoherence is a key factor in Fermi's golden rule, and in Planck's radiation formula. I show how
decoherence leads to the Second Law of Thermodynamics and to the emergence of time.
A description of quantum-level physical reality (PR) is attained as a sample of eigenstates
, where is the set of eigenstates the PR can be observed in. In information terms, the eigenstate
is an event; is the number of occurrences of the -th event in the sample. The acquisition of a sample
is measurement. The sample is encoded in an information-holding construct, which I call the
observer. An event is elementary if it is not a direct product of other events. In some contexts I call
an event a particle. Particles are said to be entangled if they are parts of a non-elementary event.
An event sample describes a quantum object if events sampled in different observation bases
are correlated. An ensemble of uncorrelated particles represents a classical object.
I call a variation of the observation basis a transformation. A transformation is correlative with
a change in macroscopic parameters, notably time, position, etc. I call a change of eigenstate
population numbers , associated with a transformation, a transition.
The quantitative measure of information (knowledge) about an object is obtained from the event
sample , as the difference between the entropy of equilibrium (the maximum-probability state) and the
entropy of the statistical ensemble . The calculation is based on the understanding of entropy as a
measure of missing information (i.e. the amount of unknown). The sum of knowledge (i.e. the
amount of known) and the amount of unknown equals the entropy of equilibrium.
All graphs have been pushed to the end of the paper to keep the text focused. This paper
is a substantial re-write of its predecessor [12], most significantly in Section 4.
2. ENERGY
The unconditional probability of an event sample is given by the multinomial probability
mass function (pmf):
(1)
, where is the probability of sampling eigenstate from set . The elementary eigenstates would
have equal probability:
(2)
, where is the cardinality of set . I introduce functions as follows:
(3)
(4)
(5)
(6)
(7)
, where is the gamma function;
(8)
Here is Shannon’s [13] unit entropy, and is the entropy of equilibrium.
With (4-6), I rewrite (1) as
(9)
From (9), the probability of an event sample , among all samples of the same size , is
determined solely by the value of . If I am to use as a single independent variable,
I can write (9) in the domain as:
(10)
Here is the multiplicity (degeneracy) of the given value¹, i.e. the number of ways
the same value of is realized by different samples with given parameters. There is no
analytic expression for ; however, it is numerically computable. Table 1 contains ,
values calculated for several sets of parameters. Figures 1-2 show distinct
values of in increasing order for several values of parameter and probabilities (2) calculated
from (6), using algorithm [14] for finding partitions of integer into parts [15]. The
sum of over all distinct values of is the total number of distinct samples. It is equal
to the number of ways to distribute indistinguishable balls into distinguishable cells:
(11)
, where the sum is over all distinct values of . Figure 3 shows the total number of distinct
event samples, and the total number of distinct values, as functions of for two sets of
probabilities (2), calculated from (11) and (6) using algorithm [14]. The graphs demonstrate that:
• For probabilities (2), the average degeneracy of levels
This statement can be expressed as:
(12)
Here the sum represents the number of distinct values of for the given parameters.
can exceed in some cases, e.g. because ;
another example is .
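The count (11) is the stars-and-bars formula, and the degeneracy structure can be enumerated by brute force for small parameters; a sketch (helper names are mine):

```python
from math import comb, factorial

def samples(N, M):
    """All event samples (n_1..n_M) with sum N: compositions of N into M parts."""
    if M == 1:
        yield (N,)
        return
    for n1 in range(N + 1):
        for rest in samples(N - n1, M - 1):
            yield (n1,) + rest

def multinomial_coeff(sample):
    c = factorial(sum(sample))
    for n in sample:
        c //= factorial(n)
    return c

N, M = 6, 3
coeffs = [multinomial_coeff(s) for s in samples(N, M)]
total = len(coeffs)            # total number of distinct samples
distinct = len(set(coeffs))    # number of distinct multinomial-coefficient values
# stars and bars, eq. (11): N indistinguishable balls into M distinguishable cells
print(total, comb(N + M - 1, M - 1), distinct)
```

For N = 6, M = 3 there are 28 samples but only 7 distinct coefficient values, illustrating the degeneracy of levels.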
As is not a smooth function of (see Table 1), there can be no true probability
density in the domain. I shall derive a pseudo probability density to be used in expressions involving
integration by in the thermodynamic limit. To be able to use analytical math, I have to extend (4-9)
from discrete variables to the continuous domain. I define:
• Thermodynamic limit: the approximation of large occupation numbers:
(13)
¹ In case of a sample with event probabilities (2), the multiplicity of is the multiplicity of the value of the multinomial
coefficient in (1) [41]
In the thermodynamic limit, I can use Stirling's approximation for factorials
(14)
It allows rewriting of (4, 6), for probabilities (2), as
(15)
(16)
Expression (16) echoes the electron correlation energy proposed in [16].
Figures 4,5 demonstrate functions and calculated for two sets of parameters
using exact expressions (4), (6), and approximations (15), (16).
In the thermodynamic limit, is a smooth function of , approximated by a positive semi-definite
quadratic form of in the vicinity of its minimum (7):
(17)
Knowing the covariance matrix [17] of the multinomial distribution (1) allows reduction of (17) to
diagonal form. The covariance matrix, divided by , is:
, where
;
(18)
The rank of is . If is a diagonal form of , the eigenvalues of are :
;
;
;
(19)
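The structure of (18)-(19) can be verified without an eigensolver, assuming the standard multinomial covariance C_ij = p_i(δ_ij - p_j): the all-ones direction (fixed by the constraint that occupation numbers sum to N) is annihilated, so the rank is M-1, and for equal probabilities (2) every orthogonal direction carries the same eigenvalue 1/M:

```python
M = 4
p = [1.0 / M] * M
# covariance matrix of the multinomial distribution, divided by N: C_ij = p_i(d_ij - p_j)
C = [[p[i] * ((1.0 if i == j else 0.0) - p[j]) for j in range(M)] for i in range(M)]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

ones = [1.0] * M              # direction fixed by the constraint sum(n_i) = N
zero = matvec(C, ones)        # should vanish: rank of C is M - 1
v = [1.0, -1.0, 0.0, 0.0]     # any vector orthogonal to ones
Cv = matvec(C, v)             # should equal v / M
print(zero, Cv)
```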
For equal probabilities (2),
. I transform to new discrete variables:
;
(20)
, where is the matrix whose columns are unit eigenvectors of corresponding to eigenvalues (19).
In case of and probabilities (2)
(21)
The eigenvector corresponding to eigenvalue is perpendicular to the hyper-plane defined
by in the M-dimensional space of coordinates, while vector is
parallel to the hyper-plane. Therefore, in (20). I rewrite (17) in terms of the new variables
as:
(22)
I call the canonical variables of the sample, and the canonical momentum. I call
parameter the energy. plays the role of mass. From (5, 6), for probabilities (2), it follows:
(23)
, where
(24)
Hence, for elementary eigenstates, the energy equals the difference between the entropy of
equilibrium and the entropy of the sample, i.e. energy equals knowledge [about the object
state]. As entropies (5), (24) are in units of nats, so is the energy (6, 16, 22, 23).
Figure 5 demonstrates function calculated for two sets of parameters using exact
expression (6) and approximations (16) and (22). I plotted instead of to show the asymptotic
behavior of (6) and (16) in comparison with the quadratic form (22). Using (9, 15, 22) I obtain the
multivariate normal approximation [17] to the multinomial distribution (1) as
(25)
Figure 6 shows graphs of as a function of , calculated for and four
sets of probabilities, using the exact formula (1) and the multivariate normal approximation (25).
In order to derive the pseudo probability density in the domain, I note that:
• In the thermodynamic limit, the number of distinct event samples having is
proportional to the volume of the –dimensional sphere of radius . This
statement can be expressed as
(26)
The sum in (26) is over all distinct values of which are less than or equal to . The function
is determined from the normalization requirement:
(27)
In order to convert from sums to integrals over the continuous variable , I define the pseudo density
of object states as
(28)
The corresponding pseudo probability density is given by (10). The normalization
requirement for these functions becomes:
(29)
The value is obtained from (6) by having the event with the lowest probability
acquire maximum population: . From (6), as :
(30)
For probabilities (2):
(31)
From (31), as . That allows replacing in the upper limit of the integral in (29)
with . The expression for function in (26) is [17]:
(32)
Using (32, 15) I write (26) as
(33)
, where is the volume of the –dimensional
sphere of radius .
The number of distinct values of in the limit can be estimated from (33, 12) as
(34)
From (34), I can approximately enumerate distinct energy levels by “quantum number” :
(35)
From (28, 33) the pseudo density of object states is:
(36)
I use condition (11) to define the effective
value:
(37)
Figure 7 shows calculated from expressions (1, 6), and from formula (33). From (10,
15, 36), the pseudo probability density function (pdf) of event samples in the thermodynamic limit is
(38)
, where is the pdf of the gamma distribution [17] with scale parameter and shape
parameter . I calculate the moments of in equilibrium:
Mean:
(39)
Variance:
(40)
Third moment about mean:
(41)
The sums in (39-41) are over all partitions of N. Expression (38) allows explicit calculation of all
moments of in the thermodynamic limit. From (38), the mean value , the variance , and the
third moment , in equilibrium, are:
(42)
(43)
(44)
Figure 8 shows calculations of , , and from expressions (39-41) for the moments, with
probability mass function (1). It demonstrates how these values asymptotically approach
the thermodynamic limit values (42-44) as .
The knowledge about the object's state is not full if the sample size is finite [13]. Even in the case of
maximum knowledge, when the sample of size consists of the same event, there is a
probability that a sample of size will return two distinct events. I define the knowledge confidence
as:
(45)
To illustrate the notion of knowledge confidence, imagine an experiment to determine the polarization
of a light source with a polarizer coupled to a light detector. If all photons from the source arrive
at the detector, does it mean I know the polarization of the light source with absolute certainty? The
answer is no, since there is a chance that, if I repeat the experiment with photons, at least one
photon will be lost in the polarizer.
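Equation (45) is not reproduced here, so as an illustration only, the residual uncertainty after N identical outcomes can be quantified with Laplace's rule of succession, a hypothetical stand-in for the paper's definition:

```python
# Illustrative only: after N photons all passing the polarizer, Laplace's rule
# of succession estimates the chance the next photon is lost as 1/(N + 2).
# This is NOT the paper's eq. (45); it is a conventional stand-in.
def prob_at_least_one_lost(N, N_next):
    p_loss = 1.0 / (N + 2)
    return 1.0 - (1.0 - p_loss) ** N_next

print(prob_at_least_one_lost(100, 100))   # non-zero even after 100 perfect events
```

The probability of a discrepant event never reaches zero for finite N, which is the point of the polarizer example.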
I shall demonstrate how the presented model correlates with some known constructs. Consider
a one-dimensional quantum harmonic oscillator. Its energy levels [18] are given by:
(46)
, where is the interval between energy levels; . Energy levels (46) are equally
spaced. The energy levels of an event sample of cardinality exhibit a similar pattern. As shown
in Figure 1, the linear dependence on quantum number n holds reasonably well if is not too large.
From (12), the linearity breaks down when
. From (22):
(47)
, where
;
;
(48)
From the above, the energy levels of an event sample of cardinality are:
(49)
, where are Loeschian numbers [19]. With (46, 49), I can write a comparison table of the first
few energy levels of the quantum harmonic oscillator in units of , and of the event sample of
cardinality in units
:
quantum harmonic oscillator: 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39, 41, 43
event sample of cardinality : 0, 1, 3, 4, 7, 9, 12, 13, 16, 19, 21, 25, 27, 28, 31, 36, 37, 39, 43
The values absent from the second row correspond to missing energy levels. Of the levels in the
second row, some are only realized for samples with sizes satisfying , and the others
for samples with sizes satisfying . Here
is the remainder of division of by .
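The second row of the table is the start of the Loeschian sequence, which is easy to generate and check against [19]:

```python
def loeschian(limit):
    """Numbers representable as x^2 + x*y + y^2 (x, y >= 0): the energy-level
    pattern (49) for an event sample of cardinality 3."""
    vals = set()
    x = 0
    while x * x <= limit:
        y = 0
        while x * x + x * y + y * y <= limit:
            vals.add(x * x + x * y + y * y)
            y += 1
        x += 1
    return sorted(vals)

print(loeschian(43))
# -> [0, 1, 3, 4, 7, 9, 12, 13, 16, 19, 21, 25, 27, 28, 31, 36, 37, 39, 43]
```

The gaps (2, 5, 6, 8, 10, 11, ...) are exactly the missing levels in the comparison table.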
Consider another quantum mechanical example: a particle of mass in a box of size .
Its energy levels [18] are given by:
;
(50)
In the presented model, a similar energy spectrum is exhibited by an event sample of cardinality ,
as shown in Figure 2. From (47), the energy levels of an event sample of cardinality , in the
thermodynamic limit approximation, are:
;
(51)
, where
is to be considered the effective mass of the particle.
(52)
Energy levels (51) with even are only possible when is even, and energy levels with odd are
only possible when is odd. With probability ½ the lowest energy level is , and with
probability ½ it is , in units of .
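The parity selection rule under (51) can be seen directly for cardinality 2, assuming (my reading of (20)-(22)) that the single canonical variable is proportional to k = n1 - n2; the exact prefactor is omitted:

```python
# For cardinality M = 2 the single canonical variable is proportional to
# k = n1 - n2 (a reading of (20)-(22); prefactors omitted). Since
# n1 + n2 = N, k has the same parity as N, which is the origin of the
# even/odd selection rule stated under (51).
def allowed_k(N):
    return sorted({n1 - (N - n1) for n1 in range(N + 1)})

print(allowed_k(4))   # even k only, since N is even
print(allowed_k(5))   # odd k only, since N is odd
```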
3. THERMODYNAMIC ENSEMBLE
In the previous section, the event sample represented a single object. In this section I consider a
collection of objects, each object represented by an event sample of size and cardinality . I
call such a collection a thermodynamic ensemble. Objects with different or belong to different
thermodynamic ensembles. I call an event sample a mode. I designate the set of modes an
object may occupy, and the number of objects in mode :
(53)
The probability for an object to be in mode is given by (1, 9). Objects in the same mode are
indistinguishable, by definition of measurement. The probability mass function of the distribution of
modes among objects is:
(54)
The objective is to find equilibrium, i.e. the most probable distribution . For a standalone
object, the most probable distribution is the one which maximizes (54):
(55)
Consider objects to be part of a thermodynamic ensemble in a certain state. That imposes conditions
on the distribution of modes, so relations (42-44), (55) may no longer hold. I consider one of the
possible conditions and show how it leads to the notion of temperature. Let the state of the
thermodynamic ensemble be such that the mean energy of objects in equilibrium is , which
may be different from the equilibrium mean energy of a standalone object (42). Then:
(56)
To find the most probable distribution of modes , I shall maximize the logarithm of (54) using the
method of Lagrange multipliers [20] with conditions (53, 56):
(57)
From (57, 56, 53) I obtain the following equation involving Lagrange multipliers and :
(58)
, where is the digamma function, and and are to be determined by solving (58) for :
(59)
, and by plugging from (59) into (56) and (53). In (59), is the inverse digamma function,
and
. The parameter is commonly known as temperature.
Since the number of objects in mode cannot be negative, expression (59) effectively limits the
modes which can be present in equilibrium to those satisfying
(60)
, where is the Euler–Mascheroni constant. With approximation [21]:
I rewrite (59) as:
(61)
The presence of the term in (61) leads to a computationally horrendous task of calculating and ,
because the summation in (53, 56) has to be performed only for modes satisfying (60). I shall leave
the exact computation to a separate exercise, and take a shortcut by ignoring the term in (61).
This approximation is equivalent to Boltzmann's postulate² that the number of objects in mode ,
in equilibrium, is proportional to
. The shortcut allows calculation of the Lagrange
multiplier from (53):
, where
(62)
Using (36), the partition function in (62) can be evaluated as:
(63)
Equation (56) then becomes
(64)
² While widely used, this postulate has the rather unphysical consequence that there is a non-zero probability of finding
an object in a mode with arbitrarily large energy. Another consequence is the divergence of the partition function for some
constructs, e.g. hydrogen electronic levels [40].
Eq. (64) is the relation [22] between mean per-particle energy and temperature in a -
dimensional Maxwell-Boltzmann gas.
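The relation (64) can be recovered numerically, assuming the density of states grows as u^((M-3)/2) (the derivative of the sphere volume in (33)); Boltzmann weighting then gives a gamma-distributed energy with mean (M-1)T/2:

```python
from math import exp

def mean_energy(M, T, umax=60.0, steps=60000):
    """Mean energy <u> under Boltzmann weight exp(-u/T) over a density of
    states g(u) ~ u^((M-3)/2); crude midpoint integration."""
    a = (M - 1) / 2.0          # shape parameter: sphere-volume exponent from (33)
    du = umax / steps
    num = den = 0.0
    for i in range(steps):
        u = (i + 0.5) * du
        w = u ** (a - 1.0) * exp(-u / T)
        num += u * w
        den += w
    return num / den

M, T = 4, 2.0
result = mean_energy(M, T)
print(result)   # close to (M - 1) * T / 2 = 3.0
```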
The thermodynamic equilibrium per-object entropy
is the number of nats required to
encode the distribution of modes in equilibrium. Using (15) and (64), I evaluate
in the limit :
(65)
, where
(66)
In case of , i.e. for degrees of freedom, expression (65) turns into the equivalent
of the Sackur-Tetrode equation [22] for the entropy of an ideal gas. For the thermodynamic equilibrium
entropy of a standalone object, instead of (65), I have:
(67)
The difference of entropies (65, 67) by the
term is due to the spread in object energies. The non-zero
thermodynamic entropy means the mode is unknown prior to observation, for each
observation. I rewrite (67) as:
; where
(68)
The expression for
in (68) was derived in the thermodynamic limit, i.e. when . When
, . By comparing
to (Figure 9) I see that
is fairly close to
, except when is large enough, in which case the thermodynamic limit approximation for the
given becomes less valid anyhow. Therefore, I can replace
with in (68) and write the
thermodynamic equilibrium entropy as:
;
(69)
Figure 10 shows the comparison of the thermodynamic equilibrium entropy of a standalone
object in the limit , calculated from
, and from (69). Since
should
be , it means cannot be less than
.
The expression for in (63) has been derived in the thermodynamic limit approximation, i.e.
when . It means there must be a large number of energy levels included in the sum (63), i.e.
the temperature cannot be too small. Therefore, expressions (63-64) are only valid for ,
where is the characteristic difference between adjacent energy levels.
For an event sample of cardinality , the approximately evenly-spaced energy levels
(Figure 1) allow for a more accurate expression for the partition function. From (35), the characteristic
difference between adjacent energy levels is:
(70)
Figure 11 shows the numeric calculation of the difference between adjacent energy levels,
averaged over distinct event samples with the given values of and . I can use (46), with
degeneracy =6 of each level, as an approximation for the combined energy levels (49) in the expression
for the partition function (63), and obtain the mean energy of modes with given as [23]:
;
(71)
Eq. (71) reduces to (64) if . (71) has been derived using the linear dependence (46) of energy
levels on quantum number , i.e. in the limit . For a black-body spectrum, the
condition for validity of (71) has to be satisfied for as well. Therefore, a typical
black-body spectrum can only be exhibited by ensembles with
. An
example is the cosmic microwave background. The higher the temperature, the less accurate (71) will
be in the region, where the spectral intensity would fall off steeper than in (71). Such deviation
from black-body is obvious in the solar spectrum.
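Dropping the zero-point term, the mode energy in (71) has the familiar Planck form eps/(exp(eps/T) - 1) (an assumption here, since (71) is not reproduced in full); its two limits are easy to check:

```python
from math import expm1

def planck_mean(eps, T):
    """Mean mode energy eps/(exp(eps/T) - 1): the Planck form that (71)
    reduces to when the zero-point term is dropped (assumed form)."""
    return eps / expm1(eps / T)

T = 1.0
hi_T = planck_mean(0.001, T)   # eps << T: approaches T (equipartition regime)
lo_T = planck_mean(10.0, T)    # eps >> T: exponentially suppressed
print(hi_T, lo_T)
```

The high-temperature limit recovers the classical Maxwell-Boltzmann behavior, matching the statement that (71) reduces to (64).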
The zero-point energy term
in (71) is the subject of a hundred-year controversy [24, 25].
The conventional theory views radiation as existing "out there", decoupled from matter. Such a
view leads to an infinite energy density, due to the infinite number of hypothetically possible
decoupled radiation modes, each multiplied by
[23]. The conventional theory has no upper
limit on , short of an artificial cut-off, usually assumed at the Planck energy. Even with a frequency
cut-off, there is still a discrepancy with empirical evidence of at least 58 orders of magnitude [25],
possibly the biggest contradiction of any theory. In the presented model, the
term cannot contribute
more than
to the average energy, i.e. its contribution is well within the standard deviation.
A thermodynamic ensemble is a statistical ensemble of non-elementary eigenstates whose
probabilities are given by (66), as opposed to elementary eigenstates' probabilities given by
(2). Using (66, 5, 24) in formula (6) for the energy of an arbitrary non-equilibrium state, and substituting
for , for , and for :
(72)
Here
is the entropy (5) of the ensemble, and is the objects' mean energy in equilibrium. I
rewrite (72) in terms of per-object quantities
;
, in the limit :
(73)
Here ;
is the deviation of the objects' mean internal energy, and
of the mean thermodynamic entropy, from their equilibrium values; is the work done on the
ensemble. As expected, from (73),
. Eq. (73) represents the First Law of Thermodynamics.
For an ensemble of non-elementary eigenstates,
, where
is the maximum entropy,
achieved with equal population numbers in (54). It means that, in a thermodynamic ensemble,
according to (72, 73), part of the knowledge is associated with the objects' internal energy.
4. THE MEASUREMENT
Measurement is one of the most debated topics in the conventional theory [26, 27]. The
controversy is stirred by the discreteness of outcomes of measurements on quantum objects. The
concept of wave function collapse was devised early on, more as an illustration than an explanation.
The collapse concept is an awkward amalgamation of the quantum postulate [28] and an implicit
assumption of the wave function's physical reality (PR). The collapse concept is prevalent [29] despite
its contradiction with other accepted frameworks, such as special relativity. A burlesque scenario
can be imagined: using the wave function of a photon, which has to be non-zero everywhere up to the
moment of measurement, one could instantly communicate with an observer on the opposite side of
the galaxy, by absorbing photons coming from a star near the center of the galaxy, and thus impacting the
probability of the same photons being detected by the remote observer. A similar scenario inspired the
EPR paradox [30]. To work around the problem, a number of alternative QM interpretations, such
as many-worlds [31] and pilot-wave [32], have been proposed, still maintaining the PR viewpoint.
The measurement problem disappears if we stop attributing PR to the wave function, and stop
mistaking correlation for causality. In this section I show that the concept of the wave function is
superfluous, and that treating measurement as event sampling is consistent with the predictions of
conventional QM, without its controversial baggage. I discuss the basics of conventional QM:
wave function, Born rule, and Schrödinger equation, emphasizing their true meaning, which is
rarely, if ever, mentioned in textbooks. I derive similar expressions using the event sample as the base
construct, demonstrating the connection between QM and the presented model. I explain the
mechanics of observation, and the role of the observer. I show how the model extends to include the
transition from quantum to classical state, dispersion, and decoherence.
The following experiment illustrates the delusion of the wave function collapse concept, and of the
EPR "paradox": put a pair of gloves (left-hand and right-hand) randomly into two boxes, and let
Alice and Bob each pick a box. Until one of them opens a box, no one knows who has which glove.
To the same effect, the pair of gloves can be substituted by a pair of entangled particles having
opposite spin, in a gedankenexperiment. Conventional QM describes the situation as a superposition
(74)
, where the signs enforce a parity flip if Alice and Bob swap boxes, since the mirror image of a left-hand
glove is a right-hand glove. The lower sign corresponds to the mirror image. If Alice finds a left-hand
glove in her box, Bob will find a right-hand glove in his. In this case, the conventional theory says,
the wave function (74) collapsed into eigenstate. Has Alice finding a left-hand
glove caused the collapse of , and, as a result, Bob to find a right-hand glove in his box? Of
course not. That is correlation, not causality, just as in the EPR case, where the authors considered a pair
of entangled particles. The distinct outcomes of the measurement (the event sample) are
determined by the observation basis. A sampled event is one of the object's eigenstates in a given
observation basis. In the example above, the object consists of two particles: two glove boxes,
entangled by glove pairing. Each particle produces an event sample of cardinality (left or right
glove). The events produced by different particles are correlated due to the entanglement between the
particles, even though events in each particle basis are completely random. The object's eigenstates
are composed as a direct product of particle eigenstates. The state (74) is one of the object's eigenstates
in the preparation basis. The other eigenstates could be constructed as:
In the measurement basis, the object’s eigenstates are:
There are two observation bases involved in determining the conditional probability of finding the
object in state , given it was prepared in state . The observation bases are usually related by a
[macroscopic parameter]-driven unitary transformation, albeit there are examples from quantum
field theory (QFT) when they are not [33]. There can be no unitary transformation between
observation bases of different cardinality . In the example, the observation bases of preparation and
of measurement are rotated with respect to each other by
within the plane formed by vectors
. The observation basis rotated when the boxes became identifiable (i.e. separated) by a
parameter (distance): one box got to Alice and the other box to Bob. This is an example of a unitary
transformation of the observation basis with distance as the transformation parameter.
At the core of conventional QM lies the Born rule. It stipulates that the probability of a particular
outcome of a measurement performed on state [vector] is
(75)
, where is the angle between and . If vectors constitute an eigenbasis, then
(76)
The rest of QM deals with how probabilities (75) change with a parameter-driven unitary
transformation of the eigenbasis . The Born rule is a conditional probability, i.e. the probability of
outcome , given the object was prepared in state . In the example above, the conditional probabilities
of outcomes , given preparation (74), are:
Conventional QM does not deal with unconditional probabilities. QM predicts the results of a
measurement if the state of an object is already known in some observation basis. The conditional
measurement requires two or more particles to be entangled via some medium [23]. The
entanglement enforces correlation between the object's eigenstates in the preparation and measurement
bases. That is the underlying setting, albeit not widely acknowledged, for the Born rule.
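The glove-box probabilities can be computed with the Born rule (75)-(76); the amplitudes below are a standard reading of (74), since the equation is not reproduced in full:

```python
from math import sqrt

# Two glove boxes as a pair of two-state particles; basis order:
# |LL>, |LR>, |RL>, |RR>. Preparation state (74), upper sign: (|LR> + |RL>)/sqrt(2)
# (amplitudes assumed, as (74) is not reproduced in full).
psi = [0.0, 1.0 / sqrt(2), 1.0 / sqrt(2), 0.0]

def born(eigvec, state):
    """Born rule (75)-(76): probability = |<eigvec|state>|^2."""
    return abs(sum(a * b for a, b in zip(eigvec, state))) ** 2

LR = [0.0, 1.0, 0.0, 0.0]   # Alice finds left, Bob right
RL = [0.0, 0.0, 1.0, 0.0]   # Alice finds right, Bob left
LL = [1.0, 0.0, 0.0, 0.0]   # forbidden: gloves come as a pair

print(born(LR, psi), born(RL, psi), born(LL, psi))   # close to 0.5, 0.5, 0.0
```

The correlation between Alice's and Bob's outcomes is built into the entangled preparation; no collapse mechanism is needed to obtain it.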
For any state there exists an observation basis in which has only one non-zero eigenstate
component:
(77)
, where is a unitary transformation to the eigenbasis in which . It corresponds
to the situation when all measurement events are a particular outcome , as e.g. the detection of
polarized photons using a polarizer aligned with the photon polarization.
The Schrödinger equation, in its true meaning, describes a parameter-driven unitary transformation
of the observation basis. The usual parameter of transformation is time, but it can be another parameter,
e.g. distance. The integral form of the Schrödinger equation:
(78)
is the same equation as (77), where is a unitary transform generated by Hermitian operator .
From (77), the eigenbasis components of a quantum state must have certain phase relations with
each other, enforced by . In case of a time-driven unitary transformation (78), generated by a
time-independent , the phase relations between the and eigenvector components are:
(79)
, where are the eigenvalues of . If phase relations do not exist, or only exist for some components,
then we have a mixed state, or a partially mixed state. The mixed state is that of a classical object,
the pure state is that of a quantum object, and the partially mixed state is that of an object in transition
from quantum to classical.
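The phase relations (79) can be sketched for a diagonal Hamiltonian: a time-driven unitary transformation multiplies each eigencomponent by exp(-iE_k t), leaving populations unchanged (units with hbar = 1; the numbers are illustrative):

```python
from cmath import exp as cexp

# Time-driven unitary evolution (78)-(79) for a diagonal Hamiltonian:
# each eigenbasis component acquires phase exp(-i*E_k*t); the populations
# |c_k|^2 stay fixed, only relative phases change.
E = [0.0, 1.0, 2.5]          # eigenvalues of H (illustrative)
c0 = [0.6, 0.48, 0.64]       # initial amplitudes; sum of squares = 1

def evolve(c, t):
    return [ck * cexp(-1j * Ek * t) for ck, Ek in zip(c, E)]

ct = evolve(c0, 3.7)
pops0 = [abs(ck) ** 2 for ck in c0]
popst = [abs(ck) ** 2 for ck in ct]
print(pops0, popst)   # identical: a pure state keeps its populations
```

A mixed state, by contrast, has no definite phases between components, which is what distinguishes the classical case in the paragraph above.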
An outcome of a single act of measurement is an event. The measurement involves collecting a
sample of events . The term measurement is thus synonymous with sampling. The
measurement events on a quantum object are formed as a direct product (entanglement) of [more
elementary] constituent events, as in the example with glove boxes. The entanglement imposes
correlation (phase relations) between measurement events taken in different observation bases, i.e.
corresponding to different macroscopic parameters. The constituent events come from the
measuring device, from the environment, and even from observer memory. A taken event sample,
decoupled from the measuring device and from the environment, is stored in observer memory.
The observation process is illustrated in Figure 12.
The captured event sample can be encoded and stored as, e.g.:
1. Electronic spin configuration in magnetized materials [34]
2. Charge distribution in capacitor elements of charge-coupled devices (CCD) [35]
3. Sequence of nucleotide bases in a DNA strand [36]
4. Neural circuits [37]
It can be proven³ that knowledge encoded in any form can be construed as a statistical ensemble
of orthogonal (uncorrelated) eigenstates (events), with set being the encoding alphabet.
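The factorization construction of footnote 3 can be sketched directly: a binary string is read as an integer, and the prime exponents play the role of population numbers of orthogonal eigenstates:

```python
def prime_ensemble(n):
    """Factor integer n into primes; exponents act as population numbers
    of orthogonal eigenstates (the construction of footnote 3)."""
    ens, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            ens[p] = ens.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        ens[n] = ens.get(n, 0) + 1
    return ens

# a binary string '101100' read as the integer 44 = 2^2 * 11
print(prime_ensemble(0b101100))   # -> {2: 2, 11: 1}
```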
For a pure state there exists an observation basis (77) in which the measurement would only
return event . In such a basis, entropy (24) . The state is known,
by knowledge amount (23), with confidence (45). In a different observation basis, the
measurement sample may consist of events , with entropy . That sample
may not carry the same amount of knowledge as a sample taken in basis , because a sample with
does not uniquely identify the state. E.g., circularly polarized light (eigenstate
) will produce the same event sample when measured with a linear polarizer at any orientation
of the latter within the plane perpendicular to the light propagation, i.e. under any unitary
transformation which has as one of its eigenvectors. For that class of transformations, which
³ Perhaps a simplistic proof is to consider encoded knowledge as a sequence of yes/no answers in the form of a binary
string, which can also be represented as an integer number . Factorization into primes yields a
statistical ensemble where is the exponent of prime in the factorization. As prime factors in the factorization
are uncorrelated, the associated eigenstates are orthogonal.
Figure 12
, are event samples, taken in observation bases corresponding
to parameters , ; , are decoupled from the measuring
device and from environment, and stored in observer memory;
is the currently captured event sample. The measurement events in
are formed as direct product of constituent events from the measuring
device, environment, and memory. The feedback from memory may play
a role in emergence of consciousness
keep the event sample unchanged, a state vector formalism can be used to extract the knowledge
about quantum state from correlations between events taken in different observation bases. The
state vector has to incorporate the correlation mechanism. One way to incorporate correlation is
via phase relations between vector components, similar to (79).
The state vector has to satisfy the following conditions:
1. ; this is the usual normalization requirement
2. The vector should be invariant, up to a phase factor, with respect to a change in the size of the
event sample, at least in the limit . This ensures the conditional measurement
probabilities converge to (75)
3. The vector has to incorporate the relevant macroscopic parameter(s). A variation of the
parameters should equate to a unitary transformation of the observation basis
The vector is an abstract mathematical construct whose only purpose is to enable the correct calculation
of conditional probabilities. That is how the wave function should have been treated in the first place.
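This view of the state vector as a bookkeeping device for conditional probabilities can be illustrated with a minimal sketch. The construction below is a generic assumption, not the paper's formula: amplitudes of the form sqrt(p_k)·exp(iφ_k), with moduli carrying the event frequencies and phases carrying the correlations, as in the phase-relation idea of (79):

```python
import cmath
import math

def state_vector(probs, phases):
    """Amplitudes sqrt(p_k)*exp(i*phi_k): moduli carry event frequencies,
    phases carry the inter-event correlations."""
    return [math.sqrt(p) * cmath.exp(1j * f) for p, f in zip(probs, phases)]

def conditional_prob(u, v):
    """Born-rule overlap |<u|v>|^2 -- the only quantity the vector exists for."""
    return abs(sum(a.conjugate() * b for a, b in zip(u, v))) ** 2

u = state_vector([0.5, 0.3, 0.2], [0.0, 0.0, 0.0])
v = state_vector([0.5, 0.3, 0.2], [0.0, 1.2, 2.4])
# condition 1: normalization
assert abs(sum(abs(a) ** 2 for a in u) - 1.0) < 1e-12
# identical moduli but different phase relations give an overlap below 1:
print(conditional_prob(u, u), conditional_prob(u, v))
```

The point of the sketch is that two vectors with the same event probabilities but different phase relations yield different conditional probabilities, which is exactly the extra information the phases encode.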
The above considerations lead to the following expression, associated with measurement sample
, and macroscopic parameter :
(80)
, where , and is (4). The probabilities of different events in the sample are
not necessarily equal, because measurement events are not elementary. The energy values are
associated with unconditional probabilities by eq. (9):
;
(81)
, where I added the conversion constant because the imaginary argument of has to be in
. The correlation coefficient between vectors and is:
(82)
, where is the correlation distance. From (82), the coefficient of determination , i.e.
the fraction of outcomes in sample 2 which are predictable from sample 1, is:
(83)
, where is the probabilities vector; . The antisymmetric matrix has two
non-zero, purely imaginary eigenvalues, which differ only in sign:
;
(84)
Empirically established relations:
;
(85)
(86)
, where is energy (6); . The sign in (85)
means linear correlation, rather than functional equality. Figure 13 shows the correlation between the left
and right sides of (85) for a number of event samples. In the limit the two sides of (85) turn
into an exact equality.
Eq. (83) is routinely obtained by solving the Schrödinger equation for the probability of finding an
object, after time , in the same initial state [23]. Evidently, the Born rule (75) is equivalent to the
coefficient of determination in statistics, in being a conditional probability measure. The state
vector is the counterpart of the wave function. The Schrödinger equation is, as expected⁴:
, with the diagonal form of being
(87)
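The claimed equivalence of the Born rule to the coefficient of determination can be checked in the ordinary statistical sense. A minimal sketch, assuming the textbook definition of R² as the squared Pearson correlation; the sample values are arbitrary illustrations, not the paper's data:

```python
def r_squared(xs, ys):
    """Coefficient of determination as the squared Pearson correlation:
    the fraction of variation in one sample predictable from the other."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov * cov / (vx * vy)

sample1 = [1, 2, 3, 4, 5]
sample2 = [2.1, 3.9, 6.2, 8.0, 9.9]   # nearly linear in sample1
print(r_squared(sample1, sample2))     # close to 1: sample2 is predictable from sample1
```

Like the Born-rule overlap, R² is bounded by 1 and measures how well one observation predicts another, which is the sense in which the text equates the two.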
Eq. (83) represents the self-interference of an object at correlation distance . To generalize to
objects, and to allow for dispersion and decoherence, I rewrite (83) as
(88)
, where is the dispersion matrix, defined as:
, where
(89)
Expr. (88) reduces to (83) if . The matrix determines the type of eigenstate dispersion in
correlated objects. Two distinct types of parameter-driven dispersion can be identified:
1. Coherent dispersion:
;
(90)
, where is a dispersion parameter. Numeric analysis of (88-90) reveals that
follows a Gaussian profile (Figure 14):
;
(91)
The dispersion parameter is a property of the device. In the case of coherent dispersion,
the transition rate .
2. Incoherent dispersion (decoherence):
;
(92)
Numeric analysis of (88-89, 92) reveals that follows an exponential decay (Figure 15):
;
(93)
The function in (90, 92) returns a matrix, where each row equals a
multinomial distribution of events into buckets, with the probability vector given by (2). The
multinomial distribution of events is generated for each object , .
The condition for the exponential decay (93) is the breakdown of predictable phase relations
between the constituent particles, i.e. decoherence. The decoherence is imposed by the randomly
generated -matrix (92). The end result of decoherence is a mixed state, in which probability is
spread equally among the eigenstates of the dispersed objects.
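The breakdown of phase relations can be illustrated with a toy decoherence sketch. The setup is an assumption for illustration, not the paper's (92): each of N correlated contributions is a unit phasor, and replacing a common phase with random, uncorrelated phases washes out the interference term:

```python
import cmath
import random

random.seed(1)

def interference(phases):
    """|mean phasor|^2: equals 1 for coherent (equal) phases, ~1/N for random ones."""
    s = sum(cmath.exp(1j * f) for f in phases)
    return abs(s / len(phases)) ** 2

N = 10_000
coherent = [0.3] * N  # predictable phase relations: interference survives
decohered = [random.uniform(0, 2 * cmath.pi) for _ in range(N)]  # phases randomized
print(interference(coherent))    # 1.0: pure-state behavior
print(interference(decohered))   # near zero: interference term destroyed
```

With random phases, the mean phasor shrinks toward zero as N grows, mirroring how decoherence leaves only the incoherent (mixed-state) probabilities.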
⁴ Any continuous dynamics which may be implied by (87) is a detachment from the fundamentally discrete nature. A
continuous process contradicts the quantum postulate, unless it is an "…abstraction, from which no unambiguous
information concerning previous or future behavior can be obtained" [28]. I provide continuous equations
like (87) only as a link to the conventional theory, not as an advancement of the model.
For a thermodynamic ensemble, the exponential decay has a different context:
(94)
, where is the number of objects remaining in the initial state ; is the total number of objects,
(95)
The exponential decay (94) is driven by transitions of the constituent elementary eigenstates. Initially,
all objects are in eigenstate , represented by event sample , having entropy
(24) . A transition changes the event sample as:
(96)
The transition (96) is accompanied by a change in object entropy . The essential feature of a
classical decay process is that the per-object entropy is correlated with
as:
(97)
The per-object entropy is calculated from the object entropies (24) as:
(98)
The object state can be represented by state vectors , having different
phase relationships between the constituent events , due to different values of the transformation
parameter . For a given value of the transformation parameter, the probability to find the object in
the corresponding state is .
To prove the correlative relations (94, 97), a numeric analysis of vs.
vs.
has been performed, with a wide variation of input parameters. The product is the number
of transitions (96) randomly distributed across objects. Per object, a transition equates to of
information loss. Figure 16 shows the values of
,
, plotted against . While and
are object-dependent, the
ratio is not, in the limit . The ratio
correlates with
classical time, represented in this analysis by the independent variable . This explains why time
measures derived from observations of vastly different macroscopic objects are highly correlated.
An illustration of how this correlation breaks down when is small is the increase in laser linewidth
with decreasing number of photons in the mode [38], resulting in less accurate atomic clocks. In
terms of time intervals, , where is the per-object energy loss
corresponding to the per-object entropy change , and is the temperature introduced in the previous
section. This relation indicates that an accurate clock has to dispense the same amount of energy per
cycle. The established correlative relation represents the Second Law of
Thermodynamics.
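The decay relation (94) can be probed with a toy version of this simulation. This is a sketch under stated assumptions, not the author's MATLAB code (`time_entropy.m`): `N0`, `rate`, and the step count are arbitrary, and each object is assumed to undergo a transition (96) independently per step:

```python
import math
import random

random.seed(2)

N0 = 100_000      # total number of objects, all starting in the initial eigenstate
rate = 0.05       # per-object transition probability per step of the parameter
remaining = N0
trajectory = [remaining]
for _ in range(40):
    # each object still in the initial state decays independently, mirroring
    # transitions randomly distributed across the ensemble
    remaining = sum(1 for _ in range(remaining) if random.random() > rate)
    trajectory.append(remaining)

# ln(n/N0) should fall linearly in the step count, i.e. n ~ N0*exp(-rate*t) as in (94)
slope = math.log(trajectory[-1] / N0) / 40
print(slope)   # close to ln(1 - rate), i.e. about -rate for small rate
```

The per-step slope of ln(n/N0) is object-independent for large N0, which is the sense in which the ratio in the text correlates with classical time regardless of which macroscopic object is observed.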
Above, energy and entropy are in units of nats. The parameter is dimensionless. The conversion
to SI units is:
(99)
, where
(100)
, and
(101)
The conversion (101) is between and . Therefore, energy and time, expressed in SI
units, are not independent. If the characteristic times of all processes in nature were increased by a factor,
then all energies would have to decrease by the same factor. There is no separate conversion between
energy in nats and energy in joules, or between time in seconds and the dimensionless parameter .
The conversion constant represents the average per-object quantity of information lost in a
transition. Therefore, a change in the amount of knowledge, being the result of a number of transitions,
has to satisfy . With (93, 85), this leads to the speed limit [39]:
(102)
The values are dimensionless. They should not depend on the units used for the arguments in (83,
93). The conversion constant (100) takes care of that in (83). However, in (93), the resultant
expression under is in , as it should be. If energy is expressed in joules , and time in
seconds , where would the under in (93) come from? There has to be an additional
conversion parameter added under in (93). This conversion parameter, unlike the true
conversion constants (100, 101), is a device parameter. With the conversion parameter added, the
expression under in (93), in SI units, becomes:
;
(103)
, where is a dimensional device parameter with the meaning of a decoherence time [23].
For a classical radiation detector, the expression for in (103) is [23]:
(104)
, where is the speed of light; is the refractive index of the material; [rad/s] is the spread in the
object's internal transition frequencies; is the number of correlated objects per unit surface area
of the detector; is the dimensionless scattering rate. The value in (104) is conceivably double
the standard deviation of the -matrix (86), divided by , i.e.:
(105)
Then,
;
(106)
Expression (106) is a restatement of Fermi's golden rule for the transition probability into a
continuous spectrum near :
(107)
, where is the density of final states per unit . By comparing (107) with expression (9) in
[23], I obtain a rather simple relation between the decoherence time and the density of final radiation states:
(108)
With (106, 108), for a classical radiation detector:
(109)
The left side of (109) purports to be a property of the radiation field, while the right side is a
property of the detector. The fact that these seemingly unrelated parameters are connected to each other by
universal constants alone is an argument against considering radiation as a standalone entity with
properties independent of the measurement context [23].
The model unites the seemingly disjoint QM artifacts, Fermi's golden rule and Planck's
radiation formula [23], by exposing decoherence (93) as the driving factor in both.
References
[1] W. Wislicki, "Thermodynamics of systems of finite sequences," J. Phys. A: Math. Gen., vol. 23, no. 3, p. L121, 1990.
[2] C. Frenzen, "Explicit mean energies for the thermodynamics of systems of finite sequences," J. Phys. A: Math. Gen., vol. 26, no. 9, p. 2269, 1993.
[3] S. Viznyuk, "Thermodynamic properties of finite binary strings," arXiv:1001.4267 [cs.IT], 2010.
[4] R. Albert and A.-L. Barabási, "Statistical mechanics of complex networks," Rev. Mod. Phys., vol. 74, pp. 47-97, 2002.
[5] G. Bianconi and A.-L. Barabási, "Bose-Einstein condensation in complex networks," arXiv:cond-mat/0011224 [cond-mat.dis-nn], 2000.
[6] J.-P. Bouchaud and M. Mézard, "Wealth condensation in a simple model of economy," Physica A: Statistical Mechanics and its Applications, vol. 282, p. 536, 2000.
[7] S. Sieniutycz and P. Salamon, Finite-Time Thermodynamics and Thermoeconomics, Taylor & Francis, 1990.
[8] J. Chan, T. Alegre, A. Safavi-Naeini, J. Hill, A. Krause, S. Groeblacher, M. Aspelmeyer and O. Painter, "Laser cooling of a nanomechanical oscillator into its quantum ground state," arXiv:1106.3614 [quant-ph], June 2011.
[9] A.-L. Barabási and E. Bonabeau, "Scale-Free Networks," Scientific American, vol. 288, pp. 50-59, 2003.
[10] R. Penrose, "On Gravity's Role in Quantum State Reduction," General Relativity and Gravitation, vol. 28, no. 5, pp. 581-600, 1996.
[11] M. Jammer, "The Conceptual Development of Quantum Mechanics," in The History of Modern Physics, Tomash, 1989.
[12] S. Viznyuk, "New QM framework," ResearchGate.net, 2014. [Online]. Available: https://www.researchgate.net/publication/293992646_New_QM_framework.
[13] S. Viznyuk, "Shannon's entropy revisited," arXiv:1504.01407 [cs.IT], March 2015.
[14] K. Yamanaka, S. Kawano and K. Y., "Constant Time Generation of Integer Partitions," IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vols. E90-A, no. 5, pp. 888-895, 2007.
[15] S. Viznyuk, "OEIS sequence A210237," 2012. [Online]. Available: https://oeis.org/A210237. [Accessed 21 April 2015].
[16] D. Collins, "Entropy Maximizations on Electron Density," Z. Naturforsch., vol. 48a, pp. 68-74, 1993.
[17] C. Forbes, M. Evans, N. Hastings and B. Peacock, Statistical Distributions, 4th ed., Hoboken, NJ, USA: John Wiley & Sons, Inc., 2010.
[18] D. J. Griffiths, Introduction to Quantum Mechanics, 2nd ed., Pearson Education, Inc., 2005.
[19] N. Sloane, "OEIS sequence A003136," 1991. [Online]. Available: https://oeis.org/A003136. [Accessed 2015].
[20] I. Vapnyarskii, "Lagrange multipliers," in Encyclopedia of Mathematics, Springer, 2001.
[21] M. Abramowitz and I. A. Stegun, "6.3 psi (Digamma) Function," in Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, New York: Dover, 1972, pp. 258-259.
[22] K. Huang, Statistical Mechanics, John Wiley & Sons, 1987.
[23] S. Viznyuk, "Planck's law revisited," Academia.edu, 2017. [Online]. Available: https://www.academia.edu/35548486/Plancks_law_revisited.
[24] P. Jordan and W. Pauli, "Zur Quantenelektrodynamik ladungsfreier Felder," Zeitschrift für Physik, vol. 47, no. 3, pp. 151-173, 1928.
[25] G. Grundler, "The zero-point energy of elementary quantum fields," arXiv:1711.03877 [physics.gen-ph], November 2017.
[26] J. Bell, "Against 'measurement'," Physics World, August 1990.
[27] J. Wheeler and W. Zurek, Quantum Theory and Measurement, Princeton, NJ: Princeton University Press, 1983.
[28] N. Bohr, "The Quantum Postulate and the Recent Development of Atomic Theory," Nature, pp. 580-590, 14 April 1928.
[29] M. Tegmark, "The Interpretation of Quantum Mechanics: Many Worlds or Many Words?," arXiv:quant-ph/9709032, 1997.
[30] A. Einstein, B. Podolsky and N. Rosen, "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?," Phys. Rev., vol. 47, p. 777, 1935.
[31] H. Everett, "'Relative State' Formulation of Quantum Mechanics," Reviews of Modern Physics, vol. 29, pp. 454-462, 1957.
[32] D. Bohm, "A suggested interpretation of the quantum theory in terms of hidden variables," Phys. Rev., vol. 85, pp. 166-179, 1952.
[33] R. Haag, "On Quantum Field Theories," Dan. Mat. Fys. Medd., vol. 29, no. 12, pp. 1-37, 1955.
[34] C. Chappert, A. Fert and F. Van Dau, "The emergence of spin electronics in data storage," Nature Materials, vol. 6, no. 11, pp. 813-823, 2007.
[35] M. Lesser, "Charge coupled device (CCD) image sensors," in High Performance Silicon Imaging: Fundamentals and Applications of CMOS and CCD Sensors, Woodhead Publishing, 2014, pp. 78-97.
[36] M. Mansuripur, "DNA, Human Memory, and the Storage Technology of the 21st Century," Proceedings of SPIE, vol. 4342, pp. 1-29, 2002.
[37] T. Kitamura, S. Ogawa, D. S. Roy, T. Okuyama, M. Morrissey, L. Smith, R. Redondo and S. Tonegawa, "Engrams and circuits crucial for systems consolidation of a memory," Science, vol. 356, no. 6333, pp. 73-78, 2017.
[38] H. Wiseman, "The ultimate quantum limit to the linewidth of lasers," arXiv:quant-ph/9903082, 1999.
[39] S. Deffner and S. Campbell, "Quantum speed limits: from Heisenberg's uncertainty principle to optimal quantum control," arXiv:1705.08023 [quant-ph], 2017.
[40] S. J. Strickler, "Electronic Partition Function Paradox," Journal of Chemical Education, vol. 43, no. 7, pp. 364-366, 1966.
[41] S. Viznyuk, "OEIS sequence A210238," 2012. [Online]. Available: http://oeis.org/A210238. [Accessed 21 April 2015].
Table 1
, value pairs calculated from (6) for four sets of parameters, using the [14]
algorithm for finding partitions of an integer into parts. For each partition, I
calculated the value of and the multiplicity of the multinomial coefficient in (1) [41].
Finally, for each distinct value of , I produced the results for the table.
The table displayed the first 20 and the last 10 records. [Tabulated value pairs omitted.]
Figure 1
Distinct values of in increasing order, calculated from (6) with (2), using the [14] algorithm
for finding partitions of an integer into parts. The values of M and N are given
on the graphs (panels M=3 N=30 and M=3 N=900). The graphs represent the complete set of distinct values of for the given
values of M and N. The graphs demonstrate a close to linear dependence of on the "quantum
number" in the vicinity of the equilibrium . This is a characteristic feature of a sample
with cardinality . Away from equilibrium the linear behavior is violated, as the
interval between energy levels begins to grow to reach
Figure 2
Distinct values of in increasing order, calculated from (6) with (2), using the [14]
algorithm for finding partitions of an integer into parts. The values of M and
N are given on the graphs (panels M=2 N=200 and M=7 N=70). The graphs represent the complete set of distinct values of
for the given values of M and N.
Figure 3
The total number of distinct event samples (curves 1, 3), and the total number
of distinct values (curves 2, 4), as functions of for two sets of
probabilities:
1. for
2. for
3. for
4. for
The values on curve 1 are greater than on curve 2 by a factor as . The
values on curve 3 are greater than on curve 4 by a factor as .
Using Stirling's approximation for large in (11), one can show the curves grow
proportionally to as
Figure 4
Function calculated for two sets of probabilities.
Blue lines were calculated using formula (4). Red lines were
calculated using the thermodynamic limit approximation (15).
Figure 5
Values of calculated as a function of with probabilities (2), for
four sets of parameters:
1.
2.
3.
4.
Blue lines were calculated using formula (6). Green dash lines were
calculated using the thermodynamic limit approximation (16). Red lines were
calculated using the quadratic form approximation (22). For a given value of
, the values were distributed proportionally to the corresponding
probabilities. For large values of , the blue lines and green
dash lines overlap closely, as seen on curves 1 and 3. For small values of ,
the thermodynamic limit approximation is not accurate, and the blue lines differ
from the green dash lines, as seen on curves 2 and 4. Red lines overlap with blue
lines in close proximity to the minimum (7) of .
Figure 6
Values of calculated as a function of for
and four sets of probabilities:
1.
2.
3.
4.
Blue lines were calculated using formula (1). Red lines were
calculated using the multivariate normal approximation (25). For the
given value of , the distribution of values is proportional
to the corresponding probabilities.
Figure 7
The number of object states having , as a function of ,
for three sets of the parameters and probabilities (2):
1.
2.
3.
Solid lines are the results of calculation using formulas (1, 6). Dash lines
represent the thermodynamic limit approximation (33). The graphs show the
thermodynamic limit provides a better approximation the larger the ratio
. Solid lines level off close to , because the density of states per interval
decreases near , due to the non-spherical -domain boundary. The
boundary is defined by
Figure 8
The mean value , the variance , and the third moment vs. sample size N, for
three values of and probabilities (2). The graphs have been calculated using
expressions (39-41) with probability mass function (1). The value of the third moment
is reduced by a factor of 2, to show its asymptotic behavior in comparison with and
. For each set of parameters, the curves approach the values as
Figure 9
Comparison of in (68) with
Figure 10
In blue is the thermodynamic equilibrium entropy of a standalone object,
calculated from , where is given by (66). In red is the
thermodynamic equilibrium entropy calculated from (69).
The two sets of graphs are for two values of
Figure 11
Difference between adjacent energy levels (6), averaged over distinct event
samples with the given values of and . The curve is approximated by (70) as
Figure 14
The blue line is the calculation of from (88-90), with parameters
, and event sample
generated with multinomial pmf (1).
The red line is the Gaussian profile (91), where has been
computed from the event sample generated for the blue line.
The MATLAB code used in the calculation:
http://phystech.com/download/gaussian_dispersion.m
Figure 15
The blue line is the calculation of from (88-89, 92), with parameters
, and event sample generated with
multinomial pmf (1).
The red line is the exponential decay (93), where has been
computed from the event sample generated for the blue line.
The MATLAB code used in the calculation:
http://phystech.com/download/exponential_dispersion.m
Figure 16
Blue crosses, visible inside the red circles, are values of
,
where is the total number of objects; is the per-object information
outflow rate; is the number of objects remaining in the initial mode after
transitions (96). Red circles are the corresponding values of
,
where is calculated as (98) with (24). Transitions (96) are randomly
generated at a rate of transitions per object per interval.
The MATLAB code used in the calculation:
http://phystech.com/download/time_entropy.m
with parameters: