Preprint
From QM to KM
Sergei Viznyuk
Understanding of physical reality is rooted in knowledge obtained from observations. That knowledge is encoded in a variety of forms, from a sequence of letters in a book to neural circuits in a brain. At the core, any encoded knowledge is a sample of correlated events (symbols). I show that event samples bear attributes of physical reality: energy, temperature, momentum, mass. I show that treating measurement as event sampling is consistent with the predictions of quantum mechanics (QM). I discuss QM basics: the wave function, the Born rule, and the Schrödinger equation, emphasizing their true meaning, which is rarely, if ever, mentioned in textbooks. I derive similar expressions using the event sample as the base construct, demonstrating the connection between QM and the presented model. I explain the mechanics of observation and the role of the observer. I show how the model extends to include dispersion, decoherence, and the transition from quantum to classical state. I prove that decoherence is a key factor in Fermi's golden rule, in Planck's radiation law, and in the emergence of time. The controversial aspects of QM, such as wave function collapse and the measurement problem, do not appear in the presented framework, which I call knowledge mechanics (KM).
As for prophecies, they will pass away; as for tongues,
they will cease; as for knowledge, it will pass away.
1 Corinthians 13:8
Physical properties, such as temperature, energy, entropy, pressure, and phenomena such as
Bose-Einstein condensation are exhibited not just by “real” physical systems, but also by virtual
entities such as binary or character strings [1, 2, 3], the World Wide Web [4], business and citation networks [5], and the economy [6, 7]. Quantum-mechanical behavior has been observed in objects as different as electrons, electromagnetic waves, and nanomechanical oscillators [8]. There must be a
mechanism which accounts for the grand commonality in observed behavior of vastly different
entities. Scientists have recently discovered that various complex systems have an underlying
architecture governed by shared organizing principles [9]. The present-day quantum mechanics
is a limiting case of some more unified scheme. Such a theory would have to provide, as an appropriate limit, something equivalent to a unitarily evolving state vector [10].
There are two factors present in all theories. One is the all-pervading time, and the other is the
observer's mind. A successful grand-commonality model must explain the nature of time, specify the mechanism by which physical reality projects onto the mind of the observer, and relate time to that projection. Conventional theory routinely manipulates notions such as energy, distance, and electric charge without providing their definitions, treating them as God-given things. A self-contained model must define any notion it operates with. The definition must be confined within the model. A complete theory must not contain underived fundamental physical constants. The conventional QM provides accurate predictions, yet without a clear model of the system [11]. This work describes
a model wherein state vector formalism for correlated event samples is similar to mathematical
apparatus of conventional QM. I expound some notions left obscure by the conventional theory,
e.g. the notions of time, of observer, of measurement.
In this preamble I outline the model in defined terms. In Section 2 I develop the concepts of
energy, of knowledge, and of knowledge confidence. I calculate energy spectra for some event
samples and show their similarity with known QM constructs: particle in a box, and quantum
oscillator. In Section 3, I consider an ensemble of uncorrelated particles. I derive the notion of
temperature, and the First Law of Thermodynamics. In Section 4, I discuss conventional QM
approach to measurement. I apply state vector formalism to an event sample to obtain Born rule,
and Schrödinger equation. I state the equivalence of Born rule and of coefficient of determination. I
extend the formalism to include numeric methods of handling dispersion and decoherence. I show that decoherence is a key factor in Fermi's golden rule and in Planck's radiation formula. I show how decoherence leads to the Second Law of Thermodynamics and to the emergence of time.
A description of quantum-level physical reality (PR) is attained as a sample of eigenstates
, where is a set of eigenstates PR can be observed in. In information terms, the eigenstate
is an event; is the number of occurrences of -th event in the sample. The acquisition of a sample
is measurement. The sample is encoded in an information-holding construct, which I call the
observer. An event is elementary if it is not a direct product of other events. In some contexts, I call an event a particle. Particles are said to be entangled if they are parts of a non-elementary event. The event sample describes a quantum object if events sampled in different observation bases are correlated. An ensemble of uncorrelated particles represents a classical object.
I call a variation of the observation basis a transformation. A transformation correlates with a change in macroscopic parameters, notably time, position, etc. I call a change of the eigenstate population numbers, associated with a transformation, a transition.
The quantitative measure of information (knowledge) about an object is obtained from the event sample as the difference between the entropy of equilibrium (the maximum-probability state) and the entropy of the statistical ensemble. The calculation is based on the understanding of entropy as a measure of missing information (i.e. the amount of unknown). The sum of knowledge (i.e. the amount of known) and the amount of unknown equals the entropy of equilibrium.
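This definition can be sketched numerically. The sketch below assumes the sample entropy is the Shannon entropy of the empirical frequencies and the equilibrium entropy of M equiprobable elementary eigenstates is ln M (in nats); function names are mine:

```python
import math

def shannon_entropy(counts):
    """Entropy (in nats) of the empirical distribution of an event sample."""
    N = sum(counts)
    return -sum(n / N * math.log(n / N) for n in counts if n > 0)

def knowledge(counts):
    """Knowledge = equilibrium entropy ln(M) minus sample entropy,
    for M elementary (equiprobable) eigenstates."""
    return math.log(len(counts)) - shannon_entropy(counts)

print(knowledge([10, 0]))   # maximal knowledge: ln 2 ≈ 0.693 nats
print(knowledge([5, 5]))    # equilibrium sample: ≈ 0 (nothing is known)
```

The two extremes illustrate the bookkeeping: a sample consisting of one repeated event carries the full ln M nats, while an equilibrium sample carries none.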
All graphs have been pushed to the end of the paper to keep the text more focused. This paper is a substantial rewrite of its predecessor [12], most significantly in Section 4.
The unconditional probability of an event sample $\{n_i\}$ is given by the multinomial probability mass function (pmf):

$P(\{n_i\}) = N!\,\prod_i \frac{p_i^{n_i}}{n_i!}$   (1)

, where $p_i$ is the probability of sampling eigenstate $i$ from the set of eigenstates, and $N = \sum_i n_i$ is the sample size. The elementary eigenstates would have equal probability:

$p_i = 1/M$   (2)

, where $M$ is the cardinality of that set. I introduce functions as follows:
, where is gamma function;
 
Here is Shannon’s [13] unit entropy, and  is the entropy of equilibrium.
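The multinomial pmf (1) and the equal-probability case (2) can be checked numerically; a minimal sketch (function and variable names are mine):

```python
import math

def multinomial_pmf(counts, probs):
    """Unconditional probability (1) of an event sample {n_i}:
    P = N! * prod_i p_i^{n_i} / n_i!,  with N = sum_i n_i."""
    N = sum(counts)
    p = math.factorial(N)
    for n, pi in zip(counts, probs):
        p *= pi ** n / math.factorial(n)
    return p

# Elementary eigenstates (2): equal probabilities p_i = 1/M
M = 2
probs = [1.0 / M] * M
print(multinomial_pmf((2, 0), probs))  # 0.25
print(multinomial_pmf((1, 1), probs))  # 0.5
# The pmf sums to one over all samples of size N = 5:
print(sum(multinomial_pmf((k, 5 - k), probs) for k in range(6)))  # 1.0
```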
With (4-6), I rewrite (1) as
From (9), the probability of an event sample , among all samples of the same size , is
determined solely by the value of . If I’m to use as a single independent variable,
I can write (9) in domain as:
Here  is the multiplicity (degeneracy) of the given value
, i.e. a number of ways
the same value of is realized by different samples with given parameters. There is no
analytic expression for, however, it is numerically computable. Table 1 contains ,
 values calculated for several sets of parameters. Figures 1-2 show distinct
values of in increasing order for several values of parameter and probabilities (2) calculated
from (6), using algorithm [14] for finding partitions of integer into parts [15]. The
sum of  over all distinct values of is the total number of distinct samples. It is equal
to the number of ways to distribute indistinguishable balls into distinguishable cells:
, where sum is over all distinct values of . Figure 3 shows the total number of distinct
event samples, and the total number of distinct values as functions of for two sets of
probabilities (2), calculated from (11) and (6) using algorithm [14]. The graphs demonstrate that:
For probabilities (2), the average degeneracy of levels 
This statement can be expressed as:
sum represents the number of distinct values of for the given parameters.
 can exceed  in some cases. E.g. 
because . Another example is,
As  is not a smooth function of (see Table 1), there could be no true probability
density in domain. I shall derive pseudo probability density to be used in expressions involving
integration by in thermodynamic limit. To be able to use analytical math, I have to extend (4-9)
from discrete variables to continuous domain. I call
Thermodynamic limit is the approximation of large occupation numbers:
In case of a sample with event probabilities (2); the multiplicity of is the multiplicity of the value of multinomial
coefficient in (1) [41]
In the thermodynamic limit, I can use Stirling's approximation for factorials:

$\ln n! \approx n\ln n - n + \tfrac{1}{2}\ln(2\pi n)$
It allows rewriting of (4, 6), for probabilities (2), as
 
 
 
Expr. (16) reverberates with the proposed [16] electron correlation energy .
Figures 4,5 demonstrate functions  and calculated for two sets of parameters
using exact expressions (4), (6), and approximations (15), (16).
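The quality of Stirling's approximation underlying (15, 16) is easy to verify numerically; the absolute error of the log-factorial falls off roughly as 1/(12n):

```python
import math

def ln_factorial_stirling(n):
    """Stirling's approximation: ln n! ≈ n ln n - n + 0.5 ln(2 pi n)."""
    return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)

for n in (10, 100, 1000):
    exact = math.lgamma(n + 1)   # ln n! via the log-gamma function
    print(n, round(exact - ln_factorial_stirling(n), 8))
```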
In thermodynamic limit, is a smooth function of approximated by positive semi-definite
quadratic form of  in the vicinity of its minimum (7):
Knowing the covariance matrix [17] of multinomial distribution (1) allows reduction of (17) to
diagonal form. The covariance matrix, divided by is:
 
, where
The rank of  is . If  is a diagonal form of , the eigenvalues of  are :
For equal probabilities (2),  
. I transform to new discrete variables:
, where is matrix with columns as unit eigenvectors of  corresponding to eigenvalues (19).
In case of and probabilities (2)
 
The eigenvector  corresponding to eigenvalue is perpendicular to hyper-plane defined
by  in M-dimensional space of coordinates, while vector  is
parallel to the hyper-plane. Therefore, in (20). I rewrite (17) in terms of new variables
I call these the canonical variables of the sample, and the conjugate the canonical momentum. I call the parameter the energy; the remaining coefficient plays the role of mass. From (5, 6), for probabilities (2), it follows:
, where
Hence, for elementary eigenstates, the energy equals the difference between the entropy of equilibrium and the entropy of the sample, i.e. energy equals knowledge [about the object state]. As entropies (5), (24) are in units of nats, so is the energy (6, 16, 22, 23).
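The eigenstructure described above can be verified numerically for equal probabilities (2); a sketch assuming the multinomial covariance divided by N reads diag(p) − p pᵀ, cf. (18):

```python
import numpy as np

M = 4
p = np.full(M, 1.0 / M)              # equal probabilities (2)
C = np.diag(p) - np.outer(p, p)      # multinomial covariance / N

w, V = np.linalg.eigh(C)
print(np.round(w, 12))               # one zero eigenvalue, the rest 1/M

# The zero-eigenvalue eigenvector is (1,...,1)/sqrt(M): it is perpendicular
# to the hyper-plane sum(n_i) = N on which all samples live.
v0 = V[:, 0]
print(np.allclose(np.abs(v0), 1 / np.sqrt(M)))  # True
```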
Figure 5 demonstrates function  calculated for two sets of parameters using exact
expression (6) and approximations (16), and (22). I plotted  instead of to show asymptotic
behavior of (6) and (16) in comparison with quadratic form (22). Using (9, 15, 22) I obtain
multivariate normal approximation [17] to multinomial distribution (1) as
 
Figure 6 shows graphs of  as a function of calculated for  and four
sets of probabilities, using exact formula (1), and multivariate normal approximation (25).
In order to derive pseudo probability density in domain, I note that:
In thermodynamic limit, the number  of distinct event samples having is
proportional to the volume of dimensional sphere of radius. This
statement can be expressed as
  
The sum in (26) is over all distinct values which are less than or equal to the given value. The function is determined from the normalization requirement:
In order to convert from sums to integrals over continuous variable , I define pseudo density
 of object states as
The corresponding pseudo probability density  is given by (10). The normalization
requirement for these functions becomes:
 
 
The  value is obtained from (6) by having event with lowest probability 
acquire maximum population:  . From (6), as :
For probabilities (2):
From (31)  as . That allows replacing  in the upper limit of integral in (29)
with . The expression for function  in (26) is [17]:
 
Using (32, 15) I write (26) as
, where
is the volume of dimensional
sphere of radius .
The number of distinct values of in limit can be estimated from (33, 12) as
 
From (34), I can approximately enumerate distinct energy levels by “quantum number” :
From (28, 33) the pseudo density  of object states is:
I use condition (11) to define effective 
 value:
Figure 7 shows  calculated from expressions (1, 6), and from formula (33). From (10,
15, 36), the pseudo probability density function (pdf) of event samples in thermodynamic limit is
, where  is the pdf of gamma [17] distribution with scale parameter, and shape
. I calculate moments of in equilibrium:
 moment
about mean:
The sums in (39-41) are over all partitions of N. Expression (38) allows explicit calculation of all
moments of in thermodynamic limit. From (38) the mean value , the variance, and the
third moment , in equilibrium, are:
Figure 8 shows calculations of , , and from expressions (39-41) for the moments, with
probability mass function (1). It demonstrates how these values asymptotically approach
thermodynamic limit values (42-44) as.
The knowledge about the object's state is not full if the sample size is finite [13]. Even in the case of maximum knowledge, when the sample consists of repetitions of the same event, there is a non-zero probability that a larger sample will return two distinct events. I define knowledge confidence
To illustrate the notion of knowledge confidence, imagine an experiment to determine the polarization of a light source with a polarizer coupled to a light detector. If all photons from the source arrive at the detector, does it mean I know the polarization of the light source with absolute certainty? The answer is no, since there is a chance that, if I repeat the experiment with more photons, at least one photon will be lost in the polarizer.
I shall demonstrate how the presented model correlates with some known constructs. Consider a one-dimensional quantum harmonic oscillator. Its energy levels [18] are given by:

$E_n = \varepsilon\,(n + \tfrac{1}{2})$   (46)

, where $\varepsilon = \hbar\omega$ is the interval between energy levels; $n = 0, 1, 2, \ldots$. Energy levels (46) are equally spaced. The energy levels of an event sample of cardinality $M=3$ exhibit a similar pattern. As shown in Figure 1, the linear dependence on quantum number $n$ holds reasonably well if $n$ is not too large. From (12), the linearity breaks down once the quantum number grows too large. From (22):
, where
From above, the energy levels of an event sample of cardinality are:
, where the coefficients are Loeschian numbers [19]. With (46, 49), I can write the comparison table of the first few energy levels of the quantum harmonic oscillator and of the event sample of cardinality $M=3$:

[comparison table not recoverable in this version; black boxes designated missing energy levels]

In the second row, the energy levels shown in shaded boxes are only realized for samples with sizes satisfying one condition on the remainder of division of $N$ by 3, and the energy levels shown in white boxes are realized for samples satisfying the complementary condition.
Consider another quantum-mechanical example: a particle of mass $m$ in a box of size $L$. Its energy levels [18] are given by:

$E_n = \frac{\pi^2\hbar^2}{2mL^2}\,n^2$   (50)

In the presented model, a similar energy spectrum is exhibited by an event sample of cardinality $M=2$, as shown in Figure 2. From (47), the energy levels of an event sample of cardinality $M=2$, in the thermodynamic limit approximation, are:
, where
is to be considered as the effective mass of the particle.
Energy levels (51) with even are only possible when is even, and energy levels with odd are
only possible when is odd. With ½ probability the lowest energy level is , and with
½ probability it is , in units of .
In the previous section, the event sample represented a single object. In this section I consider a collection of objects, each object represented by an event sample of size $N$ and cardinality $M$. I call such a collection a thermodynamic ensemble. Objects with different $N$ or $M$ belong to different thermodynamic ensembles. I call the event sample a mode. I designate the set of modes an object may occupy, and the number of objects in each mode:
The probability for an object to be in mode is given by (1, 9). Objects in the same mode are
indistinguishable, by definition of measurement. The probability mass function of distribution of
modes among objects is:
The objective is to find equilibrium, i.e. the most probable distribution . For a standalone
object, the most probable distribution is the one which maximizes (54):
Consider objects to be part of thermodynamic ensemble in a certain state. That imposes conditions
on distribution of modes, so relations (42-44), (55) may no longer hold. I consider one of the
possible conditions and show how it leads to the notion of temperature. Let the state of
thermodynamic ensemble be such that the mean energy of objects in equilibrium is , which
may be different from equilibrium mean energy of a standalone object (42). Then:
To find the most probable distribution of modes , I shall maximize logarithm of (54) using
method of Lagrange multipliers [20] with conditions (53, 56):
From (57, 56, 53) I obtain the following equation involving Lagrange multipliers and :
, where is the digamma function, and the Lagrange multipliers are to be determined by solving (58) for the mode populations:
, and by plugging from (59) into (56) and (53). In (59),  is the inverse digamma function,
and  
. The parameter is commonly known as temperature.
Since the number of objects in mode cannot be negative, expression (59) effectively limits
modes which can be present in equilibrium to those satisfying
, where  is EulerMascheroni constant. With approximation [21]:
 
I rewrite (59) as:
Presence of  term in (61) leads to a computationally horrendous task of calculating and ,
because the summation in (53, 56) has to be only performed for modes satisfying (60). I shall leave
the exact computation to a separate exercise, and make a shortcut, by ignoring  term in (61).
This approximation is equivalent to Boltzmann's postulate that the number of objects in a mode, in equilibrium, is proportional to the Boltzmann factor $e^{-E/T}$. The shortcut allows calculation of the Lagrange multiplier from (53):
, where
Using (36), the partition function in (62) can be evaluated as:
The equation (56) then becomes
While widely used, this postulate has the rather unphysical consequence that there is a non-zero probability of finding an object in a mode with arbitrarily large energy. Another consequence is the divergence of the partition function for some constructs, e.g. hydrogen electronic levels [40].
Eq. (64) is the relation [22] between the mean per-particle energy and temperature in an $(M-1)$-dimensional Maxwell-Boltzmann gas.
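Relation (64) can be illustrated by averaging the energy over a Boltzmann weight with the ideal-gas density of states; a sketch assuming g(E) ∝ E^((M−3)/2) (the (M−1)-degrees-of-freedom form), which gives ⟨E⟩ = (M−1)T/2:

```python
import numpy as np

def mean_energy(M, T, n_grid=200_000, e_max=60.0):
    """Boltzmann average <E> over a continuous density of states
    g(E) ~ E^((M-3)/2), the (M-1)-dimensional ideal-gas form."""
    E = np.linspace(1e-9, e_max * T, n_grid)
    w = E ** ((M - 3) / 2) * np.exp(-E / T)   # g(E) * Boltzmann factor
    return (E * w).sum() / w.sum()            # ratio of Riemann sums

for M, T in ((3, 1.0), (5, 2.0)):
    print(M, T, round(mean_energy(M, T), 3), (M - 1) * T / 2)
```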
The thermodynamic equilibrium per-object entropy is the number of nats required to encode the distribution of modes in equilibrium. Using (15) and (64), I evaluate it in the limit:
, where
In the case of three degrees of freedom, expression (65) turns into the equivalent of the Sackur-Tetrode equation [22] for the entropy of an ideal gas. For the thermodynamic equilibrium entropy of a standalone object, instead of (65), I have:
The difference between entropies (65) and (67) is due to the spread in object energies. The non-zero thermodynamic entropy means the mode is unknown prior to each observation. I rewrite (67) as:
; where
The expression in (68) was derived in the thermodynamic limit, i.e. for large occupation numbers. By comparing the exact and the limiting forms (Figure 9), I see they are fairly close, except where the thermodynamic limit approximation becomes less valid anyhow. Therefore, I can use the limiting form in (68) and write the thermodynamic equilibrium entropy as:
Figure 10 shows the comparison of the thermodynamic equilibrium entropy of a standalone object, calculated exactly and from (69). Since the entropy should be non-negative, the temperature cannot be less than a certain minimum value.
The expression for the partition function in (63) has been derived in the thermodynamic limit approximation. It means a large number of energy levels must be included in sum (63), i.e. the temperature cannot be too small. Therefore, expressions (63-64) are only valid for temperatures well above the characteristic difference between adjacent energy levels.
For an event sample of cardinality $M=3$, the approximately evenly spaced energy levels (Figure 1) allow for a more accurate expression for the partition function. From (35), the characteristic difference between adjacent energy levels is:
Figure 11 shows the numeric calculation of the difference between adjacent energy levels, averaged over distinct event samples with the given parameters. I can use (46), with degeneracy 6 for each level, as an approximation for the combined energy levels (49) in the expression for the partition function (63), and obtain the mean energy of modes as [23]:
 
Eq. (71) reduces to (64) in the limit of small level spacing. (71) has been derived using the linear dependence (46) of energy levels on quantum number $n$, i.e. in the small-$n$ limit. For a black-body spectrum, the validity condition of (71) has to be satisfied across the whole spectrum as well. Therefore, a typical black-body spectrum can only be exhibited by ensembles with sufficiently large samples. An example is the cosmic microwave background. The higher the temperature, the less accurate (71) will be in the high-frequency region, where the spectral intensity falls off steeper than in (71). Such a deviation from black-body is obvious in the solar spectrum.
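A sketch of the mean mode energy (71) under the stated assumptions (equally spaced levels with spacing ε and a zero-point term ε/2, units where k_B = 1), showing the classical limit ⟨E⟩ → T for ε ≪ T:

```python
import math

def planck_mean_energy(eps, T):
    """Mean energy of a mode with levels E_n = eps*(n + 1/2), cf. (71):
    <E> = eps/2 + eps / (exp(eps/T) - 1)."""
    return eps / 2 + eps / math.expm1(eps / T)

T = 1.0
print(planck_mean_energy(0.001, T))  # classical limit: ≈ T
print(planck_mean_energy(20.0, T))   # frozen mode: ≈ eps/2
```

For ε ≫ T the thermal occupation is exponentially suppressed and only the zero-point term ε/2 remains; for ε ≪ T the zero-point term cancels against the expansion of the Planck factor and the classical result T is recovered.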
The zero-point energy term $\varepsilon/2$ in (71) is the subject of a hundred-year controversy [24, 25]. The conventional theory views radiation as existing "out there", decoupled from the matter. Such a view leads to an infinite energy density due to the infinite number of hypothetically possible decoupled radiation modes, each multiplied by $\varepsilon/2$ [23]. The conventional theory has no upper limit on frequency, short of an artificial cut-off, usually assumed at the Planck energy. Even with a frequency cut-off, there is still a discrepancy with empirical evidence of at least 58 orders of magnitude [25], possibly the biggest contradiction of any theory. In the presented model, the zero-point term cannot contribute more than $\varepsilon/2$ to the average energy, i.e. its contribution is well within the standard deviation.
Thermodynamic ensemble is the statistical ensemble of non-elementary eigenstates whose
probabilities are given by (66), as opposed to elementary eigenstates’ probabilities given by
(2). Using (66, 5, 24) in formula (6) for energy of arbitrary non-equilibrium state, and substituting
for , for , and for :
 is the entropy (5) of the ensemble, and  is objects’ mean energy, in equilibrium. I
rewrite (72) in terms of per-object quantities
, in the limit :
Here  ; 
 is the deviation of objectsmean internal energy, and
of mean thermodynamic entropy, from their equilibrium values;  is the work done on
ensemble. As expected, from (73),
. Eq. (73) represents the First Law of Thermodynamics.
For ensemble of non-elementary eigenstates,
, where
 is the maximum entropy,
achieved with equal population numbers in (54). It means, in thermodynamic ensemble,
according to (72, 73), part of the knowledge is associated with objectsinternal energy.
Measurement is one of the most debated topics in conventional theory [26, 27]. The controversy is stirred by the discreteness of outcomes of measurement on quantum objects. The concept of wave function collapse was devised early on, more as an illustration than an explanation. The collapse concept is an awkward amalgamation of the quantum postulate [28] and an implicit assumption of the wave function's physical reality (PR). The collapse concept is prevalent [29] despite its contradiction with other accepted frameworks, such as special relativity. A burlesque scenario can be imagined: using the wave function of a photon, which has to be non-zero everywhere up to the moment of measurement, one could instantly communicate with an observer on the opposite side of the galaxy, by absorbing photons coming from a star near the center of the galaxy, and thus impacting the probability of the same photons being detected by the remote observer. A similar scenario inspired the EPR paradox [30]. To work around the problem, a number of alternative QM interpretations, such as many-worlds [31] and pilot wave [32], have been proposed, still maintaining the PR viewpoint.
The measurement problem disappears if we stop attributing PR to wave function, and stop
mistaking correlation for causality. In this section I show the concept of wave function is
superfluous, and that treating measurement as event sampling is consistent with predictions of
conventional QM, without its controversial baggage. I discuss the basics of conventional QM:
wave function, Born rule, and Schrödinger equation, emphasizing their true meaning which is
rarely, if ever, mentioned in textbooks. I derive similar expressions using event sample as base
construct, demonstrating the connection between QM and the presented model. I explain the
mechanics of observation and the role of the observer. I show how the model extends to include the transition from quantum to classical state, dispersion, and decoherence.
The following experiment illustrates the delusion behind the wave function collapse concept and the EPR "paradox": put a pair of gloves (left-hand and right-hand) randomly into two boxes, and let Alice and Bob each pick a box. Until one of them opens a box, no one knows who has which glove. To the same effect, the pair of gloves can be substituted by a pair of entangled particles having opposite spins, in a gedankenexperiment. Conventional QM describes the situation as the superposition

$\tfrac{1}{\sqrt{2}}\left(|LR\rangle \pm |RL\rangle\right)$   (74)

, where the signs enforce a parity flip if Alice and Bob swap boxes, since the mirror image of a left-hand glove is a right-hand glove. The lower sign corresponds to the mirror image. If Alice finds a left-hand glove in her box, Bob will find a right-hand glove in his. In this case, conventional theory says, the wave function (74) collapsed into the corresponding eigenstate. Has Alice finding a left-hand glove caused the collapse of (74), and, as a result, Bob to find a right-hand glove in his box? Of course not. That is correlation, not causality, just like the EPR case, where the authors considered a pair of entangled particles. The distinct outcomes of the measurement (the event sample) are
determined by the observation basis. A sampled event is one of object’s eigenstates in a given
observation basis. In example above, the object consists of two particles: two glove boxes,
entangled by glove pairing. Each particle produces event sample of cardinality (left or right
glove). The events produced by different particles are correlated due to entanglement between the
particles, even though events in each particle basis are completely random. The object’s eigenstates
are composed as direct product of particle eigenstates. The state (74) is one of object’s eigenstates
in preparation basis. The other eigenstates could be constructed as:
In the measurement basis, the object’s eigenstates are:
There are two observation bases involved in determining the conditional probability of finding
object in state , given it was prepared in state . The observation bases are usually related by a
[macroscopic parameter]-driven unitary transformation, albeit there are examples from quantum
field theory (QFT), when they are not [33]. There could be no unitary transformation between
observation bases of different cardinality. In the example above, the observation bases of preparation and of measurement are rotated with respect to each other within the plane formed by the basis vectors. The observation basis rotated when the boxes became identifiable (i.e. separated) by a parameter (distance): one box got to Alice and the other box to Bob. This is an example of a unitary transformation of the observation basis with distance as the transformation parameter.
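The glove example can be simulated directly: each observer's marginal outcomes are random, yet the joint outcomes are perfectly anti-correlated, with no causation involved (a toy sketch):

```python
import random

random.seed(1)
trials = 10_000
alice, bob = [], []
for _ in range(trials):
    a = random.choice("LR")                 # glove that lands in Alice's box
    alice.append(a)
    bob.append("R" if a == "L" else "L")    # Bob necessarily gets the other

# Each marginal looks like a fair coin...
print(abs(alice.count("L") / trials - 0.5) < 0.02)
# ...but the pair is perfectly anti-correlated, without any causation:
print(all(a != b for a, b in zip(alice, bob)))  # True
```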
At the core of conventional QM lies the Born rule. It stipulates that the probability of a particular outcome of a measurement performed on a state [vector] is

$P(\phi\,|\,\psi) = |\langle\phi|\psi\rangle|^2 = \cos^2\theta$   (75)

, where $\theta$ is the angle between $|\phi\rangle$ and $|\psi\rangle$. If the vectors constitute an eigenbasis, then the probabilities (75) of all outcomes sum to unity.
The rest of QM deals with how probabilities (75) change with parameter-driven unitary
transformation of eigenbasis . The Born rule is a conditional probability, i.e. a probability of
outcome , given the object was prepared in state . In example above, the conditional probability
of outcomes , given preparation (74) are:
Conventional QM does not deal with unconditional probabilities. QM predicts results of a
measurement, if the state of an object is already known in some observation basis. The conditional
measurement requires two or more particles to be entangled via some medium [23]. The
entanglement enforces correlation between object’s eigenstates in preparation and measurement
bases. That is the underlying setting, albeit not widely acknowledged, for Born rule.
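A minimal numerical sketch of the Born rule (75) for the glove example, assuming a hypothetical two-qubit product-basis encoding of the preparation state (74) with the lower sign:

```python
import numpy as np

def born_probability(phi, psi):
    """Born rule (75): P(phi | psi) = |<phi|psi>|^2."""
    return abs(np.vdot(phi, psi)) ** 2

# Assumed encoding: |psi> = (|LR> - |RL>)/sqrt(2) in basis (LL, LR, RL, RR)
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
LL, LR, RL, RR = np.eye(4)

print(round(born_probability(LR, psi), 12))  # 0.5: Alice L, Bob R
print(round(born_probability(RL, psi), 12))  # 0.5: Alice R, Bob L
print(round(born_probability(LL, psi), 12))  # 0.0: never both left-handed
```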
For any state there exists an observation basis in which has only one non-zero eigenstate
, where is a unitary transformation to eigenbasis in which . It corresponds
to a situation when all measurement events are a particular outcome, as e.g. the detection of polarized photons using a polarizer aligned with the photon polarization.
The Schrödinger equation, in its true meaning, describes a parameter-driven unitary transformation of the observation basis. The usual parameter of transformation is time, but it can be another parameter, e.g. distance. The integral form of the Schrödinger equation,

$|\psi(t)\rangle = e^{-i\hat{H}t/\hbar}\,|\psi(0)\rangle$   (78)

, is the same equation as (77), where $e^{-i\hat{H}t/\hbar}$ is a unitary transform generated by the Hermitian operator $\hat{H}$. From (77), the eigenbasis components of a quantum state must have certain phase relations with each other, enforced by the generator. In the case of a time-driven unitary transformation (78), generated by a time-independent $\hat{H}$, the phase relations between the eigenvector components of $|\psi(t)\rangle$ and $|\psi(0)\rangle$ are

$c_k(t) = e^{-iE_k t/\hbar}\,c_k(0)$   (79)

, where $E_k$ are the eigenvalues of $\hat{H}$. If the phase relations do not exist, or only exist for some components,
then we have a mixed state, or a partially mixed state. The mixed state is that of a classical object,
the pure state is that of a quantum object, and partially mixed state is that of an object in transition
from quantum to classical.
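The phase relations (79) can be illustrated numerically; a sketch with ħ = 1 and an arbitrary Hermitian generator (all names are mine):

```python
import numpy as np

# Unitary evolution generated by a time-independent Hermitian H (hbar = 1):
# U(t) = exp(-i H t), the integral form (78) of the Schroedinger equation.
H = np.array([[1.0, 0.5],
              [0.5, 2.0]])           # arbitrary Hermitian generator
E, V = np.linalg.eigh(H)

def U(t):
    return V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

psi0 = np.array([1.0, 0.0], dtype=complex)
t = 0.7
psi_t = U(t) @ psi0

print(round(np.linalg.norm(psi_t), 12))  # unitarity: norm stays 1.0

# Phase relations (79): each eigenbasis component only acquires exp(-i E_k t)
c0, ct = V.conj().T @ psi0, V.conj().T @ psi_t
print(np.allclose(ct, np.exp(-1j * E * t) * c0))  # True
```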
An outcome of a single act of measurement is an event. The measurement involves collecting a sample of events. The term measurement is thus synonymous with sampling. The measurement events on a quantum object are formed as a direct product (entanglement) of [more elementary] constituent events, as in the example with glove boxes. The entanglement imposes
correlation (phase relations) between measurement events taken in different observation bases, i.e. corresponding to different macroscopic parameters. The constituent events come from the measuring device, from the environment, and even from observer memory. A taken event sample, decoupled from the measuring device and from the environment, is stored in observer memory. The observation process is illustrated in Figure 12.
The captured event sample  can be encoded and stored as, e.g.:
1. Electronic spin configuration in magnetized materials [34]
2. Charge distribution in capacitor elements of charge-coupled devices (CCD) [35]
3. Sequence of nucleotide bases in a DNA strand [36]
4. Neural circuits [37]
It can be proven that the knowledge encoded in any form can be construed as a statistical ensemble
 of orthogonal (uncorrelated) eigenstates (events), with set being the encoding alphabet.
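The footnoted factorization argument can be sketched in code (a minimal illustration of mine; the function name is hypothetical): any binary string, read as an integer, factors uniquely into primes, and the prime exponents form the statistical ensemble, one orthogonal eigenstate per prime.

```python
from math import prod

def prime_ensemble(K: int) -> dict:
    """Factor K into primes by trial division; return {prime: exponent},
    i.e. the statistical ensemble of orthogonal eigenstates (one per prime)."""
    factors, p = {}, 2
    while p * p <= K:
        while K % p == 0:
            factors[p] = factors.get(p, 0) + 1
            K //= p
        p += 1
    if K > 1:
        factors[K] = factors.get(K, 0) + 1
    return factors

# binary string "101100" read as an integer: 44 = 2^2 * 11
ens = prime_ensemble(44)
print(ens)                                    # {2: 2, 11: 1}
assert prod(p**n for p, n in ens.items()) == 44
```

Because factorization is unique, the ensemble encodes the same knowledge as the original string.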
For a pure state there exists an observation basis (77) in which the measurement would only
return event . In such a basis, the entropy (24) . The state is known,
by knowledge amount (23), with confidence (45). In a different observation basis, the
measurement sample may consist of events , with entropy . The sample
may not carry the same amount of knowledge as the sample taken in the  basis, because a sample with
 does not uniquely identify the state. E.g., circularly polarized light (eigenstate
) will produce the same event sample, when measured with linear polarizer, with any orientation
of the latter within the plane perpendicular to light propagation, i.e. under any unitary
transformation which has  as one of its eigenvectors.

[Footnote] Perhaps a simplistic proof is to consider encoded knowledge as a sequence of yes/no answers in the form of a binary string, which can also be represented as an integer number . Factorization of  into primes yields a statistical ensemble where  is the exponent of prime  in the factorization. As the prime factors are uncorrelated, the associated eigenstates are orthogonal.

Figure 12: ,  are event samples, taken in observation bases corresponding to parameters , ; ,  are decoupled from the measuring device and from the environment, and stored in observer memory;  is the currently captured event sample. The measurement events in  are formed as a direct product of constituent events from the measuring device, environment, and memory. The feedback from memory may play a role in the emergence of consciousness.

For that class of transformations, which keep the event sample unchanged, a state vector formalism can be used to extract the knowledge
about quantum state from correlations between events taken in different observation bases. The
state vector has to incorporate the correlation mechanism. One way to incorporate correlation is
via phase relations between vector components, similar to (79).
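The circular-polarization example above can be sketched with Jones vectors (a standard construction, not the paper's formalism): a circular eigenstate gives probability 1/2 through a linear polarizer at any orientation θ, so no orientation of the polarizer yields a distinguishing sample.

```python
import numpy as np

circ = np.array([1.0, 1.0j]) / np.sqrt(2)   # circularly polarized eigenstate

def linear_projector(theta):
    # projector |e><e| onto the linear-polarizer axis at angle theta
    e = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(e, e)

for theta in np.linspace(0, np.pi, 7):
    p = np.vdot(circ, linear_projector(theta) @ circ).real
    print(round(p, 6))   # always 0.5, independent of orientation
```

Any rotation of the polarizer is a unitary transformation that has the circular state as an eigenvector, so the event sample is unchanged, exactly the class of transformations discussed above.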
The state vector has to satisfy conditions:
1. ; this is the usual normalization requirement
2. Vector should be invariant, up to a phase factor, with respect to a change in size of
event sample, at least in the limit . This is to ensure the conditional measurement
probabilities converge to (75)
3. Vector has to incorporate the relevant macroscopic parameter(s). The variation of
parameters should equate to a unitary transformation of observation basis
Vector is an abstract mathematical construct whose only purpose is to enable correct calculation
of conditional probabilities. That is how the wave function should have been treated in the first place.
The above considerations lead to the following expression, associated with measurement sample
, and macroscopic parameter :
 
, where , and  is (4). The probabilities of different events in the sample are
not necessarily equal, because measurement events are not elementary. The energy values are
associated with unconditional probabilities by eq. (9):
 
, where I added conversion constant because the imaginary argument of   has to be in
. The correlation coefficient between vectors and is:
, where  is the correlation distance. From (82), the coefficient of determination , i.e.
the fraction of outcomes in sample 2 that are predictable from sample 1, is:
 
, where  is the probabilities vector; . Antisymmetric matrix has two
non-zero purely imaginary eigenvalues, which only differ in sign:
 ;
Empirically established relations:
, where  is energy (6); . The sign in (85)
means linear correlation, rather than functional equality. Figure 13 shows the correlation between the left
and right sides of (85) for a number of event samples. In the limit  the two sides of (85) turn
into an exact equality.
Eq. (83) is routinely obtained by solving the Schrödinger equation for the probability of finding an
object, after time , in the same initial state [23]. Evidently, the Born rule (75) is equivalent to the
coefficient of determination in statistics, in being a conditional probability measure. The state
vector  is the counterpart of the wave function. The Schrödinger equation is, as expected
 
, with diagonal form of being  
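A numerical sketch of that survival probability, for a hypothetical discrete spectrum (energies and probabilities chosen by me for illustration; ħ = 1):

```python
import numpy as np

E = np.array([0.0, 1.0, 2.0, 3.0])      # hypothetical eigenvalues
p = np.array([0.4, 0.3, 0.2, 0.1])      # unconditional probabilities, sum = 1

def survival(t):
    # |<psi(0)|psi(t)>|^2 = |sum_n p_n exp(-i E_n t)|^2
    a = np.sum(p * np.exp(-1j * E * t))
    return float(abs(a) ** 2)

print(survival(0.0))        # 1.0 at t = 0
print(survival(np.pi))      # 0.04 = |0.4 - 0.3 + 0.2 - 0.1|^2
print(survival(2 * np.pi))  # back to 1.0: a commensurate spectrum revives
```

The probability is governed entirely by the phase relations between the eigencomponents, which is the point of the state-vector construction above.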
Eq. (83) represents the self-interference of an object at correlation distance . To generalize to
 objects, and to allow for dispersion and decoherence, I rewrite (83) as
, where is the dispersion matrix, defined as:
, where
Expr. (88) reduces to (83) if . Matrix determines the type of eigenstate dispersion in
correlated objects. Two distinct types of parameter-driven dispersion could be identified:
1. Coherent dispersion:
, where  is a dispersion parameter. The numeric analysis of (88-90) reveals
follows Gaussian profile (Figure 14):
The dispersion parameter  is a property of the device. In the case of coherent dispersion,
the transition rate 
2. Incoherent dispersion (decoherence):
The numeric analysis of (88-89, 92) reveals follows exponential decay (Figure 15):
Function  in (90, 92) returns a  matrix, where each row equals a
multinomial distribution of  events into  buckets, with probability vector given by (2). The
multinomial distribution of events is generated for each object , .
The condition for exponential decay (93) is the breakdown of predictable phase relations
between constituent particles, i.e. decoherence. The decoherence is imposed by the randomly
generated -matrix (92). The end result of decoherence is the mixed state, where probability is
spread equally among eigenstates of dispersed objects.
Any continuous dynamics which may be implied by (87) is a detachment from the fundamentally discrete nature of events. A
continuous process contradicts the quantum postulate, unless it is an "…abstraction, from which no unambiguous
information concerning previous or future behavior can be obtained" [28]. I provide continuous equations
like (87) only as a link to conventional theory, not as an advancement of the model.
For a thermodynamic ensemble, the exponential decay has a different context:
, where  is the number of objects remaining in the initial state ;  is the total number of objects.
The exponential decay (94) is driven by transitions of constituent elementary eigenstates. Initially,
all objects are in eigenstate , represented by event sample , having entropy
(24) . A transition changes event sample as:
The transition (96) is accompanied by a change in object entropy . The essential feature of a
classical decay process is that the per-object entropy is correlated with 
The per-object entropy is calculated from the object entropies (24) as:
The object state can be represented by 
  state vectors , having different
phase relationships between constituent events , due to different values of transformation
parameter . For a given value of the transformation parameter, the probability of finding the object in
the corresponding state is  
To prove the correlative relations (94, 97), a numeric analysis of  vs.  has been performed, with wide variation of input parameters. The product  is the number
of transitions (96) randomly distributed across  objects. Per object, a transition equates to  of
information loss. Figure 16 shows values of  and , plotted against . While  and
 are object-dependent, the  ratio is not, in the limit . The ratio  correlates with
classical time, in this analysis represented by the independent variable . This explains why time
measures derived from observations of vastly different macroscopic objects are highly correlated.
An illustration of how this correlation breaks down when  is small is the increase in laser linewidth
with a decrease in the number of photons in the mode [38], resulting in less accurate atomic clocks. In
terms of time intervals,   , where  is the per-object energy loss
corresponding to the per-object entropy change , and  is the temperature introduced in the previous
section. This relation indicates that an accurate clock has to dispense the same amount of energy per
cycle. The established correlative relation  represents the Second Law of thermodynamics.
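Assuming the mechanism described above, with transitions randomly distributed across objects at a fixed per-object rate (the symbols lam, N0 below are mine), a quick simulation confirms that -ln(N/N₀) tracks the accumulated per-object number of transitions, i.e. behaves as a clock:

```python
import numpy as np

rng = np.random.default_rng(2)
N0, lam, dt, steps = 100000, 0.5, 0.01, 300   # objects, rate, step, steps

remaining = np.ones(N0, dtype=bool)
for _ in range(steps):
    # each object is hit by a transition with probability lam*dt per step
    hit = rng.random(N0) < lam * dt
    remaining &= ~hit          # objects never hit stay in the initial state

t = steps * dt
N = remaining.sum()            # expected fraction: exp(-lam * t)
print(round(-np.log(N / N0), 3), round(lam * t, 3))   # both ~1.5
```

The left number is an observable (the surviving fraction), the right is the elapsed parameter times the rate; their agreement is the correlation that makes vastly different decaying ensembles usable as clocks.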
Above, energy and entropy are in units of nats. Parameter is dimensionless. The conversion
to SI units is:
, where
, and
The conversion (101) is between  and . Therefore, energy and time, expressed in SI
units, are not independent. If the characteristic times of all processes in nature are increased by a factor,
then all energies have to decrease by the same factor. There is no separate conversion between
energy in nats and energy in joules, or between time in seconds and the dimensionless parameter .
The conversion constant  represents the average, per-object, quantity of information loss in a
transition. Therefore, a change in the amount of knowledge, being a result of a number of transitions,
has to satisfy . With (93, 85), this leads to the speed limit [39]:
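The speed limit [39] can be checked on a two-level toy system of my own (ħ = 1): a state with energy spread ΔE first becomes orthogonal to its initial self at t = π/(2ΔE), saturating the Mandelstam-Tamm bound.

```python
import numpy as np

E = np.array([0.0, 1.0])                 # two-level spectrum, hbar = 1
c = np.array([1.0, 1.0]) / np.sqrt(2)    # equal superposition
# energy spread: sqrt(<E^2> - <E>^2) = 0.5 for this state
dE = np.sqrt(np.sum(c**2 * E**2) - np.sum(c**2 * E)**2)

def overlap(t):
    # |<psi(0)|psi(t)>| under phase evolution exp(-i E_n t)
    return abs(np.sum(c**2 * np.exp(-1j * E * t)))

t_orth = np.pi / (2 * dE)                # Mandelstam-Tamm minimal time = pi
print(round(t_orth, 6), round(overlap(t_orth), 12))  # overlap vanishes at t = pi
```

Evolving any faster than this bound would require a larger energy spread, which is the content of the speed limit.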
The values are dimensionless. They should not depend on units used for arguments in (83,
93). The conversion constant (100) takes care of that in (83). However, in (93), the resultant
expression under  is in , as it should be. If energy is expressed in joules , and time in
seconds , where would the  under the  in (93) come from? There has to be an additional
conversion parameter under the  in (93). This conversion parameter, unlike the true
conversion constants (100, 101), is a device parameter. With the conversion parameter added, the
expression under the  in (93), in SI units, becomes:
, where is a dimensional device parameter with a meaning of decoherence time [23].
For a classical radiation detector, the expression for in (103) is [23]:
, where  is the speed of light;  is the refractive index of the material;  [rad/s] is the spread in the
object's internal transition frequencies;  is the number of correlated objects per unit surface area
of the detector; and  is the dimensionless scattering rate. The value of  in (104) is conceivably double
the standard deviation of the -matrix (86), divided by , i.e.:
The expression (106) is a restatement of Fermi's golden rule for the transition probability into a
continuous spectrum near :
, where  is the density of final states per unit . By comparing (107) with expression (9) in
[23], I get a quite simple relation between the decoherence time and the density of final radiation states:
With (106, 108), for a classical radiation detector
The left side of (109) purports to be a property of the radiation field, while the right side is a
property of the detector. The fact that seemingly unrelated parameters are connected to each other by
only universal constants is an argument against considering radiation as a standalone entity with
properties independent of the measurement context [23].
The model unites the seemingly disjoint QM artifacts, Fermi's golden rule and Planck's
radiation formula [23], by exposing decoherence (93) as the driving factor in both cases.
W. Wislicki, "Thermodynamics of systems of finite sequences," J. Phys. A: Math. Gen.,
vol. 23, no. 3, p. L121, 1990.
C. Frenzen, "Explicit mean energies for the thermodynamics of systems of finite
sequences," J. Phys. A: Math. Gen., vol. 26, no. 9, p. 2269, 1993.
S. Viznyuk, "Thermodynamic properties of finite binary strings," arXiv:1001.4267 [cs.IT], 2010.
R. Albert and A.-L. Barabási, "Statistical mechanics of complex networks," Rev. Mod.
Phys., vol. 74, pp. 47-97, 2002.
G. Bianconi and A.-L. Barabási, "Bose-Einstein condensation in complex networks,"
arXiv:cond-mat/0011224 [cond-mat.dis-nn], 2000.
M. Bouchaud, "Wealth Condensation in a simple model of economy," Physica A Statistical
Mechanics and its Applications, vol. 282, p. 536, 2000.
S. Sieniutycz and P. Salamon, Finite-Time Thermodynamics and Thermoeconomics,
Taylor & Francis, 1990.
J. Chan, T. Alegre, A. Safavi-Naeini, J. Hill, A. Krause, S. Groeblacher, M. Aspelmeyer
and O. Painter, "Laser cooling of a nanomechanical oscillator into its quantum ground
state," arXiv:1106.3614 [quant-ph], 06 2011.
A.-L. Barabási and E. Bonabeau, "Scale-Free Networks," Scientific American, vol. 288, pp.
50-59, 2003.
R. Penrose, "On Gravity's Role in Quantum State Reduction," General Relativity and
Gravitation, vol. 28, no. 5, pp. 581-600, 1996.
M. Jammer, "The Conceptual Development of Quantum Mechanics," in The History of
Modern Physics, Tomash, 1989.
S. Viznyuk, "New QM framework,", 2014. [Online]. Available:
S. Viznyuk, "Shannon's entropy revisited," arXiv:1504.01407 [cs.IT], 03 2015.
K. Yamanaka, S. Kawano and K. Y., "Constant Time Generation of Integer Partitions,"
IEICE Transactions on Fundamentals of Electronics, Communications and Computer
Sciences, Vols. E90-A, no. 5, pp. 888-895, 2007.
S. Viznyuk, "OEIS sequence A210237," 2012. [Online]. Available: [Accessed 21 04 2015].
D. Collins, "Entropy Maximizations on Electron Density," Z. Naturforsch, vol. 48a, pp. 68-
74, 1993.
C. Forbes, M. Evans, N. Hastings and B. Peacock, Statistical Distributions, Fourth Edition,
Hoboken, NJ, USA: John Wiley & Sons, Inc., 2010.
D. J. Griffiths, Introduction to Quantum Mechanics, 2nd ed., Pearson Education, Inc.,
N. Sloane, "OEIS sequence A003136," 1991. [Online]. Available: [Accessed 2015].
I. Vapnyarskii, "Lagrange multipliers," in Encyclopedia of Mathematics, Springer, 2001.
M. Abramowitz and I. A. Stegun, "6.3 psi (Digamma) Function," in Handbook of Mathematical
Functions with Formulas, Graphs, and Mathematical Tables, New York: Dover, 1972.
K. Huang, Statistical Mechanics, John Wiley & Sons, 1987.
S. Viznyuk, "Planck's law revisited,", 2017. [Online]. Available:
P. Jordan and W. Pauli, "Zur Quantenelektrodynamik ladungsfreier Felder," Zeitschrift für
Physik, vol. 47, no. 3, pp. 151-173, 1928.
G. Grundler, "The zero-point energy of elementary quantum fields," arXiv:1711.03877
[physics.gen-ph], November 2017.
J. Bell, "Against 'measurement'," Physics World, August 1990.
J. Wheeler and W. Zurek, Quantum Theory and Measurement, Princeton University Press,
NJ, 1983.
N. Bohr, "The Quantum Postulate and the Recent Development of Atomic Theory,"
Nature, pp. 580-590, 14 April 1928.
M. Tegmark, "The Interpretation of Quantum Mechanics: Many Worlds or Many Words?,"
arXiv:quant-ph/9709032, 1997.
A. Einstein, B. Podolsky and N. Rosen, "Can Quantum-Mechanical Description of Physical
Reality Be Considered Complete?," Phys. Rev., vol. 47, p. 777, 1935.
H. Everett, "Relative State Formulation of Quantum Mechanics," Reviews of Modern
Physics, vol. 29, pp. 454-462, 1957.
D. Bohm, "A suggested interpretation of the quantum theory in terms of hidden variables,"
Phys. Rev., vol. 85, pp. 166-179, 1952.
R. Haag, "On Quantum Field Theories," Dan. Mat. Fys. Medd., vol. 29, no. 12, pp. 1-37, 1955.
C. Chappert, A. Fert and F. Van Dau, "The emergence of spin electronics in data storage,"
Nature Mater, vol. 6, no. 11, pp. 813-823, 2007.
M. Lesser, "Charge coupled device (CCD) image sensors," in High Performance Silicon
Imaging, Fundamentals and Applications of CMOS and CCD Sensors, Woodhead
Publishing, 2014, pp. 78-97.
M. Mansuripur, "DNA, Human Memory, and the Storage Technology of the 21st Century,"
Proceedings of SPIE, vol. 4342, pp. 1-29, 2002.
T. Kitamura, S. Ogawa, D. S. Roy, T. Okuyama, M. Morrissey, L. Smith, R. Redondo and
S. Tonegawa, "Engrams and circuits crucial for systems consolidation of a memory,"
Science, vol. 356, no. 6333, pp. 73-78, 2017.
H. Wiseman, "The ultimate quantum limit to the linewidth of lasers," arXiv:quant-
ph/9903082, 1999. [Online]. Available:
S. Deffner and S. Campbell, "Quantum speed limits: from Heisenberg's uncertainty
principle to optimal quantum control," arXiv:1705.08023 [quant-ph], 16 10 2017.
S. J. Strickler, "Electronic Partition Function Paradox," Journal of Chemical Education,
vol. 43, no. 7, pp. 364-366, 1966.
S. Viznyuk, "OEIS sequence A210238," 2012. [Online]. Available: [Accessed 21 04 2015].
Table 1
,  value pairs calculated from (6) for four sets of parameters, using the algorithm of [14]
for finding partitions of an integer  into  parts. For each partition I
calculated the value of and multiplicity  of multinomial coefficient in (1) [41].
Finally,  for each distinct value of produced the results for the table.
I display the first 20 and the last 10 records from the table.
M=3 N=30
Figure 1
Distinct values of  in increasing order, calculated from (6) with (2), using the algorithm of [14]
for finding partitions of an integer  into  parts. The values of M and N are given
on the graphs. The graphs represent the complete set of distinct values of  for the given
values of M and N. The graphs demonstrate a close-to-linear dependence of  on the "quantum
number"  in the vicinity of the equilibrium . This is a characteristic feature of a sample
with cardinality . Away from equilibrium, the linear behavior is violated as the
interval  between energy levels begins to grow, reaching 
M=3 N=900
M=2 N=200
M=7 N=70
Figure 2
Distinct values of  in increasing order, calculated from (6) with (2), using the algorithm
of [14] for finding partitions of an integer  into  parts. The values of M and
N are given on the graphs. The graphs represent the complete set of distinct values of 
for the given values of M and N.
Figure 3
The total number of distinct event samples (curves 1, 3), and the total number
of distinct values (curves 2, 4) as functions of for two sets of
1.  for
3.  for
The values on curve 1 are greater than on curve 2 by a factor  as . The
values on curve 3 are greater than on curve 4 by a factor  as .
Using Stirling's approximation for large  in (11), one can show the curves grow
proportionally to  as 
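The counts plotted in Figure 3 can be checked directly: the number of distinct event samples of size N over an alphabet of M symbols is the multiset coefficient C(N+M-1, M-1), which grows ∝ N^(M-1) for large N, consistent with the Stirling-approximation argument above (the function name below is mine).

```python
from math import comb

def distinct_samples(N: int, M: int) -> int:
    # number of ways to distribute N events into M buckets
    return comb(N + M - 1, M - 1)

print(distinct_samples(30, 3))    # 496
print(distinct_samples(200, 2))   # 201
# growth ~ N^(M-1)/(M-1)! for large N:
print(distinct_samples(1000, 3), 1000**2 // 2)   # 501501 vs 500000
```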
Figure 4
Function  calculated for two sets of probabilities.
Blue lines were calculated using formula (4). Red lines were
calculated using thermodynamic limit approximation (15)
Figure 5
Values of  calculated as a function of  with probabilities (2) for
four sets of parameters:
1. 
2. 
3. 
4. 
Blue lines were calculated using formula (6). Green dash lines were
calculated using thermodynamic limit approximation (16). Red lines were
calculated using quadratic form (22) approximation. For a given value of
the values  were distributed proportionally to corresponding
probabilities. For large value of  the blue lines and green
dash lines overlap closely as seen on curves 1 and 3. For small values of
the thermodynamic limit approximation is not accurate, and blue lines differ
from green dash lines as seen on curves 2 and 4. Red lines overlap with blue
lines in close proximity to the minimum (7) of.
Figure 6
Values of  calculated as a function of for
 and four sets of probabilities
Blue lines were calculated using formula (1). Red lines were
calculated using multivariate normal approximation (25). For the
given value of the distribution of values  is proportional
to the corresponding probabilities 
Figure 7
The number  of object states having as a function of
for three sets of the parameters and probabilities (2):
1. 
2. 
3. 
4. 
Solid lines are the results of calculation using formulas (1, 6). Dash lines
represent the thermodynamic limit approximation (33). The graphs show that the
thermodynamic limit provides a better approximation the larger the ratio
  is. Solid lines level off close to  because the density of states per interval
 decreases near , due to the non-spherical -domain boundary. The
boundary is defined by  
The first three moments of , plotted as dots vs. the total number N of microstates for
three sets of probabilities. The value of the third moment is reduced by
a factor of 2 to show its asymptotic behavior compared with the first two moments.
Figure 8
The mean value, the variance, and the third moment vs. sample size N, for
three values of  and probabilities (2). The graphs have been calculated using
expressions (39-41) with probability mass function (1). The value of the third moment
is reduced by a factor of 2 to show its asymptotic behavior compared with  and
. For each set of parameters, the curves approach  values as 
Figure 9
Comparison of
 in (68) with 
Figure 10
In blue is the thermodynamic equilibrium entropy  of a standalone object
calculated from 
, where is given by (66). In red is the
thermodynamic equilibrium entropy calculated from (69).
Two sets of graphs are for two values of 
Figure 11
Difference  between adjacent energy levels (6), averaged over distinct event
samples with given value of , and . The curve is approximated by (70) as
Figure 13
The  of the left and right sides of (85), showing a high degree of
linear correlation. The graph was produced with MATLAB code:
with parameters: 
Figure 14
Blue line is calculation of from (88-90) with parameters:
, and event sample
generated with multinomial pmf (1).
Red line is Gaussian profile (91), where  has been
computed from event sample generated for blue line.
The MATLAB code used in calculation:
Figure 15
Blue line is calculation of  from (88-89, 92) with parameters:
, and event sample generated with
multinomial pmf (1)
Red line is exponential decay (93), where  has been
computed from event sample generated for blue line.
The MATLAB code used in calculation:
Figure 16
Blue crosses, visible inside red circles, are values of 
where is the total number of objects; is the per-object information
outflow rate; is the number of objects remaining in initial mode after
transitions (96). Red circles are the corresponding values of  
where is calculated as (98) with (24). Transitions (96) are randomly
generated at a rate of transitions per object per interval.
The MATLAB code used in calculation:
with parameters:
The usual interpretation of the quantum theory is self-consistent, but it involves an assumption that cannot be tested experimentally, viz., that the most complete possible specification of an individual system is in terms of a wave function that determines only probable results of actual measurement processes. The only way of investigating the truth of this assumption is by trying to find some other interpretation of the quantum theory in terms of at present "hidden" variables, which in principle determine the precise behavior of an individual system, but which are in practice averaged over in measurements of the types that can now be carried out. In this paper and in a subsequent paper, an interpretation of the quantum theory in terms of just such "hidden" variables is suggested. It is shown that as long as the mathematical theory retains its present general form, this suggested interpretation leads to precisely the same results for all physical processes as does the usual interpretation. Nevertheless, the suggested interpretation provides a broader conceptual framework than the usual interpretation, because it makes possible a precise and continuous description of all processes, even at the quantum level. This broader conceptual framework allows more general mathematical formulations of the theory than those allowed by the usual interpretation. Now, the usual mathematical formulation seems to lead to insoluble difficulties when it is extrapolated into the domain of distances of the order of 10-13 cm or less. It is therefore entirely possible that the interpretation suggested here may be needed for the resolution of these difficulties. In any case, the mere possibility of such an interpretation proves that it is not necessary for us to give up a precise, rational, and objective description of individual systems at a quantum level of accuracy.