A New Class of Retrocausal Models
Ken Wharton
Department of Physics and Astronomy, San José State University, San José, CA 95192-0106, USA;
kenneth.wharton@sjsu.edu
Received: 27 April 2018; Accepted: 24 May 2018; Published: 26 May 2018


Abstract:
Globally-constrained classical fields provide an unexplored framework for modeling
quantum phenomena, including apparent particle-like behavior. By allowing controllable constraints
on unknown past fields, these models are retrocausal but not retro-signaling, respecting the
conventional block universe viewpoint of classical spacetime. Several example models are developed
that resolve the most essential problems with using classical electromagnetic fields to explain
single-photon phenomena. These models share some similarities with Stochastic Electrodynamics,
but without the infinite background energy problem, and with a clear path to explaining entanglement
phenomena. Intriguingly, the average intermediate field intensities share a surprising connection
with quantum “weak values”, even in the single-photon limit. This new class of models is hoped
to guide further research into spacetime-based accounts of weak values, entanglement, and other
quantum phenomena.
Keywords: Retrocausation; weak values; Stochastic Electrodynamics
1. Introduction
In principle, retrocausal models of quantum phenomena offer the enticing possibility of replacing the high-dimensional configuration space of quantum mechanics with ordinary spacetime, without breaking Lorentz covariance or utilizing action-at-a-distance [1–6]. Any quantum model based entirely on spacetime-localized parameters would obviously be much easier to reconcile with general relativity, not to mention macroscopic classical observations. (In general, block-universe retrocausal models can violate Bell-type inequalities because they contain hidden variables λ that are constrained by the future measurement settings (a, b). These constraints can be mediated via continuous influence on the particle worldlines, explicitly violating the independence assumption P(λ|a, b) = P(λ) utilized in Bell-type no-go theorems.)
In practice, however, the most sophisticated spacetime-based retrocausal models to date only apply to a pair of maximally entangled particles [3,7–9]. A recent retrocausal proposal from Sen [10] is more likely to extend to more of quantum theory, but without a retrocausal mechanism it would have to use calculations in configuration space, preparing whatever initial distribution is needed to match the expected final measurement. Sutherland's retrocausal Bohmian model [11] also uses some calculations in configuration space. Given the difficulties in extending known retrocausal models to more sophisticated situations, further development may require entirely new approaches.
One obvious way to change the character of existing retrocausal models is to replace the usual
particle ontology with a framework built upon spacetime-based fields. Every quantum “particle”,
after all, is thought to actually be an excitation of a quantum field, and every quantum field has
a corresponding classical field that could exist in ordinary spacetime. The classical Dirac field,
for example, is a Dirac-spinor-valued function of ordinary spacetime, and is arguably a far closer
analog to the electrons of quantum theory than a classical charged particle. This point is even more
obvious when it comes to photons, which have no classical particle analog at all, but of course have a
classical analog in the ordinary electromagnetic field.
This paper will outline a new class of field-based retrocausal models. Field-based accounts of particle phenomena are rare but not unprecedented, one example being the Bohmian account of photons [12,13], using fields in configuration space. One disadvantage to field-based models is that they are more complicated than particle models. However, if the reason that particle-based models cannot be extended to more realistic situations is that particles are too simple, then moving to the closer analog of classical fields might arguably be beneficial. Indeed, many quantum phenomena (superposition, interference, importance of relative phases, etc.) have excellent analogs in classical field behavior. In contrast, particles have essentially only one phenomenological advantage over fields: localized position measurements. The class of models proposed here may contain a solution to this problem, but the primary goal will be to set up a framework in which more detailed models can be developed (and to show that this framework is consistent with some known experimental results).
Apart from being an inherently closer analog to standard quantum theory, retrocausal field models have a few other interesting advantages over their particle counterparts. One intriguing development, outlined in detail below, is an account of the average "weak values" [14,15] measured in actual experiments, naturally emerging from the analysis of the intermediate field values. Another point of interest is that the framework here bears similarities to Stochastic Electrodynamics (SED), but without some of the conceptual difficulties encountered by that program (i.e., infinite background energy, and a lack of a response to Bell's theorem) [16,17]. Therefore, it seems hopeful that many of the successes of SED might be applied to a further development of this framework.
The plan of this paper is to start with a conceptual framework, motivating and explaining the
general approach that will be utilized by the specific models. Section 3 then explores a simple example model that illustrates the general approach, as well as demonstrating how discrete outcomes can still be consistent with a field-based model. Section 4 then steps back to examine a large class of models,
calculating the many-run average predictions given a minimal set of assumptions. These averages
are then shown to essentially match the weak-value measurements. The results are then used to
motivate an improved model, as discussed in Section 5, followed by preliminary conclusions and
future research directions.
2. Conceptual Framework
Classical fields generally have Cauchy data on every spacelike hypersurface. Specifically,
for second order field equations, knowledge of the field and its time derivative everywhere at one
time is sufficient to calculate the field at all times. However, the uncertainty principle, applied in a
field framework, implies that knowledge of this Cauchy data can never be obtained: No matter how
precise a measurement, some components of the field can always elude detection. Therefore, it is
impossible to assert that either the preparation or the measurement of a field represents the precise
field configuration at that time. This point sheds serious doubt on the way that preparations are
normally treated as exact initial boundary conditions (and, in most retrocausal models, the way that
measurements are treated as exact final boundary conditions).
In accordance with this uncertainty, the field of Stochastic Electrodynamics (SED) explores the
possibility that in addition to measured electromagnetic (EM) field values, there exists an unknown
and unmeasured “classical zero-point” EM field that interacts with charges in the usual manner [
16
,
17
].
Starting from the assumption of relativistic covariance, a natural gaussian noise spectrum is derived,
fixing one free parameter to match the effective quantum zero-point spectrum of a half-photon per
EM field mode. Using classical physics, a remarkable range of quantum phenomena can be recovered
from this assumption. However, these SED successes come with two enormous problems. First, the
background spectrum diverges, implying an infinite stress energy tensor at every point in spacetime.
Such a field would clearly be in conflict with our best understanding of general relativity, even with
some additional ultraviolet cutoff. Second, there is no path to recovering all quantum phenomena via
locally interacting fields, because of Bell-inequality violations in entanglement experiments.
Both of these problems have a potential resolution when using the Lagrangian Schema [3] familiar from least-action principles in classical physics. Instead of treating a spacetime system as a computer program that takes the past as an input and generates the future as an output, the Lagrangian Schema utilizes both past and future constraints, solving for entire spacetime structures "all at once". Unknown past parameters (say, the initial angle of a ray of light constrained by Fermat's principle of least time) are the outputs of such a calculation, not inputs. Crucially, the action S that is utilized by these calculations is a covariant scalar, and therefore provides a path to a Lorentz covariant calculation of unknown field parameters, different from the divergent spectrum considered by SED. The key idea is to keep the action extremized as usual (δS = 0), while also imposing some additional constraint on the total action of the system. One intriguing option is to quantize the action (S = nh), a successful strategy from the "old" quantum theory that has not been pursued in a field context, and would motivate δS = 0 in the first place. (Here, the action S is the usual functional of the fields throughout any given spacetime subsystem, calculated by integrating the classical Lagrangian density over spacetime.)
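As a concrete illustration of this input/output reversal, the following minimal sketch (my own, not from the paper; it assumes NumPy and SciPy and a simple two-medium refraction geometry) treats Fermat's principle as a global constraint problem: the endpoints and refractive indices are the inputs, and the ray's initial angle emerges as an output of the "all at once" extremization.

```python
# Minimal sketch (not from the paper): Fermat's principle as a boundary-value
# problem, illustrating the "Lagrangian Schema". The endpoints A and B and the
# refractive indices are the inputs; the unknown initial angle of the ray is an
# *output* of the global optimization, not an input.
import numpy as np
from scipy.optimize import minimize_scalar

n1, n2 = 1.0, 1.5            # refractive indices above/below the interface (y = 0)
A = np.array([0.0, 1.0])     # fixed starting point (input constraint)
B = np.array([2.0, -1.0])    # fixed ending point (input constraint)

def travel_time(x):
    """Optical path time if the ray crosses the interface at (x, 0)."""
    leg1 = np.hypot(x - A[0], A[1])      # path length in medium 1
    leg2 = np.hypot(B[0] - x, B[1])      # path length in medium 2
    return n1 * leg1 + n2 * leg2         # proportional to travel time

# Solve "all at once": extremize the global quantity over the unknown parameter.
res = minimize_scalar(travel_time, bounds=(A[0], B[0]), method="bounded")
x_cross = res.x
initial_angle = np.arctan2(x_cross - A[0], A[1])   # angle from the normal at A

print(f"crossing point x = {x_cross:.4f}")
print(f"initial angle   = {np.degrees(initial_angle):.2f} deg (an output, not an input)")
# Sanity check against Snell's law: n1 sin(theta1) = n2 sin(theta2)
theta2 = np.arctan2(B[0] - x_cross, -B[1])
print(f"n1 sin(t1) = {n1*np.sin(initial_angle):.4f}, n2 sin(t2) = {n2*np.sin(theta2):.4f}")
```

The printed Snell's-law check simply confirms that the globally extremized path reproduces the usual local refraction law.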
Constraining the action does not merely ensure relativistic covariance. When complex macroscopic
systems are included in the spacetime subsystem (i.e., preparation and measurement devices), they will
obviously dominate the action, acting as enormous constraints on the microscopic fields, just as a
thermal reservoir acts as a constraint on a single atom. The behavior of microscopic fields would
therefore depend on what experimental apparatus is considered. Crucially, the action is an integral
over spacetime systems, not merely spatial systems. Therefore, the future settings and orientations of
measurement devices strongly influence the total action, and unknown microscopic fields at earlier
times will be effectively constrained by those future devices. Again, those earlier field values are
literally “outputs” of the full calculation, while the measurement settings are inputs.
Such models are correctly termed "retrocausal". Given the usual block universe framework from classical field theory and the interventionist definition of causation [18–21], any devices with free external settings are "causes", and any constrained parameters are "effects" (including field values at spacetime locations before the settings are chosen). Such models are retrocausal but not retro-signaling, because the future settings constrain unknown past field parameters, hidden by the uncertainty principle. (These models are also forward-causal, because the preparation is another intervention.) It is important not to view causation as a process—certainly not one "flowing" back-and-forth through time—as this would violate the block universe perspective. Instead, such systems are consistently solved "all-at-once", as in action principles. Additional discussion of this topic can be found in [2,4,22].
The retrocausal character of these models immediately provides a potential resolution to both of the problems with SED. Concerning the infinite-density zero point spectrum, SED assumes that all possible field modes are required because one never knows which ones will be relevant in the future. However, a retrocausal model is not "in the dark" about the future, because (in this case) the action is an integral that includes the future. The total action might very well only be highly sensitive to a bare few field modes. (Indeed, this is usually the case; consider an excited atom, waiting for a zero-point field to trigger "spontaneous" emission. Here, only one particular EM mode is required to explain the eventual emission of a photon, with the rest of the zero point field modes being irrelevant to a future photon detector.) As is shown below, it is not difficult to envision action constraints where typically only a few field modes need to be populated in the first place, resolving the problem of infinities encountered by SED. Furthermore, it is well-known that retrocausal models can naturally resolve Bell-inequality violations without action-at-a-distance, because the past hidden variables are naturally correlated with the future measurement settings [4,23]. (Numerous proof-of-principle retrocausal models of entanglement phenomena have been developed over the past decade [3,7–10].)
Unfortunately, solving for the exact action of even the simplest experiments is very hard.
The macroscopic nature of preparation and measurement that makes them so potent as boundary
constraints also makes them notoriously difficult to calculate exactly—especially when the relevant
changes in the action are on the order of Planck’s constant. Therefore, to initially consider such
models, this paper will assume that any constraint on the total action manifests itself as certain rules
constraining how microscopic fields are allowed to interact with the macroscopic devices. (Presumably, such rules would include quantization conditions, for example only allowing absorption of EM waves in packets of energy ħω.) This assumption will allow us to focus on what is happening between devices rather than in the devices themselves, setting aside those difficulties as a topic for future research.
This paper will proceed by simply exploring some possible higher-level interaction constraints
(guided by other general principles such as time-symmetry), and determining whether they might
plausibly lead to an accurate explanation of observed phenomena. At this level, the relativistic
covariance will not be obvious; after all, when considering intermediate EM fields in a laboratory
experiment, a special reference frame is determined by the macroscopic devices which constrain those
fields. However, it seems plausible that if some higher-level model matches known experiments then a
lower-level covariant account would eventually be achievable, given that known experiments respect
relativistic covariance.
The following examples will be focused on simple problems, with much attention given to the
case where a single photon passes through a beamsplitter and is then measured on one path or the
other. This is precisely the case where field approaches are thought to fail entirely, and therefore the
most in need of careful analysis. In addition, bear in mind that these are representative examples
of an entire class of models, not one particular model. It is hoped that, by laying out this new class
of retrocausal models, one particular model will eventually emerge as a possible basis for a future
reformulation of quantum theory.
3. Constrained Classical Fields
3.1. Classical Photons
Ordinary electromagnetism provides a natural analog to a single photon: a finite-duration electromagnetic wave with total energy ħω. Even in classical physics, all of the usual uncertainty relations exist between the wave's duration and its frequency ω; in the analysis below, we assume long-duration EM waves that have a reasonably well-defined frequency, in some well-defined beam such as the TEM00 gaussian mode of a narrow bandwidth laser. By normalizing the peak intensity I of this wave so that a total energy of ħω corresponds to I = 1, one can define a "Classical Photon Analog" (CPA).
Such CPAs are rarely considered, for the simple reason that they seem incompatible with the simple experiment shown in Figure 1a. If such a CPA were incident upon a beamsplitter, some fraction T of the energy would be transmitted and the remaining fraction R = 1 − T would be reflected. This means that detectors A and B on these two paths would never see what actually happens, which is a full ħω amount of energy on either A or B, with probabilities T and R, respectively. Indeed, this very experiment is usually viewed as proof that classical EM is incorrect.
Notice that the analysis in the previous paragraph assumed that the initial conditions were exactly known, which would violate the uncertainty principle. If unknown fields existed on top of the original CPA, boosting its total energy to something larger than ħω, it would change the analysis. For example, if the CPA resulted from a typical laser, the ultimate source of the photon could be traced back to a spontaneous emission event, and (in SED-style theories) such "spontaneous" emission is actually stimulated emission, due to unknown incident zero-point radiation. This unknown background would then still be present, boosting the intensity of the CPA such that I > 1. Furthermore, every beamsplitter has a "dark" input port, from which any input radiation would also end up on the same two detectors, A and B. In quantum electrodynamics, it is essential that one remember to put an input vacuum state on such dark ports; the classical analog of this well-known procedure is to allow for possible unknown EM wave inputs from this direction.
The uncertain field strengths apply to the outputs as well as the inputs, from both time-symmetry and the uncertainty principle. Just because a CPA is measured on some detector A, it does not follow that there is no additional EM wave energy that goes unmeasured. Just because nothing is measured on detector B does not mean that there is no EM wave energy there at all. If one were to insist on a perfectly energy-free detector, one would violate the uncertainty principle.
Figure 1. (a) A classical photon analog encounters a beamsplitter, and is divided among two detectors, in contradiction with observation. (b) A classical photon analog, boosted by some unknown peak intensity I_1, encounters the same beamsplitter. Another beam with unknown peak intensity I_2 enters the dark port. This is potentially consistent with a classical photon detection in only detector A ("Y" for yes, "N" for no), so long as the output intensities I_A and I_B remain unobserved. (The wavefronts have been replaced by dashed lines for clarity.) (c) The same inputs as in (b), but with outputs consistent with classical photon detection in only detector B, where the output intensities I_A and I_B again remain unobserved.
By adding these unknown input and output fields, Figure 1b demonstrates a classical beamsplitter scenario that is consistent with an observation of one CPA on detector A. In this case, two incoming beams, with peak intensities 1 + I_1 and I_2, interfere to produce two outgoing beams with peak intensities 1 + I_A and I_B. The four unknown intensities are related by energy conservation, I_1 + I_2 = I_A + I_B, where the exact relationship between these four parameters is determined by the unknown phase difference between the incoming beams. Different intensities and phases could also result in the detection of exactly one CPA on detector B, as shown in Figure 1c. These scenarios are allowed by classical EM and consistent with observation, subject to known uncertainties in measuring field values, pointing the way towards a classical account of "single-photon" experiments. This is also distinct from prior field-based accounts of beamsplitter experiments [13]; here there is no need to non-locally transfer field energy from one path to another.
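The following numerical sketch (my own illustration, not from the paper) makes this point concrete for a lossless beamsplitter, using one common phase convention (a π/2 shift on reflection): with unknown backgrounds I_1 and I_2 added to the CPA, the output intensity toward either detector can exceed a full photon's worth of energy, depending on the unknown relative phase, while total energy is always conserved.

```python
# Minimal sketch (my own illustration): a lossless beamsplitter with transmission T
# acting on two classical input beams, amplitudes sqrt(1 + I1) and sqrt(I2)*exp(i*theta),
# as in the Figure 1 geometry.
import numpy as np

def beamsplitter_outputs(I1, I2, theta, T):
    """Peak output intensities (toward detectors A and B)."""
    R = 1.0 - T
    a1 = np.sqrt(1.0 + I1)                 # CPA plus unknown background, main port
    a2 = np.sqrt(I2) * np.exp(1j * theta)  # unknown field entering the dark port
    E_A = np.sqrt(T) * a1 + 1j * np.sqrt(R) * a2   # "i" = pi/2 phase shift on reflection
    E_B = 1j * np.sqrt(R) * a1 + np.sqrt(T) * a2
    return abs(E_A) ** 2, abs(E_B) ** 2

T = 0.7
I1, I2 = 0.6, 0.4                          # hypothetical unknown background intensities
thetas = np.linspace(0.0, 2.0 * np.pi, 1000)
outs = np.array([beamsplitter_outputs(I1, I2, th, T) for th in thetas])

print("output-A intensity ranges over", outs[:, 0].min(), "to", outs[:, 0].max())
print("output-B intensity ranges over", outs[:, 1].min(), "to", outs[:, 1].max())
# Both ranges include values >= 1, so either detector could classically absorb a full
# photon's worth of energy, with the remainder staying as unobserved output fields.
print("energy conserved on every run:", np.allclose(outs.sum(axis=1), 1 + I1 + I2))
```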
Some potential objections should be addressed. One might claim that quantum theory does allow certainty in the total energy of a photon, at the expense of timing and phase information. However, in quantum field theory, one can only arrive at this conclusion after one has renormalized the zero-point values of the electromagnetic field—the very motivation for I_1 and I_2 in the first place. (Furthermore, when hunting for some more-classical formulation of quantum theory, one should not assume that the original formulation is correct in every single detail.)
Another objection would be to point out the sheer implausibility of any appropriate beam I_2. Indeed, to interfere with the original CPA, it would have to come in with just the right frequency, spatial mode, pulse shape, and polarization. However, this concern makes the error of thinking of all past parameters as logical inputs. In the Lagrangian Schema, the logical inputs are the known constraints at the beginning and end of the relevant system. The unknown parameters are logical outputs of this Schema, just as the initial angle of the light ray in Fermat's principle. The models below aim to generate the parameters of the incoming beam I_2, as constrained by the entire experiment. In action principles, just because a parameter is coming into the system at the temporal beginning does not mean that it is a logical input. In retrocausal models, these are the parameters that are the effects of the constraints, not causes in their own right. (Such unknown background fields do not have external settings by which they can be independently controlled, even in principle, and therefore they are not causal interventions.)
Even if the classical field configurations depicted in Figure 1 are possible, it remains to explain why the observed transmission shown in Figure 1b occurs with a probability T, while the observed reflection shown in Figure 1c occurs with a probability R. To extract probabilities from such a formulation, one obviously needs to assign probabilities to the unknown parameters, P(I_1), P(I_2), etc. However, use of the Lagrangian Schema requires an important distinction, in that the probabilities an agent would assign to the unknown fields would depend on that agent's information about the experimental geometry. In the absence of any information whatsoever, one would start with an "a priori probability distribution" P0(I_2)—effectively a Bayesian prior that would be (Bayesian) updated upon learning about any experimental constraints. Any complete model would require both a probability distribution P0 as well as rules for how the experimental geometry might further constrain the allowed field values.
Before giving an example model, one further problem should be noted. Even if one were successful in postulating some prior distribution P0(I_1) and P0(I_2) that eventually recovered the correct probabilities, this might very well break an important time symmetry. Specifically, the time-reverse of this situation would instead depend on P0(I_A) and P0(I_B). For that matter, if both outgoing ports have a wave with a peak intensity of at least I = 1, then the only parameters sensitive to which detector fires are the unobserved intensities I_A and I_B. Both arguments encourage us to include a consideration of the unknown outgoing intensities I_A and I_B in any model, not merely the unknown incoming fields.
3.2. Simple Model Example
The model considered in this section is meant to be an illustrative example of the class of
retrocausal models described above, illustrating that it is possible to get particle-like phenomena
from a field-based ontology, and also indicating a connection to some of the existing retrocausal
accounts of entanglement.
One way to resolve the time-symmetry issues noted above is to impose a model constraint whereby the two unobserved incoming intensities I_1 and I_2 are always exactly equal to the unobserved outgoing intensities I_A and I_B (either I_1 = I_A or I_1 = I_B). If this constraint is enforced, then assigning a probability of P0(I_1)P0(I_2) to each diagram does not break any time symmetry, as this quantity will always be equal to P0(I_A)P0(I_B). One simple rule that seems to work well in this case is the a priori distribution
$$P_0(I_Z) = Q\,\frac{1}{\sqrt{I_Z}} \qquad (\text{where } I_Z > \epsilon). \qquad (1)$$
Here, I_Z is any of the allowed unobserved background intensities, Q is a normalization constant, and ε is some vanishingly small minimum intensity to avoid the pole at I_Z = 0. (While there may be a formal need to normalize this expression, there is never a practical need; these prior probabilities will be restricted by the experimental constraints before being utilized, and will have to be normalized again.) The only additional rule to recover the appropriate probabilities is that I_1 ≫ ε. (This might be motivated by the above analysis that laser photons would have to be triggered by background fields, so the known incoming CPA would have to be accompanied by a non-vanishing unobserved field.)

To see how these model assumptions lead to the appropriate probabilities, first consider that it is overwhelmingly probable that I_2 ≈ ε. Thus, in this case, we can ignore the input on the dark port of the beamsplitter. However, with only one non-vanishing input, there can be no interference, and both outputs must have non-vanishing intensities. The only way it is possible for detector A to fire, given the above constraints, is if I_1 = I_B = R/T in Figure 1b (such that I_2 = I_A = 0). The only way it is possible for detector B to fire, in Figure 1c, is if I_1 = I_A = T/R.
With this added information from the experimental geometry, one would update the prior distribution P0(I_1) by constraining the only allowed values of I_1 to be R/T or T/R (and then normalizing). The relative probability of these two cases is therefore

$$\frac{P(A)}{P(B)} = \frac{\tfrac{1}{\sqrt{R/T}}\,P_0(I_2)}{\tfrac{1}{\sqrt{T/R}}\,P_0(I_2)} = \frac{T}{R}, \qquad (2)$$
yielding the appropriate ratio of possible outcomes.
Taking stock of this result, here are the assumptions of this example model:
- The a priori probability distribution on each unknown field intensity is given by Equation (1)—to be updated for any given experiment.
- The unknown field values are further constrained to be equal as pairs, {I_1, I_2} = {I_A, I_B}.
- I_1 is non-negligible, because it accompanies a known "photon".
- The probability of each diagram is given by P0(I_1)P0(I_2), or equivalently, P0(I_A)P0(I_B).
Note that it does not seem reasonable to assign the prior probability to the total incoming field (1 + I_1), because Equation (1) should refer to the probability given no further information, not even the knowledge that there is an incoming photon's worth of energy on that channel. (The known incoming photon that defines this experiment is an addition to the a priori intensity, not a part of it.) Given these assumptions, one finds the appropriate probabilities for a detected transmission as compared to a detected reflection.
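A minimal sketch of this example model's bookkeeping (my own check, using the prior as reconstructed in Equation (1)); the hypothetical numbers simply verify the constrained intensities and the resulting outcome ratio.

```python
# Minimal check of the Section 3.2 example model (my own sketch; the prior below
# follows the reconstruction of Equation (1), P0 ~ 1/sqrt(I_Z)).
import numpy as np

T = 0.7
R = 1.0 - T

# With I2 ~ 0 there is no interference, so the outputs are simply T(1+I1) and R(1+I1).
I1_for_A = R / T   # makes the A output exactly 1 (detector A absorbs one photon)
I1_for_B = T / R   # makes the B output exactly 1 (detector B absorbs one photon)
assert np.isclose(T * (1 + I1_for_A), 1.0)
assert np.isclose(R * (1 + I1_for_B), 1.0)
# Pairing constraint {I1, I2} = {IA, IB}: the leftover output equals I1 in each case.
assert np.isclose(R * (1 + I1_for_A), I1_for_A)
assert np.isclose(T * (1 + I1_for_B), I1_for_B)

def P0(I_Z):
    """Unnormalized a priori weight for an unknown intensity (Equation (1) as above)."""
    return 1.0 / np.sqrt(I_Z)

ratio = P0(I1_for_A) / P0(I1_for_B)   # the common P0(I2) factor cancels
print(f"P(A)/P(B) = {ratio:.4f}, T/R = {T/R:.4f}")
```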
There are several other features of this example model. Given Equation (1), it should be obvious
that the total energy in most zero-point fields should be effectively zero, resolving the standard SED
problem of infinite zero-point energy. In addition, this model would work for any device that splits
a photon into two paths (such as a polarizing cube), because the only relevant parameters are the
classical transmission and reflection, T and R.
More importantly, this model allows one to recover the correct measurement probabilities for two maximally entangled photons in essentially the same way as several existing retrocausal models in the literature [3,7,8]. Consider two CPAs produced by parametric down-conversion in a nonlinear crystal, with identical but unknown polarizations (a standard technique for generating entangled photons). The three-wave mixing that classically describes the down-conversion process can be strongly driven by the presence of background fields matching one of the two output modes, M1, even if there is no background field on the other output mode, M2. (Given Equation (1), having essentially no background field on one of these modes is overwhelmingly probable.) Thus, in this case, the polarization of M2 necessarily matches the polarization of the unknown background field on M1 (the field that strongly drives the down-conversion process).
Now, assume both output photons are measured by polarizing cubes set at arbitrary polarization angles, followed by detectors. With no extra background field on M2, the only way that M2 could satisfy the above constraints at measurement would be if its polarization was already exactly aligned (modulo π/2) with the angle of the future polarizing cube. (In that case, no background field would be needed on that path; the bare CPA would fully arrive at one detector or the other.) However, we have established that the polarization of M2 was selected by the background field on M1, so the background field on M1 is also forced to align with the measurement angle on M2 (modulo π/2). In other words, solving the whole experiment "all at once", the polarization of both photons is effectively constrained to match one of the two future measurement angles.
This is essentially what happens in several previously-published retrocausal models of maximally entangled particles [3,7,8]. In these models, the properties of both particles (spin or polarization, depending on the context) are constrained to be aligned with one of the two future settings. The resulting probabilities are then entirely determined by the mis-matched particle, the one that doesn't match the future settings. However, this is just a single-particle problem, and in this case the corresponding classical probabilities (R and T, given by Malus's Law at the final polarizer) are enforced by the above rules, matching experimental results for maximally entangled particles. The whole picture almost looks as if the measurement on one photon has collapsed the other photon into that same polarization, but in these models it was clear that the CPAs had the correct polarization all along, due to future constraints on the appropriate hidden fields.
3.3. Discussion
The above model was presented as an illustrating example, demonstrating one way to resolve the
most obvious problems with classical photon analogs and SED-style approaches. Unfortunately, it does
not seem to extend to more complicated situations. For example, if one additional beamsplitter is
added, as in Figure 2, no obvious time-symmetric extension of the assumptions in the previous section
leads to the correct results. In this case, one of the two dark ports would have to have non-negligible
input fields. Performing this analysis, it is very difficult to invent any analogous rules that lead to the
correct distribution of probabilities on the three output detectors.
Figure 2. A classical photon analog encounters two beamsplitters, and is divided among three detectors. The CPA is boosted by some unknown peak intensity I_1, and each beamsplitter's dark port has an additional incident field with unknown intensity.
In Section 5, we show that it is possible to resolve this problem, using different assumptions to arrive at another model which works fine for multiple beamsplitters. However, before proceeding, it is worth reviewing the most important accomplishment so far. We have shown that it is possible to give a classical field account of an apparent single photon passing through a beamsplitter, matching known observations. Such models are generally thought to be impossible (setting aside nonlocal options [13]). Given that they are possible—if using the Lagrangian Schema—the next-level concern could be that such models are simply implausible. For phenomena that look so much like particle behavior, such classical-field-based models might seem to be essentially unmotivated.
The next section addresses this concern in two different ways. First, the experiments considered in Section 4 are expanded to include clear wave-like behavior, by combining two beamsplitters into an interferometer. Again, the input and output look like single particles, but now some essential wave interference is clearly occurring in the middle. Second, the averaged and post-selected results of these models can be compared with "weak values" that can be measured in actual experiments [14,15]. Notably, the results demonstrate a new connection between the average intermediate classical fields and experimental weak values. This correspondence is known in the high-field case [24–28], but here it is shown to apply even in the single-photon regime. Such a result will boost the general plausibility of this classical-field-based approach, and will also motivate an improved model for Section 5.
4. Averaged Fields and Weak Values
Even without a particular retrocausal model, it is still possible to draw conclusions as to the
long-term averages predicted over many runs of the same experiment. The only assumption made
here will be that every relevant unknown field component for a given experiment (both inputs and
outputs) is treated the same as every other. In Figure 1, this would imply an equality between the
averaged values ⟨I_1⟩ = ⟨I_2⟩ = ⟨I_A⟩ = ⟨I_B⟩, each defined to be the quantity I_Z.
Not every model will lead to this assumption; indeed, the example model above does not, because the CPA-accompanying field I_1 was treated differently from the dark port field I_2. However, for models which do not treat these fields differently, the averages converge onto parameters that can actually be measured in the laboratory: weak values [14,15]. This intriguing correspondence is arguably an independent motivation to pursue this style of retrocausal models.
4.1. Beamsplitter Analysis
Applying this average condition to the simple beamsplitter example of Figure 1b,c yields a phase relationship between the incoming beams, in order to retain the proper outputs. If θ is the phase difference between I_1 and I_2 before the beamsplitter, then taking into account the relative π/2 phase shift caused by the beamsplitter itself, a simple calculation for Figure 1b reveals that

$$\langle 1+I_A\rangle = I_Z + T - \left\langle 2\sqrt{RT(1+I_1)(I_2)}\,\sin\theta\right\rangle \qquad (3)$$
$$\langle I_B\rangle = I_Z + R + \left\langle 2\sqrt{RT(1+I_1)(I_2)}\,\sin\theta\right\rangle. \qquad (4)$$

Given the above restrictions on the average values, this is only possible if there exists a non-zero average correlation

$$C \equiv \left\langle \sqrt{(1+I_1)(I_2)}\,\sin\theta\right\rangle \qquad (5)$$

between the inputs, such that C = −√(R/4T). The same analysis applied to Figure 1c reveals that in this case C = +√(T/4R). (This implies some inherent probability distribution P(I_1, I_2, θ) ∝ 1/|C| to yield the correct distribution of outcomes, which will inform some of the model-building in the next section.) In this case, there are no intermediate fields to analyze, as every mode is either an input or an output. To discuss intermediate fields, we must go to a more complicated scenario.
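A quick numerical check of Equations (3)–(5) (my own sketch, using the same π/2 reflection convention as above and a deliberately simplified ensemble in which every run has I_1 = I_2 = I_Z and the same phase): imposing C = −√(R/4T) makes the output toward detector A carry exactly one photon unit above the background.

```python
# Numerical sanity check (my own sketch) of Equations (3)-(5): if the unknown inputs
# carry the correlation C = -sqrt(R/(4T)), the output toward detector A is exactly
# one photon unit above the background, while detector B sees only background.
import numpy as np

T, R = 0.7, 0.3
I_Z = 0.5                                   # hypothetical background intensity

C = -np.sqrt(R / (4.0 * T))                 # required correlation for "A fires"
sin_theta = C / np.sqrt((1.0 + I_Z) * I_Z)  # every run identical, so <...> = value
theta = np.arcsin(sin_theta)

a1 = np.sqrt(1.0 + I_Z)                     # CPA + background on the main port
a2 = np.sqrt(I_Z) * np.exp(1j * theta)      # background on the dark port
E_A = np.sqrt(T) * a1 + 1j * np.sqrt(R) * a2
E_B = 1j * np.sqrt(R) * a1 + np.sqrt(T) * a2

print("toward A:", abs(E_A)**2, " expected 1 + I_Z =", 1 + I_Z)   # full photon + background
print("toward B:", abs(E_B)**2, " expected I_Z     =", I_Z)       # background only
```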
4.2. Interferometer Analysis
Consider the simple interferometer shown in Figure 3. For these purposes, we assume it is aligned such that the path length on the two arms is exactly equal. For further simplicity, the final beamsplitter is assumed to be 50/50. Again, the global constraints imply that either Figure 3a or Figure 3b actually happens. A calculation of the average intermediate value of I_X yields the same result as Equation (3), while the average value of I_Y is the same as Equation (4). For Figure 3a, further interference at the final beamsplitter then yields, after some simplifying algebra,

$$\langle 1+I_A\rangle = (0.5 + \sqrt{RT}) + I_Z + (T-R)\left\langle \sqrt{(1+I_1)(I_2)}\,\sin\theta\right\rangle \qquad (6)$$
$$\langle I_B\rangle = (0.5 - \sqrt{RT}) + I_Z - (T-R)\left\langle \sqrt{(1+I_1)(I_2)}\,\sin\theta\right\rangle. \qquad (7)$$

The first term on the right of these expressions is the outgoing classical field intensity one would expect for a single CPA input, with no unknown fields. Because of our normalization, it is also the expected probability of a single-photon detection on that arm. The second term is just the average unknown field I_Z, and the final term is a correction to this average that is non-zero if the incoming unknown fields are correlated. Note that the quantity C defined in Equation (5) again appears in this final term.
Figure 3. (a) A classical photon analog, boosted by some unknown peak intensity I_1, enters an interferometer through a beamsplitter with transmission fraction T. An unknown field also enters from the dark port. Both paths to the final 50/50 beamsplitter are the same length; the intermediate field intensities on these paths are I_X and I_Y. Here, detector A fires, leaving unmeasured output fields I_A and I_B. (b) The same situation as (a), except here detector B fires.
To make this end result compatible with the condition that ⟨1 + I_A⟩ = 1 + I_Z, the correlation term C must be constrained to be C = (0.5 − √(RT))/(T − R). For Figure 3b, with detector B firing, this term must be C = −(0.5 + √(RT))/(T − R). (As in the beamsplitter case, the quantity 1/|C| happens to be proportional to the probability of the corresponding outcome, for allowed values of C.) Notice that as the original beamsplitter approaches 50/50, the required value of C diverges for Figure 3b, but not for Figure 3a. That is because this case corresponds to a perfectly tuned interferometer, where detector A is certain to fire, but never B. (This analysis also goes through for an interferometer with an arbitrary phase shift, and arbitrary final beamsplitter ratio; these results will be detailed in a future publication.)

In this interferometer, once the outcome is known, it is possible to use C to calculate the average intensities ⟨I_X⟩ and ⟨I_Y⟩ on the intermediate paths. For Figure 3a, some algebra yields:

$$\langle I_X\rangle = I_Z + \frac{\sqrt{T}}{\sqrt{T}+\sqrt{R}} \qquad (8)$$
$$\langle I_Y\rangle = I_Z + \frac{\sqrt{R}}{\sqrt{T}+\sqrt{R}}. \qquad (9)$$

For Figure 3b, the corresponding average intermediate intensities are

$$\langle I_X\rangle = I_Z + \frac{\sqrt{T}}{\sqrt{T}-\sqrt{R}} \qquad (10)$$
$$\langle I_Y\rangle = I_Z - \frac{\sqrt{R}}{\sqrt{T}-\sqrt{R}}. \qquad (11)$$

Remarkably, as we are about to see, the non-I_Z portion of these calculated average intensities can actually be measured in the laboratory.
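The same kind of check can be run for the interferometer (again my own sketch, with the simplified constant-background ensemble used above): imposing the correlation required for the Figure 3a outcome reproduces Equations (8) and (9) for the intermediate intensities.

```python
# Numerical check (my own sketch) of the interferometer averages, Equations (6)-(9),
# using the same beamsplitter convention as above and an ensemble with I1 = I2 = I_Z.
import numpy as np

T, R = 0.7, 0.3
I_Z = 0.5

# Correlation required for the "detector A fires" outcome (Figure 3a):
C = (0.5 - np.sqrt(R * T)) / (T - R)
sin_theta = C / np.sqrt((1 + I_Z) * I_Z)
theta = np.arcsin(sin_theta)

a1 = np.sqrt(1 + I_Z)                      # CPA + background, main port
a2 = np.sqrt(I_Z) * np.exp(1j * theta)     # background, dark port

# First beamsplitter (transmission T), then the final 50/50 beamsplitter:
E_X = np.sqrt(T) * a1 + 1j * np.sqrt(R) * a2
E_Y = 1j * np.sqrt(R) * a1 + np.sqrt(T) * a2
E_A = (1j * E_X + E_Y) / np.sqrt(2)
E_B = (E_X + 1j * E_Y) / np.sqrt(2)

print("output A:", abs(E_A)**2, " expected", 1 + I_Z)          # full photon + I_Z
print("output B:", abs(E_B)**2, " expected", I_Z)              # background only
print("I_X:", abs(E_X)**2, " Eq. (8):", I_Z + np.sqrt(T)/(np.sqrt(T)+np.sqrt(R)))
print("I_Y:", abs(E_Y)**2, " Eq. (9):", I_Z + np.sqrt(R)/(np.sqrt(T)+np.sqrt(R)))
```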
4.3. Weak Values
When the final outcome of a quantum experiment is known, it is possible to elegantly calculate the (averaged) result of a weak intermediate measurement via the real part of the "Weak Value" equation [14]:

$$\langle Q\rangle_{\mathrm{weak}} = \mathrm{Re}\,\frac{\langle\Phi|Q|\Psi\rangle}{\langle\Phi|\Psi\rangle}. \qquad (12)$$
Here, |Ψ⟩ is the initial wavefunction evolved forward to the intermediate time of interest, |Φ⟩ is the final (measured) wavefunction evolved backward to the same time, and Q is the operator for which one would like to calculate the expected weak value. (Note that weak values by themselves are not retrocausal; post-selecting an outcome is not a causal intervention. However, if one takes the backward-evolved wavefunction |Φ⟩ to be an element of reality, as done by one of the authors here [29], then one does have a retrocausal model—albeit in configuration space rather than spacetime.) Equation (12) yields the correct answer in the limit that the measurement Q is sufficiently weak, so that it does not appreciably affect the intermediate dynamics. The success of this equation has been verified in the laboratory [26], but is subject to a variety of interpretations. For example, ⟨Q⟩_weak can be negative, seemingly making a classical interpretation impossible.
In the case of the interferometer, the intermediate weak values can be calculated by recalling that it is the square root of the normalized intensity that maps to the wavefunction. (Of course, the standard wavefunction knows nothing about I_Z; only the prepared and detected photon are relevant in a quantum context.) Taking into account the phase shift due to a reflection, the wavefunction between the two beamsplitters is simply |Ψ⟩ = √T|X⟩ + i√R|Y⟩, where |X⟩ (|Y⟩) is the state of the photon on the upper (lower) arm of the interferometer.

The intermediate value of |Φ⟩ depends on whether the photon is measured by detector A or B. The two possibilities are:

$$|\Phi_A\rangle = \frac{1}{\sqrt{2}}\left(-i|X\rangle + |Y\rangle\right), \qquad (13)$$
$$|\Phi_B\rangle = \frac{1}{\sqrt{2}}\left(|X\rangle - i|Y\rangle\right). \qquad (14)$$
Notice that, in this case, the reflection off the beamsplitter is associated with a negative π/2 phase shift, because we are evolving the final state in the opposite time direction.

These are easily inserted into Equation (12), where Q = |X⟩⟨X| for a weak measurement of I_X, and Q = |Y⟩⟨Y| for a weak measurement of I_Y. (Given our normalization, probability maps to peak intensity.) If the outcome is a detection on A, this yields

$$\langle I_X\rangle_{\mathrm{weak}} = \frac{\sqrt{T}}{\sqrt{T}+\sqrt{R}}, \qquad (15)$$
$$\langle I_Y\rangle_{\mathrm{weak}} = \frac{\sqrt{R}}{\sqrt{T}+\sqrt{R}}. \qquad (16)$$

If instead the outcome is a detection on B, one finds

$$\langle I_X\rangle_{\mathrm{weak}} = \frac{\sqrt{T}}{\sqrt{T}-\sqrt{R}}, \qquad (17)$$
$$\langle I_Y\rangle_{\mathrm{weak}} = \frac{-\sqrt{R}}{\sqrt{T}-\sqrt{R}}. \qquad (18)$$

Except for the background average intensity I_Z, these quantum weak values are precisely the same intermediate intensities computed in the previous section.
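For comparison, the weak values themselves follow directly from Equation (12) with the states of Equations (13) and (14); the short sketch below (my own check) evaluates them numerically against the closed forms of Equations (15)–(18).

```python
# Minimal sketch (my own check, using the states given above): evaluate the weak-value
# formula, Equation (12), for the interferometer and compare with Equations (15)-(18).
import numpy as np

T, R = 0.7, 0.3
X, Y = np.array([1.0, 0.0]), np.array([0.0, 1.0])      # arm basis states

psi = np.sqrt(T) * X + 1j * np.sqrt(R) * Y              # forward-evolved state
phi_A = (-1j * X + Y) / np.sqrt(2)                      # back-evolved from detector A
phi_B = (X - 1j * Y) / np.sqrt(2)                       # back-evolved from detector B

def weak_value(phi, Q, psi):
    """Re[<phi|Q|psi> / <phi|psi>], Equation (12)."""
    return (np.conj(phi) @ Q @ psi / (np.conj(phi) @ psi)).real

Qx = np.outer(X, X)    # projector |X><X|
Qy = np.outer(Y, Y)    # projector |Y><Y|

sT, sR = np.sqrt(T), np.sqrt(R)
print(weak_value(phi_A, Qx, psi), sT / (sT + sR))   # Eq. (15)
print(weak_value(phi_A, Qy, psi), sR / (sT + sR))   # Eq. (16)
print(weak_value(phi_B, Qx, psi), sT / (sT - sR))   # Eq. (17)
print(weak_value(phi_B, Qy, psi), -sR / (sT - sR))  # Eq. (18)
```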
The earlier results were framed in an essentially classical context, but these weak values come from an inherently quantum calculation, with no clear interpretation. Some of the strangest features of weak values are when one gets a negative probability/intensity, which seems to have no classical analog whatsoever. For example, whenever detector B fires, either Equation (17) or Equation (18) will be negative. (Recall that if T = R, then B never fires.) Nevertheless, a classical interpretation of this negative weak value is still consistent with the earlier results of Equations (10) and (11), because those cases also include an additional unknown intensity I_Z. It is perfectly reasonable to have classical destructive interference that would decrease the average value of I_Y to below that of I_Z; after all, the latter is just an unknown classical field.
One objection here might be that for values of T ≈ R, the weak values of Equations (17) and (18) could get arbitrarily large, such that I_Z would have to be very large as well to maintain a positive intensity for both Equations (10) and (11). However, consider that if I_Z were not large enough, then there would be no classical solution at all, in contradiction to the Lagrangian Schema assumptions considered above (requiring a global solution to the entire problem). Furthermore, if the weak values get very large, that is only because the outcome at B becomes very improbable, meaning that I_Z would rarely have to take a large value. As we show in the next section, there are reasonable a priori distributions of I_Z which would be consistent with this occasional restriction.
Such connections between uncertain classical fields and quantum weak values are certainly intriguing, and also under current investigation by at least one other group [30]. However, while it may be that the unknown-classical-field framework might help make some conceptual sense of quantum weak values, the main point here is simply that these two perspectives are mutually consistent. Specifically, the known experimental success of weak value predictions seems to equally support the unknown-field formalism presented above. It remains to be seen whether (and why) these two formalisms always seem to give compatible answers in every case, but this paper will set that question aside for future research.
For the purposes of this introductory paper, the final task will be to consider whether the above
results indicate a more promising model of these experiments.
5. An Improved Model
Given the intriguing connection to weak values demonstrated in the previous section, it seems worth trying to revise the example model from Section 3. In Section 4, the new assumption which led to the successful result was that every unknown field component (I_1, I_2, I_A, I_B) should be treated on an equal footing, not singling out I_1 for accompanying a known photon. (Recall the average value of each of these was assumed to be some identical parameter I_Z.) Meanwhile, the central idea of the model in Section 3 is that time-symmetry could be enforced by demanding an exact equivalence between the two input fields (I_1, I_2) and the two output fields (I_A, I_B).
One obvious way to combine all these ideas is to instead demand an equivalence between all four
of these intensities—not on average, but on every run of the experiment. This might seem to be in
conflict with the weak value measurements, which are not the same on every run, but only converge
to the weak values after an experimental averaging. However, these measurements are necessarily
weak/noisy, so these results are inconclusive as to whether the underlying signal is constant or varying.
(Alternatively, one could consider a class of models that on average converge to the below model, but
this option will also be set aside for the purposes of this paper.)
With the very strict constraint that each of (I_1, I_2, I_A, I_B) are always equal to the same intensity I_Z, the only two free parameters are I_Z and the relative initial phase θ (between the two incoming modes 1 + I_1 and I_2). In addition, θ and I_Z must be correlated, depending on the experimental parameters, in order to fulfill these constraints. For the case of the beamsplitter (Figure 1b,c), this amounts to removing all the time-averages from the analysis of Section 4.1. This leads to the conditions

$$\frac{1}{\sqrt{I_{ZA}^2 + I_{ZA}}} = -\sin\theta\,\sqrt{\frac{4T}{R}}, \qquad (19)$$
$$\frac{1}{\sqrt{I_{ZB}^2 + I_{ZB}}} = \sin\theta\,\sqrt{\frac{4R}{T}}. \qquad (20)$$
Here, I_ZA is the value of I_Z needed for an outcome on detector A (as in Figure 1b), and I_ZB is the value of I_Z needed for an outcome on detector B (as in Figure 1c). Both are functions of θ.
This model requires a priori probability distributions P0(I_Z) and P′0(θ) (the prime is to distinguish these two functions). The hope is that these distributions can then be restricted by the global constraints such that the correct outcome probabilities are recovered. To implement the above constraints, instead of integrating over the two-dimensional space [I_Z, θ], the correlations between I_Z and θ essentially make this a one-dimensional space, which can be calculated with a delta function:

$$\frac{\int P_0(I_Z)\,P_0'(\theta)\,\delta(I_Z - I_{ZA})\,dI_Z\,d\theta}{\int P_0(I_Z)\,P_0'(\theta)\,\delta(I_Z - I_{ZB})\,dI_Z\,d\theta} = \frac{P(\text{outcome }A)}{P(\text{outcome }B)}. \qquad (21)$$
It is very hard to imagine any rule whereby P′0(θ) would not start out as a flat distribution—all relative phases should be equally a priori likely. The earlier observation that the appropriate probability was always proportional to 1/|C| (in both the beamsplitter and the interferometer geometries) motivates the following guess for an a priori probability distribution for background fields:

$$P_0(I_Z) \propto \frac{1}{\sqrt{I_Z^2 + I_Z}}, \qquad (22)$$
assuming the normalization where I = 1 corresponds to a single classical photon. This expression diverges as I_Z → 0, which is appropriate for avoiding the infinities of SED, although some cutoff would be required to form a normalized distribution. (Again, it is unclear whether an a priori assessment of relative likelihood would actually have to be normalized, given that in any experimental instance there would only be some values of I_Z which were possible, and only these probabilities would have to be normalized.)

Inserting Equation (22) into Equation (21), along with a flat distribution for P′0(θ), the beamsplitter conditions from Equations (19) and (20) yield

$$\frac{\int_\pi^{2\pi} (-\sin\theta)\,\sqrt{4T/R}\;d\theta}{\int_0^{\pi} \sin\theta\,\sqrt{4R/T}\;d\theta} = \frac{T}{R}, \qquad (23)$$

as desired. Here, the limits on θ come from the range of possible solutions to Equations (19) and (20).
A similar successful result is found in the above case of the interferometer, because 1/|C| is again proportional to the outcome probability. This model also works well for the previously-problematic case of multiple beamsplitters shown in Figure 2. Now, because the incoming fields (I_1, I_2, I_3) are all equal, this essentially splits into two consecutive beamsplitter problems, and the probabilities of these two beamsplitters combine in an ordinary manner.
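A Monte Carlo version of this calculation (my own sketch of the reconstruction above) samples the flat phase prior directly and weights each sample by the prior of Equation (22) evaluated at the geometrically-required background intensity, reproducing the T/R ratio of Equation (23).

```python
# Monte Carlo sketch (my own check) of the improved model's beamsplitter prediction,
# Equations (19)-(23): sample theta uniformly, weight each outcome by the prior
# P0(I_Z) = 1/sqrt(I_Z^2 + I_Z) evaluated at the I_Z value the geometry demands.
import numpy as np

rng = np.random.default_rng(1)
T, R = 0.7, 0.3
theta = rng.uniform(0.0, 2 * np.pi, size=1_000_000)   # flat prior on relative phase

# By Equations (19)-(20), P0(I_ZA) and P0(I_ZB) follow directly from theta:
w_A = np.where(np.sin(theta) < 0, -np.sin(theta) * np.sqrt(4 * T / R), 0.0)
w_B = np.where(np.sin(theta) > 0, np.sin(theta) * np.sqrt(4 * R / T), 0.0)

print("model  P(A)/P(B):", w_A.sum() / w_B.sum())
print("target       T/R:", T / R)
```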
Summarizing the assumptions behind this improved model:
- The unknown field values are constrained to all be equal: I_1 = I_2 = I_A = I_B.
- The a priori probability distribution on each unknown field intensity is given by Equation (22)—but must be updated for any given experiment.
- The relative phase between the incoming fields is a priori completely unknown—but must be updated for any given experiment.
However, there is still a conceptual difficulty in this new model, in that all considered incoming
field modes are constrained to have equal intensities, but we have left the unconsidered modes equal
to zero. (Meaning, the modes with the wrong frequencies, or coming in the wrong direction, etc.).
If literally all zero-point modes were non-zero, it would not only change the above calculations, but it
would run directly into the usual infinities of SED. Thus, if this improved model were to be further
developed, there would have to be some way to determine certain groups of background modes
that were linked together through the model assumptions, while other background modes could
be neglected.
This point is also essential if such a revised model is to apply to entangled particles. For two down-converted photons with identical polarizations, each measured by a separate beamsplitter, there are actually four relevant incoming field modes: the unknown intensity accompanying each photon, as well as the unknown intensity incident upon the dark port of each beamsplitter. If one sets all four of these peak intensities to the same I_Z, one does not recover the correct joint probabilities of the two measurements. However, if two of these fields are (nearly) zero, as described in Section 3.2, then the correct probabilities are recovered in the usual retrocausal manner (see Section 3.2 or [3,7,8]). Again, it seems that there must be some way to parse the background modes into special groups.
seems that there must be some way to parse the background modes into special groups.
The model in this section is meant to be an example starting point, not some final product.
Additional features and ideas that might prove useful for future model development will now be
addressed in the final section.
6. Summary and Future Directions
Retrocausal accounts of quantum phenomena have come a long way since the initial proposal by Costa de Beauregard [31]. Notably, the number of retrocausal models in the literature has expanded significantly in the past decade alone [3,7–11,22,32–40], but more ideas are clearly needed. The central novelties in the class of models discussed here are: (1) using fields (exclusively) rather than particles; and (2) introducing uncertainty to even the initial and final boundary constraints. Any retrocausal model must have hidden variables (or else there is nothing for the future measurement choices to constrain), but it has always proved convenient to segregate the known parameters from the unknown parameters in a clear manner. Nature, however, may not respect such a convenience. In the case of realistic measurements on fields, there is every reason to think that our best knowledge of the field strength may not correspond to the actual value.
Although the models considered here obey classical field equations (in this case, classical electromagnetism), they only make sense in terms of the Lagrangian Schema, where the entire experiment is solved "all-at-once". Only then does it make sense to consider incoming dark-port fields (such as I_2), because the global solution may require these incoming modes in order to have a solution. However, despite the presence of such fields at the beginning of the experiment (and, presumably, before it even begins), they are not "inputs" in the conventional sense; they are literally outputs of the retrocausal model.
The above models have demonstrated a number of features and consequences, most notably:
- Distributed classical fields can be consistent with particle-like detection events.
- There exist simple constraints and a priori field intensity distributions that yield the correct probabilities for basic experimental geometries.
- Most unobserved field modes are expected to have zero intensity (unlike in SED).
- The usual retrocausal account for maximally entangled photons still seems to be available.
- The average intermediate field values, minus the unobserved background, are precisely equal to the "weak values" predicted by quantum theory (in the cases considered so far).
- Negative weak values can have a classical interpretation, provided the unobserved background is sufficiently large.
This seems to be a promising start, but there are many other research directions that might
be inspired by these models. For example, consider the motivation of action constraints, raised in
Section 2. If the total action is ultimately important, then any constraint or probability rule would have
to consider the contribution to the action of the microscopic intermediate fields. Even the simple case
of a CPA passing through a finite-thickness beamsplitter has a non-trivial action. (A single free-field
EM wave has a vanishing Lagrangian density at every point, but two crossing or interfering waves
generally do not). It certainly seems worth developing models that constrain not only the inputs and
outputs, but also these intermediate quantities (which would have the effect of further constraining
the inputs and outputs).
Another possibility is to make the incoming beams more realistic, introducing spatially-varying noise, not just a single unknown parameter per beam. It is well-known that such spatial noise introduces bright speckles into laser profiles, and in some ways these speckles are analogous to detected photons—in terms of both probability distributions as well as their small spatial extent (compared to the full laser profile). A related point would be to introduce unknown matter fields, say some zero-point equivalent of the classical Dirac field, which would introduce further uncertainty and effective noise sources into the electromagnetic field. These research ideas, and other related approaches, are wide open for exploration.
Certainly, there are also conceptual and technical problems that need to be addressed, if such models are to be further developed. The largest unaddressed issue is how a global action constraint applied to macroscopic measurement devices might lead to specific rules that constrain the microscopic fields in a manner consistent with observation. (In general, two-time boundary constraints can be shown to lead to intermediate particle-like behavior [41], but different global rules will lead to different intermediate consequences.) The tension between a covariant action and the special frame of the measurement devices also needs to be treated consistently. Another topic that is in particular need of progress is an extension of retrocausal entanglement models to handle partially-entangled states, and not merely the maximally-entangled Bell states.
Although the challenges remain significant, the above list of accomplishments arising from this
new class of models should give some hope that further accomplishments are possible. By branching
out from particle-based models to field-based models, novel research directions are clearly motivated.
The promise of such research, if successful, would be to supply a nearly-classical explanation for all
quantum phenomena: realistic fields as the solution to a global constraint problem in spacetime.
Acknowledgments:
The author would like to thank Justin Dressel for very helpful advice, Aephraim Steinberg
for unintentional inspiration, Jan Walleczek for crucial support and encouragement, Ramen Bahuguna for insights
concerning laser speckles, and Matt Leifer for hosting a productive visit to Chapman University. This work is
supported in part by the Fetzer Franklin Fund of the John E. Fetzer Memorial Trust.
Conflicts of Interest: The author declares no conflict of interest.
References
1. Sutherland, R.I. Bell's theorem and backwards-in-time causality. Int. J. Theor. Phys. 1983, 22, 377–384. [CrossRef]
2. Price, H. Time's Arrow & Archimedes' Point: New Directions for the Physics of Time; Oxford University Press: Oxford, UK, 1997.
3. Wharton, K. Quantum states as ordinary information. Information 2014, 5, 190–208. [CrossRef]
4. Price, H.; Wharton, K. Disentangling the quantum world. Entropy 2015, 17, 7752–7767. [CrossRef]
5. Leifer, M.S.; Pusey, M.F. Is a time symmetric interpretation of quantum theory possible without retrocausality? Proc. R. Soc. A 2017, 473, 20160607. [CrossRef] [PubMed]
6. Adlam, E. Spooky Action at a Temporal Distance. Entropy 2018, 20, 41. [CrossRef]
7. Argaman, N. Bell's theorem and the causal arrow of time. Am. J. Phys. 2010, 78, 1007–1013. [CrossRef]
8. Almada, D.; Ch'ng, K.; Kintner, S.; Morrison, B.; Wharton, K. Are Retrocausal Accounts of Entanglement Unnaturally Fine-Tuned? Int. J. Quantum Found. 2016, 2, 1–14.
9. Weinstein, S. Learning the Einstein-Podolsky-Rosen correlations on a Restricted Boltzmann Machine. arXiv 2017, arXiv:1707.03114. [CrossRef]
10. Sen, I. A local ψ-epistemic retrocausal hidden-variable model of Bell correlations with wavefunctions in physical space. arXiv 2018, arXiv:1803.06458. [CrossRef]
11. Sutherland, R.I. Lagrangian Description for Particle Interpretations of Quantum Mechanics: Entangled Many-Particle Case. Found. Phys. 2017, 47, 174–207. [CrossRef]
12. Bohm, D.; Hiley, B.J.; Kaloyerou, P.N. An ontological basis for the quantum theory. Phys. Rep. 1987, 144, 321–375. [CrossRef]
13. Kaloyerou, P. The GRA beam-splitter experiments and particle-wave duality of light. J. Phys. A 2006, 39, 11541. [CrossRef]
14. Aharonov, Y.; Albert, D.Z.; Vaidman, L. How the result of a measurement of a component of the spin of a spin-1/2 particle can turn out to be 100. Phys. Rev. Lett. 1988, 60, 1351. [CrossRef] [PubMed]
15. Dressel, J.; Malik, M.; Miatto, F.M.; Jordan, A.N.; Boyd, R.W. Colloquium: Understanding quantum weak values: Basics and applications. Rev. Mod. Phys. 2014, 86, 307. [CrossRef]
16. Boyer, T.H. A brief survey of stochastic electrodynamics. In Foundations of Radiation Theory and Quantum Electrodynamics; Springer: Berlin/Heidelberg, Germany, 1980; pp. 49–63.
17. de la Peña, L.; Cetto, A.M. The Quantum Dice: An Introduction to Stochastic Electrodynamics; Springer Science & Business Media: Berlin, Germany, 2013.
18. Woodward, J. Making Things Happen: A Theory of Causal Explanation; Oxford University Press: Oxford, UK, 2005.
19. Price, H. Agency and probabilistic causality. Br. J. Philos. Sci. 1991, 42, 157–176. [CrossRef]
20. Pearl, J. Causality; Cambridge University Press: Cambridge, UK, 2009.
21. Menzies, P.; Price, H. Causation as a secondary quality. Br. J. Philos. Sci. 1993, 44, 187–203. [CrossRef]
22. Price, H. Toy models for retrocausality. Stud. Hist. Philos. Sci. Part B 2008, 39, 752–761. [CrossRef]
23. Leifer, M.S. Is the Quantum State Real? An Extended Review of ψ-ontology Theorems. Quanta 2014, 3, 67–155. [CrossRef]
24. Dressel, J.; Bliokh, K.Y.; Nori, F. Classical field approach to quantum weak measurements. Phys. Rev. Lett. 2014, 112, 110407. [CrossRef] [PubMed]
25. Dressel, J. Weak values as interference phenomena. Phys. Rev. A 2015, 91, 032116. [CrossRef]
26. Ritchie, N.; Story, J.G.; Hulet, R.G. Realization of a measurement of a "weak value". Phys. Rev. Lett. 1991, 66, 1107. [CrossRef] [PubMed]
27. Bliokh, K.Y.; Bekshaev, A.Y.; Kofman, A.G.; Nori, F. Photon trajectories, anomalous velocities and weak measurements: A classical interpretation. New J. Phys. 2013, 15, 073022. [CrossRef]
28. Howell, J.C.; Starling, D.J.; Dixon, P.B.; Vudyasetu, P.K.; Jordan, A.N. Interferometric weak value deflections: Quantum and classical treatments. Phys. Rev. A 2010, 81, 033813. [CrossRef]
29. Aharonov, Y.; Vaidman, L. The two-state vector formalism: An updated review. In Time in Quantum Mechanics; Springer: Berlin/Heidelberg, Germany, 2008; pp. 399–447.
30. Sinclair, J.; Spierings, D.; Brodutch, A.; Steinberg, A. Weak values and neoclassical realism. 2018, in press.
31. De Beauregard, O.C. Une réponse à l'argument dirigé par Einstein, Podolsky et Rosen contre l'interprétation bohrienne des phénomènes quantiques. C. R. Acad. Sci. 1953, 236, 1632–1634. (In French)
32. Wharton, K. A novel interpretation of the Klein-Gordon equation. Found. Phys. 2010, 40, 313–332. [CrossRef]
33. Wharton, K.B.; Miller, D.J.; Price, H. Action duality: A constructive principle for quantum foundations. Symmetry 2011, 3, 524–540. [CrossRef]
34. Evans, P.W.; Price, H.; Wharton, K.B. New slant on the EPR-Bell experiment. Br. J. Philos. Sci. 2012, 64, 297–324. [CrossRef]
35. Harrison, A.K. Wavefunction collapse via a nonlocal relativistic variational principle. arXiv 2012, arXiv:1204.3969. [CrossRef]
36. Schulman, L.S. Experimental test of the "Special State" theory of quantum measurement. Entropy 2012, 14, 665–686. [CrossRef]
37. Heaney, M.B. A symmetrical interpretation of the Klein-Gordon equation. Found. Phys. 2013, 43, 733–746. [CrossRef]
38. Corry, R. Retrocausal models for EPR. Stud. Hist. Philos. Sci. Part B 2015, 49, 1–9. [CrossRef]
39. Lazarovici, D. A relativistic retrocausal model violating Bell's inequality. Proc. R. Soc. A 2015, 471, 20140454. [CrossRef]
40. Silberstein, M.; Stuckey, W.M.; McDevitt, T. Beyond the Dynamical Universe: Unifying Block Universe Physics and Time as Experienced; Oxford University Press: Oxford, UK, 2018.
41. Wharton, K. Time-symmetric boundary conditions and quantum foundations. Symmetry 2010, 2, 272–283. [CrossRef]
© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).