On testing the simulation theory
Tom Campbell∗, Houman Owhadi†, Joe Sauvageau‡, David Watkinson§
June 8, 2017
Abstract
Can the theory that reality is a simulation be tested? We investigate this ques-
tion based on the assumption that if the system performing the simulation is finite
(i.e. has limited resources), then to achieve low computational complexity, such a
system would, as in a video game, render content (reality) only at the moment that
information becomes available for observation by a player and not at the moment of
detection by a machine (that would be part of the simulation and whose detection
would also be part of the internal computation performed by the Virtual Reality
server before rendering content to the player). Guided by this principle we describe
conceptual wave/particle duality experiments aimed at testing the simulation theory.
1 Introduction
Wheeler advocated [51] that “Quantum Physics requires a new view of reality” inte-
grating physics with digital (quanta) information. Two such views emerge from the
presupposition that reality could be computed. The first one, which includes Digital
Physics [56] and the cellular automaton interpretation of Quantum Mechanics [43], pro-
poses that the universe is the computer. The second one, which includes the simulation
theory [7,9,52] (also known as the simulation hypothesis), suggests that the observable
reality is entirely virtual and the system performing the simulation (the computer) is
distinct from its simulation (the universe). In this paper we investigate the possibility of
experimentally testing the second view, and base our analysis on the assumption that the
system performing the simulation has limited computational resources. Such a system
would therefore use computational complexity as a minimization/selection principle for
algorithm design.
∗twcjr44@gmail.com
†California Institute of Technology, Computing & Mathematical Sciences, MC 9-94, Pasadena, CA 91125, owhadi@caltech.edu
‡Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, California 91109, jsauvage@jpl.nasa.gov. This work was done as a private venture and not in the author's capacity as an employee of the Jet Propulsion Laboratory, California Institute of Technology.
§Main Street Multimedia, Inc. 3005 Main St. #406, Santa Monica, CA 90405, watkinsondavidm@gmail.com
Although, from a broader perspective, the issue being addressed is the nature of
reality, to avoid philosophical positions, the subject of this paper and of our tests will
be limited to the following question:
What causes and determines the collapse of the wave function?
2 Review
There are three main perspectives on the nature and causation of the collapse of the
wave function.
1. The Copenhagen interpretation [39] states that physicists can calculate as if quan-
tum waves existed, even though they do not. In this view, any detection causes a
quantum collapse that does not actually occur.
2. In the many worlds theory [15,11], which views reality as a branching process
where measurements cause splitting into possible outcomes, there is no quantum
collapse because every collapse option occurs in some other universe.
3. In the Von Neumann-Wigner theory [48,53] human consciousness triggers quantum
collapse.
The Copenhagen interpretation, summed up by D. Mermin [28] as “Shut up and
calculate!”, renounces addressing the question. The many worlds theory is incredibly
inefficient from a computational complexity perspective. The theory that consciousness
causes collapse is not widely accepted as it seems to lead towards solipsism and problems
of consistency over time (which caused Wigner to abandon [14] this theory towards the
end of his life): why create an event (e.g. the big-bang) with no-one there to observe it?
On QBism. A theory related to the simulation theory emerges from the interpretation of the wave function as a description of a single observer's subjective knowledge about the state of the universe, and of its collapse as a process of conditioning this subjective prior. This theory, known as QBism [29], is the Bayesian for-
mulation of Quantum Mechanics, and it is distinct from the Copenhagen interpretation
[30] and the Von Neumann-Wigner interpretation in the sense that the “wave function
is not viewed as a description of an objective reality shared by multiple observers but
as a description of a single observer's subjective knowledge" [18]. As in the simulation
theory, QBism resolves quantum paradoxes at the cost of realism, i.e. by embracing the
idea that a single objective reality is an illusion.
3 Theory description
We will now describe the simulation theory from a computational perspective. Although
we may call the system performing the computation a computer, VR server/engine, or
Larger Consciousness, the specification of the fundamental nature of this system is not
necessary to the description of the theory. Our core assumption is that this system
is finite and, as a consequence, the content of the simulation is limited by (and only
by) its finite processing resources and the system seeks to achieve low computational
complexity.
On rendering reality. It is now well understood in the emerging science of Uncertainty
Quantification [19] that low complexity computation must be performed with hierarchies
of multi-fidelity models [16]. It is also now well understood, in the domain of game
development, that low computational complexity requires rendering/displaying content
only when observed by a player. Recent games, such as No Man's Sky and Boundless, have shown that vast open universes (potentially including "over 18 quintillion planets with their own sets of flora and fauna" [21]) are made feasible by creating content only at the moment the corresponding information becomes available for observation by a player, through randomized generation techniques (such as procedural generation).
Therefore, to minimize computational complexity in the simulation theory, the system
performing the simulation would render reality only at the moment the corresponding
information becomes available for observation by a conscious observer (a player), and the
resolution/granularity of the rendering would be adjusted to the level of perception of
the observer. More precisely, using such techniques, the complexity of simulation would
not be constrained by the apparent size of the universe or an underlying pre-determined
mesh/grid size [4] but by the number of players and the resolution of the information
made available for observation.
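This rendering principle can be illustrated with a minimal sketch of seeded, on-demand content generation in the spirit of procedural generation. The sketch is our own illustration: the seed, the region coordinates and the generated attributes are hypothetical and do not describe any particular game engine. Its cost scales with the number and resolution of observations, not with the nominal size of the generated universe.

    import random

    # Minimal sketch of seeded, on-demand ("lazy") content generation in the spirit of
    # procedural generation. The seed, the region coordinates and the generated
    # attributes are hypothetical.
    UNIVERSE_SEED = 42

    def render_region(coordinates, detail_level):
        """Compute a region's content only at the moment it is observed; repeating the
        same observation regenerates identical content, preserving consistency."""
        rng = random.Random(f"{UNIVERSE_SEED}:{coordinates}:{detail_level}")
        return {
            "planets": rng.randint(0, 9),
            "flora_species": rng.randint(0, 1000) * detail_level,
            "fauna_species": rng.randint(0, 500) * detail_level,
        }

    # Only observed regions ever cost computation, regardless of how many regions the
    # "universe" nominally contains; higher detail is computed only when requested.
    print(render_region((10512, -3, 887), detail_level=1))
    print(render_region((10512, -3, 887), detail_level=1))  # identical output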
On the compatibility of the simulation theory with Bell’s no go theorem.
Bell's no-go theorem shows that the predictions of quantum mechanics cannot be recovered/interpreted in terms of classical probability through the introduction of local random variables. Here, the vital assumption [5, p. 2] made by Bell is the absence of action at a distance (i.e. as emphasized in [5, eq. 1], the independence of the outcome of an experiment performed on one particle from the setting of the experiment performed on another particle). Therefore Bell's no-go theorem does not prevent a (classical) probabilistic interpretation of quantum mechanics using a "spooky action at a distance" [13].
Here, the simulation theory offers a very simple explanation for the violation of the
principle of locality implied by Bell’s no-go theorem [5], the EPR paradox [13], Bell’s
inequalities violation experiments [1,3] and quantum entanglement [22]: notions of lo-
cality and distance defined within the simulation do not constrain the action space of the
system performing the simulation (i.e. from the perspective of the system performing
the simulation, changing the values of variables of spins/particles separated by 1 meter
or 1 light year has the same complexity).
On the emergence of probabilistic computation. It is well understood in Infor-
mation Based Complexity (IBC) [45,54,37,31,55] that low complexity computation
requires computation with partial/incomplete information. As suggested in [37] and
shown in [34] the identification of near optimal complexity algorithms requires playing
repeated adversarial (minimax) games against the missing information. As in Game
[46,47] and Decision Theory [49], optimal strategies for such games are randomized
strategies [34]. Therefore Bayesian computation emerges naturally [36,10] in the pres-
ence of incomplete information (we refer to [38,42,26,12,41,32,33,20,35,34,10,36]
for a history of the correspondence between Bayesian/statistical inference, numerical
analysis and algorithm design). Given these observations, the fact that quantum mechanics can naturally be interpreted as Bayesian analysis with complex numbers [8,6] suggests its natural interpretation as an optimal form of computation in the presence of incomplete information. Summarizing, in the simulation theory, to achieve near optimal
computational complexity by computing with partial information and limited resources,
the system performing the simulation would have to play dice. It is interesting to note
that in [6] the Bayesian formulation of Quantum Mechanics is also logically derived in a
game theoretic setting.
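The role of randomization can be made concrete with a toy game against missing information (our own example, with an assumed 0-1 loss; it is not taken from the cited references): every deterministic guess of an adversarially chosen hidden bit can be countered, whereas the uniformly mixed guess attains the minimax expected loss.

    import numpy as np

    # Toy game against missing information: guess a hidden bit chosen adversarially,
    # with loss 1 for a wrong guess and 0 otherwise (assumed 0-1 loss).
    loss = np.array([[0.0, 1.0],   # loss[guess, hidden_bit]
                     [1.0, 0.0]])

    worst_deterministic = loss.max(axis=1)           # guessing 0 or 1 can always be countered
    p_grid = np.linspace(0.0, 1.0, 101)              # probability of guessing 0
    worst_mixed = [max(p * loss[0] + (1 - p) * loss[1]) for p in p_grid]
    best = int(np.argmin(worst_mixed))

    print("worst-case loss of the deterministic guesses:", worst_deterministic)  # [1. 1.]
    print("optimal mixing probability:", p_grid[best])                           # 0.5
    print("minimax expected loss:", worst_mixed[best])                           # 0.5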
On the initialization of the simulation and solipsism. A universe simulated by
a finite system would have a necessary beginning (e.g., a big bang or pressing the Enter
key) that cannot be explained from a perspective confined to the simulation itself. In
this theory, the consciousness of the players forms the screen on which reality is rendered, and problems of solipsism are resolved from the perspective of a multi-player VR game: if a tree falls in a forest when no player is there to observe it, its fall is only part of the internal computation performed by the Virtual Reality server before rendering content to the player, and fine details (mold, fungi, termites, a rat's nest in it with babies) are only rendered at that moment to preserve the consistency of the simulation (assuming that the rats are non-player characters).
What causes and determines the collapse of the wave function? Or in Virtual
Reality (VR) terminology, what causes the virtual reality engine to compute and make
information defining the VR available to an experimenter within the VR? Is it
(I) entirely determined by the experimental/detection set-up?
(II) or does the observer play a critical role in the outcome?
In the simulation theory, these questions can be analyzed based on the idea that a
good/effective VR would operate based on two, possibly conflicting, requirements: (1) preserving the consistency of the VR and (2) avoiding detection (i.e., preventing the players from discovering that they are in a VR). However, the resolution of such a conflict would be limited by computa-
tional resources, bounds on computational complexity, the granularity of the VR being
rendered and logical constraints on how inconsistencies can be resolved. Occasionally,
conflicts that were unresolvable would lead to VR indicators and discontinuities (such
as the wave/particle duality).
Summing up, to save itself computing work, the system only calculates reality when
information becomes available for observation by a player, and to avoid detection by
players it maintains a consistent world, but occasionally, conflicts that are unresolvable
lead to VR indicators and discontinuities (such as the wave/particle duality). Although
this perspective supports the Von Neumann-Wigner postulate that [48,53] human con-
sciousness is necessary for the completion of quantum theory, the simulation theory also
agrees with Copenhagen in the sense that it does not require the actual existence of
quantum waves or their collapse (which are seen as useful predictive models made by
the players immersed in the VR).
4 Hypothesis test
Two strategies can be followed to test the simulation theory: (1) test the moment of rendering; (2) exploit the conflicting requirements of logical consistency preservation and
detection avoidance to force the VR rendering engine to create discontinuities in its
rendering or produce a measurable signature event within our reality that indicates that
our reality must be simulated.
Testing the moment of rendering. In Subsections 4.2, 4.3 and 4.4 we will describe wave-particle duality experiments (illustrated in Figures 5, 6 and 7) aimed at testing
the simulation theory by testing the hypothesis that reality is not rendered (or the wave
function is not collapsed) at the moment of detection by an apparatus that would be part
of the simulation, but rather at the moment when the corresponding information becomes
available for observation by an experimenter. More precisely, in the setting of wave-
particle duality experiments, our hypothesis is that wave or particle duality patterns are
not determined at the moment of detection but by the existence and availability of the
which-way data when the pattern is observed.
Exploiting consistency vs detection. In Subsection 4.5 we propose thought experiments where the conflicting requirements of logical consistency preservation and detection avoidance are exploited to force the VR rendering engine to create discontinuities in its rendering or produce a clear and measurable signature event within our reality that would be an unambiguous indicator that our reality must be simulated. Although we cannot predict the outcome of the experiments proposed in Subsection 4.5, we can rigorously prove that their outcome will be new in comparison to classical wave-particle duality experiments. As a secondary purpose, the analysis of the experiment of Subsection 4.5 will also be used to clarify the notion of availability of which-way data in a VR.
4.1 Wave-particle duality experiments
Although the double slit experiment has been known as a classic thought experiment [17]
since the beginning of quantum mechanics, and although this experiment was performed
with “feeble light” [44] in 1909 and electron beams [24] in 1961, the first experiment with
single photons was not conducted prior to 1985 (we refer to [2], which also describes how the interpretation of "feeble light" experiments in terms of quantum mechanics is ambiguous due to the nature of the Poisson distribution associated with "feeble light").

Figure 1: The classical double slit experiment [17,2] with which-way detected before or at the slits. We write "wave pattern" for interference pattern, and "particle pattern" for non-interference pattern.

Figure 2: The delayed choice experiment [50,23]. The choice of whether or not to detect and record which-way data is delayed until after each particle has passed through a slit but before it reaches the screen.

The double slit experiment, illustrated in its simplified and conceptual form in Figure 1,
is known as the classical demonstration of the concept of wave/particle duality of quan-
tum mechanics. In this classical form, if which-way (i.e. through which slit each particle passes) is "detected and recorded" (at the slits), then particles (e.g. photons or
electrons) behave like particles and a non-diffraction pattern is observed on the screen.
However, if which-way is not “detected and recorded,” then particles behave like waves
and an interference pattern is observed on the screen. Since in the classical set-up the which-way detection is done at the slits, one may wonder whether the detection apparatus itself could have induced the particle behavior through a perturbation caused by its interaction with the photon/electron going through those slits.

Figure 3: The delayed choice quantum eraser experiment [40,25]. Which-way data is collected before, at, or after each particle has passed through a slit; however, this which-way data may be erased before the particle hits the screen. This experiment is sometimes called a delayed erasure experiment since the decision to erase is made after the particle has passed through a slit (chosen one path or the other).

Motivated by this question, Wheeler [50] argued, using a thought experiment (illustrated in its simplified
and conceptual form in Figure 2), that the choice to perform the which-way detection
could be delayed and done after the double-slits. We refer to [23] for the experimental
realization of Wheeler's delayed-choice gedanken experiment. Comparing Figure 2 with
Figure 1, it appears that whether the which-way data is detected and recorded before,
at, or after the slits makes no difference at the result screen. In other words, the result at
the screen appears to not be determined by when or how that which-way data is detected
but by having the recorded which-way data before a particle impacts that screen.
Following Wheeler, Scully and Drühl [40] proposed and analyzed an experiment (see
Figure 3), realized in [25], where the which-way detection is always performed “after the
beam has been split by appropriate optics” and the screen data has been collected, but
before it is possibly erased (with probability 1/2 using a beam-splitter). We also refer to
[27] for a set-up with significant separation in space between the different elements of the
experiment. Comparing Figure 3 with Figure 2, it appears that whether the which-way
data is or is not erased determines the screen result. Again, the result at the screen
seems to be determined, not by the detection process itself but by the availability of the
which-way data. Erasing the which-way data appears to be equivalent to having never
detected it.
Remark 1. A remarkable feature of the delayed choice quantum eraser experiment [25]
(see Figures 3 and 4) is the creation of an entangled photon pair (using a type-II phase-matching nonlinear optical crystal BBO: β-BaB2O4) sharing the same which-way data and the same creation time.

Figure 4: The delayed choice quantum eraser experiment set up as described in [25]. The microprocessor (µ-p) represents an addition to the original experiment and will be discussed in Subsections 4.4 and 4.5.

One photon is used to trigger the coincidence counter (its impact location on screen D0 is also recorded) and the second one is used to detect the which-way data and possibly erase it (by recording its impact on detectors D1, D2, D3 and D4). The coincidence counter is used to identify each pair of entangled photons by tagging each impact on the result screen D0 and each event on the detectors D1, D2, D3 and D4 with a time label. Using the coincidence counter to sort/subset the impact locations (data) collected on the result screen D0 by the name (D1, D2, D3 or D4) of the detector activated by the entangled photon, one obtains the following patterns:
D1: Interference pattern (which-way is erased).
D2: Interference pattern (which-way is erased).
D3: Particle pattern (which-way is known, these photons are generated at Slit 1).
D4: Particle pattern (which-way is known, these photons are generated at Slit 2).
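The sorting logic of the coincidence counter can be mimicked with a toy Monte Carlo (our own sketch: the Gaussian envelope, the cos² fringes, the fringe spacing a and the width sigma are assumed idealizations, not a physical model of the experiment in [25], and the phase shift between the D1 and D2 fringes is ignored):

    import numpy as np

    rng = np.random.default_rng(0)
    a, sigma = 1.0, 2.0   # assumed fringe spacing and envelope width (arbitrary units)

    def sample_impacts(fringes, n):
        """Draw n impact positions at D0 from a Gaussian envelope, optionally
        modulated by cos^2 fringes (rejection sampling over toy densities)."""
        out = []
        while len(out) < n:
            x = rng.normal(0.0, sigma)
            if (not fringes) or (rng.random() < np.cos(np.pi * x / a) ** 2):
                out.append(x)
        return np.array(out)

    n_pairs = 50000
    # Route each idler photon to one of the four detectors with equal probability:
    # D1/D2 correspond to erased which-way data, D3/D4 to recorded which-way data.
    detectors = rng.choice(["D1", "D2", "D3", "D4"], size=n_pairs)

    for d in ("D1", "D2", "D3", "D4"):
        xs = sample_impacts(fringes=(d in ("D1", "D2")), n=int((detectors == d).sum()))
        frac_dark = float(np.mean(np.cos(np.pi * xs / a) ** 2 < 0.1))
        print(d, "fraction of D0 impacts near dark fringes:", round(frac_dark, 3))
    # D1 and D2 leave the dark fringes almost empty (interference patterns), while
    # D3 and D4 fill them in (particle patterns), reproducing the sorting above.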
4.2 Detecting but not making the data available to an observer
The following experiments are designed based on the hypothesis that the availability of
which-way data to an observer is the key element that determines the pattern found on
the result screen: the simulated content (the virtual reality) is computed and available to
be rendered to an experimenter only at the moment that information becomes available
for observation by an experimenter and not at the moment of detection by an apparatus.
Figure 5: Detecting but not recording “which-way”
In the proposed experiment, illustrated in a simplified and conceptual form in Fig-
ure 5, the which-way data is detected but not recorded (which translates into the non-availability of the which-way data to the experimenter/observer). There are many possible set-ups for this experiment. A simple instantiation would be to place (turned on) detectors at the slits and turn off any device recording the information sent from these detectors (or simply unplug the cables transmitting impulses from the detectors to the recording device); the main idea of this experiment is to test the impact of "detecting but not making the data available to an observer".
This test could also be implemented with entangled pairs using the delayed choice
quantum eraser experiment (see Figure 4) by:
• Simply removing the coincidence counter from the experimental setup and recording (only) the output of D0 (result screen). D0 should display the wave pattern if the experiment is successful.
• Or by turning off the coincidence counter channels D3 and D4 (and/or the detectors). If the experiment is successful, then D0 should (without the available information for sorting/subsetting between D3 and D4) display an interference pattern (and sorting the impacts at D0 by D1 or D2 should also show interference patterns).
On the availability of the which-way data. These tests are designed based on
the conjecture that it is “the availability of objective which-way data at the time of
observation of the interference/particle pattern” that determines the nature (particle or
wave) of the observed pattern. Note that “availability of the which-way data” is implied
by the which-way data being recorded on objective media (but does not imply a simple
“observation of the which-way data”). This distinction must be used as an Occam’s razor
to avoid the history problem that plagued solipsism and Wigner’s theory. For instance if
experimenter I watches only the pattern-screen and experimenter II watches the which-
way-screen, then even if the pattern-screen watcher does not know the which-way data,
the pattern screen should show a wave pattern (since which-way data is only available
as subjective information recorded in the memory of experimenter II). In general, the
notion of “availability of the which-way data” must be separated from the notion of
consciousness (as represented by one or more experimenters) “knowing” which-way data
by means of unrecorded observation. For instance, if experimenter II is replaced by a USB flash drive, then experimenter I should see a particle pattern since, at the time of observation of the pattern, which-way information is available (recorded on the USB flash drive). Note that the time of observation of the (wave or particle) pattern is a determining factor. For instance, if in the following subsection experimenters I and II are replaced by USB flash drives I and II, then USB I should show (a) a wave pattern if USB II is irreversibly destroyed prior to reading USB I, or (b) a particle pattern if USB II is available for reading when USB I is read.
Figure 6: Erasing the which-way data on a macroscopic scale
4.3 Erasing the which-way data on a macroscopic scale
In the proposed experiment, illustrated in a simplified and conceptual form in Figure 6,
the decision to erase the which-way data is delayed to a macroscopic time-scale. This can
be implemented by using the classical double slit experiment shown in Figure 1, where the recordings of the which-way data and the screen data (impact pattern) are collected on two separate USB flash drives. By repeating this process n times one obtains n pairs of USB flash drives (n is an arbitrary non-zero integer). For each pair, the which-way USB flash drive is destroyed with probability pd = 1/2. Destruction must be such that the data is not recoverable and no trace of the data is left on the computer that held and transferred the data. For n even, one can replace the coin flipping randomization by that of randomly selecting a subset composed of half of the pairs of USB flash drives containing which-way data for destruction (with uniform probability over such subsets). The test is successful if the USB flash drives storing impact patterns show an interference pattern only when the corresponding which-way data USB flash drive has been destroyed. This test can also be performed by using the delayed choice quantum eraser experiment or its modified version illustrated in Figure 7. For this implementation, one USB flash drive is used to record the data generated by the photons for which X is measured (output of D0) and other USB flash drives to record the data generated by D1, D2, D3 and D4 along with the associated output of the coincidence counter.
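A minimal bookkeeping sketch of this destruction protocol (our own illustration; the function names and the value n = 10 are hypothetical) reads as follows:

    import random

    # Trial i produces a which-way drive and a pattern drive. The which-way drive is
    # destroyed either by an independent coin flip with p_d = 1/2 or, for even n, by
    # drawing a uniformly random half-subset of the n pairs.

    def coin_flip_destruction(n, p_d=0.5, seed=0):
        rng = random.Random(seed)
        return [rng.random() < p_d for _ in range(n)]      # True = destroy which-way drive i

    def half_subset_destruction(n, seed=0):
        assert n % 2 == 0, "uniform half-subset selection requires an even n"
        rng = random.Random(seed)
        destroyed = set(rng.sample(range(n), n // 2))       # uniform over all half-subsets
        return [i in destroyed for i in range(n)]

    n = 10
    plan = half_subset_destruction(n)
    # Under the hypothesis being tested, pattern drive i shows an interference pattern
    # exactly when plan[i] is True (its which-way drive was destroyed).
    print(plan, "-> destroyed", sum(plan), "of", n, "which-way drives")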
Figure 7: Delayed Erasure Experiment. Which-way data is randomly recorded with
probability 1/2.
4.4 Predicting erasure in the delayed choice quantum eraser experi-
ment
The proposed test is based on a modification of the delayed choice quantum eraser
experiment [25]. In this modification, we use the facts that (1) the entangled pair
of photons discussed in Remark 1 share the same which-way data, and (2) the experiment
can be arranged so that the first photon hits the screen (sending a pulse toward the
coincidence counter) before the second one reaches the beam-splitter causing the erasure
or recording of the which-way data with probability 1/2 (but the time interval between
these two events must be significantly smaller than the time interval between creation
of photon pairs to preserve the information provided by the coincidence counter). The
location X of the impact (on the x-axis) of the first photon on the screen (see Figure 7) is then recorded and used to predict whether the which-way information will be erased (R = 0) or kept/recorded (R = 1). More precisely, by applying Bayes' rule we obtain that

P[R = 1 | x ≤ X ≤ x + δx] = (P[R = 1] / P[x ≤ X ≤ x + δx]) · P[x ≤ X ≤ x + δx | R = 1].

Using P[x ≤ X ≤ x + δx] = P[x ≤ X ≤ x + δx | R = 0] P[R = 0] + P[x ≤ X ≤ x + δx | R = 1] P[R = 1] and P[R = 0] = 1/2 we deduce that

P[R = 1 | x ≤ X ≤ x + δx] = 1 / (1 + f(x))   with   f(x) = P[x ≤ X ≤ x + δx | R = 0] / P[x ≤ X ≤ x + δx | R = 1].   (1)

Let d be the distance between the two slits and L the distance between the slits and the screen (where X is recorded). Write λ for the wavelength of the photons and a := λL/d. Using the standard approximations P[x ≤ X ≤ x + δx | R = 1] ≈ 2 I0 δx and P[x ≤ X ≤ x + δx | R = 0] ≈ 4 I0 cos²(πx/a) δx (valid for x ≪ L) we obtain that

P[R = 1 | x ≤ X ≤ x + δx] ≈ 1 / (1 + 2 cos²(πx/a)).   (2)
Therefore if the proposed experiment is successful, then the distribution of the random variable R would be biased by that of X, and this bias could be used by a microprocessor whose output would predict the value of the random variable R (prior to its realization) upon observation of the value of X. This bias is such that, if the value of X corresponds to a dark fringe of the interference pattern and a high intensity part of the particle pattern, i.e. if cos(πx/a) ≈ 0 (with x ≪ L), then the photon must be reflected at BSa and BSb (i.e. R = 1) with a probability close to one. Observe that the value of R is determined by whether the photon is reflected rather than transmitted at the beam splitters BSa and BSb (which are large masses of material that could be at a large distance from the screen D0). Therefore, if the proposed experiment is successful, then for values of X corresponding to a dark fringe of the interference pattern, it would appear as if the measuring, recording, and observing of impact location X determines whether the which-way data will or will not be erased. Such a result would solve the causal flow of time issue in delayed erasure experiments: detection at D0 would now determine (or introduce a bias in) the choice, i.e. reflection or transmission, at BSa and BSb. However, a new issue would be created: the detection at D0 would deterministically select (or, for a general value of X, strongly bias the probability of) the choice at BSa and BSb (reflection or transmission) when that choice is supposed to be random (or, for a general value of X, independent of X). Although this could be seen as a paradox, such a result would have a very simple explanation in a "simulated universe": the values of X and R are realized at the moment the recorded data becomes available to the observer (experimenter).
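A quick numerical check of Eq. (2) (our own sketch; the fringe spacing a and the probed impact locations are arbitrary) makes the predicted bias explicit:

    import numpy as np

    a = 1.0   # fringe spacing lambda * L / d (arbitrary units)

    def p_keep_given_x(x):
        """P[R = 1 | X = x] from Eq. (2)."""
        return 1.0 / (1.0 + 2.0 * np.cos(np.pi * x / a) ** 2)

    for x in (0.0, 0.25 * a, 0.5 * a):   # bright fringe, intermediate, dark fringe
        print(f"x = {x:+.2f}:  P[R = 1 | X = x] = {p_keep_given_x(x):.3f}")
    # Output: 0.333, 0.500, 1.000. An impact at a dark fringe of the interference
    # pattern predicts with near certainty that the which-way data will be kept.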
4.5 Exploiting conflicting requirements of consistency preservation and
detection avoidance
The purpose of the thought experiment (illustrated in a simplified and conceptual form
in Figure 8) described here is not only to test the role of the observer in the outcome
of a variant of the delayed choice quantum eraser experiment, but also to show that if
the conscious observer/experimenter plays no role in the outcome then the rendering of
reality would have significant discontinuities. More precisely, we will use the logical flow
of this thought experiment to prove, per absurdum, that at least one of the following
outcomes must hold true.
Figure 8: Testing the role of the observer
(I) Steps (1) or (2) in Figure 8 do not hold true (which would be a discontinuity in the rendering of reality).
(II) Reality is rendered at the moment the corresponding information becomes avail-
able for observation by an experimenter (which would be an indication that the
simulation (VR) theory is true).
(III) Which-way data can be recorded with a wave pattern (which would be a paradox).
Consider the delayed choice quantum eraser experiment [40,25] illustrated in Figure 3. Let ∆t be the interval of time (in the reference frame of the lab where the experiment is performed) separating the impact of the first (signal) photon on the screen D0 from the moment the second (idler) photon reaches the beam splitter BSc causing the which-way data to be either available (recorded) or not available (erased). The experiments of X. Ma, J. Kofler, A. Qarry et al. [27] suggest that the set-up could be such that ∆t could be arbitrarily large without changing the outcome of the delayed erasure (we will make that assumption). We will also assume that the time interval between the production of entangled pairs of photons can be controlled so that during each time interval ∆t, only one pair of photons runs through the experiment.
Write Pwave for the probability distribution on X associated with a wave pattern and Pparticle for the probability distribution on X associated with a particle pattern. For I a subset of the possible values of X, write

δ(I) := Pparticle[X ∈ I] + Pwave[X ∉ I].   (3)

Assume that the distance separating the two slits and the distance separating the screen D0 from the two slits are such that I can be chosen so that δ(I) < 0.9. Observe that δ(I) = 1 + Pparticle[X ∈ I] − Pwave[X ∈ I], therefore min_I δ(I) = 1 − TV(Pparticle, Pwave), where TV(Pparticle, Pwave) is the total variation distance between Pparticle and Pwave. Therefore the possibility of choosing I so that δ(I) < 0.9 is equivalent to TV(Pparticle, Pwave) > 0.1.
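For concreteness, δ(I) and the total variation bound can be evaluated for idealized densities (our own sketch; the Gaussian envelope, the cos² fringes and all parameter values are assumptions rather than properties of a specific set-up):

    import numpy as np

    a, sigma = 1.0, 2.0
    x = np.linspace(-8.0, 8.0, 4001)
    dx = x[1] - x[0]

    envelope = np.exp(-x**2 / (2.0 * sigma**2))
    p_particle = envelope / (envelope.sum() * dx)              # particle pattern: no fringes
    p_wave = envelope * np.cos(np.pi * x / a) ** 2
    p_wave = p_wave / (p_wave.sum() * dx)                      # wave pattern: fringes

    I = p_wave > p_particle                                    # the minimizing choice of I
    delta_I = (p_particle[I].sum() + p_wave[~I].sum()) * dx    # Eq. (3)
    tv = 0.5 * np.abs(p_particle - p_wave).sum() * dx          # total variation distance

    print("delta(I) =", round(float(delta_I), 3), "  1 - TV =", round(1.0 - float(tv), 3))
    # delta(I) is well below 0.9 for these densities, i.e. TV(P_particle, P_wave) > 0.1,
    # which is the separation the argument around Eq. (4) relies on.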
(a) Remove the beam splitters BSa and BSb (or modify them to be totally transparent)
so that which-way data is not available. The experimentalist observes the outcome
of the experiment after X has been realized and the possibility of which-way data
has been eliminated. One should get an interference pattern at D0, as illustrated in
Figure 8-(a).
(b) Increase ∆t from ∆t ≈ 10⁻⁸ s to ∆t ≈ 60 s. The totally transparent beam splitters BSa and BSb occur on the timeline at ∆t/2. The experimentalist observes the outcome of the experiment after X has been realized and the possibility of which-way has been eliminated. If the outcome of the experiment does not depend on ∆t then one should get an interference pattern at D0, precisely as it did in 8-(a) and as illustrated in Figure 8-(b).
(c) Introduce, as illustrated in Figure 8-(c), a manual switch that, when activated, causes beam splitters BSa and BSb to totally reflect (become mirrors) so that which-way data will always be collected and remain available. This manual switch gives the experimentalist the option to produce and record (or not) which-way data by activating the switch (or not) at T < ∆t/2. Assume the switch can be quickly activated or not at the (arbitrary) decision of the experimentalist. If the position of the switch at T = 0 leads to erasure of the which-way data, and if the experimentalist observes the outcome of the experiment at T > ∆t (after X has been realized and which-way has been erased), then an interference pattern should be observed as illustrated in Figure 8-(c).
(d) If the experimentalist observes the value of X at T = 0 instead of at T > ∆t, then the following outcomes are possible:
(i) X is sampled from Pwave (an interference pattern) when the switch is inactive and from Pparticle (a particle pattern) when the switch is active.
(ii) X is always sampled from Pparticle (a particle pattern).
(iii) X is always sampled from Pwave (an interference pattern).
(iv) Not (i), (ii) or (iii).
Assume that alternative (i) holds. Let the experimentalist implement the following switch activation strategy with I chosen so that δ(I) < 0.9 (a toy numerical illustration of the resulting counting argument is sketched after this list).
Strategy 1. Use the following algorithm:
If X ∉ I then do not activate the switch (let which-way be erased).
If X ∈ I then activate the switch (record which-way).
Let P be the probability distribution of X in outcome (i). Observe that

1 = P[X ∈ I] + P[X ∉ I] = Pparticle[X ∈ I] + Pwave[X ∉ I] = δ(I),   (4)

which contradicts δ(I) < 0.9, and therefore outcome (i) cannot hold.
Outcome (iv) would be a discontinuity. Outcome (iii) would allow the experimentalist to always activate the switch and record which-way with an interference pattern, as illustrated in Figure 8-(d). Outcome (ii) would be a strong indicator that this reality is simulated. Indeed, if one gets a particle pattern at D0 independent of the position of the switch, then the observation at T = 0 would have been the cause, since this would be the only difference between (c) and (d) if the switch is not activated.
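As announced above, the counting argument behind Eq. (4) can be made concrete on a toy discretized screen (our own construction; the bin labels and probability masses are arbitrary stand-ins for Pparticle and Pwave):

    import numpy as np

    # Under Strategy 1 the switch is active exactly when X lands in I. If alternative (i)
    # held, X would follow P_particle on I and P_wave outside I, so the total probability
    # of observing anything would be delta(I) = P_particle[X in I] + P_wave[X not in I],
    # which is below 1 whenever P_wave puts more mass on I than P_particle does; no
    # random variable can have total mass below 1, hence alternative (i) is impossible.

    labels     = ["dark fringes", "bright fringes", "tails"]   # coarse screen bins (assumed)
    p_particle = np.array([0.30, 0.40, 0.30])                  # assumed particle-pattern masses
    p_wave     = np.array([0.05, 0.65, 0.30])                  # assumed wave-pattern masses

    I = np.array([False, True, False])                         # I = the bright-fringe bins
    delta_I = p_particle[I].sum() + p_wave[~I].sum()
    print("I =", [l for l, flag in zip(labels, I) if flag], " delta(I) =", float(delta_I))
    # delta(I) = 0.40 + 0.35 = 0.75 < 1: the contradiction of Eq. (4).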
If the outcome of the experiment of Figure 8-(c) is a wave pattern and that of Figure 8-(d)
is a particle pattern then the test is successful: the outcome is not entirely determined
by the experimental/detection set-up and X (reality/content) must be realized/rendered
at the moment when which-way becomes available to an experimenter/observer. This
experiment is likely to be successful in the sense that the only possible outcomes are:
the exposure of discontinuities in the rendering of reality, or paradoxes.
Clarification of the notion of pattern in Figure 8. Since in the experiments illustrated in Figure 8, samples/realizations Xi of X are observed one (Xi/photon) at a time (at T = 0 or for T > ∆t), we define "pattern" as the pattern formed by a large number n of samples/realizations X1, . . . , Xn of X. In Figures 8-(a), 8-(b) and 8-(c) these samples are observed after the erasure of the which-way data and the resulting aggregated pattern (formed by X1, . . . , Xn for large n) must be that of an interference pattern. In the experiment illustrated in Figure 8-(d) samples/realizations of X are observed one at a time, and the experimenter can decide after observing each Xi to record (by turning the switch on) the corresponding which-way data or let that information be erased (by leaving the switch in its initial off state). Since the experimenter can base his
decision to activate the switch at any step i on the values of X1, . . . , Xi, the experimenter can implement an activation strategy such that the pattern formed by the subset of elements of {X1, . . . , Xn} with switch on is, to some degree, arbitrary (e.g. create a 3-slit pattern by activating the switch only when the value of X is in 3 predetermined narrow intervals). Similarly the experimenter can implement an activation strategy such that the pattern formed by the subset of elements of {X1, . . . , Xn} with switch off is, to some degree, arbitrary. However he has no control over the pattern formed by all the elements {X1, . . . , Xn} (with switch positions on or off). Either
1. The pattern formed by aggregates of the values of X1, . . . , Xn is independent of the positions of the switch (at all steps 1, . . . , n), if each Xi is observed at T = 0, and is that of a particle pattern (due to the availability of the which-way information to the experimenter at T = 0). In particular, in the experiment of Figure 8-(d), the experimenter may always keep the switch off so that none of the samples has a paired/recorded which-way datum and he would still obtain a particle pattern. This is not a paradox since the rendering is triggered through the availability of which-way at T = 0. There is also no contradiction with the suggested outcome of the experiment of Figure 5 since in that experiment the recording of the which-way data is determined prior to the realization of X.
2. Or the pattern formed by aggregates of the values of X1, . . . , Xn depends on the positions of the switch (at all steps 1, . . . , n) that are, at each step i, determined by the experimentalist 20 s after the observation of Xi (i.e. an Xi must not be produced in a dark fringe of the diffraction pattern if the experimentalist decides 20 s later not to activate the switch), which leads to the paradoxes discussed around Strategy 1 since the activation strategy is arbitrary.
Difference between the experiment of Figure 7 and that of Figure 8-(d).
Observe that in the experiment illustrated in Figure 7, the value of X is used at T = 0 (by a microprocessor, µ-p) to predict the later value of R (i.e., the erasure or recording of which-way). In Figure 8-(d), the value of X is observed by an experimenter before deciding whether which-way should be erased or recorded. Although in both experiments the value of X seems to be operated on at T = 0, two different outcomes should be expected if the simulation theory is true, based on the analysis of how a VR engine would operate. In Figure 7 the pattern at D0 formed by the subset of elements of {X1, . . . , Xn} for which R = 0 is that of an interference pattern, and the pattern at D0 formed by the subset of elements of {X1, . . . , Xn} for which R = 1 is that of a particle pattern. In Figure 8-(d) the pattern at D0 is always that of a particle pattern (independently of the decision of the experimenter to activate the switch and record the data). This difference is based on the understanding that, if the decisions of the experimenters are external to the simulation, then, while in the experiment of Figure 7 the VR engine would be able to render the values of X and R at the same moment to the experimenter (since the microprocessor using the value of X would be part of the simulation), the VR engine would not necessarily be able to predict the (arbitrary) decision (that may or may not depend on the value of X) of the experimenter (to activate the switch) in the experiment proposed in Figure 8-(d) (which-way is available for observation by the experimenter at T = 0). This difference could also be understood as a clarification of the notion of availability of which-way data in a VR.
Control of the switch by a microprocessor. The switch of Figure 8-(d) could in principle be activated by a microprocessor. In that setup the time interval ∆t could be significantly reduced from the value proposed in Figure 8-(d). Since Strategy 1 would still be available for implementation (as an algorithm), Alternative (i) discussed in the outcome of Figure 8-(d) would still lead to a contradiction. Alternative (iii) would allow us to record which-way with an interference pattern. Alternative (iv) would be a discontinuity. Alternatives (ii) or (iii) would be an indicator of an intelligent VR engine reacting to the intention of the experimentalist. Alternative (i) leads to a logical paradox for δ(I) < 0.9.
Note that Eqn. (4) would still be a contradiction if δ(I) < 1 (and the existence of such an I is ensured by TV(Pparticle, Pwave) > 0). We use δ(I) < 0.9 to account for experimental noise. Although we cannot predict the outcome of the proposed experiment, we can prove based on Eqn. (4) that the pattern produced at the screen D0 cannot be the result of sampling X from a particle distribution when the switch is active and a wave distribution when the switch is inactive. Therefore, although the experiment has not been performed yet, we can already predict that its outcome will be new. One possible outcome is that X will be sampled from a particle distribution independently of the position of the switch, which would also be an indicator of a VR engine reacting to the intent of the experiment.
4.6 Further questions and further tests
The simplest test of the simulation theory is the experiment proposed in Subsection 4.2 where which-way is detected but not made available for observation or recording. Although a positive outcome for the experiment of Figure 5 would support the validity of the simulation theory, it would also call for more questions and more tests to characterize the process of rendering of reality. Some of these questions are listed below:
1. For a which-way datum to cause its corresponding dot on the pattern screen to be part of a particle pattern, does it have to be correlatable (usually in time) with its associated point on the pattern screen, or is a recording of only the (which-way) datum's existence sufficient?
2. Does anonymous (unlabeled) which-way data cause its corresponding dot on the
pattern screen to be in a particle pattern?
3. Is the objective recording of which-way information (e.g. on a hard drive) necessary
to produce a particle pattern, or is subjective observation (e.g. in the imperfect
memory of an experimenter) of the which-way data sufficient?
By using the same strategy as in subsection 4.5 it is possible to show that if there is
a difference between the objective and subjective recording of which-way (i.e. objective
recording leads to a particle pattern and subjective recording leads to a wave pattern)
then the VR server would have to adjust its rendering to the intent of the experiment
to avoid creating a paradox (which would indicate that the VR server, the source of the
VR, is conscious). The proof is as follows. Assume that which-way data is collected
only on a perishable medium that persists for T seconds (after which the data is lost permanently). Assume that during this interval of time the experimenter has the option to record the data permanently (on a hard drive). Assume for the sake of clarity of the argument that the impact locations of the wave pattern and the particle pattern do not overlap (the argument can be extended to the overlapping case using probabilistic inequalities as in Subsection 4.5). More precisely, assume that there exists a portion I of the pattern screen such that if the impact is in I then it must be part of a wave pattern and if it is not in I then it must be part of a particle pattern. Now for each impact let the experimenter follow the rule: (1) if the impact is in I then objectively record the which-way data (on a hard drive); (2) if the impact is not in I then do not objectively record the which-way data (simply observe it). Then, by definition of the set I, the pattern formed by impacts whose which-way data has not been objectively recorded cannot be a wave pattern, which implies that either (a) there is no difference between objective and subjective recording (both lead to a particle pattern and no impact is observed in I), or (b) the VR server can adjust its rendering to the intent of the experiment. Note also
that if the proposed rule (record which-way if and only if the pattern impact is in I) is
enforced by an algorithm (i.e. if the experimenter can be taken out of the loop), then the
resulting paradox seems to reinforce the idea that the VR server would have to adjust
its rendering based on the intent of the experiment (since as suggested by the quantum
eraser experiment and discussed in Subsection 4.5, detection of which-way seems not to be sufficient to ensure a particle pattern).
Acknowledgments. We thank Lorena Buitrago for her help with Figure 4. We also
thank an anonymous referee whose detailed comments and suggestions have led to
significant improvements.
References
[1] A. Aspect, J. Dalibard, and G. Roger. Experimental test of bell’s inequalities using
time-varying analyzers. Physical review letters, 49(25):1804, 1982.
[2] A. Aspect and P. Grangier. Wave-particle duality for single photons. Hyperfine
Interactions, 37(1-4):1–17, 1987.
[3] A. Aspect, P. Grangier, and G. Roger. Experimental realization of einstein-
podolsky-rosen-bohm gedankenexperiment: a new violation of bell’s inequalities.
Physical review letters, 49(2):91, 1982.
[4] S.R. Beane, Z. Davoudi, and J. Savage. Constraints on the universe as a numerical
simulation. The European Physical Journal A, 50(148), 2014.
[5] J. S. Bell. On the Einstein-Podolsky-Rosen paradox. Physics, 1:195–200, 1964.
[6] A. Benavoli, A. Facchini, and M. Zaffalon. Quantum mechanics: The bayesian
theory generalized to the space of hermitian matrices. Phys. Rev. A, 94:042106,
Oct 2016.
[7] N. Bostrom. Are you living in a computer simulation? Published in Philosophical
Quarterly, 53(211):243–255, 2003.
[8] C. M. Caves, C. A. Fuchs, and R. Schack. Quantum probabilities as bayesian
probabilities. Physical Review A, 65:022305, 2002.
[9] T. Campbell. My big TOE. Lightning Strike Books, 2007.
[10] Jon Cockayne, Chris Oates, Tim Sullivan, and Mark Girolami. Bayesian probabilis-
tic numerical methods. arXiv preprint arXiv:1702.03673, 2017.
[11] Bryce S DeWitt. The many-universes interpretation of quantum mechanics. In The
many-worlds interpretation of quantum mechanics, page 167, 1973.
[12] P. Diaconis. Bayesian numerical analysis. In Statistical decision theory and related
topics, IV, Vol. 1 (West Lafayette, Ind., 1986), pages 163–175. Springer, New York,
1988.
[13] A. Einstein, B. Podolsky, and N. Rosen. Can quantum-mechanical description of
physical reality be considered complete? Phys. Rev., 47:777–780, May 1935.
[14] Michael Esfeld. Essay review: Wigner's view of physical reality. Studies
in History and Philosophy of Modern Physics, 30:145–154, 1999.
[15] Hugh Everett III. "Relative state" formulation of quantum mechanics. Reviews of
modern physics, 29(3):454, 1957.
[16] M. G. Fernández-Godino, C. Park, N.-H. Kim, and R. T. Haftka. Review of multi-
fidelity models. arXiv:1609.07196, 2016.
[17] R. P. Feynman, R. B. Leighton, and Sands M. The Feynman Lectures on Physics,
Vol. 3. Addison-Wesley, 1965.
[18] Christopher Fuchs and Katherine Taylor. A private view of quantum reality. Wired
and Quanta Magazine, 2015.
[19] R. Ghanem, D. Higdon, and H. Owhadi (Eds.). Handbook of Uncertainty Quantifi-
cation. Springer-Verlag, New York, 2017.
[20] P. Hennig, M. A. Osborne, and M. Girolami. Probabilistic numerics and uncertainty
in computations. Proc. A., 471(2179):20150142, 17, 2015.
[21] Ravi Hiranand. 18 quintillion planets: The video game that imagines an entire
universe. CNN, 2015.
[22] Ryszard Horodecki, Paweł Horodecki, Michał Horodecki, and Karol Horodecki.
Quantum entanglement. Reviews of modern physics, 81(2):865, 2009.
[23] V. Jacques, E Wu, F. Grosshans, F. Treussart, P. Grangier, A. Aspect, and J.-F.
Roch. Experimental realization of wheeler’s delayed-choice gedanken experiment.
Science, 315(5814):966–968, 2007.
[24] C. Jönsson. Elektroneninterferenzen an mehreren künstlich hergestellten Feinspalten. Zeitschrift für Physik, 161(4):454–474, 1961.
[25] Y.-H. Kim, R. Yu, S. P. Kulik, Y. Shih, and M. O. Scully. Delayed “choice” quantum
eraser. Physical Review Letters, 84(1):1, 2000.
[26] F. M. Larkin. Gaussian measure in Hilbert space and applications in numerical
analysis. Rocky Mountain J. Math., 2(3):379–421, 1972.
[27] Xiao-Song Ma, Johannes Kofler, Angie Qarry, Nuray Tetik, Thomas Scheidl, Rupert
Ursin, Sven Ramelow, Thomas Herbst, Lothar Ratschbacher, Alessandro Fedrizzi,
et al. Quantum erasure with causally disconnected choice. Proceedings of the Na-
tional Academy of Sciences, 110(4):1221–1226, 2013.
[28] N David Mermin. What’s wrong with this pillow? Physics Today, page 9, 1989.
[29] N David Mermin. Physics: Qbism puts the scientist back into science. Nature,
507(7493):421–423, 2014.
[30] N David Mermin. Why qbism is not the copenhagen interpretation and what john
bell might have thought of it. In Quantum [Un] Speakables II, pages 83–93. Springer,
2017.
[31] A. S. Nemirovsky. Information-based complexity of linear operator equations. J.
Complexity, 8(2):153–175, 1992.
[32] A. O’Hagan. Bayes-Hermite quadrature. J. Statist. Plann. Inference, 29(3):245–260,
1991.
[33] H. Owhadi. Bayesian numerical homogenization. Multiscale Model. Simul.,
13(3):812–828, 2015.
[34] H. Owhadi. Multigrid with rough coefficients and multiresolution operator decom-
position from hierarchical information games. SIAM Review, 59(1):99–149, 2017.
arXiv:1503.03467.
[35] H. Owhadi and C. Scovel. Toward machine wald. In R. Ghanem, D. Higdon, and
H. Owhadi, editors, Handbook of Uncertainty Quantification, pages 1–35. Springer,
2016.
[36] H. Owhadi and C. Scovel. Universal scalable robust solvers from computational
information games and fast eigenspace adapted multiresolution analysis. arXiv
preprint arXiv:1703.10761, 2017.
[37] E. W. Packel. The algorithm designer versus nature: a game-theoretic approach to
information-based complexity. J. Complexity, 3(3):244–257, 1987.
[38] H. Poincaré. Calcul des probabilités. Georges Carré, Paris, 1896.
[39] Maximilian Schlosshauer, Johannes Kofler, and Anton Zeilinger. A snapshot of
foundational attitudes toward quantum mechanics. Studies in History and Phi-
losophy of Science Part B: Studies in History and Philosophy of Modern Physics,
44(3):222–230, 2013.
[40] M. O. Scully and K. Drühl. Quantum eraser: A proposed photon correlation exper-
iment concerning observation and “delayed choice” in quantum mechanics. Physical
Review A, 25(4):2208, 1982.
[41] J. E. H. Shaw. A quasirandom approach to integration in Bayesian statistics. Ann.
Statist., 16(2):895–914, 1988.
[42] A. V. Sul'din. Wiener measure and its applications to approximation methods. I. Izv. Vysš. Učebn. Zaved. Matematika, 1959(6(13)):145–158, 1959.
[43] G. ’t Hooft. The Cellular Automaton Interpretation of Quantum Mechanics.
Springer, 2016.
[44] G. I. Taylor. Interference fringes with feeble light. In Proceedings of the Cambridge
Philosophical Society, volume 15, pages 114–115, 1909.
[45] J. F. Traub, G. W. Wasilkowski, and H. Woźniakowski. Information-based com-
plexity. Computer Science and Scientific Computing. Academic Press, Inc., Boston,
MA, 1988. With contributions by A. G. Werschulz and T. Boult.
[46] J. Von Neumann. Zur Theorie der Gesellschaftsspiele. Math. Ann., 100(1):295–320,
1928.
[47] J. Von Neumann and O. Morgenstern. Theory of Games and Economic Behavior.
Princeton University Press, Princeton, New Jersey, 1944.
[48] John Von Neumann. Mathematical foundations of quantum mechanics. Number 2.
Princeton university press, 1955.
[49] A. Wald. Statistical decision functions which minimize the maximum risk. Ann. of
Math. (2), 46:265–280, 1945.
[50] J. A. Wheeler. The “past” and the “delayed-choice” double-slit experiment. 1978.
[51] J. A. Wheeler. Information, physics, quantum: The search for links. In W. H. Zurek,
editor, Complexity, Entropy, and the Physics of Information. Addison-Wesley, 1990.
[52] B. Whitworth. The physical world as a virtual reality. CDMTCS Research Report
Series, (316), 2007.
[53] Eugene Wigner and Henry Margenau. Symmetries and reflections, scientific essays.
American Journal of Physics, 35(12):1169–1170, 1967.
[54] H. Woźniakowski. Probabilistic setting of information-based complexity. J. Com-
plexity, 2(3):255–269, 1986.
[55] H. Woźniakowski. What is information-based complexity? In Essays on the complexity of continuous problems, pages 89–95. Eur. Math. Soc., Zürich, 2009.
[56] K. Zuse. Rechnender raum. Elektronische Datenverarbeitung, 8:336–344, 1967.
... work as a precursor to Bostrom's (Chalmers 2022: 83). Philosophers, computer scientists, and physicists who consider the simulation hypothesis include: Arvan (2014Arvan ( , 2015, Beane et al. (2014), Campbell et al. (2017), Dainton (2002Dainton ( , 2012, Johnson (2011), andMizrahi (2017). For criticisms, see, e.g., Weatherson (2003) and Summers and Arvan (2022). ...
Chapter
Full-text available
In Reality+: Virtual Worlds and the Problems of Philosophy, David Chalmers argues, among other things, that: if we are living in a full-scale simulation, we would still enjoy broad swathes of knowledge about non-psychological entities, such as atoms and shrubs; and, our lives might still be deeply meaningful. Chalmers views these claims as at least weakly connected: The former claim helps forestall a concern that if objects in the simulation are not genuine (and so not knowable), then life in the simulation is illusory and therefore, not as valuable as a non-simulated life. Taking up these questions, I argue that in general, the value of social knowledge for a meaningful life dramatically swamps the value of non-social knowledge for a meaningful life. Along the way, I propose a non-additive model of the meaningfulness of life, according to which the overall effect of some potential contributor of value to a life depends in part on what is already in a life. One upshot is that the vindication of non-social knowledge, absent a correlative vindication of social knowledge, contributes either not at all or scarcely at all to the claim that our lives in the simulation might be deeply meaningful. This is so even though the vindication of non-social knowledge does forestall the concern that in the simulation, our lives might be wholly meaningless.
... This is essential because traditional Copenhagen interpretations of the classic double slit experiment interpret the particle-like diffraction pattern (see Figure 18B) wavefunction collapse (i.e., the interference pattern of Figure 18A disappears). However, this Copenhagen cannot account for several experiments such as the delayed choice eraser experiment (see Figure 18C) whereby the photoelectric detector is placed after the slits and therefore cannot measure which slit (its path) the electron passed through (Campbell et al., 2017) despite this leading to particle-like diffraction pattern. This retro-causality violates laws of energy and information conservation, so it is not possible from a physicalist interpretation, thus the Copenhagen interpretation is incorrect. ...
Article
Full-text available
There have been impressive advancements in the field of natural language processing (NLP) in recent years, largely driven by innovations in the development of transformer-based large language models (LLM) that utilize “attention.” This approach employs masked self-attention to establish (via similarly) different positions of tokens (words) within an inputted sequence of tokens to compute the most appropriate response based on its training corpus. However, there is speculation as to whether this approach alone can be scaled up to develop emergent artificial general intelligence (AGI), and whether it can address the alignment of AGI values with human values (called the alignment problem). Some researchers exploring the alignment problem highlight three aspects that AGI (or AI) requires to help resolve this problem: (1) an interpretable values specification; (2) a utility function; and (3) a dynamic contextual account of behavior. Here, a neurosymbolic model is proposed to help resolve these issues of human value alignment in AI, which expands on the transformer-based model for NLP to incorporate symbolic reasoning that may allow AGI to incorporate perspective-taking reasoning (i.e., resolving the need for a dynamic contextual account of behavior through deictics) as defined by a multilevel evolutionary and neurobiological framework into a functional contextual post-Skinnerian model of human language called “Neurobiological and Natural Selection Relational Frame Theory” (N-Frame). It is argued that this approach may also help establish a comprehensible value scheme, a utility function by expanding the expected utility equation of behavioral economics to consider functional contextualism, and even an observer (or witness) centric model for consciousness. Evolution theory, subjective quantum mechanics, and neuroscience are further aimed to help explain consciousness, and possible implementation within an LLM through correspondence to an interface as suggested by N-Frame. This argument is supported by the computational level of hypergraphs, relational density clusters, a conscious quantum level defined by QBism, and real-world applied level (human user feedback). It is argued that this approach could enable AI to achieve consciousness and develop deictic perspective-taking abilities, thereby attaining human-level self-awareness, empathy, and compassion toward others. Importantly, this consciousness hypothesis can be directly tested with a significance of approximately 5-sigma significance (with a 1 in 3.5 million probability that any identified AI-conscious observations in the form of a collapsed wave form are due to chance factors) through double-slit intent-type experimentation and visualization procedures for derived perspective-taking relational frames. Ultimately, this could provide a solution to the alignment problem and contribute to the emergence of a theory of mind (ToM) within AI.
... proposes that the universe is a quantum computer [11] and the other one proposes that the system performing the simulation is distinct from its simulation (the universe) [14] [15]. In this paper, we suggest another possibility, which is a "hybrid" between the two propositions, based on the following assumptions: 1) The system performing the simulation is distinct from its simulation. ...
... The above definition of physical reality as the connection between the rendering-of-the-simulated-environment to the player and the collapse-of-the-wave-function seen by the observer is based implicitly on the assumption of finite computation resources and the requirement of low computational complexity [35]. ...
Preprint
Full-text available
Quantum mechanics in the Wigner-von Neumann interpretation is presented. This is characterized by 1) a quantum dualism between matter and consciousness unified within an informational neutral monism, 2) a quantum perspectivism which is extended to a complementarity between the Copenhagen interpretation and the many-worlds formalism, 3) a psychophysical causal closure akin to Leibniz parallelism and 4) a quantum solipsism, i.e. a reality in which classical states are only potentially-existing until a conscious observation is made.
... There are a number of contemporary cosmogonic theories, which we would introduce in what we have called quantum metaphysics (Sandu, 2011a), which consider that the current world we live in is actually a simulation (Bostrom, 2003; Chalmers, 2003; Dainton, 2012; Campbell et al., 2017; Preston, 2019; Thomas, 2020). The idea of virtual reality is closely linked to the idea of simulation, of transforming reality into something that seems to be real but does not necessarily have a material consistency, an ontological consistency, if we want, in philosophical terms. ...
Article
There are a number of contemporary cosmogonic theories, which we would introduce into what we have called quantum metaphysics, which hold that the current world we live in is actually a simulation. Technological globalization can be analyzed from the perspective of the mutations that occurred in the interpretation of social space, from the traditional one, limited to the geographical coordinates of the interaction, to a delocalized and universalizing one. The pandemic caused by the COVID-19 virus has accentuated this trend. The digital revolution produces a phenomenon of virtualization of social space - in the sense of transferring socializing interactions to virtual environments - with special and somewhat unpredictable consequences for the evolution of being and even the human species. Next, we consider the identification of some constitutive dimensions of the phenomenon of the virtualization of social space, of its evolutionary tendencies, and of possible sociopathologies. The cultural space, being essentially cross-cultural, generates a continuous (re)negotiation of the interpretation of social reality and of the construction of new interpretative models. Understanding the interpretive drift of the concept of social reality in the context of communication virtualization allows us to say that the virtualization of social space has led humanity to explore a sui generis additional dimension of space, noetic in nature, experienced in the form of instantaneous communication and virtual ubiquity.
Article
Full-text available
Islamic theology’s emphasis on reflecting on God’s signs finds resonance in simulation theory, offering a novel perspective on ongoing debates among Muslims in Europe and elsewhere. The Simulation Hypothesis, which posits that our reality may be a computer-generated simulation, challenges conventional perspectives. Once a philosophical curiosity, it is now in the spotlight. This hypothesis suggests our perceived reality might be a construct, diverging from traditional views. It introduces a reality model in which a Simulator, resembling a divine figure, controls the simulation in a way akin to religious teachings. This departure aligns with intelligent design, challenging a chance-based universe. It also accentuates the potential for an afterlife, intensifying theological discussions.
Chapter
Video games created with advanced algorithms that generate game content now define the allure and constitution of gaming culture. AI systems even allow players to produce self-generated levels of gaming, allowing for deviation in unexpected, creative ways from the game’s initial narrative structure, thus increasing the challenge and magnetism associated with video gaming in general. The video game has become its own world, increasingly attenuating or even eliminating the perception of the difference between its make-believe world and actual reality. It has thus become a format for discussing and evaluating the so-called simulation hypothesis, which envisions life as a creation of an AI (in some unknown dimension of existence) playing a video game in which humans are its avatars. This chapter will look at AI-generated video gaming as a template for what may be happening across pop culture domains, as well as its philosophical-existential implications. Central to the discussion is the notion of narrative, given that the AI technologies provide the means for creating the game’s story “on the go,” so to speak. In effect, the players are the scriptwriters, actors, and directors at once, with AI standing above them like a Wizard of Oz, directing the unfolding narrative.
Article
Full-text available
This work proposes a review of Nick Bostrom’s simulation argument from a cultural and multimedia perspective, in order to discuss possible cultural and ideological influences on the development of the original argument. We also consider criticisms and counter-arguments proposed by other authors who have already discussed this subject. In short, we start from the hypothesis that Bostrom’s entire argument is so steeped in a “culture broth” (mediatic, metaphysical, political, and economic) that it is difficult to confirm the detachment expected of a philosophical reflection of such magnitude. In other words, we ask whether Bostrom’s argument appears as a by-product of that “culture broth,” arising as yet another superstructural manifestation in the wake of the post-industrial production system of extreme accumulation of financial capital.
Article
Full-text available
We show how the discovery of robust scalable numerical solvers for arbitrary bounded linear operators can be automated as a Game Theory problem by reformulating the process of computing with partial information and limited resources as that of playing underlying hierarchies of adversarial information games. When the solution space is a Banach space B endowed with a quadratic norm ‖·‖, the optimal measure (mixed strategy) for such games (e.g. the adversarial recovery of u ∈ B, given partial measurements [φ_i, u] with φ_i ∈ B*, using relative error in ‖·‖-norm as a loss) is a centered Gaussian field ξ solely determined by the norm ‖·‖, whose conditioning (on measurements) produces optimal bets. When measurements are hierarchical, the process of conditioning this Gaussian field produces a hierarchy of elementary bets (gamblets). These gamblets generalize the notion of Wavelets and Wannier functions in the sense that they are adapted to the norm ‖·‖ and induce a multi-resolution decomposition of B that is adapted to the eigensubspaces of the operator defining the norm ‖·‖. When the operator is localized, we show that the resulting gamblets are localized both in space and frequency and introduce the Fast Gamblet Transform (FGT) with rigorous accuracy and (near-linear) complexity estimates. As the FFT can be used to solve and diagonalize arbitrary PDEs with constant coefficients, the FGT can be used to decompose a wide range of continuous linear operators (including arbitrary continuous linear bijections from H^s_0 to H^{-s} or to L^2) into a sequence of independent linear systems with uniformly bounded condition numbers, and leads to O(N polylog N) solvers and an eigenspace-adapted Multiresolution Analysis (resulting in near-linear complexity approximation of all eigensubspaces).
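The central construction in this abstract, conditioning a Gaussian field adapted to the operator on a hierarchy of measurements to obtain elementary bets, can be illustrated with a small toy computation. The sketch below is not the paper's Fast Gamblet Transform; the 1D Laplacian standing in for the operator defining the norm, the coarse averaging measurements, and the test function are all assumptions made for illustration.

```python
# Toy illustration (not the cited paper's algorithm): recovering a discretized
# function from a few averaged measurements by conditioning a centered Gaussian
# field whose covariance is adapted to the operator (here, a 1D Laplacian).
import numpy as np

n = 64
# Discrete 1D Laplacian with Dirichlet boundary conditions (assumed operator).
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
K = np.linalg.inv(A)                      # covariance of the Gaussian field

# Coarse, averaged measurements phi_i: local averages of u over 8 blocks.
m = 8
Phi = np.zeros((m, n))
for i in range(m):
    Phi[i, i * (n // m):(i + 1) * (n // m)] = 1.0 / (n // m)

# "Elementary bets": deterministic basis functions psi_i such that the
# conditional mean (optimal bet) is E[u | Phi u] = sum_i (Phi u)_i * psi_i.
Psi = K @ Phi.T @ np.linalg.inv(Phi @ K @ Phi.T)   # shape (n, m)

u_true = np.sin(np.linspace(0, np.pi, n))          # some unknown function
u_bet = Psi @ (Phi @ u_true)                       # optimal recovery given the data
print("relative error:", np.linalg.norm(u_bet - u_true) / np.linalg.norm(u_true))
```

Repeating this conditioning across nested levels of measurements is, roughly, what produces the hierarchy of gamblets described in the abstract.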
Article
Full-text available
We consider the problem of gambling on a quantum experiment and enforce rational behavior by a few rules. These rules yield, in the classical case, the Bayesian theory of probability via duality theorems. In our quantum setting, they yield the Bayesian theory generalized to the space of Hermitian matrices. This very theory is quantum mechanics: in fact, we derive all its four postulates from the generalized Bayesian theory. This implies that quantum mechanics is self-consistent. It also leads us to reinterpret the main operations in quantum mechanics as probability rules: Bayes' rule (measurement), marginalization (partial tracing), independence (tensor product). To say it with a slogan, we obtain that quantum mechanics is the Bayesian theory in the complex numbers.
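Two of the correspondences listed above, measurement as a Bayes-like update of a density matrix and partial tracing as marginalization, can be checked numerically in a few lines. The sketch below uses standard quantum operations on assumed example states; it is illustrative and not code from the cited work.

```python
# Illustration of measurement as a Bayes-like update (Lueders rule) and of the
# partial trace as marginalization. Example states are assumed for illustration.
import numpy as np

# A qubit density matrix (mixed state), assumed for illustration.
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)

# Projective measurement in the computational basis.
P0 = np.array([[1, 0], [0, 0]], dtype=complex)
p0 = np.trace(P0 @ rho).real            # probability of outcome 0
rho_post = P0 @ rho @ P0 / p0           # post-measurement state (Bayes-like update)
print("P(outcome 0) =", p0)
print("post-measurement state:\n", rho_post.round(3))

# Partial trace as marginalization: trace out the second qubit of a product state.
rho2 = np.array([[0.5, 0.0], [0.0, 0.5]], dtype=complex)
rho_joint = np.kron(rho, rho2)          # independence corresponds to a tensor product
d = 2
rho_marginal = rho_joint.reshape(d, d, d, d).trace(axis1=1, axis2=3)
print("marginal of first qubit:\n", rho_marginal.round(3))   # recovers rho
```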
Chapter
Full-text available
A time-reversible cellular automaton can most easily be described by regarding its evolution law as a two-step process, for instance by first allowing the even sites to update, then the odd sites. The discrete Hamiltonian operator procedure of Chap. 19 may be used. Subsequently, we apply a perturbative algebra, starting from the Baker-Campbell-Hausdorff expansion.
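The two-step (even sites, then odd sites) updating described in this chapter summary lends itself to a short illustration. The sketch below uses an assumed XOR rule on a ring of binary sites, not the chapter's construction: because each sub-step only reads sites of the opposite parity, each sub-step is its own inverse, and applying the sub-steps in reverse order undoes a full step.

```python
# Illustrative sketch of a time-reversible cellular automaton built from a
# two-step (even sites, then odd sites) update. The XOR rule is an assumption.
import numpy as np

def substep(state, parity):
    """Update sites of the given parity from their (untouched) neighbours.
    Since the neighbours are not modified, the sub-step is its own inverse."""
    new = state.copy()
    n = len(state)
    for i in range(parity, n, 2):
        new[i] ^= state[(i - 1) % n] ^ state[(i + 1) % n]
    return new

def step(state):
    return substep(substep(state, parity=0), parity=1)      # even sites, then odd

def step_backward(state):
    return substep(substep(state, parity=1), parity=0)      # odd sites, then even

rng = np.random.default_rng(1)
s0 = rng.integers(0, 2, size=16)
s3 = step(step(step(s0)))
s_recovered = step_backward(step_backward(step_backward(s3)))
print("time-reversible:", np.array_equal(s0, s_recovered))   # True
```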
Chapter
Full-text available
The past century has seen a steady increase in the need of estimating and predicting complex systems and making (possibly critical) decisions with limited information. Although computers have made possible the numerical evaluation of sophisticated statistical models, these models are still designed by humans because there is currently no known recipe or algorithm for dividing the design of a statistical model into a sequence of arithmetic operations. Indeed enabling computers to think as humans, especially when faced with uncertainty, is challenging in several major ways: (1) Finding optimal statistical models remains to be formulated as a well-posed problem when information on the system of interest is incomplete and comes in the form of a complex combination of sample data, partial knowledge of constitutive relations and a limited description of the distribution of input random variables. (2) The space of admissible scenarios along with the space of relevant information, assumptions, and/or beliefs, tends to be infinite dimensional, whereas calculus on a computer is necessarily discrete and finite. With this purpose, this paper explores the foundations of a rigorous framework for the scientific computation of optimal statistical estimators/models and reviews their connections with decision theory, machine learning, Bayesian inference, stochastic optimization, robust optimization, optimal uncertainty quantification, and information-based complexity.
Article
Full-text available
We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimisers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.
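As one concrete instance of a numerical method that returns an uncertainty alongside its answer, the sketch below implements a minimal Bayesian quadrature rule: the integrand is modeled as a Gaussian process and the output is an estimate of the integral together with a posterior standard deviation. The kernel, length-scale, nodes, and test integrand are assumptions made for illustration, not taken from the cited article.

```python
# Minimal Bayesian-quadrature sketch: a probabilistic numerical method that
# reports both an integral estimate and its uncertainty. Illustrative only.
import numpy as np

def k(x, y, ell=0.2):
    """Squared-exponential kernel (assumed prior over integrands)."""
    return np.exp(-0.5 * (x[:, None] - y[None, :])**2 / ell**2)

f = lambda x: np.sin(3 * x) + 0.5 * x      # assumed test integrand on [0, 1]
nodes = np.linspace(0, 1, 8)               # evaluation points (the "data")
y = f(nodes)

grid = np.linspace(0, 1, 501)                              # fine grid for kernel integrals
w = np.trapz(k(grid, nodes), grid, axis=0)                 # w_i = integral of k(x, x_i) dx
K = k(nodes, nodes) + 1e-10 * np.eye(len(nodes))           # jitter for numerical stability
alpha = np.linalg.solve(K, y)

integral_mean = w @ alpha                                  # posterior mean of the integral
kk = np.trapz(np.trapz(k(grid, grid), grid, axis=0), grid) # double integral of the kernel
integral_var = kk - w @ np.linalg.solve(K, w)              # posterior variance

print("estimate:", integral_mean, "+/-", np.sqrt(max(integral_var, 0.0)))
print("reference (fine grid):", np.trapz(f(grid), grid))
```

The variance term quantifies exactly the kind of "loss of precision induced by numerical calculation with limited time or hardware" that the abstract refers to: fewer or worse-placed nodes yield a larger reported uncertainty.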
Article
The emergent field of probabilistic numerics has thus far lacked rigorous statistical principles. This paper establishes Bayesian probabilistic numerical methods as those which can be cast as solutions to certain Bayesian inverse problems, albeit problems that are non-standard. This allows us to establish general conditions under which Bayesian probabilistic numerical methods are well-defined, encompassing both non-linear and non-Gaussian models. For general computation, a numerical approximation scheme is developed and its asymptotic convergence is established. The theoretical development is then extended to pipelines of computation, wherein probabilistic numerical methods are composed to solve more challenging numerical tasks. The contribution highlights an important research frontier at the interface of numerical analysis and uncertainty quantification, with some illustrative applications presented.
Chapter
Christopher Fuchs and Rüdiger Schack have developed a way of understanding science, which, among other things, resolves many of the conceptual puzzles of quantum mechanics that have vexed people for the past nine decades. They call it QBism. I speculate on how John Bell might have reacted to QBism, and I explain the many ways in which QBism differs importantly from the orthodox ways of thinking about quantum mechanics associated with the term “Copenhagen interpretation.”
Article
Partway down the optic axis of the traditional double-slit experiment stands the central element, the doubly-slit screen. This chapter discusses the question of whether the photon (or the electron) shall have come through both of the slits or only through one of them after it has already traversed that screen. It is known that the receptor at the end of the apparatus can be used to record well-defined interference fringes. Alternatively, one can determine the lateral kick given to the receptor by each arriving quantum; one can record the fringes or the kicks, but not both. The arrangement for recording the one automatically rules out recording the other. It is easy to complicate the double-slit interference pattern. For that purpose, it is enough to have a complicated single-slit diffraction pattern and let the waves from two such slits interfere. It is not necessary to understand every point about the quantum principle in order to understand something about it.
Article
We introduce a near-linear complexity (geometric and meshless/algebraic) multigrid/multiresolution method for PDEs with rough (L^∞) coefficients with rigorous a priori accuracy and performance estimates. The method is discovered through a decision/game theory formulation of the problems of (1) identifying restriction and interpolation operators, (2) recovering a signal from incomplete measurements based on norm constraints on its image under a linear operator, and (3) gambling on the value of the solution of the PDE based on a hierarchy of nested measurements of its solution or source term. The resulting elementary gambles form a hierarchy of (deterministic) basis functions of H^1_0(Ω) (gamblets) that (1) are orthogonal across subscales/subbands with respect to the scalar product induced by the energy norm of the PDE, (2) enable sparse compression of the solution space in H^1_0(Ω), and (3) induce an orthogonal multiresolution operator decomposition. The operating diagram of the multigrid method is that of an inverted pyramid in which gamblets are computed locally (by virtue of their exponential decay), hierarchically (from fine to coarse scales), and the PDE is decomposed into a hierarchy of independent linear systems with uniformly bounded condition numbers. The resulting algorithm is parallelizable both in space (via localization) and in bandwidth/subscale (subscales can be computed independently from each other). Although the method is deterministic, it has a natural Bayesian interpretation under the measure of probability emerging (as a mixed strategy) from the information game formulation, and multiresolution approximations form a martingale with respect to the filtration induced by the hierarchy of nested measurements.