NT@UW-12-14
INT-PUB-12-046
Constraints on the Universe as a Numerical Simulation
Silas R. Beane,1,2 Zohreh Davoudi,3 and Martin J. Savage3
1Institute for Nuclear Theory, Box 351550, Seattle, WA 98195-1550, USA
2Helmholtz-Institut für Strahlen- und Kernphysik (Theorie),
Universität Bonn, D-53115 Bonn, Germany
3Department of Physics, University of Washington,
Box 351560, Seattle, WA 98195, USA
(Dated: January 8, 2014 - 16:15)
Abstract
Observable consequences of the hypothesis that the observed universe is a numerical simulation
performed on a cubic space-time lattice or grid are explored. The simulation scenario is first
motivated by extrapolating current trends in computational resource requirements for lattice QCD
into the future. Using the historical development of lattice gauge theory technology as a guide,
we assume that our universe is an early numerical simulation with unimproved Wilson fermion
discretization and investigate potentially-observable consequences. Among the observables that are
considered are the muon g − 2 and the current differences between determinations of α, but the
most stringent bound on the inverse lattice spacing of the universe, b^{-1} ≳ 10^{11} GeV, is derived
from the high-energy cut off of the cosmic ray spectrum. The numerical simulation scenario could
reveal itself in the distributions of the highest energy cosmic rays exhibiting a degree of rotational
symmetry breaking that reflects the structure of the underlying lattice.
beane@hiskp.uni-bonn.de. On leave from the University of New Hampshire.
davoudi@uw.edu
savage@phys.washington.edu
arXiv:1210.1847v2 [hep-ph] 9 Nov 2012
I. INTRODUCTION
Extrapolations to the distant futurity of trends in the growth of high-performance com-
puting (HPC) have led philosophers to question —in a logically compelling way— whether
the universe that we currently inhabit is a numerical simulation performed by our distant
descendants [1]. With the current developments in HPC and in algorithms it is now pos-
sible to simulate Quantum Chromodynamics (QCD), the fundamental force in nature that
gives rise to the strong nuclear force among protons and neutrons, and to nuclei and their
interactions. These simulations are currently performed in femto-sized universes where the
space-time continuum is replaced by a lattice, whose spatial and temporal sizes are of the
order of several femto-meters or fermis (1 fm = 10^{-15} m), and whose lattice spacings (discretization or pixelation) are fractions of fermis.^1 This endeavor, generically referred to as
lattice gauge theory, or more specifically lattice QCD, is currently leading to new insights
into the nature of matter.^2 Within the next decade, with the anticipated deployment of
exascale computing resources, it is expected that the nuclear forces will be determined from
QCD, refining and extending their current determinations from experiment, enabling pre-
dictions for processes in extreme environments, or of exotic forms of matter, not accessible
to laboratory experiments. Given the significant resources invested in determining the quan-
tum fluctuations of the fundamental fields which permeate our universe, and in calculating
nuclei from first principles (for recent works, see Refs. [4–6]), it stands to reason that future
simulation efforts will continue to extend to ever-smaller pixelations and ever-larger vol-
umes of space-time, from the femto-scale to the atomic scale, and ultimately to macroscopic
scales. If there are sufficient HPC resources available, then future scientists will likely make
the effort to perform complete simulations of molecules, cells, humans and even beyond.
Therefore, there is a sense in which lattice QCD may be viewed as the nascent science of
universe simulation, and, as will be argued in the next paragraph, very basic extrapolation
of current lattice QCD resource trends into the future suggests that experimental searches
for evidence that our universe is, in fact, a simulation are both interesting and logical.
There is an extensive literature which explores various aspects of our universe as a simula-
tion, from philosophical discussions [1], to considerations of the limits of computation within
our own universe [7], to the inclusion of gravity and the standard model of particle physics
into a quantum computation [8], and to the notion of our universe as a cellular automa-
ton [9–12]. There have also been extensive connections made between fundamental aspects of
computation and physics, for example, the translation of the Church-Turing principle [13, 14]
into the language of physicists by Deutsch [15]. Finally, the observational consequences due
to limitations in accuracy or flaws in a simulation have been considered [16]. In this work, we
take a pedestrian approach to the possibility that our universe is a simulation, by assuming
that a classical computer (i.e. the classical limit of a quantum computer) is used to simulate
the quantum universe (and its classical limit), as is done today on a very small scale, and ask
if there are any signatures of this scenario that might be experimentally detectable. Further,
we do not consider the implications of, and constraints upon, the underlying information,
and its movement, that are required to perform such extensive simulations. It is the case
that the method of simulation, the algorithms, and the hardware that are used in future
simulations are unknown, but it is conceivable that some of the ingredients used in present
day simulations of quantum fields remain in use, or are used in other universes, and so
we focus on one aspect only: the possibility that the simulations of the future employ an
underlying cubic lattice structure.

^1 Surprisingly, while QCD and the electromagnetic force are currently being calculated on the lattice, the difficulties in simulating the weak nuclear force and gravity on a lattice have so far proved insurmountable.
^2 See Refs. [2, 3] for recent reviews of the progress in using lattice gauge theory to calculate the properties of matter.

FIG. 1. Left panel: linear fit to the logarithm of the CRRs (Log[L^5/b^6] versus years after 1999) of the MILC asqtad and SPECTRUM anisotropic lattice ensemble generations. Right panel: extrapolation of the fit curves into the future, as discussed in the text. The blue (red) horizontal line corresponds to a lattice size of one micron (one meter) at b = 0.1 fm, and vertical bands show the corresponding extrapolated years beyond 1999 for the two lattice generation programs.
In contrast with Moore’s law, which is a statement about the exponential growth of
raw computing power in time, it is interesting to consider the historical growth of mea-
sures of the computational resource requirements (CRRs) of lattice QCD calculations, and
extrapolations of this trend to the future. In order to do so, we consider two lattice gener-
ation programs: the MILC asqtad program [17], which over a twelve year span generated
ensembles of lattice QCD gauge configurations, using the Kogut-Susskind [18] (staggered)
discretization of the quark fields, with lattice spacings, b, ranging from 0.18 to 0.045 fm, and
lattice sizes (spatial extents), L, ranging from 2.5 to 5.8 fm, and the on-going anisotropic
program carried out by the SPECTRUM collaboration [19], using the clover-Wilson [20, 21]
discretization of the quark fields, which has generated lattice ensembles at b ≈ 0.1 fm, with
L ranging from 2.4 to 4.9 fm [22]. At fixed quark masses, the CRR of a lattice ensemble
generation (in units of petaFLOP-years) scales roughly as the dimensionless number
λ_QCD L^5/b^6, where λ_QCD ≈ 1 fm is a typical QCD distance scale. In fig. 1 (left panel), the
CRRs are presented on a logarithmic scale, where year one corresponds to 1999, when MILC
initiated its asqtad program of 2+1-flavor ensemble generation. The bands are linear fits
to the data. While the CRR curves in some sense track Moore’s law, they are more than
a statement about increasing FLOPS. Since lattice QCD simulations include the quantum
fluctuations of the vacuum and the effects of the strong nuclear force, the CRR curve is a
statement about simulating universes with realistic fundamental forces. The extrapolations
of the CRR trends into the future are shown in the right panel of fig. 1. The blue (red)
horizontal line corresponds to a lattice of the size of a micro-meter (meter), a typical length
scale of a cell (human), and at a lattice spacing of 0.1 fm. There are, of course, many caveats
to this extrapolation. Foremost among them is the assumption that an effective Moore’s
Law will continue into the future, which requires technological and algorithmic developments
to continue as they have for the past 40 years. Related to this is the possible existence of
the technological singularity [23, 24], which could alter the curve in unpredictable ways.
And, of course, human extinction would terminate the exponential growth [1]. However,
barring such discontinuities in the curve, these estimates are likely to be conservative as
they correspond to full simulations with the fundamental forces of nature. With finite re-
sources at their disposal, our descendants will likely make use of effective theory methods,
as is done today, to simulate ever-increasing complexity by, for instance, using meshes
that adapt to the relevant physical length scales, or by using fluid dynamics to determine
the behavior of fluids, which are constrained to rigorously reproduce the fundamental laws
of nature. Nevertheless, one should keep in mind that the CRR curve is based on lattice
QCD ensemble generation and therefore is indicative of the ability to simulate the quantum
fluctuations associated with the fundamental forces of nature at a given lattice spacing and
size. The cost to perform the measurements that would have to be done in the background
of these fluctuations in order to simulate —for instance— a cell could, in principle, lie on a
significantly steeper curve.
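To make the extrapolation behind fig. 1 concrete, the following is a minimal sketch of the arithmetic involved (Python). It uses the CRR measure λ_QCD L^5/b^6 defined above, a 1999-era reference ensemble, and an illustrative exponential growth rate; the actual fitted slopes come from the MILC and SPECTRUM data and are not reproduced here.

import numpy as np

# Minimal sketch of the extrapolation behind fig. 1 (illustrative numbers only).
# The CRR of an ensemble generation is taken to scale as lambda_QCD * L^5 / b^6,
# with lambda_QCD ~ 1 fm.  The growth rate below is a placeholder, not the
# fitted slope of fig. 1.

fm_per_m = 1.0e15
lam_qcd = 1.0                       # fm, typical QCD distance scale

def log10_crr(L_fm, b_fm):
    # log10 of the dimensionless CRR measure lambda_QCD * L^5 / b^6
    return np.log10(lam_qcd * L_fm**5 / b_fm**6)

# Reference point: an early (1999-era) ensemble with L ~ 2.5 fm, b ~ 0.18 fm.
log_crr_1999 = log10_crr(2.5, 0.18)

# Assumed growth: the affordable CRR gains `slope` decades per year
# (a placeholder chosen to mimic a Moore's-law-like trend).
slope = 0.6

for label, L_target in [("1 micron", 1.0e-6 * fm_per_m), ("1 meter", fm_per_m)]:
    years = (log10_crr(L_target, 0.1) - log_crr_1999) / slope
    print(f"L = {label:8s} at b = 0.1 fm  ->  ~{years:.0f} years after 1999")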
We should comment on the simulation scenario in the context of ongoing attempts to
discover the theoretical structure that underlies the Standard Model of particle physics, and
the expectation of the unification of the forces of nature at very short distances. There
has not been much interest in the notion of an underlying lattice structure of space-time
for several reasons. Primary among them is that in Minkowski space, a non-vanishing spa-
tial lattice spacing generically breaks space-time symmetries in such a way that there are
dimension-four Lorentz breaking operators in the Standard Model, requiring a large num-
ber of fine-tunings to restore Lorentz invariance to experimentally verified levels [25]. The
fear is that even though Lorentz violating dimension four operators can be tuned away at
tree-level, radiative corrections will induce them back at the quantum level as is discussed
in Refs. [26, 27]. This is not an issue if one assumes the simulation scenario for the same
reason that it is not an issue when one performs a lattice QCD calculation.^3 The under-
lying space-time symmetries respected by the lattice action will necessarily be preserved at
the quantum level. In addition, the notion of a simulated universe is sharply at odds with
the reductionist prejudice in particle physics which suggests the unification of forces with a
simple and beautiful predictive mathematical description at very short distances. However,
the discovery of the string landscape [28, 29], and the current inability of string theory to
provide a useful predictive framework which would post-dict the fundamental parameters of
the Standard Model, provides the simulators (future string theorists?) with a purpose: to
systematically explore the landscape of vacua through numerical simulation. If it is indeed
the case that the fundamental equations of nature allow on the order of 10^500 solutions [30],
then perhaps the most profound quest that can be undertaken by a sentient being is the ex-
ploration of the landscape through universe simulation. In some weak sense, this exploration
is already underway with current investigations of a class of confining beyond-the-Standard-
Model (BSM) theories, where there is only minimal experimental guidance at present (for
one recent example, see Ref. [31]). Finally, one may be tempted to view lattice gauge the-
ory as a primitive numerical tool, and that the simulator should be expected to have more
efficient ways of simulating reality. However, one should keep in mind that the only known
way to define QCD as a consistent quantum field theory is in the context of lattice QCD,
which suggests a fundamental role for the lattice formulation of gauge theory.
Physicists, in contrast with philosophers, are interested in determining observable con-
sequences of the hypothesis that we are a simulation.^{4,5} In lattice QCD, space-time is
^3 Current lattice QCD simulations are performed in Euclidean space, where the underlying hyper-cubic symmetry protects against Lorentz-invariance breaking from dimension-four operators. However, Hamiltonian lattice formulations, which are currently too costly to be practical, are also possible.
^4 There are a number of peculiar observations that could be attributed to our universe being a simulation, but that cannot be tested at present. For instance, it could be that the observed non-vanishing value of the cosmological constant is simply a rounding error resulting from the number zero being entered into a simulation program with insufficient precision.
replaced by a finite hyper-cubic grid of points over which the fields are defined, and the
(now) finite-dimensional quantum mechanical path integral is evaluated. The grid breaks
Lorentz symmetry (and hence rotational symmetry), and its effects have been defined within
the context of a low-energy effective field theory (EFT), the Symanzik action, when the lat-
tice spacing is small compared with any physical length scales in the problem [33, 34].^6
The lattice action can be modified to systematically improve calculations of observables, by
adding irrelevant operators with coefficients that can be determined nonperturbatively. For
instance, the Wilson action can be O(b)-improved by including the Sheikholeslami-Wohlert
term [21]. Given this low-energy description, we would like to investigate the hypothesis
that we are a simulation with the assumption that the development of simulations of the
universe in some sense parallels the development of lattice QCD calculations. That is, early
simulations use the computationally “cheapest” discretizations with no improvement. In par-
ticular, we will assume that the simulation of our universe is done on a hyper-cubic grid,^7
and, as a starting point, we will assume that the simulator is using an unimproved Wilson
action that produces O(b) artifacts of the form of the Sheikholeslami-Wohlert operator in
the low-energy theory.^8
In section II, the simple scenario of an unimproved Wilson action is introduced. In
section III, by looking at the rotationally-invariant dimension-five operator arising from
this action, the bounds on the lattice spacing are extracted from the current experimental
determinations, and theoretical calculations, of g − 2 of the electron and muon, and from
the fine-structure constant, α, determined by the Rydberg constant. Section IV considers
the simplest effects of Lorentz symmetry breaking operators that first appear at O(b^2),
and modifications to the energy-momentum relation. Constraints on the energy-momentum
relation due to cosmic ray events are found to provide the most stringent bound on b. We
conclude in section V.
II. UNIMPROVED WILSON SIMULATION OF THE UNIVERSE
The simplest gauge invariant action of fermions which does not contain doublers is the
Wilson action,
S^{(W)} = b^4 \sum_x \mathcal{L}^{(W)}(x) = b^4 \left( m + \frac{4}{b} \right) \sum_x \overline{\psi}(x)\,\psi(x)
\; + \; \frac{b^3}{2} \sum_x \overline{\psi}(x)\left[ (\gamma_\mu - 1)\, U_\mu(x)\, \psi(x + b\hat\mu) \, - \, (\gamma_\mu + 1)\, U^\dagger_\mu(x - b\hat\mu)\, \psi(x - b\hat\mu) \right] ,   (1)
which describes a fermion, ψ, of mass m interacting with a gauge field, A_µ(x), through the
gauge link,
U_\mu(x) = \exp\!\left( i g \int_x^{x + b\hat\mu} dz\, A_\mu(z) \right) ,   (2)
^5 Hsu and Zee [32] have suggested that the CMB provides an opportunity for a potential creator/simulator of our universe to communicate with the created/simulated without further intervention in the evolution of the universe. If, in fact, it is determined that observables in our universe are consistent with those that would result from a numerical simulation, then the Hsu-Zee scenario becomes a more likely possibility. Further, it would then become interesting to consider the possibility of communicating with the simulator, or even more interestingly, manipulating or controlling the simulation itself.
where µ̂ is a unit vector in the µ-direction, and g is the coupling constant of the theory.
Expanding the Lagrangian density, L(W), in the lattice spacing (that is small compared with
the physical length scales), and performing a field redefinition [38], it can be shown that the
Lagrangian density takes the form
\mathcal{L}^{(W)} = \overline{\psi}\left( \slashed{D} + \tilde m_\psi \right)\psi \; + \; C_p\, \frac{g b}{4}\, \overline{\psi}\, \sigma_{\mu\nu} G^{\mu\nu}\, \psi \; + \; \mathcal{O}(b^2) ,   (3)

where G_{µν} = i[D_µ, D_ν]/g is the field strength tensor and D_µ is the covariant derivative. m̃_ψ
is a redefined mass which contains O(b) lattice spacing artifacts (that can be tuned away).
The coefficient of the Pauli term ψ̄ σ_{µν} G^{µν} ψ is fixed at tree level, C_p = 1 + O(α), where
α = g^2/(4π). It is worth noting that, as is usual in lattice QCD calculations, the lattice
action can be O(b)-improved by adding a term of the form δL^{(W)} = C_sw (g b/4) ψ̄ σ_{µν} G^{µν} ψ to the
Lagrangian, with C_sw = −C_p + O(α). This is the so-called Sheikholeslami-Wohlert term.
Of course there is no reason to assume that the simulator had to have performed such an
improvement in simulating the universe.
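As an aside on why the action in eq. (1) does not contain doublers, the following free-field sketch (Python, illustrative units with b = r = 1) evaluates the standard effective Wilson mass m + (2r/b) Σ_µ sin^2(b p_µ/2) at the corners of the Brillouin zone: only the mode at p = 0 keeps the bare mass m, while the fifteen would-be doublers acquire masses of order 1/b. The parameter values are placeholders, not taken from the text.

import numpy as np
from itertools import product

b, r, m = 1.0, 1.0, 0.1      # lattice spacing, Wilson parameter, bare mass (placeholders)

def wilson_mass(p):
    # Effective mass of a free Wilson fermion at four-momentum p.
    return m + (2.0 * r / b) * np.sum(np.sin(b * np.asarray(p) / 2.0) ** 2)

# The 16 corners of the 4d Brillouin zone: each component is 0 or pi/b.
for corner in product([0.0, np.pi / b], repeat=4):
    n = sum(c != 0.0 for c in corner)
    print(f"n = {n}: effective mass = {wilson_mass(corner):.2f}  (expected m + 2rn/b = {m + 2*r*n/b:.2f})")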
III. ROTATIONALLY INVARIANT MODIFICATIONS
Lorentz symmetry is recovered in lattice calculations as the lattice spacing vanishes when
compared with the scales of the system. It is useful to consider contributions to observables
from a non-zero lattice spacing that are Lorentz invariant and consequently rotationally
invariant, and those that are not. While the former type of modifications could arise from
many different BSM scenarios, the latter, particularly modifications that exhibit cubic sym-
metry, would be suggestive of a structure consistent with an underlying discretization of
space-time.
1. QED Fine Structure Constant and the Anomalous Magnetic Moment
For our present purposes, we will assume that Quantum Electrodynamics (QED) is simulated
with this unimproved action, eq. (1). The O(b) contribution to the lattice action induces an
additional contribution to the fermion magnetic moments. Specifically, the Lagrange density
that describes electromagnetic interactions is given by eq. (3), where the interaction with
an external magnetic field B is described through the covariant derivative D_µ = ∂_µ + i e Q̂ A_µ,
with e > 0 and the electromagnetic charge operator Q̂, and where the vector potential
satisfies ∇ × A = B. The interaction Hamiltonian density in Minkowski space is given by
\mathcal{H}_{int} = \frac{e}{2m}\, \overline{\psi}\, A_\mu \left( i\overrightarrow{\partial}^{\,\mu} - i\overleftarrow{\partial}^{\,\mu} \right) \hat Q\, \psi \; + \; \frac{\hat Q e}{4m}\, \overline{\psi}\, \sigma_{\mu\nu} F^{\mu\nu}\, \psi \; + \; C_p\, \frac{\hat Q e\, b}{4}\, \overline{\psi}\, \sigma_{\mu\nu} F^{\mu\nu}\, \psi \; + \; \ldots \; .   (4)
where F_{µν} = ∂_µ A_ν − ∂_ν A_µ is the electromagnetic field strength tensor, and the ellipses denote
terms suppressed by additional powers of b. By performing a non-relativistic reduction, the
^6 The finite volume of the hyper-cubic grid also breaks Lorentz symmetry. A recent analysis of the CMB suggests that the universe has a compact topology, consistent with two compactified spatial dimensions and with a greater than 4σ deviation from three uncompactified spatial dimensions [35].
^7 The concept of the universe consisting of fields defined on nodes, and interactions propagating along the links between the nodes, separated by distances of order the Planck length, has been considered previously, e.g. see Ref. [36].
^8 It has been recently pointed out that the domain-wall formulation of lattice fermions provides a mechanism by which the number of generations of fundamental particles is tied to the form of the dispersion relation [37]. Space-time would then be a topological insulator.
last two terms in eq. (4) give rise to H_{int,mag} = −µ · B, where the electron magnetic moment
µ is given by

\boldsymbol{\mu} = \frac{\hat Q e}{2m} \left( g + 2 m b\, C_p + \ldots \right) \mathbf{S} = g(b)\, \frac{\hat Q e}{2m}\, \mathbf{S} ,   (5)
where g is the usual fermion g-factor and S is its spin. Note that the lattice spacing
contribution to the magnetic moment is enhanced relative to the Dirac contribution by one
power of the particle mass.
For the electron, the effective g-factor has an expansion at finite lattice spacing of
\frac{g^{(e)}(b)}{2} = 1 + C_2 \left(\frac{\alpha}{\pi}\right) + C_4 \left(\frac{\alpha}{\pi}\right)^{2} + C_6 \left(\frac{\alpha}{\pi}\right)^{3} + C_8 \left(\frac{\alpha}{\pi}\right)^{4} + C_{10} \left(\frac{\alpha}{\pi}\right)^{5}
+ a_{\rm hadrons} + a_{\mu,\tau} + a_{\rm weak} + m_e b\, C_p + \ldots ,   (6)
where the coefficients C_i, in general, depend upon the ratio of lepton masses. The calculation
by Schwinger provides the leading coefficient, C_2 = 1/2. The experimental value,
g^{(e)}_{expt}/2 = 1.001 159 652 180 73(28), gives rise to the best determination of the fine-structure constant
α (at b = 0) [39]. However, when the lattice spacing is non-zero, the extracted value of α
α(at b= 0) [39]. However, when the lattice spacing is non-zero, the extracted value of α
becomes a function of b,
α(b) = α(0) 2πmebCp+Oα2b,(7)
where α(0)1= 137.035 999 084(51) is determined from the experimental value of electron
g-factor as quoted above. With one experimental constraint and two parameters to deter-
mine, αand b, unique values for these quantities cannot be established, and an orthogonal
constraint is required. One can look at the muon g − 2, which has a similar QED expansion
to that of the electron, including the contribution from the non-zero lattice spacing,
\frac{g^{(\mu)}(b)}{2} = 1 + C^{(\mu)}_2 \left(\frac{\alpha}{\pi}\right) + C^{(\mu)}_4 \left(\frac{\alpha}{\pi}\right)^{2} + C^{(\mu)}_6 \left(\frac{\alpha}{\pi}\right)^{3} + C^{(\mu)}_8 \left(\frac{\alpha}{\pi}\right)^{4} + C^{(\mu)}_{10} \left(\frac{\alpha}{\pi}\right)^{5}
+ a^{(\mu)}_{\rm hadrons} + a^{(\mu)}_{e,\tau} + a^{(\mu)}_{\rm weak} + m_\mu b\, C_p + \ldots \; .   (8)
Inserting the electron g − 2 (at finite lattice spacing) gives

\frac{g^{(\mu)}(b)}{2} = \frac{g^{(\mu)}(0)}{2} + (m_\mu - m_e)\, b\, C_p + \mathcal{O}(\alpha^2 b) .   (9)
Given that the standard model calculation of g^{(µ)}(0) differs from the experimental
value at the level of 3.6σ, one can put a limit on b from the difference and uncertainty
in the theoretical and experimental values of g^{(µ)}: g^{(µ)}_{expt}/2 = 1.001 165 920 89(54)(33) and
g^{(µ)}_{theory}/2 = 1.001 165 918 02(2)(42)(26) [39]. Attributing this difference to a finite lattice
spacing, these values give rise to

b^{-1} = (3.6 ± 1.1) × 10^7 GeV ,   (10)

which provides an approximate upper bound on the lattice spacing.
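For orientation, the estimate in eq. (10) can be reproduced with a few lines of arithmetic. The sketch below (Python) takes C_p = 1 and the g − 2 values quoted above; the way the quoted uncertainties are combined in quadrature is illustrative only.

import math

a_expt   = 1.00116592089    # g_expt / 2
a_theory = 1.00116591802    # g_theory / 2
sigma    = math.hypot(54e-11, 33e-11, 2e-11, 42e-11, 26e-11)   # rough combination

m_mu = 0.1056584            # GeV
m_e  = 0.0005110            # GeV

delta_a = a_expt - a_theory
b = delta_a / (m_mu - m_e)                  # GeV^-1, taking C_p = 1
print(f"delta a_mu = {delta_a:.2e} +/- {sigma:.2e}")
print(f"b^-1 ~ {1.0 / b:.2e} GeV   (eq. (10): (3.6 +/- 1.1) x 10^7 GeV)")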
2. The Rydberg Constant and α
Another limit can be placed on the lattice spacing from differences between the value of α
extracted from the electron g − 2 and from the Rydberg constant, R. The latter extraction,
as discussed in Ref. [39], is rather complicated, with the value of R obtained from a χ²-
minimization fit involving the experimentally determined energy-level splittings. However, to
recover the constraints on the Dirac energy-eigenvalues (which then lead to R), theoretical
corrections must be first removed from the experimental values. To begin with, one can
obtain an estimate for the limit on b by considering the differences between α's obtained
from various methods, assuming that the only contributions are from QED and the lattice
spacing. Given that it is the reduced mass (µ ≈ m_e) that will compensate the lattice spacing
in these QED determinations (for an atom at rest in the lattice frame), one can write

\delta\alpha = 2\pi\, m_e b\, \tilde C_p ,   (11)
where C̃_p is a number of O(1) by naive dimensional analysis, and is a combination of the contributions
from the two independent extractions of α. There is no reason to expect complete
cancellation between the contributions from two different extractions. In fact, it is straightforward
to show that the O(b) contribution to the value of α determined from the Rydberg
constant is suppressed by α^4 m_e^2, and therefore the above assumption is robust. In addition to
the electron g − 2 determination of the fine-structure constant as quoted above, the next most precise
determination of α comes from atomic recoil experiments, α^{-1} = 137.035 999 049(90)^9
[39], given an a priori determined value of the Rydberg constant. This gives rise to a difference
of |δα| = (1.86 ± 5.51) × 10^{-12} between the two extractions, which translates into

b = |(0.6 ± 1.7) × 10^{-9}| GeV^{-1} .   (12)

As this result is consistent with zero, the 1σ values of the lattice spacing give rise to a limit of

b^{-1} ≳ 4 × 10^8 GeV ,   (13)
which is seen to be an order of magnitude more precise than that arising from the muon g − 2.
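The same arithmetic reproduces the numbers in eqs. (12) and (13). The sketch below (Python) assumes C̃_p = 1 and uses the two quoted values of α^{-1}; the uncertainty treatment is again illustrative.

import math

alpha_inv_g2     = 137.035999084    # from the electron g-2, uncertainty 51e-9
alpha_inv_recoil = 137.035999049    # from atomic recoil,    uncertainty 90e-9

alpha = 1.0 / alpha_inv_g2
delta_alpha = abs(alpha_inv_g2 - alpha_inv_recoil) * alpha**2   # |d(alpha)| = alpha^2 |d(alpha^-1)|
sigma_alpha = math.hypot(51e-9, 90e-9) * alpha**2

m_e = 0.000511                                                  # GeV
b_central = delta_alpha / (2.0 * math.pi * m_e)                 # GeV^-1
b_1sigma  = (delta_alpha + sigma_alpha) / (2.0 * math.pi * m_e)

print(f"|delta alpha| = {delta_alpha:.2e} +/- {sigma_alpha:.2e}")
print(f"b ~ {b_central:.1e} GeV^-1;  1-sigma limit b^-1 > {1.0 / b_1sigma:.0e} GeV")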
For more sophisticated simulations in which chiral symmetry is preserved by the lattice
discretization, the coefficient Cpwill vanish or will be exponentially small. As a result,
the bound on the lattice spacing derived from the muon g − 2 and from the differences
between determinations of αwill be significantly weaker. In these analyses, we have worked
with QED only, and have not included the full electroweak interactions as chiral gauge
theories have not yet been successfully latticized. Consequently, these constraints are to be
considered estimates only, and a more complete analysis needs to be performed when chiral
gauge theories can be simulated.
IV. ROTATIONAL SYMMETRY BREAKING
While there are more conventional scenarios for BSM physics that generate deviations in
g − 2 from the standard model prediction, or differences between independent determinations
^9 Extracted from a ^87Rb recoil experiment [40].
of α, the breaking of rotational symmetry would be a solid indicator of an underlying space-
time grid, although not the only one. As has been extensively discussed in the literature,
another scenario that gives rise to rotational invariance violation involves the introduction of
an external background with a preferred direction. Such a preferred direction can be defined
via a fixed vector, uµ[41]. The effective low-energy Lagrangian of such a theory contains
Lorentz covariant higher dimension operators with a coupling to this background vector, and
breaks both parity and Lorentz invariance [42]. Dimension three, four and five operators,
however, are shown to be severely constrained by experiment, and such contributions in the
low-energy action (up to dimension five) have been ruled out [25, 41, 43, 44].
3. Atomic Level Splittings
At O(b^2) in the lattice spacing expansion of the Wilson action that is relevant to describing
low-energy processes, there is a rotational-symmetry breaking operator that is consistent
with the lattice hyper-cubic symmetry,
\mathcal{L}_{RV} = C_{RV}\, \frac{b^2}{6} \sum_{\mu=1}^{4} \overline{\psi}\, \gamma_\mu D_\mu D_\mu D_\mu\, \psi \; ,   (14)
where the tree-level value of C_RV = 1. In taking matrix elements of this operator in the
hydrogen atom, where the binding energy is suppressed by a factor of α compared with
the typical momentum, the dominant contribution is from the spatial components. As each
spatial momentum scales as m_e α in the non-relativistic limit, shifts in the energy levels are
expected to be of order

\delta E \sim C_{RV}\, \alpha^4 m_e^3\, b^2 .   (15)
To understand the size of energy splittings, a lattice spacing of b^{-1} = 10^8 GeV gives an
energy shift of order δE ∼ 10^{-26} eV, including for the splittings between substates in
given irreducible representations of SO(3) with angular momentum J ≥ 2. This magnitude
of energy shifts and splittings is presently unobservable. Given present technology, and
constraints imposed on the lattice spacing by other observables, we conclude that there is
little chance to see such an effect in the atomic spectrum.
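As a quick numerical check of eq. (15) in natural units (Python):

alpha = 1.0 / 137.035999
m_e = 0.000511                      # GeV
b = 1.0 / 1.0e8                     # GeV^-1, i.e. b^-1 = 10^8 GeV

dE = alpha**4 * m_e**3 * b**2       # GeV, with C_RV = 1
print(f"delta E ~ {dE * 1e9:.1e} eV")   # a few times 10^-26 eV, consistent with the quoted order of magnitude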
4. The Energy-Momentum Relation and Cosmic Rays
Constraints on Lorentz-violating perturbations to the standard model of electroweak interac-
tions from differences in the maximal attainable velocity (MAV) of particles (e.g. Ref. [25]),
and on interactions with a non-zero vector field (e.g. Ref. [45]), have been determined pre-
viously. Assuming that each particle satisfies an energy-momentum relation of the form
E2
i=|pi|2c2
i+m2
ic4
i(along with the conservation of both energy and momentum in any
given process), if cγexceeds ce±, the process γe+ebecomes possible for photons with
an energy greater than the critical energy Ecrit.= 2mec2
ecγ/qc2
γc2
e±, and the observa-
tion of high energy primary cosmic photons with Eγ<
20 TeV translates into the constraint
cγce±<
1015. Ref. [25] presents a series of exceedingly tight constraints on differences be-
tween the speed of light between different particles, with typical sizes of δcij <
1021 1022
for particles of species i and j.

FIG. 2. The energy surface of a massless, non-interacting Wilson fermion with r = 1 as a function of momentum in the x and y directions, bounded by −π < b p_{x,y} < π, for p_z = 0, is shown in blue. The continuum dispersion relation is shown as the red surface.

At first glance, these constraints [26] would appear to
also provide tight constraints on the size of the lattice spacing used in a simulation of the
universe. However, this is not the case. As the speed of light for each particle in the dis-
cretized space-time depends on its three-momentum, the constraints obtained by Coleman
and Glashow [25] do not directly apply to processes occurring in a lattice simulation.
The dispersion relations satisfied by bosons and Wilson fermions in a lattice simulation
(in Minkowski space) are
\sinh^2\!\left(\frac{b E_b}{2}\right) - \sum_{j=1,2,3} \sin^2\!\left(\frac{b k_j}{2}\right) - \left(\frac{b m_b}{2}\right)^{2} = 0 \; ; \qquad
E_b = \sqrt{|\mathbf{k}|^2 + m_b^2} + \mathcal{O}(b^2) ,   (16)

and

\sinh^2(b E_f) - \sum_{j=1,2,3} \sin^2(b k_j) - \left[ b m_f + 2r\left( \sum_{j=1,2,3} \sin^2\!\left(\frac{b k_j}{2}\right) - \sinh^2\!\left(\frac{b E_f}{2}\right) \right) \right]^{2} = 0 \; ; \qquad
E_f = \sqrt{|\mathbf{k}|^2 + m_f^2} - \frac{r\, b\, m_f^3}{2\sqrt{|\mathbf{k}|^2 + m_f^2}} + \mathcal{O}(b^2) ,   (17)
respectively, where r is the coefficient of the Wilson term, and E_b and E_f are the energies of a
boson and a fermion with momentum k, respectively. The summations are performed over
the components along the lattice Cartesian axes corresponding to the x,y, and z spatial
directions. The implications of these dispersion relations for neutrino oscillations along one
of the lattice axes have been considered in Ref. [46]. Further, they have been considered as
a possible explanation [47] of the (now retracted) OPERA result suggesting superluminal
neutrinos [48]. The violation of Lorentz invariance resulting from these dispersion relations
is due to the fact that they have only cubic symmetry and not full rotational symmetry, as
shown in fig. 2. It is in the limit of small momentum, compared to the inverse lattice spacing,
that the dispersion relations exhibit rotational invariance. While for the fundamental parti-
cles, the dispersion relations in eq. (16) and eq. (17) are valid, for composite particles, such
as the proton or pion, the dispersion relations will be dynamically generated. In the present
analysis we assume that the dispersion relations for all particles take the form of those in
eq. (16) and eq. (17). It is also interesting to note that the polarizations of the massless
vector fields are not exactly perpendicular to their direction of propagation for some direc-
tions of propagation with respect to the lattice axes, with longitudinal components present
for non-zero lattice spacings.
Consider the process p → p + γ, which is forbidden in the vacuum by energy-momentum
conservation in special relativity when the speeds of light of the proton and photon are equal,
c_p = c_γ. Such a process can proceed in-medium when v_p > c_γ, corresponding to Cerenkov
radiation. In the situation where the proton and photon have different MAV's, the absence
of this process in vacuum requires that |c_p − c_γ| ≲ 10^{-23} [25, 49]. In lattice simulations of
the universe, this process could proceed in the vacuum if there are final state momenta
which satisfy energy conservation for an initial state proton with energy Eimoving in some
direction with respect to the underlying cubic lattice. Numerically, we find that there are
no final states that satisfy this condition, and therefore this process is forbidden for all
proton momenta.^10 In contrast, the process γ → e^+ e^-, which provides tight constraints
on differences between MAV’s [25], can proceed for very high energy photons (those with
energies comparable to the inverse lattice spacing) near the edges of the Brillouin zone.
Further, very high energy π^0's are stable against π^0 → γγ, as is the related process γ → π^0 γ.
With the dispersion relation of special relativity, the structure of the cosmic ray spec-
trum is greatly impacted by the inelastic collisions of nucleons with the cosmic microwave
background (CMB) [50, 51]. Processes such as γ_CMB + N → Δ give rise to the predicted
GZK cut off scale [50, 51] of 6 × 10^20 eV in the spectrum of high energy cosmic rays. Recent
experimental observations show a decline in the fluxes starting around this value [52, 53],
indicating that the GZK cut off (or some other cut off mechanism) is present in the cosmic
ray flux. For lattice spacings corresponding to an energy scale comparable to the GZK cut
off, the cosmic ray spectrum will exhibit significant deviations from isotropy, revealing the
cubic structure of the lattice. However, for lattice spacings much smaller than the GZK
cut off scale, the GZK mechanism cuts off the spectrum, effectively hiding the underlying
lattice structure. When the lattice rest frame coincides with the CMB rest frame, head-on
interactions between a high energy proton with momentum |p| and a photon of (very low)
energy ω can proceed through the Δ resonance when
\omega = \frac{m_\Delta^2 - m_N^2}{4|\mathbf{p}|} \left[ 1 + \frac{\pi b^2 |\mathbf{p}|^2}{9} \left( Y_4^{0}(\theta,\phi) + \sqrt{\frac{5}{14}} \left( Y_4^{+4}(\theta,\phi) + Y_4^{-4}(\theta,\phi) \right) \right) \right] - \frac{m_\Delta^3 - m_N^3}{4|\mathbf{p}|}\, b\, r \; + \; \ldots \; ,   (18)
for |p| ≪ 1/b, where θ and φ are the polar and azimuthal angles of the particle momenta in
the rest frame of the lattice, respectively. This represents a lower bound for the energy of
photons participating in such a process with arbitrary collision angles.
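For orientation, the leading (b → 0) term of eq. (18) is the familiar head-on threshold for photoproduction through the Δ resonance; the sketch below (Python) evaluates it at the GZK cut off momentum and compares it with a typical CMB photon energy. The proton momentum and CMB temperature used are standard reference values, not quantities taken from the text.

m_delta = 1.232                     # GeV
m_N     = 0.938                     # GeV
p       = 6.0e11                    # GeV, proton momentum at the GZK cut off (6 x 10^20 eV)

omega = (m_delta**2 - m_N**2) / (4.0 * p)       # GeV, leading term of eq. (18)
kT_cmb = 2.35e-13                               # GeV, CMB temperature of about 2.7 K

print(f"threshold photon energy ~ {omega * 1e9:.2e} eV")
print(f"typical CMB photon energy ~ {kT_cmb * 1e9:.2e} eV")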
The lattice spacing itself introduces a cut off to the cosmic ray spectrum. For both the
fermions and the bosons, the cut off from the dispersion relation is E_max ∼ 1/b. Equating
this to the GZK cut off corresponds to a lattice spacing of b ∼ 10^{-12} fm, or a mass scale of
b^{-1} ∼ 10^{11} GeV. Therefore, the lattice spacing used in the lattice simulation of the universe
^10 A more complete treatment of this process involves using the parton distributions of the proton to relate
its energy to its momentum [26]. For the composite proton, the p → p + γ process becomes kinematically
allowed, but with a rate that is suppressed by O(Λ_QCD^8 b^7) due to the momentum transfer involved,
effectively preventing the process from occurring. With momentum transfers of the scale 1/b, the final
states that would be preferred in inclusive decays, p → X_h + γ, are kinematically forbidden, with invariant
masses of ∼ 1/b. More refined explorations of this and other processes are required.
must be b ≲ 10^{-12} fm in order for the GZK cut off to be present or for the lattice spacing
itself to provide the cut off in the cosmic ray spectrum. The most striking feature of the
scenario in which the lattice provides the cut off to the cosmic ray spectrum is that the
angular distribution of the highest energy components would exhibit cubic symmetry in the
rest frame of the lattice, deviating significantly from isotropy. For smaller lattice spacings,
the cubic distribution would be less significant, and the GZK mechanism would increasingly
dominate the high energy structure. It may be the case that more advanced simulations will
be performed with non-cubic lattices. The results obtained for cubic lattices indicate that
the symmetries of the non-cubic lattices should be imprinted, at some level, on the high
energy cosmic ray spectrum.
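The conversion behind b ≲ 10^{-12} fm is a one-line unit change (Python), using ħc ≈ 0.1973 GeV fm:

hbar_c = 0.1973                     # GeV fm
E_gzk  = 6.0e20 * 1.0e-9            # GeV (6 x 10^20 eV)

b_inv = E_gzk                       # identify E_max ~ 1/b with the GZK cut off
print(f"b^-1 ~ {b_inv:.1e} GeV  ->  b ~ {hbar_c / b_inv:.1e} fm")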
V. CONCLUSIONS
In this work, we have taken seriously the possibility that our universe is a numerical simula-
tion. In particular, we have explored a number of observables that may reveal the underlying
structure of a simulation performed with a rigid hyper-cubic space-time grid. This is mo-
tivated by the progress in performing lattice QCD calculations involving the fundamental
fields and interactions of nature in femto-sized volumes of space-time, and by the simulation
hypothesis of Bostrom [1]. A number of elements required for a simulation of our universe
directly from the fundamental laws of physics have not yet been established, and we have
assumed that they will, in fact, be developed at some point in the future; two important
elements being an algorithm for simulating chiral gauge theories, and quantum gravity. It
is interesting to note that in the simulation scenario, the fundamental energy scale defined
by the lattice spacing can be orders of magnitude smaller than the Planck scale, in which
case the conflict between quantum mechanics and gravity should be absent.
The spectrum of the highest energy cosmic rays provides the most stringent constraint
that we have found on the lattice spacing of a universe simulation, but precision measure-
ments, particularly the muon g − 2, are within a few orders of magnitude of being sensitive
to the chiral symmetry breaking aspects of a simulation employing the unimproved Wilson
lattice action. Given the ease with which current lattice QCD simulations incorporate im-
provement or employ discretizations that preserve chiral symmetry, it seems unlikely that
any but the very earliest universe simulations would be unimproved with respect to the
lattice spacing. Of course, improvement in this context masks much of our ability to probe
the possibility that our universe is a simulation, and we have seen that, with the excep-
tion of the modifications to the dispersion relation and the associated maximum values of
energy and momentum, even O(b^2) operators in the Symanzik action easily avoid obvious
experimental probes. Nevertheless, assuming that the universe is finite and therefore the
resources of potential simulators are finite, then a volume containing a simulation will be
finite and a lattice spacing must be non-zero, and therefore in principle there always remains
the possibility for the simulated to discover the simulators.
Acknowledgments
We would like to thank Eric Adelberger, Blayne Heckel, David Kaplan, Kostas Orginos,
Sanjay Reddy and Kenneth Roche for interesting discussions. We also thank William Det-
mold, Thomas Luu and Ann Nelson for comments on earlier versions of the manuscript. SRB
was partially supported by the INT during the program INT-12-2b: Lattice QCD studies of
excited resonances and multi-hadron systems, and by NSF continuing grant PHY1206498.
In addition, SRB gratefully acknowledges the hospitality of HISKP and the support of the
Mercator programme of the Deutsche Forschungsgemeinschaft. ZD and MJS were supported
in part by the DOE grant DE-FG03-97ER4014.
[1] N. Bostrom, Philosophical Quarterly, Vol 53, No 211, 243 (2003).
[2] A. S. Kronfeld, (2012), arXiv:1209.3468 [physics.hist-ph].
[3] Z. Fodor and C. Hoelbling, Rev.Mod.Phys., 84, 449 (2012), arXiv:1203.4789 [hep-lat].
[4] S. R. Beane, E. Chang, S. D. Cohen, W. Detmold, H.-W. Lin, et al., (2012), arXiv:1206.5219
[hep-lat].
[5] T. Yamazaki, K.-i. Ishikawa, Y. Kuramashi, and A. Ukawa, (2012), arXiv:1207.4277 [hep-lat].
[6] S. Aoki et al. (HAL QCD Collaboration), (2012), arXiv:1206.5088 [hep-lat].
[7] S. Lloyd, Nature, 406, 1047 (1999), arXiv:quant-ph/9908043 [quant-ph].
[8] S. Lloyd, (2005), arXiv:quant-ph/0501135 [quant-ph].
[9] K. Zuse, Rechnender Raum (Friedrich Vieweg and Sohn, Braunschweig, 1969).
[10] E. Fredkin, Physica, D45, 254 (1990).
[11] S. Wolfram, A New Kind of Science (Wolfram Media, 2002) p. 1197.
[12] G. ’t Hooft, (2012), arXiv:1205.4107 [quant-ph].
[13] A. Church, Am. J. Math., 58, 345 (1936).
[14] A. Turing, Proc. Lond. Math. Soc. Ser. 2, 42, 230 (1936).
[15] D. Deutsch, Proc. of the Royal Society of London, A400, 97 (1985).
[16] J. Barrow, Living in a Simulated Universe, edited by B. Carr (Cambridge University Press,
2008) Chap. 27, Universe or Multiverse?, pp. 481–486.
[17] MILC-Collaboration, http://physics.indiana.edu/sg/milc.html.
[18] J. B. Kogut and L. Susskind, Phys.Rev., D11, 395 (1975).
[19] SPECTRUM-Collaboration, http://usqcd.jlab.org/projects/AnisoGen/.
[20] K. G. Wilson, Phys.Rev., D10, 2445 (1974).
[21] B. Sheikholeslami and R. Wohlert, Nucl.Phys., B259, 572 (1985).
[22] H.-W. Lin et al. (Hadron Spectrum Collaboration), Phys.Rev., D79, 034502 (2009),
arXiv:0810.3588 [hep-lat].
[23] V. Vinge, Science and Engineering in the Era of Cyberspace, G. A. Landis, ed., NASA Publi-
cation CP-10129, Vision-21: Interdisciplinary, 115 (1993).
[24] R. Kurzweil, The Singularity Is Near: When Humans Transcend Biology (Penguin (Non-
Classics), 2006) ISBN 0143037889.
[25] S. R. Coleman and S. L. Glashow, Phys.Rev., D59, 116008 (1999), arXiv:hep-ph/9812418
[hep-ph].
[26] O. Gagnon and G. D. Moore, Phys.Rev., D70, 065002 (2004), arXiv:hep-ph/0404196 [hep-ph].
[27] J. Collins, A. Perez, D. Sudarsky, L. Urrutia, and H. Vucetich, Phys.Rev.Lett., 93, 191301
(2004), arXiv:gr-qc/0403053 [gr-qc].
[28] S. Kachru, R. Kallosh, A. D. Linde, and S. P. Trivedi, Phys.Rev., D68, 046005 (2003),
arXiv:hep-th/0301240 [hep-th].
[29] L. Susskind, (2003), arXiv:hep-th/0302219 [hep-th].
[30] M. R. Douglas, JHEP, 0305, 046 (2003), arXiv:hep-th/0303194 [hep-th].
[31] T. Appelquist, R. C. Brower, M. I. Buchoff, M. Cheng, S. D. Cohen, et al., (2012),
arXiv:1204.6000 [hep-ph].
[32] S. Hsu and A. Zee, Mod.Phys.Lett., A21, 1495 (2006), arXiv:physics/0510102 [physics].
[33] K. Symanzik, Nucl.Phys., B226, 187 (1983).
[34] K. Symanzik, Nucl.Phys., B226, 205 (1983).
[35] G. Aslanyan and A. V. Manohar, JCAP, 1206, 003 (2012), arXiv:1104.0015 [astro-ph.CO].
[36] P. Jizba, H. Kleinert, and F. Scardigli, Phys.Rev., D81, 084030 (2010), arXiv:0912.2253 [hep-
th].
[37] D. B. Kaplan and S. Sun, Phys.Rev.Lett., 108, 181807 (2012), arXiv:1112.0302 [hep-ph].
[38] M. Lüscher and P. Weisz, Commun.Math.Phys., 97, 59 (1985).
[39] P. J. Mohr, B. N. Taylor, and D. B. Newell, ArXiv e-prints (2012), arXiv:1203.5425
[physics.atom-ph].
[40] R. Bouchendira, P. Cladé, S. Guellati-Khélifa, F. Nez, and F. Biraben, Phys. Rev. Lett., 106,
080801 (2011).
[41] D. Colladay and V. A. Kostelecký, Phys. Rev. D, 55, 6760 (1997).
[42] R. C. Myers and M. Pospelov, Phys.Rev.Lett., 90, 211601 (2003), arXiv:hep-ph/0301124 [hep-
ph].
[43] S. M. Carroll, G. B. Field, and R. Jackiw, Phys. Rev. D, 41, 1231 (1990).
[44] P. Laurent, D. Gotz, P. Binetruy, S. Covino, and A. Fernandez-Soto, Phys.Rev., D83, 121301
(2011), arXiv:1106.1068 [astro-ph.HE].
[45] L. Maccione, A. M. Taylor, D. M. Mattingly, and S. Liberati, JCAP, 0904, 022 (2009),
arXiv:0902.1756 [astro-ph.HE].
[46] I. Motie and S.-S. Xue, Int.J.Mod.Phys., A27, 1250104 (2012), arXiv:1206.0709 [hep-ph].
[47] S.-S. Xue, Phys.Lett., B706, 213 (2011), arXiv:1110.1317 [hep-ph].
[48] T. Adam et al. (OPERA Collaboration), (2011), arXiv:1109.4897 [hep-ex].
[49] S. R. Coleman and S. L. Glashow, Phys.Lett., B405, 249 (1997), arXiv:hep-ph/9703240 [hep-
ph].
[50] K. Greisen, Phys.Rev.Lett., 16, 748 (1966).
[51] G. Zatsepin and V. Kuzmin, JETP Lett., 4, 78 (1966).
[52] J. Abraham et al. (Pierre Auger Collaboration), Phys.Lett., B685, 239 (2010), arXiv:1002.1975
[astro-ph.HE].
[53] P. Sokolsky et al. (HiRes Collaboration), PoS, ICHEP2010, 444 (2010), arXiv:1010.2690
[astro-ph.HE].