The Inevitability of Fine Tuning in a Complex Universe

R.A.W. Bradford
Bennerley, 1 Merlin Haven, Wotton-under-Edge, Glos. GL12 7BA, UK
Tel. 01453 843462 / 01452 653237 / 07805 077729
RickatMerlinHaven@hotmail.com

International Journal of Theoretical Physics (2011) 50, 1577-1601
ABSTRACT

Why should the universe need to be fine tuned? The thesis is presented that parameter sensitivity arises as a natural consequence of the mathematics of dynamical systems with complex outcomes. Hence, fine tuning is a mathematical correlate of complexity and should not elicit surprise.

Keywords: Fine tuning, universal constants, entropy, complexity, multiverse
1 Fine Tuning and the Purpose of this Paper

It is over 40 years since Carter [15] observed that the universal constants of physics appear to be peculiarly fine tuned. Relatively small variations in the universal constants, it is claimed, would produce radical changes in the universe. On first acquaintance this seems remarkable and surprising. It will be argued here that, on the contrary, the fact that the universe contains complex parts makes fine tuning inevitable. Had physicists been foresighted enough, fine tuning could have been anticipated even before physics and cosmology had advanced to the stage where it could be directly demonstrated.

To make clear what we mean by fine tuning, a few examples are given in section 2. We shall argue that, whilst the tuning of some universal constants may not be as numerically impressive as is sometimes claimed, nevertheless tuning is evident. What has not been brought out clearly before is that fine tuning consists of two separate phenomena: parameter sensitivity and fine tuning itself. This is explained in section 3. In section 4 some toy models are used to explain why parameter sensitivity is to be expected in any universe with a complex outcome. Section 5 then presents our general thesis that parameter sensitivity is a mathematical result of the evolution of complexity, the two issues being linked via entropy interpreted in terms of phase space volume. Section 6 discusses briefly the relevance of this work to the cosmological constant, the causal entropic principle, the creationist argument from design and multiverses.
2 Examples of Fine Tuning

2.1 Varying Single Parameters Illustrates Tuning

A few examples of fine tuning are given below. However, there are many further examples and these matters have been discussed many times before; see for example [2,3,6,7,13-16,20-24,26,28,29,46,47]. The commentary on each example explains why some of them may not be as impressive as they first appear.
(i) If the neutron were lighter by more than 0.08%, or if it were heavier by more than ~1%, then there would be no stable atomic matter in the universe. This narrow mass range is required to avoid both nuclear capture of the atomic electrons, and also beta decay of the nucleus.

(ii) If the weak nuclear force were sufficiently weaker then the abundance of neutrons and protons after the first few seconds would have been closely matched, and Big Bang Nucleosynthesis (BBN) would have resulted in a universe consisting of virtually all helium and very little hydrogen. A universe with no hydrogen would contain no water, no hydrocarbons such as amino acids, and no hydrogen bond chemistry, and hence no life as we know it. However, we shall see below that "sufficiently weaker" really means a lot weaker, by at least an order of magnitude.

(iii) If the strong nuclear force (i.e., the effective low energy coupling, g_s) were ~15% weaker then the deuteron would be unbound. The formation of all nuclei would be prevented and there would be no chemical elements other than hydrogen.

(iv) If the strength of the strong nuclear force, g_s, were changed by ±1%, the rate of the triple-alpha reaction would be affected so markedly that the production of biophilic¹ abundances of either carbon or oxygen would be prevented.
Example (i) seems impressive, but becomes much less so when it is recalled that the neutron and the proton share a common structure. About 99% of a nucleon's mass is due to the virtual gluons and virtual quarks which comprise the strong nuclear force. This feature is shared by the neutron and the proton, which differ only in regard to the udd and uud valence quarks which respectively provide the nucleons with their net quantum numbers. Since the u and d quarks in question have masses of just a few MeV, it is no longer particularly surprising that the neutron-proton mass difference is also of this order. In fact this is to be expected. The moral is that there are structural reasons why the neutron and proton masses should be very close. This is not to say that there is no tuning at all, just that it is not so terribly fine as it first appears. It is more indicative to compare m_n − m_p with the mass of the electron or the mass of the u or d quarks. On this scale the tuning is at the level of tens or hundreds of percent, rather than less than 1%. Nevertheless, there is some tuning. For example, the d quark must be heavier than the electron for atomic stability².
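The 0.08% figure quoted in example (i) can be checked directly. The following rough sketch uses standard particle masses (in MeV) and ignores atomic binding energies; it computes the margin by which the neutron evades electron capture on the proton, p + e⁻ → n + ν, expressed as a fraction of the neutron mass.

```python
# Margin by which the neutron evades electron capture, p + e- -> n + nu.
# A neutron lighter by roughly this fraction would make hydrogen unstable.
m_n = 939.565  # neutron mass, MeV
m_p = 938.272  # proton mass, MeV
m_e = 0.511    # electron mass, MeV

margin = m_n - (m_p + m_e)  # MeV by which capture is forbidden
fraction = margin / m_n     # as a fraction of the neutron mass

print(f"margin = {margin:.3f} MeV, i.e. {fraction * 100:.2f}% of m_n")
# prints: margin = 0.782 MeV, i.e. 0.08% of m_n
```

This reproduces the quoted lower edge of the window; the ~1% upper edge additionally involves nuclear binding energies and cannot be obtained from the free particle masses alone.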
In Example (ii), the neutron:proton ratio depends upon T, the temperature when the leptonic reactions which interconvert neutrons and protons are frozen out by cosmic expansion. This freeze-out temperature depends upon the strength of the weak nuclear force, G_F. Because the neutron:proton ratio is exp[−(m_n − m_p)/kT], it is often claimed that it is highly sensitive to changes in the freeze-out time, and hence to the strength of the weak force. Actually, a closer examination shows that if the Fermi constant, G_F, were reduced by an order of magnitude, the universe would still be 18% hydrogen (by mass, or ~50% by number of atoms). This would still support hydrogen burning stars with lives in the order of billions of years, long enough for biological evolution. Reducing G_F by a factor of 100 would still leave the universe with ~14% hydrogen by number of atoms. Admittedly, if the hydrogen abundance were reduced too much this would ultimately prejudice the formation of the first stars, which is believed to rely on a cooling mechanism via molecular hydrogen. Nevertheless, there is no obvious reason to regard as catastrophic a reduction in G_F by somewhat more than a factor of ten.

¹ The term "biophilic" will be used here to refer to a universe which is sufficiently similar to our universe that conventional biochemistry could potentially arise. Biophilic universes are therefore a sub-set of all complex universes (probably a very small sub-set).

² That is, if we make the rather sweeping assumption that m_n − m_p ≈ m_d − m_u. Far more carefully argued constraints on the u, d and s quark masses which produce a "congenial" universe have been discussed recently by Jaffe, Jenkins and Kimchi [31] and by Damour and Donoghue [19]. The term "congenial" is defined in Jaffe et al as universes with quark masses which allow for certain key nuclei to be stable (so as to make organic chemistry possible).
Moreover, the constraint on G_F is only single-sided: it must exceed (say) ~10% of its actual value, but there is no obvious upper bound resulting from these considerations. If G_F were increased, then there would be less helium in the universe. For example, a factor of 4 increase in G_F results in only ~0.2% helium by mass. But this would seem unimportant. Helium appears to play no essential role in the formation of large scale structure or stellar physics³. Although no upper bound on G_F results from these considerations, there are suggestions that Type II supernovae require G_F to lie close to its actual value. This is because crucial aspects of the mechanism of Type II supernovae involve neutrino interactions, i.e., weak-force interactions. The neutrinos seem to be required to interact just weakly enough to escape the core of the collapsed star, but strongly enough to transfer sufficient energy to the mantle to cause the explosion. Unfortunately the quantitative understanding of Type II supernovae is too poor to deduce just how fine tuned G_F must be.

Hence there is a case for considering G_F to be a genuine instance of tuning, but it is not necessarily terribly fine, and may only be single-sided.
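The order-of-magnitude claims above can be illustrated with a deliberately crude freeze-out model. The sketch below assumes the leptonic freeze-out temperature scales as G_F^(−2/3) (from balancing a weak interaction rate ∝ G_F²T⁵ against an expansion rate ∝ T²), takes kT_f ≈ 0.8 MeV in our universe, and neglects free neutron decay between freeze-out and BBN; these are illustrative assumptions, but the results land close to the ~18% (by mass) and ~50% (by number) figures quoted above.

```python
import math

DELTA_M = 1.293  # m_n - m_p, MeV
KT_F0 = 0.8      # assumed leptonic freeze-out temperature in our universe, MeV

def hydrogen_fractions(gf_scale):
    """Crude primordial hydrogen abundance when G_F is multiplied by
    gf_scale. Assumes the freeze-out temperature scales as G_F^(-2/3)
    and neglects free neutron decay between freeze-out and BBN."""
    kt_f = KT_F0 * gf_scale ** (-2.0 / 3.0)
    n_over_p = math.exp(-DELTA_M / kt_f)    # n:p ratio frozen in
    y_he = 2 * n_over_p / (1 + n_over_p)    # helium-4 mass fraction
    x_h = 1 - y_he                          # hydrogen mass fraction
    by_number = x_h / (x_h + y_he / 4)      # hydrogen fraction by atom count
    return x_h, by_number

for scale in (1.0, 0.1, 0.01):
    x_h, by_number = hydrogen_fractions(scale)
    print(f"G_F x {scale}: H ~ {x_h:.0%} by mass, ~ {by_number:.0%} by number")
```

For G_F reduced tenfold this gives roughly 17% hydrogen by mass and ~45% by number, and for a hundredfold reduction ~13% by number, in line with the figures in the text.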
Example (iii), deuteron stability, does appear to provide a genuine instance of fine-tuning, requiring g_s to exceed ~85% of its actual value. Claims are often made that there is also an upper bound on g_s to avoid diproton stability. If g_s were ~10% larger, then the diproton (²₂He) would be a bound state⁴. It has frequently been claimed that this would lead to an all-helium universe. The argument is that all the nucleons would end up as helium during BBN, either via the conventional sequence starting with n + p → ²₁H, or via the diproton⁵, p + p → ²₂He → ²₁H + e⁺ + ν. However, this argument is just wrong. The reason is that, even if the diproton were stable, the rate of its formation via p + p → ²₂He is too slow for any significant number of diprotons to be formed during BBN, [12]. It is true that the nuclear physics of stars would subsequently be very different, but there is no obvious reason why biophilic stars would not be stable, [12].

³ The ppII and ppIII reaction sequences would be slowed by the absence of initial helium, but the ppI sequence is unaffected.

⁴ The diproton is not bound in this universe. This is because the spin-singlet nuclear force is weaker than the spin-triplet nuclear force which binds the deuteron. It is not, as some authors have claimed, due to electrostatic Coulomb repulsion.

⁵ The inverse beta decay which converts the diproton to a deuteron is possible because the binding energy of the deuteron (2.224 MeV) exceeds m_n − m_p + m_e = 1.804 MeV.
There is possibly a different upper bound on g_s. If g_s were increased sufficiently that deuterium were stable even at the higher temperatures prevailing before 1 second, when the leptonic reactions were still active, then the nucleons could escape into the sanctuary of helium-4 before the protons had gained numerical superiority over the neutrons. This would again lead to a universe with little hydrogen. However, rough estimates suggest that g_s would need to be increased by more than a factor of two for the hydrogen abundance to fall to potentially abiophilic levels.

Hence, deuteron stability provides a case for fine tuning of g_s (a lower bound of ~85% of its actual value) but any upper bound arising from BBN is rather generous in magnitude.
Example (iv) concerns the famous Hoyle [30] coincidence. The instability of beryllium-8 (⁸₄Be) means that carbon (¹²₆C) can be produced only by virtue of the subsequent alpha capture reaction ⁸₄Be + ⁴₂He → ¹²₆C being extremely fast due to the existence of a resonance of the carbon nucleus at just the right energy. Moreover, the subsequent burning of all the carbon into oxygen is avoided only by the fortuitous placing of the energy levels of the oxygen nucleus so that resonance is just avoided. Some authors are not impressed by the Hoyle coincidence; for example, Weinberg⁶ [53].
But actually quite elementary arguments based on first order perturbation theory are sufficient to show that Weinberg's objection does not stand up to quantitative scrutiny. The triple alpha resonance energies with respect to their reaction thresholds are highly sensitive to changes in the strength of the nuclear force. A mere 0.4% change in the strength of the nuclear force can produce a change in the ¹²C(0₂⁺) resonance energy of up to 38%, [17,18,42,43,48]. Consideration of the detailed stellar models reported in these same references suggests that a reduction in the ¹²C(0₂⁺) resonance energy of perhaps ~50% will result in a reduction in carbon production of around two orders of magnitude. Alternatively, an increase in the ¹²C(0₂⁺) resonance energy of ~50% will result in a reduction in oxygen production of around two orders of magnitude. These changes in resonance energy would be brought about by a change in the strength of the nuclear force not exceeding +1% or −1% respectively.
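The leverage behind these numbers can be sketched with first order perturbation theory. A fractional change f in the nuclear force shifts the state's absolute energy by roughly f times an effective potential expectation value of tens of MeV, while the resonance energy sits only ~0.38 MeV above the three-alpha threshold. The value V_EFF below is an assumption of this illustration, chosen to match the 0.4% → 38% sensitivity quoted above; the cited references give the model-dependent details.

```python
# Leverage of the nuclear force on the 12C(0_2+) (Hoyle) resonance energy.
# A fractional change f in the strength of the nuclear potential shifts the
# state by roughly f * V_EFF, where V_EFF is an assumed effective difference
# in potential expectation value between the resonant state and the
# three-alpha threshold configuration.
E_R = 0.38    # Hoyle resonance energy above the 3-alpha threshold, MeV
V_EFF = 36.0  # assumed effective potential scale, MeV (model dependent)

f = 0.004     # a 0.4% change in the nuclear force strength
delta_e = f * V_EFF
print(f"shift = {delta_e:.2f} MeV = {delta_e / E_R * 100:.0f}% of the resonance energy")
# prints: shift = 0.14 MeV = 38% of the resonance energy
```

On this linear estimate, shifts of order half the resonance energy require force changes well under 1%, consistent with the ±1% bounds quoted above.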
Consequently it appears that the Hoyle coincidence is impressively fine tuned, requiring changes in the strong force of less than ±1% to challenge the likelihood of conventional biochemistry by serious depletion of either carbon or oxygen abundance. However, this fine-tuning might only be one-sided. The reason is that an increase in the nuclear force of less than 1% will make ⁸Be stable. The triple alpha reaction, via the resonant states ⁸Be and ¹²C(0₂⁺), would then no longer be necessary. Instead a star might accumulate macroscopic quantities of ⁸Be and synthesise subsequent elements in a more conventional manner. Whether the resulting nuclear physics and stellar physics would render this a feasible pathway for biophilic carbon and oxygen production is difficult to judge.

It is concluded that the Hoyle coincidence is an instance of fine tuning, requiring that the nuclear force exceed 99% of its actual strength. It may also need to be less than 101% of its actual value, but this is less clear.

⁶ Weinberg [53] says, "I don't set much store by the famous coincidence emphasised by Hoyle, that there is an excited state of ¹²C with just the right energy to allow carbon production via α-⁸Be reactions in stars. We know that even-even nuclei have states that are well described as composites of α-particles. One such state is the ground state of ⁸Be, which is unstable against fission into two alpha particles. The same α-α potential that produces that sort of unstable state in ⁸Be could naturally be expected to produce an unstable state in ¹²C that is essentially a composite of three alpha particles, and that therefore appears as a low-energy resonance in α-⁸Be reactions. So the existence of this state doesn't seem to me to provide any evidence of fine tuning."
In summary, whilst the tuning of some universal constants may not be as numerically impressive as is sometimes claimed, and might only be single-sided in some cases, nevertheless tuning is evidently a feature of the world.
2.2 Varying Multiple Parameters - Alternative Complex Universes

In section 2.1 we have been guilty of giving the impression that fine tuning requires the affected universal constant to lie within a certain range of values. Indeed many discussions of fine tuning give this impression. But this is quite wrong. In fact, all instances of fine tuning provide relations between two or more parameters. Examples are as follows.

Consider the bound on the neutron mass discussed in section 2.1. Algebraically this is m_p + m_e < m_n < m_p + m_e + ΔB, where ΔB is the difference in binding energy between the nucleus in question and the nucleus obtained by replacing p → n. So the allowed range for the neutron mass depends upon the other masses, which in turn depend upon the quark masses and other constants of the standard model.
Reducing the weak coupling constant, G_F, sufficiently could challenge the preservation of hydrogen during the Big Bang. However, the excess of protons over neutrons at the time of the freeze-out of the leptonic reactions depends upon the product G_F^(2/3) (m_n − m_p), so that a reduction in G_F can be compensated by an increase in the nucleon mass difference. (This is likely to involve a reduction in the neutron lifetime, which also influences the final proportion of hydrogen surviving the Big Bang, but the photon:baryon ratio can be re-tuned to negate that effect if necessary.)
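Assuming the controlling combination really is G_F^(2/3) (m_n − m_p), the required compensation is easy to quantify, as in this sketch:

```python
# If the proton excess at leptonic freeze-out is controlled by the
# combination G_F^(2/3) * (m_n - m_p), then a reduction of G_F by a
# factor k can be offset by scaling the nucleon mass difference up by
# k^(2/3), leaving the combination (and hence the n:p ratio) unchanged.
def compensating_mass_difference(delta_m, k):
    """Mass difference needed when G_F is reduced by a factor k (k > 1)."""
    return delta_m * k ** (2.0 / 3.0)

delta_m = 1.293  # m_n - m_p in this universe, MeV
for k in (10, 100):
    needed = compensating_mass_difference(delta_m, k)
    print(f"G_F / {k}: need m_n - m_p ~ {needed:.1f} MeV")
```

So a tenfold reduction in G_F is offset by m_n − m_p ≈ 6 MeV, illustrating how a constraint binds a combination of parameters rather than any single one.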
The lower bound on the strength of the nuclear force, g_s, to bind the deuteron was stated in section 2.1 as g_s > 0.85 g_s^(actual). But closer inspection reveals that the range of the nuclear force, and the nucleon mass, are also part of this calculation. The combination of parameters which is bounded below is actually g_s² m_n/m_π, where m_π is the pion mass. So the numerical bound on g_s can be changed by varying the nucleon:pion mass ratio.
The stability of large nuclei requires that the quantum of charge is not too great, or else the Coulomb repulsion between the protons will blow the nucleus apart. But a larger quantum of charge can be compensated by also increasing the strength of the nuclear force. This produces an inequality involving both g_s and α, the electromagnetic fine structure constant.
One of the original fine-tunings of Carter [15,16] was the requirement that small stars be convection dominated whilst large stars be radiation dominated. This leads to the coincidence G m_p² / ħc ~ α¹² (m_e/m_p)⁴. This is a statement about the relative strengths of the gravitational and electromagnetic forces and involves several constants.
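Carter's coincidence can be checked numerically from the measured constants. The sketch below evaluates both sides in SI units and finds they agree to within a factor of a few, which is the sense in which the relation holds.

```python
# Numerical check of Carter's convective/radiative star coincidence,
# G m_p^2 / (hbar c) ~ alpha^12 (m_e / m_p)^4, using standard SI values.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.0546e-34    # reduced Planck constant, J s
C = 2.998e8          # speed of light, m/s
M_P = 1.6726e-27     # proton mass, kg
ALPHA = 1 / 137.036  # fine structure constant
ME_OVER_MP = 1 / 1836.15

alpha_g = G * M_P**2 / (HBAR * C)   # gravitational coupling, ~5.9e-39
rhs = ALPHA**12 * ME_OVER_MP**4     # ~2.0e-39

print(f"alpha_G = {alpha_g:.1e}, alpha^12 (m_e/m_p)^4 = {rhs:.1e}")
print(f"ratio = {alpha_g / rhs:.1f}")
# prints: ratio = 2.9  (order unity, i.e. the coincidence)
```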
Since all known examples of parameter sensitivity are relations between two or more constants, it is clear that each such individual relation can at best give a constraint like that illustrated in Figure 1. Note, however, that there will be many constraints like Figure 1 which must be satisfied simultaneously, potentially up to one for each case of fine tuning. The intersection of these separate constraints might restrict individual parameters more narrowly (see Figure 2 of Bousso et al [9]).
Figure 1: Illustrating a logical fallacy: the observation of fine-tuning in parameters c1 and c2 does not imply that they are confined to the red dashed box.
The lesson that can be learnt from this simple observation is that complex universes might result for values of the universal constants which depart greatly from the usually claimed fine-tuned bounds, provided that the parameters remain in the complex region, i.e., between the bounding curves in Figure 1. Radically different, but still complex, universes may exist in these directions in parameter space⁷. Support for this contention is provided by a number of radically different model universes which have been constructed
⁷ And they will all exhibit parameter sensitivity; see section 5.
[Figure 1 annotations: Fine tuning is observed in both the parameters c1 and c2, as indicated by the arrowed lines. However, it is incorrect to conclude that the constants are therefore restricted to the box. Actually, a complex universe might result for any (c1, c2) lying between the two curves. This may include points (c1, c2) which lie far from the values in this universe.]
by Aguirre [4], by Harnik, Kribs and Perez [27], by Adams [1] and by Jaffe, Jenkins and Kimchi [31]. These are summarised in turn.
Aguirre's Cold Big Bang Universe

On the basis of considering the effect of single parameter variations on structure formation, Carr and Rees [13] have argued that the photon:baryon ratio cannot be less than ~10⁶. Similarly, Tegmark and Rees [51] argue that the magnitude of the primordial density fluctuations, Q, is fine tuned to be within an order of magnitude of its value in this universe. Despite these parameter sensitivities, Aguirre [4] has presented a case for a universe potentially capable of supporting life in which the photon:baryon ratio is of order unity, and Q is smaller than its value in this universe by a factor of between a thousand and a million. Aguirre argues that such a cosmology can produce stars and galaxies comparable in size and longevity to our own. As a bonus, a rich chemistry, including carbon, oxygen and nitrogen, can arise within seconds of the Big Bang. The previously claimed single-parameter bounds on the photon:baryon ratio and Q are avoided by varying both at once, and by many orders of magnitude.
The Weakless Universe of Harnik, Kribs and Perez (HKP)

Harnik, Kribs and Perez [27] consider a universe which has no weak nuclear force. This is achieved by HKP by simultaneously varying the parameters of the standard model of particle physics and the cosmological parameters. Section 2.1 discusses how reducing the value of the Fermi constant sufficiently would lead to a universe with insufficient hydrogen to support familiar chemistry. The reason is that a smaller G_F produces an earlier freeze-out of the leptonic reactions. Hence the temperature is higher and the abundances of neutrons and protons are closer to equality. However, we have taken for granted that the neutrons and protons achieve their thermal equilibrium densities. This will only be the case if the weak interaction exists, since this provides the mechanism for their inter-conversion. Thus, the situation is entirely different if the weak interaction is effectively absent in the hadron era. In this case, HKP assume that the relative neutron and proton abundance can be fixed by fiat, as can the photon:baryon ratio.

HKP found that they could contrive a universe with a similar hydrogen:helium ratio as ours, but with about 25% of the hydrogen being deuterium rather than protons. To do so they chose a photon:baryon ratio of 2.5 × 10¹¹, i.e., about a hundred times larger than in our universe. HKP argue that galaxies could still form despite the much reduced visible baryon density, but that the number density of stars in the galaxies would be appropriately reduced. They can claim that stars would form, because they have taken the precaution of making the chemical composition of their universe sufficiently similar to ours, thus ensuring that there would be a cooling mechanism to permit gravitational collapse.
In the HKP universe the initial fusion reaction in stars would be the formation of helium-3 from a proton and a deuteron. Note that HKP have cunningly contrived to have substantial quantities of deuterium formed during BBN, so there is no need for the usual weak-force-mediated deuteron formation reaction from two protons. Since the first stellar reaction in HKP stars is very fast compared with the usual weak-mediated deuteron formation reaction, the core temperature of such stars would be lower. It has to be lower to keep the reaction rate down to a level at which the thermal power does not outstrip the available mechanisms of heat transport away from the core.
The moral of both Aguirre's and HKP's model universes is that by varying more than one parameter at once, and by being bold enough to vary them by many orders of magnitude, it is possible to discover distant regions of parameter space which potentially could support a complex universe with a rich chemistry. The key is varying more than one parameter at once. Consistent with Figure 1, the change in one parameter effectively offsets the change in another. In addition, by making very large changes, the nature of the physics involved changes qualitatively.
Adams' Parametric Survey of Stellar Stability

Adams [1] has considered how common the formation of stars might be in universes with different values for the universal constants. The most important quantities which determine stellar properties are the gravitational constant G, the fine structure constant α, and a composite parameter that determines nuclear reaction rates. Adams uses a simple analytical model to determine the region within this 3-dimensional parameter space which permits stellar stability. Using a parameterisation based on the logarithms of the above constants, Adams concludes that about one quarter of the region defined by either increasing or decreasing G or α by ten orders of magnitude supports the existence of stars. Whilst this cannot easily be translated into a statement about probability, nevertheless the requirement that stars be stable is hardly a strong constraint on the universal constants - a dramatically different conclusion from Smolin's [49]. Yet again, so long as more than one parameter is varied, the universe can evolve complexity (in this case, stars) even for parameter values very different from our own.
The Modified Quark Mass Universe of Jaffe, Jenkins and Kimchi (JJK)

Another set of examples of potentially biophilic alternative universes, obtained by varying more than one parameter, has been offered by Jaffe, Jenkins and Kimchi [31-33]. In these examples the parameters which are varied are the masses of the three lightest quarks, u, d, s, together with the QCD scale, Λ_QCD. JJK consider, for example, a universe in which the proton would be slightly more massive than the neutron. Whilst atomic hydrogen would then be unstable, they argue that deuterium and tritium, as well as some isotopes of carbon and oxygen, could be made stable. It is feasible, therefore, that a rich chemistry could emerge in such a universe. A more radical alternative considered by JJK is to reduce the strange quark mass considerably, so that nuclei become bound states of neutrons and Σ⁻ hyperons rather than protons. Again JJK argue that some isotopes of hydrogen and the surrogates of carbon and oxygen would be stable and would be expected to possess comparable chemistry. What would happen to stellar physics in these universes, and whether the required elements would actually be formed in abundance, is unknown. But these examples again demonstrate the potential for complexity to arise if multiple parameters are varied to a congenial part of parameter space (using the term "congenial" as JJK use it).
It will be demonstrated below that all alternative universes which give rise to complexity, such as those suggested by Aguirre, HKP, Adams and JJK, will inevitably display fine tuning.
3 The Two Distinct Fine Tuning Phenomena

How fine is fine tuning? The examples considered in section 2.1 support the existence of some degree of tuning, but the fineness is more debatable. Biophilic abundances of carbon and oxygen appear to require g_s to be fine tuned to within ±1%, and nuclear stability requires the neutron mass to be fine tuned to within −0.08%/+1%, assuming a fixed proton mass. These cases seem quite "fine", but the latter example is largely, but not completely, explained by the nucleon structure. Klee [35] has pointed out that many of the commonly cited fine tunings are actually not at all fine, and even claimed order-of-magnitude tunings are stretched to cover several orders of magnitude. The survival of primordial hydrogen may be an example, since G_F could accommodate an order of magnitude reduction without hydrogen abundance being too greatly diminished. On the other hand, in a universe in which the gravitational and electrostatic forces between two electrons differ by 42 orders of magnitude, perhaps a mere factor of ten would count as reasonably fine tuned?
Why are we interested in the degree of fineness of the tuning? It is because the finer the tuning, the more remarkable is the coincidence, or so one is tempted to think. But actually the intuitive notion that a very fine tuning translates into a small a priori probability is hard to defend. A number of authors have pointed out that the smallness of the numerical window within which a parameter must lie says nothing at all about its probability (e.g., Manson [38], McGrew et al [40]). Since a probability measure on parameter space is not available, this is hard to dispute.
However, the pragmatic approach of this paper is that examples like those of section 2.1, and others in [2,3,6,7,13-16,20-24,26,28,29,46,47], do illustrate a real tuning phenomenon which requires explanation. The degree of tuning may be fine (a few percent or less) or not-so-fine (perhaps an order of magnitude), but the latter is no less in need of an explanation for being coarse-tuning. Both may be regarded as small parameter windows, in some context. And the fact that neither can strictly be claimed to be improbable is not pertinent. It is not the probability that we seek to explain, but the examples of tuning illustrated in section 2.1 and in [2,3,6,7,13-16,20-24,26,28,29,46,47]. An explanation is required for both fine and not-so-fine tunings. To avoid clumsiness of exposition, we continue to use the phrase "fine tuning" to mean both fine and relatively coarse tunings.
We contend that fine tuning actually consists of two distinct phenomena.

The first phenomenon is the parameter sensitivity of the universe. This is the (apparent) property of the universe that small changes in the parameters of physics produce catastrophic changes in the evolved universe. In particular the complexity of the evolved universe, and hence its ability to support life, would be undermined by small changes in the universal constants (in the pragmatic sense of small changes discussed above). Thus, parameter sensitivity is the claim that the target in parameter space which is compatible with a complex universe is small in some sense. The smallness of this target, if true, is a feature which requires explanation.
The second, and quite distinct, phenomenon is that nature has somehow managed to hit this small target, which we will refer to as fine tuning. The actual constants in our universe have to be fine tuned to coincide with the requirements for a complex outcome. In other words, given that only special values for the parameters will do (i.e., given parameter sensitivity), nature had to contrive to adopt these particular values (i.e., nature is fine tuned).

The present paper is concerned only with the first phenomenon: parameter sensitivity and how it arises.
A great deal has been written about the merits, or otherwise, of some sort of Multiverse as the contrivance by which the small target in parameter space is successfully hit (i.e., fine tuning is achieved). It seems to have gone largely unnoticed that an explanation is also required of the distinct phenomenon of parameter sensitivity. The Multiverse postulate does not even attempt to explain parameter sensitivity, i.e., why the target is small in the first place. So, why is the universe parameter sensitive?
There is a danger of misunderstanding this point. Physicists might argue that every instance of fine tuning (e.g., those listed in section 2) constitutes a demonstration, via physical calculation, that the target is small. They might opine that the question "why parameter sensitivity?" is answered by the totality of such calculations. But our calculations are merely observations that parameter sensitivity appears to prevail in our universe. They do not provide an explanation of why this should be so, i.e., why this feature should be expected.
The point can be illustrated in the following way. Before one looks into the physics of these things, it is not obvious that there could not be complex universes corresponding to the bulk of the volume of parameter space. Take life as an exemplar of complexity and consider the universes which might result if gradual changes were made to the universal constants. We can imagine, without any nonsense being obvious, that the lifeforms of our universe might give way to a continuous spectrum of morphing lifeforms as the physical parameters varied. Eventually radically different lifeforms would emerge, living in a universe whose physics was also radically different. But this description of the consequences of changing the universal constants is precisely what parameter sensitivity claims is not true. But why is this? Specifically, why could this never be true in any universe which evolves complexity? This is the question addressed in the following sections.
We shall argue that parameter sensitivity is a mathematical result of the assumed emergence of complexity, and that it is therefore inevitable in any complex universe; as a consequence, so is fine tuning.
4 How Parameter Sensitivity Relates to the Evolution of Complexity: Toy Models
This section illustrates, with the aid of toy models, the way in which parameter sensitivity arises from the assumption that the universe evolves complexity.

What is meant by a complex universe? This is a difficult question, so it is fortunate that a complete answer is not required for the present purposes. We shall see that conditions which are clearly necessary for the emergence of complexity turn out to be sufficient to imply parameter sensitivity.
In considering the meaning of a complex universe we generally think of the universe as it is now. The living organisms and the ecosystem of planet Earth are the epitome of complexity. However, all this did not emerge fully formed in a single step from the fireball of the Big Bang. Rather it is the current state of (one part of) a universe which has been evolving for 13.7 billion years. The history of the universe is one of increasing complexity [24,41,45]. Thus, the formation of helium nuclei after the first few minutes represents an increase in complexity compared with what preceded it. The same is true of the formation of the first neutral atoms at ~360,000 years, and the first stars or galaxies at some hundreds of millions of years. The gravitational congealing of matter provided the opportunity for complex, orderly structures to arise. Despite their gaseous form, stars have a considerable complexity of structure and evolution. The structure of galaxies is vastly more complex still, acting as they do as stellar nurseries. And the solid astronomical bodies, planets and asteroids, provide the opportunity for great complexity on smaller size scales.
From the point of view of the second law of thermodynamics it initially appears curious that the Big Bang fireball, which is generally assumed to have been in local thermal equilibrium, nevertheless gave rise to a universe which spontaneously produced orderly structures. This comes about because the orderly, and complex, structures occur in regions of gravitational collapse. Such regions have shrugged off their excess entropy, using the vast tracts of almost empty universe as a dumping ground. This is the salient fact: inhomogeneity of the entropy distribution is a necessary condition for the emergence of complexity.
This world was not always complex. It became complex. And because, at the fundamental level, becoming necessarily entails dynamics, the complexity of the world is a product of dynamics. Most especially the dynamics of gravitational collapse and nuclear fusion are the root cause of the possibility of complexity. Both of these processes lead to the reduction of the entropy of (a part of) the baryonic component, the excess entropy being carried away by other, generally lighter, particles, e.g., photons and neutrinos. And when complexity reaches the level of life, it is sustained against the tendency to increase its entropy (decay) by the flux of free energy from its parent star: Egan [25], Lineweaver and Egan [37], Michaelian [41], Wallace [52].
It may be unusual to speak of stellar nuclear reactions as 'dynamics', but the reacting core of a star is comprised of a myriad of individual particle dynamics. So if complexity emerges from dynamics, and parameter sensitivity is defined in reference to complexity, it follows that parameter sensitivity should also be understood as a property of dynamics.
Fortunately thermodynamics, the statistical properties of many-body dynamics, suffices. The key is to recognise that only a small portion of the mass of the universe will end up complex. We do not need to consider the whole observable universe, but just some comoving region, Λ, which is large enough to approximate to a closed system. Within Λ, the sub-system which will end up complex will be called Σ. It is necessarily an open sub-system and may be regarded as a fixed inventory of baryonic matter. It is claimed that the sequence of processes leading to complexity must include steps in which Σ reduces its entropy and hence becomes more ordered. We claim that this is sufficient to produce parameter sensitivity. Note that, in accord with the second law of thermodynamics, the entropy of Λ can only increase. In fact, the irreversible processes involved in forming and sustaining Σ imply that the entropy of Λ will increase.
The relationship between entropy and complexity is not an easy one. It is certainly not the case that unfettered entropy minimisation leads to complexity. Arranging atoms in a perfect crystal structure minimises their entropy but is the antithesis of complexity. On the other hand, the maximal entropy of a gas in thermal equilibrium is also the antithesis of complexity. These observations are consistent with the contention that complexity appears to exist at the boundary between order and disorder (Prigogine [44], Kauffman [34]). Many definitions of complexity have been offered, but there is no universal agreement. Fortunately the only claim we need make is that some of the evolutionary steps, starting from a universe devoid of structure, must involve the reduction of the entropy of Σ. If true complexity is to be the outcome, then such processes will not proceed to minimisation of the entropy of Σ, since, as noted already, Σ would not then be complex. Nor is it necessary for the reduction in the entropy of Σ to be monotonic. A certain number of entropy reducing processes is claimed to be necessary, but certainly not sufficient, for the emergence of the highest degrees of complexity.
Our contention is that this holds for any set of universal constants, and any replacement of the standard model of particle physics, which result in a complex outcome.

A simple toy model will help illustrate a sequence of entropy reducing steps. We deliberately choose a form of particle physics which differs from the standard model in order to emphasise the generality of the argument. However, it reads across to similar processes in our universe.
Let us suppose that complex structures are to be built out of combinations of two types of matter particle: a and b. These particles are assumed to be present in a free state in some primordial, chaotic epoch of the universe. The first step in reducing the entropy of the matter component of the universe may consist of a reaction a + b → c + d, where c is to be regarded as a bound state of a and b. The reaction is assumed to be exothermic. Almost all the rest mass of a and b ends up in c, whilst particle d is relatively light and hence will carry away the bulk of the energy released. So a, b and c are the analogues of baryonic matter, whereas d may be the analogue of a photon or neutrino. The reaction produces a reduction of the entropy of the baryonic matter simply because, if attention is paid to the baryonic components alone, the reaction is a + b → c. So one particle results where there were previously two. Other things being equal, entropy becomes simply particle counting, so this represents a reduction in the baryonic entropy by a factor of about 2.
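The 'particle counting' view of the baryonic entropy can be made concrete with a Sackur–Tetrode-like ideal-gas entropy, S/k = N[ln(n_Q/n) + 5/2], which is proportional to particle number up to a slowly varying logarithm. The numbers below are illustrative assumptions, not taken from the paper:

```python
def ideal_gas_entropy(N, log_nQ_over_n):
    """Sackur-Tetrode-like entropy in units of Boltzmann's constant:
    S/k = N * (ln(n_Q/n) + 5/2), where n_Q/n is the ratio of the
    quantum concentration to the actual concentration."""
    return N * (log_nQ_over_n + 2.5)

# 2N free particles (a and b) versus N bound particles (c),
# with the same illustrative ln(n_Q/n) = 10 in both states.
N = 1_000_000
S_free = ideal_gas_entropy(2 * N, 10.0)
S_bound = ideal_gas_entropy(N, 10.0)
print(S_bound / S_free)  # -> 0.5: the baryonic entropy roughly halves
```

The factor is exactly 1/2 here only because the logarithm was held fixed; in any realistic setting it would shift the ratio somewhat, which is why the text says "about 2".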
Of course the reaction proceeds because it causes an increase in the total entropy of the universe. There is no deficit in the number of all types of particle in the reaction a + b → c + d. Moreover, the energy released leads to an entropy increase. The energy released by the reaction is the binding energy, B_c, of a and b in the bound state c. For the reaction to make net progress in the face of potential thermal dissociation (i.e., despite the reverse reaction c + d → a + b), the binding energy must be large compared with the typical prevailing thermal energy, kT. Hence the energy of the 'photon', d, is much larger than kT. Presuming that these d particles have some mechanism available by which they can come into thermal equilibrium with the surroundings, it follows that after thermalisation the initial energy of the d particle will be spread over ~B_c/kT >> 1 different particles. Increasing the energy of a large number of particles constitutes an entropy increase. In summary, the reduction of the baryonic entropy is bought at the cost of an overall increase in entropy which is borne by the other particles (e.g., particles d).
Any reasonable measure of the strength of the interaction which causes the binding of a and b will be such that the binding energy B_c increases monotonically with the strength. Hence we can use B_c itself as a measure of the strength of this interaction. The reaction will proceed only if the interaction is strong enough to overcome thermal dissociation, which we can write symbolically as B_c >> kT (whilst accepting that there will really be phase space factors and so on which complicate the exact expression). There is nothing in the least surprising about this: the interaction must be strong enough to make the reaction go. But note that B_c >> kT is a constraint on the universal constant, B_c, which measures the strength of the interaction, and this constraint is necessary in order to achieve a reduction of the baryonic entropy via a + b → c + d. This is an example of how entropy reduction of Σ results in a constraint on the universal constants, and ultimately parameter sensitivity.
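The competition between binding and thermal dissociation can be caricatured by a Saha-like two-state equilibrium: dissociation into a + b is favoured by a large statistical (photon-rich) factor η, while the bound state c is favoured by the Boltzmann weight exp(B_c/kT). Both the functional form and the value η = 10^6 below are assumptions made for illustration, not part of the paper's model:

```python
import math

def bound_fraction(Bc_over_kT, eta=1e6):
    """Toy Saha-like equilibrium for a + b <-> c + d: dissociation is
    favoured by a large statistical factor eta, binding by the
    Boltzmann weight exp(Bc/kT).  Returns the equilibrium fraction
    of the baryons locked up in the bound state c."""
    return 1.0 / (1.0 + eta * math.exp(-Bc_over_kT))

# The bound fraction switches on over a narrow range of Bc/kT
# around ln(eta) ~ 14; below that, thermal dissociation wins.
for r in (5.0, 10.0, math.log(1e6), 20.0, 30.0):
    print(f"Bc/kT = {r:5.1f}  ->  bound fraction {bound_fraction(r):.3f}")
```

The sharp switch-on is the point: "B_c >> kT" is a genuine threshold on a universal constant, not a gentle preference.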
Our toy model has not yet achieved any significant complexity, and could not be expected to do so in a single step. A succession of entropy reducing steps of differing kinds is to be expected, and a complex outcome must at least include a great diversity of compound particles. So we suppose that subsequent reactions can occur which result in compound particles of the form a_n c_m. We may write this as na + mc → a_n c_m + Ne, it being understood that this is shorthand for a sequence of two-body reactions. There are N light particles, e, which carry away the bulk of the binding energy released in forming the composite particle a_n c_m. For the same reasons as before, such a reaction involves an overall increase in entropy, as it must, but a reduction in the baryonic entropy because n + m particles are replaced by just one compound baryonic particle. Also for the same reason as before, the interaction which binds the compound particles a_n c_m must have a certain minimum strength to overcome thermal dissociation (the reverse reaction).
However, there is now an important additional feature. Reactions of the form na + mc → a_n c_m + Ne can occur only if the initial reactions a + b → c + d do not consume all the a particles. But this leads to an upper bound on the strength of the interaction measured by B_c, in addition to its lower bound established previously. This can be seen as follows.
Firstly, let us assume that a + b → c + d is far faster than na + mc → a_n c_m + Ne. If a + b → c + d were to proceed to completion, reactions na + mc → a_n c_m + Ne could not occur. For na + mc → a_n c_m + Ne to occur, there would need to be some means of terminating the a + b → c + d reactions whilst some a particles remained. This may arise due to a + b → c + d being frozen-out by falling temperature or by cosmic expansion. Or it may be that some other reactions are also occurring which result in copious production of d particles. A very large density of d particles will favour the reverse reaction c + d → a + b and might lead to a dynamic equilibrium at some non-zero density of a particles. Whatever the mechanism, a non-zero density of a particles can result only if the strength of the interaction, B_c, is insufficient to drive the reaction against the countervailing effects. For example, a + b → c + d would be frozen-out by cosmic expansion if the reaction rate falls below the universal expansion rate (i.e., the Hubble parameter). But the reaction rate will increase if the interaction strength, B_c, is increased. So freeze-out whilst a reasonable abundance of a particles remains requires that the interaction strength, B_c, be less than some bound. Too large a value for B_c would result in a late freeze-out, when the a particle density had already fallen too far.
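Freeze-out can be sketched with a single rate equation: the comoving abundance Y of a particles is depleted at a rate proportional to the interaction strength, while expansion dilutes the density and eventually shuts the reaction off. The rate law, the t^(-3/2) dilution, and the constants here are all invented for illustration:

```python
import math

def residual_a(strength, t0=1.0, t1=1e6, steps=20000):
    """Integrate dY/dt = -strength * t**-1.5 * Y**2 for the comoving
    a-particle fraction Y (Euler steps, uniform in log time).  The
    t**-1.5 factor mimics radiation-era dilution; once it has fallen
    far enough the reaction freezes out and Y stops changing."""
    Y = 1.0
    dlt = math.log(t1 / t0) / steps
    for i in range(steps):
        t = t0 * math.exp(i * dlt)
        Y = max(Y - strength * t ** -1.5 * Y * Y * (t * dlt), 0.0)
    return Y

# Stronger interaction -> later freeze-out -> fewer surviving a
# particles (for this rate law, Y -> ~1/(1 + 2*strength)):
for s in (0.1, 1.0, 10.0, 100.0):
    print(f"strength {s:6.1f}  ->  surviving a fraction {residual_a(s):.4f}")
```

Requiring a non-negligible surviving a fraction is therefore an upper bound on the strength, complementing the lower bound from thermal dissociation.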
Secondly, what happens if we assume that a + b → c + d is far slower than na + mc → a_n c_m + Ne? In this case the burden of achieving a balance of compound particles a_n c_m shifts to the relative rates of the various reactions implicit in the sequence leading to na + mc → a_n c_m + Ne. We shall see in the toy model below that this also requires the interaction strength, B_c, to be bounded both above and below.
And finally, what if we assume that the rates of a + b → c + d and na + mc → a_n c_m + Ne are comparable? Well, in that case, we have assumed fine tuning from the start, namely that the strength of the interaction driving the first reaction is closely matched by the strength of the interaction driving the second.
So there is an upper bound to B_c as well as the lower bound established previously. Only if the strength of the force is bracketed between a lower and an upper bound can both reactions a + b → c + d and na + mc → a_n c_m + Ne actually happen. This is the origin of parameter sensitivity, as illustrated by Figure 1 for this example.
The point is that parameter sensitivity (in the sense of a parameter being bounded both above and below) is necessary in order that the baryonic matter can undergo sequential entropy reductions through both a + b → c + d and na + mc → a_n c_m + Ne. The upper and lower bounds are both required in order to produce the compound particles a_n c_m. The prize is to achieve a rich variety of suitable chemistry, not merely a monoculture of a single nuclear variety.
In our universe, nucleosynthesis in stars enriches the interstellar medium (ISM) with a very broad range of nuclei in several ways. Some lighter nuclei are ejected as the star evolves, in the form of stellar winds and various instability phenomena. In fully evolved stars, elements up to iron are preserved by the shell structure in which there is a gradation of temperature, pressure and density conditions. Finally, elements beyond iron are made by the complex physics occurring during supernovae. It is not trivial to ensure a balance of this kind in the abundance of the products of nucleosynthesis.
In our universe, the synthesis of the chemical elements inside stars is immensely complicated. So to illustrate how the production of a balance of elements requires fine tuning we opt for a simple toy model. We return to Gamow's original idea in which the chemical elements are made during BBN. To permit this to happen we change particle physics and nuclear physics drastically. There is no neutron in this universe; the only nucleon is the proton. Protons in this universe can form bound states of two, three, four, and up to ten nucleons via some analogue of the strong nuclear force. The time-temperature relation is assumed equal to that in our universe. The nuclear reaction rates are contrived to give the more highly charged nuclei a reasonable chance of being formed, despite the falling temperature and density. This means radically altering the Coulomb barrier (by fiat; there is no underlying theory here). The strength of the nuclear force is measured by the binding energy per nucleon, B, which is assumed to be the same for all nuclei.
The absolute number density of nucleons is found from an assumed photon:baryon ratio, η, and the usual black-body photon density in terms of temperature. The absolute number densities are also adjusted due to cosmic expansion following the usual time dependence. The reaction network consists of 25 different reactions. These reactions are eventually frozen-out by cosmic expansion. The parameters B and η are the universal constants whose fine tuning we wish to demonstrate.

The specific algebraic and numerical assumptions of the model are given in the Appendix, and are essentially arbitrary. It is rather obvious what the outcome will be. If the fusion reactions are sufficiently fast compared with the cosmic expansion rate, the reaction sequence will proceed to completion before freeze-out. The universe will then contain only the highest mass nuclei and no lighter nuclei. This will occur if either the nuclear force is sufficiently strong (large B) or the nucleon density is sufficiently high (small η). Conversely, if the fusion reactions are sufficiently slow compared with the cosmic expansion rate, even the first compound nucleus (pp) will not have time to form before freeze-out. The universe will be all hydrogen (perhaps permanently so if this universe remains star-free). This will occur if either the nuclear force is sufficiently weak (small B) or the nucleon density is sufficiently low (large η).
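A drastically simplified stand-in for such a network (far cruder than the 25-reaction model of the Appendix: the rate normalisation, the expansion law, and the freeze-out are all invented here) already reproduces the three regimes, all hydrogen for weak binding, a spread of nuclei inside a window, and heavy nuclei only for strong binding:

```python
import math

NMAX = 10  # the toy universe's nuclei hold up to ten protons

def yields(B, eta, t0=1.0, t1=1e4, steps=40000):
    """Evolve number fractions y[k] of k-nucleon nuclei under all
    fusions A_j + A_k -> A_(j+k) with j + k <= NMAX (25 reactions
    for NMAX = 10).  The rate scales with the binding energy B and
    inversely with the photon:nucleon ratio eta, and dilutes as
    t**-1.5 so that expansion freezes the network out.  Returns
    the final number fractions of A_1 .. A_10."""
    y = [0.0] * (NMAX + 1)
    y[1] = 1.0  # pure 'hydrogen' initially
    pairs = [(j, k) for j in range(1, NMAX) for k in range(j, NMAX)
             if j + k <= NMAX]  # the 25 allowed fusions
    dlt = math.log(t1 / t0) / steps
    for i in range(steps):
        t = t0 * math.exp(i * dlt)
        rate = B * (1e6 / eta) * t ** -1.5  # normalisation is arbitrary
        dt = t * dlt
        for j, k in pairs:
            dx = min(rate * y[j] * y[k] * dt, 0.45 * y[j], 0.45 * y[k])
            y[j] -= dx  # for j == k this correctly removes two A_j
            y[k] -= dx
            y[j + k] += dx
    total = sum(y[1:])
    return [v / total for v in y[1:]]

light = yields(B=0.001, eta=1e6)  # weak binding: stays hydrogen
mid   = yields(B=1.0,   eta=1e6)  # the window: a spread of nuclei
heavy = yields(B=250.0, eta=1e6)  # strong binding: heaviest dominate
print(f"hydrogen fraction: weak {light[0]:.3f}, "
      f"window {mid[0]:.3f}, strong {heavy[0]:.3f}")
```

With the arbitrary normalisation chosen above, the diverse regime happens to sit near B = 1 and η = 10^6; nothing should be read into the specific numbers, only into the existence of the bounded window.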
Figure 2: Relative abundance of nuclei for B = 1 MeV: comparison of different photon:nucleon ratios, η = 3x10^4 to 2x10^7. [Plot: fraction of total nuclei versus atomic number, 1 to 10.]

Figure 3: Relative abundance of nuclei for η = 10^6: comparison of different binding energies, B = 0.1 to 30 MeV. [Plot: fraction of nuclei versus atomic number, 1 to 10.]

Figure 4: Ranges of B and η (between the lines) producing chemical diversity. [Log-log plot: lower and upper bounds of B (MeV) versus photon:nucleon ratio, 10^4 to 10^9.]
The numerical results of the model are in accord with these expectations, Figures 2 and 3. The optimal balance of light and heavy nuclei is obtained for B = 1 MeV and η = 10^6. Suppose a chemically diverse universe has at least a fraction 10^-4 of each element. To produce such a chemically diverse universe it is necessary that B and η fall within the range shown in Figure 4. This is the realization of Figure 1 for this model. Outside these bounds the universe would be virtually all hydrogen or virtually all 'neon' (Z = 10). The universal constants must therefore be fine tuned to produce a complex outcome, i.e., a diversity of chemical elements. The tuning is not actually terribly fine, but then many of the instances of tuning in our universe are not so terribly fine either.
Suppose we seek to circumvent this conclusion. We could make the fusion reactions very rapid compared to cosmic expansion, but protect the lighter nuclei from being consumed in some way. For example, we could assume the presence of some other type of particle which could combine with protons or nuclei such that, after combination, the resulting compound object was immune to further fusion reactions. Call this reaction X. If reaction X is too fast compared with the fusion reactions then we will end up with an all-hydrogen universe again. Conversely, if reaction X is too slow we will end up with a universe containing only the heaviest nuclei. So this contrivance fails: we will again require a fine tuning relating B and η to whatever universal constant controls reaction X.
It will be apparent from these examples that the connection between parameter sensitivity and a complex outcome is quite elementary in nature. On the other hand, a few specific examples cannot establish the general truth of the assertion that parameter sensitivity is always a concomitant property of a universe which evolves complexity. This is addressed next.
5 The General Dynamical Explanation of Parameter Sensitivity
We wish to demonstrate that any sequence of entropy reductions of some subsystem, Σ, must always require tunings of the universal parameters. Such a demonstration is required for any system of the sort considered in the preceding examples, in which complexity is achieved by binding fundamental particles in ever more intricate ways. But the proof should also accommodate any other form of entropy reduction, not just that arising from particle binding. For example, volume contraction due to gravitational collapse is a particularly important cause of entropy reduction in our universe.

Actually the demonstration should be more general still, since it is conceivable that complexity could arise in quite a different manner. For example, structure might not reside in binding particles (i.e., in identifying their spatial degrees of freedom) but in a correlation or coherence between spatially separated particles. Also, the fundamental degrees of freedom may not be those of particles, but of fields. Most physicists believe that the number of field degrees of freedom is actually finite rather than a continuum, limited to a spatial resolution at around the Planck scale. It is even possible (according to the holographic hypothesis) that the resulting spatial lattice is only two dimensional, rather than three dimensional [11,50]. It is desirable to formulate the argument in terms which are sufficiently general to embrace all these possibilities and more.
The natural arena for a generalised exposition is phase space. Phase space comprises the totality of degrees of freedom of the system in question, both generalised coordinates (or fields), q_i, and generalised momenta, p_i. A point in phase space specifies a unique microstate of the system. Studies of bulk behaviour consider, not individual microstates, but macrostates corresponding to large numbers of possible microstates, and hence to large volumes of phase space. For example, the pressure and temperature of a gas may specify its macrostate, but there are many corresponding microstates, each of which uniquely specifies the position and velocity of every molecule at a given time. The volume of phase space within which a system might lie is a measure of the number of possible microstates, and hence is related to the entropy of the macrostate.
The evolution of complexity is thus related to the variation over time of the phase space volume occupied by Σ. Specifically, it is claimed that there are necessarily physical processes which reduce the phase space volume of Σ, since this is equivalent to reduction of the entropy of Σ.
Now the development over time of a given microstate is described by a trajectory through phase space. For deterministic physics, this trajectory is uniquely determined by the initial state (the starting point in phase space) plus the physical laws and the values of the universal constants. It follows that the development of the extended phase space region corresponding to a macrostate is also uniquely determined by the starting region, the laws of physics and the universal constants, c_j. In particular this applies to Σ. It is claimed that there must be processes which sequentially reduce the volume of the phase space region occupied by Σ. Suppose these processes are labelled 1, 2, 3, ... in chronological order. The phase space volumes of Σ at these times are V_1 > V_2 > V_3 > ... But each volume is calculable from the initial state for a given set of universal constants, c_j. Taking the initial state and the laws of physics as understood, we can thus write V_1(c_j) > V_2(c_j) > V_3(c_j) > ... This is meant to emphasise that each phase space volume can be calculated in terms of the assumed universal constants. This sequence of inequalities between calculable functions of c_j therefore implies constraints upon the universal constants themselves. This is the origin of parameter sensitivity.
It would not be reasonable to expect complexity to arise from a single physical process. A sequence of processes is to be expected. The particular sub-set of universal constants which is important will differ from one process to another. So the sequence of processes gives rise to different tunings, possibly of different sets of parameters.
It is important to recognise that the dynamics determines not just the change of volume of the phase space region representing Σ but also its position within phase space. This is crucial to the next process in the sequence. As we saw in the toy model above, moving to a phase space region with no a particles may prevent further entropy reduction. Hence, the path taken by Σ through phase space is crucial. Achieving sequential entropy reductions is a strong constraint upon the global path, not just upon the current process. For example, the entropic benefit to Σ of preserving hydrogen in the first minutes after the Big Bang is realised only a billion years later with the formation of hydrocarbon chemistry.
The phase space algebra can be made more explicit, and this has the advantage of displaying the crucial path dependence more clearly. For any deterministic physics the phase space trajectory x_k(t) is uniquely defined by the initial microstate, x_k(0). This statement can be written in differential form as,

dx_i/dt = f_i(x_k; c_j)    (1)

The functions f_i specify the physics which determines the system's evolution, and depend both upon the current state, x_k(t), and upon the universal constants, c_j.
We assume that the phase space is sufficiently generally formulated that it can address particle reactions. Thus, if a + b → c + d occurs, the phase space includes regions representing both a + b and c + d.
Suppose, for convenience of exposition, that significant physical events occur at the discrete times t_1, t_2, t_3, ..., when the region of phase space occupied by Σ is Ω_1, Ω_2, Ω_3, ... respectively. Consider a small element of volume of Ω_L at time t_L and location x_k, dV = Π_i dx_i. A time δt later the evolved system effectively defines a new set of coordinates x'_i = x_i(t + δt) = x_i(t) + f_i δt. The evolved volume dV' = Π_i dx'_i is related to the initial volume by the Jacobian determinant:

dV' = det(∂x'_i/∂x_j) dV

And since ∂x'_i/∂x_j = δ_ij + δt ∂f_i/∂x_j, we find that det(∂x'_i/∂x_j) = 1 + δt Σ_i ∂f_i/∂x_i to first order in δt. Re-arranging and integrating over the region Ω_L yields the evolution of the phase space volume, V_L, corresponding to Σ, to be,

dV_L/dt = ∫_{Ω_L} Σ_i (∂f_i/∂x_i) dV    (2)
In the case of a conservative system defined by Hamiltonian mechanics, and if Σ were replaced by the approximately closed system Λ, the RHS of (2) would be identically zero by virtue of Hamilton's equations (see footnote 8). Hence, the phase space volume does not change for conservative Hamiltonian systems (Liouville's theorem).

But, by hypothesis, Σ is a dissipative sub-system, necessarily ejecting energy and entropy into its surroundings, so that its phase space volume is reducing: dV_L/dt < 0. This is the familiar behaviour of dissipative dynamic systems, whose phase space volume tends to shrink asymptotically onto some attractor, typically of lower dimension than the phase space (albeit probably fractal). Thus, the phase space volume of Σ might shrink asymptotically to zero. In fact, the dimension of the region occupied by Σ will reduce each time particles bind to form composite baryonic particles. However, an equation like (2) will continue to hold with the volume reinterpreted as being of reduced dimension.
8. The velocity field f_i in phase space has zero divergence because of Hamilton's equations.
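Equation (2) can be checked numerically on the simplest dissipative system, a damped oscillator dx/dt = p, dp/dt = -x - γp, for which Σ_i ∂f_i/∂x_i = -γ everywhere; any phase-space area should therefore contract as exp(-γt), and setting γ = 0 recovers Liouville's theorem. The system and numbers are my illustration, not the paper's:

```python
import math

def volume_factor(gamma, T=5.0, steps=5000):
    """Evolve the two basis tangent vectors of the damped-oscillator
    flow (dx/dt, dp/dt) = (p, -x - gamma*p) with RK4, and return the
    determinant of the resulting fundamental matrix: the factor by
    which a small phase-space area has changed after time T."""
    def f(v):
        x, p = v
        return (p, -x - gamma * p)

    def rk4_step(v, h):
        k1 = f(v)
        k2 = f((v[0] + 0.5 * h * k1[0], v[1] + 0.5 * h * k1[1]))
        k3 = f((v[0] + 0.5 * h * k2[0], v[1] + 0.5 * h * k2[1]))
        k4 = f((v[0] + h * k3[0], v[1] + h * k3[1]))
        return (v[0] + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
                v[1] + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

    u, w = (1.0, 0.0), (0.0, 1.0)  # columns of the fundamental matrix
    h = T / steps
    for _ in range(steps):
        u, w = rk4_step(u, h), rk4_step(w, h)
    return u[0] * w[1] - u[1] * w[0]

print(volume_factor(0.0))  # Liouville: area preserved, -> 1.0
print(volume_factor(0.5))  # dissipative: -> exp(-0.5 * 5) ~ 0.082
```

Because the system is linear, the tangent vectors coincide with trajectories of the flow itself, so the determinant is exactly the area-change factor of equation (2).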
The entropy of Σ need not reduce monotonically. It is sufficient that there are times when it does so. By definition, these are the times t_1, t_2, t_3, ... Consequently our thesis is that,

dV_1/dt = ∫_{Ω_1(c_j)} Σ_i (∂f_i(c_j)/∂x_i) dV < 0    (3a)

dV_2/dt = ∫_{Ω_2(c_j)} Σ_i (∂f_i(c_j)/∂x_i) dV < 0    (3b)

dV_3/dt = ∫_{Ω_3(c_j)} Σ_i (∂f_i(c_j)/∂x_i) dV < 0, etc.    (3c)
(3c)
In (3a
-
c) the dependencies upon the universal constants,
j
c
, have been explicitly
displayed. Not only do the p
hase space velocity functions,
i
f
, depend upon the
universal
constants, but so do
the changing phase space region
s,
L,
occupied by
over which the
integrals are carried out
. The path through phase space taken by the evolving region L
depends upon the preceding region from which it derives,
1L
, and also upon the
physics encoded in the functions
i
f
, and also upon the universal constan
ts,
j
c
.
For a conservative Hamiltonian system, (3a-c) would be replaced by equalities. Moreover these equalities would be algebraic identities. But (3a-c) are not algebraic identities. They are dynamic constraints on any subsystem whose entropy is successively reducing. And note that all the dynamic variables, x_i, vanish from (3a-c) by virtue of the volume integrations. Hence, (3a-c) are actually constraints on the universal constants, c_j, which must be fulfilled if Σ is to have sequential reductions in entropy. These constraints constitute fine, or possibly not-so-fine, tuning. These relations are the algebraic embodiment of Figure 1.
The importance of the path, Ω_L, taken through phase space must be emphasised. The path determines the potential for continuing entropy reduction in later steps. The opportunity for entropy reduction in, say, step 2, would be lost if step 1 resulted in an Ω_2 which is not conducive to further entropy reduction. A phase space path which permits continuing entropy reductions over many sequential processes might appear highly contrived. Indeed, it may be highly contrived. However, it is not our concern here to explain how complexity comes about (i.e., how such a path may arise spontaneously). It is sufficient to note that complexity requires this, and that relations like (3a-c) must follow. This is the origin of parameter sensitivity.
There are a number of potential challenges to the generality of the above argument. Firstly, it is possible to question whether (3a-c) do actually constrain the parameters c_j. After all, if we replace Σ with the closed system Λ, the second law of thermodynamics tells us that,

∫_Λ Σ_i (∂f_i(c_j)/∂x_i) dV ≥ 0    (4)

and this must hold for any values of the parameters c_j. So, despite appearances, (4) is not a constraint upon c_j. Why should (3a-c) not also be of this type? To refute this, consider a set of parameters, c_j^0, which correspond to switching off all inelastic interactions. Under suitable circumstances the sub-system Σ can now be considered as effectively closed. It is no longer exporting its entropy to Λ. In which case we have,

∫_{Ω_L} Σ_i (∂f_i(c_j^0)/∂x_i) dV = 0    (5)

But comparison of (3a-c) with (5) shows that (3a-c) clearly do constrain the parameters c_j, since there is a set of values, c_j^0, for which (3a-c) are not true.
The second objection is that it is not entirely clear, in the general case, what is meant by the distinct 'processes', enumerated above as 1, 2, 3, ... At the risk of being tautological, these relate to the discrete fine tunings. The limiting curve in parameter space which is defined by the inequalities (3a-c) may be identified with what Bousso et al [9] refer to as a 'catastrophic boundary'.
The third objection may be that not everything is 'dynamics'. But actually, everything that involves change is dynamics. It may be unfamiliar to formulate some physical processes in terms of dynamics, simply because the dynamic perspective may not be helpful in performing calculations. But the physics underlying any change in any system must be dynamical when considered at the microstate level. The formulation in terms of phase space is a convenient means of linking the entropy reduction to the evolution as determined by the universal constants. It does not matter that it may not be a practical means of carrying out a calculation.
The fourth objection is the assumption of deterministic (classical) dynamics. This does not address the potential indeterminism due to quantum mechanical behaviour. However, we are concerned with physics on an astronomical scale, with correspondingly large phase space volumes. It is true that quantum calculations are essential in providing much of the input to astrophysical calculations (e.g., reaction rates). However, there is no compelling reason to be concerned that this translates into any significant indeterminism at the scale of interest. As far as bulk behaviour is concerned, and providing we do not attempt to address what happens at t = 0, it is likely that the classical formulation is sufficient to establish the principle.
The final objection is that we have been cavalier in comparing phase space volumes when the dimension of the sub-space occupied by the sub-system is reducing. In a true continuum this might indeed be a problem. But really, talking about phase space volumes is just a shorthand for counting the number of degrees of freedom, which are all ultimately integers, whatever the dimension may be in the continuum description.
6 Interpretation of Equations (3a,b,c) in Our Universe

There is a danger that the very generality of equations (3a,b,c) obscures what is actually a simple message: every instance of fine, or not-so-fine, tuning in our universe is actually an instance of entropy reduction.
Consider the examples of section 2.1. The fine-tuned nucleon mass range (or alternatively, the coarsely tuned quark mass range) relates to nuclear stability, and hence to the potential for entropy reduction via nuclear fusion. The same applies to the lower bound on the strong nuclear coupling of g_s ≥ 0.85. The case of the Hoyle coincidence (the production of biophilic abundances of carbon and oxygen) is different. In this case the entropy reduction does not occur at the time of stellar nucleosynthesis. Indeed, the star would reduce its own entropy more if a long sequence of alpha capture reactions occurred, producing some very heavy nuclei with little remnant carbon and oxygen. The entropic benefit of biophilic production of carbon and oxygen, to the sub-system only, occurs later with the formation of organic molecules in the giant gas clouds and with planetary formation and subsequent planetary chemistry. Similarly, the constraint on the Fermi constant arising from survival of hydrogen during BBN does not reduce entropy at that time. The entropy would be lower if the result of BBN were all helium. The benefit to the sub-system is realised only with the formation of main sequence stars and with the occurrence of hydrocarbon chemistry after a billion years or so. So the complexity-benefit of one of equations (3a,b,c) may be long delayed, and is conferred on the sub-system alone.
Whatever examples of fine (or coarse) tuning are chosen, the relation to entropy reduction can always be made. This is simply because all tunings are identified as being required to permit the formation of structure and complexity, and this implies entropy reduction of the open sub-system in question. But the complexity-benefit might occur much later than the process giving rise to the numerical fine tuning. For example, Carter [15,16] has argued that planetary formation is facilitated due to typical stars lying near the boundary of convection and radiation dominance. This gives rise to the tuning

Gm_p^2/ħc ~ α^12 (m_e/m_p)^4

which, whilst derived from stellar physics, does not relate to increased complexity in the stars concerned. Instead the complexity benefit occurs only later upon planetary formation. Planets are the sub-system in this case.
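The orders of magnitude in Carter's relation can be checked directly. The sketch below uses standard SI values for the constants (an illustration, not figures taken from the paper) and confirms that both sides are of order 10^-39:

```python
# Numerical check of Carter's convective/radiative star condition:
# G*m_p^2/(hbar*c) should be of the same order as alpha^12 * (m_e/m_p)^4.
# Standard SI constant values are used (not taken from the paper).

G     = 6.674e-11      # m^3 kg^-1 s^-2
hbar  = 1.0546e-34     # J s
c     = 2.998e8        # m s^-1
m_p   = 1.6726e-27     # kg
m_e   = 9.1094e-31     # kg
alpha = 7.2974e-3      # fine structure constant

alpha_G = G * m_p**2 / (hbar * c)     # gravitational fine structure constant
carter  = alpha**12 * (m_e / m_p)**4

print(alpha_G)           # ~5.9e-39
print(carter)            # ~2.0e-39
print(alpha_G / carter)  # of order unity, as the tuning relation asserts
```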
A further example is that, in our universe, there must be stable stars to forge the chemical elements beyond lithium. The formation of stars requires a cooling mechanism, this being the mechanism by which entropy is exported away from the collapsing material. Assuming that cooling is dominated by photon emission, this must require some lower bound on the fine structure constant, α. Despite the fact that star formation is difficult to analyse algebraically, it is clear that cooling must require a lower bound on α, since there would be no cooling in the limit α → 0. The lower bound on α is required both to bring about the immediate entropy reduction of the collapsing gas cloud, and also to permit the formation of the chemical elements thereafter. The complexity-benefit of the formation of the chemical elements occurs later still.
Where the above examples involve a delayed entropy reduction in the sub-system, this emphasises the importance of the path taken through phase space in the general formulation of section 5. Thus, whilst equations (3a,b,c) express our thesis in general form, the interpretation for any specific model universe is very simple. Indeed it is essentially merely a re-statement of the manner in which the fine tunings are usually derived.
Finally, what about fine, as opposed to not-very-fine, tuning? It is not clear whether equations (3a,b,c) produce particularly fine tuning. But this does not matter. Equations (3a,b,c) are precisely the degree of tuning required to permit the associated complexity to arise. Whether this tuning happens to be fine or coarse is unimportant. Consequently, referring back to section 3, we now see that it is inappropriate to be overly concerned about the actual degree of fineness of the tunings evident in our universe. They are as fine as they need to be.
7 Relation of these Observations to Other Issues

What is and what is not explained

The hypothesis of this paper does not address how the universe becomes fine tuned. An explanation of parameter sensitivity alone has been offered, i.e., why the target in parameter space is necessarily small. No comment is made regarding how the universe contrives to hit this small target. The existing scientific responses to this question have involved one of the many variants of the Multiverse. The present work leaves these offered explanations for fine tuning untouched, neither supporting nor refuting them. However one thing is achieved: it is inappropriate to be surprised by fine tuning. Fine tuning is inevitable given the existence of complexity.
The Cosmological Constant

Not all apparent fine tunings need necessarily arise in the manner proposed in section 5. At one time the extreme flatness (Ω ≈ 1) required at early epochs in order that the universe be even approximately flat now might have been regarded as fine tuning. But its very fineness spoke of a mechanistic explanation, such as is provided by inflation theory. It is possible that a mechanistic explanation may ultimately be forthcoming for the apparent extreme tuning of the cosmological constant, Λ, although a revolution in fundamental physics is probably required to achieve it, Albrecht et al [5]. Alternatively, the fine tuning of Λ may be an observer selection effect, Martel et al [39], Weinberg [53], Lineweaver and Egan [36], Egan [25]. The present work offers no enlightenment on this issue.
Relationship to the Causal Entropic Principle

The Causal Entropic Principle has been proposed by Bousso et al [8-10] as a criterion for the abundance of observers in anthropic predictions. These authors define the causal entropic principle thus: 'the principle asserts that physical parameters are most likely to be found in the range of values for which the total entropy production within a causally connected region is maximized'. They argue that 'the formation of any complex structure (and thus, of observers in particular), is necessarily accompanied by an increase in the entropy of the environment. Thus, entropy production is a necessary condition for the existence of observers. In a region where no entropy is produced, no observers can form.' The entropic principle is the assumption that the number of observers is proportional, on average, to the amount of entropy produced. Bousso et al are careful to emphasise that it is the matter entropy only to which they refer. The Bekenstein-Hawking entropy associated with black hole or cosmological horizons should not be included.
The causal entropic principle is therefore closely related to the arguments of this paper, though not the same. We have not claimed that entropy is maximised within any particular region. Our assumption is weaker, namely that the sub-system reduces its entropy. This irreversible process necessarily leads to an increase in the entropy of its environment. This is consistent with the spirit of the causal entropic principle, but attaining a maximum has not been assumed here.
8 Conclusion

Parameter sensitivity and fine tuning are two separate phenomena. The problem of elucidating why the universe is parameter sensitive has not previously been addressed, despite much effort expended on fine tuning.

The thesis has been presented that parameter sensitivity arises as a natural consequence of the mathematics of dynamical systems with complex outcomes. The argument is as follows: the emergence of complexity in a sub-system requires a sequence of entropy reductions of that sub-system, which can be interpreted as a sequence of reducing phase space volumes. This leads, via a very general formulation of system evolution, to constraints on the set of universal constants. This is the origin of parameter sensitivity. Hence, if a universe contains complex parts then parameter sensitivity is inevitably observed, and therefore fine tuning will always be required to produce a complex world.
In other words, complex universes are generically fine tuned. The fine tuning of our universe should therefore not elicit surprise. Moreover, any alternative universe which gives rise to complexity, such as the model universes of Aguirre [4], Harnik, Kribs and Perez [27], Adams [1] or Jaffe, Jenkins and Kimchi [31-33], would inevitably display fine tuning. This answers the question raised earlier: why could varying the universal constants never give rise to a continuous morphing of lifeforms in any universe? It is because any complex universe will be fine tuned.

This paper has not addressed how the universe achieves its fine tuning, only that fine tuning is an inevitable requirement for a complex outcome. Consequently the extent to which postulating a Multiverse may be motivated is unchanged by the present work.
Appendix: Details of the Toy Model for Gamow-style BBN Nucleosynthesis

The model does not bear any relationship to the physics of our universe. It is intended only as an illustration of how fine tuning arises if a balance of chemical elements is to be the outcome. A nucleus of N nucleons is assumed to have a binding energy of (N-1)B, so that any reaction between a nucleus of N nucleons and a nucleus of M nucleons to create a nucleus of N + M nucleons involves an increase in binding energy by B. All reactions of the form n_N + n_M → n_(N+M) are permitted to occur, up to N + M ≤ 10. The rate of these reactions is set to,

R(n_N + n_M → n_(N+M)) = C B exp[−3(B/kT)^(1/3)]        (A.1)

where C = 22,000. Equ.(A.1) gives the reaction rate in s^-1 (mole/cm^3)^-1 when B is in MeV. In our universe the exponential in (A.1) would represent the Coulomb barrier, which
would be of increasing height for nuclei of increasing charge. In the alternative universe the potential barrier has been made the same height for all nuclei. It is important to include a barrier term so that BBN ceases virtually completely after a few minutes or hours. The nuclei formed during BBN are then permanent features of the universe. But we wish to give the heavier elements a chance of forming during BBN and hence the barrier is made independent of atomic number. (We make no comment as to whether this would be possible by adjusting the parameters of the standard model; quite possibly not, but such is not the intention).

The universal temperature is taken to be,

T = 10^10/√t        (A.2)

where T is in K and t is in seconds. The universe is assumed to contain an equilibrium number density of zero-mass, spin one particles (photons), given in mole/cm^3 by,

ρ_γ = 0.2436 (kT/ħc)^3 / A        (A.3)

where A = 6 x 10^29 m^-3. The total number density of nucleons is given in terms of an assumed photon:nucleon ratio, η,

ρ_n = ρ_γ/η        (A.4)

The size scale of the universe varies as R ∝ √t in the radiation dominated era being assumed here, and hence the above relations are consistent with a constant number of photons and a constant number of nucleons, though the mean energies of both are reducing.
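As a quick consistency check on (A.2) and (A.3) as reconstructed here, the comoving photon number ρ_γ R^3 ∝ ρ_γ t^(3/2) should indeed be independent of t:

```python
# Consistency check on Eqs. (A.2)-(A.3) as reconstructed here: with
# T = 1e10/sqrt(t) and rho_gamma = 0.2436*(kT/(hbar*c))**3 / A, the
# comoving photon number rho_gamma * R^3, with R ~ sqrt(t), i.e.
# rho_gamma * t**1.5, does not depend on t.

k_B  = 1.3807e-23   # J/K
hbar = 1.0546e-34   # J s
c    = 2.998e8      # m/s
A    = 6e29         # m^-3, conversion factor to mole/cm^3

def temperature(t):            # Eq. (A.2): T in K, t in s
    return 1e10 / t ** 0.5

def rho_gamma(t):              # Eq. (A.3): photon density, mole/cm^3
    kT = k_B * temperature(t)
    return 0.2436 * (kT / (hbar * c)) ** 3 / A

n1 = rho_gamma(1.0) * 1.0 ** 1.5
n2 = rho_gamma(100.0) * 100.0 ** 1.5
print(n1, n2)   # equal: the photon number is conserved during expansion
```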
Since all reactions of the form n_N + n_M → n_(N+M) are assumed to occur, and since we have assumed that there are no stable nuclei beyond n_10, there are thus 25 contributing reactions. The effect of these reactions on both particle creation and particle consumption must be addressed. The original supply of nucleons is only consumed and not replenished by any of these reactions (since we do not model photodisintegration directly). Conversely, n_10 is only created and not consumed by any reactions. Writing the rate of a reaction n_a + n_b → n_(a+b) as λ_ab, an example equation - for the rate of change of the number density of n_6 nuclei - is,

dρ_6/dt = λ_15 + λ_24 + λ_33 − λ_16 − λ_26 − λ_36 − λ_46        (A.5)

Ten equations of this form, one for each n_N, comprise the complete reaction network. These ten equations constitute the macroscopic description of the dynamical system, together with Equs.(A.1-4). Numerical integration by time stepping from some starting time, t_s, to some finishing time, t_F, is straightforward. Hence, the final abundance of each of the nuclei is found.
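The ten-equation network can be sketched as follows. This is a minimal reconstruction, not the author's code: the normalisation of (A.1) as reconstructed above and the handling of the identical-particle reactions (a = b) are assumptions made for illustration.

```python
import math

# Minimal sketch of the appendix's 25-reaction network (an assumption-laden
# reconstruction, not the paper's code).  rate_coeff follows (A.1) as
# reconstructed above, with C = 22000 and a nuclide-independent barrier;
# the convention that an a == b reaction consumes two n_a is ours.

C_RATE = 22000.0
K_MEV = 8.617e-11        # Boltzmann constant in MeV per kelvin

def rate_coeff(B, T):
    """Rate coefficient of (A.1) in s^-1 (mole/cm^3)^-1; B in MeV, T in K."""
    return C_RATE * B * math.exp(-3.0 * (B / (K_MEV * T)) ** (1.0 / 3.0))

def network_rhs(rho, B, T):
    """d(rho_k)/dt for k = 1..10, where rho[k-1] is the density of n_k."""
    R = rate_coeff(B, T)
    drho = [0.0] * 10
    for a in range(1, 11):
        for b in range(a, 11):            # the 25 reactions with a + b <= 10
            if a + b > 10:
                continue
            lam = R * rho[a - 1] * rho[b - 1]   # rate of n_a + n_b -> n_(a+b)
            drho[a - 1] -= lam
            drho[b - 1] -= lam                  # for a == b this removes two n_a
            drho[a + b - 1] += lam
    return drho
```

Note that n_1 appears only on the consumption side and n_10 only on the creation side, matching the remarks above, and the loop conserves the total nucleon number Σ k ρ_k.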
It is necessary, of course, to take account of universal expansion in reducing the absolute number densities as time proceeds. This was done by multiplying all the particle densities by a factor [t/(t + Δt)]^(3/2) at the end of each time increment, Δt.
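The per-step dilution factor compounds, by telescoping, to the full ρ ∝ t^(-3/2) law implied by R ∝ √t, independently of the step size. A quick check (the times and step size below are illustrative, not the paper's):

```python
# The end-of-increment dilution factor [t/(t + dt)]**1.5 telescopes to the
# overall rho ∝ t**(-1.5) expansion law, whatever the step size.
# Times and step size here are illustrative.

def dilute(rho, t_start, n_steps, dt):
    t, r = t_start, rho
    for _ in range(n_steps):
        r *= (t / (t + dt)) ** 1.5   # expansion factor for this increment
        t += dt
    return r

# 9900 steps of 0.01 s take t from 1 s to 100 s:
print(dilute(1.0, 1.0, 9900, 0.01))   # ~(1/100)**1.5 = 1.0e-3
```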
It remains only to define how the starting and finishing times for the integration, t_s and t_F, are determined. The former is defined by photodisintegration, and the latter by cosmic expansion freezing-out the reactions. A full dynamical treatment would model the photodisintegration reaction n_(N+M) → n_N + n_M. However, at high enough temperatures there is such a numerical preponderance of photons with energies in excess of B that this photodisintegration reaction is far faster than the rate of formation of compound nuclei. At such temperatures we can assume as a working approximation that there are no compound nuclei present. The simulation of the nuclear reactions is therefore started at the earliest time that the nuclei become stable against photodisintegration. By integration of the black body spectrum, the fraction of photons with energies in excess of B is given by,

Photons(E > B) = 0.417 (x_1^2 + 2x_1 + 2) e^(−x_1)        (A.6)

where x_1 = B/kT_1 and T_1 is the highest temperature for nuclear stability. This occurs when the fraction of photons with E > B is less than the nucleon:photon fraction, so that T_1 is found by setting (A.6) equal to 1/η, i.e.,

0.417 (x_1^2 + 2x_1 + 2) e^(−x_1) = 1/η        (A.7)

The time at which this occurs is then found from (A.2) and defines t_s.
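Formula (A.6), as reconstructed here, can be checked against direct numerical integration of the Planck number spectrum (the 0.417 being 1/(2ζ(3)), the reciprocal of the full integral):

```python
import math

# Check of (A.6): the fraction of black-body photons with E > B is the
# integral of u**2/(exp(u) - 1) from x1 = B/kT1 upward, divided by the
# full integral 2*zeta(3) ≈ 2.404 (whence the factor 0.417).

def fraction_numeric(x1, n=200000, upper=60.0):
    # trapezoidal quadrature of the Planck number spectrum above x1
    h = (upper - x1) / n
    total = 0.0
    for i in range(n + 1):
        u = x1 + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * u * u / math.expm1(u)
    return h * total / 2.404114

def fraction_closed(x1):
    # Eq. (A.6)
    return 0.417 * (x1 * x1 + 2.0 * x1 + 2.0) * math.exp(-x1)

print(fraction_closed(5.0))    # ~0.104
print(fraction_numeric(5.0))   # agrees to well under 1%
```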
However, the net rate of production of nuclear species will be virtually zero at t_s, since nuclear stability is only marginal at that time. To account for this in a very crude fashion we arbitrarily factor the rate of nucleus formation (i.e. the forward reaction rate) by (t − t_s)/t_s for times between t_s and 2t_s.
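Determining t_s amounts to solving (A.7) for x_1 and inverting (A.2). A sketch, using bisection since the left-hand side of (A.7) is monotonically decreasing; B = 1 MeV and η = 10^9 are illustrative values, not figures taken from the paper:

```python
import math

# Solve (A.7) for x1 = B/kT1 by bisection, then invert (A.2) to get t_s.
# B = 1 MeV and eta = 1e9 below are illustrative assumptions.

K_MEV = 8.617e-11   # Boltzmann constant in MeV per kelvin

def x1_for(eta):
    """Root of 0.417*(x**2 + 2x + 2)*exp(-x) = 1/eta on [1, 60]."""
    lo, hi = 1.0, 60.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        f = 0.417 * (mid * mid + 2.0 * mid + 2.0) * math.exp(-mid)
        if f < 1.0 / eta:
            hi = mid        # f is decreasing, so the root lies to the left
        else:
            lo = mid
    return 0.5 * (lo + hi)

def t_start(B_mev, eta):
    T1 = B_mev / (K_MEV * x1_for(eta))   # highest temperature for stability
    return (1e10 / T1) ** 2              # invert Eq. (A.2)

print(x1_for(1e9))        # ~26.5: only far-tail photons can exceed B
print(t_start(1.0, 1e9))  # ~520 s for these assumed values
```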
Reactions cease when their rate falls below the Hubble parameter. Hence, the freeze-out of the nuclear reaction n_a + n_b → n_(a+b), whose rate is λ_ab, is assumed in our simple treatment to occur when,

λ_ab/(ρ_a + ρ_b) ≤ 1/(2t)        (A.8)

Integration continues until all reactions are frozen out, which defines time t_F. In cases where the freeze-out time for the first reaction, n_1 + n_1 → n_2, is earlier than the time t_s at which stability against photodisintegration occurs, then no nuclear reactions will take place and there will be no nucleosynthesis at all.
References
[1] Adams, F.C. [2008]: Stars In Other Universes: Stellar structure with different fundamental constants, JCAP 08 (2008) 010.
[2] Agrawal, V., Barr, S.M., Donoghue, J.F., Seckel, D. [1998a]: Anthropic considerations in multiple-domain theories and the scale of electroweak symmetry breaking, Phys. Rev. Lett., 80, 1822.
[3] Agrawal, V., Barr, S.M., Donoghue, J.F., Seckel, D. [1998b]: The anthropic principle and the mass scale of the standard model, Phys. Rev. D, 57, 5480.
[4] Aguirre, A. [2001]: The Cold Big-Bang Cosmology as a Counter-example to Several Anthropic Arguments, Phys. Rev. D64, 083508.
[5] Albrecht, A., et al [2006]: Report of the Dark Energy Task Force, arXiv:astro-ph/0609591.
[6] Barrow, J.D., Morris, S.C., Freeland, S.J., Harper, C.L. (editors) [2008]: Fitness of the Cosmos for Life: Biochemistry and Fine-Tuning, Cambridge University Press.
[7] Barrow, J.D., Tipler, F.J. [1986]: The Anthropic Cosmological Principle, Oxford University Press.
[8] Bousso, R., Harnik, R., Kribs, G.D., Perez, G. [2007]: Predicting the Cosmological Constant from the Causal Entropic Principle, Phys. Rev. D76:043513.
[9] Bousso, R., Hall, L.J., Nomura, Y. [2009]: Multiverse Understanding of Cosmological Coincidences, Phys. Rev. D80:063510.
[10] Bousso, R., Harnik, R. [2010]: The Entropic Landscape, arXiv:1001.1155.
[11] Bousso, R. [2002]: The holographic principle, Rev. Mod. Phys. 74, 825-874.
[12] Bradford, R.A.W. [2009]: The Effect of Hypothetical Diproton Stability on the Universe, J. Astrophys. Astr., 30, 119-131.
[13] Carr, B.J., Rees, M.J. [1979]: The anthropic principle and the structure of the physical world, Nature 278, 605-612.
[14] Carr, B.J. [2007]: Universe or Multiverse?, Cambridge University Press.
[15] Carter, B. [1967]: The Significance of Numerical Coincidences in Nature, Part I: The Role of Fundamental Microphysical Parameters in Cosmogony, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Preprint. Now available as arXiv:0710.3543.
[16] Carter, B. [1974]: Large number coincidences and the anthropic principle in cosmology, in Confrontations of Cosmological Theories with Observational Data (I.A.U. Symposium 63), ed. M. Longair (Reidel, Dordrecht, 1974), 291-298.
[17] Csoto, A., Oberhummer, H., Schlattl, H. [2000]: At the edge of nuclear stability: nonlinear quantum amplifiers, Heavy Ion Physics 12, 149. arXiv:nucl-th/0010051.
[18] Csoto, A., Oberhummer, H., Schlattl, H. [2001]: Fine-tuning the basic forces of nature by the triple-alpha process in red giant stars, Nucl. Phys. A688, 560c. arXiv:astro-ph/0010052.
[19] Damour, T., Donoghue, J.F. [2008]: Constraints on the variability of quark masses from nuclear binding, Phys. Rev. D 78, 014014.
[20] Davies, P.C.W. [1972]: Time variation of the coupling constants, J. Phys. A, 5, 1296.
[21] Davies, P.C.W. [1982]: The Accidental Universe, Cambridge University Press.
[22] Davies, P.C.W. [2004]: Multiverse cosmological models, Mod. Phys. Lett., A19, 727.
[23] Davies, P.C.W. [2006]: The Goldilocks Enigma: Why is the Universe Just Right for Life?, Allen Lane, London.
[24] Dyson, F.J. [1971]: Energy in the universe, Sci. Am., 225, 51.
[25] Egan, C.A. [2009]: Dark Energy, Anthropic Selection Effects, Entropy and Life, PhD thesis, University of New South Wales. Available as arXiv:1005.0745.
[26] Gribbin, J., Rees, M. [1989]: Cosmic Coincidences: Dark Matter, Mankind, and Anthropic Cosmology, Bantam Books, NY.
[27] Harnik, R., Kribs, G.D., Perez, G. [2006]: A Universe Without Weak Interactions, Phys. Rev. D74 (2006) 035006.
[28] Hogan, C.J. [2000]: Why the universe is just so, Rev. Mod. Phys., 72, 1149.
[29] Hogan, C.J. [2006]: Nuclear astrophysics of worlds in the string landscape, Phys. Rev. D, 74, 123514.
[30] Hoyle, F. [1954]: On Nuclear Reactions Occurring in Very Hot Stars. I. The Synthesis of Elements from Carbon to Nickel, Astrophysical Journal Supplement, 1, 121-146.
[31] Jaffe, R.L., Jenkins, A., Kimchi, I. [2009]: Quark masses: An environmental impact statement, Phys. Rev. D79, 065014.
[32] Jenkins, A. [2009]: Anthropic constraints on fermion masses, Acta Phys. Pol. B Proc. Suppl. 2, 283-288.
[33] Jenkins, A., Perez, G. [2010]: Looking for Life in the Multiverse, Sci. Am., January 2010, 302.
[34] Kauffman, S.A. [1993]: The Origins of Order, Oxford University Press.
[35] Klee, R. [2002]: The Revenge of Pythagoras: How a Mathematical Sharp Practice Undermines the Contemporary Design Argument in Astrophysical Cosmology, Brit. J. Phil. Sci., 53, 331-354.
[36] Lineweaver, C.H., Egan, C.A. [2007]: The Cosmic Coincidence as a Temporal Selection Effect Produced by the Age Distribution of Terrestrial Planets in the Universe, ApJ 671, 853-860.
[37] Lineweaver, C.H., Egan, C.A. [2008]: Life, gravity and the second law of thermodynamics, Physics of Life Reviews, 5, 225-242.
[38] Manson, N.A. [2000]: There Is No Adequate Definition of 'Fine-tuned for Life', Inquiry, 43, 341-52.
[39] Martel, H., Shapiro, P.R., Weinberg, S. [1998]: Likely values of the cosmological constant, Astrophys. J., 492, 29.
[40] McGrew, T., McGrew, L., Vestrup, E. [2001]: Probabilities and the Fine-Tuning Argument: a Sceptical View, Mind, 110, 1027-1038.
[41] Michaelian, K. [2010]: Thermodynamic origin of life, Earth Syst. Dynam. Discuss. 1, 1-39. Also available as arXiv:0907.0042.
[42] Oberhummer, H., Csoto, A., Schlattl, H. [1999]: Fine-Tuning Carbon-Based Life in the Universe by the Triple-Alpha Process in Red Giants, arXiv:astro-ph/9908247.
[43] Oberhummer, H., Csoto, A., Schlattl, H. [2000]: Stellar production rates of carbon and its abundance in the universe, Science 289, 88.
[44] Prigogine, I., Stengers, I. [1984]: Order Out Of Chaos, Bantam Books, USA.
[45] Ratra, B., Vogeley, M.S. [2008]: Resource Letter: BE-1: The Beginning and Evolution of the Universe, Publ. Astron. Soc. Pac. 120:235-265, arXiv:0706.1565.
[46] Rees, M.J. [1999]: Just Six Numbers: The Deep Forces that Shape the Universe, Weidenfeld & Nicolson, London.
[47] Rees, M.J. [2003]: Numerical Coincidences and Tuning in Cosmology, in Fred Hoyle's Universe, ed. C. Wickramasinghe et al. (Kluwer), pp 95-108 (2003), arXiv:astro-ph/0401424.
[48] Schlattl, H., Heger, A., Oberhummer, H., Rauscher, T., Csoto, A. [2004]: Sensitivity of the C and O production on the Triple-Alpha Rate, Astrophys. and Space Sci. 291, 27.
[49] Smolin, L. [1997]: The Life of the Cosmos, Weidenfeld & Nicolson, London.
[50] Susskind, L. [1995]: The World as a Hologram, J. Math. Phys. 36:6377-6396.
[51] Tegmark, M., Rees, M.J. [1998]: Why is the CMB Fluctuation Level 10^-5?, Astrophys. J., 499, 526-532.
[52] Wallace, D. [2010]: Gravity, Entropy, and Cosmology: in Search of Clarity, British Journal for the Philosophy of Science (Advance Access April 26, 2010).
[53] Weinberg, S. [2005]: Living in the Multiverse, in the symposium Expectations of a Final Theory, Trinity College Cambridge, September 2005 (obtainable as arXiv:hep-th/0511037 and also within Carr [2007]).
... These two conditions, thus, significantly reduce the congenial corridor from À 12:9 x 3 4:1 (13) to À1:4 x 3 À0:5: ...
... This indeed was one of the conclusions in [7], as summarized more elegantly in [10], as well as [13] where, reviewing the alternative universe landscapes studied by [7,10,[14][15][16], it was observed that if one is prepared to adjust another parameter in a compensating manner, it might be possible to find other regions in the parameter space that are also congenial. However, that does not remove the fine-tuning problem, as the alternative values are still finely tuned and this is inevitable to produce complexity as observed in our present Universe. ...
Article
The work of Jaffe, Jenkins and Kimchi [Phys. Rev. D79, 065014 (2009)] is revisited to see if indeed the region of congeniality found in their analysis survives further restrictions from nucleosynthesis. It is observed that much of their congenial region disappears when imposing conditions required to produce the correct and required abundances of the primordial elements as well as ensure that stars can continue to burn hydrogen nuclei to form helium as the first step in forming heavier elements in stellar nucleosynthesis. The remaining region is a very narrow slit reduced in width from around 29 MeV found by Jaffe et al. to only about 2.2 MeV in the difference of the nucleon/quark masses. Further bounds on $\delta m_q /m_q$ seem to reduce even this narrow slit to the physical point itself.
... Fine-tuning arguments have a long history [174,177,178,222]. Although many previous treatments have concluded that the universe is fine-tuned for the development of life [49,57,61,89,122,131,132,166,179,184,280,345,446,467,528,529], it should be emphasized that different authors make this claim with widely varying degrees of conviction (see also [94,125,165,233,281,358,364,447]). We also note that this topic has been addressed through the lens of philosophy (see [158,195,217,343,484] and references therein), although this present discussion will focus on results from physics and astronomy. ...
... In the intermediate regime, denoted here as the Galactic Habitable Zone (GHZ), the galactic background radiation is as bright as the daytime sky on Earth, so that planets are potentially habitable over a wide range of orbits (including unbound planets). parameter X ∼ 1 (from equation [97]), the optical depth for planetary disruption is also of order unity (from equation [94]). As a result, a sizable fraction of the planets residing in the galactic habitable zone will be freely floating, rather than in orbit about a particular star. ...
Preprint
(abridged) Both fundamental constants that describe the laws of physics and cosmological parameters that determine the cosmic properties must fall within a range of values in order for the universe to develop astrophysical structures and ultimately support life. This paper reviews current constraints on these quantities. The standard model of particle physics contains both coupling constants and particle masses, and the allowed ranges of these parameters are discussed first. We then consider cosmological parameters, including the total energy density, the vacuum energy density, the baryon-to-photon ratio, the dark matter contribution, and the amplitude of primordial density fluctuations. These quantities are constrained by the requirements that the universe lives for a long time, emerges from the BBN epoch with an acceptable chemical composition, and can successfully produce galaxies. On smaller scales, stars and planets must be able to form and function. The stars must have sufficiently long lifetimes and hot surface temperatures. The planets must be massive enough to maintain an atmosphere, small enough to remain non-degenerate, and contain enough particles to support a complex biosphere. These requirements place constraints on the gravitational constant, the fine structure constant, and composite parameters that specify nuclear reaction rates. We consider specific instances of possible fine-tuning in stars, including the triple alpha reaction that produces carbon, as well as the effects of unstable deuterium and stable diprotons. For all of these issues, viable universes exist over a range of parameter space, which is delineated herein. Finally, for universes with significantly different parameters, new types of astrophysical processes can generate energy and support habitability.
... Furthermore, maybe there is a deep link between the different constants and physical laws, such that it makes no sense to change just one parameter at a time. Changing a parameter would automatically perturb other parameters (see [11], p1581). Fortunately, more recent research have gone much further than these one-parameter variations. ...
... We now see the fundamental importance to define cosmic outcomes and the emergence of complexity in a very general manner, so they can also apply to other possible universes. Bradford [11] proposed such a framework when he wrote about sequences of entropy reduction. Aunger's [3] systems theoretical approach in terms of energy innovation, organization and control is also a higher-level approach. ...
Article
Full-text available
This paper introduces foundations for a new kind of cosmology. We advocate that computer simulations are needed to address two key cosmological issues. First, the robustness of the emergence of complexity, which boils down to ask: "what would remain the same if the tape of the universe were replayed?" Second, the much debated fine-tuning issue, which requires to answer the question: "are complex universes rare or common in the space of possible universes?" We argue that computer simulations are indispensable tools to address those two issues scientifically. We first discuss definitions of possible universes and of possible cosmic outcomes - such as atoms, stars, life or intelligence. This leads us to introduce a generalized Drake-like equation, the Cosmic Evolution Equation. It is a modular and conceptual framework to define research agendas in computational cosmology. We outline some studies of alternative complex universes. However, such studies are still in their infancy, and they can be fruitfully developed within a new kind of cosmology, heavily supported by computer simulations, Artificial Cosmogenesis. The Appendix [section 9] provides argumentative maps of the paper's main thesis. KEYWORDS: artificial cosmogenesis, cosmic evolution, computational cosmology, digital physics, Drake equation, Cosmic Evolution Equation, robustness, fine-tuning, multiverse.
... On the other hand, the difference can't be too large, 10 See Barrow and Tipler (1986), Leslie (1989), Davies (2006), Ellis (2007), and Barnes (2012) for further examples and more detailed references. Stenger (2011) and Bradford (2011 attempt to play down the accuracies claimed of fine-tuning, whilst Aguirre (2001) casts doubt on its limited scope. ...
Chapter
Full-text available
Our laws of nature and our cosmos appear to be delicately fine-tuned for life to emerge, in a way that seems hard to attribute to chance. In view of this, some have taken the opportunity to revive the scholastic Argument from Design, whereas others have felt the need to explain this apparent fine-tuning of the clockwork of the Universe by proposing the existence of a ‘Multiverse’. We analyze this issue from a sober perspective. Having reviewed the literature and having added several observations of our own, we conclude that cosmic fine-tuning supports neither Design nor a Multiverse, since both of these fail at an explanatory level as well as in the more quantitative context of Bayesian confirmation theory (although there might be other reasons to believe in these ideas, to be found in religion and in inflation and/or string theory, respectively). In fact, fine-tuning and Design even seem to be at odds with each other, whereas the inference from fine-tuning to a Multiverse only works if the latter is underwritten by an additional metaphysical hypothesis we consider unwarranted. Instead, we suggest that fine-tuning requires no special explanation at all, since it is not the Universe that is fine-tuned for life, but life that has been fine-tuned to the Universe.
Article
Both the fundamental constants that describe the laws of physics and the cosmological parameters that determine the properties of our universe must fall within a range of values in order for the cosmos to develop astrophysical structures and ultimately support life. This paper reviews the current constraints on these quantities. The discussion starts with an assessment of the parameters that are allowed to vary. The standard model of particle physics contains both coupling constants (α, α_s, α_w) and particle masses (m_u, m_d, m_e), and the allowed ranges of these parameters are discussed first. We then consider cosmological parameters, including the total energy density of the universe (Ω), the contribution from vacuum energy (ρ_Λ), the baryon-to-photon ratio (η), the dark matter contribution (δ), and the amplitude of primordial density fluctuations (Q). These quantities are constrained by the requirements that the universe lives for a sufficiently long time, emerges from the epoch of Big Bang Nucleosynthesis with an acceptable chemical composition, and can successfully produce large scale structures such as galaxies. On smaller scales, stars and planets must be able to form and function. The stars must be sufficiently long-lived, have high enough surface temperatures, and have smaller masses than their host galaxies. The planets must be massive enough to hold onto an atmosphere, yet small enough to remain non-degenerate, and contain enough particles to support a biosphere of sufficient complexity. These requirements place constraints on the gravitational structure constant (α_G), the fine structure constant (α), and composite parameters (C_⋆) that specify nuclear reaction rates. We then consider specific instances of possible fine-tuning in stellar nucleosynthesis, including the triple alpha reaction that produces carbon, the case of unstable deuterium, and the possibility of stable diprotons. For all of the issues outlined above, viable universes exist over a range of parameter space, which is delineated herein. Finally, for universes with significantly different parameters, new types of astrophysical processes can generate energy and thereby support habitability.
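Two of the dimensionless couplings this review constrains can be computed directly from their standard definitions, α_G = G m_p² / (ħc) and α = e² / (4πε₀ħc). A minimal sketch using approximate SI values (rounded, not precision CODATA figures):

```python
import math

# Approximate SI values (rounded; a sketch, not a precision calculation).
G    = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34   # reduced Planck constant, J s
c    = 2.9979e8     # speed of light, m/s
m_p  = 1.6726e-27   # proton mass, kg
e    = 1.6022e-19   # elementary charge, C
eps0 = 8.8542e-12   # vacuum permittivity, F/m

# Gravitational structure constant: alpha_G = G m_p^2 / (hbar c)
alpha_G = G * m_p**2 / (hbar * c)

# Fine structure constant: alpha = e^2 / (4 pi eps0 hbar c)
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)

print(alpha_G)    # ~ 5.9e-39
print(1 / alpha)  # ~ 137
```

The enormous gap between α_G ~ 10⁻³⁹ and α ~ 10⁻² is what makes gravity irrelevant at atomic scales yet dominant for stars, and it is the ratio of such couplings that the habitability constraints in the abstract bound.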
Book
In this fascinating journey to the edge of science, Vidal takes on big philosophical questions: Does our universe have a beginning and an end or is it cyclic? Are we alone in the universe? What is the role of intelligent life, if any, in cosmic evolution? Grounded in science and committed to philosophical rigor, this book presents an evolutionary worldview where the rise of intelligent life is not an accident, but may well be the key to unlocking the universe's deepest mysteries. Vidal shows how the fine-tuning controversy can be advanced with computer simulations. He also explores whether natural or artificial selection could hold on a cosmic scale. In perhaps his boldest hypothesis, he argues that signs of advanced extraterrestrial civilizations are already present in our astrophysical data. His conclusions invite us to see the meaning of life, evolution and intelligence from a novel cosmological framework that should stir debate for years to come.
Book
We explore several concepts for analyzing the intuitive notion of computational irreducibility and we propose a robust formal definition, first in the field of cellular automata and then in the general field of any computable function f from N to N. We prove that, through a robust definition of what it means "to be unable to compute the nth step without having to follow the same path as simulating the automaton or the function", this genuinely implies, as intuitively expected, that if the behavior of an object is computationally irreducible, no computation of its nth state can be faster than the simulation itself.
Article
If the results of the first LHC run are not betraying us, many decades of particle physics are culminating in a complete and consistent theory for all non-gravitational physics: the Standard Model. But despite this monumental achievement there is a clear sense of disappointment: many questions remain unanswered. Remarkably, most unanswered questions could just be environmental, and disturbingly (to some) the existence of life may depend on that environment. Meanwhile there has been increasing evidence that the seemingly ideal candidate for answering these questions, String Theory, gives an answer few people initially expected: a huge "landscape" of possibilities, that can be realized in a multiverse and populated by eternal inflation. At the interface of "bottom-up" and "top-down" physics, a discussion of anthropic arguments becomes unavoidable. We review developments in this area, focusing especially on the last decade.
Book
Where does it all come from? Where are we going? Are we alone in the universe? What is good and what is evil? The scientific narrative of cosmic evolution demands that we tackle such big questions with a cosmological perspective. I tackle the first question in Chapters 4-6; the second in Chapters 7-8; the third in Chapter 9 and the fourth in Chapter 10. However, where do we start to answer such questions? In Chapters 1-3, I elaborate the concept of worldview and argue that we should aim at constructing comprehensive and coherent worldviews. In Chapter 4, I identify seven fundamental challenges to any ultimate explanation. I conclude that our explanations tend to fall in two cognitive attractors, the point or the cycle. In Chapter 5, I focus on the free parameters issue, while Chapter 6 is a critical analysis of the fine-tuning issue. I conclude that fine-tuning is a conjecture and that we need to further study how typical our universe is. This opens a research endeavor that I call artificial cosmogenesis. In Chapter 7, I show the importance of artificial cosmogenesis from extrapolating the future of scientific simulations. I then analyze two other evolutionary explanations of fine-tuning in Chapter 8: Cosmological Natural Selection and the broader scenario of Cosmological Artificial Selection. In Chapter 9, I inquire into the search for extraterrestrials and conclude that some binary star systems are good candidates. Since those putative beings feed on stars, I call them starivores. The question of their artificiality remains open, but I propose a prize to further continue and motivate the scientific assessment of this hypothesis. In Chapter 10, I explore foundations to build a cosmological ethics and conclude that the ultimate good is the infinite continuation of the evolutionary process. Appendix I summarizes my position and Appendix II provides argumentative maps of the entire thesis.
Article
Proponents of the Fine-Tuning Argument frequently assume that the narrowness of the life-friendly range of fundamental physical constants implies a low probability for the origin of the universe 'by chance'. We cast this argument in a more rigorous form than is customary and conclude that the narrow intervals do not yield a probability at all, because the resulting measure function is non-normalizable. We then consider various attempts to circumvent this problem and argue that they fail.
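The non-normalizability problem the abstract refers to can be stated compactly. With a uniform measure on an unbounded range for a constant λ, the probability of landing in a life-permitting interval [a, b] would be

```latex
P(\text{life-permitting}) \;=\; \frac{\int_{a}^{b} \mathrm{d}\lambda}{\int_{0}^{\infty} \mathrm{d}\lambda} \;=\; \frac{b-a}{\infty},
```

which is undefined rather than small: the denominator diverges, so no normalized probability exists and the narrowness of [a, b] alone licenses no "improbability" claim.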
Article
Recent advances in string theory and inflationary cosmology have led to a surge of interest in the possible existence of an ensemble of cosmic regions, or "universes", among the members of which key physical parameters, such as the masses of elementary particles and the coupling constants, might assume different values. The observed values in our cosmic region are then attributed to an observer selection effect (the so-called anthropic principle). The assemblage of universes has been dubbed "the multiverse". In this paper we review the multiverse concept and the criticisms that have been advanced against it on both scientific and philosophical grounds.
Article
According to 't Hooft the combination of quantum mechanics and gravity requires the three-dimensional world to be an image of data that can be stored on a two-dimensional projection much like a holographic image. The two-dimensional description only requires one discrete degree of freedom per Planck area and yet it is rich enough to describe all three-dimensional phenomena. After outlining 't Hooft's proposal we give a preliminary informal description of how it may be implemented. One finds a basic requirement that particles must grow in size as their momenta are increased far above the Planck scale. The consequences for high-energy particle collisions are described. The phenomenon of particle growth with momentum was previously discussed in the context of string theory and was related to information spreading near black hole horizons. The considerations of this paper indicate that the effect is much more rapid at all but the earliest times. In fact the rate of spreading is found to saturate the bound from causality. Finally we consider string theory as a possible realization of 't Hooft's idea. The light front lattice string model of Klebanov and Susskind is reviewed and its similarities with the holographic theory are demonstrated. The agreement between the two requires unproven but plausible assumptions about the nonperturbative behavior of string theory. Very similar ideas to those in this paper have long been held by Charles Thorn.
Article
Energy flows in the universe are described and the genesis of the various kinds of energy is discussed. The processes by which energy is channelled through the cosmos are examined, with implications for the Earth highlighted.
Article
Understanding the observed cosmic acceleration is widely ranked among the most compelling of all outstanding problems in physical science. Many believe that nothing short of a revolution will be required to integrate the cosmic acceleration (often attributed to "dark energy") with our understanding of fundamental physics. The DETF (Dark Energy Task Force) was formed at the request of DOE, NASA and NSF as a joint subcommittee of the Astronomy and Astrophysics Advisory Committee (AAAC) and the High Energy Physics Advisory Panel (HEPAP) to give advice on optimizing our program of dark energy studies. To this end we have assessed a wide variety of possible techniques and strategies. I will present our main conclusions and discuss their implications.