
arXiv:2110.07656v1 [physics.hist-ph] 14 Oct 2021

Determinism Beyond Time Evolution

Emily Adlam1

1University of Western Ontario

October 18, 2021

Physicists are increasingly beginning to take seriously the possibility of laws which may be non-local, global,

atemporal, retrocausal, or in some other way outside the traditional time-evolution paradigm. Yet our understanding

of determinism is still predicated on a forwards time-evolution picture, making it manifestly unsuited to the diverse

range of research programmes in modern physics. In this article, we set out a generalization of determinism which

does not presuppose temporal directedness, and we explore some of the consequences of this generalization for

the philosophy of determinism and chance.

We begin in section 1 by identifying several problems with the Laplacean deﬁnition of determinism. First,

Laplacean determinism fails to deliver meaningful verdicts when applied to theories with non-standard approaches

to time-evolution. Second, Laplacean determinism gives a special status to the initial state of the universe, which

seems unjustiﬁed given the manifest time-symmetry of nearly all physical laws. And third, Laplacean determinism

fails to distinguish clearly between the related but distinct notions of predictability and determinism. Indeed, these

problems are linked: if we elide determinism with predictability then naturally we will think of determinism in terms

of the determination of the future by the past, but if we consider determinism to be a property of the world which is

distinct from facts about predictive capabilities, we will be more inclined to seek deﬁnitions of determinism which

make sense from a global, external point of view.

In section 3 we use a constraint-based framework to provide several such deﬁnitions, distinguishing between

strong, weak and hole-free global determinism. Then in section 4 we discuss some interesting consequences of these

generalized notions of determinism. In section 5, we show that this approach sheds new light on the long-standing

debate surrounding the nature of objective chance, because it transpires that in a globally deterministic world it is

possible to have events which appear probabilistic from the local point of view but which nonetheless don’t require us

to invoke ‘objective chance’ from the external point of view. Finally, in section 6 we discuss how global determinism

relates to several other relevant research programmes.

1 Problems for Laplacean Determinism

1.1 Temporal Direction

The notion of determinism was famously ﬁrst articulated by Laplace, who suggested that ‘we ought to regard the

present state of the universe as the effect of its antecedent state and as the cause of the state that is to follow.’ [1]

That is, a theory is deterministic if according to that theory, the present state of the universe, together with the laws of

nature, is sufﬁcient to determine everything that happens in the future. We will refer to this approach as ‘Laplacean

determinism.’ Although the terminology has evolved somewhat, our modern notion of determinism is still essentially

a version of Laplacean determinism: in order to reconcile determinism with relativity’s denial of absolute simultaneity

it is necessary to update the deﬁnition to refer to the state on any given hyperplane of simultaneity, rather than simply

‘the present state,’ but the fundamental idea of the present determining the future is unchanged. For example, ref [2]

provides a careful classiﬁcation of three different approaches to deﬁning determinism for physical theories: we may

require that the solution to the differential equations should always be unique, we may require that if two linear

temporal realizations can be mapped at one time they can always be mapped at all future times, or we may require


that the theory’s models are not branching. However, as the authors note, all three of their approaches are based on

the core idea ‘that given the way things are at present, there is only one possible way for the future to turn out’ and

therefore all three approaches are different expressions of the temporally directed picture associated with Laplacean

determinism.

This made sense for Laplace, who was working in the context of Newtonian physics wherein the fundamental

role of laws is to give rise to time evolution: the Newtonian universe is a kind of computer which takes in an initial

state and evolves it forward in time [3]. But there have been several major scientiﬁc revolutions since the time of

Newton, and thus deﬁnitions of determinism based on a Newtonian time-evolution picture may not be well-suited to

the realities of modern physics. Ref [4] details a large variety of research programmes in modern physics which are

moving away from the time evolution paradigm, so here we will limit ourselves to mentioning a few examples. First,

although it is most common to conceptualise classical mechanics in terms of the Newtonian schema [5] in which laws

act on states to produce time evolution, there is also an alternative Lagrangian description of classical mechanics in

which systems are required to take the path which optimizes a quantity known as the Lagrangian [6]. Path integrals -

the analogue of the Lagrangian method within quantum mechanics [7] - have become so important to quantum ﬁeld

theory that increasingly we are seeing calls to take the Lagrangian description more seriously [8, 9]. And as argued

by Wharton [3], taking Lagrangian methods seriously leads to a novel ‘all-at-once’ approach to lawhood in which we

think of laws applying externally and atemporally to the whole of history at once [10], so the time evolution paradigm is

no longer appropriate. In a similar vein, retrocausal approaches to the interpretation of quantum mechanics have been

attracting signiﬁcant attention in recent years; see for example the two-state vector interpretation [11], the transactional

interpretation [12], Kent’s approach to Lorentzian beables [13], Wharton’s retrocausal path integral approach [14],

and Sutherland’s causally symmetric Bohmian model [15]. The proliferation of such models provides good reason to

suppose that a correct understanding of quantum mechanics may require us to rethink some of our ideas about time and

temporal evolution. Likewise, large sectors of the growing ﬁeld of quantum information science are concerned with

discovering general constraints on what information-processing tasks can be achieved using quantum systems. For

example, there is the ‘no-signalling principle’ [16], ‘information causality’ [17], ‘monogamy,’ and ‘no-cloning’ [18].

These sorts of constraints are regarded by many researchers in the ﬁeld as being deep and fundamental features of

physical reality, and yet they are certainly not dynamical laws or time evolution laws in the usual sense, since they are

primarily concerned with describing what is possible or impossible. In response to these developments, refs [4, 19]

have argued that we should move away from the time evolution approach to lawhood and instead conceptualize laws

in terms of global constraints which are understood to constrain the whole of spacetime at once.

Clearly we will encounter difﬁculties if we try to apply the deﬁnition of Laplacean determinism to worlds governed

by laws like these which fall outside the time evolution paradigm. For example, consider a non-Markovian world

which is governed by laws such that the evolution at a given time depends not only on the present state, but also

on some facts about the past which are not recorded in the present state. The non-Markovian world fails to satisfy

Laplacean determinism, since the state at a given time does not sufﬁce to determine all future states. But it seems

odd to refer to such a world as ‘indeterministic’ - after all, the evolution at a given time is entirely determined by

the past. Similarly, consider the example of a world whose ontology is something like the Bell ﬂash ontology [20],

i.e. reality is composed of ‘a constellation of ﬂashes,’ which are to be regarded as pointlike events, so there are no

‘states’ in this world. Clearly in such a world the present state of the universe can’t determine the future, since there

is no present state, but nonetheless it might be the case that the distribution of future ﬂashes is fully determined by the

distribution of past flashes,¹ and under those circumstances one might think it reasonable to describe this world

as ‘deterministic’ even though it doesn’t satisfy the standard deﬁnition of Laplacean determinism. The problem is that

the Laplacean approach to determinism presupposes ‘temporal locality,’ i.e. it assumes that events at a given time can

depend only on the state of the world at that time [10], and therefore worlds like the non-Markovian world and the

ﬂash world are necessarily classiﬁed as indeterministic despite exhibiting perfect determination of the future by the

past.
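To make the non-Markovian case concrete, here is a small toy dynamics of our own devising (not drawn from any of the cited models) in which the "state" at a time is a single value, but the update rule also consults a fact about the past not recorded in the present state. Two histories sharing the same present state can evolve differently, so Laplacean determinism fails, even though the complete past fixes the future uniquely:

```python
# Toy non-Markovian dynamics: x_{t+1} depends on x_t AND on x_{t-1},
# a fact about the past that the present state x_t does not record.

def evolve(history):
    """Return the next value given the entire past history."""
    if len(history) < 2:
        return history[-1] + 1
    return history[-1] + history[-2]  # depends on more than the present state

def run(initial, steps):
    history = list(initial)
    for _ in range(steps):
        history.append(evolve(history))
    return history

# Two histories with the same *present* state (x_t = 3) but different
# pasts diverge, so the present state alone does not determine the future...
a = run([1, 3], 3)   # past value x_{t-1} = 1
b = run([2, 3], 3)   # past value x_{t-1} = 2
assert a[1] == b[1] == 3 and a[2] != b[2]

# ...yet the full past determines the future uniquely: rerunning from
# the same complete history always yields the same continuation.
assert run([1, 3], 3) == run([1, 3], 3)
```

The point of the sketch is just that "the present state does not suffice" and "nothing is left undetermined" can come apart once temporal locality is dropped.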

In response to this problem, Dowe has proposed several alternative deﬁnitions of determinism which allow that

worlds exhibiting temporal nonlocality may still be deterministic - for example, his modal-nomic deﬁnition proposes

that a world W is deterministic iff ‘for any time t and any other physically possible world W’, if W and W’ agree up until t then they agree for all times.’ [21] Both the non-Markovian world and the flash world can potentially be deterministic

¹ This is not the case in the actual GRW flash model, but one can imagine a similar theory where it would be the case.


according to this deﬁnition. But Dowe’s approach retains the temporally-directed features of Laplacean determinism,

and thus it still does not accommodate the full spectrum of non-Newtonian laws appearing in modern physics. For

example, consider a retrocausal world in which there is both a forwards-evolving state and a backwards-evolving state,

and the initial and ﬁnal states of the universe together with the laws of nature sufﬁce to determine everything which

happens over the whole course of history. In this world the evolution at a given time is not wholly determined by

the past, nor indeed by the future, so the world will be judged indeterministic by Laplacean determinism and Dowe’s

criterion, and yet there seems to be a sense in which it is not really indeterministic - after all, the events in it are ﬁxed

once and for all by the forwards and backwards evolving states, so nothing that happens is random and we have no

need to invoke anything that looks like an objective chance.² Similarly, a world governed by ‘all-at-once’ laws in the

style of Wharton will not typically be judged as deterministic by either Laplacean determinism or Dowe’s criterion,

because in the all-at-once picture past and future events depend on one another mutually and reciprocally and therefore

we will not always be able to write these dependence relations wholly in terms of determination of the future by the

past. Yet ‘all-at-once’ laws could in principle ﬁx the whole course of history uniquely, and therefore there is a sense in

which worlds governed by such laws don’t really look indeterministic. Thus it seems very reasonable to think that we

ought to have a deﬁnition of determinism which allows retrocausal and all-at-once worlds and other worlds like them

to be deterministic.

At this point we should reinforce that it is not our intention to have an argument over terminology - one could

certainly make a case that determinism should be deﬁned in terms of evolution forwards in time because that is the

way the term has always been used, and proponents of this way of thinking might then be inclined to suggest that

the non-Markovian world, ﬂash world, retrocausal world and all-at-once world should not be considered deterministic

despite some intuitions to the contrary. We have no particular quarrel with this position: our intention here is simply

to argue that there is a meaningful distinction between worlds like the non-Markovian/ﬂash/retrocausal/all-at-once

world, and worlds in which some events are not determined by anything, so there is a need for terminology which

recognises that distinction. For convenience and familiarity we will describe the former as ‘deterministic’ and the latter

as ‘indeterministic,’ but readers who prefer to use the term determinism to refer only to determination by forwards-

evolving states are welcome to adopt whatever alternative terminology they prefer.

1.2 The Initial State

A further problem for the traditional deﬁnition of Laplacean determinism (and other temporally directed approaches

like Dowe’s) is that it suggests the initial state of the universe should be thought of as an ‘input’ which is then evolved

forwards in time by the laws of nature to produce the rest of history. But there is a tension between this traditional

picture and the manifest time-symmetry of the laws of physics,³ because from a mathematical point of view we

could equally well start at the end of time and evolve backwards, or indeed start in the middle of time and evolve

both forwards and backwards. So it doesn’t necessarily seem that the traditional picture’s metaphysical commitment

to a special role for the initial state is really supported by the empirical evidence: of course there is certainly a

phenomenological arrow of time pointing in the forward direction, but the way in which it emerges out of largely

time-symmetric fundamental physics remains controversial [23, 24], so it is certainly not straightforward to argue that the phenomenological arrow supports the Newtonian picture of initial conditions plus forward time evolution.

One possible option for restoring time symmetry is to suppose that both the initial conditions and the ﬁnal con-

ditions of the universe are given as inputs to the universe-as-computer and then we get both forwards and backwards

time evolution. It would be difﬁcult to make this work in the context of a theory like classical mechanics which obeys

Laplacean determinism, since in such a theory the ﬁnal condition is fully determined by the initial condition, so there

is no freedom for the initial and ﬁnal conditions to be distinct inputs. But such a move is possible within a probabilistic

theory like quantum mechanics, and indeed several proposals of this kind have been made, such as the two-state vector

² Note that here and throughout this article we will use ‘objective chance’ to refer exclusively to chances which arise from the fundamental

laws of nature, such as the probabilities arising from the Born rule within indeterministic interpretations of quantum mechanics - i.e. in this article

‘objective chance’ does not include higher-level emergent chances, chances derived via the method of arbitrary functions, deterministic probabilities

or anything else that might in another context be called an objective chance.

³ It is common to hold up the measurement process in quantum mechanics as an example of laws which are not time-symmetric, but it has recently

been observed that in fact the Born rule is time-symmetric [22], so it is not clear that any of the laws of physics point clearly to the existence of time

asymmetry at a fundamental level.


formalism for quantum mechanics in which we impose an initial condition and a ﬁnal condition and intermediate

measurement outcomes are determined by the interaction of the forward-evolving state and the backwards-evolving

state [11]. It should be noted, however, that in this sort of picture it’s still not the case that the initial and ﬁnal states

can be regarded as completely independent ‘inputs’, for they must be at least minimally consistent with one another:

for example, if it is the case that conservation laws apply to the whole universe, then presumably quantities like the

total amount of mass-energy must be the same in both the initial and final states.

Moreover, although this approach does solve the asymmetry problem, it still gives rise to underdetermination,

because the physics does not compel us to choose boundary points for our inputs: we could equally well choose some

spacelike hyperplane at some point in the middle of time and evolve forwards or backwards from it. Thus the insistence

that the world contains some form of directed time evolution at the metaphysical level actually leads to a strong form of

underdetermination, since any spacelike hyperplane could in principle be the hyperplane from which everything else in

the universe is generated via forwards and backwards time evolution, and there is no possible way to tell which one is

the real ‘initial input.’ Underdetermination is not always bad news, but when the underdetermination is generated only

by our metaphysical preconceptions (as for example in the hole argument [25]) physicists and philosophers alike tend

to be suspicious, and so it is natural to ask if there is an alternative metaphysical picture we could adopt which would

make this extreme form of underdetermination go away (such as moving to sophisticated substantivalism in the case

of the hole argument [26]). One possible response would be to adopt a best-systems approach to lawhood [27, 28], in

order to argue that these apparently distinct possibilities are merely different systematizations of the Humean mosaic

so there is no fact of the matter about which one of them is correct; but that route sidesteps all the interesting questions

about what lawhood and determinism might look like if we don’t presume the usual input-output form, and thus there

is a clear mandate for a new deﬁnition of determinism which addresses those questions and avoids singling out some

‘input’ from which the rest of history is generated.

1.3 Predictability

A ﬁnal difﬁculty for the Laplacean deﬁnition of determinism is that it fails to properly disentangle the concepts of

determinism and predictability. Laplace’s original comments on the matter conﬂated the two, describing determinism

metaphorically in terms of the predictive abilities of ‘an intellect which at a certain moment would know all forces

that set nature in motion, and all positions of all items of which nature is composed,’ [1] and this way of thinking is

still evident in many modern analyses of determinism [29]. Yet at least in theory, predictability and determinism are

supposed to be quite distinct - predictability is an epistemic matter, whereas determinism is metaphysical, which is to

say, it is supposed to capture something about the way the world really is, independent of our ability to ﬁnd out things

about it. Thus for example Clark, considering the possibility that the functional dependence of the future on the present

state might fail to be effectively computable, asserts that ‘there is no reason at all to tie the claim of determinism to a

thesis of global predictability.’ [30] The Laplacean deﬁnition doesn’t seem to do justice to this intuition.

Clearly this problem is linked to the ﬁrst two, because the temporally directed deﬁnition of Laplacean determinism

is to a large extent a consequence of conﬂating predictability and determinism: traditionally we say that a theory or

world is deterministic if facts about the present state fully determine the future, and non-coincidentally, we ourselves

have a strong practical interest in predicting the future from facts about the present. But if determinism is genuinely to

be regarded as a property of the world rather than as a function of our practical interests, there’s no reason it should be

deﬁned in this way: from the point of view of the universe as a whole, it need not be the case that things are always

determined in the particular temporal direction in which human observers usually want to predict them. Of course,

the temporal prejudice written into the deﬁnition of Laplacean determinism is harmless so long as we are dealing with

laws of nature which always take the standard dynamical, forward-evolving form, since in that picture determination

of the future by the present state is the only possible type of determination, but since some parts of physics are now

beginning to move away from the time-evolution picture, we are in need of a new way of thinking about determinism

which decouples it from time-evolution.

Now, one might perhaps respond to this criticism of Laplacean determinism by pushing back on the intuition that

determinism is supposed to say something about the world itself independently of any theory. Indeed, although the

notion of determinism as a property of the world itself is widespread and intuitive, philosophers of physics are often

disinclined to use the term in this way, for as Butterfield notes [31], the idea that the world as a whole may be deterministic is often predicated on ideas about ‘events’, ‘causation’ and ‘prediction’ which ‘are vague and controversial

notions, and are not used (at least not univocally) in most physical theories.’ Thus in the philosophy of physics it is

common to limit talk about determinism to discussions about whether some particular theory is deterministic.

In fact, we agree whole-heartedly with Butterﬁeld’s criticisms. However, we contend that the problem arises

not because the notion of determinism as a property of the world is incoherent, but because current deﬁnitions of

determinism don’t have the conceptual resources to capture this notion. In particular, given that the deﬁnition of

Laplacean determinism is so focused on the practical interests of human observers, it is hardly surprising that things get

murky when we try to apply that deﬁnition in an observer-independent way. The solution, therefore, is not to abandon

the idea of determinism as a property of the world, but rather to set out a precise way of talking about determinism

which is perspective-neutral and independent of the speciﬁc epistemic concerns of observers like ourselves. Thus we

hope to mitigate Butterfield’s concerns by offering an improved, observer-independent definition of determinism which

makes no implicit or explicit appeal to issues of predictability.

2 The Constraint Framework

Determinism and objective chance are patently modal notions - a world is deterministic if it is the case that the course

of history in that world could not have gone any other way. Thus we submit that a meaningful notion of determinism

for worlds, rather than merely theories, must be predicated on a fairly robust approach to modality - for every possible

world there must be some well-deﬁned modal structure which we can invoke to determine whether or not that world is

deterministic. Therefore we will henceforth take for granted the existence of a well-deﬁned modal structure, although

we will not comment further on the metaphysical nature of that modal structure - it might be deﬁned by the axioms

of the systematization of that world’s Humean mosaic which is robustly better than all other systematizations [28],

or it could result from governing laws within any one of the realist approaches to lawhood [32–34], or it could be

understood in terms of Lewis’ modal realism [35], or it could be something else altogether.

Objective modal structure is most commonly discussed in the context of ontic structural realism, whose proponents

often characterise it in terms of causal structure [36–38]. But as argued in ref [4], due to the asymmetrical nature of

the standard notion of causation this approach doesn’t work well when we’re dealing with possibilities outside the time

evolution paradigm, so in this paper we will make use of the more general approach to characterising modal structure

developed in ref [4], which is based on constraints. Recall that the Humean mosaic for a possible world is the set of

all local matters of particular fact in that world, i.e. all the instantiations of categorical properties across the spacetime

of that world, including facts about the structure of spacetime itself if necessary. We will deﬁne a constraint as a set of

Humean mosaics. In some cases, a constraint can be expressed in simple English as a requirement like ‘no process

can send information faster than light,’ such that the constraint corresponds to exactly the set of mosaics in which the

requirement is satisﬁed. But clearly there are many sets of mosaics which will not have any straightforward English

characterisation, so we will not be able to state the corresponding constraint in simple terms. Nonetheless, each set

still deﬁnes a unique constraint.

Using these deﬁnitions, we postulate that every world has some set of laws of nature which are an objective fact

about the modal structure of that world, and we characterise these laws in terms of probability distributions over

constraints. For example, a law which prohibits superluminal signalling induces a probability distribution which

assigns probability 1 to the constraint consisting of all the Humean mosaics in which no superluminal signalling occurs, and 0 to all other constraints. Within this picture, we can imagine that the laws of nature operate as follows:

ﬁrst, for each law a constraint is drawn according to the associated probability distribution, and then the constraints

govern by singling out a set of mosaics from which the Humean mosaic of the actual world must be drawn - i.e.

the actual mosaic must be in the intersection of all the chosen constraints. As shown in ref [4], a very large class

of possible laws can be written in this framework - it can accommodate the various non-Newtonian laws discussed in

section 1.1 but it can also accommodate standard dynamical laws by singling out sets of dynamically possible histories.

Thus we can use these probability distributions over constraints as proxies for the laws of nature without needing to

choose any particular metaphysical account of lawhood, and this makes it possible for us to give a general deﬁnition

of determinism which does not depend on any ontological commitments other than the commitment to the existence

of objective modal structure.

We could of course have given a similar deﬁnition in terms of probability distributions over mosaics rather than


constraints, but we have chosen to use constraints here in order to acknowledge that a law may require (determinis-

tically or probabilistically) that the actual Humean mosaic belongs to a given set, but then say nothing further about

which particular mosaic within the set will be selected - that is, the law prescribes no distribution within the set, not

even the uniform distribution. This point will be a crucial feature of our approach to determinism.
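As a concrete, deliberately tiny illustration of this machinery, the following sketch encodes mosaics as sets of local facts, constraints as sets of mosaics, and laws as probability distributions over constraints. The encoding and all the names in it are our own toy choices, not part of the framework of ref [4]:

```python
# Toy encoding of the constraint framework: a mosaic is a frozenset of
# atomic local facts, a constraint is a (frozen)set of mosaics, and a law
# is a probability distribution over constraints. The laws "operate" by
# drawing one constraint per law; the actual mosaic must then lie in the
# intersection of the drawn constraints.
import random

# Four toy mosaics.
M1 = frozenset({"a", "b"})
M2 = frozenset({"a", "c"})
M3 = frozenset({"b", "c"})
M4 = frozenset({"a", "b", "c"})

# A deterministic law: probability 1 on the mosaics containing fact "a".
law_contains_a = {frozenset({M1, M2, M4}): 1.0}

# A chancy law: with probability 0.5 it requires "b", otherwise "c".
law_b_or_c = {frozenset({M1, M3, M4}): 0.5,
              frozenset({M2, M3, M4}): 0.5}

def draw_constraint(law, rng):
    """Draw one constraint according to the law's distribution."""
    constraints = list(law)
    weights = [law[c] for c in constraints]
    return rng.choices(constraints, weights)[0]

def admissible_mosaics(laws, rng):
    """Intersect one drawn constraint per law; the actual mosaic must be
    selected (arbitrarily, not chancily) from this intersection."""
    chosen = [draw_constraint(law, rng) for law in laws]
    return set.intersection(*(set(c) for c in chosen))

rng = random.Random(0)
allowed = admissible_mosaics([law_contains_a, law_b_or_c], rng)
# Every admissible mosaic satisfies the deterministic law...
assert all("a" in m for m in allowed)
# ...and whichever branch of the chancy law is drawn, M4 survives.
assert M4 in allowed
```

Note that the final selection of one mosaic from `allowed` is deliberately left uncoded: as the text emphasises, the laws prescribe no distribution, not even a uniform one, over the mosaics in the intersection.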

3 Global Determinism

Now our task is to ﬁnd a way of decoupling the notion of determinism from time evolution. One obvious way to

achieve this would be to appeal to the intuitive idea that determinism and objective chance are mutually exclusive:

that is, either it is the case that the fundamental laws of nature are deterministic, or it is the case that they involve

some sort of ‘intrinsic randomness.’ This line of thought leads to the idea that we should simply deﬁne determinism

as the absence of objective chance. But the problem with this route is that we don’t really have a strong grasp on what

objective chances are either - the only generally agreed-upon characterisation is that they should obey the Principal

Principle [27].

The constraint framework provides a precise way of formulating this idea. Indeed, it allows us to disambiguate

several different ways in which a world may be deterministic. First, we will say a world (which, recall, can be

associated with some set of laws of nature) obeys global determinism iff the probability distributions induced by the

laws are trivial:

Deﬁnition 3.1. A world satisﬁes global determinism iff every one of its fundamental laws induces a probability

distribution which assigns probability 1 to a single constraint and zero to all disjoint constraints.

When this condition is satisﬁed, we can get rid of the probability distributions altogether and simply assign to each

law of nature a constraint, and the actual mosaic must lie in the intersection of all of these constraints. This is the

analogue in the constraint framework of the idea that determinism is associated with the absence of objective chance:

‘no objective chance’ is translated as ‘there are no objective probability distributions over constraints.’ Of course the

actual mosaic must still be selected from the intersection, but this selection is only ‘arbitrary’ rather than chancy, since

as noted in section 2 the laws do not deﬁne any particular probability distribution over the mosaics in the intersection,

not even the uniform distribution.

However, it might be argued that deﬁning determinism as the absence of objective chance is in some cases too

broad. Consider for example Einstein’s hole argument, which points out that the laws of general relativity allow us to

deﬁne a region of spacetime or ‘hole’ such that the distribution of metric and matter ﬁelds inside the hole are not ﬁxed

by the ﬁelds outside the hole [25]. A common response to this argument is to adopt ‘sophisticated substantivalism’

which allows us to say that the various possible distributions inside the hole are not really physically distinct; but for

now let us consider an ‘Einstein Hole World’ where the conﬁgurations that may exist inside the hole really are all

physically distinct.⁴ Intuitively, it seems clear that this world is not deterministic, since a large variety of physically

inequivalent conﬁgurations inside the hole are all compatible with the laws of nature. But the theory deﬁnes no

probability distribution over the different conﬁgurations inside the hole, so as far as we know there aren’t really any

‘chances’ here, not even the uniform distribution - some conﬁguration must occur, and so some conﬁguration does, but

there is nothing that can be said about how likely different conﬁgurations are to occur. We will henceforth refer to cases

of this kind, involving events which are not determined by anything but which also are not chancy events, as ‘arbitrary.’

Thus it seems that a generalized definition of determinism should allow that worlds containing arbitrariness of this

kind may be indeterministic even in the absence of objective chance.

In light of this possibility, we distinguish further between strong and weak global determinism:

Deﬁnition 3.2. A world satisﬁes strong global determinism iff it satisﬁes global determinism and there is only one

mosaic in the intersection of the set of constraints associated with its laws of nature.

Deﬁnition 3.3. A world satisﬁes weak global determinism iff it satisﬁes global determinism and there is more than

one mosaic in the intersection of the set of constraints associated with the laws of nature.

⁴ This is intended only as an example; we don’t mean to claim that this is the right take on the Einstein hole argument, or even that a world like

this is actually possible.


In the case of strong global determinism the actual mosaic is singled out uniquely by the laws and we have neither

chance nor arbitrariness; whereas weak global determinism allows arbitrariness but not chance. Thus Einstein hole

worlds satisfy only weak global determinism and not strong global determinism. One might perhaps question whether

‘weak global determinism,’ really ought to count as a form of determinism, given that it allows events to occur which

are not determined by anything. However, even worlds satisfying Laplacean determinism exhibit some arbitrariness

- the initial state of the universe is ‘arbitrary’ in just the same way as the conﬁguration inside the hole in an Einstein

hole world, since we don’t assign objective chance distributions over initial states but we don’t usually take them to be

determined by anything either. So when we set out to generalize determinism in a way that does not give any special

status to the initial state of the universe, we have two options - either we allow arbitrariness elsewhere as well, or we

allow no arbitrariness whatsoever. Weak global determinism takes the former route and strong global determinism the

latter, so both have at least some claim to be the spiritual heir of Laplacean determinism.

That said, there does seem to be a sense in which a world satisfying Laplacean determinism is more strongly

deterministic than an Einstein hole world, since the arbitrariness involved in the former is limited to the start of

time, whereas in the latter we can have undetermined holes popping into existence all over the place. So ideally we

would like to offer some further distinction which distinguishes between worlds satisfying Laplacean determinism and

worlds like the Einstein hole world. Of course, we could add a special exception for the initial state into our deﬁnition

of determinism, but this would seem like a relic of the older temporally directed approach - if we’re no longer insisting

that laws must take the forward time-evolution form there’s no good reason to treat the initial state as special. So

instead we suggest the following deﬁnition:

Deﬁnition 3.4. A world associated with a set of laws of nature satisﬁes hole-free global determinism iff it satisﬁes

global determinism, and there is no pair of mosaics in the intersection of the set of constraints associated with its laws

of nature which are identical everywhere except on some small subregion of spacetime

The point of this deﬁnition is that it ensures the arbitrariness involved in selecting the actual mosaic can’t be

localised in any particular region, which means we can’t have any indeterministic ‘holes.’ To see how the deﬁnition

works, consider a world satisfying Laplacean determinism in which the laws of nature are time-reversal invariant. In

the constraint framework, this world has a set of laws of nature which are each associated with a single constraint, and

the intersection of these constraints is a set of mosaics such that each mosaic in the set has a different initial condition.

Moreover, because the laws of nature of this world are deterministic and time-reversal invariant, it must be the case

that no pair of mosaics in the set are the same on any time-slice after the beginning of time, since otherwise reversing

the direction of time would take a single state into two different states, in violation of the assumption that the laws

of this world satisfy Laplacean determinism. Thus when the actual mosaic is drawn from this set of mosaics, this

has the effect of ﬁxing the state of the world at the initial time, but since the mosaics differ at all subsequent times

as well one could equally well regard this process as ﬁxing the state of the world on any other time slice. So there

is no fact of the matter about which particular time-slice is responsible for the rest of history: when we select one

of the allowed mosaics we determine all the time-slices at once. Therefore the arbitrariness involved in a universe

satisfying Laplacean determinism can’t be located in any speciﬁc spacetime region, so the Laplacean universe does

indeed satisfy hole-free global determinism. On the other hand, Einstein hole worlds are ruled out, since in

Einstein hole worlds the arbitrariness is localised to the areas inside the holes. Thus this deﬁnition captures what is

distinctive about worlds satisfying Laplacean determinism without needing to give special signiﬁcance to the initial

condition - the important thing about such worlds is not that the arbitrariness occurs only at the start of time, but

that the arbitrariness is not localised anywhere, and thus in general such worlds have a high degree of predictability

and consistency even though the laws of nature do not ﬁx everything in the world uniquely. Note that according to

this deﬁnition strong global determinism is a form of hole-free global determinism, since in the case of strong global

determinism there is only one mosaic in the intersection of the set of constraints associated with the laws of nature and

therefore there can’t be any pair of mosaics in that intersection.
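Deﬁnition 3.4 can be given the same toy treatment. Here a mosaic is modelled as a mapping from spacetime regions to their contents, and the `max_hole_size` parameter is our own hypothetical stand-in for the informal notion of a 'small subregion':

```python
# Sketch of hole-free global determinism: the allowed mosaics must not
# contain a pair that agree everywhere except on a small set of regions.
from itertools import combinations

def differing_regions(m1, m2):
    """Regions on which two mosaics (assumed to share a region set) disagree."""
    return {r for r in m1 if m1[r] != m2[r]}

def is_hole_free(allowed_mosaics, max_hole_size=1):
    """True iff no pair of allowed mosaics differs only on a 'small' subregion."""
    for m1, m2 in combinations(allowed_mosaics, 2):
        diff = differing_regions(m1, m2)
        if 0 < len(diff) <= max_hole_size:
            return False  # arbitrariness localised in an indeterministic 'hole'
    return True

# Einstein-hole-style pair: identical except inside region 'r2'.
print(is_hole_free([{"r1": "a", "r2": "x"}, {"r1": "a", "r2": "y"}]))  # False
# Laplacean-style pair: the mosaics differ on every region/time-slice.
print(is_hole_free([{"r1": "a", "r2": "x"}, {"r1": "b", "r2": "y"}]))  # True
```

The second example mirrors the time-reversal-invariant Laplacean case in the text: because allowed mosaics differ on every time-slice, the arbitrariness in selecting one of them cannot be pinned to any particular region.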

3.1 Solutions

We now demonstrate that these new deﬁnitions of determinism resolve the problems that we set out in section 1.

First, none of these deﬁnitions of determinism has any intrinsic temporal direction: rather than treating the initial


conditions as an input and asking if they are sufﬁcient to determine the rest of the course of history, we have the

laws induce constraints which pick out entire mosaics, and then we ask whether these laws involve any objective chances

and if they sufﬁce to pick out the course of history uniquely. Indeed, the deﬁnition of hole-free global determinism

prohibits the localisation of ‘arbitrariness’ in speciﬁc spacetime regions, so it explicitly rules out approaches which

treat some particular region of spacetime as an input or initial condition. Thus these deﬁnitions for determinism

work much better than the Laplacean approach when applied to the problem cases we considered in section 1.1:

the non-Markovian world, the ﬂash world, the retrocausal world and the all-at-once world all satisfy weak global

determinism, and they may also satisfy hole-free or even strong global determinism, depending on the speciﬁc details

of the models in question. Moreover, as shown in ref [4], Lagrangian laws, retrocausal laws, constraint-based laws and

many other types of laws outside the time-evolution paradigm can be expressed in the constraint form, and therefore

this constraint-based approach to determinism can be meaningfully applied to all of them.

It is also straightforward to see that the constraint-based approach resolves the underdetermination problem set out

in section 1.2. For in order to apply the constraint approach to a universe governed by time-evolution laws satisfying

Laplacean determinism and time-reversal invariance, we need only select a mosaic from the set of all mosaics in which

those laws are satisﬁed: there is no evolution and thus no need to pick a privileged point in history from which the

evolution is understood to begin. Of course there will be some arbitrariness associated with this choice since time-

evolution laws do not ﬁx the course of history uniquely, but as noted in section 3 this arbitrariness is not localised

anywhere so we have no underdetermination.

Indeed, this approach has the interesting consequence of dissolving the distinction between initial conditions and

parameter values. Wallace has observed that the physical content of a theory has three aspects: the qualitative form

of its dynamical equations (in which the coefﬁcients are unspeciﬁed parameters); the actual, numerical values of

the parameters (expressed as dimensionless ratios); and the initial conditions [39]. As Wallace notes, it is common to

consider that the parameter values are ‘lawlike’ while the initial conditions are merely ‘contingent’ - that is, parameters

and initial conditions are usually thought to have importantly different modal status. But in the constraint framework

the case for this different status is signiﬁcantly weaker. If the universe satisﬁes strong global determinism then both

the initial conditions and the numerical values of the parameters must be determined by the laws of nature - that is, we

must have state rigidity and parameter rigidity, in Wallace’s terms. But if the universe is only weakly deterministic,

it could well be the case that across the set of mosaics in the intersection of the constraints there is some variation in

the initial conditions of the universe and also in the numerical value of the parameters, so drawing a mosaic from the

set entails arbitrarily selecting both initial conditions and parameter values. In this context both initial conditions and

parameters are simply different facets of the degrees of freedom left by the laws of nature, and thus they have precisely

the same modal status. Of course, it could also be the case that the parameters are in fact ﬁxed by the laws of nature

while the initial conditions are not, but likewise it could be the case that the initial conditions are in fact ﬁxed by the

laws of nature while the parameters are not: until we have some speciﬁc evidence for either parameter rigidity or state

rigidity, parameters and initial conditions should be afforded the same modal status.

Finally, the constraint-based approach realises our desideratum of making a clean distinction between determinism

and predictability, as the deﬁnition we have provided here characterises determinism entirely in terms of objective

modal structure, with no appeal to any facts about the practical interests or epistemic limitations of human scientists.

A world which is deterministic in the global sense may indeed exhibit a high degree of predictability, but it also may

not - the way in which the laws determine the course of events may be highly non-local in space and time, meaning

that it may be impossible for limited local agents to gather enough data to see the whole picture. Thus this framework

provides a precise way of talking about determinism as a property of a world rather than merely a theory, opening the

door for that notion to be applied to a variety of ongoing metaphysical discussions.

3.2 Comments

1. The distinction we have made here is similar to one employed by Penrose, who distinguished between deter-

minism, i.e. what we have referred to as Laplacean determinism (‘if the state of the system is known at any one

time, then it is completely ﬁxed at all later (or indeed earlier) times by the equations of the theory’) and strong

determinism, i.e. what we have referred to as strong global determinism (‘it is not just a matter of the future

being determined by the past; the entire history of the universe is ﬁxed, according to some precise mathematical


scheme, for all time.’) [40]. However, Penrose’s scheme does not seem to accommodate the intermediate possi-

bility of worlds which satisfy weak global determinism but not Laplacean determinism, such as worlds which

have some degree of arbitrariness at some point other than the initial state of the universe but nonetheless have

no objectively chancy events. Moreover, Penrose is rather vague about what it would take for the entire history

of the universe to be ﬁxed, but the constraint framework allows us to be precise: the history of the universe is

ﬁxed if and only if the laws of nature induce trivial probability distributions over constraints, and the intersection

of the constraints associated with these laws contains exactly one mosaic.

2. In one sense, weak global determinism, strong global determinism and hole-free global determinism are all

weaker than Laplacean determinism, since worlds like the retrocausal world and the ﬂash world could satisfy

either weak, strong or hole-free global determinism whilst failing to satisfy Laplacean determinism. However,

in another sense strong global determinism is much stronger than Laplacean determinism, because Laplacean

determinism allows for the existence of arbitrariness (in the selection of initial conditions and the selection of

parameter values) while strong global determinism insists that everything is ﬁxed once and for all by the laws

of nature.

3. We reinforce that all of these deﬁnitions of determinism depend crucially on the claim that reality has a unique,

observer-independent modal structure. Given any Humean mosaic we could come up with an inﬁnite number

of sets of constraints whose intersection contains only this mosaic - trivially, for a mosaic A we can pick two

random mosaics B and C and choose the constraints {A, B} and {A, C}. So simply ﬁnding such a set which is

consistent with our observations doesn’t tell us that the world must be deterministic: we must make the further

claim that these constraints are in fact the constraints which feature in the objective modal structure of reality,

and therefore we must believe that there is some fact of the matter about what this objective modal structure

really is. So for example, assessing a claim that the world satisﬁes strong global determinism has several steps:

ﬁrst we must verify that the proposed constraints do indeed single out a unique Humean mosaic, second, we

must verify that the proposed constraints are consistent with our observations, and third, we must decide whether

the proposed constraints seem like plausible candidates for laws. This last part of the process will naturally be

shaped by whatever expectations we have about laws - for example, if we think the laws of nature are simple

then we will tend to expect that the constraints induced by the laws can be expressed in a closed form in simple

terms.

4. Our deﬁnitions also depend crucially on the stipulation that the laws of nature assign no probability distribution

over the mosaics in the constraints that they induce, nor over the mosaics in the intersection of all of those

constraints. However, this certainly does not entail that we ourselves may not assign any distribution over

mosaics in the intersection. For we may have Bayesian priors which lead us to assign some distribution over

mosaics before taking any of the laws of nature into account - for example we might simply assign the uniform

distribution, or we might choose a distribution which favours simpler mosaics. Thus when we learn about

a constraint induced by a law, we will update our beliefs by eliminating all mosaics not consistent with the

corresponding constraint and renormalizing our credences, so after conditioning on all of the laws we will be

left with a normalized probability distribution which is non-zero only on mosaics in the intersection of the

constraints. In the case of strong global determinism, no matter what priors we start out with, after conditioning

on the laws we will be left with a distribution that assigns probability 1 to a single mosaic; but in the case of

weak global determinism we will in general end up with a non-trivial distribution, which will be the uniform

distribution if the original prior distribution was the uniform distribution, but which may be non-uniform if

our original priors were non-uniform. But this distribution is a subjective probability distribution derived from

our original subjective priors, not an objective chance distribution arising from the laws themselves: so in

this case we still have determinism in the sense of ‘the absence of objective chance,’ even though observers

may still assign non-trivial probabilities over mosaics compatible with the laws. This is precisely analogous

to the way in which we might assign subjective probability distributions over initial conditions in the context

of Laplacean determinism, for example in analyses of statistical mechanical systems based on the principle of

indifference [41].
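The updating procedure described in this comment can be sketched in the same toy model, again with purely illustrative mosaic labels: a subjective prior over mosaics is conditioned on each law by discarding the mosaics its constraint excludes and renormalising.

```python
# Toy sketch of conditioning subjective credences on laws (comment 4).

def condition_on_constraint(credences, constraint):
    """Zero out mosaics outside the constraint and renormalise the rest."""
    kept = {m: p for m, p in credences.items() if m in constraint}
    total = sum(kept.values())
    return {m: p / total for m, p in kept.items()}

mosaics = ["M1", "M2", "M3", "M4"]
credences = {m: 1 / len(mosaics) for m in mosaics}  # uniform subjective prior
for law in [{"M1", "M2", "M3"}, {"M1", "M2"}]:      # constraints induced by laws
    credences = condition_on_constraint(credences, law)

# The posterior is non-zero only on the intersection {M1, M2}, and it is
# uniform there because the prior was uniform - a subjective distribution,
# not an objective chance distribution.
print(sorted(credences))  # ['M1', 'M2']
```

Starting from a non-uniform prior instead would yield a non-uniform posterior over the intersection, exactly as the text describes.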

5. Our discussion so far has largely left unanalysed the term ‘Humean mosaic.’ However, it’s important to re-


inforce that given our deﬁnitions, questions about whether speciﬁc theories are compatible with strong, weak

and hole-free global determinism will be sensitive to our judgements about what sorts of categorical properties

feature in the Humean mosaic. For example, consider the case of electromagnetism [42]: if the electromag-

netic potentials are considered to be mere mathematical devices which are not part of the Humean mosaic,

then classical electromagnetism satisﬁes both Laplacean determinism and weak global determinism, whereas

if the electromagnetic potentials are a part of the Humean mosaic, then it’s not possible for a world governed

by laws including the laws of classical electromagnetism to satisfy Laplacean determinism or hole-free weak

global determinism - though it could still satisfy weak global determinism, since electromagnetism prescribes

no probability distribution over gauge-equivalent potential conﬁgurations and therefore according to the deﬁni-

tions adopted here they are arbitrary rather than chancy. Quantum mechanics is another example: if we suppose

that quantum mechanics is a complete theory of reality and we take it that wavefunction collapse is a part of the

Humean mosaic, then the theory does not satisfy weak or strong global determinism or Laplacean determinism,

whereas if we say that only the unitarily evolving quantum state is part of the Humean mosaic, then we get the

Everett interpretation [43], which satisﬁes Laplacean determinism and weak global determinism.5 One might

worry that this is a weakness in the deﬁnition. However, it is in fact a deliberate choice, because as noted earlier

we are focusing in this paper on deﬁning determinism for worlds, not theories, so there is no need to specify

what is in the Humean mosaic: all local matters of particular fact in a given possible world are in the Humean

mosaic for that world, whatever those matters of fact might turn out to be. This means that we can’t expect to

straightforwardly read off from any given theory whether that theory satisﬁes weak or strong global determinism: we

must ﬁrst specify which elements of the theoretical structure are to be considered part of the Humean mosaic,

which is to say, we must specify an ontology for the theory. But this is not unique to global determinism - as

noted above, Laplacean determinism can likewise give different conclusions for electromagnetism and quantum

mechanics depending on what we take to feature in the ontology.

6. Since we have deﬁned probabilistic laws of nature in terms of probability distributions over deterministic con-

straints, it will always be possible to make a probabilistic law into a deterministic one by simply shifting the

probabilistic part of the process into the deﬁnition of the law. For example, if we have a probabilistic law which

stipulates that with probability p(O) the actual mosaic will belong to the constraint O, we can turn this into a

deterministic law by drawing a constraint O from the probability distribution p(O) and then making a deterministic

constraint which requires that the actual mosaic will belong to the selected constraint O. And similarly, we can

turn a deterministic law of nature into a probabilistic law of nature by formulating some probability distribution

over constraints which assigns nonzero probability to the constraint associated with the deterministic law. So

there are always options for moving back and forward between the probabilistic and deterministic pictures, and

therefore ultimately the choice between these pictures is an interpretational stance rather than something that can

be proven via empirical observations. That said, for certain sorts of constraints there seems a strong presumption

in favour of one picture rather than the other. For example, if we observe that all As are Bs, we could in principle

regard this as the consequence of a probabilistic law of nature which assigns high probability to the constraint

consisting of the set of Humean mosaics in which all As are Bs, but this seems unnecessarily complex, since we

could do just as well with a deterministic law of nature which assigns probability 1 to the constraint consisting of

all Humean mosaics in which all As are Bs. Conversely, constraints for which we must specify a large number

of parameters which approximately match some probability distribution seem most naturally understood within

the probabilistic paradigm, otherwise there will be a sense that we are just shifting ‘objective chance’ into the

choice of the laws of nature rather than genuinely eliminating it (we will see an example of this in section A).
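The trade described in this comment can be sketched directly: represent a probabilistic law as a distribution p(O) over constraints, and 'determinize' it by sampling one constraint up front and promoting the sampled constraint to a law. The encoding below is our own illustration, not a claim about the paper's formalism:

```python
# Toy sketch of moving from the probabilistic to the deterministic picture.
import random

def determinize(probabilistic_law, rng=random):
    """Draw one constraint O from p(O); the resulting deterministic law
    requires the actual mosaic to belong to the drawn constraint."""
    constraints, weights = zip(*probabilistic_law.items())
    return rng.choices(constraints, weights=weights, k=1)[0]

# p(O): probability 0.8 that the mosaic lies in {M1, M2}, 0.2 that it is M3.
p = {frozenset({"M1", "M2"}): 0.8, frozenset({"M3"}): 0.2}
deterministic_constraint = determinize(p)
print(deterministic_constraint in p)  # True: the new law is one of the old constraints
```

The 'objective chance' has not been eliminated so much as relocated into the choice of which law obtains, which is precisely the worry the text raises about applying this manoeuvre indiscriminately.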

4 Applications

We have criticized Laplacean determinism for being inappropriately intertwined with our practical interests; but one

might argue that being aligned with our practical interests is precisely what made Laplacean determinism a useful

concept, and thus one might worry that generalizing determinism as we have done here serves only to make the

concept less useful. Thus in this section we will explain how this generalization of determinism and objective chance

5 The Everett interpretation in its usual form does not satisfy strong global determinism, since the initial state of the universe is arbitrary.


may have important consequences for scientiﬁc and philosophical thinking. The concept of Laplacean determinism

has played a range of different intellectual roles in science and philosophy, so we will proceed by examining how

global determinism can contribute to some of these domains of application.

4.1 Scientiﬁc Explanation

Determinism has played an important role in shaping our expectations for scientiﬁc explanation - for a long time it

was standard in science to explain events by showing how they could be deterministically produced by past conditions.

This is formalised in the ‘deductive-nomological’ model for scientiﬁc explanation [48, 51], which posits that a valid

explanation should be composed of a set of sentences from which the explanandum logically follows, where at least

one of these sentences must express a law of nature and that law must be an essential element in the deduction of the

explanandum. Most common examples of DN-explanations involve time-evolution laws or at least temporally directed

laws, and thus DN explanations usually involve postulating some past conditions and showing that the deterministic

forwards time-evolution of these conditions gives rise to the explanandum. This model is sometimes relaxed to allow

for explanations involving statistical inference rather than exact logical deduction [52], but deductive-statistical and

inductive-statistical explanations are simply a generalisation of forwards-evolving deterministic explanations and thus

they typically preserve the expectation that the future will be explained by lawlike evolution from the past.

Global determinism opens up new possibilities for scientiﬁc explanation - for if the deﬁnition of Laplacean de-

terminism is largely a function of our practical interests then there is no reason to expect that all valid scientiﬁc

explanations will be predicated on forwards time evolution, since for scientiﬁc realists one of the main purposes of

scientiﬁc explanation is to achieve understanding, to which practical concerns are only tangentially relevant. So when

we come upon phenomena for which there seems to be no satisfactory explanation within a forward time-evolution

picture, we should be open to the possibility of explanations based on global laws using the constraint framework. A

good example of this is the past hypothesis: by deﬁnition nothing evolves into the initial state of the universe, so the

initial state can’t be given a lawlike explanation if we assume that laws always take the forward time-evolution form.

But the past hypothesis can straightforwardly be written as a constraint - it is simply the set of all Humean mosaics

in which the arrangement of local matters of particular fact at one end of the mosaic has low entropy (or some other

more sophisticated characterisation of the desired initial state - see ref [53] for discussion). The past hypothesis can

therefore be explained in this framework by hypothesizing that there is some law of nature which induces this particu-

lar constraint, thus ensuring that the initial state will necessarily be a low entropy state whilst the selection of one low

entropy initial state in particular remains arbitrary.
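As a toy rendering of this constraint, one can model a mosaic by its coarse-grained entropy history over time and let the past-hypothesis constraint admit exactly the mosaics whose earliest slice has low entropy. The encoding and the threshold `LOW_ENTROPY` are purely illustrative assumptions:

```python
# Toy rendering of the past hypothesis as a constraint (section 4.1).

LOW_ENTROPY = 0.1  # hypothetical threshold for 'low entropy'

def in_past_hypothesis_constraint(entropy_history):
    """Membership test: does the mosaic's earliest slice have low entropy?"""
    return entropy_history[0] <= LOW_ENTROPY

print(in_past_hypothesis_constraint([0.05, 0.4, 0.9]))  # True
print(in_past_hypothesis_constraint([0.5, 0.6, 0.9]))   # False
```

The constraint guarantees a low-entropy initial state in every allowed mosaic while leaving the selection of one particular low-entropy state arbitrary, as the text requires.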

Similar arguments can be made about parameter values: for example, cosmologists are currently much exercised

over the problem of why the cosmological constant is so much smaller than their models suggest it ought to be [54].

It’s hard to see how this feature could be explained within the kinematical/dynamical picture, because the cosmological

constant is usually considered to be a fundamental constant of nature which has always had the value that it has, and

so there is no option to say that it has been produced by evolution from some earlier conditions. Thus most current

approaches to explaining the value involve fairly exotic explanatory strategies, such as anthropic arguments in the

context of a putative multiverse [55]. However, within the constraint framework there is no reason why we can’t

simply explain the cosmological constant in a lawlike manner: all we need to do is suggest that there is some law

of nature which induces the constraint consisting of all the Humean mosaics in which the cosmological constant is

very small. In the constraint framework, this nomological explanation has exactly the same status as more familiar

nomological explanations, such as explaining that apples fall due to the law of gravity: the key point is that nomological

explanations do not necessarily have to have a temporal character.

Of course, some caution is required with this strategy. Seeking explanations for phenomena that we ﬁnd surprising

or conspiratorial is frequently a good way of making scientiﬁc progress, so we don’t want to make explanation too

cheap: answering every scientiﬁc question with ‘Because there’s a law which makes it so,’ isn’t likely to lead to

any new insights. But of course, the same is true with standard DN and inductive-statistical explanation - not every

proposed DN or inductive-statistical explanation is interesting and informative, so we have a variety of criteria which

can be applied to judge which explanations have merit. These criteria, including simplicity, uniﬁcation, the absence of

ﬁne-tuning and so on, can equally well be applied to constraint-based explanations. Thus rather than simply imposing

a constraint consisting of Humean mosaics in which the initial state is simple or a constraint consisting of Humean


mosaics in which the cosmological constant is small, we might want to come up with some more general feature from

which these constraints could be derived - for example, one could imagine deriving the past hypothesis from a more

general constraint that singles out Humean mosaics which are sufﬁciently interesting or varied. This constraint rules

out mosaics where everything is in thermal equilibrium throughout the whole of history, which does indeed entail that

the initial state must have low entropy, but it may also have other interesting consequences, and if the consequences

turn out to be varied and powerful enough we would have good reason to accept the proffered explanation. The

overarching point is that global determinism offers us a new sort of explanatory framework: there can be good and

bad explanations within that framework, and there’s work to be done to establish the appropriate criteria for judging

these sorts of explanations, but nonetheless this new approach is a promising route to answering questions which seem

intractable within standard explanatory paradigms.

4.2 Assessment of Theories

Determinism has long functioned as a gold standard for an ideal scientiﬁc theory - for example, the idea that quantum

mechanics might be indeterministic was accepted only begrudgingly by many physicists at the time of the theory’s

formulation, as witnessed by Einstein’s complaint that ‘He (God) does not play dice.’ [56, 57] Over the last century

many attempts have been made to ‘complete’ quantum mechanics so that it obeys Laplacean determinism after all [58],

and indeed two of the most popular interpretations of quantum mechanics (the de Broglie-Bohm interpretation [59]

and the Everett interpretation [43]) do satisfy Laplacean determinism.

However, if we accept that the standard deﬁnition of Laplacean determinism is a function of practical interests

rather than a metaphysically meaningful category, there is less justiﬁcation for regarding Laplacean determinism as

the ultimate goal of a scientiﬁc theory. Obviously, theories which satisfy Laplacean determinism are still desirable due

to their practical utility for predicting the future, but from the point of view of the scientiﬁc realist with an interest in

understanding how things really are, theories which satisfy weak, strong or hole-free global determinism may be just as well

motivated. After all, all three of these notions of determinism do justice to Einstein’s intuition that ‘He does not play

dice,’ in the sense that none of them allows the existence of genuine objective chances from the external, ‘god’s-eye’

point of view.

Thus this approach to determinism suggests new ways of thinking about what a good scientiﬁc theory should look

like. For example, instead of coming up with theories which postulate a set of states and a set of differential equations,

perhaps we should be looking more seriously at theories postulating laws which govern ‘all-at-once’ from an external

point of view. As noted in section 1.1, modern physics does already contain examples of such laws, but in general

these examples have been arrived at by starting from a temporal evolution picture and subsequently generalizing it

or reformulating it; things might look quite different if we started from the assumption that we are looking for a

global, ‘all-at-once’ theory and then proceeded without insisting on the existence of a time-evolution formulation of

the theory. Adjusting our gold standard to more closely reﬂect the form of the true laws of nature is likely to be a good

way to stimulate progress toward understanding those laws, so we have strong practical motivations for rethinking the

gold standard.

4.3 Free Will

Laplace’s ideas about determinism gave rise to a spirited debate over the possibility of free will in a deterministic

universe which has continued to this day [60–62]. The possibility of global determinism certainly raises new questions

in this debate. For example, even if you believe that we don’t have free will in the context of Laplacean determinism,

you might still be willing to say that we could have free will in the context of some sorts of global determinism.

After all, if the course of history is determined by laws which apply to the whole of history all at once, our actions

are determined by other events, but also those events are partly determined by our actions, since each part of the

history is optimized relative to all the other parts. So there is a degree of reciprocity which is lacking in the Laplacean

context, where the present state determines our actions and our actions do not act back on the present state. This is an

interesting direction of enquiry, but somewhat outside the scope of the present paper, so we will leave it as a topic for

future research.


5 Objective Chance

Objective chance is famously a murky notion - though a number of analyses of the topic exist, including frequentist

and propensity approaches, none has so far garnered universal approval and all seem to have serious problems to

overcome [44]. Moreover, this lack of clarity is a signiﬁcant weakness for a variety of standard philosophical positions

which require some notion of probability or chance - for example, proponents of the Everett interpretation often employ

‘no worse off’ arguments where they claim that the difﬁculty of accounting for probabilities in the Everett picture is no

reason to reject their approach because accounting for ordinary objective chance is just as difﬁcult [43]. It is tempting

to respond to all this by simply insisting that there aren’t any objective chances at the level of fundamental laws,

but that strategy is hampered by the fact that the current scientiﬁc evidence seems to be pointing away from Laplacean

determinism. As noted earlier, it's common to suppose that we are faced with a strict dichotomy - either the world

satisﬁes Laplacean determinism or there exist objective chances - and thus, since it doesn’t appear to be the case that

quantum mechanics satisﬁes Laplacean determinism, it may seem inevitable that we accept the existence of objective

chances.

But the possibility of global determinism demonstrates that this division of the possibilities is too rigid, for it

turns out that we can postulate worlds which do not satisfy Laplacean determinism but which nonetheless are globally

deterministic, and therefore the probabilistic features of quantum mechanics do not force us to accept the existence

of objective chance from the global, external point of view. There are a variety of ways in which such apparently

probabilistic events might emerge within a globally deterministic world - they will appear wherever the world contains

some events which depend in part on facts about reality that are not accessible to observers in the local region of those

events - but for concreteness, in order to demonstrate how chances can emerge from a globally deterministic world we

will henceforth focus on one particularly simple possibility, where probabilistic events arise from frequency constraints

which look something like ‘the actual mosaic must belong to the set of mosaics in which eighty percent of events of type E have the outcome A.’ The

existence of these sorts of global constraints is entirely compatible with strong, weak or hole-free global determinism,

so one could equally well say either that this approach reduces objective chance to frequency constraints, or that it

simply eliminates objective chance altogether from the global point of view.

Indeed, obtaining objective chances from frequency constraints can also be regarded as a way of reducing them

to subjective probabilities: the relevant events are sampled from a set of events with prespeciﬁed relative frequencies,

so the objective chances attached to these events simply describe a process of sampling without replacement akin

to the common textbook example of selecting a ball from a jar containing a speciﬁed mixture of black and white

balls. And of course the probabilities involved in sampling without replacement can be described in terms of Bayesian

credence functions with partially unknown initial conditions, since we know the initial proportions of black and white

balls but not the order in which they will be drawn. Thus deriving chances from frequency constraints provides a

satisfying explanation of the close conceptual and mathematical relationship between objective chance and subjective

probability: objective chances behave like subjective probabilities because they really are just subjective probabilities

over a large and temporally extended domain.
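The jar analogy can be made concrete with a short sketch (a hypothetical illustration, not anything drawn from the paper itself): once a frequency constraint fixes the overall composition of the jar, the ‘chance’ attached to the next draw is just the subjective probability appropriate to sampling without replacement.

```python
from fractions import Fraction

def credence_next_black(total_black, total_white, seen_black, seen_white):
    """Credence that the next draw is black, sampling without replacement
    from a jar whose overall composition is fixed by a constraint."""
    remaining_black = total_black - seen_black
    remaining_total = (total_black + total_white) - (seen_black + seen_white)
    return Fraction(remaining_black, remaining_total)

# A jar satisfying an 'eighty percent black' constraint: 80 black, 20 white.
print(credence_next_black(80, 20, 0, 0))   # 4/5: matches the constrained frequency
print(credence_next_black(80, 20, 10, 0))  # 7/9: dips slightly after ten black draws
```

The second output illustrates the point made below about direct inference: after some draws have been observed, the correct credence differs slightly from the constrained frequency, but for a large jar the difference is negligible.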

In appendix A we discuss in greater detail the form that these frequency constraints might take; but for now, in

order to see if frequency constraints can give rise to chances which behave in the way we would expect, we will

consider the following desiderata for objective chance:

1. The Principal Principle: Objective chances should satisfy the Principal Principle.

2. Probabilistic conﬁrmation: We should be able to conﬁrm facts about objective chances by means of observing

relative frequencies.

3. Exchangeability: Given a sequence of events which are not causally related to one another, the chance for a

given sequence should depend only on the relative frequencies in that sequence and not on the order in which

the events occur.

4. Counterfactual independence: Given a sequence of events which are not causally related to one another, the

chance for the outcome of an individual event should not depend on the frequency of occurrence of other

outcomes in the sequence.


First, do chances derived from frequency constraints satisfy the Principal Principle? Well, in accordance with the

method of direct inference [45], if you know that you are in a Humean mosaic which belongs to the set of mosaics in

which exactly eighty percent of As are Bs, and you have no other information, you should indeed set your credences

for the next observed A to be B to eighty percent. Now in most real cases we will already have observed some As

and therefore it is not quite the case that we are sampling at random from the entire class, so the correct probability to

assign might actually be inﬁnitesimally different from eighty percent. But provided that the class is large enough, this

difference will be so small that it will make no practical difference to the way in which we use this probability in our

reasoning processes, and thus to all intents and purposes the Principal Principle does indeed pick out the frequencies

that appear in the constraints.

Next let us check that it is possible to conﬁrm facts about chances derived from frequency constraints by means

of observing relative frequencies: if we observe a set of events of a certain type and ﬁnd that eighty percent of them

have the outcome A, do we have grounds to conﬁrm the hypothesis that we are in a Humean mosaic in which eighty

percent of instances of this event type have outcome A? Of course, in general we will be able to observe only a very

tiny proportion of the total number of such events occurring across the whole Humean mosaic, so we can’t conﬁrm

this hypothesis by direct observation; rather we must assume that the As that we have observed are a representative

sample of the full set of As. Thus the legitimacy of this inference depends crucially on the assumption that the As form

a homogeneous class which can be expected to exhibit stable relative frequencies across time - it is not the case that

just any relative frequencies can be extrapolated in this way, as that would allow us to conﬁrm virtually any hypothesis

we like about future relative frequencies by simply labelling events in the right sort of way. Thus from observations

of a relatively small subclass of instances of a given event type, we do not have grounds to conﬁrm the hypothesis that

eighty percent of instances of this event type have outcome A, but we do have grounds to conﬁrm the hypothesis that

the laws of nature induce a constraint to the effect that eighty percent of As must be Bs. That is, the inference must

include the hypothesis that we have identiﬁed a special class of events which appears directly in the laws of nature and

that the relative frequencies we have seen are the consequence of a constraint which is induced by the laws of nature.

Of course we could always be wrong about this hypothesis, but nonetheless, observing behaviour which is consistent

with the hypothesis provides some degree of conﬁrmation for it.

One way of putting this would be to say that we can conﬁrm objective chances derived from frequency constraints

only when they pertain to ‘natural kinds,’ where natural kinds are to be deﬁned as exactly those categories which

feature in the constraints induced by the laws of nature. This would be in accordance with the common view that it

is (mostly) natural kinds which permit inductive inference [46]. However, across philosophy ‘natural kinds’ play a

variety of roles, and therefore this terminology has a certain amount of baggage - although in principle what counts as

a natural kind is subject to the dictates of science, some philosophers have claimed to be able to identify natural kinds

purely by perception and/or reﬂection, and people often have quite speciﬁc intuitions about what is and is not a natural

kind, so for example it is common to suppose that members of a natural kind should have properties in common and/or

be qualitatively similar through the lens of our perceptual equipment [?]. But one of the motivations for adopting the

constraint framework is to allow us to state more general possibilities for laws which do not necessarily correspond

to our existing intuitions, and therefore we reinforce that probabilistic conﬁrmation of this sort is not valid only for

categories of the sort which are traditionally regarded as natural kinds - in principle it is legitimate for any category

which appears in the laws of nature, regardless of whether that category looks like a natural kind to us. Of course,

in practice we have to make decisions about what sort of event classes can be expected to exhibit stable relative

frequencies across time, and in the absence of other evidence these decisions are often based on our intuitions about

natural kinds. But one can imagine cases where we might make these decisions for other reasons - for example, if a

theory which has been successful in one domain were to be uniﬁed with another domain in a way that leads naturally

to laws governing probabilities over a class of events that we would not otherwise have considered a candidate for a

‘natural kind.’

Let us move to the ﬁnal two desiderata, which are somewhat more mathematical in character. It is clear that

chances obtained from frequency constraints will satisfy exchangeability, since these constraints govern only relative

frequencies and not order of appearance. But what about counterfactual independence? Well, we have seen that in the

frequency constraint picture, observing random events must be understood as a form of sampling without replacement,

and so just as we would update our probabilities for the next event after each random draw when performing sampling

without replacement, it seems we ought to do the same in the frequency constraint case: if I know that exactly 50


percent of events of a given type must have the outcome A, then if my ﬁrst one hundred observations all have the

outcome B I will conclude that the remaining set of events must contain one hundred more As than Bs, so I will

perform the standard sort of inference we see in sampling without replacement, which involves updating my credences

to reﬂect the fact that the next outcome is more likely to be A than B. Note that this remains true for any ﬁnite total

number of events, so the conclusion does not change if I don’t know the total number of events. It also doesn’t matter if

we have a constraint which prescribes the probability only approximately; there will always be some threshold number

n of Bs such that if I have observed n more Bs than As, then the set of unobserved events must contain more As than

Bs in order to get ﬁnal frequencies which are correct to within the limits allowed by the relevant constraint.
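The inference just described can be sketched numerically (a toy illustration with made-up totals, not from the paper): under an exact fifty-percent constraint over a finite total of N events, a run of B outcomes pushes the credence for the next outcome being A above one half.

```python
from fractions import Fraction

def credence_next_A(total_events, seen_A, seen_B):
    """Credence that the next event has outcome A, given an exact
    fifty-percent frequency constraint over total_events in all."""
    remaining_A = total_events // 2 - seen_A
    remaining = total_events - seen_A - seen_B
    return Fraction(remaining_A, remaining)

# After one hundred B outcomes and no As, A becomes more likely than B:
print(credence_next_A(1000, 0, 100))          # 5/9 > 1/2
# The effect persists for any finite total, though it shrinks as N grows:
print(float(credence_next_A(10**6, 0, 100)))  # ~0.50005
```

This is exactly the standard sampling-without-replacement update: the excess of observed Bs must be compensated somewhere in the unobserved remainder.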

This seems problematic, as the absence of counterfactual independence jars with many of our intuitions about

probability - for example, it has the consequence that under certain circumstances the gambler’s fallacy is not really a

fallacy at all. Indeed Hajek argues against standard frequentism on similar grounds, pointing out that deﬁning proba-

bilities as relative frequencies prevents us from saying that the probability for an individual event is counterfactually

independent of what happens in other similar events distant in space and time [47]. So we must now ask ourselves

whether we can accept an approach which fails to satisfy counterfactual independence as an acceptable analysis of

objective chance. One possible response for the proponent of frequency constraints is to point out that we will not get

failures of counterfactual independence if the total number of instances is inﬁnite, since sampling without replacement

from an inﬁnite set is mathematically equivalent to sampling with replacement. And after all, we can never know for

sure that the number of instances of an event type is ﬁnite, since that would require us to have illegitimate knowledge

of the future, so it might be argued that we should never make updates to our probabilities which are premised on the

number of instances of an event type being ﬁnite, meaning that the usual sort of inferences we see in sampling-without-

replacement should not be allowed with regard to sequences of chancy events. However, although it is true that we can

never know for sure that the number of instances is ﬁnite, nonetheless one can imagine cases where our best theories

give us good reason to believe that this is so (for example, if we had a theory which implied that spacetime is discrete

and the universe has a ﬁnite extent and time has both a start and an end), so at least in principle there are circumstances

where counterfactual independence really would be violated in this picture.

Another possible response is to point out that we would only ever get violations of counterfactual independence

with regard to sequences of chancy events if we had knowledge of frequency constraints that would trump the observed

relative frequencies, and one might think that this could never come about. For example, in real life if we performed

one hundred measurements and got the result B every time, we would probably form the working hypothesis that

there is a constraint requiring that all events of this type must have outcome B, rather than supposing that exactly ﬁfty

percent of all events of this type must have outcome B and that therefore subsequent events must be more likely to

have outcome A than outcome B in order to make up the total. However, probabilistic laws do not usually exist in

isolation, and therefore one can imagine a case where we decide based on considerations of symmetry or coherence

with the rest of our theory that the objective chance of B is 0.5 even though so far we have seen signiﬁcantly more

Bs than As. It would seem that under those circumstances, if we accepted the frequency constraint analysis and we

had reason to believe that the total number of instances of the relevant event type across all of spacetime would be

ﬁnite, we would have reason to assign a credence greater than 0.5 to the proposition that the next such event will have

outcome A.

So in fact, probably the appropriate response here is to simply say that violations of counterfactual independence

should sometimes be allowed with respect to sequences of chancy events, because there is little reason to expect our

intuitions to be a good guide on this point. For a start, most of our intuitions are based on probabilistic events that

involve subjective probabilities rather than objective chances of the kind that appear in the fundamental laws of nature

- due to decoherence, macroscopic events do not typically depend sensitively on quantum mechanical measurement

outcomes, and therefore paradigmatic probabilistic events like ‘rolling a die’ or ‘ﬂipping a coin’ can be understood

entirely in terms of our ignorance of the speciﬁc initial conditions. Provided that the relevant initial conditions are

independent (or at least independent enough for practical purposes) these sorts of events do indeed obey counterfactual

independence, which is likely the source of our intuitions on this point. Furthermore, even when we do observe events

that may involve objective chances, such as the results of quantum measurements, we are presumably observing only

a very small subset of the total number of events of this type, so we should not expect these observations to tell us very

much about the class as a whole. If we know that exactly ﬁfty percent of instances of a certain event type have the outcome A, and also that 10^11 of these events occur across all of spacetime, then in principle the fact that we have


observed one hundred more B outcomes than A outcomes does make it more likely that the next outcome will be an

A, but in practice the change in probability is so small that it will not lead to any observable effects.

Therefore, the fact that counterfactual independence conﬂicts with our intuitions is no reason to reject frequency constraints as an approach to chance, since our intuitions have been developed in entirely the wrong context of application. Indeed, counterfactual independence can be regarded as a way of expressing the expectation that objective

chances should be intrinsic properties of individual entities which are spatially and temporally local; but in many ways

chances actually look like properties not of individual entities but rather of large collections of entities. For example,

we can’t ascertain chances during a single observation, but rather we must observe a large number of events of the

same type and then make an inference about what the chances are. A number of popular approaches to the analysis of

objective chance - particularly the propensity interpretation, which holds that chances are intrinsic properties of indi-

vidual entities akin to colours and masses [48] - struggle to make sense of this feature [49]. But frequency constraints

do justice to the apparently collective nature of objective chance by turning chances into a form of global coordination

across space and time, where events are constrained to coordinate non-locally amongst themselves so as to ensure

that outcomes occur with (roughly) the right frequency. Hajek worries that ‘it’s almost as if the frequentist believes

in something like backward causation from future results to current chances,’ - and indeed, that is exactly what the

proponent of frequency constraints does believe!

Frequency constraints are similar to a view that has been advanced by Roberts under the name of nomic frequentism

[50]; much of Roberts’ discussion is also relevant here, so we won’t dwell on the points of agreement, but in appendix

B we discuss a few points of difference. Frequency constraints are also clearly related to ﬁnite frequentism, which is the view that objective probabilities are by deﬁnition equal to the actual relative frequencies of the relevant

sort of event across all of spacetime. But the frequency constraint approach avoids several of the main problems

encountered by ﬁnite frequentism. For example, it has been objected that ﬁnite frequentism entails that probabilities

can’t be deﬁned for processes which as a matter of fact occur only once or not at all [47]. But this is not a problem

for the frequency constraint view provided that we take a robust attitude to modality which maintains that constraints

are ontologically prior to the Humean mosaic - in that case a constraint exists and is well-deﬁned regardless of how

many instances of the event type in question actually occur. Another common objection to ﬁnite frequentism is

that it has the result that probability depends on a choice of reference class, and in the case of macroscopic everyday

events this can be very nontrivial, because different descriptions of the event may suggest different natural reference

classes which can lead to very different probabilities [47]. However, this is not a problem for the frequency constraint

view provided that we take constraints to be objective and mind-independent, because the reference class is deﬁned

within the constraint and therefore there is always a deﬁnite fact about which reference class is relevant. Of course,

there is still an epistemological question about how we as observers can decide which reference class actually appears

in the real underlying constraint, and it’s always possible that we will get it wrong and will thus make the wrong

predictions, but there is no ambiguity in the deﬁnition of the objective chance itself. Roberts gives more examples of

the advantages of frequency constraints over ﬁnite frequentism in ref [50].

6 Related Topics

In this section, we discuss the relationship between global determinism and some other relevant research programmes.

6.1 Superdeterminism

Since the terms ‘global determinism’ and ‘superdeterminism’ are superﬁcially similar, it’s important to reinforce that

these concepts are not identical. Broadly speaking, ‘superdeterminism’ describes approaches to quantum mechanics

which deny the existence of non-locality in quantum mechanics by rejecting the assumption of statistical indepen-

dence which goes into Bell’s theorem, i.e. the idea that our choice of measurement is independent of the state of the

system which we are measuring [63–65]. There are two typical ways to achieve this. First, given that our choice of

measurement is always determined or at least inﬂuenced by facts about the past (e.g. the physical state of our brain in

the time leading up to our choice) one could imagine the initial state of the universe being arranged such that our mea-

surement choices are always correlated with the states of the systems we are measuring. Second, one could imagine


there is some sort of retrocausality or ‘future outcome dependence’ where the state of the system we are measuring is

inﬂuenced by our future decisions about what measurement to perform.

It is immediately clear that neither of these approaches actually requires determinism: a probabilistic dependency

between choice of measurement and state is already enough to vitiate statistical independence and thus restore locality.

Thus it is actually somewhat misleading to use the term ‘superdeterminism’ to refer to the violation of statistical in-

dependence. The term ‘superdeterminism’ contains the word ‘determinism’ because Bell originally proved a theorem

ruling out deterministic local hidden variable theories [66], and so at that time superdeterminism was suggested as

an approach to restore both locality and determinism; but a later version of Bell’s theorem ruled out all local hidden

variable theories, either probabilistic or deterministic [67], so in fact determinism is not really the key issue here,

and modern proponents of superdeterminism are usually motivated more by the desire to preserve locality than any

particular interest in preserving determinism.

That said, for the moment let us focus on superdeterministic approaches which are also deterministic in some sense.

What sort of determinism would that be? Well, if we take the ﬁrst route where correlations between measurement

choices and states are written into the initial state of the universe, then we have just standard Laplacean determinism

(and thus also weak global determinism). If we take the second route involving retrocausality, the resulting theory

would probably not be compatible with Laplacean determinism, because events which are determined by something

in the future usually can’t also be fully determined by the past; but such a theory certainly could exhibit either weak

or strong global determinism. However, although superdeterminism can be a form of global determinism, the concept

we have deﬁned here is signiﬁcantly more general than superdeterminism. In particular, a major motivation for our

approach was to understand what determinism could look like in a world which includes spatial and/or temporal

non-locality, whereas superdeterminism is typically employed as a way of denying the existence of either spatial or

temporal non-locality, and therefore global determinism also accommodates realist approaches which take a very

different approach from superdeterminism.

6.2 Deterministic Probabilities

There is a long-standing philosophical debate around the question of whether it is possible to have chances in a deter-

ministic world [68–70], but this debate usually takes place against the backdrop of Laplacean determinism. Moving to

a picture based on global determinism therefore opens up new ways in which we can have ‘chancy’ events in a deter-

ministic world, since we saw in section 5 that an event can be described by a non-trivial probability distribution when

we condition on all of the information available to local observers, even if the event is not objectively chancy when

we consider the way in which it is embedded in the complete objective modal structure of reality. This reinforces the

point that ‘chanciness’ is in many cases relative to a perspective: obviously if the outcome of a process is determined

by something in the future it is for all practical purposes objectively chancy from the point of view of observers like

ourselves who have direct epistemic access only to the past and present, but this does not necessarily mean that the

process must be ‘chancy’ from the point of view of the underlying modal structure. This particular approach does

depend on the assumption that there exists some unique observer-independent modal structure, but it’s clear that even

without that assumption we will get different conclusions about whether a process is ‘chancy’ depending on which

information about reality we are willing to condition on, and therefore questions about the existence of chances in a

deterministic universe should be phrased carefully to account for the way in which different perspectives will lead to

different conclusions even in very simple cases.

6.3 Closed Causal Loops

Refs [71, 72] employed a notion of ‘global determinism’ which was similarly intended as a generalization of deter-

minism for non-Newtonian contexts. In this section we consider how that approach to determinism relates to the

framework we have set out in this article.

In refs [71, 73], global determinism was used in the form of a prohibition on indeterministic closed loops: the idea

is that the values of variables in these closed loops ‘come out of nowhere,’ which is to say, they are not determined

by anything, so closed causal loops are not consistent with a globally deterministic universe. Now, clearly variables

which are not determined by anything are incompatible with a universe that obeys strong global determinism, so the

argument of refs [71, 73] works perfectly well if ‘global determinism’ is intended to mean ‘strong global determinism.’


Moreover, indeterministic closed loops are also ruled out by hole-free weak global determinism, because the arbitrari-

ness associated with the variables in a closed causal loop is localised to the events inside the loop. On the other hand

indeterministic closed loops are not ruled out by weak global determinism simpliciter, since the values of variables

inside the loop could be arbitrary rather than chancy. Thus insofar as the results of refs [71, 73] can be regarded as

evidence for the hypothesis of global determinism, they should be regarded as evidence for strong global determinism

or hole-free weak global determinism.

7 Conclusion

We have argued that deﬁnitions of determinism drawn directly from Laplace’s original vision are no longer ﬁt for

purpose, since they can’t be usefully applied to the diverse range of laws outside the time evolution paradigm that ap-

pear in modern physics. However, we contend that once determinism is disentangled from predictability, the intuition

behind determinism captures a real and important classiﬁcation of ways the world could be. Thus in this article we

have provided several generalized deﬁnitions of determinism which can be applied to a wide variety of nonstandard

laws, thus rehabilitating determinism for the post-time-evolution era.

Using these deﬁnitions, we have shown that a world governed by global laws may fail to satisfy the standard

conception of Laplacean determinism but may nonetheless exhibit a form of determinism on a global scale, where

we say we have some form of ‘determinism’ provided that we are never required to assign probability distributions

over entire Humean mosaics or worlds. This has the consequence of dissolving the traditional dichotomic dilemma

where we supposedly have to choose between Laplacean determinism and intrinsic randomness - a world governed

by global laws may exhibit what appear to be objective chances from the point of view of local observers whilst

requiring no irreducible objective chances from the external, global perspective. We have discussed one possible

way of implementing this vision by means of constraints prescribing relative frequencies across spacetime, although

frequency constraints are only one possible way in which chancy events could arise within a globally deterministic

universe.

Several interesting conceptual points have come up along this journey. In particular, the possibility of weak global

determinism requires something of a conceptual shift, requiring us to distinguish between events which are chancy

and events which are undetermined but nonetheless not chancy. This distinction is not entirely new - for example, it

is common to hold that the initial conditions of the world are not determined by anything but also not governed by

any objective probabilities - but it is not always well articulated, and often fails to be taken seriously as an option

for any phenomena other than the initial conditions of the universe. Weak global determinism thus offers a welcome

halfway house for those who ﬁnd it implausible that everything in the universe is fully determined by the laws of

nature, but who are also suspicious of the ill-deﬁned concept of objective chance. We have brieﬂy surveyed some of

the consequences of this new perspective, noting that it offers interesting insights into long-standing debates in

which determinism plays a role within both physics and philosophy.

A Simple and functional constraint frequentism

The simplest way to deﬁne frequency constraints is to say that the actual relative frequency of a certain sort of event

across the whole of spacetime must be equal, at least approximately, to some value. That is to say, these constraints are

of the form ‘the actual mosaic must belong to the set of mosaics in which the actual relative frequency of events of

type E is equal to some value.’ However, on their own constraints of this kind do not seem capable of explaining the

sorts of inferences we make about the relationships between different types of events. Simple frequency constraints

are capable of explaining why it is the case that observing a sequence of events of type M could lead us to form

expectations about the results of future events of type M; however, they do not explain why it is the case that observing

a set of events of various types sometimes leads us to form expectations about the results of events of a type we have

not yet observed. For example, consider a fixed measurement with two outcomes, A and B, on a particle in a set of states sin(θ)|0⟩ + cos(θ)|1⟩ parametrized by θ. Suppose I perform one thousand measurements for each value of θ in the set S = {0, π/20, 2π/20, ..., 9π/20, π/2} and I observe that in each case the relative frequency for outcome A is very close to cos²(θ). This gives me grounds to form expectations about the likely results of future measurements on states with θ in


the set S, but what about a measurement on a state with θ = 23π/276? According to the simple frequency constraint view,

I have no grounds for forming any expectation here, because I have not observed any events of this particular type and

therefore I don’t have access to any relative frequencies which could inform my expectations. Yet in a real situation of

this kind we would clearly be inclined to set our probability for outcome A equal to cos²(23π/276) - indeed, this is exactly

what we do in fact do in quantum mechanics, since we take it that the theory gives us well-deﬁned probabilities for

all quantum measurements, even though the set of states is continuous so we can’t ever actually perform all possible

measurements. Neither standard frequentism nor simple constraint frequentism can easily account for this sort

of interpolation; indeed in both cases it seems as if we must regard the similarity of the functional form for the relative

frequencies across different values of θ as a mere coincidence.
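The contrast at issue can be sketched as follows (a hypothetical illustration using the cos²(θ) example above; the function name f and the idealised ‘observed’ table are assumptions for the sketch): case-by-case frequencies are defined only on the sampled grid, whereas a functional constraint also fixes an expectation at parameter values that were never measured.

```python
import math

# Functional constraint from the example: across all actual measurements on
# sin(θ)|0> + cos(θ)|1>, the relative frequency of outcome A is f(θ) = cos²(θ).
def f(theta):
    return math.cos(theta) ** 2

# Per-type frequencies are only defined on the sampled grid S = {0, π/20, ..., π/2}:
grid = [k * math.pi / 20 for k in range(11)]
observed = {theta: f(theta) for theta in grid}  # idealised observed frequencies

# The functional constraint, unlike the case-by-case version, also fixes an
# expectation at an unobserved parameter value such as θ = 23π/276:
print(f(23 * math.pi / 276))  # ≈ 0.933
```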

The obvious way to fix this problem is to move to functional constraints, where we deﬁne relative frequencies not

case by case but rather in terms of functional relationships. So for example, we might postulate a constraint which

says something like ‘for a set of continuous measurements parametrised by θ, the relative frequency of outcome one

across all actual measurements for a given θis given by f(θ),’ where fis some suitable function of θ. But this leads

to new problems if the range of values of one or more parameters is continuous - for then if we select our parameters

at random, no event type will ever occur more than once, and therefore none of these relative frequencies will not be

well deﬁned. Moreover, even if we don’t select the parameter at random and instead try to prepare the same value

of the parameter many times, one might worry that due to limitations on the accuracy of our state preparations we

will never in fact be able to prepare the same parameter twice, so the relative frequencies still won’t be well deﬁned.

Yet in scenarios of this type the probabilities still exist and have an observable effect - for example, in the case where

the functional relationship is cos2(θ), then we would expect to see outcome Amore often for values of θclose to

zero, and this effect will be observable even if we never measure the same state twice. So simply constraining relative

frequencies doesn’t seem to give the right results for continuous measurements.
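The no-repetition worry is easy to make vivid in a sketch (our own illustration; the sampling setup is an assumption): when angles are drawn from a continuous range, no angle is ever drawn twice, so every per-angle relative frequency is computed from a single trial and can only be exactly 0 or exactly 1.

```python
import math
import random

random.seed(0)

# Draw measurement angles uniformly at random from the continuous
# range [0, pi/2]. With probability one, no angle repeats: each 'event
# type' occurs exactly once, so the relative frequency of outcome A at
# any particular angle is trivially 0 or 1, never anything in between.
thetas = [random.uniform(0, math.pi / 2) for _ in range(10_000)]

assert len(set(thetas)) == len(thetas)  # no event type ever repeats
```

So the functional constraint on per-θ relative frequencies has, strictly speaking, nothing to constrain.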

Moreover, relaxing the constraints doesn't help either. Suppose we insist that the actual relative frequencies are always such as to minimize the distance (by some appropriate metric) from the true probability, conditional on the actual number of outcomes. Various metrics are possible, but most sensible metrics (including, for example, the one proposed by Roberts [50]) would seem to have the property that if only one measurement of a given type is ever made, then the outcome which is most likely must occur. This might not be a problem if in fact the vast majority of event types happen a great many times, but it causes difficulties if we consider the case of continuous measurements where each event type happens only once, since then every measurement must always have its most likely outcome - so, for example, in the cos²(θ) example, it would necessarily be the case that every measurement with θ < π/4 would have outcome A and every measurement with θ > π/4 would have outcome B. This would prevent us from ever assigning any non-trivial probabilities to event types with continuous parametrisations, whereas in quantum mechanics we quite frequently do exactly that.
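The forced-outcome consequence can be sketched directly (our own illustration; `forced_outcome` is a hypothetical helper): if each continuous event type occurs once and the constraint forces the most likely outcome, the measurement results become a deterministic step function. Outcome A is forced exactly when cos²(θ) > 1/2, i.e. when θ < π/4.

```python
import math

def forced_outcome(theta):
    """Under the relaxed constraint, a once-occurring event type must yield
    its most likely outcome: A iff cos^2(theta) > 1/2."""
    return "A" if math.cos(theta) ** 2 > 0.5 else "B"

assert forced_outcome(0.1) == "A"
# B is forced here even though P(A) = cos^2(pi/3) = 1/4 is nonzero:
assert forced_outcome(math.pi / 3) == "B"
# The 'probabilities' collapse into a sharp threshold at pi/4:
assert forced_outcome(math.pi / 4 - 1e-6) == "A"
assert forced_outcome(math.pi / 4 + 1e-6) == "B"
```

Nothing non-trivially chancy survives: every cos²(θ) value other than 0 and 1 becomes empirically idle.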

Perhaps the most elegant solution to this problem is to hypothesize that the world is fundamentally discrete, so that no parametrisation is ever really continuous; then we can straightforwardly return to the idea that probabilistic laws correspond to constraints on actual relative frequencies. In particular, this would probably involve insisting that spacetime is discrete, since probabilistic measurements in quantum mechanics may be parametrized by spatiotemporal variables. But while there are a few indications from modern physics that discretized spacetime is not out of the question, concluding that spacetime must be discrete based solely on a conceptual analysis of probability seems a bit of a reach, so it behoves us to consider alternative possibilities. One option would be to impose an artificial discretization - for example, in the case discussed above, we could divide [0, π/2] into N intervals and then insist that the relative frequency of A outcomes across the set of all measurements with θ ∈ [nπ/2N, (n+1)π/2N] must be (approximately) equal to cos²(nπ/2N). Here, N should be regarded as a free parameter which we can set to ensure that this constraint is functionally indistinguishable from the usual continuous probability rule. This allows us to obtain what is in effect a finite set of prepare-measure scenarios from the original continuous set of prepare-measure scenarios, and therefore we can straightforwardly apply simple or functional constraints. However, it must be admitted that insisting on an overlay of discreteness within a continuous theory looks very awkward; and indeed, if we really believed that all possible spacetime measurements were discretized in this way, then in a sense it would really be the discretized units rather than the continuous underlying manifold which played the relevant functional role in our spatiotemporal experiences, and at this juncture one might well start to wonder whether there is really any justification for retaining the manifold in the theory at all. An alternative would be to postulate a function f(θ, n) such that for each pair θ, n, if n measurements are made with parameter θ across the whole of spacetime, then the total number of A outcomes must be equal to f(θ, n). However, for the reasons we have just discussed, we can't choose this function such that f(θ, n) is always equal to the most likely number of A outcomes, since then we would have f(θ, 1) = 1 for every θ where A is the most likely outcome, and we have already seen that this doesn't work out. So either we would have to let f(θ, n) depend on something else in the universe, or we would have to choose it at random, in which case we seem just to have shifted 'objective chance' into the choice of the constraint rather than eliminating it entirely.
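The artificial discretization can be sketched as follows (our own illustration; the bin count and sample sizes are assumptions): divide [0, π/2] into N bins, pool all measurements falling in each bin, and check that the binned relative frequency of A tracks cos² evaluated within the bin.

```python
import math
import random

random.seed(1)

N = 50                       # the free discretization parameter from the text
n_runs = 200_000
bin_width = (math.pi / 2) / N

# Simulate many measurements at random angles, binning results by interval
# [n*pi/2N, (n+1)*pi/2N]; outcome A occurs with probability cos^2(theta).
counts = [0] * N
totals = [0] * N
for _ in range(n_runs):
    theta = random.uniform(0, math.pi / 2)
    b = min(int(theta / bin_width), N - 1)
    totals[b] += 1
    counts[b] += random.random() < math.cos(theta) ** 2

# For large enough per-bin counts, the binned frequency is functionally
# indistinguishable from the continuous rule cos^2(theta).
for b in range(N):
    midpoint = (b + 0.5) * bin_width
    assert abs(counts[b] / totals[b] - math.cos(midpoint) ** 2) < 0.05
```

Choosing N large enough, and the per-bin sample size larger still, is what makes the discretized constraint empirically indistinguishable from the continuous probability rule.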

A further problem for both simple and functional constraint frequentism is the possibility that the relevant event

type might occur an inﬁnite number of times. In this case, we can’t simply impose a frequency constraint of the kind

discussed above; we might hope to impose a constraint on the limiting relative frequency of a given outcome, but in an

inﬁnite sequence one can obtain any relative frequency one likes by simply rearranging the items in the sequence, so

this will not produce well-deﬁned results [50]. What we seem to need is some rule to the effect that in any sufﬁciently

large patch of spacetime (containing a large but ﬁnite number of instances of the event type in question), the relative

frequencies of the various outcomes must obey the relevant constraint. That is to say, we should impose a constraint

which selects only Humean mosaics where the frequencies are approximately uniform across spacetime and where the

relevant ﬁnite frequency constraint is satisﬁed within all sufﬁciently large patches of spacetime. For example, Roberts

proposes the requirement that if we select any spacetime point and then move outwards in a series of concentric 4-dimensional shells, counting instances of the relevant event type in each shell as we go, then the resulting sequence

must converge to the appropriate relative frequency. Alternatively, note that we have already seen that constraint

frequentism seems to work best in a context where spacetime is discretized; and if spacetime is discretized and also

bounded (i.e. there is both a start and end of time and space has a ﬁnite extent), then no event type can be repeated an

inﬁnite number of times, since there are not an inﬁnite number of possible locations at which events can occur. So if

we have already accepted the need for discretization to make sense of constraint frequentism, we might also be willing

to consider insisting that spacetime is bounded, in which case we have no need to deal with the inﬁnite case.
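The rearrangement problem for limiting relative frequencies can be illustrated with a finite sketch (our own; a genuinely infinite sequence is of course not simulable): rearranging a sequence preserves the total frequency of any fixed finite multiset, but radically alters the running frequencies whose limit defines the limiting relative frequency.

```python
def running_freq(seq):
    """Relative frequency of 1s in a (prefix of a) 0/1 sequence."""
    return sum(seq) / len(seq)

# An alternating sequence has running frequency 1/2 along every even prefix...
original = [1, 0] * 50_000

# ...but the same multiset of outcomes, rearranged to put all 1s first,
# has early running frequencies pinned at 1. In the infinite case this
# trick can drive the 'limit' to any value whatsoever.
rearranged = sorted(original, reverse=True)

assert running_freq(original) == 0.5
assert running_freq(rearranged) == 0.5            # same outcomes overall
assert running_freq(rearranged[:10_000]) == 1.0   # wildly different running frequencies
```

This is why an order-insensitive constraint on an infinite mosaic is ill-defined, and why a spatiotemporally anchored rule like Roberts' concentric-shells proposal is needed.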

It’s interesting that there appears to be a close connection between global determinism and discreteness/ﬁniteness;

we intend to investigate this connection further in future work. Note, however, that the apparent connection holds

only if we insist that the fundamental laws of nature include irreducible probabilities, and if we insist that those

fundamental laws must be analysed as frequency constraints as opposed to some other sort of nonlocal determination.

If for example one were to move to an account like the de Broglie-Bohm interpretation of quantum mechanics where

at the most fundamental level the theory exhibits local determinism, then one can certainly have global determinism

without any need for discretization, since there will be no need to postulate any frequency constraints.

B Nomic frequentism

Roberts' 'nomic frequentism' proposal [50, 57], where he suggests that laws about probabilities should be understood in terms of laws of the form 'R percent of the Fs are Gs,' has much in common with the frequency constraint approach that we have discussed in this article. However, there are a few key differences. First, Roberts presents nomic frequentism as a general analysis of all probabilities that appear in laws, including for example the laws of evolutionary biology, whereas we have suggested it only for the specific case of objective chances appearing in the fundamental laws of nature, and in particular quantum mechanics - we would expect that the laws of evolutionary biology could be satisfactorily understood in terms of subjective probabilities. Moreover, we do not even claim that all of the objective chances appearing in the fundamental laws of nature must be attributed to frequency constraints; we merely observe that this is one possible way in which apparently probabilistic events could arise in a deterministic universe. So our account is in that sense considerably less general and ambitious than Roberts'.

Second, Roberts suggests that 'eighty percent of As are Bs' should be regarded as conceptually equivalent to a law like 'All As are Bs' (or rather, the latter should be regarded as a special case of the former). But there is one important conceptual difference: the constraint 'All As are Bs' does not seem to require any 'communication' between the As, as it is enough that each A has the intrinsic property of being a B, whereas 'exactly eighty percent of As are Bs' or even 'approximately eighty percent of As are Bs' does seem to require some sort of coordination, as in order to ensure that the relative frequency is exactly or approximately correct, it seems that each A must 'know' something about what the other As are doing. Thus frequency constraints seem to require some form of non-locality (as Roberts himself later observes), which means we are in quite a different conceptual space from standard laws like 'all As are Bs.'
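The coordination point can be made quantitative with a toy calculation (our own sketch, with hypothetical numbers): if a law fixes the global frequency exactly, individual outcomes are no longer independent, in just the way that draws without replacement are not independent.

```python
# Suppose a law fixes that exactly K of N events are Bs. By symmetry, the
# unconditional probability that any given event is a B is K/N; but
# conditioning on one event being a B shifts the probability for the rest,
# exactly as in drawing without replacement.
N, K = 10, 8

p = K / N                        # unconditional: 0.8
p_given_B = (K - 1) / (N - 1)    # conditional on another event being a B: 7/9

assert p == 0.8
assert p_given_B < p  # the exact-frequency law induces negative correlation
```

Under an independent ('all As are Bs'-style) law no such conditional shift would occur; the shift is precisely the 'communication' between the As that the text describes.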


Third, Roberts’ solution to the apparent absence of counterfactual independence within nomic frequentism is to say

that ‘the way a nomic frequentist will represent independence of distinct fair coin-tosses is by denying the existence

of any law that implies that the conditional frequency of heads on one toss given the results of another toss is different

from (50 percent).’ But as we have seen, it does not seem that this can be entirely correct: constraint frequentism does

allow the violation of counterfactual independence in at least certain special cases. Roberts justiﬁes his solution on the

basis that knowing facts which violate counterfactual independence ‘would require (us) to have advance intelligence

from the future,’ but this argument seems to depend implicitly on the assumption that we are in a Humean context

where the only information which is relevant to inferences about a future event is an actual observation of the event

itself. But if the laws which induce the frequency constraints are understood as laws which are ontologically prior to

the Humean mosaic, and if we are able to make correct inferences about the laws based on observations of a limited

subset of the mosaic, then we would in principle be able to know about violations of counterfactual independence

without having any illegitimate information about events which have not yet occurred. This sort of ‘knowledge of the

future’ is not really any different from the knowledge of the future that we get from more familiar sorts of scientiﬁc

laws: the laws constrain the Humean mosaic, including the future, and thus by ﬁguring out the laws we can make

inferences about the future.

References

[1] Pierre Simon Laplace. Théorie analytique des probabilités. Courcier, 1820.

[2] Thomas Müller and Tomasz Placek. Defining determinism. The British Journal for the Philosophy of Science,

69(1):215–252, 2018.

[3] K. Wharton. The Universe is not a Computer. In Foster B. Aguirre, A. and G. Merali, editors, Questioning the

Foundations of Physics, pages 177–190. Springer, November 2015.

[4] Emily Adlam. Laws of nature as constraints, 2021.

[5] Lee Smolin. The unique universe. Physics World, 22(06):21, 2009.

[6] A.J. Brizard. An Introduction to Lagrangian Mechanics. World Scientiﬁc, 2008.

[7] R.P. Feynman, A.R. Hibbs, and D.F. Styer. Quantum Mechanics and Path Integrals. Dover Books on Physics.

Dover Publications, 2010.

[8] J. B. Hartle. The spacetime approach to quantum mechanics. Vistas in Astronomy, 37:569–583, 1993.

[9] R. D. Sorkin. Quantum dynamics without the wavefunction. Journal of Physics A Mathematical General,

40:3207–3221, March 2007.

[10] Emily Adlam. Spooky Action at a Temporal Distance. Entropy, 20(1):41, 2018.

[11] Yakir Aharonov and Lev Vaidman. The Two-State Vector Formalism of Quantum Mechanics, pages 369–412.

Springer Berlin Heidelberg, Berlin, Heidelberg, 2002.

[12] John G. Cramer. The transactional interpretation of quantum mechanics. Rev. Mod. Phys., 58:647–687, Jul 1986.

[13] A. Kent. Solution to the Lorentzian quantum reality problem. Phys Rev A, 90(1):012107, July 2014.

[14] Ken Wharton. A new class of retrocausal models. Entropy, 20(6):410, May 2018.

[15] R. Sutherland. Causally Symmetric Bohm Model. eprint arXiv:quant-ph/0601095, January 2006.

[16] L. Masanes, R. Renner, M. Christandl, A. Winter, and J. Barrett. Full security of quantum key distribution from

no-signaling constraints. Information Theory, IEEE Transactions on, 60(8):4973–4986, Aug 2014.


[17] M. Pawlowski, T. Paterek, D. Kaszlikowski, V. Scarani, A. Winter, and M. Żukowski. Information causality as a

physical principle. Nature, 461:1101–1104, October 2009.

[18] V. Scarani, S. Iblisdir, N. Gisin, and A. Acín. Quantum cloning. Reviews of Modern Physics, 77:1225–1256,

October 2005.

[19] Eddy Keming Chen and Sheldon Goldstein. Governing without a fundamental direction of time: Minimal prim-

itivism about laws of nature, 2021.

[20] J. S. Bell and Alain Aspect. Are there quantum jumps? In Speakable and Unspeakable in Quantum Mechanics,

pages 201–212. Cambridge University Press, second edition, 2004. Cambridge Books Online.

[21] Phil Dowe. Action at a temporal distance in the best systems account. European Journal for Philosophy of

Science, 9, 10 2019.

[22] Andrea Di Biagio, Pietro Donà, and Carlo Rovelli. Quantum information and the arrow of time, 2020.

[23] H. Price. Time’s Arrow & Archimedes’ Point: New Directions for the Physics of Time. Oxford Paperbacks:

Philosophy. Oxford University Press, 1996.

[24] Eddy Keming Chen. Quantum mechanics in a time-asymmetric universe: On the nature of the initial quantum

state, September 2018.

[25] John D. Norton. The Hole Argument. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy.

Metaphysics Research Lab, Stanford University, summer 2019 edition, 2019.

[26] Bryan W. Roberts and James Owen Weatherall. New perspectives on the hole argument. Foundations of Physics,

50(4):217–227, 2020.

[27] David Lewis. A subjectivist’s guide to objective chance. In Richard C. Jeffrey, editor, Studies in Inductive Logic

and Probability, pages 83–132. University of California Press, 1980.

[28] David Lewis. Humean supervenience debugged. Mind, 103(412):473–490, 1994.

[29] Karl Popper. The Open Universe: An Argument for Indeterminism From the Postscript to the Logic of Scientiﬁc

Discovery. Routledge, 1992.

[30] Peter Clark and Jeremy Butterﬁeld. Determinism and probability in physics. Aristotelian Society Supplementary

Volume, 61(1):185–244, 1987.

[31] Jeremy Butterﬁeld. Determinism and indeterminism. 2005.

[32] D. M. Armstrong. What is a Law of Nature? Cambridge University Press, 1983.

[33] Alexander Bird. The dispositionalist conception of laws. Foundations of Science, 10(4):353–70, 2005.

[34] Brian Ellis. Scientiﬁc Essentialism. Cambridge University Press, 2001.

[35] David Lewis. On the Plurality of Worlds. Wiley-Blackwell, 1986.

[36] Nora Berenstain and James Ladyman. Ontic structural realism and modality. In Elaine Landry and Dean Rickles,

editors, Structural Realism: Structure, Object, and Causality. Springer, 2012.

[37] James Ladyman and Don Ross. Every Thing Must Go: Metaphysics Naturalized. Oxford University Press, 2007.

[38] Michael Esfeld. The modal nature of structures in ontic structural realism. International Studies in the Philosophy

of Science, 23(2):179–194, 2009.

[39] David Wallace. Naturalness and emergence, February 2019.


[40] R. Penrose. The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics. Oxford

Landmark Science. OUP Oxford, 2016.

[41] Christopher J. G. Meacham. Contemporary approaches to statistical mechanical probabilities: A critical com-

mentary - part i: The indifference approach. Philosophy Compass, 5(12):1116–1126, 2010.

[42] Germain Rousseaux. The Gauge non-invariance of classical electromagnetism. Annales Fond. Broglie, 30:387–

397, 2005.

[43] David Wallace. Everett and structure. Studies in History and Philosophy of Science Part B: Studies in History

and Philosophy of Modern Physics, 34(1):87–105, 2003.

[44] Alan Hájek. Interpretations of probability. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy.

Metaphysics Research Lab, Stanford University, winter 2012 edition, 2012.

[45] Timothy McGrew. Direct inference and the problem of induction. The Monist, 84:153–178, 04 2001.

[46] W. Van Orman Quine. Ontological Relativity and Other Essays. John Dewey essays in philosophy. Columbia

University Press, 1969.

[47] Alan Hájek. "Mises Redux": Fifteen arguments against finite frequentism. Erkenntnis (1975-), 45(2/3):209–227, 1996.

[48] Karl Popper. The logic of scientiﬁc discovery. Routledge, 1959.

[49] Antony Eagle. Twenty-one arguments against propensity analyses of probability. Erkenntnis, 60(3):371–416,

2004.

[50] John T. Roberts. Laws about frequencies. 2009.

[51] Carl G. Hempel and Paul Oppenheim. Studies in the logic of explanation. Philosophy of Science, 15(2):135–175,

1948.

[52] Carl G Hempel et al. Aspects of scientiﬁc explanation. 1965.

[53] David Wallace. The logic of the past hypothesis, November 2011. To appear in B. Loewer, E. Winsberg and B.

Weslake (ed.), currently-untitled volume discussing David Albert’s ”Time and Chance”.

[54] Steven Weinberg. The cosmological constant problem. Rev. Mod. Phys., 61:1–23, Jan 1989.

[55] Tom Banks, Michael Dine, and Lubos Motl. On anthropic solutions of the cosmological constant problem.

JHEP, 01:031, 2001.

[56] Albert Einstein. Albert einstein to max born 1. Physics Today, 58(5):16–16, 2005.

[57] David Wallace. The quantum measurement problem: State of play. 2007.

[58] Marco Genovese. Research on hidden variable theories: A review of recent progresses. Physics Reports,

413(6):319–396, 2005.

[59] P.R. Holland. The Quantum Theory of Motion: An Account of the de Broglie-Bohm Causal Interpretation of

Quantum Mechanics. Cambridge University Press, 1995.

[60] Clifford Williams. Free Will and Determinism: A Dialogue. Hackett Publishing Company, 1980.

[61] Peter Van Inwagen. The incompatibility of free will and determinism. Philosophical Studies, 27(3):185–199,

1975.

[62] Kristin M. Mickelson. The problem of free will and determinism: An abductive approach. Social Philosophy

and Policy, 36(1):154–172, 2019.


[63] Sabine Hossenfelder and Tim Palmer. Rethinking superdeterminism. Frontiers in Physics, 8:139, 2020.

[64] Sabine Hossenfelder. Superdeterminism: A guide for the perplexed, 2020.

[65] T. N. Palmer. Invariant set theory, 2016.

[66] J. Bell. Against ’measurement’. Physics World, August 1990.

[67] Michael Esfeld. Ontic structural realism and the interpretation of quantum mechanics. European Journal for

Philosophy of Science, 3, 01 2013.

[68] C. D. McCoy. No chances in a deterministic world.

[69] Luke Glynn. Deterministic chance. British Journal for the Philosophy of Science, 61(1):51–80, 2010.

[70] J. Dmitri Gallow. A subjectivist’s guide to deterministic chance. Synthese, forthcoming.

[71] Emily Adlam. Quantum Mechanics and Global Determinism. Quanta, 7:40–53, 2018.

[72] Emily Adlam. Tsirelson’s bound and the quantum monogamy bound from global determinism, 2021.

[73] Emily Adlam. Tsirelson’s bound and the quantum monogamy bound from global determinism, 2020.
