DOI: 10.1111/nyas.15086
PERSPECTIVE
Razor sharp: The role of Occam’s razor in science
Johnjoe McFadden
Leverhulme Quantum Biology Doctoral Training Centre, University of Surrey, Guildford, UK
Correspondence
Johnjoe McFadden, Leverhulme Quantum
Biology Doctoral Training Centre, University of
Surrey, UK.
Email: j.mcfadden@surrey.ac.uk
Funding information
The Leverhulme Trust, Grant/Award Number:
DS-2017-079; Biotechnology and Biological
Sciences Research Council, Grant/Award
Number: BB/010611/1
Abstract
Occam’s razor—the principle of simplicity—has recently been attacked as a cultural bias without rational foundation. Meanwhile, belief in pseudoscience and mysticism is growing. I argue that the inclusion of Occam’s razor is an essential factor that distinguishes science from superstition and pseudoscience. I also describe how the razor is embedded in Bayesian inference and argue that science is primarily the means to discover the simplest descriptions of our world.
KEYWORDS
Bayesian inference, history of science, Occam’s razor, postmodernism, science education
Our modern world is built on the back of scientific advances such as modern electronics and modern molecular vaccinology, which delivered protection from Covid-19 within a year of the discovery of SARS-CoV-2. Yet, significant levels of distrust in the enterprise of science continue to be a feature of many cultures and societies. According to a recent Gallup poll, for example, 20% of Americans believe in the literal truth of the Christian bible, including the creation story.1 Covid-19 vaccine hesitancy remains a persistent problem, with rates (as of 2021) as high as 45% in Russia, 46% in Italy, 43% in the US, and 76% in Kuwait.2 A recent international survey asking people whether “they have a lot of trust in scientists to do what is right for the public” returned rates ranging from 14% to 60% in different countries, with a median of 36% and stark differences that depended on political leaning—for example, only 20% of right-identifying Americans trust scientists compared to 62% of left-identifying Americans.3 In education, despite progress, there remains a considerable gender gap in STEM achievement between girls and women and boys and men at nearly all levels.4 Over 8% of patients in some countries opt for alternative or complementary treatments, such as homeopathic medicine, that are supported by public funds despite the fact that they undermine conventional treatments, are contrary to fundamental scientific principles such as the law of mass action (in the case of homeopathy), and have led to deaths.5,6,7,8
Despite the absence of evidence of harm, GM crops such as Golden Rice, which is safe to eat9 and has the potential to prevent vitamin A deficiency (a leading cause of blindness and infant mortality worldwide), continue to be banned in many countries, including the member states of the EU.10 Climate skepticism^a remains a significant challenge to the implementation of climate control measures.11 A global survey of participants from 52 countries published in 2020 concluded that “public understanding of science is generally low”.12 In a recent article, the philosophers Stefaan Blancke and Maarten Boudry attribute the prevalence of pseudoscientific belief systems to several causes, prominent among them a lack of understanding of fundamental principles of science.13

^a Often supported by populist politicians, who appear to be in the ascendancy across the globe.
While the above is problematic, one can ask whether even scientists fully understand the fundamental principles of their disciplines. Philosophy of science is rarely taught as a component of scientific education, but if pushed to identify the defining feature of their disciplines, most scientists choose the principle of falsifiability (attributable to Karl Popper).14 Yet, attempting to falsify a theory or hypothesis can be as difficult as (or more difficult than) proving its truth; this has been shown by the re-emergence of theories and hypotheses that had supposedly been falsified, such as the inheritance of acquired characteristics, now accepted within epigenetics, or what Einstein
called his “greatest blunder”, the cosmological constant,15 which has recently re-emerged as dark energy. Even in everyday practice, experimental scientists who discover evidence contrary to their favorite theory or hypothesis will often turn to “fudge factors” to accommodate the contrary data.16
A famous example is the phlogiston theory invented by the German chemists/alchemists Georg Ernst Stahl (1659–1734) and Johann Joachim Becher (1635–82) to account for the observation that a wooden log loses mass when it is burned to ash. They named the material that leaves combustible material phlogiston (from the Greek phlox, for flame) and claimed that it was the agent of heat and combustion. The later observation that some metals actually gain mass during burning might be assumed to have disproved the phlogiston theory, but to save the theory its advocates proposed that some forms of phlogiston possess negative mass.17 With an inexhaustible supply of fudge factors, even outlandish theories may remain consistent with any amount of experimental data. It should also be remembered that some theories are, in practice, non-falsifiable (for example, string theory or the Big Bang) yet are generally accepted as bona fide components of the scientific enterprise.18
Other criteria that are often cited as essential for science do not define it. For example, experimentation is highlighted in many definitions of “science”, but the medieval alchemists performed thousands of experiments that got them nowhere. Moreover, chefs may experiment with a new recipe, just as a composer might experiment with new kinds of composition, but neither is considered to be doing science. And, as already highlighted, several areas of science, including pure mathematics and cosmology, are pursued without experimentation. Even one of the most famous scientific theories of the 19th century, Darwin and Wallace’s theory of evolution by natural selection, was not tested experimentally until the mid-20th century, with experiments measuring the acquisition of genetic resistance to antibiotics by bacteria.
Another criterion frequently cited as important for science is mathematical reasoning, but astrologers, numerologists, and homeopaths also make use of mathematical principles. Still other definitions emphasize science’s systematic approach to knowledge acquisition. For example, according to Wikipedia, “Science is a systematic endeavor that builds and organizes knowledge in the form of testable explanations and predictions about the universe.”19 But this definition could easily be applied to almost any human endeavor, from cookery^b to plumbing or painting.
Without a guiding fundamental principle, the teaching of science tends to lurch between educational approaches that emphasize science as either a repository of knowledge or a methodology, prompting the educator Jonathan Osborne to argue, in response to proposed U.S. educational reforms, that

a basic problem with the emphasis on teaching science through inquiry is that it represents a confusion of the goal of science—to discover new knowledge about the material world—with the goal of learning science—to build an understanding of the existing ideas that contemporary culture has built about the natural and living world that surround us.20

^b E.g., cookery is a systematic endeavor that builds and organizes knowledge of flavors, foods, and cooking procedures to form testable predictions of what makes a tasty meal.
The Razor
Occam’s^c razor,21,22 or the principle of parsimony that “entities should not be multiplied beyond necessity”, was highlighted as a fundamental principle of modern science by many of its pioneers. The razor owes its name to the 14th-century Franciscan friar William of Occam (1287–1347).21 Born in the village of Ockham in Surrey, William studied and taught at Oxford, where he used his razor and radical nominalism to dismantle much of medieval metaphysics, an accomplishment that led to his trial, before the Pope in Avignon, for heretical teaching. After accusing the Pope of heresy, William was forced to flee Avignon and died in exile in 1347. Yet his work inspired a new movement in European universities, known as the via moderna, that went on to influence the Renaissance, the Enlightenment, and the Scientific Revolution,23–25 though it is ignored in most histories of science today.

^c The alternative spelling “Ockham” is used in many publications.
The earliest scientific applications of Occam’s razor were to the heavens. The Parisian via moderna scholar Jean Buridan (1301–58) considered the motions of the heavens. To account for these, medieval astronomers preceding Buridan had imported from the ancient Greeks a complex system of crystal spheres that carried the stars, the five visible planets, the sun, and the moon on their geocentric motions across the sky. For their diurnal rotations, Buridan considered a simpler model, writing that

Just as it is better to save the appearances through fewer causes than through many ... hence it is better to say that the earth (which is very small) is moved most rapidly and the highest sphere is at rest, than to say the opposite.26

Buridan was essentially arguing that the heavenly bodies’ diurnal rotations could be just a matter of perspective from a rotating Earth and that, in reality, “the highest sphere” carrying the fixed stars is stationary. Note that Buridan did not argue for a rotating Earth on any observational grounds but only that it should be preferred because it is a simpler model—hence, an application of Occam’s razor.
Despite Buridan’s appeal to the razor, medieval astronomy remained dominated by the Ptolemaic geocentric model inherited from the Greek world, which accommodated the complex motions in the heavens with an equally complex model of circles within circles known as epicycles. Two centuries later, when Copernicus came to study the Ptolemaic model, he was horrified by several of its features, prompting him to write that “having become aware of these defects, I often considered whether ... it could be solved with fewer and much simpler constructions than were formerly used.”27 His radical solution was to allow the Earth to move: first, like Buridan’s model,
by spinning on its axis each day. This move eliminated the Ptolemaic
diurnal circles from the sun, moon, and planets, a simplification that
helped Copernicus to discern an even simpler system with the Earth
not only spinning but orbiting the sun each year, which eliminated
Ptolemy’s annual circles from each of the five visible planets.
Famously, however, despite its heliocentric perspective, Copernicus’ system retained several features of Ptolemy’s model, including perfect circles and uniform motion. Moreover, his heliocentric model made predictions no more accurate than those of its geocentric Greek predecessor. Lacking observational evidence in favor of heliocentricity, as Rhonda Martens has argued,28 both Copernicus and his follower Rheticus justified the heliocentric model primarily on the grounds of its superior harmony and simplicity. For example, in his Revolutions, Copernicus argued that, “I hold it easier to concede this than to let the mind be distracted by an almost endless multitude of circles, which those are obliged to do who detain [the earth] in the centre of the world.”29 Rheticus similarly claimed that, “For in the common hypotheses [the Ptolemaic system] there appeared no end to the invention of spheres”.30
Despite the failure of Copernicus’ model to provide more accurate predictions, the giants of the Scientific Revolution—from Tycho Brahe to Johannes Kepler, Galileo, and Newton—were convinced his heliocentric model should be preferred on the grounds that it was simpler. For example, Brahe claimed that heliocentricity “circumvents all that is superfluous and discordant in the system of Ptolemy”,31 whereas Kepler insisted that

She [nature] loves simplicity, she loves unity. Nothing ever exists in her which is useless or superfluous, but more often she uses one cause for many effects. Now under the customary [Ptolemaic] hypothesis there is no end to the invention of circles, but under Copernicus’s a great many motions follow from a few circles.
In his Dialogue Concerning the Two Chief World Systems, published in 1632,32 Galileo sets out more detailed arguments. First, the pro-Copernican protagonist Salviati makes a case for preferring simpler solutions, writing that

it is much simpler and more natural to keep everything with single motion than to introduce two. But I do not assume that the introduction of the two be impossible, nor that I intend to draw a necessary proof of this; merely a greater probability. (Dialogue 2, second day)
It is interesting to note that, several centuries before Bayesian statistics aligned simplicity with probability (to be discussed below), Galileo had already recognized that the case for simplicity is based on probability rather than proof. Salviati’s interlocutor in the Dialogue, Simplicio, makes this explicit when he points out to Salviati that “it seems to me that you base your case on the greater ease and simplicity of producing the same effect”. Salviati goes on to apply the simplicity principle, first to the fixed stars, arguing that “it seems to me that it is much more effective and convenient to make them immobile than to have them roam around ...”.
In the third day of the Dialogue, Salviati discusses the planets, arguing that “Ptolemy introduces vast epicycles adapting them one by one to each planet ... all of which can be done away with by one simple motion of the Earth.” He also considers the absurd physicality of the Ptolemaic system, asking,
do you not think it extremely absurd, Simplicio, that in Ptolemy’s construction where all planets are assigned their own orbits, one above another, it should be necessary to say that Mars, placed above the sun’s sphere, often falls so far that it breaks through the sun’s orb, descends below this and gets closer to the earth than the body of the sun is, and then a little later soars immeasurably above it?
He goes on to argue:
you see gentlemen, with what ease and simplicity the annual motions, if made by the Earth, lends itself for supplying reasons for the apparent anomalies which are observed in the movements of the five planets .... It removes them all and reduces these movements to equable and regular motion; and it is Nicolas Copernicus who first clarified for us the reason for this marvellous effect.
Finally, Sagredo, the arbiter of the debate, concludes:
for my part I am convinced, so far as my senses are
concerned, there is a great difference in the simplicity
and ease of effecting results by the means given in this
new arrangement than the multiplicity, confusion and
difficulty found in the ancient generally accepted one
.... Thus it is said that Nature does not multiply things unnecessarily.
In his Principles of Philosophy, published in 1644,33 Descartes similarly identified the Copernican heliocentric system as “somewhat simpler and clearer” than either the Ptolemaic system or Brahe’s geo-heliocentric system, in which the Earth remained at the center of the universe but the five planets revolved around the Sun, which itself orbits the Earth.
Isaac Newton did not explicitly make a case for the heliocentric system based on simplicity, presumably because by the time he wrote The Mathematical Principles of Natural Philosophy (also known simply as The Principia), in 1687, the heliocentric system was considered all but proven by the much greater accuracy of Kepler’s predictions based on his heliocentric system. However, in The Principia, Newton nails his commitment to Occam’s razor explicitly as Rule 1 in the section titled “Rules of Reasoning in Philosophy,” insisting that “we are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.”34 Newton’s authority helped to cement the reputation of Occam’s razor as a scientific virtue in modern science so that, for example, when describing the theory of
natural selection, Alfred Russel Wallace (who developed the theory independently of Darwin) wrote that “the theory itself is exceedingly simple, and the facts on which it rests—though excessively numerous individually, and coextensive with the entire organic world—yet come under a few simple and easily understood classes.”35
But not everyone was convinced of the value of simplicity. After publishing his special relativity equations in 1905, Albert Einstein strove to find relativistic laws that incorporated gravity and acceleration. His initial approach was to strive for completeness—incorporating the maximal amount of data—rather than simplicity. He constructed equations that incorporated as many observations as possible and then attempted to work backwards to construct a simple unifying theory. He even berated his colleague Max Abraham for his alternative approach of first searching for the simplest or most elegant solutions, writing that “I was totally ‘bluffed’ by the beauty and simplicity of his [Abraham’s] equations”.36 He went on to blame Abraham’s failure on “what happens when one operates formally [looking for elegant mathematical solutions], without thinking physically.”
Yet, after spending around a decade ploughing, unsuccessfully, through one complex equation after another, Einstein eventually changed tack and adopted Abraham’s approach of examining only the simplest and most elegant equations, and only later testing them against physical facts. This “razor-first” approach led to the discovery of a theory Einstein described as “of incomparable beauty”: the general theory of relativity. This experience prompted him to re-evaluate the role of simplicity in science and provided valuable insight into the usefulness of the razor in theory construction. He wrote that:

A theory can be tested by experience, but there is no way from experience to the construction of a theory, [adding that] equations of such complexity ... can be found only through the discovery of a logically simple mathematical condition that determines the equations completely or almost completely.36
Here, Einstein describes what might be called the inverse problem of theory construction: it is easy to start from a simple theory and generate complex outputs, but usually impossible to do the inverse. This insight is essentially equivalent to the well-known category of inverse problems in physics, engineering, and mathematics: given some data or observations, the challenge is to find the parameters or inputs of a model that predicts those observations. The problem is that, although a precise configuration of inputs into a defined theory or model completely determines its outputs (the predictions of the theory or model), the same is not true in reverse, because the same set of data could have been generated by a wide, potentially infinite, variety of suitably parameterized theories or models. So the inverse problem has no unique solution (for example, a potentially infinite number of combinations of electrical sources can generate the same EMF,37 a problem that gives rise to the difficulty of determining the precise electrical sources in the brain that gave rise to an observed EEG signal38,39).
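The non-uniqueness of inverse problems is easy to demonstrate numerically. The following is a minimal sketch (my own construction, not from the article or the EEG literature) using an underdetermined linear forward model as a stand-in for mapping many candidate sources onto a few sensor readings:

```python
# Sketch: an underdetermined forward model y = A @ x has no unique inverse.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 5))      # forward model: 5 sources, 3 sensors

x_true = rng.normal(size=5)      # one hypothetical source configuration
y = A @ x_true                   # the observed data

# Any vector in the null space of A can be added without changing y.
# The last right-singular vectors of A span that null space.
null_basis = np.linalg.svd(A)[2][3:]
x_alt = x_true + 10 * null_basis[0]        # a very different configuration

print(np.allclose(A @ x_true, A @ x_alt))  # True: identical predictions
print(np.linalg.norm(x_true - x_alt))      # yet the two models differ greatly
```

Both configurations predict exactly the same observations, so no amount of data of this kind can distinguish them; some further criterion, such as simplicity, is needed.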
The ancient problem of solving the motions of the heavenly bodies from Earth-based observations was also an inverse problem, as demonstrated by the fact that two very different models—geocentric and heliocentric—generated very similar predictions.^d As David Merritt recently noted, “Even incorrect theories can make correct predictions, and there will always be an infinite number of theories (most of them yet undreamed of) that can correctly explain any finite set of observations.”40

^d Copernicus’s revolutionary move can be seen as an early example of Einstein’s later approach of finding a simple model or equation and only later checking whether it fits the facts. Copernicus was lucky: his heliocentric model did pretty well. Simple theories or models are not guaranteed to be true or even to be the best models, but they are a good starting point, as there are vastly fewer simple models than complex ones.
Backlash to the razor
Despite its success, the Occam’s razor simplicity criterion favored by Copernicus and subsequent scientists has been criticized by several prominent historians of science, including Thomas Kuhn41 and Arthur Koestler.42 They argue instead that application of the razor is based largely on aesthetic, philosophical, or cultural, rather than scientific, grounds. In his book The Sleepwalkers: A History of Man’s Changing Vision of the Universe, Koestler points out that because Copernicus retained perfect circles and uniform motion in his model, he had to introduce additional epicycles, so that his final circle count was similar to Ptolemy’s. Similarly, although in The Structure of Scientific Revolutions41 Kuhn listed simplicity among his five core theoretical virtues that characterize science, he went on to insist that “[j]udged on purely practical grounds, Copernicus’ new planetary system was a failure; it was neither more accurate nor significantly simpler than its Ptolemaic predecessor.” He also claimed that Copernicus’ arguments from harmony or unification “appeal, if at all ... [to an] aesthetic sense, and that alone”.43 Lacking justification in either simplicity or accuracy, Kuhn concluded that scientific advances, such as the Copernican revolution, are based not solely on reason, but also on cultural bias, irrationality, and aesthetic preference.
This criticism of the influence of Occam’s razor in science remains common. In medical diagnosis, the razor is sometimes contrasted with Hickam’s dictum, which states that patients can have more than one disease rather than a single cause of their symptoms.^e In the scientific literature, Occam’s razor has been attacked with claims that “its rhetorical purpose [is] as an old saw persuading us to champion the supposed virtue of simplicity.”44 In systems biology, for example, it has been claimed that because life is “irreducibly complex”, Occam’s razor has no role in model selection.45 Recent popular science articles by influential authors have made similar claims that echo Kuhn and Koestler’s dismissal of Copernicus’ simplicity claim, arguing that Occam’s razor represents the “tyranny of simple explanations”46 or that it is “appealing, widely believed, and deeply misleading”.47 This dismissal of simplicity as a criterion for scientific advance has likely contributed to what has been described as “a worrying trend to favour unnecessarily complex interpretations”.48

^e One form of Hickam’s dictum states: “A man can have as many diseases as he damn well pleases.” See “Letter from the editor: Occam versus Hickam”. https://doi.org/10.1016/S0037-198X(98)80001-1
The dismissal of simplicity as a valid criterion in science has, unsurprisingly, been picked up by critics of the scientific endeavor, including
post-modernist and relativist philosophers who insist that, without it, science has no more claim to objective truth than witchcraft, folk belief, or astrology. For example, the philosopher of science Paul Feyerabend argued that:

To those who look at the rich material provided by history [of science] ... it will become clear that there is only one principle that can be defended under all circumstances and in all stages of human development. It is the principle: anything goes.49
According to postmodernists such as Feyerabend, science merely takes its place alongside other belief systems such as religion, mysticism, witchcraft, folk beliefs, astrology, homeopathy, or the paranormal. Each has, they claim, its own truths, and none can claim any monopoly on the truth. Feyerabend went on to champion his relativistic philosophy in public education, arguing that science should not have any privileged status in school classes over mysticism, magic, or religion, a view that was enthusiastically adopted by a generation of relativist and constructivist philosophers and educators50 and by purveyors of so-called creation science.51 The supposed failure of the principle of simplicity to account for the success of Copernicus’ heliocentric model of the solar system has been, and remains, an influential idea. But is the criticism valid?
Was Copernicus’ heliocentric system simpler than
Ptolemy’s geocentric model?
The key point is that neither Copernicus nor his followers, such as Brahe, Kepler, Galileo, or Newton, based their support for the supposed simplicity of his heliocentric model on any kind of crude circle count. Instead, their case was based on the recognition of features that made no physical sense in the geocentric system and were accommodated there only by additional complexity that could be removed in the heliocentric system. For example, in the phenomenon known as retrograde motion, planets, which normally travel from East to West across the sky, occasionally reverse their direction of motion to travel West to East, before reversing again. In Ptolemy’s model, retrograde motion is accommodated with planetary epicycles—essentially a wheel within a wheel—but never explained. In the heliocentric model, these epicycles can be removed, as retrograde motion is revealed as an effect of perspective, caused by the Earth either overtaking, or being overtaken by, the retrograde planet as both orbit the Sun.
Another set of planetary epicycles in the Ptolemaic model has a period that exactly matches the Earth year. What is an Earth year doing in the geocentric orbit of, say, Jupiter or Mars? It makes no physical sense. The annual periodic motion is, once again, accommodated in Ptolemy’s model by an epicycle, but not explained. These epicycles vanish in Copernicus’ model once its frame of reference is shifted from the Earth to the sun.
It was the heliocentric system’s elimination of arbitrary and physically infeasible features such as these, rather than any circle count, that
convinced the giants of modern science that Copernicus’s system was simpler, and thereby more likely to be right. As Tycho Brahe had insisted, heliocentricity does indeed circumvent “all that is superfluous and discordant in the system of Ptolemy.”31 However, as Koestler, Kuhn, and others point out, because he retained both perfect circles and uniform motion, Copernicus did indeed have to add additional epicycles that corrected for the eccentricity of the planetary orbits and variations in orbital velocity. But, as Galileo pointed out four centuries ago, these are tiny compared to the “vast epicycles” needed in the geocentric system, which (unknowingly) corrected for the Earth’s rotation and orbit around the sun.

FIGURE 1 Plot of the orbit of the planet Venus from a heliocentric (left) and a geocentric (right) perspective, with the semi-major axis of the Earth equal to 0.0167, which is about the linear eccentricity of the planet’s orbit. Drawn with Gerd Breitenbach’s “Curves of planetary motion in geocentric perspective” orbital simulator, available at http://gerdbreitenbach.de/planet/planet.html.
For example, Venus’ orbit has an eccentricity of 0.0068, which is very close to a perfect circle. The difference between the two corrections is apparent in Figure 1, where the orbit of Venus (with 0.0068 eccentricity) has been plotted from a heliocentric (left) and a geocentric (right) perspective. The deviation from perfect circularity is there in the heliocentric model, but it is tiny compared to the geocentric model. Kepler did not need to count circles to recognize which model was simpler and thereby provided a suitable platform for his revolutionary move of bending the circles into (slight) ellipses and abandoning Copernicus’ commitment to uniform motion, to reveal the simple model of the solar system that we know today.
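The effect is easy to reproduce. Below is a minimal sketch (my own, assuming idealized circular, coplanar orbits rather than the simulator used for Figure 1) that plots Venus’s path in both frames; shifting the origin from the Sun to the moving Earth turns a near-circle into epicycle-like loops:

```python
# Sketch: the same Venus orbit viewed heliocentrically and geocentrically.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 8, 4000)            # time in Earth years
a_venus, T_venus = 0.723, 0.615        # semi-major axis (AU) and period (yr)

earth = np.exp(2j * np.pi * t)         # positions in the complex plane
venus = a_venus * np.exp(2j * np.pi * t / T_venus)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4.5))
ax1.plot(venus.real, venus.imag)       # heliocentric: a simple circle
ax1.set_title("Venus, heliocentric frame")
geo = venus - earth                    # shift the frame to the Earth
ax2.plot(geo.real, geo.imag)           # geocentric: looping epicycles
ax2.set_title("Venus, geocentric frame")
for ax in (ax1, ax2):
    ax.set_aspect("equal")
plt.show()
```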
So the giants of modern science were not fooled by Copernicus’
claim that his model was simpler than Ptolemy’s. As a physical model
of the world, rather than a calculating device, it was indeed simpler. But
are simpler models more likely to be “true”?
The Bayesian razor
As Elliott Sober has argued, there is not one but many forms of Occam’s razor,21 each with its own meaning and justification. However, the form of the razor that is embedded (though often not recognized as such) in modern science can be called the Bayesian razor.

Although Bayesian inference was developed more than two centuries ago, it wasn’t until 1939 that the statistician and geophysicist
Harold Jeffreys provided a quantitative form of Occam’s razor via Bayesian statistics, in Chapter 5 of the first edition of his 1939 textbook Theory of Probability.52 Further contributions were made by William Jefferys and James Berger,53,54 as well as David MacKay,55 in the early 1990s. Their insights can be illustrated with the assistance of two dice: a simple six-sided die and a more complex sixty-sided die. Say I have both dice and, hidden from view, I throw one of them. I call out the number 39 and ask you to guess which die I have thrown. You consult Bayes’ equation, P(A|B) = P(B|A) × P(A)/P(B), where A and B are events, such as throwing either the six- or the sixty-sided die (the hypothesis) or a die landing on a particular number (the data), and P is probability. So P(A) is the prior probability of throwing either the six- or the sixty-sided die which, assuming I am fair, is 0.5 for each. P(B) is a normalizing factor that can be ignored in this example. From the perspective of Occam’s razor, the key factor is P(B|A), known as the likelihood: the probability that the data would have been generated, given the hypothesis. The value we want to calculate is the posterior probability, P(A|B), of each die hypothesis given that the number thrown was 39. For the six-sided hypothesis, the likelihood P(B|A) is equal to zero, since a six-sided die cannot possibly throw the number 39. Multiplying the prior probability of 0.5 by zero gives zero; so there is zero probability that I threw the six-sided die. For the sixty-sided hypothesis, the likelihood of a sixty-sided die throwing the number 39 is 1/60. Multiplying this by the prior of 0.5 gives a posterior probability of 1/120. To compare the two hypotheses, we divide the larger posterior probability by the smaller, in this case 1/120 divided by zero, which is infinite. The sixty-sided die is infinitely more likely to have been the source of the data than the six-sided die.
The answer is, of course, obvious without having to resort to Bayesian inference. But it is less obvious for another example. Instead of the number 39, after throwing one of the two unseen dice again, I call out the number 5. This number could have been generated by a throw of either die. Since they have the same prior probability, are both dice equally likely? Both Occam’s razor and Bayesian inference insist that the simpler hypothesis, the six-sided die, should be preferred. This can be seen if we go through the sums again. The priors of 0.5 are unchanged, and the likelihood of the sixty-sided die throwing the number 5 is the same as it was for throwing the number 39: 1/60. But whereas there was zero likelihood of the six-sided die throwing the number 39, there is a 1/6 likelihood of it throwing the number 5. When these likelihoods are multiplied by the 0.5 priors, the posterior probability for the sixty-sided die throwing a 5 is again 1/120, but the posterior probability for the six-sided die throwing the same number is now 1/12. Comparing the two hypotheses, the simpler six-sided die is ten times more likely to have been the source of the data than the sixty-sided die. Occam’s razor and Bayesian inference agree that, all things being equal, we should choose the simpler hypothesis.
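The two comparisons above can be checked in a few lines of code; this is a minimal sketch of the dice arithmetic (the helper function is mine, not from the article):

```python
# Sketch: unnormalized posteriors (likelihood x prior) for the two dice.
def posteriors(number, dice_sides=(6, 60), prior=0.5):
    """Return {sides: P(number | die) * P(die)} for each candidate die."""
    return {
        sides: (1 / sides if 1 <= number <= sides else 0.0) * prior
        for sides in dice_sides
    }

print(posteriors(39))  # {6: 0.0, 60: ~0.0083}: only the 60-sided die fits
print(posteriors(5))   # {6: ~0.083, 60: ~0.0083}: six-sided die 10x likelier
```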
Note, however, that the Bayesian razor delivers probabilities, not certainty (just as Galileo had intuited several centuries earlier). Sixty-sided dice do occasionally throw the number 5, just as complex explanations are sometimes correct. But if both simple and complex models, theories, or hypotheses account for the data equally well—just as the Ptolemaic and Copernican models did—then, according to both Occam’s razor and Bayesian inference, the simplest is the more likely source of that data.
A useful way of visualizing the Bayesian razor is as a comparison between the virtual space of possible data and the space of the actual data. In the dice example, the space of actual data is the same for both hypotheses: it is a single number. The space of possible data is, however, ten times larger for the more complex sixty-sided die than for the six-sided die. The Bayesian razor is then a measure of the degree of collapse of the space of possible data onto the space of the actual data. Simple models make sharp predictions, so the collapse is small, and they are favored over complex models whose many parameters and larger possible data space can be adjusted to fit a wider range of data. As the physicist John von Neumann famously quipped, “with four parameters I can fit an elephant, and with five I can make him wiggle his trunk”.56 In this sense, Ptolemy’s model was an astronomical elephant that, with its ready supply of huge epicycles, could have fitted almost any kind of motion in the heavens; the Bayesian likelihood of the data, given the multiplicity of possible models, was small. Although Copernicus’s model possessed a similar number of epicycles, most were tiny compared to Ptolemy’s, so the space of possible data that could be fitted to them was much more constrained, and the Bayesian likelihood of the data given the model was much larger than for Ptolemy’s. Models with even fewer parameters, such as Kepler’s elliptical solar system, make even sharper predictions, so that, in a sense, it would be a miracle if they fitted the data without also happening to be true.
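Von Neumann’s quip can be illustrated with a toy fit; here is a minimal sketch (my own example, with polynomial degrees simply standing in for simple and many-parameter models):

```python
# Sketch: a many-parameter model fits the noise; a simple one generalizes.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 10)
y = 2 * x + rng.normal(scale=0.1, size=x.size)   # truth: a straight line

simple = np.polyfit(x, y, deg=1)     # 2 parameters
elephant = np.polyfit(x, y, deg=9)   # 10 parameters: interpolates the noise
                                     # (numpy may warn: ill-conditioned fit)
x_new = np.linspace(0, 1, 200)       # judge both models on fresh inputs
truth = 2 * x_new
err_simple = np.abs(np.polyval(simple, x_new) - truth).max()
err_elephant = np.abs(np.polyval(elephant, x_new) - truth).max()
print(err_simple, err_elephant)      # the flexible fit strays much further
```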
The giants of modern science—such as Kepler, Galileo, or Newton—did not need Bayesian inference to know that Copernicus’ model was simpler than Ptolemy’s. As modern scientists, they had an instinctive regard and respect for simplicity.^f Karl Popper—a favorite philosopher of many contemporary scientists and widely considered a giant of the philosophy of science—argued that simplicity goes together with the criterion of falsifiability. In The Logic of Scientific Discovery, Popper wrote that

Above all, our theory explains why simplicity is so desirable. To understand this there is no need to assume ‘a principle of economy of thought’ or anything of this kind. Simple statements, if knowledge is our object, are to be prized more highly than less simple ones because they tell us more; because their empirical content is greater; and because they are better testable. [Popper’s italics]14

Simple theories make sharper predictions than complex theories, and so can be disproved by a greater range of data.

^f Tall tales tend to be long tales, and they knew that it would take a very long tale to provide a physical model that was consistent with the Ptolemaic system.
Occam’s razor and biology
But when is it justified to opt for a more complex model? This can be explored in the field of biology, where the role of the razor is most
often challenged with claims that biological systems are irreducibly complex in the sense that they cannot be reduced to a collection of parts. On the basis of such arguments, the biochemist and popular science writer Michael Behe argued that this irreducibility could not be accounted for by standard neo-Darwinian evolutionary theory.57 Although this claim has been widely refuted in the biological literature (see, e.g., Ref. 58), it does highlight the danger of accepting that there are areas of science where Occam’s razor is inappropriate.

One of the areas where the role of Occam’s razor remains particularly controversial is its application to complex systems, such as in the science of systems biology. Model construction is central to systems biology, but there remains a tension between the modeler’s aim of completeness and the use of Bayesian approaches for model selection, which automatically incorporate a preference for simple models.
For example, genome-scale modelling of phenomena such as metabolism or intracellular signaling tends to aim for completeness by including as many enzymatic steps, genes, regulators, and pathways as can be deduced from genomes. However, the problem of fitting, for example, metabolic flux values to networks scales exponentially with the size of the network, so it becomes computationally intractable for genome-scale networks with potentially many hundreds or thousands of enzymatic steps. For this reason, techniques such as 13C-metabolic flux analysis (MFA)59,60 use only relatively small-scale networks of up to 100 or so reactions, representing sub-pathways such as central metabolism, that can be parameterized with data such as mass spectrometry-derived isotopomer fractions. Even these networks are typically underdetermined, so there are many, usually millions, of possible flux solutions that need to be explored with methods such as genetic algorithms, Monte Carlo simulations, or simulated annealing61 to find the set of parameter values that best fits the data. There will usually be many possible “best fit” solutions, ranging from the simplest to much more complex. What criterion should be used to sort them? The philosopher of science Fridolin Gross advised against the use of Occam’s razor, writing that “systems biology offers formal models as a remedy for such inferential failures, but the kind of simplicity introduced in the models drives them away from the truth”.62

FIGURE 2 Hypothetical metabolic pathway in which metabolite A is converted to metabolite E via any combination of pathways B, C, or D.
Consider a trifurcating step in a metabolic pathway in a microorganism, as illustrated in Figure 2, where one mole of metabolite A is converted to one mole of metabolite E via three alternative metabolic intermediates, B, C, or D, mediated by three different enzymatic reactions in pathways B, C, or D. The task is to determine the actual flux through the network. Methods such as 13C-MFA may be applied to determine the parameter values for fluxes B, C, and D that best fit the data. Clearly, the simplest solution has only a single flux with normalized value 1 through B, C, or D, as this would generate the same data as the situation where only a single pathway existed, and one pathway is clearly simpler than three. Let’s imagine that the simplest solution has flux going only through B. However, data are inherently noisy, so not all the data will be accounted for by a single-flux solution. The experiment is performed, the data are fitted, and both maximum likelihood and Bayesian methods are used to fit the noisy data to the model. A typical result from Bayesian methods that incorporate Occam’s razor, and thereby favor the simplest solutions, would be to predict that all the flux goes through pathway B, since one flux is simpler than two fluxes. Maximum likelihood methods that favor fit of the data to the model would likely predict a more complex solution in which, say, 90% of flux goes through pathway B and about 5% of flux goes through each of C and D. Gross’s advice would be to avoid the Bayesian approaches with their preference for simple solutions that, he warned, drive the scientist further from the “truth”.
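The contrast between the two approaches can be sketched with a toy calculation. The model comparison below is my own construction, not a real 13C-MFA pipeline; it uses the Bayesian information criterion (BIC) as a crude stand-in for the full Bayesian evidence, penalizing each extra free flux:

```python
# Sketch: one-pathway versus three-pathway models of the Figure 2 network.
import numpy as np

rng = np.random.default_rng(2)
sigma, n_rep = 0.05, 50
true_flux = np.array([1.0, 0.0, 0.0])      # in truth, all flux goes via B
data = true_flux + rng.normal(scale=sigma, size=(n_rep, 3))  # noisy fluxes

def bic(model_flux, n_params):
    # -2 x Gaussian log-likelihood (up to a constant) + complexity penalty
    chi2 = np.sum((data - model_flux) ** 2) / sigma**2
    return chi2 + n_params * np.log(data.size)

# Simple model: pathway B carries all the flux; no free flux parameters.
bic_simple = bic(np.array([1.0, 0.0, 0.0]), n_params=0)

# Complex model: B, C, D all active; maximum likelihood uses the sample
# means, i.e., 2 free fluxes once they are constrained to sum to one.
bic_complex = bic(data.mean(axis=0), n_params=2)

print(bic_simple, bic_complex)   # lower wins: here the simple model does
```

The maximum likelihood fit always reduces the misfit term, because it absorbs some of the noise into the extra fluxes; the penalty term is what implements the razor.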
But what is “truth” in model fitting? Gross’s advice implies that we somehow know the truth prior to the experiment. In a sense, we do, in
that the genome sequence, transcriptomics, and proteomics data may
all agree that pathways C and D are encoded in the genome of the
imaginary organism. If science is about discovering truth, then surely
all three pathways should be included in any solution.
A key insight into the nature of science was made some seven centuries ago by William of Occam, who wrote that:

the science of nature is neither about the things that are born and die, nor about natural substances, nor about the things we see moving around .... Properly speaking, the science of nature is about intentions of the mind that are common to such things, and that stand precisely for such things in many statements.63

In this extraordinary statement, William was, I believe, pointing out that science isn’t strictly about truth in the sense of attaining complete knowledge of the nature of real objects in the world, because, ultimately, there is no way to (as one might say) “peek behind the veil of our sensory perceptions” to confirm that models derived from sensory perceptions and data correspond to a truth in the world. Instead, science is about constructing mental models or theories that, once generated, make psychological sense of the sensory data generated by ultimately unknowable objects in the world.
Note, however, that although, as a nominalist, William opposed the prevailing medieval notion of philosophical realism—that abstract qualities are real things existing out there in the world—he was not an anti-realist in the modern sense of denying that relationships between objects are extra-mental. For William, “the intellect does nothing to bring it about that the universe is one, or that a whole is composed [of its parts], or that ... a triangle has three [angles]”.64 So objects, such as
triangles, and the relations between them, are for William real existing
things in the world.
In my view, the point William was making with his “intentions of the mind” statement is that science is about constructing mental models with terms, such as “mass”, that are “common to such things” in the sense that they represent a real property of real objects and “stand precisely for such things in many statements”—terms such as those in the law of mass action or Newton’s laws of motion that allow many statements to be made, for example about the rate of fall of apples or the motion of planets. But William also realized that if a scientific theory (though not the objects it aims to describe) is just a mental construct, then there is no limit to the number of (mental) entities that can be included. To avoid the pitfall of “over-parameterized” models (i.e., those having entities beyond necessity, such as angels), William proposed that science should adopt the principle of simplicity—his eponymous razor.
Adopting this perspective, a Bayesian solution that favors a single pathway from A to E is not making an ontological claim about truth in the world but an epistemological claim that there is no evidence in the data for the operation of pathways ACE or ADE, so they should be removed from the model used to explain the data.
The question then arises: when is it justified to include pathways C and D in the model? The answer is: when there is evidence for their activity in the data; for example, when metabolites are detected that are present only in those pathways. Importantly, this doesn’t mean that one simply abandons Occam’s razor. Instead, the additional data are incorporated into Bayes’ equation, which, like the arrival of the number 39 in the dice example, will then output zero probability for solutions that do not include pathways C and D.
The alternative approach of adopting more complex solutions on the basis of prior knowledge of what is true is likely to result in errors, such as fitting experimental noise to inactive model pathways and thereby delivering the systems biology equivalent of von Neumann’s elephant. Another important advantage of using simpler models in biology is that noise in the data remains as such (noise), rather than being overfitted to parameters. Experimental and theoretical studies have demonstrated an important role for noise in biological systems where, for example, it can give rise to control properties of metabolic systems.65,66 Fitting noise to deterministic models may overlook a functional role for noise in biological systems, thereby generating erroneous conclusions.
CONCLUSIONS
To appreciate the value of Occam’s razor, it is instructive to follow the career of Robert Boyle, often hailed as the father of modern chemistry, who began his chemical career as a mystic and alchemist. He built a laboratory at Stalbridge where he indulged his passion for experiments that followed obscure formulae with exotic ingredients and instructions such as “alter and dissolve the sea and the woman between winter and spring”; he also wrote excitedly about a worm on the “Sombrero Coast” that transforms first “into a tree or then into a stone.” He related tales of a “foreign chemist” who claimed to have met a tonsured monk who could summon wolves out of thin air.67 Although this all seems very bizarre and nonsensical today, many of the greatest intellects of the Enlightenment (including Newton, though later than Boyle, and Johann Joachim Becher, who developed the phlogiston theory of combustion) were adherents of alchemy. Why did chemistry flourish whereas alchemy died, when both utilized mathematics, experiment, and other tools of science? In The Sceptical Chymist: or Chymico-Physical Doubts & Paradoxes, published in 1661, the mature Boyle provides us with his own reasoning, arguing against

noble Experiments, Theories, which either like Peacock’s feathers made a great show, but are neither solid nor useful, or else like Apes, if they have some appearance of being rational, are blemished with some absurdity or other, that when they are Attentively consider’d, makes them appear Ridiculous.68
Boyle recognized that the greatest challenge for 17th-century scientists was to identify those theories that give rise to solid and useful science. He provided ten key principles by which “good and excellent hypotheses” could be separated from the “peacock theories”. Around half are based, in one way or another, on the principle of simplicity. For example, the seventh principle asserts that good theories should be “clearly intelligible”,22 a mark of simple theories. His sixth principle is more explicit, stating that “a great part of the work of true philosophers has been, to reduce the true principles of things to the smallest number they can, without making them insufficient.” Boyle doesn’t identify who the true philosophers were, but in another passage he refers “to the generally owned rule about hypotheses, that entia non sunt multiplicanda absque necessitate” (“entities must not be multiplied beyond necessity”, i.e., Occam’s razor).22,69 As already noted, Newton included the razor in his principles of science, so that, despite being a believer, he excluded alchemy and other mystical notions from his greatest scientific work, The Principia.
The medieval world was beset with esoteric theories filled with angels, demons, gods, fabulous animals, mystical influences, and both benign and malign spirits that could transform base metals into gold or worms into trees, as well as fish that could sink ships.70 By applying Occam’s razor, the giants of the Enlightenment and the Scientific Revolution rejected entities beyond necessity—and thereby mysticism, religion, and theology—to forge modern science. Our world today is seemingly beset with belief in theories of divine, mystical, and supernatural forces, as well as peacock theories that now go by the names “pseudoscience” and “fake news.” We should, following the giants of modern science, keep Occam’s razor close when practicing or teaching science, not least because simple theories are more easily communicated and understood. The message that science is, ultimately, the method by which we use the tools of experimentation, mathematics, and logic to find the simplest explanations of the complex phenomena of our world provides a clear mission statement for the whole of the scientific enterprise.
AUTHOR CONTRIBUTIONS
JJMcF was responsible for conceptualization, writing, revising and
editing.
COMPETING INTERESTS
The author declares no competing interests.
ORCID
Johnjoe McFadden https://orcid.org/0000-0003-2145-0046
PEER REVIEW
The peer review history for this article is available at https://publons.com/publon/10.1111/nyas.15086
REFERENCES
1. Newport, F. (2022). Fewer in US Now See Bible as Literal Word of God.
Gallup News Posted on news gallup.com, 6.
2. Sallam, M. (2021). COVID-19 vaccine hesitancy worldwide: A con-
cise systematic review of vaccine acceptance rates. Vaccines,9(2),
160.
3. Funk, C., Tyson, A., & Kennedy, B. (2020). Science and scientists held in
high esteem across global publics. Pew Research Center, 29.
4. Alam, A. (2022). Psychological, Sociocultural, and Biological Eluci-
dations for Gender Gap in STEM Education: A Call for Translation
of Research into Evidence-Based Interventions. Proceedings of the
2nd International Conference on Sustainability and Equity (ICSE-2021).
Atlantis Highlights in Social S.
5. Tascilar, M., De Jong, F. A., Verweij, J., & Mathijssen, R. H. J. (2006).
Complementary and alternative medicine during cancer treatment:
Beyond innocence. The Oncologist,11(7), 732–741.
6. Relton, C., Cooper, K., Viksveen, P., Fibert, P., & Thomas, K. (2017).
Prevalence of homeopathy use by the general population worldwide:
A systematic review. Homeopathy,106(02), 69–78.
7. Hotez, P. J. (2021). Anti-science kills: From Soviet embrace of pseudo-
science to accelerated attacks on US biomedicine. PLoS Biology,19(1),
e3001068.
8. Cattaneo, E., & Corbellini, G. (2014). Stem cells: Taking a stand against
pseudoscience. Nature,510(7505), 333–335.
9. Greedy, D. (2018). Golden Rice is safe to eat, says FDA. Nature
Biotechnology,36(7), 559.
10. Kumar, K., Gambhir, G., Dass, A., Tripathi, A. K., Singh, A., Jha, A. K.,
Yadava, P., Choudhary, M., & Rakshit, S. (2020). Genetically modified
crops: Current status and future prospects. Planta,251, 1–27.
11. Huber, R. A., Greussing, E., & Eberl, J.-M. (2022). From populism to cli-
mate scepticism: The role of institutional trust and attitudes towards
science. Environmental Politics,31(7), 1115–1138.
12. Krauss, A., & Colombo, M. (2020). Explaining public understanding
of the concepts of climate change, nutrition, poverty and effective
medical drugs: An international experimental survey. PLoS ONE,15(6),
e0234036.
13. Boudry, M., Blancke, S., & Pigliucci, M. (2015). What makes weird
beliefs thrive? The epidemiology of pseudoscience. Philosophical Psy-
chology,28(8), 1177–1198.
14. Popper, K. (2005). The Logic of Scientific Discovery. Routledge.
15. O’raifeartaigh, C., & Mitton, S. (2018). Interrogating the Legend of
Einstein’s ‘Biggest Blunder.Physics in Perspective,20(4), 318–341.
16. Westfall, R. S. (1973). Newton and the fudge factor. Science,179(4075),
751–758.
17. Ladyman, J. (2011). Structural realism versus standard scientific real-
ism: The case of phlogiston and dephlogisticated air. Synthese,180(2),
87–101.
18. Ellis, G., & Silk, J. (2014). Scientific method: Defend the integrity of
physics. Nature,516(7531), 321–323.
19. https://en.wikipedia.org/wiki/Science
20. Osborne, J. (2014). Teaching scientific practices: Meeting the chal-
lenge of change. Journal of Science Teacher Education,25(2), 177–196.
21. Sober, E. (2015). Ockham’s Razors. Cambridge University Press.
22. McFadden, J. (2021). Life is Simple: How Occam’s Razor Set Science Free
and Shapes the Universe. Basic Books.
23. Gillespie, M. A. (2008). The Theological Origins of Modernity.The
University of Chicago Press.
24. Hooykaas, R. (1987). The rise of modern science: When and why? The
British Journal for the History of Science,20(4), 453–473.
25. Duhem, P. (2015). To Save the Phenomena, an Essay on the Idea of Physical
Theory from Plato to Galileo. University of Chicago Press.
26. Shapiro, H. (1964). Medieval Philosophy: Selected Readings From Augus-
tine to Buridan. New York, Modern Library.
27. Rosen, E. (1937). The Commentariolus of Copernicus. Osiris, 3, 123–
141. http://www.jstor.org/stable/301584
28. Martens, R. (2009). Harmony and simplicity: Aesthetic virtues and the
rise of testability. Studies in History and Philosophy of Science Part A,
40(3), 258–266.
29. Copernicus, N. (1978). De revolutionibus orbium coelestium (On the
Revolutions of the Heavenly Spheres). Available online at e.g. http://
adsharvardedu/books/1543drocbook (1543)
30. Rosen, E. (1971). The Commentarialus of Copernicus, The Letter against
Werner, The Narratio prima of Rheticus. Edward Rosen, trans.
31. Gingerich, O. (1974). The astronomy and cosmology of Copernicus.
Highlights of Astronomy,3, 67–85.
32. Galilei, G. (1953). Dialogue Concerning the Two Chief World Systems,
Ptolemaic and Copernican. Univ of California Press.
33. Miller, R. (1984). René Descartes: Principles of Philosophy.Translated, with
Explanatory Notes. Springer Science & Business Media.
34. Newton, I. (1999). The Principia: Mathematical Principles of Natural
Philosophy. Univ of California Press.
35. Wallace, A. R. (2007). Darwinism: An Exposition of the Theory of Natural
Selection with Some of Its Applications. Cosimo, Inc.
36. Norton, J. D. (2000). Nature is the realisation of the simplest con-
ceivable mathematical ideas’: Einstein and the canon of mathematical
simplicity. Studies in History and Philosophy of Science Part B: Studies in
History and Philosophy of Modern Physics,31(2), 135–170.
37. Jackson, J. D. (1999). Classical Electrodynamics. American Association
of Physics Teachers.
38. Grech, R., Cassar, T., Muscat, J., Camilleri, K. P., Fabri, S. G., Zervakis,
M., Xanthopoulos, P., Sakkalis, V., & Vanrumste, B. (2008). Review
on solving the inverse problem in EEG source analysis. Journal of
Neuroengineering and Rehabilitation,5(1), 1–33.
39. Baillet, S. (2014). Forward and inverse problems of MEG/EEG. D.
Jaeger, & R. Jung. Encyclopedia of Computational Neuroscience, 1–8.
40. Merritt, D. (2020). A philosophical Approach to MOND: Assessing the Mil-
gromian Research Program in Cosmology. Cambridge UniversityPress.
41. Kuhn, T. S. (2012). The Structure of Scientific Revolutions. University of
Chicago Press.
42. Koestler, A. (2017). The Sleepwalkers: A History of Man’s Changing Vision
of the Universe. Penguin UK.
43. Kuhn, T.S. (1957). The Copernican Revolution: Planetary Astronomy in the
Development of Western Thought. Harvard University Press.
44. Bleakley, A. (2010). Blunting Occam’s razor: Aligning medical educa-
tion with studies of complexity. Journal of Evaluation in Clinical Practice,
16(4), 849–855.
45. Westerhoff, H. V., Winder, C., Messiha, H., Simeonidis, E., Adamczyk,
M., Verma, M., Bruggeman, F. J., & Dunn, W. (2009). Systems biology:
The elements and principles of life. Febs Letters,583(24), 3882–3890.
46. Ball, P. (2016). The Tyranny of Simple Explanations. The Atlantic.
https://www.theatlantic.com/science/archive/2016/08/occams-
razor/495332/
ANNALS OF THE NEW YORK ACADEMY OF SCIENCES 17
47. Al-Khalili, J. (2022). Cutting Down Ockham's Razor. https://www.openmindmag.org/articles/the-deceptive-allure-of-simplicity
48. Mazin, I. (2022). Inverse Occam's razor. Nature Physics, 18(4), 367–368.
49. Feyerabend, P. (1993). Against Method. Verso.
50. Geelan, D. R. (1997). Epistemological anarchy and the many forms of constructivism. Science & Education, 6, 15–28.
51. Pennock, R. T. (2010). The postmodern sin of intelligent design creationism. Science & Education, 19, 757–778.
52. Jeffreys, H. (1939). Theory of Probability. Clarendon Press.
53. Jefferys, W. H., & Berger, J. O. (1991). Sharpening Ockham’s razor on a
Bayesian strop. Technical Report.
54. Jefferys, W. H., & Berger, J. O. (1992). Ockham's razor and Bayesian analysis. American Scientist, 80(1), 64–72.
55. MacKay, D. J. C. (1992). Bayesian interpolation. Neural Computation, 4(3), 415–447.
56. Dyson, F. (2004). A meeting with Enrico Fermi. Nature, 427(6972), 297.
57. Behe, M. J., Dembski, W., & Ruse, M. (2010). Irreducible complexity: Obstacle to Darwinian evolution. In R. Arp & A. Rosenberg (Eds.), Philosophy of Biology: An Anthology (pp. 427–439). Wiley.
58. Miller, K. R. (2010). The flagellum unspun—The collapse of 'irreducible complexity'. In R. Arp & A. Rosenberg (Eds.), Philosophy of Biology: An Anthology (pp. 439–449). Wiley.
59. Wiechert, W., Möllney, M., Petersen, S., & De Graaf, A. A. (2001). A universal framework for 13C metabolic flux analysis. Metabolic Engineering, 3(3), 265–283.
60. Zamboni, N., Fendt, S.-M., Rühl, M., & Sauer, U. (2009). 13C-based metabolic flux analysis. Nature Protocols, 4(6), 878–892.
61. Ashyraliyev, M., Fomekong-Nanfack, Y., Kaandorp, J. A., & Blom, J. G. (2009). Systems biology: Parameter estimation for biochemical models. The FEBS Journal, 276(4), 886–902.
62. Gross, F. (2019). Occam's razor in molecular and systems biology. Philosophy of Science, 86(5), 1134–1145.
63. Kaye, S. M., & Martin, R. M. (2001). On Ockham. Cengage
Learning.
64. Ockham, W. Ordinatio I, d. 30, q. 1, in Opera Theologica IV, pp. 316–317 [quoted in https://plato.stanford.edu/entries/relations-medieval/]. In P. Boehner et al. (Eds.), Opera Philosophica et Theologica. St. Bonaventure, NY: The Franciscan Institute (1967–86).
65. Westerhoff, H. V., & Chen, Y.-D. (1984). How do enzyme activities
control metabolite concentrations? European Journal of Biochemistry,
142(2), 425–430.
66. Rocco, A. (2009). Stochastic control of metabolic pathways. Physical Biology, 6(1), 016002.
67. Pilkington, R. (1959). Robert Boyle: Father of Chemistry. J. Murray.
68. Sargent, R.-M. (2009). The Diffident Naturalist: Robert Boyle and the
Philosophy of Experiment. University of Chicago Press.
69. Wojcik, J. W. (2002). Robert Boyle and the Limits of Reason. Cambridge
University Press.
70. Copenhaver, B. P. (1991). A tale of two fishes: Magical objects in natural history from antiquity through the Scientific Revolution. Journal of the History of Ideas, 52(3), 373–398.
How to cite this article: McFadden, J. (2023). Razor sharp: The role of Occam's razor in science. Ann NY Acad Sci, 1530, 8–17. https://doi.org/10.1111/nyas.15086