
Abstract

Ockham’s razor is the characteristic scientific penchant for simpler, more testable, and more unified theories. Glymour’s early work on confirmation theory (1980) eloquently stressed the rhetorical plausibility of Ockham’s razor in scientific arguments. His subsequent, seminal research on causal discovery (Spirtes et al. 2000) still concerns methods with a strong bias toward simpler causal models, and it also comes with a story about reliability—the methods are guaranteed to converge to true causal structure in the limit. However, there is a familiar gap between convergent reliability and scientific rhetoric: convergence in the long run is compatible with any conclusion in the short run. For that reason, Carnap (1945) suggested that the proper sense of reliability for scientific inference should lie somewhere between short-run reliability and mere convergence in the limit. One natural such concept is straightest possible convergence to the truth, where straightness is explicated in terms of minimizing reversals of opinion (drawing a conclusion and then replacing it with a logically incompatible one) and cycles of opinion (returning to an opinion previously rejected) prior to convergence. We close the gap between scientific rhetoric and scientific reliability by showing (1) that Ockham’s razor is necessary for cycle-optimal convergence to the truth, and (2) that patiently waiting for information to resolve conflicts among simplest hypotheses is necessary for reversal-optimal convergence to the truth.
Synthese (2016) 193:1191–1223
DOI 10.1007/s11229-015-0993-9
S.I.: THE PHILOSOPHY OF CLARK GLYMOUR
Realism, rhetoric, and reliability
Kevin T. Kelly¹ · Konstantin Genin¹ · Hanti Lin²
Received: 13 July 2014 / Accepted: 11 December 2015 / Published online: 15 April 2016
© Springer Science+Business Media Dordrecht 2016
Hanti Lin (corresponding author)
ika@ucdavis.edu
Kevin T. Kelly
kk3n@andrew.cmu.edu
Konstantin Genin
kgenin@andrew.cmu.edu
¹ Carnegie Mellon University, Pittsburgh, USA
² University of California at Davis, Davis, USA
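To make the abstract's two measures of "straightness" concrete, the following is a minimal illustrative sketch, not taken from the paper itself: hypotheses are modeled as sets of possible worlds, a reversal is counted whenever a new conjecture is logically incompatible with the one it replaces, and a cycle is counted whenever the method returns to a conjecture it had previously dropped. All names and the toy data are hypothetical, and the paper's official definitions may differ in detail.

# Minimal sketch: hypotheses as frozensets of possible worlds; a method's
# output along an information stream is a list of successive conjectures.

def count_reversals(conjectures):
    """Count the steps at which the new conjecture shares no possible
    world with (is logically incompatible with) the one it replaces."""
    reversals = 0
    for previous, current in zip(conjectures, conjectures[1:]):
        if current != previous and previous.isdisjoint(current):
            reversals += 1
    return reversals

def count_cycles(conjectures):
    """Count the steps at which the method re-adopts a conjecture
    that it had previously adopted and then dropped."""
    cycles = 0
    dropped = set()
    for previous, current in zip(conjectures, conjectures[1:]):
        if current != previous:
            dropped.add(previous)
            if current in dropped:
                cycles += 1
    return cycles

# Toy run: three possible worlds 0, 1, 2.
H1, H2, H3 = frozenset({0}), frozenset({1, 2}), frozenset({0, 1})
history = [H1, H2, H1, H3]       # adopt H1, reverse to H2, cycle back to H1, then H3
print(count_reversals(history))  # 2  (H1 -> H2 and H2 -> H1; H1 -> H3 is compatible)
print(count_cycles(history))     # 1  (H1 is re-adopted after being dropped)

On this toy history the method incurs two reversals and one cycle before settling; roughly, reversal-optimal (cycle-optimal) convergence in the abstract's sense means converging to the truth while doing no worse than necessary on the corresponding count.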
... The idea is that the extent to which a theory's measure of universality is truth-conducive is determined in part by a posteriori considerations, not just by a priori ones (cf. Sober 1994, p. 141). One such attempt is found in Kelly et al. (2016), which can be viewed as elaborating upon Popper's notion of universality by, first, critiquing the relation he draws between simplicity and falsifiability and, second, revising it in terms of necessary simplicity. The basic critique is that a theory's extent of possible falsifiers may be silent on the overall complexity of that theory. ...
... "For instance, if general relativity theory GRT is falsifiable, then GRT ∧ P has at least as many potential falsifiers as GRT, but P could be hopelessly complex, in which case the conjunction GRT ∧ P appears to be more complex than GRT alone, contrary to Popper's proposal" (Kelly et al. 2016, p. 1207). ...
... What then allows one to evaluate whether GRT ∧ P or GRT is the better theory must be based on subsequent empirical tests. It is this withholding of judgment, until one can empirically distinguish between two otherwise equivalent theories, that Kelly et al. (2016) express as the condition of patience for truth-optimized solutions of empirical problems (p. 1214). ...
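As a gloss on the falsifier point in the passage quoted above: writing F(T) for the set of potential falsifiers of a theory T is notation introduced here for illustration, not the authors' own, but the set-theoretic fact behind the quotation can be put in one line:

\[
  F(\mathrm{GRT}) \subseteq F(\mathrm{GRT} \wedge P),
  \qquad \text{where } F(T) := \{\, E : E \text{ is evidence inconsistent with } T \,\},
\]

since GRT ∧ P entails GRT, so any evidence refuting GRT also refutes the conjunction. Hence the conjunction has at least as many potential falsifiers as GRT alone, even though an arbitrarily complex P makes GRT ∧ P intuitively more complex, which is the tension with Popper's proposal that the excerpt highlights.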
Preprint
Abstract: Popper's account of science is an endeavour to establish the relationship between universality and truth. The idea is that the more an empirical law is universal, by precluding certain realities from obtaining in an evidentially falsifiable way, the more the law is supported by instances of its predictions being evidentially verified. The logical structure of this dynamic is captured by Popper's notion of 'corroboration'. However, this notion is suspect, for, depending on one's interpretation of evidential givenness, the relation between a law's degree of universality and evidential corroborability could instead invert, thereby contradicting Popper. This paper also explores how a conceptualization of universality in terms of necessary simplicity (i.e., a measure of simplicity that is also sensitive to the evidence at hand) can better recontextualize evidential givenness to be about evidential support for a theory's predictive truth conduciveness, against Popper's understanding of evidential support for a theory's veracity concerning the evidence at hand. However, it is argued that employing necessary simplicity to attain truth conduciveness in a theory's predictions must appeal to specific background assumptions concerning the state of affairs the evidence is supposed to be about. When these background assumptions are denied as being necessarily instantiated, the relation between necessary simplicity and truth conduciveness becomes contingently uncertain.
... They are power-conjugated with rupture and reformation of atomic (or molecular) bonds, and with changes of material symmetry (the couples). In other words, along the path followed here there is economy in the representation of actions, following the guidelines suggested by Ockham's razor, if we accept and use it while being careful to avoid epistemic problems (for this last philosophical aspect see [16] and also [17,18]). Then, we write a non-standard version of the energy balance that includes the relative power (§4). ...
Article
We look at a mechanical dissipation inequality differing from the standard one by what we call a relative power, a notion that is appropriate in the presence of material mutations. We prove that a requirement of structural invariance for such an inequality under the action of diffeomorphism-based changes of observers (covariance) implies (i) the representation of contact actions in terms of the first Piola–Kirchhoff stress, (ii) local balances of standard and configurational actions, (iii) a priori constitutive restrictions in terms of free energy, and (iv) a representation of viscous-type stress components. This article is part of the theme issue ‘Foundational issues, analysis and geometry in continuum mechanics’.
... She notes that science has long shown a preference for more parsimonious models, not out of mere aesthetic whimsy, but because of well-founded principles regarding the inherent simplicity of nature (Baker, 2016). Recent results in formal learning theory confirm that an Ockham's Razor approach to hypothesis testing is the optimal strategy for convergence to the truth under minimal topological constraints (Kelly et al., 2016). Breiman (2001) famously introduced the idea of a Rashomon set: a collection of models that estimate the same functional relationship using vastly different algorithmic assumptions and methods, yet all perform reasonably well (say, within 5% of the top-performing model). ...
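To make the Rashomon-set idea in the excerpt above operational, here is a minimal sketch; the 5% tolerance follows the excerpt, while the function name, the relative-score reading of "within 5%", and the toy scores are assumptions of this illustration.

# Illustrative sketch: keep every candidate model whose score lies within a
# tolerance (here 5%, read relatively) of the best-performing model.

def rashomon_set(models, score, tolerance=0.05):
    """Return the models whose score is within `tolerance` of the best.

    `models` is an iterable of candidates; `score` maps a candidate to a
    performance number where higher is better (e.g., held-out accuracy)."""
    scored = [(model, score(model)) for model in models]
    best = max(s for _, s in scored)
    return [model for model, s in scored if s >= best * (1.0 - tolerance)]

# Toy usage: candidates identified by name, scores are pretend held-out accuracies.
accuracy = {"tree": 0.84, "lasso": 0.83, "net": 0.70}
print(rashomon_set(accuracy, accuracy.get))   # ['tree', 'lasso']

Passing the dict iterates over its keys; "net" falls outside the 5% band around the best score (0.84), so "tree" and "lasso" form the Rashomon set in this toy example.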
Chapter
We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation(s) for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal patterns of variable granularity and scope. We characterise the conditions under which such a game is almost surely guaranteed to converge on a (conditionally) optimal explanation surface in polynomial time, and highlight obstacles that will tend to prevent the players from advancing beyond certain explanatory thresholds. The game serves a descriptive and a normative function, establishing a conceptual space in which to analyse and compare existing proposals, as well as design new and improved solutions.
... Claveau and Grenier 2019; Kummerfeld and Danks 2014; Landes 2021; Trpin et al. 2021; Schippers 2014; Schindler 2011; Kelly et al. 2016; Mayo-Wilson 2014; Olsson and Schubert 2007; Pittard 2017), we thought that it would be beneficial to provide a forum for an open exchange of ideas, in which philosophers working in different paradigms could come together. Our call for expressions of interest received a great variety of promised manuscripts; that variety is reflected in the published papers. ...
... She notes that science has long shown a preference for more parsimonious models, not out of mere aesthetic whimsy, but because of well-founded principles regarding the inherent simplicity of nature (Baker 2016). Recent results in formal learning theory confirm that an Ockham's Razor approach to hypothesis testing is the optimal strategy for convergence to the truth under minimal topological constraints (Kelly et al. 2016). Breiman (2001) famously introduced the idea of a Rashomon set: a collection of models that estimate the same functional relationship using different algorithms and/or hyperparameters, yet all perform reasonably well (say, within 5% of the top-performing model). ...
Article
We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation(s) for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal patterns of variable granularity and scope. We characterise the conditions under which such a game is almost surely guaranteed to converge on a (conditionally) optimal explanation surface in polynomial time, and highlight obstacles that will tend to prevent the players from advancing beyond certain explanatory thresholds. The game serves a descriptive and a normative function, establishing a conceptual space in which to analyse and compare existing proposals, as well as design new and improved solutions.
... Martin and Irvine 1984a, b; Perovic et al. 2016) and the inductive approach (IA) (e.g. Genin and Kelly 2015; Kelly et al. 2016; Baltag et al. 2015; Kelly 2004; Schulte 2000, 2018). The former seeks to identify the optimal organizational structure of agents of scientific knowledge-production, such as individual researchers, research groups, laboratories, etc. ...
Article
We argue that inductive analysis (based on formal learning theory and the use of suitable machine learning reconstructions) and operational (citation metrics-based) assessment of the scientific process can be justifiably and fruitfully brought together, whereby the citation metrics used in the operational analysis can effectively track the inductive dynamics and measure the research efficiency. We specify the conditions for the use of such inductive streamlining, demonstrate it in the cases of high energy physics experimentation and phylogenetic research, and propose a test of the method’s applicability. The full text is available on the following link: https://rdcu.be/bNGvo
Article
Aim: To explore healthful leadership practices in nursing and midwifery evident within the Covid-19 pandemic in the UK, the contextual facilitators, barriers, and outcomes. Background: Globally, the health and care sector is under pressure and, despite nurses and other professionals demonstrating resilience and resourcefulness in the COVID-19 pandemic, this has negatively impacted on their health and wellbeing and on patient care. Evaluation: Two searches were conducted in July 2021 and December 2021. Inclusion/exclusion criteria were identified to refine the search, including papers written since the beginning of the pandemic in 2020. A total of 38 papers were included, principally from the USA and UK; 10 were research papers, and the others were commentaries, opinion pieces and editorials. An MS Teams literature repository was created. A unique critical appraisal tool was devised to capture contexts, mechanisms and outcomes whilst reflecting more standardised tools, i.e., the Critical Appraisal Skills Programme and the Authority, Accuracy, Coverage, Objectivity and Date (AACOD) tool for reviewing grey literature, to refine the search further. Key issues: Six tentative theories of healthful leadership emerged from the literature around leadership strategies which are relational: being visible and present; being open and engaging; caring for self and others; embodying values; being prepared and preparing others; and using available information and support. Contextual factors that enable healthful leadership practices are in the main created by leaders' values, attributes, and style, as well as the culture within which they lead. The literature suggests leaders who embody values of compassion, empathy, courage, and authenticity create conditions for positive and healthful relations between leaders and others. Nurses' and midwives' voices are, however, absent from the literature in this review. Conclusion: Currently available literature would suggest healthful leadership practices are not prioritized by nurse leaders, but nurses' and midwives' perspectives on the impact of such practices on their well-being are missing. Tentative theories are offered as a means of identifying healthful leadership strategies, the contexts that enable these, and potential outcomes for nurses and midwives. These will be explored in phase two of this study. Implications for nursing management: Nurse leaders must be adequately prepared to create working environments that support nurses' and midwives' wellbeing, so that they may be able to provide high-quality care. Ensuring a supportive organisational culture which embodies the values of healthfulness may help to mitigate the impact of the COVID-19 pandemic on nurses' and midwives' wellbeing in the immediate aftermath and going forward.
Article
Evaluative studies of inductive inferences have been pursued extensively with mathematical rigor in many disciplines, such as statistics, econometrics, computer science, and formal epistemology. Attempts have been made in those disciplines to justify many different kinds of inductive inferences, to varying extents. But somehow those disciplines have said almost nothing to justify a most familiar kind of induction, an example of which is this: “We’ve seen this many ravens and they all are black, so all ravens are black.” This is enumerative induction in its full strength. For it does not settle for a weaker conclusion (such as “the ravens observed in the future will all be black”); nor does it proceed with any additional premise (such as the statistical IID assumption). The goal of this paper is to take some initial steps toward a justification for the full version of enumerative induction, against counterinduction, and against the skeptical policy. The idea is to explore various epistemic ideals, mathematically defined as different modes of convergence to the truth, and look for one that is weak enough to be achievable and strong enough to justify a norm that governs both the long run and the short run. So the proposal is learning-theoretic in essence, but a Bayesian version is developed as well.
Article
We investigate the issues of inductive problem-solving and learning by doxastic agents. We provide topological characterizations of solvability and learnability, and we use them to prove that AGM-style belief revision is "universal", i.e., that every solvable problem is solvable by AGM conditioning.
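To make "AGM conditioning" concrete, here is one standard way to implement it, assuming beliefs are generated by a total plausibility preorder over possible worlds given as a rank function; this is a sketch under that assumption, not the cited paper's own formalism.

# Plausibility-based conditioning: restrict to the worlds consistent with all
# evidence received so far and believe exactly the most plausible survivors.
# (Lower rank = more plausible.) Illustrative names and toy data only.

def condition(worlds, rank, evidence_stream):
    """Yield the successive belief sets produced by conditioning."""
    remaining = set(worlds)
    for evidence in evidence_stream:      # each piece of evidence is a set of worlds
        remaining &= evidence
        if not remaining:                 # evidence contradicts every retained world
            yield set()
            continue
        best = min(rank(w) for w in remaining)
        yield {w for w in remaining if rank(w) == best}

# Toy run: three worlds, w0 initially most plausible.
worlds = {"w0", "w1", "w2"}
rank = {"w0": 0, "w1": 1, "w2": 2}.get
print(list(condition(worlds, rank, [{"w1", "w2"}, {"w2"}])))   # [{'w1'}, {'w2'}]

Selecting the most plausible worlds consistent with the evidence is the standard Grove-style semantic picture of AGM revision, which is why such conditioning is a natural reading of the universality claim in the abstract above.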
Book
Described by the philosopher A.J. Ayer as a work of ‘great originality and power’, this book revolutionized contemporary thinking on science and knowledge. Ideas such as the now legendary doctrine of ‘falsificationism’ electrified the scientific community, influencing even working scientists, as well as post-war philosophy. This astonishing work ranks alongside The Open Society and Its Enemies as one of Popper’s most enduring books and contains insights and arguments that demand to be read to this day. © 1959, 1968, 1972, 1980 Karl Popper and 1999, 2002 The Estate of Karl Popper. All rights reserved.
Chapter
Two books have been particularly influential in contemporary philosophy of science: Karl R. Popper's Logic of Scientific Discovery, and Thomas S. Kuhn's Structure of Scientific Revolutions. Both agree upon the importance of revolutions in science, but differ about the role of criticism in science's revolutionary growth. This volume arose out of a symposium on Kuhn's work, with Popper in the chair, at an international colloquium held in London in 1965. The book begins with Kuhn's statement of his position followed by seven essays offering criticism and analysis, and finally by Kuhn's reply. The book will interest senior undergraduates and graduate students of the philosophy and history of science, as well as professional philosophers, philosophically inclined scientists, and some psychologists and sociologists.
Book
Formal learning theory is one of several mathematical approaches to the study of intelligent adaptation to the environment. The analysis developed in this book is based on a number theoretical approach to learning and uses the tools of recursive-function theory to understand how learners come to an accurate view of reality. This revised and expanded edition of a successful text provides a comprehensive, self-contained introduction to the concepts and techniques of the theory. Exercises throughout the text provide experience in the use of computational arguments to prove facts about learning. Bradford Books imprint
Article
Among the various meanings in which the word ‘probability’ is used in everyday language, in the discussion of scientists, and in the theories of probability, there are especially two which must be clearly distinguished. We shall use for them the terms ‘probability₁’ and ‘probability₂’. Probability₁ is a logical concept, a certain logical relation between two sentences (or, alternatively, between two propositions); it is the same as the concept of degree of confirmation. I shall write briefly “c” for “degree of confirmation,” and “c(h, e)” for “the degree of confirmation of the hypothesis h on the evidence e”; the evidence is usually a report on the results of our observations. On the other hand, probability₂ is an empirical concept; it is the relative frequency in the long run of one property with respect to another. The controversy between the so-called logical conception of probability, as represented e.g. by Keynes and Jeffreys, among others, and the frequency conception, maintained e.g. by v. Mises and Reichenbach, seems to me futile. These two theories deal with two different probability concepts which are both of great importance for science. Therefore, the theories are not incompatible, but rather supplement each other.