Article

Scientific understanding: truth or dare?


Abstract

It is often claimed—especially by scientific realists—that science provides understanding of the world only if its theories are (at least approximately) true descriptions of reality, in its observable as well as unobservable aspects. This paper critically examines this ‘realist thesis’ concerning understanding. A crucial problem for the realist thesis is that (as study of the history and practice of science reveals) understanding is frequently obtained via theories and models that appear to be highly unrealistic or even completely fictional. So we face the dilemma of either giving up the realist thesis that understanding requires truth, or allowing for the possibility that in many if not all practical cases we do not have scientific understanding. I will argue that the first horn is preferable: the link between understanding and truth can be severed. This becomes a live option if we abandon the traditional view that scientific understanding is a special type of knowledge. While this view implies that understanding must be factive, I avoid this implication by identifying understanding with a skill rather than with knowledge. I will develop the idea that understanding phenomena consists in the ability to use a theory to generate predictions of the target system’s behavior. This implies that the crucial condition for understanding is not truth but intelligibility of the theory, where intelligibility is defined as the value that scientists attribute to the theoretical virtues that facilitate the construction of models of the phenomena. I will show, first, that my account accords with the way practicing scientists conceive of understanding, and second, that it allows for the use of idealized or fictional models and theories in achieving understanding.


... We could now expand this list to include (among others): Elgin (2007), Pritchard (2008), Kvanvig (2009), Gardiner (2012), and Mizrahi (2012). 3 For examples not covered in detail here, see de Regt (2015, 2016) and Wilkenfeld (2017, 2019). 4 While antirealist notions of SP are available, most notably the functionalist-internalist accounts of Kuhn (1962, 1991) and Laudan (1977, 1981, 1984), interest in them has waned in recent years; Shan (2019) is a notable exception. 5 The connection between truth and progress is most famously highlighted in Putnam's (1975) 'no miracles' argument, a more contemporary interpretation of which can be found in Lipton (2003). ...
... It is interesting to note that, unlike knowledge, understanding is not an intrinsically realist notion. In recent work, for example, de Regt (2015, 2016) has argued for an antirealist notion of understanding which is not even moderately factive. 9 In contrast, Dellsén is keen to maintain his realist convictions and thus takes understanding to be quasi-factive, suggesting that 'the explanatorily/predictively essential elements of a theory must be true in order for the theory to provide grounds for understanding' (2016, p. 73, fn. 6). ...
Article
Contemporary debate surrounding the nature of scientific progress has focused upon the precise role played by justification, with two realist accounts having dominated proceedings. Recently, however, a third realist account has been put forward, one which offers no role for justification at all. According to Finnur Dellsén’s (Stud Hist Philos Sci Part A 56:72–83, 2016) noetic account, science progresses when understanding increases, that is, when scientists grasp how to correctly explain or predict more aspects of the world than they could before. In this paper, we argue that the noetic account is severely undermotivated. Dellsén provides three examples intended to show that understanding can increase absent the justification required for true belief to constitute knowledge. However, we demonstrate that a lack of clarity in each case allows for two contrasting interpretations, neither of which serves its intended purpose. On the first, the agent involved lacks both knowledge and understanding; and, on the second, the agent involved successfully gains both knowledge and understanding. While neither interpretation supports Dellsén’s claim that understanding can be prised apart from knowledge, we argue that, in general, agents in such cases ought to be attributed neither knowledge nor understanding. Given that the separability of knowledge and understanding is a necessary component of the noetic account, we conclude that there is little support for the idea that science progresses through increasing understanding rather than the accumulation of knowledge.
... Second, there are some scholars who, while accepting the legitimacy of the debate on the aim(s) of science, espouse different goals than truth or empirical adequacy, such as manifest truth (Lyons 2005), knowledge (Bird 2007, 2010), or understanding (e.g., de Regt 2015; Potochnik 2015). Lyons (2005), for instance, argues that the kind of true statements science seeks are not vacuous or detached from empirical examination, but are statements whose truth is empirically manifested (Lyons 2005, 174). ...
... Currently, the aim of science is also discussed in the analysis of scientific understanding. For instance, scholars such as de Regt (2015) and Potochnik (2015) identify the attainment of understanding rather than truth as the aim of science. Pointing to the practice of idealization, they argue that the truthfulness of models with respect to the target system is often sacrificed to enhance understanding. ...
Article
The aim or goal of science has long been discussed by both philosophers of science and scientists themselves. In The Scientific Image (van Fraassen 1980), the aim of science is famously employed to characterize scientific realism and a version of anti-realism, called constructive empiricism. Since the publication of The Scientific Image, however, various changes have occurred in scientific practice. The increasing use of machine learning technology, especially deep learning (DL), is probably one of the major changes in the last decade. This paper aims to explore the implications of DL-aided research for the aim of science debate. I argue that, while the emerging DL-aided research is unlikely to change the state of classic opposition between constructive empiricism and scientific realism, it could offer interesting cases regarding the opposition between those who espouse truth as the aim of science and those oriented to understanding (of the kind that sacrifices truth).
... These include the ability to follow an explanation, to explain in one's own words, or to draw conclusions about relevantly similar cases. Others have suggested further abilities, e.g., the ability to answer questions about counterfactual cases (Grimm, 2011), to make predictions (de Regt, 2015), to qualitatively solve problems (Newman, 2017), to construct (scientific) models (de Regt, 2015), or to evaluate competing explanations (Khalifa, 2013). ...
Article
A central goal of research in explainable artificial intelligence (XAI) is to facilitate human understanding. However, understanding is an elusive concept that is difficult to target. In this paper, we argue that a useful way to conceptualize understanding within the realm of XAI is via certain human abilities. We present four criteria for a useful conceptualization of understanding in XAI and show that these are fulfilled by an abilities-based approach: First, thinking about understanding in terms of specific abilities is motivated by research from numerous disciplines involved in XAI. Second, an abilities-based approach is highly versatile and can capture different forms of understanding important in XAI application contexts. Third, abilities can be operationalized for empirical studies. Fourth, abilities can be used to clarify the link between explainability, understanding, and societal desiderata concerning AI, like fairness and trustworthiness. Conceptualizing understanding as abilities can therefore support interdisciplinary collaboration among XAI researchers, provide practical benefit across diverse XAI application contexts, facilitate the development and evaluation of explainability approaches, and contribute to satisfying the societal desiderata of different stakeholders concerning AI systems.
... To provide more details, Wilkenfeld (2017) holds that understanding is tied to truth in terms of representational accuracy, assuming a correspondence theory of truth. For non-factive accounts such as de Regt's (2015, 2017, 2023) pragmatic approach, the evaluation of the explanans can involve a variety of criteria, such as its intelligibility, effectiveness in promoting predictions, practical applications, or its general heuristic value (de Regt & Gijsbers, 2017). As such, de Regt (2015), for instance, suggests that the evaluation of the explanans could still be true, while the truth maker is due to some other criterion besides factual correspondence. ...
... As such, de Regt (2015), for instance, suggests that the evaluation of the explanans could still be true, while the truth maker is due to some other criterion besides factual correspondence. For instance, he promotes the idea that a pragmatic theory of truth can function as such an evaluation (de Regt, 2015). ...
Article
This paper studies the epistemic failures to reach understanding in relation to scientific explanations. We make a distinction between genuine understanding and its negative phenomena—lack of understanding and misunderstanding. We define explanatory understanding as inclusively as possible, as the epistemic success that depends on abilities, skills, and correct explanations. This success, we add, is often supplemented by specific positive phenomenology which plays a part in forming epistemic inclinations—tendencies to receive an insight from familiar types of explanations. We define lack of understanding as the epistemic failure that results from a lack of an explanation or from an incorrect one. This can occur due to insufficient abilities and skills, or to fallacious explanatory information. Finally, we characterize misunderstanding by cases where one’s epistemic inclinations do not align with an otherwise correct explanation. We suggest that it leads to potential debates about the explanatory power of different explanatory strategies. We further illustrate this idea with a short meta-philosophical study on the current debates about distinctively mathematical explanations.
... Sullivan 2019). 15 In big data practices, the source of knowledge is not always an individual that can provide better explanations to support her claims if asked to do so; it is often a combination of methodologies plus machine implementation over inputs that come from very diverse sources in very different formats, and whose interconnections are not always clear to us. In the long run, this has the effect of scientists being unable to provide explanations about procedures that might have led to the discovery of novel phenomena. ...
... Consequently, when working with big data, scientists are trading knowledge of some parts of theoretical structure in exchange for access to inaccessible objects. As a matter of fact, the incorporation of big data into the empirical sciences has created a new epistemic preference: "answers are found through a process of [...]". 15 This, especially when adopting a standpoint similar to the so-called assurance view of testimony, according to which "testimony is restricted to speech acts that come with the speaker's assurance that the statement is true, constituting an invitation for the hearer to trust the speaker. Such views highlight the intention of the speaker and the normative character of testimony where we rebuke the testifier in the instance of false testimony (Tollefsen 2009)" (Sullivan 2019: 21). ...
Article
It is a fact that the larger the amount of defective (vague, partial, conflicting, inconsistent) information is, the more challenges scientists face when working with it. Here, I address the question of whether there is anything special about the ignorance involved in big data practices. I submit that the ignorance that emerges when using big data in the empirical sciences is ignorance of theoretical structure with reliable consequences and I explain how this ignorance relates to different epistemic achievements such as knowledge and understanding. I illustrate this with a case study from observational cosmology.
... If the abductive argument is strong, and if one is persuaded by the argument to accept the conclusion, and if, beyond that, the conclusion turns out to be correct, then one has attained justified true belief, the classical philosophical conditions of knowledge… (Josephson and Josephson 1994, p. 16). Although the second formulation of the IBE's conclusion, namely "Knowingly, H", pulls things in the right direction, it is too strong and restrictive. First, as Elgin (2007), de Regt (2015), and Gaszczyk (2023) show, the best explanation might be non-factive in some epistemic contexts. Also, some explanations might be selected for non-epistemic reasons, such as harm reduction. ...
Article
The paper presents an extended scheme for the inference to the best explanation (IBE). The scheme precisely treats the epistemic modifiers (“hypothetically,” “plausibly,” “presumably”) of the inference, acknowledges its contrastive nature, clarifies the logical support between premises and conclusions (linked, convergent, and serial support), and introduces additional premises essential for inferring justified conclusions (especially those related to causal explanations and more demanding standards of proof). Overall, it advances the existing schemes for IBE in argumentation theory and treats IBE as a par excellence argumentative, rather than explanatory, form of reasoning.
... See also Doyle et al. (2019), De Regt (2015), Elgin (2004, 2007, 2008, 2017), and Potochnik (2017, 2020). 8 The very short list of experiments in this field includes: Allahyari and Lavesson (2011), Freitas (2014), Fürnkranz et al. (2018), Lage et al. (2019), Kandul et al. (2023), Kliegr et al. (2018), Piltaver et al. (2016), and van der Waa (2021). ...
Preprint
In the natural and social sciences, it is common to use toy models -- extremely simple and highly idealized representations -- to understand complex phenomena. Some of the simple surrogate models used to understand opaque machine learning (ML) models, such as rule lists and sparse decision trees, bear some resemblance to scientific toy models. They allow non-experts to understand how an opaque ML model works globally via a much simpler model that highlights the most relevant features of the input space and their effect on the output. The obvious difference is that the common target of a toy and a full-scale model in the sciences is some phenomenon in the world, while the target of a surrogate model is another model. This essential difference makes toy surrogate models (TSMs) a new object of study for theories of understanding, one that is not easily accommodated under current analyses. This paper provides an account of what it means to understand an opaque ML model globally with the aid of such simple models.
... A useful way of thinking about the conditions of satisfaction for grasping the interconnectedness of different facts is in terms of an agent's success in using the information. This idea has been defended in different guises by philosophers of science like Ylikoski (2009), De Regt (2017), and Kuorikoski (2011, 2023). In the inferential conception of understanding defended by Kuorikoski and Ylikoski (2015), for example, '[Understanding] is not only about learning and memorizing true propositions, but about the capability to put one's knowledge to use.' ...
Article
In the natural and social sciences, it is common to use toy models—extremely simple and highly idealized representations—to understand complex phenomena. Some of the simple surrogate models used to understand opaque machine learning (ML) models, such as rule lists and sparse decision trees, bear some resemblance to scientific toy models. They allow non-experts to understand how an opaque ML model works globally via a much simpler model that highlights the most relevant features of the input space and their effect on the output. The obvious difference is that the common target of a toy and a full-scale model in the sciences is some phenomenon in the world, while the target of a surrogate model is another model. This essential difference makes toy surrogate models (TSMs) a new object of study for theories of understanding, one that is not easily accommodated under current analyses. This paper provides an account of what it means to understand an opaque ML model globally with the aid of such simple models.
... If this is so, then, usually, understanding is more difficult to acquire than knowledge; represents a greater cognitive achievement (Pritchard 2010); and, as a result, leads to more demanding accounts of moral progress. 19 Also, studies have shown that even philosophy experts can be biased, manipulated, or swayed by irrelevant factors, and that their reasoning is unreliable in solving moral dilemmas (see, e.g., Schwitzgebel and Cushman 2012, 2015). 20 Admittedly, this is more of a theoretical than a practical problem. ...
Article
Moral progress is often modeled as an increase in moral knowledge and understanding, with achievements in moral reasoning seen as key drivers of progressive moral change. Contemporary discussion recognizes two (rival) accounts: knowledge-based and understanding-based theories of moral progress, with the latter recently contended to be superior (Severini 2021). In this article, we challenge the alleged superiority of understanding-based accounts by conducting a comparative analysis of the theoretical advantages and disadvantages of both approaches. We assess them based on their potential to meet the following criteria: (i) moral progress must be possible despite evolutionary and epistemic constraints on moral reasoning; (ii) it should be epistemically achievable to ordinary moral agents; and (iii) it should be explainable via doxastic change. Our analysis suggests that both accounts are roughly equally plausible, but knowledge-based accounts are slightly less demanding and more effective at explaining doxastic change. Therefore, contrary to the prevailing view, we find knowledge-based accounts of moral progress more promising.
... These remarks suggest, albeit less explicitly than one would hope, that, according to proponents of the epistemic authority account, the epistemic superiority of moral experts has to be construed in terms of non-factive moral understanding (see, e.g., de Regt 2015; Elgin 2017; Severini 2021; Zagzebski 2001). In general, understanding can be taken to encompass an informational component and a grasping component (Boyd 2017). ...
Article
This paper explores the concept of moral expertise in the contemporary philosophical debate, with a focus on three accounts discussed across moral epistemology, bioethics, and virtue ethics: an epistemic authority account, a skilled agent account, and a hybrid model sharing key features of the two. It is argued that there are no convincing reasons to defend a monistic approach that reduces moral expertise to only one of these models. A pluralist view is outlined in the attempt to reorient the discussion about moral expertise.
... Although further understanding-related abilities are discussed in the literature (see, e.g., de Regt 2015; Khalifa 2017; Newman 2017; Strevens 2013), I want to focus on a specific ability: the ability to reason not only about an individual instance, but also about similar or hypothetical cases. Such an ability is stressed by several authors. ...
Article
Artificial intelligence (AI) systems that perform image classification tasks are being used to great success in many application contexts. However, many of these systems are opaque, even to experts. This lack of understanding can be problematic for ethical, legal, or practical reasons. The research field Explainable AI (XAI) has therefore developed several approaches to explain image classifiers. The hope is to bring about understanding, e.g., regarding why certain images are classified as belonging to a particular target class. Most of these approaches use visual explanations. Drawing on Elgin’s work (True Enough. MIT Press, Cambridge, 2017), I argue that analyzing what those explanations exemplify can help to assess their suitability for producing understanding. More specifically, I suggest distinguishing between two forms of examples according to their suitability for producing understanding. I call these forms samples and exemplars, respectively. Samples are prone to misinterpretation and thus carry the risk of leading to misunderstanding. Exemplars, by contrast, are intentionally designed or chosen to meet contextual requirements and to mitigate the risk of misinterpretation. They are thus preferable for bringing about understanding. By reviewing several XAI approaches directed at image classifiers, I show that most of them explain with samples. If my analysis is correct, it will be beneficial if such explainability methods use explanations that qualify as exemplars.
... There is, however, a consensus that some cognitive progress is being made in the sciences. By accepting that there can be cognitive progress on the basis of false beliefs, the non-factivity account of understanding allows that the sciences are able to attain a significant degree of understanding of their objects, and hence enables us to explain why talk of scientific progress is indeed correct (Elgin 2007; De Regt 2015). ...
Article
According to an optimistic view, affective empathy is a route to knowledge of what it is like to be in the target person’s state (“phenomenal knowledge”). Roughly, the idea is that the empathizer gains this knowledge by means of empathically experiencing the target’s emotional state. The literature on affective empathy, however, often draws a simplified picture according to which the target feels only a single emotion at a time. Co-occurring emotions (“concurrent emotions”) are rarely considered. This is problematic, because concurrent emotions seem to support a sceptical view according to which we cannot gain phenomenal knowledge of the target person’s state by means of affective empathy. The sceptic concludes that attaining the epistemic goal of affective empathy is difficult, in practice often impossible. I accept the sceptic’s premises, but reject the conclusion, because of the argument’s unjustified, hidden premise: that the epistemic goal of affective empathy is phenomenal knowledge. I argue that the epistemic goal of affective empathy is phenomenal understanding, not knowledge. Attention to the under-explored phenomenon of concurrent emotions clarifies why this is important. I argue that this is the decisive epistemic progress in everyday cases of phenomenal understanding of another person.
... Furthermore, I assume that such causal or mechanistic explanations, again under the right circumstances, can serve as vehicles for scientific understanding. The "right circumstances" may be formulated, for example, in terms of a criterion of intelligibility (De Regt and Dieks, 2005), and the resulting understanding may be considered as a skill rather than as a special type of knowledge (De Regt, 2015). Note that in saying this, I am not assuming that all explanations are causal or mechanistic, nor am I committed to any particular view of causal or mechanistic explanation. ...
Article
The philosophical debate around the impact of machine learning in science is often framed in terms of a choice between AI and classical methods as mutually exclusive alternatives involving difficult epistemological trade-offs. A common worry regarding machine learning methods specifically is that they lead to opaque models that make predictions but do not lead to explanation or understanding. Focusing on the field of molecular biology, I argue that in practice machine learning is often used with explanatory aims. More specifically, I argue that machine learning can be tightly integrated with other, more traditional, research methods and in a clear sense can contribute to insight into the causal processes underlying phenomena of interest to biologists. One could even say that machine learning is not the end of theory in important areas of biology, as has been argued, but rather a new beginning. I support these claims with a detailed discussion of a case study involving gene regulation by microRNAs.
... Despite these arguments in its favor, factivism has not gone entirely uncontested. To mention a few objections, it has been pointed out that superseded theories provide understanding though they have been shown false (de Regt, 2015), that scientific development from one mistaken ontology to another may nonetheless lead to an increase in understanding (Elgin, 2007), that contemporary science provides understanding though probably false on the grounds of the pessimistic meta-induction (see Laudan, 1981), that contradictory theories can provide understanding (Zagzebski, 2001: 244; de Regt, 2015: 3791-3792), that science is rife with what Elgin (2004, 2017) terms 'felicitous falsehoods' such as idealizations (see also Reiss, 2012), and that teaching myths are pervasive as a pedagogical tool to help students understand (Stewart & Cohen, 1997: 36-38). 4 I am grateful to an anonymous reviewer who pointed out that a weaker, more plausible form of factivism does not hold the vehicle of understanding to be accurate. ...
Article
Is understanding subject to a factivity constraint? That is, must the agent’s representation of some subject matter be accurate in order for her to understand that subject matter? ‘No’, I argue in this paper. As an alternative, I formulate a novel manipulationist account of understanding. Rather than correctly representing, understanding, on this account, is a matter of being able to manipulate a representation of the world to satisfy contextually salient interests. This account of understanding is preferable to factivism, I argue, mainly for simplicity reasons. While it explains the intuitive data about understanding as successfully as factivist accounts, it is simpler by virtue of reducing the value truth bestows on understanding to that of usability.
... For example, Catherine Elgin (1993, 14-15) [...] de Regt's (2015) claim that the "quintessence of scientific understanding lies in the ability to perform a difficult task rather than in knowing the answer to a difficult question" (our emphasis). ...
Article
Practical ability manifested through robust and reliable task performance, as well as information relevance and well-structured representation, are key factors indicative of understanding in philosophical literature. We explore these factors in the context of deep learning, identifying prominent patterns in how the results of these algorithms represent information. While the estimation applications of modern neural networks do not qualify as the mental activity of minded agents, we argue that coupling analyses from philosophical accounts with the empirical and theoretical basis for identifying these factors in deep learning representations provides a framework for discussing and critically evaluating potential machine understanding given the continually improving task performance enabled by such algorithms.
... The problem of explanatory understanding has received significant attention in the recent literature of epistemology and philosophy of science (see Baumberger et al. 2017 for an overview). The exact definition of understanding 1 is a matter of ongoing dispute, but most analyses have converged on the idea that at least one of the key differences between mere knowledge of a correct explanation of a phenomenon and understanding the phenomenon has an inferential character (Newman 2014; Grimm 2010; Khalifa 2017; De Regt 2015). The inferential character of the explanatory understanding of a given fact, or a factual domain, has been analysed in the literature in two complementary ways, related to both the inferential properties of singular explanations and the organization of inter-linked explanations: ...
... Many have claimed that models may explain without accurate representation (e.g. Batterman and Rice 2014; Bokulich 2008, 2011, 2012, 2016; De Regt 2015; Graham Kennedy 2012; Potochnik 2017). Bokulich, for one, argues that what she calls 'model explanations' capture the counterfactual dependencies of a target system despite the fictional representational content. ...
Article
Highly idealized models may serve various epistemic functions, notably explanation, in virtue of representing the world. Inferentialism provides a prima facie compelling characterization of what constitutes the representation relation. In this paper, I argue that what I call factive inferentialism does not provide a satisfactory solution to the puzzle of model-based—factive—explanation. In particular, I show that making explanatory counterfactual inferences is not a sufficient guide for accurate representation, factivity, or realism. I conclude by calling for a more explicit specification of model-world mismatches and properties imputation.
... See Elgin (2004). While I use Elgin's view as a foil, other helpful discussions of non-factive approaches to understanding include Zagzebski (2001), de Regt (2015), Potochnik (2017) and Rancourt (2017). ...
Article
Full-text available
The notion of understanding occupies an increasingly prominent place in contemporary epistemology, philosophy of science, and moral theory. A central and ongoing debate about the nature of understanding is how it relates to the truth. In a series of influential contributions, Catherine Elgin has used a variety of familiar motivations for antirealism in philosophy of science to defend a non-factive theory of understanding. Key to her position are: (i) the fact that false theories can contribute to the upwards trajectory of scientific understanding, and (ii) the essential role of inaccurate idealisations in scientific research. Using Elgin's arguments as a foil, I show that a strictly factive theory of understanding has resources with which to offer a unified response to both the problem of idealisations and the role of false theories in the upwards trajectory of scientific understanding. Hence, strictly factive theories of understanding are viable notwithstanding these forceful criticisms.
... Another context where 'factivity' is denied is recent defences of the epistemic good of understanding (e.g., Potochnik [2017], Elgin [2018], De Regt [2015]). My discussion differs. ...
Article
I develop an account of the relationship between aesthetics and knowledge, focusing on scientific practice. Cognitivists infer from ‘partial sensitivity’—aesthetic appreciation partly depends on doxastic states—to ‘factivity’, the idea that the truth or otherwise of those beliefs makes a difference to aesthetic appreciation. Rejecting factivity, I develop a notion of ‘epistemic engagement’: partaking genuinely in a knowledge-directed process of coming to epistemic judgements, and suggest that this better accommodates the relationship between the aesthetic and the epistemic. Scientific training (and other knowledge-directed activities), I argue, involve ‘attunement’: the co-option of aesthetic judgements towards epistemic ends. Thus, the connection between aesthetic appreciation and knowledge is psychological and contingent. This view has consequences for the warrant of aesthetic judgment in science, namely, the locus of justification are those processes of attunement, not the aesthetic judgements themselves.
... A necessary (minimal) condition for having understanding is the construction of a mental model of the situation of which the thing we want to understand is a part. Inferential processes are woven into this construction; without them, building the model, and hence understanding itself, would be impossible. This is stated most clearly in (de Regt, 2015), but Mark Newman presents de Regt's views following his earlier publications (de Regt, 2004; de Regt & Dieks, 2005). ...
Book
"Explanation, understanding and inference" presents a view of scientific explanation, called "inferentialist", and demonstrates the advantages of this view compared to alternative models and analyses of explanation discussed in the philosophy of science over the last 70 years. In brief, the inferentialist view boils down to the claim that the qualities of an explanation depend on the inferences that it allows us to make. This statement stands on two premises: (a) the primary function of explanation is to bring us understanding of the object being explained, or to deepen existing understanding; (b) understanding is manifested in the inferences we make about the object of our understanding and its relations with other objects. Hence, an explanation is good, i.e. it successfully performs the function of bringing us understanding, if it allows us to draw inferences that were not available to us before we had it. The book comprises a preface, 11 chapters (divided into 3 parts) and an afterword.
... If truth were a presupposition for having understanding of gravitational effects, one would lack understanding in this case, given that the theory in play is strictly speaking false. As Henk de Regt (2015) argues, if truth is a necessary condition for understanding, it would follow that past scientists lacked understanding of phenomena for which they had advanced empirically successful (but from our perspective false) theories. Separating the two concepts and taking understanding not to require true theories, but rather an ability to manipulate and use a theory in a certain theoretical domain, avoids this problem and leads to a more plausible thesis about the aims of science and scientific progress. ...
... This approach to understanding appears relative (varying from person to person), but an objective approach is possible [41, § 4]. For example, understanding can be defined by reference to values and concepts shared widely among scientists (though these need not coincide with truth or knowledge) [42][43][44]. Understanding involves explanatory relationships within a single theory [45], connecting theories through concepts they have in common [46], and fitting theories into an overall framework or structure [47]. ...
Preprint
This review, of the understanding of quantum mechanics, is broad in scope, and aims to reflect enough of the literature to be representative of the current state of the subject. To enhance clarity, the main findings are presented in the form of a coherent synthesis of the reviewed sources. The review highlights core characteristics of quantum mechanics. One is statistical balance in the collective response of an ensemble of identically prepared systems, to differing measurement types. Another is that states are mathematical terms prescribing probability aspects of future events, relating to an ensemble of systems, in various situations. These characteristics then yield helpful insights on entanglement, measurement, and widely-discussed experiments and analyses. The review concludes by considering how these insights are supported, illustrated and developed by some specific approaches to understanding quantum mechanics. The review uses non-mathematical language precisely (terms defined) and rigorously (consistent meanings), and uses only such language. A theory more descriptive of independent reality than is quantum mechanics may yet be possible. One step in the pursuit of such a theory is to reach greater consensus on how to understand quantum mechanics. This review aims to contribute to achieving that greater consensus, and so to that pursuit.
... central to understanding the phenomenon (e.g., Elgin 2007, 2017; de Regt 2015; de Regt and Gijsbers 2017; Potochnik 2017; Rancourt 2017). ...
Article
Full-text available
Science is replete with falsehoods that epistemically facilitate understanding by virtue of being the very falsehoods they are. In view of this puzzling fact, some have relaxed the truth requirement on understanding. I offer a factive view of understanding (i.e., the extraction view) that fully accommodates the puzzling fact in four steps: (i) I argue that the question how these falsehoods are related to the phenomenon to be understood and the question how they figure into the content of understanding it are independent. (ii) I argue that the falsehoods do not figure into the understanding’s content by being elements of its periphery or core. (iii) Drawing lessons from case studies, I argue that the falsehoods merely enable understanding. When working with such falsehoods, only the truths we extract from them are elements of the content of our understanding. (iv) I argue that the extraction view is compatible with the thesis that falsehoods can have an epistemic value by virtue of being the very falsehoods they are.
... Kvanvig 2003; Mizrahi 2012; cf. Grimm 2006), Elgin argues that numerous falsehoods may exist even among what the aforementioned philosophers would regard as central propositional commitments (so too de Regt 2009, 2015). The distinction between central and peripheral propositions is difficult to draw (Kvanvig 2003 offers an influential articulation of the distinction but rightly worries about it elsewhere, e.g. ...
Article
Full-text available
Elgin offers an influential and far-reaching challenge to veritism. She takes scientific understanding to be non-factive and maintains that there are epistemically useful falsehoods that figure ineliminably in scientific understanding and whose falsehood is no epistemic defect. Veritism, she argues, cannot account for these facts. This paper argues that while Elgin rightly draws attention to several features of epistemic practices frequently neglected by veritists, veritists have numerous plausible ways of responding to her arguments. In particular, it is not clear that false propositional commitments figure ineliminably in understanding in the manner supposed by Elgin. Moreover, even if scientific understanding were non-factive and false propositional commitments did figure ineliminably in understanding, the veritist can account for this in several ways without thereby abandoning veritism.
... Among others, they haven't persuaded Carter and Gordon (2016), Greco (2014), Grimm (2014), Kelp (2016), Khalifa (2012) and Strevens (2016). For more arguments against factivism, see De Regt and Gijsbers (2016) and De Regt (2017). ... our already established corpus of beliefs and commitments needs to be revised or rearranged, ideally in a non-ad-hoc manner. ...
Article
Full-text available
Testimony spreads information. It is also commonly agreed that it can transfer knowledge. Whether it can work as an epistemic source of understanding is a matter of dispute. However, testimony certainly plays a pivotal role in the proliferation of understanding in the epistemic community. But how exactly do we learn, and how do we make advancements in understanding on the basis of one another’s words? And what can we do to maximize the probability that the process of acquiring understanding from one another succeeds? These are very important questions in our current epistemological landscape, especially in light of the attention that has been paid to understanding as an epistemic achievement of purely epistemic value. Somewhat surprisingly, the recent literature in social epistemology does not offer much on the topic. The overarching aim of this paper is to provide a tentative model of understanding that goes in-depth enough to safely address the question of how understanding and testimony are related to one another. The hope is to contribute, in some measure, to the effort to understand understanding, and to explain two facts about our epistemic practices: (1) the fact that knowledge and understanding relate differently to testimony, and (2) the fact that some pieces of testimonial information are better than others for the sake of providing one with understanding and of yielding advancements in one’s epistemic standing.
... Non-factivists can respond to this defense by pointing out that in some cases, we credit scientists with an understanding of a phenomenon even though they do not know exactly how their models diverge from the phenomenon or under which conditions the models provide an approximately true description of it. Moreover, De Regt (2015) suggests examples from economics and ecology in which scientists acquire understanding by applying models whose central propositions are not even approximately true. ...
Chapter
Full-text available
Science has not only produced a vast amount of knowledge about a wide range of phenomena, it has also enhanced our understanding of these phenomena. Indeed, understanding can be regarded as one of the central aims of science. But what exactly is it to understand phenomena scientifically, and how can scientific understanding be achieved? What is the difference between scientific knowledge and scientific understanding? These questions are hotly debated in contemporary epistemology and philosophy of science. While philosophers have long regarded understanding as a merely subjective and psychological notion that is irrelevant from an epistemological perspective, nowadays many of them acknowledge that a philosophical account of science and its aims should include an analysis of the nature of understanding. This chapter reviews the current debate on scientific understanding. It presents the main philosophical accounts of scientific understanding and discusses topical issues such as the relation between understanding, truth and knowledge, the phenomenology of understanding, and the role of understanding in scientific progress.
... Here, we outline seven theoretically and empirically pervasive developmental processes that should be considered when interpreting behavior genetic model results, regardless of whether such processes are formally modeled within any given investigation (see Table 1 for summary). Other aims that have been claimed as constitutive of scientific inquiry include having true answers to our questions (Kelly and Glymour, 2004), obtaining knowledge (Nagel, 1967), advancing empirically adequate theories (van Fraassen, 1980 and 1986), having understanding (de Regt, 2015), and gaining the ability to control nature (Keller, 1985). We invite the reader to think about how what we say in this paper matters with respect to these other aims as well, though we will not discuss them explicitly. ...
Article
Full-text available
Behavior genetic findings figure in debates ranging from urgent public policy matters to perennial questions about the nature of human agency. Despite a common set of methodological tools, behavior genetic studies approach scientific questions with potentially divergent goals. Some studies may be interested in identifying a complete model of how individual differences come to be (e.g., identifying causal pathways among genotypes, environments, and phenotypes across development). Other studies place primary importance on developing models with predictive utility, in which case understanding of underlying causal processes is not necessarily required. Although certainly not mutually exclusive, these two goals often represent tradeoffs in terms of costs and benefits associated with various methodological approaches. In particular, given that most empirical behavior genetic research assumes that variance can be neatly decomposed into independent genetic and environmental components, violations of model assumptions have different consequences for interpretation, depending on the particular goals. Developmental behavior genetic theories postulate complex transactions between genetic variation and environmental experiences over time, meaning assumptions are routinely violated. Here, we consider two primary questions: (1) How might the simultaneous operation of several mechanisms of gene–environment (GE)-interplay affect behavioral genetic model estimates? (2) At what level of GE-interplay does the ‘gloomy prospect’ of unsystematic and non-replicable genetic associations with a phenotype become an unavoidable certainty?
Article
Full-text available
Bas van Fraassen has argued that explanatory reasoning does not provide confirmation for explanatory hypotheses because explanatory reasoning increases information and increasing information does not provide confirmation. We compare this argument with a skeptical argument that one should never add any beliefs because adding beliefs increases information and increasing information does not provide confirmation. We discuss the similarities between these two arguments and identify several problems with van Fraassen’s argument.
Article
Full-text available
Understanding natural phenomena is an important aim of science. Since the turn of the millennium the notion of scientific understanding has been a hot topic of debate in the philosophy of science. A bone of contention in this debate is the role of truth and representational accuracy in scientific understanding. So-called factivists and non-factivists disagree about the extent to which the theories and models that are used to achieve understanding must be (at least approximately) true or accurate. In this paper we address this issue by examining a case from the practice of synthetic chemistry. We investigate how understanding is obtained in this field by means of an in-depth analysis of the famous synthesis of periplanone B by W. Clark Still. It turns out that highly idealized models—that are representationally inaccurate and sometimes even inconsistent—and qualitative concepts are essential for understanding the synthetic pathway and accordingly for achieving the synthesis. We compare the results of our case study to various factivist and non-factivist accounts of how idealizations may contribute to scientific understanding and conclude that non-factivism offers a more plausible interpretation of the practice of synthetic chemistry. Moreover, our case study supports a central thesis of the non-factivist theory of scientific understanding developed by De Regt (Understanding scientific understanding. Oxford University Press, New York. https://doi.org/10.1093/oso/9780190652913.001.0001 , 2017), namely that scientific understanding requires intelligibility rather than representational accuracy, and that idealization is one way to enhance intelligibility.
Article
The Electronic Theory of Organic Chemistry is one of the most characteristic theories in chemistry. It continues to be taught in current chemical education, despite its incompatibility with quantum theory and its clearly recognized limitations. How can this be explained? I will try to answer this question using the concept of explanatory understanding, which has recently been discussed in the philosophy of science. In doing so, I intend to use De Regt's (2017) approach to scientific understanding as a kind of explanatory understanding. By looking at it from the perspective of scientific understanding, I will defend the electronic theory of organic chemistry in this sense, as it provides organic chemists with an understanding of the phenomena of organic synthesis.
Article
Models are indispensable tools of scientific inquiry, and one of their main uses is to improve our understanding of the phenomena they represent. How do models accomplish this? And what does this tell us about the nature of understanding? While much recent work has aimed at answering these questions, philosophers' focus has been squarely on models in empirical science. I aim to show that pure mathematics also deserves a seat at the table. I begin by presenting two cases: Cramér’s random model of the prime numbers and the function field model of the integers. These cases show that mathematicians, like empirical scientists, rely on unrealistic models to gain understanding of complex phenomena. They also have important implications for some much-discussed theses about scientific understanding. First, modeling practices in mathematics confirm that one can gain understanding without obtaining an explanation. Second, these cases undermine the popular thesis that unrealistic models confer understanding by imparting counterfactual knowledge.
Article
Full-text available
In this paper, we explore the conceptual problems that arise when using network analysis in person-centered care (PCC) in psychiatry. Personalized network models are potentially helpful tools for PCC, but we argue that using them in psychiatric practice raises boundary problems, i.e., problems in demarcating what should and should not be included in the model, which may limit their ability to provide clinically-relevant knowledge. Models can have explanatory and representational boundaries, among others. We argue that perspectival reasoning can make more explicit what questions personalized network models can address in PCC, given their boundaries.
Article
One of the most lively debates on scientific understanding is standardly presented as a controversy between the so-called factivists, who argue that understanding implies truth, and the non-factivists, whose position is that truth is neither necessary nor sufficient for understanding. A closer look at the debate, however, reveals that the borderline between factivism and non-factivism is not as clear-cut as it looks at first glance. Some of those who claim to be quasi-factivists come suspiciously close to the position of their opponents, the non-factivists, from whom they pretend to differ. The non-factivists, in turn, acknowledge that some sort of 'answering to the facts' is indispensable for understanding. This paper discusses an example of convergence of the initially rival positions in the debate on understanding and truth: the use of the same substitute for truth by the quasi-factivist Kareem Khalifa and the non-factivists Henk de Regt and Victor Gijsbers. It is argued that the use of 'effectiveness' as a substitute for truth by both parties is not an occasional coincidence of terms; rather, it speaks to a deeper similarity which has important implications for understanding the essential features of scientific understanding.
Article
Relationships of counterfactual dependence have played a major role in recent debates of explanation and understanding in the philosophy of science. Usually, counterfactual dependencies have been viewed as the explanantia of explanation, i.e., the things providing explanation and understanding. Sometimes, however, counterfactual dependencies are themselves the targets of explanations in science. These kinds of explanations are the focus of this paper. I argue that “micro-level model explanations” explain the particular form of the empirical regularity underlying a counterfactual dependency by representing it as a physical necessity on the basis of postulated microscopic entities. By doing so, micro-level models rule out possible forms the regularity (and the associated counterfactual) could have taken. Micro-model explanations, in other words, constrain empirical regularities and their associated counterfactual dependencies. I introduce and illustrate micro-level model explanations in detail, contrast them to other accounts of explanation, and consider potential problems.
Chapter
Here, I address the question of whether there is anything special about the ignorance involved in big data practices. I submit that the ignorance that emerges when using big data in the empirical sciences is ignorance of theoretical structure with reliable consequences, and I explain how this ignorance relates to different epistemic achievements such as knowledge and understanding. I illustrate this with a case study from observational cosmology. Keywords: Epistemology of big data; Ignorance; Ignorance of theoretical structure; Epistemic opacity; Modal understanding; Bullet cluster
Article
Full-text available
This article uses recent work in philosophy of science and social epistemology to argue for a shift in analytic philosophy of religion from a knowledge-centric epistemology to an epistemology centered on understanding. Not only can an understanding-centered approach open up new avenues for the exploration of largely neglected aspects of the religious life, it can also shed light on how religious participation might be epistemically valuable in ways that knowledge-centered approaches fail to capture. Further, it can create new opportunities for interaction with neighboring disciplines and can help us revitalize and transform stagnant debates in philosophy of religion, while simultaneously allowing for the introduction and recovery of marginalized voices and traditions.
Chapter
According to epistemism, we scientifically understand explananda in terms of explanantia, provided that they are true and we justifiably believe them. On this account, scientific understanding requires the three ingredients of knowledge: belief, justification, and truth. Therefore, scientific understanding is attainable for realists, but not for antirealists. According to anti-epistemism, scientific understanding requires explanation and prediction, but none of the three ingredients of knowledge. I object that anti-epistemists have the burden of giving an account of explanation and prediction without appealing to the three ingredients of knowledge and an account of when misunderstanding arises. The author's videos: https://www.youtube.com/channel/UCjOMOQyQ8WxfvEVBGW1hzLw
Chapter
The Modern Synthesis can be regarded as an attempt to unify biology and to protect it from reduction to chemistry and physics, and thus to preserve the identity of biology as a discipline. Mayr was particularly sensitive to this aspect of the synthesis and developed a specific account of biological causation in part to separate biology from other disciplines. He published his case in 1961, making a distinction between proximate and ultimate causation. This distinction and Mayr's model of causation have been heavily criticized by advocates of an Extended Evolutionary Synthesis. In this chapter, I detail Mayr's original argument and then core arguments from those opposing his view. I defend Mayr analytically, but I also comment on the possibility that Mayr and his critics are simply operating with different forms of idealization to deliver on different tasks. If this is the case, I suggest, then Mayr's view has not really been dismissed as false but rather positioned within specific task demands.
Article
Full-text available
The paper explores the interplay among moral progress, evolution and moral realism. Although it is nearly uncontroversial to note that morality makes progress of one sort or another, it is far from uncontroversial to define what constitutes moral progress. In a minimal sense, moral progress occurs when a subsequent state of affairs is better than a preceding one. Moral realists conceive “it is better than” as something like “it more adequately reflects moral facts”; therefore, on a realist view, moral progress can be associated with accumulations of moral knowledge. From an evolutionary perspective, on the contrary, since there cannot be something like moral knowledge, one might conclude there cannot even be such a thing as moral progress. More precisely, evolutionism urges us to ask whether we can acknowledge the existence of moral progress without being committed to moral realism. A promising strategy, I will argue, is to develop an account of moral progress based on moral understanding rather than moral knowledge. On this view, moral progress follows increases in moral understanding rather than accumulations of moral knowledge. Whether an understanding-based account of moral progress is feasible and what its implications for the notion itself of moral progress are, will be discussed.
Article
Full-text available
The use of machine learning instead of traditional models in neuroscience raises significant questions about the epistemic benefits of the newer methods. I draw on the literature on model intelligibility in the philosophy of science to offer some benchmarks for the interpretability of artificial neural networks (ANNs) used as a predictive tool in neuroscience. Following two case studies on the use of ANNs to model motor cortex and the visual system, I argue that the benefit of providing the scientist with understanding of the brain trades off against the predictive accuracy of the models. This trade-off between prediction and understanding is better explained by a non-factivist account of scientific understanding.
Thesis
Full-text available
Recent years have seen a dramatic increase in the volumes of data that are produced, stored, and analyzed. This advent of big data has led to commercial success stories, for example in recommender systems in online shops. However, scientific research in various disciplines including environmental and climate science will likely also benefit from increasing volumes of data, new sources for data, and the increasing use of algorithmic approaches to analyze these large datasets. This thesis uses tools from philosophy of science to conceptually address epistemological questions that arise in the analysis of these increasing volumes of data in environmental science with a special focus on data-driven modeling in climate research. Data-driven models, here, are defined as models of phenomena that are built with machine learning. While epistemological analyses of machine learning exist, these have mostly been conducted for fields characterized by a lack of hierarchies of theoretical background knowledge. Such knowledge is often available in environmental science and especially in physical climate science, and it is relevant for the construction, evaluation, and use of data-driven models. This thesis investigates predictions, uncertainty, and understanding from data-driven models in environmental and climate research and engages in in-depth discussions of case studies. These three topics are discussed in three topical chapters. The first chapter addresses the term “big data”, and rationales and conditions for the use of big-data elements for predictions. Namely, it uses a framework for classifying case studies from climate research and shows that “big data” can refer to a range of different activities. Based on this classification, it shows that most case studies lie in between classical domain science and pure big data. 
The chapter specifies necessary conditions for the use of big data and shows that in most scientific applications, background knowledge is essential to argue for the constancy of the identified relationships. This constancy assumption is relevant both for new forms of measurements and for data-driven models. Two rationales for the use of big-data elements are identified. Namely, big-data elements can help to overcome limitations in financial, computational, or time resources, which is referred to as the rationale of efficiency. Big-data elements can also help to build models when system understanding does not allow for a more theory-guided modeling approach, which is referred to as the epistemic rationale. The second chapter addresses the question of predictive uncertainties of data-driven models. It highlights that existing frameworks for understanding and characterizing uncertainty focus on specific locations of uncertainty, which are not informative for the predictive uncertainty of data-driven models. Hence, new approaches are needed for this task. A framework is developed and presented that focuses on the justification of the fitness-for-purpose of the models for the specific kind of prediction at hand. This framework uses argument-based tools and distinguishes between first-order and second-order epistemic uncertainty. First-order uncertainty emerges when it cannot be conclusively justified that the model is maximally fit-for-purpose. Second-order uncertainty emerges when it is unclear to what extent the fitness-for-purpose assumption and the underlying assumptions are justified. The application of the framework is illustrated by discussing a case study of data-driven projections of the impact of climate change on global soil selenium concentrations. The chapter also touches upon how the information emerging from the framework can be used in decision-making. The third chapter addresses the question of scientific understanding. 
A framework is developed for assessing the fitness of a model for providing understanding of a phenomenon. For this, the framework draws from the philosophical literature on scientific understanding and focuses on the representational accuracy, the representational depth, and the graspability of a model. Then, based on the framework, the fitness of data-driven and process-based climate models for providing understanding of phenomena is compared. It is concluded that data-driven models can, under some conditions, be fit to serve as vehicles for understanding to a satisfactory extent. This is specifically the case when sufficient background knowledge is available such that the coherence of the model with background knowledge provides good reasons for the representational accuracy of the data-driven model, which can be assessed e.g. through sensitivity analyses. This point is illustrated by discussing a case study from atmospheric physics in which data-driven models are used to better understand the drivers of a specific type of clouds. The work of this thesis highlights that while big data is no panacea for scientific research, data-driven modeling offers new tools to scientists that can be very useful for a variety of questions. All three studies emphasize the importance of background knowledge for the construction and evaluation of data-driven models as this helps to obtain models that are representationally accurate. The importance of domain-specific background knowledge and the technical challenges of implementing data-driven models for complex phenomena highlight the importance of interdisciplinary work. Previous philosophical work on machine learning has stressed that the problem framing makes models theory-laden. This thesis shows that in a field like climate research, the model evaluation is strongly guided by theoretical background knowledge, which is also important for the theory-ladenness of data-driven modeling. 
The results of the thesis are relevant for a range of methodological questions regarding data-driven modeling and for philosophical discussions of models that go beyond data-driven models.
Article
Full-text available
The philosophical interest in the nature, value, and varieties of human understanding has swelled in recent years. This article will provide an overview of new research in the epistemology of understanding, with a particular focus on the following questions: What is understanding and why should we care about it? Is understanding reducible to knowledge? Does it require truth, belief, or justification? Can there be lucky understanding? Does it require ‘grasping’ or some kind of ‘know-how’? This cluster of questions has largely set the research agenda for the study of understanding in epistemology. This article will conclude by discussing some varieties of understanding and highlight directions for future research.
Article
Full-text available
In the last few years, biologists and computer scientists have claimed that the introduction of data science techniques in molecular biology has changed the characteristics and the aims of typical outputs (i.e. models) of such a discipline. In this paper we will critically examine this claim. First, we identify the received view on models and their aims in molecular biology. Models in molecular biology are mechanistic and explanatory. Next, we identify the scope and aims of data science (machine learning in particular). These lie mainly in the creation of predictive models whose performance improves as data sets grow. Next, we identify a trade-off between predictive and explanatory performance by comparing the features of mechanistic and predictive models. Finally, we show how this a priori analysis of machine learning and mechanistic research applies to actual biological practice. This will be done by analyzing the publications of a consortium—The Cancer Genome Atlas—which stands at the forefront in integrating data science and molecular biology. The result will be that biologists have to deal with the trade-off between explaining and predicting that we have identified, and hence the explanatory force of the ‘new’ biology is substantially diminished if compared to the ‘old’ biology. However, this aspect also emphasizes the existence of other research goals which make predictive force independent from explanation.
Book
Full-text available
Roughly, instrumentalism is the view that science is primarily, and should primarily be, an instrument for furthering our practical ends. It has fallen out of favour because historically influential variants of the view, such as logical positivism, suffered from serious defects. In this book, however, Darrell P. Rowbottom develops a new form of instrumentalism, which is more sophisticated and resilient than its predecessors. This position—‘cognitive instrumentalism’—involves three core theses. First, science makes theoretical progress primarily when it furnishes us with more predictive power or understanding concerning observable things. Second, scientific discourse concerning unobservable things should only be taken literally in so far as it involves observable properties or analogies with observable things. Third, scientific claims about unobservable things are probably neither approximately true nor liable to change in such a way as to increase in truthlikeness. There are examples from science throughout the book, and Rowbottom demonstrates at length how cognitive instrumentalism fits with the development of late nineteenth- and early twentieth-century chemistry and physics, and especially atomic theory. Drawing upon this history, Rowbottom also argues that there is a kind of understanding, empirical understanding, which we can achieve without having true, or even approximately true, representations of unobservable things. In closing the book, he sets forth his view on how the distinction between the observable and unobservable may be drawn, and compares cognitive instrumentalism with key contemporary alternatives such as structural realism, constructive empiricism, and semirealism. Overall, this book offers a strong defence of instrumentalism that will be of interest to scholars and students working on the debate about realism in philosophy of science.
Thesis
Full-text available
This dissertation deals with knowing in medical practice by focusing on what epistemic agents such as scientists, engineers, and medical professionals do when they construct and use knowledge, and what criteria play a role in evaluating the results. Based on experiences as a student and researcher in Technical Medicine, I have learned that current ideas about decision-making in clinical practice – i.e., the epistemology of evidence-based medicine (EBM) – are limited. Instead of deferring (a part of) their responsibility to clinical guidelines, doctors have epistemological responsibility for their clinical decisions. This means that they are responsible for the collection, critical appraisal, interpretation and fitting together of heterogeneous sources of evidence into a ‘picture’ of the patient. Understanding how the epistemological responsibility of doctors can be developed and how it can be assessed requires a more detailed account of (medical) expertise. I argue that this involves an account of the epistemic activities that clinicians should be able to perform and the cognitive skills that allow them to perform these activities. An account of expertise also improves our understanding of interdisciplinary research projects. Through their training, experts develop a disciplinary perspective that shapes how they deal with a target system. In interdisciplinary research aimed at problems in medical practices, multidisciplinary teams consisting of disciplinary experts interact around a problem, each exercising their own disciplinary perspective in dealing with aspects of the target system, rather than integrating theories. An important example of such an interdisciplinary research project is the development of a new medical imaging technology. Medical images do not speak for themselves but need to be interpreted. 
For this, engineers and clinicians need to enter into a shared search process to establish what an image represents. This requires an understanding of medical practice to establish for which relevant clinical claims the images might provide evidence, whereas an understanding of the imaging technology is required to establish the reliability of this evidence. In medical practice, interdisciplinary collaborations are also important. Knowing in current medical practice is distributed over professionals with different kinds of expertise who collaborate in the construction of clinical knowledge. This collaborative character of epistemic practices in clinical decision-making leads to complex social practices of trust. Trust in these practices is implicit, in the sense that trusting the expertise of others occurs while the members of a team focus on other tasks, most importantly, building up a framework of common ways of identifying and assessing evidence. It is within this intersubjective framework that trusting or mistrusting becomes meaningful in multidisciplinary clinical teams.
Article
Full-text available
This paper advances three related arguments showing that the ontic conception of explanation (OC), which is often adverted to in the mechanistic literature, is inferentially and conceptually incapacitated, and in ways that square poorly with scientific practice. Firstly, the main argument that would speak in favor of OC is invalid, and faces several objections. Secondly, OC’s superimposition of ontic explanation and singular causation leaves it unable to accommodate scientifically important explanations. Finally, attempts to salvage OC by reframing it in terms of ‘ontic constraints’ just concedes the debate to the epistemic conception of explanation. Together, these arguments indicate that the epistemic conception is more or less the only game in town.
Article
Full-text available
Historians often feel that standard philosophical doctrines about the nature and development of science are not adequate for representing the real history of science. However, when philosophers of science fail to make sense of certain historical events, it is also possible that there is something wrong with the standard historical descriptions of those events, precluding any sensible explanation. If so, philosophical failure can be useful as a guide for improving historiography, and this constitutes a significant mode of productive interaction between the history and the philosophy of science. I illustrate this methodological claim through the case of the Chemical Revolution. I argue that no standard philosophical theory of scientific method can explain why European chemists made a sudden and nearly unanimous switch of allegiance from the phlogiston theory to Lavoisier's theory. A careful re-examination of the history reveals that the shift was neither so quick nor so unanimous as imagined even by many historians. In closing I offer brief reflections on how best to explain the general drift toward Lavoisier's theory that did take place.
Article
Full-text available
How can false models be explanatory? And how can they help us to understand the way the world works? Sometimes scientists have little hope of building models that approximate the world they observe. Even in such cases, I argue, the models they build can have explanatory import. The basic idea is that scientists provide causal explanations of why the regularity entailed by an abstract and idealized model fails to obtain. They do so by relaxing some of its unrealistic assumptions. This method of 'explanation by relaxation' captures the explanatory import of some important models in economics. I contrast this method with the accounts that Daniel Hausman and Nancy Cartwright have provided of explanation in economics. Their accounts are unsatisfactory because they require that the economic model regularities obtain, which is rarely the case. I go on to argue that counterfactual regularities play a central role in achieving 'understanding by relaxation.' This has a surprising implication for the relation between explanation and understanding: Achieving scientific understanding does not require the ability to explain observed regularities.
Article
Full-text available
Claims pertaining to understanding are made in a variety of contexts and ways. As a result, few in the philosophical literature have made an attempt to precisely characterize the state that is y's understanding of x. This paper builds an account that does just that. The account is motivated by two main observations. First, understanding x is somehow related to being able to manipulate x. Second, understanding is a mental phenomenon, and so what manipulations are required to be an understander must only be mental manipulations. Combining these two insights, the paper builds an account (URM) of understanding as a certain representational capacity—specifically, understanding x involves possessing a representation of x that could be manipulated in useful ways. By tying understanding to representation, the account correctly identifies that understanding is a fundamentally cognitive achievement. However, by also demanding that which representations count as understanding-conferring be determined by their practical effects, URM captures the insight that understanding is vitally connected to practice. URM is fully general, and can apply equally well to understanding states of affairs, understanding events, and even understanding people and works of art. The ultimate test of URM is its applicability in actual scientific and philosophical discourse. To that end the paper discusses the importance of understanding in the philosophy of science, psychology, and computer science.
Article
Full-text available
The concept of mechanism is analyzed in terms of entities and activities, organized such that they are productive of regular changes. Examples show how mechanisms work in neurobiology and molecular biology. Thinking in terms of mechanisms provides a new framework for addressing many traditional philosophical issues: causality, laws, explanation, reduction, and scientific change.
Chapter
Full-text available
This chapter offers an analysis of understanding in biology based on characteristic biological practices: ways in which biologists think and act when carrying out their research. De Regt and Dieks have forcefully claimed that a philosophical study of scientific understanding should 'encompass the historical variation of specific intelligibility standards employed in scientific practice' (2005, 138). In line with this suggestion, I discuss the conditions under which contemporary biologists come to understand natural phenomena and I point to a number of ways in which the performance of specific research practices informs and shapes the quality of such understanding. My arguments are structured in three parts. In Section 1, I consider the ways in which biologists think and act in order to produce biological knowledge. I review the epistemic role played by theories and models and I emphasise the importance of embodied knowledge (so-called 'know-how') as a necessary complement to theoretical knowledge ('knowing that') of phenomena. I then argue that it is neither possible nor useful to distinguish between basic and applied knowledge within contemporary biology. Technological expertise and the ability to manipulate entities (or models thereof) are not only indispensable to the production of knowledge, but are as important a component of biological knowledge as are theories and explanations. Contemporary biology can be characterised as an 'impure' mix of tacit and articulated knowledge. Having determined what I take to count as knowledge in biology, in Section 2 I analyse how researchers use such knowledge to achieve an understanding of biological
Article
Full-text available
Like other mathematically intensive sciences, economics is becoming increasingly computerized. Despite the extent of the computation, however, there is very little true simulation. Simple computation is a form of theory articulation, whereas true simulation is analogous to an experimental procedure. Successful computation is faithful to an underlying mathematical model, whereas successful simulation directly mimics a process or a system. The computer is seen as a legitimate tool in economics only when traditional analytical solutions cannot be derived, i.e., only as a purely computational aid. We argue that true simulation is seldom practiced because it does not fit the conception of understanding inherent in mainstream economics. According to this conception, understanding is constituted by analytical derivation from a set of fundamental economic axioms. We articulate this conception using the concept of economists' perfect model. Since the deductive links between the assumptions and the consequences are not transparent in 'bottom-up' generative microsimulations, microsimulations cannot correspond to the perfect model and economists do not therefore consider them viable candidates for generating theories that enhance economic understanding.
Article
Full-text available
The basic theory of scientific understanding presented in Sections 1–2 exploits three main ideas. First, that to understand a phenomenon P (for a given agent) is to be able to fit P into the cognitive background corpus C (of the agent). Second, that to fit P into C is to connect P with parts of C (via arguments in a very broad sense) such that the unification of C increases. Third, that the cognitive changes involved in unification can be treated as sequences of shifts of phenomena in C. How the theory fits typical examples of understanding and how it excludes spurious unifications is explained in detail. Section 3 gives a formal description of the structure of cognitive corpuses which contain descriptive as well as inferential components. The theory of unification is then refined in the light of so-called puzzling phenomena, to enable important distinctions, such as that between consonant and dissonant understanding. In Section 4, the refined theory is applied to several examples, among them a case study of the development of the atomic model. The final part contains a classification of kinds of understanding and a discussion of the relation between understanding and explanation.
Article
Full-text available
Philosophers of science have often favoured reductive approaches to how-possibly explanation. This chapter identifies three varieties of how-possibly explanation and, in so doing, helps to show that this form of explanation is a rich and interesting phenomenon in its own right. The first variety approaches “How is it possible that X?” by showing that, despite appearances, X is not ruled out by what was believed prior to X. This can sometimes be achieved by removing misunderstandings about the implications of one’s belief system (prior to observing X), but more often than not it involves a modification of this belief system so that one’s acceptance of X does not generate a contradiction.
Article
Scientific realism is the view that our best scientific theories give approximately true descriptions of both observable and unobservable aspects of a mind-independent world. Debates between realists and their critics are at the very heart of the philosophy of science. Anjan Chakravartty traces the contemporary evolution of realism by examining the most promising strategies adopted by its proponents in response to the forceful challenges of antirealist sceptics, resulting in a positive proposal for scientific realism today. He examines the core principles of the realist position, and sheds light on topics including the varieties of metaphysical commitment required, and the nature of the conflict between realism and its empiricist rivals. By illuminating the connections between realist interpretations of scientific knowledge and the metaphysical foundations supporting them, his book offers a compelling vision of how realism can provide an internally consistent and coherent account of scientific knowledge.
Article
What distinguishes good explanations in neuroscience from bad? This book constructs and defends standards for evaluating neuroscientific explanations that are grounded in a systematic view of what neuroscientific explanations are: descriptions of multilevel mechanisms. In developing this approach, it draws on a wide range of examples in the history of neuroscience (e.g., Hodgkin and Huxley's model of the action potential and LTP as a putative explanation for different kinds of memory), as well as recent philosophical work on the nature of scientific explanation.
Article
In the study of weather and climate, the digital computer has allowed scientists to make existing theory more useful, both for prediction and for understanding. After characterizing two sorts of understanding commonly sought by scientists in this arena, I show how the use of the computer to (i) generate surrogate observational data, (ii) test physical hypotheses and (iii) experiment on models has helped to advance such understanding in significant ways.
Article
Science and the Enlightenment is a general history of eighteenth-century science covering both the physical and life sciences. It places the scientific developments of the century in the cultural context of the Enlightenment and reveals the extent to which scientific ideas permeated the thought of the age. The book takes advantage of topical scholarship, which is rapidly changing our understanding of science during the eighteenth century. In particular it describes how science was organized into fields that were quite different from those we know today. Professor Hankins's work is a much needed addition to the literature on eighteenth-century science. His study is not technical; it will be of interest to all students of the Enlightenment and the history of science, as well as to the general reader with some background in science.
Article
Recently, several authors have argued that scientific understanding should be a new topic of philosophical research. In this article, I argue that the three most developed accounts of understanding—Grimm’s, de Regt’s, and de Regt and Dieks’s—can be replaced by earlier ideas about scientific explanation without loss. Indeed, in some cases, such replacements have clear benefits.
Article
This book presents the important aspects of the field of high-energy physics, or particle physics, at an elementary level. The first chapter presents basic introductory ideas, the historical development, and a brief overview of the subject; the second and third chapters deal with experimental methods, conservation laws, and invariance principles. The following chapters deal in turn with the main features of the interactions between hadrons; the description of the hadrons in terms of quark constituents, and discussion of the basic interactions-electromagnetic, weak and strong-between the lepton and quark constituents. The final chapter discusses unification of the various interactions.
Article
Computer simulation has become an important means for obtaining knowledge about nature. The practice of scientific simulation and the frequent use of uncertain simulation results in public policy raise a wide range of philosophical questions. Most prominently highlighted is the field of anthropogenic climate change—are humans currently changing the climate? Referring to empirical results from science studies and political science, Simulating Nature: A Philosophical Study of Computer-Simulation Uncertainties and Their Role in Climate Science and Policy Advice, Second Edition addresses questions about the types of uncertainty associated with scientific simulation and about how these uncertainties can be communicated. The author, who participated in the United Nations’ Intergovernmental Panel on Climate Change (IPCC) plenaries in 2001 and 2007, discusses the assessment reports and workings of the IPCC. This second edition reflects the latest developments in climate change policy, including a thorough update and rewriting of sections that refer to the IPCC.
Article
In this paper I explore the prospects of applying Inference to the Best Explanation (IBE, sometimes also known as 'abduction') to an account of the way we decide whether to accept the word of others (testimony). IBE is a general account of non-demonstrative or inductive inference, but it has been applied in a particular way to the management of testimony. The governing idea of Testimonial IBE (TIBE) is that a recipient of testimony ('hearer') decides whether to believe the claim of the informant ('speaker') by considering whether the truth of that claim would figure in the best explanation of the fact that the speaker made it.
Article
This paper starts by looking at the coincidence of surprising behavior on the nanolevel in both matter and simulation. It uses this coincidence to argue that the simulation approach opens up a pragmatic mode of understanding oriented toward design rules and based on a new instrumental access to complex models. Calculations, and their variation by means of explorative numerical experimentation and visualization, can give a feeling for a model's behavior and the ability to control phenomena, even if the model itself remains epistemically opaque. Thus, the investigation of simulation in nanoscience provides a good example of how science is adapting to a new instrument: computer simulation.
Article
This essay contains a partial exploration of some key concepts associated with the epistemology of realist philosophies of science. It shows that neither reference nor approximate truth will do the explanatory jobs that realists expect of them. Equally, several widely-held realist theses about the nature of inter-theoretic relations and scientific progress are scrutinized and found wanting. Finally, it is argued that the history of science, far from confirming scientific realism, decisively confutes several extant versions of avowedly ‘naturalistic’ forms of scientific realism. The positive argument for realism is that it is the only philosophy that doesn't make the success of science a miracle. -H. Putnam (1975)
Article
Biologists in many different fields of research give how-possibly explanations of the phenomena they study. Although such explanations lack empirical support, and might be regarded by some as unscientific, they play an important heuristic role in biology by helping biologists develop theories and concepts and suggesting new areas of research. How-possibly explanations serve as a useful framework for conducting research in the absence of adequate empirical data, and they can even become how-actually explanations if they gain enough empirical support.
Article
First, I show how to use the concept of phlogiston to teach oxidation and reduction reactions, based on the historical context of their discovery, while also teaching about the history and nature of science. Second, I discuss the project as an exemplar for integrating history, philosophy and sociology of science in teaching basic scientific concepts. Based on this successful classroom experience, I critique the application of common constructivist themes to teaching practice. Finally, this case shows, along with others, how the classroom is not merely a place for applying history, philosophy or sociology, but is also a site for active research in these areas. This potential is critical, I claim, for building stable, permanent interdisciplinary relationships between these fields.
Article
This paper argues that spacetime visualisability is not a necessary condition for the intelligibility of theories in physics. Visualisation can be an important tool for rendering a theory intelligible, but it is by no means a sine qua non. The paper examines the historical transition from classical to quantum physics, and analyses the role of visualisability (Anschaulichkeit) and its relation to intelligibility. On the basis of this historical analysis, an alternative conception of the intelligibility of scientific theories is proposed, based on Heisenberg's reinterpretation of the notion of Anschaulichkeit.
Book
This book presents an empiricist alternative (‘constructive empiricism’) to both logical positivism and scientific realism. Against the former, it insists on a literal understanding of the language of science and on an irreducibly pragmatic dimension of theory acceptance. Against scientific realism, it insists that the central aim of science is empirical adequacy (‘saving the phenomena’) and that even unqualified acceptance of a theory involves no more belief than that this goal is met. Beginning with a critique of the metaphysical arguments that typically accompany scientific realism, a new characterization of empirical adequacy is presented, together with an interpretation of probability in both modern and contemporary physics and a pragmatic theory of explanation.
Article
Many people assume that the claims of scientists are objective truths. But historians, sociologists, and philosophers of science have long argued that scientific claims reflect the particular historical, cultural, and social context in which those claims were made. The nature of scientific knowledge is not absolute because it is influenced by the practice and perspective of human agents. Scientific Perspectivism argues that the acts of observing and theorizing are both perspectival, and this nature makes scientific knowledge contingent, as Thomas Kuhn theorized forty years ago. Using the example of color vision in humans to illustrate how his theory of “perspectivism” works, Ronald N. Giere argues that colors do not actually exist in objects; rather, color is the result of an interaction between aspects of the world and the human visual system. Giere extends this argument into a general interpretation of human perception and, more controversially, to scientific observation, conjecturing that the output of scientific instruments is perspectival. Furthermore, complex scientific principles—such as Maxwell’s equations describing the behavior of both the electric and magnetic fields—make no claims about the world, but models based on those principles can be used to make claims about specific aspects of the world. Offering a solution to the most contentious debate in the philosophy of science over the past thirty years, Scientific Perspectivism will be of interest to anyone involved in the study of science.
Article
Among philosophers of science there seems to be a general consensus that understanding represents a species of knowledge, but virtually every major epistemologist who has thought seriously about understanding has come to deny this claim. Against this prevailing tide in epistemology, I argue that understanding is, in fact, a species of knowledge: just like knowledge, for example, understanding is not transparent and can be Gettiered. I then consider how the psychological act of “grasping” that seems to be characteristic of understanding differs from the sort of psychological act that often characterizes knowledge.
  • Zagzebski's account
  • Kvanvig's account
  • Two problems
  • Comanche cases
  • Unreliable sources of information
  • The upper-right quadrant
  • So is understanding a species of knowledge?
  • A false choice
Model organisms as fictions
  • R A Ankeny
Ankeny, R. A. (2009). Model organisms as fictions. In M. Suárez (Ed.), Fictions in science (pp. 193–204). New York: Routledge.
The hidden history of phlogiston
  • H Chang
Chang, H. (2010). The hidden history of phlogiston. Hyle, 16, 47–79.
Explaining the brain: Mechanisms and the mosaic unity of neuroscience
  • C F Craver
Craver, C. F. (2007). Explaining the brain: Mechanisms and the mosaic unity of neuroscience. Oxford: Clarendon.
Scientific perspectivism
  • R N Giere
Giere, R. N. (2006). Scientific perspectivism. Chicago: The University of Chicago Press.
Is understanding a species of knowledge?
  • S R Grimm
Grimm, S. R. (2006). Is understanding a species of knowledge? The British Journal for the Philosophy of Science, 57, 515–535.
From instrumentalism to constructive realism: On some relations between confirmation, empirical progress and truth approximation
  • T A F Kuipers
Kuipers, T. A. F. (2000). From instrumentalism to constructive realism: On some relations between confirmation, empirical progress and truth approximation (Vol. 287). Dordrecht: Kluwer.
Fictions, fictionalization, and truth in science
  • P Teller
Teller, P. (2009). Fictions, fictionalization, and truth in science. In M. Suárez (Ed.), Fictions in science (pp. 235–247). New York: Routledge.
The scientific image
  • B C van Fraassen
van Fraassen, B. C. (1980). The scientific image. Oxford: Clarendon Press.