Article

Scientific understanding: truth or dare?

Author:
Henk W. de Regt

Abstract

It is often claimed—especially by scientific realists—that science provides understanding of the world only if its theories are (at least approximately) true descriptions of reality, in its observable as well as unobservable aspects. This paper critically examines this ‘realist thesis’ concerning understanding. A crucial problem for the realist thesis is that (as study of the history and practice of science reveals) understanding is frequently obtained via theories and models that appear to be highly unrealistic or even completely fictional. So we face the dilemma of either giving up the realist thesis that understanding requires truth, or allowing for the possibility that in many if not all practical cases we do not have scientific understanding. I will argue that the first horn is preferable: the link between understanding and truth can be severed. This becomes a live option if we abandon the traditional view that scientific understanding is a special type of knowledge. While this view implies that understanding must be factive, I avoid this implication by identifying understanding with a skill rather than with knowledge. I will develop the idea that understanding phenomena consists in the ability to use a theory to generate predictions of the target system’s behavior. This implies that the crucial condition for understanding is not truth but intelligibility of the theory, where intelligibility is defined as the value that scientists attribute to the theoretical virtues that facilitate the construction of models of the phenomena. I will show, first, that my account accords with the way practicing scientists conceive of understanding, and second, that it allows for the use of idealized or fictional models and theories in achieving understanding.

... However, many epistemologists and philosophers of science disagree. They hold, conversely, that "a factive conception of understanding is unduly restrictive" (Elgin, 2007), and that it reflects neither the history of science nor contemporary scientific practice (De Regt, 2015). We shall first introduce the arguments these philosophers raised against factivity; then we will follow up with arguments of our own. ...
... That is because, contra factivists in general and Kvanvig in particular, "whatever may be the case at the end of inquiry, neither current nor previous science consists largely of truths, with a few relatively insignificant falsehoods at the periphery" (Elgin, 2009). De Regt seems to share the same position as he holds, contra scientific realists, that "it appears that the goals of realistic description and explanatory understanding often pull in different directions" (De Regt, 2015). Let us consider the arguments in support of this approach. ...
... De Regt remarks that "many scientific theories that were empirically successful in the past and that were regarded as explanatory by past scientists have later been rejected as false. Sometimes such a rejection went together with a radical conceptual revolution, which involved a change in the basic ontology of the domain" (De Regt, 2015). A classic example is the chemical revolution, in which phlogiston was rejected altogether while oxygen was introduced. ...
... The recent turn to understanding is based on the conviction that a suitable notion of understanding can serve certain theoretical purposes that cannot be fulfilled by the concepts of knowledge and explanation alone. First, some philosophers argue that making sense of the cognitive achievements of science and of scientific progress requires an adequate account of understanding, because science aims not only to acquire isolated pieces of knowledge about the world but to understand it, and the idealized models underlying this understanding involve falsehoods that are incompatible with knowledge, which according to a standard analysis implies truth (Elgin 2004; de Regt 2015; Potochnik 2015; Dellsén 2016a). Second, there is a growing awareness that understanding can neither be identified with explanation nor reduced to a subjective feeling evoked by explanations (de Regt 2009). ...
... All four conditions are contested. Some argue that understanding is not even moderately factive because it can be generated by idealized models and superseded theories (de Regt 2015; Elgin 2007). Others argue that the abilities a putative understanding enables are not relevant to its status (Strevens 2013), or can be explained in terms of knowledge (Kelp 2015; Khalifa 2012). ...
... Finally, the suggested explication identifies understanding neither with a set of abilities, nor with a mental state. Accounts that take understanding to be a set of abilities (e.g., Ylikoski 2014; de Regt 2015) tend to have difficulty making understanding answerable to the facts (van Camp 2014); accounts that construe understanding as a mental state tend to make it impossible to account for collective and extended understanding (Ylikoski 2014). The suggested explication makes understanding answerable to the facts and allows for the possibility that the epistemic subject is a collective agent (e.g., a group or a community) and/or an extended system (e.g., a person or a group together with external, material devices such as computers). ...
Article
Full-text available
The paper argues that an account of understanding should take the form of a Carnapian explication and acknowledge that understanding comes in degrees. An explication of objectual understanding is defended, which helps to make sense of the cognitive achievements and goals of science. The explication combines a necessary condition with three evaluative dimensions: An epistemic agent understands a subject matter by means of a theory only if the agent commits herself sufficiently to the theory of the subject matter, and to the degree that the agent grasps the theory (i.e., is able to make use of it), the theory answers to the facts and the agent's commitment to the theory is justified. The threshold for outright attributions of understanding is determined contextually. The explication has descriptive as well as normative facets and allows for the possibility of understanding by means of non-explanatory (e.g., purely classificatory) theories.
... We could now expand this list to include (among others): Elgin (2007), Pritchard (2008), Kvanvig (2009), Gardiner (2012), and Mizrahi (2012). [3] For examples not covered in detail here, see de Regt (2015, 2016) and Wilkenfeld (2017, 2019). [4] While antirealist notions of SP are available, most notably the functionalist-internalist accounts of Kuhn (1962, 1991) and Laudan (1977, 1981, 1984), interest in them has waned in recent years; Shan (2019) is a notable exception. [5] The connection between truth and progress is most famously highlighted in Putnam's (1975) 'no miracles' argument, a more contemporary interpretation of which can be found in Lipton (2003). ...
... It is interesting to note that, unlike knowledge, understanding is not an intrinsically realist notion. In recent work, for example, de Regt (2015, 2016) has argued for an antirealist notion of understanding which is not even moderately factive. [9] In contrast, Dellsén is keen to maintain his realist convictions and thus takes understanding to be quasi-factive, suggesting that 'the explanatorily/predictively essential elements of a theory must be true in order for the theory to provide grounds for understanding' (2016: p. 73, fn6). ...
Article
Full-text available
Contemporary debate surrounding the nature of scientific progress has focused upon the precise role played by justification, with two realist accounts having dominated proceedings. Recently, however, a third realist account has been put forward, one which offers no role for justification at all. According to Finnur Dellsén’s (Stud Hist Philos Sci Part A 56:72–83, 2016) noetic account, science progresses when understanding increases, that is, when scientists grasp how to correctly explain or predict more aspects of the world than they could before. In this paper, we argue that the noetic account is severely undermotivated. Dellsén provides three examples intended to show that understanding can increase absent the justification required for true belief to constitute knowledge. However, we demonstrate that a lack of clarity in each case allows for two contrasting interpretations, neither of which serves its intended purpose. On the first, the agent involved lacks both knowledge and understanding; and, on the second, the agent involved successfully gains both knowledge and understanding. While neither interpretation supports Dellsén’s claim that understanding can be prised apart from knowledge, we argue that, in general, agents in such cases ought to be attributed neither knowledge nor understanding. Given that the separability of knowledge and understanding is a necessary component of the noetic account, we conclude that there is little support for the idea that science progresses through increasing understanding rather than the accumulation of knowledge.
... The final kind of understanding in our catalogue, described by de Regt (2014), comes from using a theory or model for practical use and manipulation, which lines up closely with the other aim of science we discuss in this paper: managing the world. There is understanding to be had of the world via a vehicle that helps us manipulate and control it; call this pragmatic understanding. ...
... Many can be grouped under the broad aim, 'to change the world'. As mentioned above, according to de Regt (2014) the ability to manipulate and control also gives us (pragmatic) understanding of the world-but we take this to be a big goal of science in itself, and one we especially care about. ...
... De Regt (2014) refers to (a more general version of) this as 'the realist thesis regarding understanding' and has argued against it at length, specifically focusing on what we call 'pragmatic understanding', which we get to later in this paper. ...
Article
Full-text available
Empirical adequacy matters directly - as it does for antirealists - if we aim to get all or most of the observable facts right, or indirectly - as it does for realists - as a symptom that the claims we make about the theoretical facts are right. But why should getting the facts - either theoretical or empirical - right be required of an acceptable theory? Here we endorse two other jobs that good theories are expected to do: helping us with a) understanding and b) managing the world. Both are of equal, often greater, importance than getting a swathe of facts right, and empirical adequacy fares badly in both. It is not needed for doing these jobs and in many cases it gets in the way of doing them efficiently.
... Sullivan 2019). [15] In big data practices, the source of knowledge is not always an individual who can provide better explanations to support her claims if asked to do so; it is often a combination of methodologies plus machine implementation over inputs that come from very diverse sources in very different formats, and whose interconnections are not always clear to us. In the long run, this has the effect of scientists being unable to provide explanations about procedures that might have led to the discovery of novel phenomena. ...
... Consequently, when working with big data, scientists are trading knowledge of some parts of theoretical structure in exchange for access to inaccessible objects. As a matter of fact, the incorporation of big data into the empirical sciences has created a new epistemic preference: "answers are found through a process of [...]" [Footnote 15: This, especially when adopting a standpoint similar to the so-called assurance view of testimony, according to which "testimony is restricted to speech acts that come with the speaker's assurance that the statement is true, constituting an invitation for the hearer to trust the speaker. Such views highlight the intention of the speaker and the normative character of testimony where we rebuke the testifier in the instance of false testimony (Tollefsen 2009)" (Sullivan 2019: 21).] ...
Article
The larger the amount of defective (vague, partial, conflicting, inconsistent) information, the more challenges scientists face when working with it. Here, I address the question of whether there is anything special about the ignorance involved in big data practices. I submit that the ignorance that emerges when using big data in the empirical sciences is ignorance of theoretical structure with reliable consequences, and I explain how this ignorance relates to different epistemic achievements such as knowledge and understanding. I illustrate this with a case study from observational cosmology.
... [8] Among others, they haven't persuaded Carter and Gordon (2016), Greco (2014), Grimm (2014), Kelp (2016), Khalifa (2012) and Strevens (2016). For more arguments against factivism, see De Regt and Gijsbers (2016) and De Regt (2017). ... our already established corpus of beliefs and commitments needs to be revised or rearranged, ideally in a non-ad-hoc manner. ...
Article
Full-text available
Testimony spreads information. It is also commonly agreed that it can transfer knowledge. Whether it can work as an epistemic source of understanding is a matter of dispute. However, testimony certainly plays a pivotal role in the proliferation of understanding in the epistemic community. But how exactly do we learn, and how do we make advancements in understanding on the basis of one another’s words? And what can we do to maximize the probability that the process of acquiring understanding from one another succeeds? These are very important questions in our current epistemological landscape, especially in light of the attention that has been paid to understanding as an epistemic achievement of purely epistemic value. Somewhat surprisingly, the recent literature in social epistemology does not offer much on the topic. The overarching aim of this paper is to provide a tentative model of understanding that goes in-depth enough to safely address the question of how understanding and testimony are related to one another. The hope is to contribute, in some measure, to the effort to understand understanding, and to explain two facts about our epistemic practices: (1) the fact that knowledge and understanding relate differently to testimony, and (2) the fact that some pieces of testimonial information are better than others for the sake of providing one with understanding and of yielding advancements in one’s epistemic standing.
... Here, we outline seven theoretically and empirically pervasive developmental processes that should be considered when interpreting behavior genetic model results, regardless of whether such processes are formally modeled within any given investigation (see Table 1 for summary). [1] Other aims that have been claimed as constitutive of scientific inquiry include having true answers to our questions (Kelly and Glymour, 2004), obtaining knowledge (Nagel, 1967), advancing empirically adequate theories (van Fraassen, 1980 and 1986), having understanding (de Regt, 2015), and gaining the ability to control nature (Keller, 1985). We invite the reader to think about how what we say in this paper matters with respect to these other aims as well, though we will not discuss them explicitly. ...
Article
Full-text available
Behavior genetic findings figure in debates ranging from urgent public policy matters to perennial questions about the nature of human agency. Despite a common set of methodological tools, behavior genetic studies approach scientific questions with potentially divergent goals. Some studies may be interested in identifying a complete model of how individual differences come to be (e.g., identifying causal pathways among genotypes, environments, and phenotypes across development). Other studies place primary importance on developing models with predictive utility, in which case understanding of underlying causal processes is not necessarily required. Although certainly not mutually exclusive, these two goals often represent tradeoffs in terms of costs and benefits associated with various methodological approaches. In particular, given that most empirical behavior genetic research assumes that variance can be neatly decomposed into independent genetic and environmental components, violations of model assumptions have different consequences for interpretation, depending on the particular goals. Developmental behavior genetic theories postulate complex transactions between genetic variation and environmental experiences over time, meaning assumptions are routinely violated. Here, we consider two primary questions: (1) How might the simultaneous operation of several mechanisms of gene–environment (GE)-interplay affect behavioral genetic model estimates? (2) At what level of GE-interplay does the ‘gloomy prospect’ of unsystematic and non-replicable genetic associations with a phenotype become an unavoidable certainty?
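As a gloss on the "neat decomposition" assumption mentioned in the abstract above, the following is a minimal illustrative sketch of the standard additive (ACE-type) variance decomposition that classical twin and family designs typically assume; the notation is generic and not drawn from the paper itself:

% Additive decomposition assumed in classical behavior-genetic designs:
% phenotypic variance Var(P) split into additive genetic (A), shared
% environmental (C), and non-shared environmental (E) components,
% assumed to be mutually independent.
\[
  \mathrm{Var}(P) = \sigma^2_A + \sigma^2_C + \sigma^2_E,
  \qquad
  h^2 = \frac{\sigma^2_A}{\sigma^2_A + \sigma^2_C + \sigma^2_E}.
\]
% Gene-environment interplay adds terms such as 2\,\mathrm{Cov}(A,E) or an
% A-by-E interaction component, so the three variances no longer sum to
% Var(P) and the independence assumption is violated.

Gene-environment correlation or interaction of the kind the abstract calls "GE-interplay" introduces exactly such extra covariance or interaction terms, which is why violations of the decomposition's independence assumption matter for interpreting the resulting estimates.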
... Non-factivists can respond to this defense by pointing out that in some cases, we credit scientists with an understanding of a phenomenon even though they do not exactly know how their models diverge from the phenomenon or under which conditions the models provide an approximately true description of the phenomenon. Moreover, De Regt (2015) suggests examples from economics and ecology, in which scientists acquire understanding by applying models whose central propositions are not even approximately true. ...
Chapter
Full-text available
Science has not only produced a vast amount of knowledge about a wide range of phenomena, it has also enhanced our understanding of these phenomena. Indeed, understanding can be regarded as one of the central aims of science. But what exactly is it to understand phenomena scientifically, and how can scientific understanding be achieved? What is the difference between scientific knowledge and scientific understanding? These questions are hotly debated in contemporary epistemology and philosophy of science. While philosophers have long regarded understanding as a merely subjective and psychological notion that is irrelevant from an epistemological perspective, nowadays many of them acknowledge that a philosophical account of science and its aims should include an analysis of the nature of understanding. This chapter reviews the current debate on scientific understanding. It presents the main philosophical accounts of scientific understanding and discusses topical issues such as the relation between understanding, truth and knowledge, the phenomenology of understanding, and the role of understanding in scientific progress.
... This approach to understanding appears relative (varying from person to person), but an objective approach is possible [41, § 4]. For example, understanding can be defined by reference to values and concepts shared widely among scientists (but need not necessarily coincide with truth or knowledge) [42][43][44]. Understanding involves explanatory relationships within a single theory [45], connecting theories through concepts which they have in common [46], and fitting theories into an overall framework or structure [47]. ...
Article
Full-text available
This review, of the understanding of quantum mechanics, is broad in scope, and aims to reflect enough of the literature to be representative of the current state of the subject. To enhance clarity, the main findings are presented in the form of a coherent synthesis of the reviewed sources. The review highlights core characteristics of quantum mechanics. One is statistical balance in the collective response of an ensemble of identically prepared systems, to differing measurement types. Another is that states are mathematical terms prescribing probability aspects of future events, relating to an ensemble of systems, in various situations. These characteristics then yield helpful insights on entanglement, measurement, and widely-discussed experiments and analyses. The review concludes by considering how these insights are supported, illustrated and developed by some specific approaches to understanding quantum mechanics. The review uses non-mathematical language precisely (terms defined) and rigorously (consistent meanings), and uses only such language. A theory more descriptive of independent reality than is quantum mechanics may yet be possible. One step in the pursuit of such a theory is to reach greater consensus on how to understand quantum mechanics. This review aims to contribute to achieving that greater consensus, and so to that pursuit.
... Kvanvig 2003; Mizrahi 2012; cf. Grimm 2006) – Elgin argues that numerous falsehoods may exist even among what the aforementioned philosophers would regard as central propositional commitments (so too de Regt 2009, 2015). The distinction between central and peripheral propositions is difficult to draw (Kvanvig 2003 offers an influential articulation of the distinction but rightly worries about it elsewhere, e.g. ...
Article
Full-text available
Elgin offers an influential and far-reaching challenge to veritism. She takes scientific understanding to be non-factive and maintains that there are epistemically useful falsehoods that figure ineliminably in scientific understanding and whose falsehood is no epistemic defect. Veritism, she argues, cannot account for these facts. This paper argues that while Elgin rightly draws attention to several features of epistemic practices frequently neglected by veritists, veritists have numerous plausible ways of responding to her arguments. In particular, it is not clear that false propositional commitments figure ineliminably in understanding in the manner supposed by Elgin. Moreover, even if scientific understanding were non-factive and false propositional commitments did figure ineliminably in understanding, the veritist can account for this in several ways without thereby abandoning veritism.
... [cen]tral to understanding the phenomenon (e.g., Elgin 2007, 2017; de Regt 2015; de Regt and Gijsbers 2017; Potochnik 2017; Rancourt 2017). ...
Article
Full-text available
Science is replete with falsehoods that epistemically facilitate understanding by virtue of being the very falsehoods they are. In view of this puzzling fact, some have relaxed the truth requirement on understanding. I offer a factive view of understanding (i.e., the extraction view) that fully accommodates the puzzling fact in four steps: (i) I argue that the question how these falsehoods are related to the phenomenon to be understood and the question how they figure into the content of understanding it are independent. (ii) I argue that the falsehoods do not figure into the understanding’s content by being elements of its periphery or core. (iii) Drawing lessons from case studies, I argue that the falsehoods merely enable understanding. When working with such falsehoods, only the truths we extract from them are elements of the content of our understanding. (iv) I argue that the extraction view is compatible with the thesis that falsehoods can have an epistemic value by virtue of being the very falsehoods they are.
... If truth were a presupposition for having understanding of gravitational effects, one would lack understanding in this case, given that the theory in play is strictly speaking false. As Henk de Regt (2015) argues, if truth is a necessary condition for understanding, it would follow that past scientists lacked understanding of phenomena for which they had advanced empirically successful (but from our perspective false) theories. Separating the two concepts and taking understanding not to require true theories, but rather an ability to manipulate and use a theory in a certain theoretical domain, avoids this problem and leads to a more plausible thesis about the aims of science and scientific progress. ...
... A necessary (minimal) condition for having understanding is the construction of a mental model of the situation of which the thing we want to understand is a part. Inferential processes are woven into this process; without them, the construction of the model, and thereby the understanding itself, would be impossible. [Footnote: most clearly in (de Regt, 2015), but Mark Newman presents de Regt's views following his earlier publications (de Regt, 2004; de Regt & Dieks, 2005).] ...
Book
"Explanation, understanding and inference" presents a view of scientific explanation, called "inferentialist", and demonstrates the advantages of this view compared to alternative models and analyses of explanation, discussed in the philosophy of science in the last 70 years. In brief, the inferentialist view boils down to the claim that the qualities of an explanation depend on the inferences that it allows us to make. This statement stands on two premises: (a) the primary function of explanation is to bring us understanding of the object being explained, or to deepen the existing understanding; (b) understanding is manifested in the inferences we make about the object of our understanding and its relations with other objects. Hence, one explanation is good, i.e. it successfully performs the function of bringing us understanding, if it allows us to draw inferences that were not available to us before we had this explanation. The contents of the book include a preface, 11 chapters (divided into 3 parts) and an afterword.
... Another context where 'factivity' is denied is recent defences of the epistemic good of understanding (e.g., Potochnik [2017], Elgin [2018], De Regt [2015]). My discussion differs. ...
Article
I develop an account of the relationship between aesthetics and knowledge, focusing on scientific practice. Cognitivists infer from ‘partial sensitivity’—aesthetic appreciation partly depends on doxastic states—to ‘factivity’, the idea that the truth or otherwise of those beliefs makes a difference to aesthetic appreciation. Rejecting factivity, I develop a notion of ‘epistemic engagement’: partaking genuinely in a knowledge-directed process of coming to epistemic judgements, and suggest that this better accommodates the relationship between the aesthetic and the epistemic. Scientific training (and other knowledge-directed activities), I argue, involve ‘attunement’: the co-option of aesthetic judgements towards epistemic ends. Thus, the connection between aesthetic appreciation and knowledge is psychological and contingent. This view has consequences for the warrant of aesthetic judgment in science, namely, that the locus of justification lies in those processes of attunement, not in the aesthetic judgements themselves.
... See Elgin (2004). While I use Elgin's view as a foil, other helpful discussions of nonfactive approaches to understanding include: Zagzebski (2001), de Regt (2015), Potochnik (2017) and Rancourt (2017). ...
Article
Full-text available
The notion of understanding occupies an increasingly prominent place in contemporary epistemology, philosophy of science, and moral theory. A central and ongoing debate about the nature of understanding is how it relates to the truth. In a series of influential contributions, Catherine Elgin has used a variety of familiar motivations for antirealism in philosophy of science to defend a non-factive theory of understanding. Key to her position are: (i) the fact that false theories can contribute to the upwards trajectory of scientific understanding, and (ii) the essential role of inaccurate idealisations in scientific research. Using Elgin's arguments as a foil, I show that a strictly factive theory of understanding has resources with which to offer a unified response to both the problem of idealisations and the role of false theories in the upwards trajectory of scientific understanding. Hence, strictly factive theories of understanding are viable notwithstanding these forceful criticisms.
... Many have claimed that models may explain without accurate representation (e.g. Batterman and Rice 2014; Bokulich 2008, 2011, 2012, 2016; De Regt 2015; Graham Kennedy 2012; Potochnik 2017). Bokulich, for one, argues that what she calls 'model explanations' capture the counterfactual dependencies of a target system despite the fictional representational content. ...
Article
Full-text available
Highly idealized models may serve various epistemic functions, notably explanation, in virtue of representing the world. Inferentialism provides a prima facie compelling characterization of what constitutes the representation relation. In this paper, I argue that what I call factive inferentialism does not provide a satisfactory solution to the puzzle of model-based—factive—explanation. In particular, I show that making explanatory counterfactual inferences is not a sufficient guide for accurate representation, factivity, or realism. I conclude by calling for a more explicit specification of model-world mismatches and properties imputation.
... In fact, the "pessimistic induction" (Laudan 1981) suggests that even our current best theories may be false not merely at the periphery. Consequently, current science displays some degree of understanding only if understanding is not even moderately factive (Elgin 2007; De Regt 2015). ...
Chapter
Full-text available
The paper provides a systematic overview of recent debates in epistemology and philosophy of science on the nature of understanding. We explain why philosophers have turned their attention to understanding and discuss conditions for “explanatory” understanding of why something is the case and for “objectual” understanding of a whole subject matter. The most debated conditions for these types of understanding roughly resemble the three traditional conditions for knowledge: truth, justification and belief. We discuss prominent views about how to construe these conditions for understanding, whether understanding indeed requires conditions of all three types and whether additional conditions are needed.
... The problem of explanatory understanding has received significant attention in the recent literature of epistemology and philosophy of science (see Baumberger et al. 2017 for an overview). The exact definition of understanding [1] is a matter of ongoing dispute, but most analyses have converged on the idea that at least one of the key differences between mere knowledge of a correct explanation of a phenomenon and understanding the phenomenon has an inferential character (Newman 2014; Grimm 2010; Khalifa 2017; De Regt 2015). The inferential character of the explanatory understanding of a given fact, or a factual domain, has been analysed in the literature in two complementary ways, related to both the inferential properties of singular explanations and the organization of inter-linked explanations: ...
... [13] See e.g. De Regt (2009, 2014, 2015). ...
Presentation
Full-text available
Inaugural lecture delivered upon accepting the post of professor by special appointment (bijzonder hoogleraar) of Philosophy of Science, on behalf of the Stichting Het Vrije Universiteitsfonds, at the Faculty of Humanities of the Vrije Universiteit Amsterdam, on 12 May 2016.
... The veridicality condition on understanding is the claim that only representational devices that satisfy a criterion of representational veridicality can grant understanding. The veridicality condition is a generalized version of what one of us has called the 'realist thesis regarding understanding' in an earlier paper (De Regt 2015), the thesis that science provides understanding of the world only if its theories are at least approximately true descriptions of reality, in its observable as well as unobservable aspects. [1] Sometimes, the veridicality criterion is explicitly defended, as for instance by Ahlstrom-Vij and Grimm (2013), who argue that while understanding need not involve truth, it must involve accuracy and "getting it right"; and Wilkenfeld (2015), who argues that the value of representational accuracy is always involved in assessing the quality of understanding. But often, philosophers simply assume that since many particular devices that we now consider to be highly non-veridical, such as phlogiston theory, are no longer used to enhance our understanding of nature, it must be the case that non-veridical devices cannot give understanding at all. ...
Research
Full-text available
Forthcoming in: Stephen Grimm, Christoph Baumberger & Sabine Ammon (eds), Explaining Understanding: New Perspectives from Epistemology and Philosophy of Science (Routledge).
Article
Some experiences change who we are in ways we cannot understand until we have that very experience. In this paper I argue that so-called "transformative experiences" can not only bring about new understanding, but can actually be brought about by the gain of understanding itself. Coming to understand something new can change you. I argue that not only is understanding acquisition potentially a kind of transformative experience; given some of the recent philosophy of the phenomenology of understanding, it is a kind that is potentially rare in not being dependent on a particular subjective phenomenology. The goal of this paper is threefold. First, I argue that coming to gain cognitive understanding of an academic subject matter can, under some circumstances, itself be a transformative experience. A second, subsidiary goal of this paper is to argue that such transformative understanding merits further study. Finally, I give a rough taxonomy of the conditions under which we should expect understanding acquisition to be transformative.
Article
Full-text available
In order to deal with the complexity of biological systems and attempts to generate applicable results, current biomedical sciences are adopting concepts and methods from the engineering sciences. Philosophers of science have interpreted this as the emergence of an engineering paradigm, in particular in systems biology and synthetic biology. This article aims at the articulation of the supposed engineering paradigm by contrast with the physics paradigm that supported the rise of biochemistry and molecular biology. This articulation starts from Kuhn's notion of a disciplinary matrix, which indicates what constitutes a paradigm. It is argued that the core of the physics paradigm is its metaphysical and ontological presuppositions, whereas the core of the engineering paradigm is the epistemic aim of producing useful knowledge for solving problems external to the scientific practice. Therefore, the two paradigms involve distinct notions of knowledge. Whereas the physics paradigm entails a representational notion of knowledge, the engineering paradigm involves the notion of 'knowledge as epistemic tool'.
Chapter
Full-text available
In science and philosophy, a relatively demanding notion of understanding is of central interest: an epistemic subject understands a subject matter by means of a theory. This notion can be explicated in a way which resembles JTB analyses of knowledge. The explication requires that the theory answers to the facts, that the subject grasps the theory, is committed to the theory and justified in the theory. In this paper, we focus on the justification condition and argue that it can be analysed with reference to the idea of a reflective equilibrium.
Article
Full-text available
Scientists often use aesthetic values in the evaluation and choice of theories. Aesthetic values are not only regarded as leading to practically more useful theories, but are often taken to be indicators of the truth of a theory. This paper explores what aesthetic considerations influence scientists’ reasoning, how such aesthetic values relate to the utility of a scientific theory, and how one can justify the epistemic role for such values. The paper examines ways in which the link between beauty and truth can be defended, the challenges facing such accounts, and explores alternative epistemic roles for aesthetic values in scientific practice.
Article
Full-text available
This paper focuses on two questions: (1) Is understanding intimately bound up with accurately representing the world? (2) Is understanding intimately bound up with downstream abilities? We will argue that the answer to both these questions is “yes”, and for the same reason: both accuracy and ability are important elements of orthogonal evaluative criteria along which understanding can be assessed. More precisely, we will argue that representational accuracy (of which we assume truth is one kind) and intelligibility (which we will define so as to entail abilities) are good-making features of a state of understanding. Interestingly, both evaluative claims have been defended by philosophers in the literature on understanding as the criterion of evaluation. We argue that proponents of both approaches have important insights and that, drawing on both their own observations and a few novel arguments, we can construct a more complete picture of understanding evaluation. We thus posit the theory of there being Multiple Understanding Dimensions. The main thing to note about our dualism regarding the evaluative criteria of understanding is that it accounts for the intuitions about cases underlying both previously held positions.
Chapter
In this chapter I will illustrate how scientific modeling activity can be better described by taking advantage of the concept of “epistemic warfare”, which sees the scientific enterprise as a complicated struggle for rational knowledge in which it is crucial to distinguish epistemic weapons (for example, scientific models) from non-epistemic ones (for example, fictions, falsities, propaganda).
Article
Full-text available
Recent attempts to reconcile the ontic and epistemic approaches to explanation propose that our best explanations simply fulfill epistemic and ontic norms simultaneously. I aim to upset this armistice. Epistemic norms of attaining general and systematic explanations are, I argue, autonomous of ontic norms: they cannot be fulfilled simultaneously or in simple conjunction with ontic norms, and plausibly have priority over them. One result is that central arguments put forth by ontic theorists against epistemic theorists are revealed as not only question-begging, but ultimately self-defeating. Another result is that a more nuanced reconciliation of the epistemic and ontic views is required: we should regard good explanatory practice as a dynamic process with distinct phases of epistemic and ontic success.
Chapter
There are three main accounts of scientific progress: (1) the epistemic account, according to which an episode in science constitutes progress when there is an increase in knowledge; (2) the semantic account, according to which progress is made when the number of truths increases; (3) the problem-solving account, according to which progress is made when the number of problems that we are able to solve increases. Each of these accounts has received several criticisms in the last decades. Nevertheless, some authors think that the epistemic account is to be preferred if one takes a realist stance. Recently, Dellsén proposed the noetic account, according to which an episode in science constitutes progress when scientists achieve increased understanding of a phenomenon. Dellsén claims that the noetic account is a more adequate realist account of scientific progress than the epistemic account. This paper aims precisely at assessing whether the noetic account is a more adequate realist account of progress than the epistemic account.
Article
In this paper, we explore the conceptual problems arising when using network analysis in person-centered care (PCC) in psychiatry. Personalized network models are potentially helpful tools for PCC, but we argue that using them in psychiatric practice raises boundary problems, i.e., problems in demarcating what should and should not be included in the model, which may limit their ability to provide clinically-relevant knowledge. Models can have explanatory and representational boundaries, among others. We argue that we can make more explicit what kind of questions personalized network models can address in PCC, given their representational and explanatory boundaries, using perspectival reasoning.
Article
Full-text available
There has been a burst of work in the last couple of decades on mechanistic explanation, as an alternative to the traditional covering-law model of scientific explanation. That work makes some interesting claims about mechanistic explanations rendering phenomena ‘intelligible’, but does not develop this idea in great depth. There has also been a growth of interest in giving an account of scientific understanding, as a complement to an account of explanation, specifically addressing a three-place relationship between explanation, world, and the scientific community. The aim of this paper is to use the contextual theory of scientific understanding to build an account of understanding phenomena using mechanistic explanations. This account will be developed and illustrated by examining the mechanisms of supernovae, which will allow synthesis of treatment of the life sciences and social sciences on the one hand, where many accounts of mechanisms were originally developed, and treatment of physics on the other hand, where the contextual theory drew its original inspiration.
Article
Full-text available
This paper advances three related arguments showing that the ontic conception of explanation (OC), which is often adverted to in the mechanistic literature, is inferentially and conceptually incapacitated, and in ways that square poorly with scientific practice. Firstly, the main argument that would speak in favor of OC is invalid, and faces several objections. Secondly, OC’s superimposition of ontic explanation and singular causation leaves it unable to accommodate scientifically important explanations. Finally, attempts to salvage OC by reframing it in terms of ‘ontic constraints’ just concedes the debate to the epistemic conception of explanation. Together, these arguments indicate that the epistemic conception is more or less the only game in town.
Thesis
This dissertation deals with knowing in medical practice by focusing on what epistemic agents such as scientists, engineers, and medical professionals do when they construct and use knowledge, and what criteria play a role in evaluating the results. Based on experiences as a student and researcher in Technical Medicine, I have learned that current ideas about decision-making in clinical practice – i.e., the epistemology of evidence-based medicine (EBM) – are limited. Instead of deferring (a part of) their responsibility to clinical guidelines, doctors have epistemological responsibility for their clinical decisions. This means that they are responsible for the collection, critical appraisal, interpretation and fitting together of heterogeneous sources of evidence into a ‘picture’ of the patient. Understanding how the epistemological responsibility of doctors can be developed and how it can be assessed requires a more detailed account of (medical) expertise. I argue that this involves an account of the epistemic activities that clinicians should be able to perform and the cognitive skills that allow them to perform these activities. An account of expertise also improves our understanding of interdisciplinary research projects. Through their training, experts develop a disciplinary perspective that shapes how they deal with a target system. In interdisciplinary research aimed at problems in medical practices, multidisciplinary teams consisting of disciplinary experts interact around a problem, each exercising their own disciplinary perspective in dealing with aspects of the target system, rather than integrating theories. An important example of such an interdisciplinary research project is the development of a new medical imaging technology. Medical images do not speak for themselves but need to be interpreted. For this, engineers and clinicians need to enter into a shared search process to establish what an image represents, which requires an understanding of medical practice to establish for what relevant clinical claims they might provide evidence, whereas an understanding of the imaging technology is required to establish the reliability of this evidence. In medical practice, interdisciplinary collaborations are also important. Knowing in current medical practice is distributed over professionals with different expertises who collaborate in the construction of clinical knowledge. This collaborative character of epistemic practices in clinical decision-making leads to complex social practices of trust. Trust in these practices is implicit, in the sense that trusting the expertise of others occurs while the members of a team focus on other tasks, most importantly, building up a framework of common ways of identifying and assessing evidence. It is within this intersubjective framework that trusting or mistrusting becomes meaningful in multidisciplinary clinical teams.
Book
Full-text available
Roughly, instrumentalism is the view that science is primarily, and should primarily be, an instrument for furthering our practical ends. It has fallen out of favour because historically influential variants of the view, such as logical positivism, suffered from serious defects. In this book, however, Darrell P. Rowbottom develops a new form of instrumentalism, which is more sophisticated and resilient than its predecessors. This position—‘cognitive instrumentalism’—involves three core theses. First, science makes theoretical progress primarily when it furnishes us with more predictive power or understanding concerning observable things. Second, scientific discourse concerning unobservable things should only be taken literally in so far as it involves observable properties or analogies with observable things. Third, scientific claims about unobservable things are probably neither approximately true nor liable to change in such a way as to increase in truthlikeness. There are examples from science throughout the book, and Rowbottom demonstrates at length how cognitive instrumentalism fits with the development of late nineteenth- and early twentieth-century chemistry and physics, and especially atomic theory. Drawing upon this history, Rowbottom also argues that there is a kind of understanding, empirical understanding, which we can achieve without having true, or even approximately true, representations of unobservable things. In closing the book, he sets forth his view on how the distinction between the observable and unobservable may be drawn, and compares cognitive instrumentalism with key contemporary alternatives such as structural realism, constructive empiricism, and semirealism. Overall, this book offers a strong defence of instrumentalism that will be of interest to scholars and students working on the debate about realism in philosophy of science.
Article
Full-text available
In the last few years, biologists and computer scientists have claimed that the introduction of data science techniques in molecular biology has changed the characteristics and the aims of typical outputs (i.e. models) of such a discipline. In this paper we will critically examine this claim. First, we identify the received view on models and their aims in molecular biology. Models in molecular biology are mechanistic and explanatory. Next, we identify the scope and aims of data science (machine learning in particular). These lie mainly in the creation of predictive models whose performance increases as the data set increases. Next, we will identify a tradeoff between predictive and explanatory performances by comparing the features of mechanistic and predictive models. Finally, we show how this a priori analysis of machine learning and mechanistic research applies to actual biological practice. This will be done by analyzing the publications of a consortium—The Cancer Genome Atlas—which stands at the forefront in integrating data science and molecular biology. The result will be that biologists have to deal with the tradeoff between explaining and predicting that we have identified, and hence the explanatory force of the ‘new’ biology is substantially diminished compared to the ‘old’ biology. However, this aspect also emphasizes the existence of other research goals which make predictive force independent from explanation.
Thesis
Full-text available
Recent years have seen a dramatic increase in the volumes of data that are produced, stored, and analyzed. This advent of big data has led to commercial success stories, for example in recommender systems in online shops. However, scientific research in various disciplines including environmental and climate science will likely also benefit from increasing volumes of data, new sources for data, and the increasing use of algorithmic approaches to analyze these large datasets. This thesis uses tools from philosophy of science to conceptually address epistemological questions that arise in the analysis of these increasing volumes of data in environmental science with a special focus on data-driven modeling in climate research. Data-driven models, here, are defined as models of phenomena that are built with machine learning. While epistemological analyses of machine learning exist, these have mostly been conducted for fields characterized by a lack of hierarchies of theoretical background knowledge. Such knowledge is often available in environmental science and especially in physical climate science, and it is relevant for the construction, evaluation, and use of data-driven models. This thesis investigates predictions, uncertainty, and understanding from data-driven models in environmental and climate research and engages in in-depth discussions of case studies. These three topics are discussed in three topical chapters. The first chapter addresses the term “big data”, and rationales and conditions for the use of big-data elements for predictions. Namely, it uses a framework for classifying case studies from climate research and shows that “big data” can refer to a range of different activities. Based on this classification, it shows that most case studies lie in between classical domain science and pure big data. The chapter specifies necessary conditions for the use of big data and shows that in most scientific applications, background knowledge is essential to argue for the constancy of the identified relationships. This constancy assumption is relevant both for new forms of measurements and for data-driven models. Two rationales for the use of big-data elements are identified. Namely, big-data elements can help to overcome limitations in financial, computational, or time resources, which is referred to as the rationale of efficiency. Big-data elements can also help to build models when system understanding does not allow for a more theory-guided modeling approach, which is referred to as the epistemic rationale. The second chapter addresses the question of predictive uncertainties of data-driven models. It highlights that existing frameworks for understanding and characterizing uncertainty focus on specific locations of uncertainty, which are not informative for the predictive uncertainty of data-driven models. Hence, new approaches are needed for this task. A framework is developed and presented that focuses on the justification of the fitness-for-purpose of the models for the specific kind of prediction at hand. This framework uses argument-based tools and distinguishes between first-order and second-order epistemic uncertainty. First-order uncertainty emerges when it cannot be conclusively justified that the model is maximally fit-for-purpose. Second-order uncertainty emerges when it is unclear to what extent the fitness-for-purpose assumption and the underlying assumptions are justified. 
The application of the framework is illustrated by discussing a case study of data-driven projections of the impact of climate change on global soil selenium concentrations. The chapter also touches upon how the information emerging from the framework can be used in decision-making. The third chapter addresses the question of scientific understanding. A framework is developed for assessing the fitness of a model for providing understanding of a phenomenon. For this, the framework draws from the philosophical literature on scientific understanding and focuses on the representational accuracy, the representational depth, and the graspability of a model. Then, based on the framework, the fitness of data-driven and process-based climate models for providing understanding of phenomena is compared. It is concluded that data-driven models can, under some conditions, be fit to serve as vehicles for understanding to a satisfactory extent. This is specifically the case when sufficient background knowledge is available such that the coherence of the model with background knowledge provides good reasons for the representational accuracy of the data-driven model, which can be assessed e.g. through sensitivity analyses. This point is illustrated by discussing a case study from atmospheric physics in which data-driven models are used to better understand the drivers of a specific type of clouds. The work of this thesis highlights that while big data is no panacea for scientific research, data-driven modeling offers new tools to scientists that can be very useful for a variety of questions. All three studies emphasize the importance of background knowledge for the construction and evaluation of data-driven models as this helps to obtain models that are representationally accurate. The importance of domain-specific background knowledge and the technical challenges of implementing data-driven models for complex phenomena highlight the importance of interdisciplinary work. Previous philosophical work on machine learning has stressed that the problem framing makes models theory-laden. This thesis shows that in a field like climate research, the model evaluation is strongly guided by theoretical background knowledge, which is also important for the theory-ladenness of data-driven modeling. The results of the thesis are relevant for a range of methodological questions regarding data-driven modeling and for philosophical discussions of models that go beyond data-driven models.
Article
Full-text available
The use of machine learning instead of traditional models in neuroscience raises significant questions about the epistemic benefits of the newer methods. I draw on the literature on model intelligibility in the philosophy of science to offer some benchmarks for the interpretability of artificial neural networks (ANNs) used as a predictive tool in neuroscience. Following two case studies on the use of ANNs to model motor cortex and the visual system, I argue that the benefit of providing the scientist with understanding of the brain trades off against the predictive accuracy of the models. This trade-off between prediction and understanding is better explained by a non-factivist account of scientific understanding.
Article
Full-text available
The paper explores the interplay among moral progress, evolution and moral realism. Although it is nearly uncontroversial to note that morality makes progress of one sort or another, it is far from uncontroversial to define what constitutes moral progress. In a minimal sense, moral progress occurs when a subsequent state of affairs is better than a preceding one. Moral realists conceive “it is better than” as something like “it more adequately reflects moral facts”; therefore, on a realist view, moral progress can be associated with accumulations of moral knowledge. From an evolutionary perspective, on the contrary, since there cannot be something like moral knowledge, one might conclude there cannot even be such a thing as moral progress. More precisely, evolutionism urges us to ask whether we can acknowledge the existence of moral progress without being committed to moral realism. A promising strategy, I will argue, is to develop an account of moral progress based on moral understanding rather than moral knowledge. On this view, moral progress follows increases in moral understanding rather than accumulations of moral knowledge. Whether an understanding-based account of moral progress is feasible, and what its implications are for the very notion of moral progress, will be discussed.
Chapter
The Modern Synthesis can be regarded as an attempt to unify biology and to protect it from reduction to chemistry and physics, and thus to preserve the identity of biology as a discipline. Mayr was particularly sensitive to this aspect of the synthesis and developed a specific account of biological causation in part to separate biology from other disciplines. He published his case in 1961, making a distinction between proximate and ultimate causation. This distinction and Mayr's model of causation have been heavily criticized by advocates of an Extended Evolutionary Synthesis. In this chapter, I detail Mayr's original argument and then core arguments from those opposing his view. I defend Mayr analytically, but I also comment on the possibility that Mayr and his critics are simply operating with different forms of idealization to deliver on different tasks. If this is the case, I suggest, then Mayr's view has not really been dismissed as false but rather positioned within specific task demands.
Chapter
According to epistemism, we scientifically understand explananda in terms of explanantia, provided that they are true and we justifiably believe them. On this account, scientific understanding requires the three ingredients of knowledge: belief, justification, and truth. Therefore, scientific understanding is attainable for realists, but not for antirealists. According to anti-epistemism, scientific understanding requires explanation and prediction, but none of the three ingredients of knowledge. I object that anti-epistemists have the burden of giving an account of explanation and prediction without appealing to the three ingredients of knowledge and an account of when misunderstanding arises.
Article
Full-text available
This article uses recent work in philosophy of science and social epistemology to argue for a shift in analytic philosophy of religion from a knowledge-centric epistemology to an epistemology centered on understanding. Not only can an understanding-centered approach open up new avenues for the exploration of largely neglected aspects of the religious life, it can also shed light on how religious participation might be epistemically valuable in ways that knowledge-centered approaches fail to capture. Further, it can create new opportunities for interaction with neighboring disciplines and can help us revitalize and transform stagnant debates in philosophy of religion, while simultaneously allowing for the introduction and recovery of marginalized voices and traditions.
Article
Relationships of counterfactual dependence have played a major role in recent debates of explanation and understanding in the philosophy of science. Usually, counterfactual dependencies have been viewed as the explanantia of explanation, i.e., the things providing explanation and understanding. Sometimes, however, counterfactual dependencies are themselves the targets of explanations in science. These kinds of explanations are the focus of this paper. I argue that “micro-level model explanations” explain the particular form of the empirical regularity underlying a counterfactual dependency by representing it as a physical necessity on the basis of postulated microscopic entities. By doing so, micro-level models rule out possible forms the regularity (and the associated counterfactual) could have taken. Micro-model explanations, in other words, constrain empirical regularities and their associated counterfactual dependencies. I introduce and illustrate micro-level model explanations in detail, contrast them to other accounts of explanation, and consider potential problems.
Article
One of the most lively debates on scientific understanding is standardly presented as a controversy between the so-called factivists, who argue that understanding implies truth, and the non-factivists, whose position is that truth is neither necessary nor sufficient for understanding. A closer look at the debate, however, reveals that the borderline between factivism and non-factivism is not as clear-cut as it looks at first glance. Some of those who claim to be quasi-factivists come suspiciously close to the position of their opponents, the non-factivists, from whom they pretend to differ. The non-factivists, in turn, acknowledge that some sort of ‘answering to the facts’ is indispensable for understanding. This paper discusses an example of convergence of the initially rival positions in the debate on understanding and truth: the use of the same substitute for truth by the quasi-factivist Kareem Khalifa and the non-factivists Henk de Regt and Victor Gijsbers. It is argued that the use of ‘effectiveness’ as a substitute for truth by both parties is not an occasional coincidence of terms; rather, it speaks to a deeper similarity that has important implications for understanding the essential features of scientific understanding.
Article
The paper offers an account of the structure of information provided by models that relevantly deviate from reality. It is argued that accounts of scientific modeling according to which a model’s epistemic and pragmatic relevance stems from the alleged fact that models give access to possibilities fail. First, it seems that there are models that do not give access to possibilities, for what they describe is impossible. Secondly, it appears that having access to a possibility is epistemically and pragmatically idle. Based on these observations, an alternative is developed.
Article
Full-text available
Historians often feel that standard philosophical doctrines about the nature and development of science are not adequate for representing the real history of science. However, when philosophers of science fail to make sense of certain historical events, it is also possible that there is something wrong with the standard historical descriptions of those events, precluding any sensible explanation. If so, philosophical failure can be useful as a guide for improving historiography, and this constitutes a significant mode of productive interaction between the history and the philosophy of science. I illustrate this methodological claim through the case of the Chemical Revolution. I argue that no standard philosophical theory of scientific method can explain why European chemists made a sudden and nearly unanimous switch of allegiance from the phlogiston theory to Lavoisier's theory. A careful re-examination of the history reveals that the shift was neither so quick nor so unanimous as imagined even by many historians. In closing I offer brief reflections on how best to explain the general drift toward Lavoisier's theory that did take place.
Article
Full-text available
How can false models be explanatory? And how can they help us to understand the way the world works? Sometimes scientists have little hope of building models that approximate the world they observe. Even in such cases, I argue, the models they build can have explanatory import. The basic idea is that scientists provide causal explanations of why the regularity entailed by an abstract and idealized model fails to obtain. They do so by relaxing some of its unrealistic assumptions. This method of 'explanation by relaxation' captures the explanatory import of some important models in economics. I contrast this method with the accounts that Daniel Hausman and Nancy Cartwright have provided of explanation in economics. Their accounts are unsatisfactory because they require that the economic model regularities obtain, which is rarely the case. I go on to argue that counterfactual regularities play a central role in achieving 'understanding by relaxation.' This has a surprising implication for the relation between explanation and understanding: achieving scientific understanding does not require the ability to explain observed regularities.
Article
Full-text available
Claims pertaining to understanding are made in a variety of contexts and ways. As a result, few in the philosophical literature have made an attempt to precisely characterize the state that is y's understanding x. This paper builds an account that does just that. The account is motivated by two main observations. First, understanding x is somehow related to being able to manipulate x. Second, understanding is a mental phenomenon, and so what manipulations are required to be an understander must only be mental manipulations. Combining these two insights, the paper builds an account (URM) of understanding as a certain representational capacity—specifically, understanding x involves possessing a representation of x that could be manipulated in useful ways. By tying understanding to representation, the account correctly identifies that understanding is a fundamentally cognitive achievement. However, by also demanding that which representations count as understanding-conferring be determined by their practical effects, URM captures the insight that understanding is vitally connected to practice. URM is fully general, and can apply equally well to understanding states of affairs, understanding events, and even understanding people and works of art. The ultimate test of URM is its applicability in actual scientific and philosophical discourse. To that end the paper discusses the importance of understanding in the philosophy of science, psychology, and computer science.
Article
Full-text available
The concept of mechanism is analyzed in terms of entities and activities, organized such that they are productive of regular changes. Examples show how mechanisms work in neurobiology and molecular biology. Thinking in terms of mechanisms provides a new framework for addressing many traditional philosophical issues: causality, laws, explanation, reduction, and scientific change.
Chapter
Full-text available
This chapter offers an analysis of understanding in biology based on characteristic biological practices: ways in which biologists think and act when carrying out their research. De Regt and Dieks have forcefully claimed that a philosophical study of scientific understanding should 'encompass the historical variation of specific intelligibility standards employed in scientific practice' (2005, 138). In line with this suggestion, I discuss the conditions under which contemporary biologists come to understand natural phenomena and I point to a number of ways in which the performance of specific research practices informs and shapes the quality of such understanding. My arguments are structured in three parts. In Section 1, I consider the ways in which biologists think and act in order to produce biological knowledge. I review the epistemic role played by theories and models and I emphasise the importance of embodied knowledge (so-called 'know-how') as a necessary complement to theoretical knowledge ('knowing that') of phenomena. I then argue that it is neither possible nor useful to distinguish between basic and applied knowledge within contemporary biology. Technological expertise and the ability to manipulate entities (or models thereof) are not only indispensable to the production of knowledge, but are as important a component of biological knowledge as are theories and explanations. Contemporary biology can be characterised as an 'impure' mix of tacit and articulated knowledge. Having determined what I take to count as knowledge in biology, in Section 2 I analyse how researchers use such knowledge to achieve an understanding of biological phenomena.
Article
Full-text available
Like other mathematically intensive sciences, economics is becoming increasingly computerized. Despite the extent of the computation, however, there is very little true simulation. Simple computation is a form of theory articulation, whereas true simulation is analogous to an experimental procedure. Successful computation is faithful to an underlying mathematical model, whereas successful simulation directly mimics a process or a system. The computer is seen as a legitimate tool in economics only when traditional analytical solutions cannot be derived, i.e., only as a purely computational aid. We argue that true simulation is seldom practiced because it does not fit the conception of understanding inherent in mainstream economics. According to this conception, understanding is constituted by analytical derivation from a set of fundamental economic axioms. We articulate this conception using the concept of economists' perfect model. Since the deductive links between the assumptions and the consequences are not transparent in 'bottom-up' generative microsimulations, microsimulations cannot correspond to the perfect model and economists do not therefore consider them viable candidates for generating theories that enhance economic understanding.
Article
Full-text available
The basic theory of scientific understanding presented in Sections 1–2 exploits three main ideas. First, that to understand a phenomenon P (for a given agent) is to be able to fit P into the cognitive background corpus C (of the agent). Second, that to fit P into C is to connect P with parts of C (via arguments in a very broad sense) such that the unification of C increases. Third, that the cognitive changes involved in unification can be treated as sequences of shifts of phenomena in C. How the theory fits typical examples of understanding and how it excludes spurious unifications is explained in detail. Section 3 gives a formal description of the structure of cognitive corpuses which contain descriptive as well as inferential components. The theory of unification is then refined in the light of so-called puzzling phenomena, to enable important distinctions, such as that between consonant and dissonant understanding. In Section 4, the refined theory is applied to several examples, among them a case study of the development of the atomic model. The final part contains a classification of kinds of understanding and a discussion of the relation between understanding and explanation.
Article
Full-text available
Philosophers of science have often favoured reductive approaches to how-possibly explanation. This chapter identifies three varieties of how-possibly explanation and, in so doing, helps to show that this form of explanation is a rich and interesting phenomenon in its own right. The first variety approaches “How is it possible that X?” by showing that, despite appearances, X is not ruled out by what was believed prior to X. This can sometimes be achieved by removing misunderstandings about the implications of one’s belief system (prior to observing X), but more often than not it involves a modification of this belief system so that one’s acceptance of X does not generate a contradiction.
Article
Scientific realism is the view that our best scientific theories give approximately true descriptions of both observable and unobservable aspects of a mind-independent world. Debates between realists and their critics are at the very heart of the philosophy of science. Anjan Chakravartty traces the contemporary evolution of realism by examining the most promising strategies adopted by its proponents in response to the forceful challenges of antirealist sceptics, resulting in a positive proposal for scientific realism today. He examines the core principles of the realist position, and sheds light on topics including the varieties of metaphysical commitment required, and the nature of the conflict between realism and its empiricist rivals. By illuminating the connections between realist interpretations of scientific knowledge and the metaphysical foundations supporting them, his book offers a compelling vision of how realism can provide an internally consistent and coherent account of scientific knowledge.
Article
What distinguishes good explanations in neuroscience from bad? This book constructs and defends standards for evaluating neuroscientific explanations that are grounded in a systematic view of what neuroscientific explanations are: descriptions of multilevel mechanisms. In developing this approach, it draws on a wide range of examples in the history of neuroscience (e.g., Hodgkin and Huxley's model of the action potential and LTP as a putative explanation for different kinds of memory), as well as recent philosophical work on the nature of scientific explanation.
Article
In the study of weather and climate, the digital computer has allowed scientists to make existing theory more useful, both for prediction and for understanding. After characterizing two sorts of understanding commonly sought by scientists in this arena, I show how the use of the computer to (i) generate surrogate observational data, (ii) test physical hypotheses and (iii) experiment on models has helped to advance such understanding in significant ways.
Article
Science and the Enlightenment is a general history of eighteenth-century science covering both the physical and life sciences. It places the scientific developments of the century in the cultural context of the Enlightenment and reveals the extent to which scientific ideas permeated the thought of the age. The book takes advantage of topical scholarship, which is rapidly changing our understanding of science during the eighteenth century. In particular it describes how science was organized into fields that were quite different from those we know today. Professor Hankins's work is a much needed addition to the literature on eighteenth-century science. His study is not technical; it will be of interest to all students of the Enlightenment and the history of science, as well as to the general reader with some background in science.
Article
Recently, several authors have argued that scientific understanding should be a new topic of philosophical research. In this article, I argue that the three most developed accounts of understanding—Grimm’s, de Regt’s, and de Regt and Dieks’s—can be replaced by earlier ideas about scientific explanation without loss. Indeed, in some cases, such replacements have clear benefits.
Article
This book presents the important aspects of the field of high-energy physics, or particle physics, at an elementary level. The first chapter presents basic introductory ideas, the historical development, and a brief overview of the subject; the second and third chapters deal with experimental methods, conservation laws, and invariance principles. The following chapters deal in turn with the main features of the interactions between hadrons; the description of the hadrons in terms of quark constituents; and discussion of the basic interactions (electromagnetic, weak, and strong) between the lepton and quark constituents. The final chapter discusses unification of the various interactions.
Article
Computer simulation has become an important means for obtaining knowledge about nature. The practice of scientific simulation and the frequent use of uncertain simulation results in public policy raise a wide range of philosophical questions. Most prominently highlighted is the field of anthropogenic climate change—are humans currently changing the climate? Referring to empirical results from science studies and political science, Simulating Nature: A Philosophical Study of Computer-Simulation Uncertainties and Their Role in Climate Science and Policy Advice, Second Edition addresses questions about the types of uncertainty associated with scientific simulation and about how these uncertainties can be communicated. The author, who participated in the United Nations’ Intergovernmental Panel on Climate Change (IPCC) plenaries in 2001 and 2007, discusses the assessment reports and workings of the IPCC. This second edition reflects the latest developments in climate change policy, including a thorough update and rewriting of sections that refer to the IPCC.
Article
In this paper I explore the prospects of applying Inference to the Best Explanation (IBE, sometimes also known as 'abduction') to an account of the way we decide whether to accept the word of others (sometimes known as 'testimony'). IBE is a general account of non-demonstrative or inductive inference, but it has been applied in a particular way to the management of testimony. The governing idea of Testimonial IBE (TIBE) is that a recipient of testimony ('hearer') decides whether to believe the claim of the informant ('speaker') by considering whether the truth of that claim would figure in the best explanation of the fact that the speaker made it.
Article
This paper starts by looking at the coincidence of surprising behavior on the nanolevel in both matter and simulation. It uses this coincidence to argue that the simulation approach opens up a pragmatic mode of understanding oriented toward design rules and based on a new instrumental access to complex models. Calculations, and their variation by means of explorative numerical experimentation and visualization, can give a feeling for a model's behavior and the ability to control phenomena, even if the model itself remains epistemically opaque. Thus, the investigation of simulation in nanoscience provides a good example of how science is adapting to a new instrument: computer simulation.
Article
This essay contains a partial exploration of some key concepts associated with the epistemology of realist philosophies of science. It shows that neither reference nor approximate truth will do the explanatory jobs that realists expect of them. Equally, several widely-held realist theses about the nature of inter-theoretic relations and scientific progress are scrutinized and found wanting. Finally, it is argued that the history of science, far from confirming scientific realism, decisively confutes several extant versions of avowedly 'naturalistic' forms of scientific realism.
Article
Biologists in many different fields of research give how-possibly explanations of the phenomena they study. Although such explanations lack empirical support, and might be regarded by some as unscientific, they play an important heuristic role in biology by helping biologists develop theories and concepts and suggesting new areas of research. How-possibly explanations serve as a useful framework for conducting research in the absence of adequate empirical data, and they can even become how-actually explanations if they gain enough empirical support.
Article
First, I show how to use the concept of phlogiston to teach oxidation and reduction reactions, based on the historical context of their discovery, while also teaching about the history and nature of science. Second, I discuss the project as an exemplar for integrating history, philosophy and sociology of science in teaching basic scientific concepts. Based on this successful classroom experience, I critique the application of common constructivist themes to teaching practice. Finally, this case shows, along with others, how the classroom is not merely a place for applying history, philosophy or sociology, but is also a site for active research in these areas. This potential is critical, I claim, for building a stable, permanent interdisciplinary relationship between these fields.
Article
This paper argues that spacetime visualisability is not a necessary condition for the intelligibility of theories in physics. Visualisation can be an important tool for rendering a theory intelligible, but it is by no means a sine qua non. The paper examines the historical transition from classical to quantum physics, and analyses the role of visualisability (Anschaulichkeit) and its relation to intelligibility. On the basis of this historical analysis, an alternative conception of the intelligibility of scientific theories is proposed, based on Heisenberg's reinterpretation of the notion of Anschaulichkeit.
Article
Many people assume that the claims of scientists are objective truths. But historians, sociologists, and philosophers of science have long argued that scientific claims reflect the particular historical, cultural, and social context in which those claims were made. The nature of scientific knowledge is not absolute because it is influenced by the practice and perspective of human agents. Scientific Perspectivism argues that the acts of observing and theorizing are both perspectival, and this nature makes scientific knowledge contingent, as Thomas Kuhn theorized forty years ago. Using the example of color vision in humans to illustrate how his theory of “perspectivism” works, Ronald N. Giere argues that colors do not actually exist in objects; rather, color is the result of an interaction between aspects of the world and the human visual system. Giere extends this argument into a general interpretation of human perception and, more controversially, to scientific observation, conjecturing that the output of scientific instruments is perspectival. Furthermore, complex scientific principles—such as Maxwell’s equations describing the behavior of both the electric and magnetic fields—make no claims about the world, but models based on those principles can be used to make claims about specific aspects of the world. Offering a solution to the most contentious debate in the philosophy of science over the past thirty years, Scientific Perspectivism will be of interest to anyone involved in the study of science.
Article
Among philosophers of science there seems to be a general consensus that understanding represents a species of knowledge, but virtually every major epistemologist who has thought seriously about understanding has come to deny this claim. Against this prevailing tide in epistemology, I argue that understanding is, in fact, a species of knowledge: just like knowledge, for example, understanding is not transparent and can be Gettiered. I then consider how the psychological act of “grasping” that seems to be characteristic of understanding differs from the sort of psychological act that often characterizes knowledge. Sections: Zagzebski's account; Kvanvig's account; Two problems; Comanche cases; Unreliable sources of information; The upper-right quadrant; So is understanding a species of knowledge?; A false choice.
Model organisms as fictions
  • R A Ankeny
Ankeny, R. A. (2009). Model organisms as fictions. In M. Suàrez (Ed.), Fictions in science (pp. 193–204). New York: Routledge.
The hidden history of phlogiston
  • H Chang
Chang, H. (2010). The hidden history of phlogiston. Hyle, 16, 47–79.
Explaining the brain: Mechanisms and the mosaic unity of neuroscience
  • C F Craver
Craver, C. F. (2007). Explaining the brain: Mechanisms and the mosaic unity of neuroscience. Oxford: Clarendon.
Scientific perspectivism
  • R N Giere
Giere, R. N. (2006). Scientific perspectivism. Chicago: The University of Chicago Press.
Is understanding a species of knowledge?
  • S R Grimm
Grimm, S. R. (2006). Is understanding a species of knowledge? The British Journal for the Philosophy of Science, 57, 515–535.
From instrumentalism to constructive realism: On some relations between confirmation, empirical progress and truth approximation
  • T A F Kuipers
Kuipers, T. A. F. (2000). From instrumentalism to constructive realism: On some relations between confirmation, empirical progress and truth approximation (Vol. 287). Dordrecht: Kluwer.
Fictions, fictionalization, and truth in science
  • P Teller
Teller, P. (2009). Fictions, fictionalization, and truth in science. In M. Suàrez (Ed.), Fictions in science (pp. 235–247). New York: Routledge.
The scientific image
  • B C van Fraassen
van Fraassen, B. C. (1980). The scientific image. Oxford: Clarendon Press.
Truthlikeness
  • I Niiniluoto
Niiniluoto, I. (1987). Truthlikeness. Dordrecht: Reidel.
Outline of a theory of scientific understanding
  • G Schurz
  • K Lambert
Schurz, G., & Lambert, K. (1994). Outline of a theory of scientific understanding. Synthese, 101(1), 65–120.