Book

Idealization and the Aims of Science

Authors: Angela Potochnik
... For similar reasons, I will skip over a range of issues that involve the connection between explanation and other aspects of scientific epistemology. These include the role of explanation and understanding in structuring scientific inquiry and the relation between these notions and scientific progress (de Regt 2017; Dellsén 2016; Elgin 2017; Potochnik 2017). ...
... The intricacies of de-idealizing models are discussed in Knuuttila and Morgan (2021). Potochnik (2017) provides an extended account of the importance of idealization in explanation and in science more broadly. Elgin (2017) argues against pictures of science centered on truth, emphasizing idealization and understanding. ...
... In her 2017, Elgin proposes a broader view of the epistemology of science centered around understanding. A related view, albeit one that stresses idealization to a greater extent, is given by Potochnik (2017). Grimm (2006) is an important discussion of the relation between understanding and knowledge, and Khalifa (2012) argues that the appeal to understanding adds little to existing accounts of explanation. ...
... Of course, we're not the first to take pragmatic factors seriously. Many philosophers have stressed the relevance of conversational and cognitive considerations in characterising explanation (Bromberger, 1965; van Fraassen, 1977, 1980; Achinstein, 1983; Ylikoski and Kuorikoski, 2010; De Regt, 2017; Potochnik, 2017), but these proposals have remained programmatic and informal. By contrast, our formal account is developed using tools from contemporary cognitive science; not only does this afford precise comparison with other formal proposals, it suggests a promising future avenue for incorporating psychological work into the philosophy of explanation. ...
... Others endeavour to put such matters front and center, in part because they feel that pragmatic and contextual dependence undermines previous accounts (Bromberger, 1965, 1984; Achinstein, 1977, 1983, 1984; van Fraassen, 1977, 1980; De Regt and Dieks, 2005; De Regt, 2017; Potochnik, 2017). To illustrate their motivations, consider a variant of an example from Gärdenfors (1980). ...
... For similar reasons, it's unclear why the HP analysis has a hard requirement that explanations be minimal (EX3). If anything, a good explanation ought to bear more rather than less inferential fruit, a point often stressed in the literature on explanatory generality (e.g., Potochnik 2017; Putnam 1975). Consider the following example: ...
Preprint
Full-text available
This paper develops a formal account of causal explanation, grounded in a theory of conversational pragmatics, and inspired by the interventionist idea that explanation is about asking and answering what-if-things-had-been-different questions. We illustrate the fruitfulness of the account, relative to previous accounts, by showing that widely recognised explanatory virtues emerge naturally, as do subtle empirical patterns concerning the impact of norms on causal judgments. This shows the value of a communication-first approach to explanation: getting clear on explanation's communicative dimension is an important prerequisite for philosophical work on explanation. The result is a simple but powerful framework for incorporating insights from the cognitive sciences into philosophical work on explanation, which will be useful for philosophers or cognitive scientists interested in explanation.
... which can involve deviations from the truth (Potochnik, 2017), are used to achieve a kind of scientific understanding; moral understanding might likewise involve grasping only a subset of all the relevant moral reasons. Yet, in Section 5, I point out some differences between moral and scientific understanding, which show that even if one doesn't take scientific idealization to involve restricting one's grasp of scientific reasons, it's plausible that such restrictions do occur in the moral domain. ...
... Within the realm of science, it's been argued that idealizations help us achieve scientific understanding. Idealizations involve deviations from truths and have been said to be necessary for creating a simplified model that the human mind is capable of grasping (Elgin, 2017; Potochnik, 2017, 2020). Such deviations from the truth are thought to be necessary because of the cognitive limitations of the human mind; sometimes less (of the truth) is more, at least when it comes to achieving understanding. ...
Article
Full-text available
Moral understanding has typically been defined as grasping the explanation, q, for some proposition, p, where p states that some action is morally right (or wrong). This article deals with an underdiscussed point within the literature on moral understanding: the degree of moral understanding one has deepens with the more moral reasons that one grasps, where these reasons include not only those that speak in favor of an action's moral permissibility but also those speaking against it. I argue for a surprising and important implication of this: having a deep degree of moral understanding can make it harder to carry out the right action. Furthermore, I propose that we should think of our pursuit of moral understanding analogously to how some have thought of scientific understanding: there may be good reasons to fail to appreciate all of the actual moral reasons that in fact exist; sometimes we should seek a surface-level moral understanding instead of something deeper. Just as idealizations used within science – which can involve deviations from the truth – can help us achieve scientific understanding, so too we might restrict the moral reasons that we seek to grasp in pursuit of moral understanding.
... Additionally, mechanisms can be described at varying levels of abstraction (where certain parts are intentionally excluded from the description) and/or idealization (where parts known not to belong to the mechanism are intentionally attributed to it). This approach stems from the fact that proposing a mechanism is an explanatory endeavor and, as such, is shaped by the purpose of the research (Craver, 2009; Glennan, 2017; Potochnik, 2017). Depending on the goal of the research, it may be more useful to abstract away certain features and idealize others. ...
... Explanations generally reveal counterfactual dependencies (Woodward, 2003; Glennan, 2017; Potochnik, 2017). Counterfactual dependencies not only elucidate why a phenomenon happens but also how it would alter under different circumstances (Woodward, 2003; Craver, 2007). ...
... Sometimes the underlying mechanism may not be the best explanation, especially if it does not allow for the specific interventions sought in that particular research. Moreover, science arguably has multiple goals beyond uncovering underlying mechanisms (Potochnik, 2017), and methodological strategies, including kinding strategies, depend on the goal at hand (Massimi, 2022). ...
Article
Full-text available
The (dis)continuism debate in the philosophy of memory revolves around the question of whether memory and imagination belong to the same natural kind. Continuism, on the one hand, holds that they belong to the same natural kind; discontinuism, on the other hand, holds that they do not. By adopting a minimal notion of natural kind, one can recognize that there are different legitimate ways of sorting kinds, which lead to different positions in the debate. In this paper, I interpret continuism as a mechanistic thesis, according to which memory and imagination belong to the same natural kind because they are underpinned by the same constitutive mechanism. I clarify the implications of this thesis and show that most of the discontinuist attacks on continuism do not constitute a challenge to the mechanistic thesis. I also present a possible challenge to mechanistic continuism. This suggests that there may be multiple (dis)continuism debates.
... However, a crucial question for this prominent approach to understanding is: which possibilities ought scientists to investigate to generate (or deepen) scientific understanding? The prominent views of understanding found in the literature seem to suggest three possible answers: (1) accuracy/closeness with respect to the actual world (Strevens 2013; Trout 2007), (2) the counterfactual situations relevant for evaluating the outcomes of interventions (Douglas 2009; Potochnik 2017; Woodward 2003), or (3) the possibilities of interest to the scientists using the model or theory (Elgin 2017; Potochnik 2017; Saatsi 2019). ...
... While this enables Strevens's view to capture some additional cases, this kind of view fails to capture the sundry idealized models that provide scientific understanding by directly distorting difference-making features (Elgin 2017; Potochnik 2017; Rice 2018, 2021). The explanatory content of these models cannot be rendered true (of the actual world) by arguing that the distorted features do not make a difference to the explanandum. ...
... Focusing on these model-based justifications highlights what I call the "tyranny of availability" that greatly constrains variable and parameter choice across scientific practice. Rather than picking out a single best set of variables and parameters for causal modeling purposes (Woodward 2016) or ones that are of interest to one's audience (Potochnik 2017), I will argue that, across a wide range of scientific modeling contexts, scientists' choices of variables and parameters are highly constrained by the availability of the mathematical frameworks, modeling techniques, theories, measurements, data, and so forth with which to construct a scientific model. As a result, the variables or parameters that are favored for other reasons typically must be drawn from within the set of variables and parameters delimited by these modeling constraints. ...
... And later on, "This chapter aims to justify these variables/parameters as better able to figure in explanations, as better able to provide descriptions and understanding of certain behaviors" (ibid., 121; my emphasis). Similarly, Angela Potochnik (2017) has argued that the variables that ought to be included in a scientific model are those that contribute to the causal pattern of interest to one's audience because this will best promote understanding. Rather than tying variable choice to causal modeling contexts or appealing to metaphysical naturalness, this epistemic approach aims to justify scientists' variable choices by showing how they contribute to more general epistemic aims of scientific inquiry. ...
... For example, like Woodward, many philosophers have argued that the aim of explanation will be best achieved by models that include variables and parameters that capture the difference-making causes of the system (Strevens 2008). However, other philosophers have argued that the variables that ought to be included in causal explanations should be tailored to the intended audience rather than overly focused on difference-making causes (Potochnik 2017). Moreover, many other philosophers have argued that there are noncausal scientific explanations that would not be captured by these causal accounts and have offered varying accounts of how noncausal explanations work (Batterman and Rice 2014; Bokulich 2011; Rice 2021; Khalifa 2017). ...
Article
Full-text available
This article distinguishes causal modeling, metaphysical, epistemic, and modeling reasons for variable/parameter choice. I argue in favor of justifying variable/parameter choices by appealing to modeling reasons concerning the limitations of the available measurements, experimental data, modeling techniques, and modeling frameworks. I use this “tyranny of availability” to identify normative criteria for variable/parameter choice that apply across most scientific modeling contexts and investigate their metaphysical and epistemological implications.
... In addition, CLS/SCT contains descriptive explanations that idealize, e.g., hippocampal and neocortical processes, memories, and other components of the theory at, e.g., behavioral, circuit, and computational levels of abstraction, which are connected by mechanistic explanations for *how* memories are initially formed (Nadel et al., 2012) and consolidated (Klinzing et al., 2019) and normative explanations for *why* memories should be separated into complementary learning systems (Roxin & Fusi, 2013). This multi-level theory surrounds a "core" (Lakatos, 1970) idea: that the mammalian brain has two complementary learning systems. ... Theories are selective accounts that omit features of the phenomena they pertain to (abstraction, Jones, 2005) and contain deliberate falsehoods of the remaining features (idealization, Potochnik, 2017). As a result, different research communities can have qualitatively different theories that suit the particularities of their respective domains, even when comparing the theories of fields as similar as the cognitive sciences (Box 2). ...
... Fields have multiple theories that together aim to cover all problems in their problem space. This includes not only different phenomena (and the problems to which those phenomena are relevant) but also the proliferation of different theories about the same phenomenon (e.g., at different levels of abstraction, Love, 2012). This is because (1) the world is causally complex, and thus problem-solving requires selective attention to the more relevant aspects of a subset of phenomena (Khalifa, 2020; Potochnik, 2017), and (2) the needs and competencies of different agents are highly diverse, requiring different theories that might meet these needs. As a result, theories pay for their success on some problems with failure on other problems. ...
... As has been pointed out previously, science is a social process that progresses through societal mechanisms (Kuhn, 1962; Longino, 1990), and views that ignore this are neither an accurate portrayal of science nor helpful for scientific practice. Further, as limited beings in a causally complex world (Potochnik, 2017; Wimsatt, 2007), our heuristics for judging problem-solving tools are inevitably specific to the agents who wish to solve them (Bechtel & Richardson, 2010; Wimsatt, 2007). Even if they may seem to be a negative or irrational aspect of human nature, it is important that we acknowledge and weigh these agent-specific properties in our scientific judgments and decision-making, as it is a more effective and, indeed, rational way to do science than pretending they do not exist. ...
Article
Full-text available
The cognitive sciences are facing questions of how to select from competing theories or develop those that suit their current needs. However, traditional accounts of theoretical virtues have not yet proven informative to theory development in these fields. We advance a pragmatic account by which theoretical virtues are heuristics we use to estimate a theory’s contribution to a field’s body of knowledge and the degree to which it increases that knowledge’s ability to solve problems in the field’s domain or problem space. From this perspective, properties that are traditionally considered epistemic virtues, such as a theory’s fit to data or internal coherence, can be couched in terms of problem space coverage, and additional virtues come to light that reflect a theory’s alignment with problem-having agents and the context in a societally embedded scientific system. This approach helps us understand why the needs of different fields result in different kinds of theories and allows us to formulate the challenges facing cognitive science in terms that we hope will facilitate their resolution through further theoretical development.
... Others have argued that even if deidealizing models in this way were feasible, it might not be desirable. For example, both Michael Weisberg (2007) and Robert Batterman (2009) have argued that often we do not want to deidealize our models (see also Potochnik 2017). ...
... For example, having turned deidealization into a comparative concept only, proposals like Peruzzi and Cevolani's still seem to envision a uniform process of replacing idealized with less idealized assumptions. This is in line with many others who have discussed model deidealization primarily as a process of relaxing assumptions (e.g., Hindriks 2012; Mäki 2012), eliminating idealizations (e.g., Batterman 2010; McMullin 1985; Potochnik 2017), or adding back "details" (e.g., Batterman 2009; McMullin 1985; Weisberg 2007). If we look at concrete scientific practices, we might, however, find that processes of model deidealization are much less uniform than this would suggest. ...
Article
Full-text available
Knuuttila and Morgan (2019) challenge the widespread understanding that deidealization is no more than a simple process of relaxing assumptions to build increasingly more realistic models. They submit that, in practice, processes of model deidealization are diverse and complex and thus warrant more explicit scrutiny. Drawing on a case from economics, my analysis extends their proposal by showing how narratives, as additional representational forms, can assume a crucial role in deidealizing mathematical models. I thereby propose to consider that processes of model deidealization are not necessarily exhausted by processes in which one theoretical mathematical model is replaced with another one.
... A selection of qualities that good scientific theories or models display are summarised in Table 1. The qualities listed are summarised from a range of works in the philosophy of science (Haig, 2014; Keas, 2018; Kuhn, 1977; Lipton, 2017; Potochnik, 2016, 2017; Thagard, 2019). It is useful to consider that such qualities are often in tension with each other. ...
... Models in science are both representations of the world and tools for use by scientists (and in this case clinicians). As such, scientific models are subject not only to epistemological values (that is, how well they represent the topic of study) but also to pragmatic values (that is, what we have here labelled the quality of utility) (Potochnik, 2016, 2017). For a model of a clinical problem this means that, all else the same, structuring models in a way that is accessible to clinicians is a genuine theoretical virtue. ...
Article
Full-text available
Persisting post-concussion symptoms (PPCS) refers to a heterogeneous cluster of difficulties experienced by a significant proportion of individuals following mild traumatic brain injury (mTBI). Innovative developments by Kenzie et al. suggest that PPCS may be understood as a complex dynamical system, with persisting symptoms being maintained by interacting factors across the brain, experience, and social environment. This paper offers a conceptual and theoretical evaluation of Kenzie et al.'s model, based on a broad set of appraisal criteria drawn from the philosophy of science. Kenzie et al.'s model is found to have several strengths. Areas for improvement highlighted include recognising the role of bodily factors outside the brain, improving the specificity and perceived importance of psychological and contextual factors, and managing the complexity of the model. Four suggestions are then made for continued development of a complex systems approach to PPCS. These include drawing on an enactive understanding of human functioning, utilising the notion of a "scientific phenomenon" to improve specificity, making riskier psycho-social hypotheses, and developing component models targeted at clinical phenomena.
... Similar ideas can be found in the literature on scientific models. Potochnik (2017), for example, argues that in many cases the commonality sought between representations and what they represent can be understood in terms of functional similarities. The similarities included in the model depend on the functions of interest to the modelers, that is, on the purpose for which they want to use it. ...
... See also Doyle et al. (2019), De Regt (2015), Elgin (2004, 2007, 2008, 2017), and Potochnik (2017, 2020). The very short list of experiments in this field includes: Allahyari and Lavesson (2011), Freitas (2014), Fürnkranz et al. (2018), Lage et al. (2019), Kandul et al. (2023), Kliegr et al. (2018), Piltaver et al. (2016), and van der Waa (2021). ...
Article
Full-text available
In the natural and social sciences, it is common to use toy models—extremely simple and highly idealized representations—to understand complex phenomena. Some of the simple surrogate models used to understand opaque machine learning (ML) models, such as rule lists and sparse decision trees, bear some resemblance to scientific toy models. They allow non-experts to understand how an opaque ML model works globally via a much simpler model that highlights the most relevant features of the input space and their effect on the output. The obvious difference is that the common target of a toy and a full-scale model in the sciences is some phenomenon in the world, while the target of a surrogate model is another model. This essential difference makes toy surrogate models (TSMs) a new object of study for theories of understanding, one that is not easily accommodated under current analyses. This paper provides an account of what it means to understand an opaque ML model globally with the aid of such simple models.
... von Wright, 1971). Some philosophers argue that explanations aim to produce understanding in cognitive agents (Hills, 2015; Elgin, 2017; Potochnik, 2017). To understand something is to hold it as intelligible. ...
... may help them understand how it looks, but it is not an explanatory understanding. Explanatory understanding involves grasping regularities within a limited range of phenomena (Potochnik, 2017), allowing us to answer why-questions and formulate predictions. ...
Article
Full-text available
One of the ideas that characterises the enactive approach to cognition is that life and mind are deeply continuous, which means that both phenomena share the same basic set of organisational and phenomenological properties. The appeal to phenomenology to address life and basic cognition is controversial. It has been argued that, because of its reliance on phenomenological categories, enactivism may implicitly subscribe to a form of anthropomorphism incompatible with the modern scientific framework. These worries are a result of a lack of clarity concerning the role that phenomenology can play in relation to biology and our understanding of non-human organisms. In this paper, I examine whether phenomenology can be validly incorporated into the enactive conception of mind and life. I argue that enactivists must rely on phenomenology when addressing life and mind so that they can properly conceptualise minimal living systems as cognitive, as well as argue for an enactive conception of biology in line with their call for a non-objectivist science. To sustain these claims, I suggest that enactivism must be further phenomenologised by not only drawing from Hans Jonas’s phenomenology of the organism (as enactivists often do) but also from Edmund Husserl’s thoughts on the connection between transcendental phenomenology and biology. Additionally, phenomenology must be considered capable of providing explanatory accounts of phenomena.
... (LN ii) Idealized models of mechanisms that are cited in mechanistic explanations misrepresent those mechanisms. Potochnik (2017) highlights the contradiction between the beliefs that explanations must be true and that idealizations are untrue: ...
... This would be a straightforward solution since nothing is puzzling about "false" models providing false explanations. Even so, philosophers rarely follow this strategy explicitly, most likely because they commonly subscribe to the factivity of explanation. One notable exception is Potochnik (2017), who argues that "because idealizations are patently untrue" (93), model-based explanations cannot be true either (134). Because Potochnik accepts that models are "false" and that models can explain, she sacrifices the factivity of explanation. ...
Chapter
Full-text available
Among the many functions of models, explanation is central to the aims and functions of science. However, the discussions surrounding modeling and explanation in philosophy have remained largely separate from each other. This chapter seeks to bridge the gap by focusing on the puzzle of model-based explanation, asking how different philosophical accounts answer the following question: if idealizations and fictions introduce falsehoods into models, how can idealized and fictional models provide true explanations? The chapter provides a selective and critical overview of the available strategies for solving this puzzle, mainly focusing on idealized models and how they explain.
... One way of maintaining awareness of methodological contingencies which seems to be gaining traction within the philosophy of science is pluralism (Longino, 2020, 2021; Potochnik, 2017). If different perspectives necessarily come with just part of the full story, then it might seem that the best manner of approach is one which combines a multitude of differing perspectives. ...
Article
Full-text available
4E (embodied, enactive, embedded, and extended) approaches in cognitive science are unified in their rejection of Cartesianism. Anti-Cartesianism, understood as the rejection of a view of the relation between the mind and the world as mediated (e.g., by mental representations), is a watchword of the 4E family. This article shows how 4E approaches face hitherto underappreciated challenges in overcoming what has been termed the “representational pull” (Di Paolo et al., 2017) of the mediational, Cartesian standard in cognitive science. I argue that present proposals for reaching “escape velocity” from representational pull underestimate the force of mediationalism as part of the territory of cognitive science itself. I then offer a historical contextualization of representational pull in contemporary cognitive science as symptomatic of the latter’s methodological commitment to approaching questions about minds and cognition at the level of individual agents. Rather than resulting from a misapplication of cognitive science, the persistent pull of mediationalism is a function of a core commitment to methodological individualism which continues to define the enterprise of cognitive science. Finally, I will discuss potential options for how 4E theorists might seek to combat the continuing mediational pull at the core of cognitive science.
... When appropriately chosen or found, a representation generates understanding by resembling something that is already comprehended, such as a pictorial or simple mechanical model [39, 53]. Idealized models thus play a crucial role in science [43, 104, 107, 110-113]. They serve as interpretable representations resembling aspects of the real world, connecting to specific features of it [39, 114]. ...
Preprint
Full-text available
Machine learning is increasingly transforming various scientific fields, enabled by advancements in computational power and access to large data sets from experiments and simulations. As artificial intelligence (AI) continues to grow in capability, these algorithms will enable many scientific discoveries beyond human capabilities. Since the primary goal of science is to understand the world around us, fully leveraging machine learning in scientific discovery requires models that are interpretable -- allowing experts to comprehend the concepts underlying machine-learned predictions. Successful interpretations increase trust in black-box methods, help reduce errors, allow for the improvement of the underlying models, enhance human-AI collaboration, and ultimately enable fully automated scientific discoveries that remain understandable to human scientists. This review examines the role of interpretability in machine learning applied to physics. We categorize different aspects of interpretability, discuss machine learning models in terms of both interpretability and performance, and explore the philosophical implications of interpretability in scientific inquiry. Additionally, we highlight recent advances in interpretable machine learning across many subfields of physics. By bridging boundaries between disciplines -- each with its own unique insights and challenges -- we aim to establish interpretable machine learning as a core research focus in science.
... D. Mitchell 2003; Steel 2007; Lange 2008). They do so by generalisation: identifying patterns in how events, phenomena or causal factors repeat across space, time, or taxonomies (Potochnik 2017). Examples include statistical generalisation, generalising from samples to populations; typological generalisation, generalising from tokens to types; and extrapolation, generalising from one population or species to other populations or species. ...
Preprint
Full-text available
Behavioural ecologists have recently begun to study individuality, that is, individual differences and uniqueness in phenotypic traits and in ecological relations. However, individuality is an unusual object of research. Using an ethnographic case study of individuality research in behavioural ecology, we analyse concerns that behavioural ecologists express about their ability to study individuality. We argue that these concerns stem from two epistemic challenges: the variation-noise challenge and the generalisation challenge. First, individuality is difficult to distinguish from noise, as standard practices lump variation between individuals together with noise. Second, individuality is difficult to capture in generalisations, as they typically involve ignoring idiosyncratic factors. We examine how these challenges shape research practices in behavioural ecology, leading to epistemic strategies for studying individuality via alternative approaches to measurement, experimentation, and generalisation.
... The general idea, however, is that it is not quite enough to simply have a model or theory that accurately locates the phenomenon within the causal structure of the world; you also need to know, or understand, how to use the model to reliably predict and manipulate the phenomenon of interest. This requirement that scientific models not only track the causal structure of the world but also be usable by limited beings such as us explains why scientific explanations often involve constructs known to be idealizations and abstractions (Craver, 2019; Potochnik, 2017). ...
Article
Full-text available
Philosophers of mind and philosophers of science have markedly different views on the relationship between explanation and understanding. Reflecting on these differences highlights two ways in which explaining consciousness might be uniquely difficult. First, scientific theories may fail to provide a psychologically satisfying sense of understanding—consciousness might still seem mysterious even after we develop a scientific theory of it. Second, our limited epistemic access to consciousness may make it difficult to adjudicate between competing theories. Of course, both challenges may apply. While the first has received extensive philosophical attention, in this paper I aim to draw greater attention to the second. In consciousness science, the two standard methods for advancing understanding—theory testing and refining measurement procedures through epistemic iteration—face serious challenges.
... It should be noted, however, that the question whether understanding is or is not factive is a controversial one. Some authors say it is (Kvanvig, 2003); other authors say it isn't (Elgin, 2017; Potochnik, 2017; cf. Sullivan & Kareem 2019). ...
Article
Full-text available
During the last decade we have witnessed a stagnated debate on the epistemic nature of emotion, with two clear factions: those who defend that emotions are epistemically akin to perception and those who deny it. In this paper I propose a way out of that impasse. Based on Sosa’s distinction, I propose that there is both animal and reflective evaluative knowledge, in both of which emotions play a non-superfluous epistemic role. On the one hand, we can devise an externalist version of perceptualism immune to traditional objections. On the other hand, we not only can but should complement that externalist position with an account of reflective evaluative knowledge, used by non-perceptualists to attack traditional, internalist versions of perceptualism. Thus, perceptualism and non-perceptualism do not disagree with each other; they are just offering analyses of different epistemic achievements, which have different epistemic statuses and different epistemic requirements.
... Conversely, in the philosophy of science, explanatory pluralism acknowledges that the world is too complex to be fully described by a single comprehensive explanation (Kellert et al., 2006; Potochnik, 2017). Multiple explanations often coexist without conflict because they address different explanatory goals (e.g., explaining distinct behaviors) or employ different simplification strategies (e.g., differing levels of abstraction; Marr and Poggio, 1976). ...
Preprint
Full-text available
As AI systems are used in high-stakes applications, ensuring interpretability is crucial. Mechanistic Interpretability (MI) aims to reverse-engineer neural networks by extracting human-understandable algorithms to explain their behavior. This work examines a key question: for a given behavior, and under MI's criteria, does a unique explanation exist? Drawing on identifiability in statistics, where parameters are uniquely inferred under specific assumptions, we explore the identifiability of MI explanations. We identify two main MI strategies: (1) "where-then-what," which isolates a circuit replicating model behavior before interpreting it, and (2) "what-then-where," which starts with candidate algorithms and searches for neural activation subspaces implementing them, using causal alignment. We test both strategies on Boolean functions and small multi-layer perceptrons, fully enumerating candidate explanations. Our experiments reveal systematic non-identifiability: multiple circuits can replicate behavior, a circuit can have multiple interpretations, several algorithms can align with the network, and one algorithm can align with different subspaces. Is uniqueness necessary? A pragmatic approach may require only predictive and manipulability standards. If uniqueness is essential for understanding, stricter criteria may be needed. We also reference the inner interpretability framework, which validates explanations through multiple criteria. This work contributes to defining explanation standards in AI.
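The non-identifiability point in this abstract can be illustrated in miniature. The sketch below is a hypothetical toy example of our own, not code from the cited work: it builds two structurally different Boolean "circuits" with identical input-output behavior, so any interpretability criterion that scores candidate explanations only by behavioral replication cannot distinguish between them.

```python
# Toy illustration (hypothetical, not from the cited preprint) of
# non-identifiability: two structurally different Boolean circuits
# that replicate the same behavior, so behavior alone cannot pick
# out a unique mechanistic explanation.
from itertools import product

def circuit_a(a: bool, b: bool) -> bool:
    # Direct implementation: a AND b
    return a and b

def circuit_b(a: bool, b: bool) -> bool:
    # De Morgan equivalent: NOT (NOT a OR NOT b)
    return not ((not a) or (not b))

# Exhaustively compare the circuits on every input, mirroring the
# full enumeration of candidate explanations for a Boolean function.
assert all(
    circuit_a(a, b) == circuit_b(a, b)
    for a, b in product([False, True], repeat=2)
)
print("Distinct circuits, identical behavior: the explanation is not unique.")
```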
... For cognitive niche construction, it is the researcher's ability to solve problems that defines the cognitive niche, and for scientific niche construction niches are constructed to enable understanding of the world. Notably, the goal of truth does not appear in these theories; this aligns with a broader consensus in practice-based philosophy of science that scientists pursue a wide variety of goals but that accuracy does not typically serve as an end in itself (Longino, 2002; de Regt, 2017; Potochnik, 2017). ...
Article
Full-text available
Several philosophers of science have taken inspiration from biological research on niches to conceptualise scientific practice. We systematise and extend three niche-based theories of scientific practice: conceptual ecology, cognitive niche construction, and scientific niche construction. We argue that research niches are a promising conceptual tool for understanding complex and dynamic research environments, which helps to investigate relevant forms of agency and material and social interdependencies, while also highlighting their historical and dynamic nature. To illustrate this, we develop a six-point framework for conceptualising research niches. Within this framework, research niches incorporate multiple and heterogeneous material, social and conceptual factors (multi-dimensionality); research outputs arise, persist and differentiate through interactions between researchers and research niches (processes); researchers actively respond to and construct research niches (agency); research niches enable certain interactions and processes and not others (capability); and research niches are defined in relation to particular entities, such as individual researchers, disciplines, or concepts (relationality), and in relation to goals, such as understanding, solving problems, intervention, or the persistence of concepts or instruments (normativity).
... This allows a more extensive transfer of relations from the map domain to the representational plurality domain. For instance, just as different maps emphasize different aspects of a given environment with more or less fidelity (think of a subway map), scientific representations focus only on some objects, properties or interactions, often in an idealized manner (Potochnik, 2017). ...
Article
Full-text available
Representational pluralism is a perspective that acknowledges that it is normal and even desirable in some circumstances to hold incompatible representations in one’s mind regarding a natural phenomenon. This pluralist perspective has been defended in cognitive science, psychology, philosophy of science and science education, raising several original issues about cognition, learning and scientific practice. When discussing this subject, many pluralist authors use analogies. Generally speaking, analogies use the concepts of a base domain (and their relations to each other) to explain a target domain for which the required knowledge is absent, deficient or difficult to use. Accordingly, this paper is based on the premise that pluralist analogies are means used by authors to tackle issues that are both important and conceptually difficult. The paper posits that an analysis of pluralist analogies can, globally, act as a basis for identifying important issues associated with representational plurality, revealing which aspects of these issues are considered to be conceptually difficult, and characterizing the suggested ways to overcome those difficulties. A search within pluralist literature across the abovementioned disciplines yielded a corpus of 28 analogies. It is proposed that most of these analogies are used to address four basic issues in respect to plurality: psychological coexistence, cognitive value, selection processes and teaching. The paper discusses how the analogies are used to address each of these issues. It is hoped that identification of such a set of issues might facilitate research interactions between pluralist researchers, who are often from different disciplinary backgrounds and studying different aspects of representational plurality.
... Though cross-cultural developmental research has emphasized experimental and reductionist methods, along with standardized protocols, it has often failed to adopt robust causal frameworks. Most research questions in the behavioral sciences are causal in nature (Potochnik, 2017; Rohrer, 2024). There has been a recent increase in calls for more rigorous consideration of causal assumptions (the underlying assumptions when determining cause and effect relationships) and analyses (McElreath, 2022), including in cross-cultural research (Deffner et al., 2022), to improve study validity. ...
Article
Full-text available
The recent expansion of cross-cultural research in the social sciences has led to increased discourse on methodological issues involved when studying culturally diverse populations. However, discussions have largely overlooked the challenges of construct validity (ensuring instruments are measuring what they are intended to) in diverse cultural contexts, particularly in developmental research. We contend that cross-cultural developmental research poses distinct problems for ensuring high construct validity, owing to the nuances of working with children, and that the standard approach of transporting protocols designed and validated in one population to another risks low construct validity. Drawing upon our own and others’ work, we highlight several challenges to construct validity in the field of cross-cultural developmental research, including 1) lack of cultural and contextual knowledge, 2) dissociating developmental and cultural theory and methods, 3) lack of causal frameworks, 4) superficial and short-term partnerships and collaborations, and 5) culturally inappropriate tools and tests. We provide guidelines to address these challenges, including 1) using ethnographic and observational approaches, 2) developing evidence-based causal frameworks, 3) conducting community-engaged and collaborative research, and 4) culture-specific refinements and training. We discuss the need to balance methodological consistency with culture-specific refinements to improve construct validity in cross-cultural developmental research.
... For many purposes, however, models are deliberate oversimplifications (46), i.e., the modeler is aware that there is a resemblance gap between the target phenomenon and the model. When this gap is present (whether deliberately or not), the model ... (‡ A model can predict one or several of many different aspects of a target phenomenon, and so a potential data pattern can refer to many different types of outcomes.) ...
Article
Full-text available
The preference for simple explanations, known as the parsimony principle, has long guided the development of scientific theories, hypotheses, and models. Yet recent years have seen a number of successes in employing highly complex models for scientific inquiry (e.g., for 3D protein folding or climate forecasting). In this paper, we reexamine the parsimony principle in light of these scientific and technological advancements. We review recent developments, including the surprising benefits of modeling with more parameters than data, the increasing appreciation of the context-sensitivity of data and misspecification of scientific models, and the development of new modeling tools. By integrating these insights, we reassess the utility of parsimony as a proxy for desirable model traits, such as predictive accuracy, interpretability, effectiveness in guiding new research, and resource efficiency. We conclude that more complex models are sometimes essential for scientific progress, and discuss the ways in which parsimony and complexity can play complementary roles in scientific modeling practice.
... Scientists rely on idealizations and abstractions to get a handle on complex phenomena in many domains (Floridi 2008; Potochnik 2017). There are no frictionless planes or infinite populations, but physicists and geneticists freely make use of such assumptions to strip away irrelevant details and focus on mechanisms of interest. ...
Article
Full-text available
Several competing narratives drive the contemporary AI ethics discourse. At the two extremes are sociotechnical dogmatism, which holds that society is full of inefficiencies and imperfections that can only be solved by better technology; and sociotechnical skepticism, which highlights the unacceptable risks AI systems pose. While both narratives have their merits, they are ultimately reductive and limiting. As a constructive synthesis, we introduce and defend sociotechnical pragmatism—a narrative that emphasizes the central role of context and human agency in designing and evaluating emerging technologies. In doing so, we offer two novel contributions. First, we demonstrate how ethical and epistemological considerations are intertwined in the AI ethics discourse by tracing the dialectical interplay between dogmatic and skeptical narratives across disciplines. Second, we show through examples how sociotechnical pragmatism does more to promote fair and transparent AI than dogmatic or skeptical alternatives. By spelling out the assumptions that underpin sociotechnical pragmatism, we articulate a robust stance for policymakers and scholars who seek to enable societies to reap the benefits of AI while managing the associated risks through feasible, effective, and proportionate governance.
... When making use of a modelling framework like the Markov blanket formalism or the FEP framework, one must be aware of its limitations. By their very nature, models distort and intentionally misrepresent their target systems for the sake of simplicity (Toon 2012; Potochnik 2017). Some phenomena are infinitely complex, and having a completely detailed scientific description of them is impossible as well as undesirable: imagine describing to somebody in every single detail how to go from, say, Exeter to Bristol instead of simply showing them a map which, as a model, oversimplifies the territory it represents. ...
Chapter
Full-text available
This chapter explores the possibility of integrating the enactive and the Free Energy Principle’s (FEP) approaches to life and mind. Both frameworks have been linked to the life-mind continuity thesis, but recent debates challenge their potential integration. Critics argue that the enactive approach, rooted in autopoiesis theory, has an internalist view of life and a contentful view of cognition, making it challenging to account for adaptive behavior and minimal cognition. Similarly, some find the FEP’s stationary view of life biologically implausible. Here, I address recent challenges in integrating the FEP and enactivism, thereby focusing on the life-mind continuity thesis. I suggest that the FEP, without explicitly defining life and mind, can be used to model the autopoietic dynamics of organisms. Additionally, I argue that the enactive conception of cognition as sense-making overcomes issues associated with contentful views of cognition. Furthermore, I refute the misinterpretation of the FEP’s assertion of stationary organisms, allowing for the modeling of enactive adaptive behavior through free energy minimization. Ultimately, I offer a constructive and interactionist approach to life and mind, transcending internalist and externalist perspectives.
... Therefore, science aims not for absolute truth but for understanding and relies on simplifications. Different disciplines have their own practices, and the most basic consequence of scientific practice is the extensive use of idealizations or assumptions made without regard for whether they are true and often with full knowledge that they are false (e.g., Cartwright, 1983; Levins, 1966; Potochnik, 2017; Wimsatt, 1987; Winther, 2006). ...
Article
Full-text available
The scientific practice from 1993 to 2024 in the ongoing BALTEX/Baltic Earth program has applied a philosophical view of complex systems that promotes improved understanding through idealizations without organizing science hierarchically. Instead, the pluralistic scientific approach used by the BALTEX/Baltic Earth program has successfully generated a new scientific understanding of how to address climate and environmental changes in the region. Some of these major advances are as follows:
• The program has developed new communication skills by developing conceptual views into drawings with substantial information content at various spatial and temporal scales.
• The program has gained experience in increasing the number of data and data products and in realizing the need for well-documented, homogenized, and open datasets; it has also provided training in characterizing and detecting climate and environmental changes in the region.
• Indices and statistical models have played an important role in understanding complex dynamics; we have learned that they also need to take account of homogeneities and often have severe limitations.
• Several new maps of the region conveying geographic and human information have, in a convenient visual way, opened our eyes to the need for multi-disciplinary research.
• Intensive research on the atmosphere-ocean boundary layers has improved our understanding of these factors.
• New understanding has been achieved through establishing water, heat, nutrient, and carbon budgets.
• The program has generated improved understanding by developing mechanistic and system models of water, heat, nutrient, and carbon cycling.
• Maximum complexity models have been developed as computer capacity has grown, yielding important results when attributing the causes of climate change and creating scenarios of possible future developments.
• Experience with assessment has taught us about the strengths and weaknesses in evaluating science and scenarios. It has also enhanced our understanding of multidisciplinary research.
... The essential and ineliminable roles that idealizations, distortions, simplifications, and misdescriptions play in science have been well documented by philosophers and scientists alike for decades (43-56). Newtonian mechanics describes the world in ways we know are metaphysically incorrect (by assuming that space and time are distinct entities, and that the passage of time is not dependent on one's reference frame); however, it remains a valuable tool in scientific practice. No physicist claims that we ought to eliminate all use of Newtonian mechanics from scientific practice or discourse, otherwise physics could not explain most everyday systems that we encounter. ...
Chapter
Full-text available
Chris Letheby, Jaipreet Mattu, and Eric Hochstein try to put an end to the “mysticism wars,” by which they mean the battle between psychedelic researchers who hold that mystical concepts ought to be employed in attempts to describe and understand psychedelic experiences and those who do not hold this. Letheby, Mattu, and Hochstein side with the former and do so on the grounds that (as they put it), “there are no good reasons to abandon mystical concepts in psychedelic science, and plenty of good reasons to keep them.” At bottom, they contend that critiques of the pro-mystical-concepts view can be solved via clarifying concepts and recognizing basic (though significant) distinctions. And though they grant that mystical concepts may one day be superseded in the science of psychedelics, they maintain that this should not occur on the basis of a misplaced conceptual critique, as is (they maintain) the present-day critique of the pro-mystical-concepts view.
... This scale can be expressed in terms of the spatio-temporal extent, diversity, or complexity of the input to the cognitive process of interest, or of the stored knowledge that is used in this process, among others. Empirical studies aiming to capture such processes necessarily simplify, idealize, and abstract away from some of the breadth and complexity of the real-world phenomenon (see Potochnik 2020). That is, experiments usually take place on a small, and possibly also low-complexity, scale. ...
Article
Full-text available
Meta-theoretical perspectives on the research problems and activities of (cognitive) scientists often emphasize empirical problems and problem-solving as the main aspects that account for scientific progress. While certainly useful to shed light on issues of theory-observation relationships, these conceptual analyses typically begin when empirical problems are already there for researchers to solve. As a result, the role of theoretical problems and problem-finding remain comparatively obscure. How do the scientific problems of Cognitive Science arise, and what do they comprise, empirically and theoretically? Here, we attempt to understand the research activities that lead to adequate explanations through a broader conception of the problems researchers must attend to and how they come about. To this end, we bring theoretical problems and problem-finding out of obscurity to paint a more integrative picture of how these complement empirical problems and problem-solving to advance cognitive science.
... Taken together, these networks are suggestive of a body of literature that is growing increasingly disjoint. While autism may have long been multiple, as the previous section described, researchers have historically worked to produce a 'coordinate unity' (Potochnik, 2020) which could encompass multiple understandings of both the disorder and its causes. Indeed, most researchers would nominally agree that ASD includes cases that resemble both Categorical Alignments A and B as well as mixtures between them (e.g., Happé et al., 2006; Weiner et al., ...
Article
Full-text available
The opaque relationship between biology and behavior is an intractable problem for psychiatry, and it increasingly challenges longstanding diagnostic categorizations. While various big data sciences have been repeatedly deployed as potential solutions, they have so far complicated more than they have managed to disentangle. Attending to categorical misalignment, this article proposes one reason why this is the case: Datasets have to instantiate clinical categories in order to make biological sense of them, and they do so in different ways. Here, I use mixed methods to examine the role of the reuse of big data in recent genomic research on autism spectrum disorder (ASD). I show how divergent regimes of psychiatric categorization are innately encoded within commonly used datasets from MSSNG and 23andMe, contributing to a rippling disjuncture in the accounts of autism that this body of research has produced. Beyond the specific complications this dynamic introduces for the category of autism, this paper argues for the necessity of critical attention to the role of dataset reuse and recombination across human genomics and beyond.
... [34, 35]. Given our limitations and the complexity we have to deal with, this means that the explanations produced in any area of science are uncertain [95] (p. 23). ...
Article
Full-text available
For more than thirty years, 3D digital modelling has been used more and more widely as a research tool in various disciplinary fields. Despite this, the 3D models produced by different research, investigation, and speculation activities are still only used as a basis and as sources for the production of images and scientific contributions (papers in journals, contributions in conference proceedings, etc.) in dissemination and cultural activities, but without having yet assumed full autonomy as a ‘scientific fact’, as a product of research, or as a means of scientific debate and progress. This paper outlines the context in the field of architecture and archeology in which the use of 3D models has become increasingly widespread, reaching a level of full maturity, and how the field of hypothetical reconstruction can be characterized as an autonomous/scientific discipline through the definition and adoption of a scientific, transparent, verifiable, reusable, and refutable method. In this context, the definition of the 3D model as a product of scientific speculation and research is proposed.
Chapter
Of all the positions that have amassed intellectual traction within the philosophy of science in the previous decades, it is scientific pluralism that is most often pitted against Karl Popper’s image of the sciences. In fact, many card-carrying scientific pluralists explicitly distance their philosophies from Popper’s critical rationalism. Particularly, they consider pluralism about scientific theories or explanations to be in opposition to falsificationism. In this paper we shall critically re-examine this claim and the relation of Popper’s philosophy to millennial scientific pluralism. Following Kellert and colleagues, we distinguish two flavours of scientific pluralism: radical and modest. For both flavours, we examine congruities between pluralism and Popper’s work. While we acknowledge crucial differences, we argue that these tend to be overstated, and that there is much room for fruitful exchange. More specifically, we highlight that Popper endorses and embraces plurality with respect to many types of scientifically relevant entities including theories, models, and practices; and we qualify Popper’s purported fixation on truth as the only objective of science.
Chapter
This chapter briefly sketches the general concept of integrative promise, describing it as a pattern of invocation of explanatory virtues: in particular, integration, stringency, opportunism, and appeals to non-epistemic values. First, I defend the choice of the somewhat unusual term “explanatory virtue,” by comparison with appeals to other, more classic accounts of theoretical virtues, particularly those of Thomas Kuhn. I then situate integrative promise with respect to a number of other existing philosophical literatures which have treated similar subjects. Especially important is the concept of pursuit-worthiness, which has already been the topic of a significant amount of philosophical discussion. I also describe the relationship between integrative promise and the widely recognized theoretical virtues of fruitfulness and scope, and Mayo’s concept of severe testing.
Chapter
In defending integrative promise, I regularly appeal to two senses of integration: a horizontal sense, describing integration across the tree of life, and a vertical sense, describing integration across something like levels of organization. Despite the intuitive appeal of these notions, the idea of vertical integration in particular has been subject to much philosophical criticism. In this chapter, I detail that criticism, in the process building a modified definition of what I will call “inter-domain” integration that, I claim, can capture much of the intuition behind vertical integration without falling prey to the kinds of well-known metaphysical worries to which other forms of vertical integration are subject. Inter-domain integration, by focusing on the ways in which scientists foreground and background different kinds of explanatory resources, more naturally functions as an element of the pattern of explanatory virtues that is integrative promise.
Chapter
In addition to integration, integrative promise also invokes a number of other explanatory virtues. In particular, because integrative knowledge in biology is extremely difficult to come by, promise involves a trade-off between what I will call stringency and opportunism. Stringency refers to the idea that integrative explanations need to be put to severe tests, given the challenges that we face in attaining integration. But at the same time, opportunism describes the tendency of biologists to rely on explanations that use readily available materials, model systems, bodies of theory, and so forth, as a way to make integrative research pragmatically more feasible. In this chapter, I explore this trade-off by looking at the late works of Darwin, who spent much of his career attempting to offer evolutionary explanations with integrative promise using organisms like orchids or earthworms, and struggled mightily with the balance between stringency and opportunism.
Article
Full-text available
The topic of disagreement has captured a great deal of attention among epistemologists in recent years. In this paper, I want to raise the issue of disagreement for the epistemic aim of understanding. I will address three main issues. The first concerns the nature of understanding disagreement. What do disagreements in understanding amount to? What kind of disagreement is at play when two agents understand something differently, or have a different understanding of something? The second concerns the norms of rational epistemic behavior in dealing with understanding disagreements. How should an agent react in realizing that another agent understands things differently than she does? The third concerns the value of understanding disagreements. Are understanding disagreements valuable? What is there to gain from understanding disagreements, and what is there to learn from those who understand things differently than we do? My arguments lend support to three main theses. The first is that understanding disagreements are interestingly different from familiar doxastic disagreements. The second is that reasonable understanding disagreements are possible, and hence that we are often entitled to stand our ground in face of an understanding disagreement. The third is that understanding disagreements can have epistemic value, because they can lead to modal insight.
Article
Full-text available
Recently, a version of realism has been offered to address the simplification strategies used in computational neuroscience. According to this view, computational models provide us with knowledge about the brain, but they should not be taken literally in any sense, even rejecting the idea that the brain performs computations (computationalism). I acknowledge the need for considerations regarding simplification strategies in neuroscience and how they contribute to our interpretations of computational models; however, I argue that whether we should accept or reject computationalism about the brain is a separate issue that can be addressed independently by a philosophical theory of physical computation. This takes seriously the idea that the brain performs computations while also taking an analogical stance toward computational models in neuroscience. I call this version of realism “Analogical Computational Realism.” Analogical Computational Realism is a realist view in virtue of being committed to computationalism while taking certain computational models to pick out real patterns that provide a how-possibly explanation without also thinking that the model is literally implemented in the brain.
Article
Full-text available
For some post-structuralist complexity theorists, there are no epistemic meta-perspectives from where to judge between different epistemic perspectives towards complex systems. In this paper, I argue that these theorists face a dilemma because they argue against meta-perspectives from just such a meta-perspective. In fact, when we understand two or more different perspectives, we seem to unavoidably adopt a meta-perspective to analyse, compare, and judge between those perspectives. I further argue that meta-perspectives can be evaluated and judged from meta-meta-perspectives, and so on. This suggests an epistemic hierarchy. Perspectives, meta-perspectives, meta-meta-perspectives, etc. can be ranked according to the degree to which they confer understanding. I also explore what scope my thesis might have outside the philosophy of complexity by applying it to the sociology of science.
Article
The nature of explanation is an important area of inquiry in philosophy of science. Consensus has been that explanation in the cognitive and brain sciences is typically a special case of causal explanation, specifically, mechanistic explanation. But recently there has been increased attention to computational explanation in the brain sciences and to whether that can be understood as a variety of mechanistic explanation. After laying out the stakes for a proper understanding of scientific explanation, we consider the status of computational explanation in the brain sciences by comparing the mechanistic proposal to computational accounts advanced by Piccinini, Milkowski, Cao, Chirimuuta and Ross. We argue that many of these accounts of computational explanation in neuroscience can satisfy the same explanatory criteria as causal explanations, but not all. This has implications for interpretation of those computational explanations that satisfy different criteria.
Article
Full-text available
This paper asks how representational notions figure into cognitive science, especially neuroscience. Philosophers have a way of skipping over that question and going straight to another: what is neural representation? What is the property or relation that representational notions pick out? I argue that this is a mistake. Our ultimate questions, as philosophers of cognitive science, are about the function and epistemology of cognitive scientific explanations—in this case, explanations that use representational notions. To answer those questions we must understand what representational notions contribute to science: what they enable scientists to do or explain, and how. But I show that we can do this without raising traditional and vexing questions about the definition of neural representation, or the nature of a property or relation that notion picks out. Taking this approach, I defend a realist account of representational explanation that underwrites important connections between philosophy and neuroscience.
Article
The nineteenth-century distinction between the nomothetic and the idiographic approach to scientific inquiry can provide valuable insight into the epistemic challenges faced in contemporary earth modelling. However, as it stands, the nomothetic-idiographic dichotomy does not fully encompass the range of modelling commitments and trade-offs that geoscientists need to navigate in their practice. Adopting a historical epistemology perspective, I propose to further spell out this dichotomy as a set of modelling decisions concerning historicity, model complexity, scale, and closure. Then, I suggest that, to address the challenges posed by these decisions, a pluralist stance towards the cognitive aims of earth modelling should be endorsed, especially beyond predictive aims.
Article
Scientists, either working alone or in groups, require rich cognitive, social, cultural, and material environments to accomplish their epistemic aims. There is research in the cognitive sciences that examines intelligent behavior as a function of the environment (“environmental perspectives”), which can be used to examine how scientists integrate “cognitive-cultural” resources as they create environments for problem-solving. In this paper, I advance the position that an expanded framework of distributed cognition can provide conceptual, analytical, and methodological tools to investigate how scientists enhance natural cognitive capacities by creating specific kinds of environments to address their epistemic goals. In a case study of a pioneering neuroengineering lab seeking to understand learning in living networks of neurons, I examine how the researchers integrated conceptual, methodological, and material resources from engineering, neuroscience, and computational science to create different kinds of distributed problem-solving environments that enhanced their natural cognitive capacities, for instance, for reasoning, visualization, abstraction, imagination, and memory, to attain their epistemic aims.
Article
The paper examines how the problem of the systematic overdetermination of mental causation might be overcome within the ordinal naturalism of J. Buchler. We argue that in ordinal naturalism, conscious behavioral acts have integrity and specificity: they are associated with other orders (physiological, psychological, social) without being reduced to them, which secures the complexity of mental causation, i.e., the possession of both mental and physical traits by both cause-events and effect-events. This allows us to reformulate causal statements so as to avoid overdetermination. Mental causation is interpreted as an irreducibly natural complex. The order of physical events excludes mental traits as irrelevant. In the order of events of conscious behavior, physical and mental complexes coalesce, forming a new integral complex. Highlighting the mental aspect of causation is therefore a description of traits of both the cause-event and the effect-event that belong to the same order of conscious behavior. The identification of individual traits may have the syntactic character of an analysis of causal statements, but ontologically both types of causality are real relations among natural complexes of different orders.
Article
Full-text available
Bas van Fraassen has argued that explanatory reasoning does not provide confirmation for explanatory hypotheses because explanatory reasoning increases information and increasing information does not provide confirmation. We compare this argument with a skeptical argument that one should never add any beliefs because adding beliefs increases information and increasing information does not provide confirmation. We discuss the similarities between these two arguments and identify several problems with van Fraassen’s argument.
Chapter
This volume is a comprehensive reference for conducting political analyses of emerging welfare systems in the Global South. These countries have adopted a development-oriented approach, one distinct from the social policy trajectory observed in industrialized capitalist states. However, the pervasive influence of globalization since the 1990s has significantly reshaped policy priorities in these regions. Notably, the political discourse surrounding social policy concepts developed in Northern capitalist states has gained prominence. Irrespective of the geographical focus of the chapters, the volume delves into fundamental social policy concepts and debates, including the ongoing discourse between ‘universalism’ and ‘selectivity’, the challenges posed by the welfare residuum, the intricate role of institutional norms and apparatuses in achieving justice or engendering feelings of shame among social assistance recipients, and the examination of ‘absolute’ and ‘relative’ poverty. Additionally, the volume investigates the pendulum shift within social welfare policies, the complex politics surrounding the portrayal of welfare recipients, and the newly established link between poverty and shame. Comprising 12 chapters, the volume employs a case study–based approach to test the applicability and universality of social policy theories and concepts. The central focus lies in assessing the adaptability of concepts and theories developed in the Global North to comprehend the intricacies of welfare politics in the Global South. These case studies contribute to theoretical generalizations that can explain universal principles relevant to both the Global South and North.
Article
Full-text available
Dynamic semantics violates numerous classical laws, including Non-Contradiction. Proponents of dynamic semantics have offered no explanation for this behavior, and some critics consider this result to be strong evidence against the tenability of the dynamic program. I defend and explain failures of Non-Contradiction by comparing dynamic semantics and classical, truth conditional semantics in terms of their idealizing assumptions. I demonstrate that dynamic semantics rejects context fixity, an idealizing assumption that truth-conditional semantics typically adopts. I then argue that any semantics which rejects context fixity should, by the classical semanticist’s own lights, violate Non-Contradiction under certain circumstances. I then demonstrate that dynamic semantics violates Non-Contradiction in all and only those circumstances. I subsequently appeal to this insight to vindicate some of dynamic semantics’ more controversial predictions. I close by suggesting that discussion of idealizing assumptions, common in the sciences, is similarly crucial to fruitful discussion in natural language semantics.