Conference Paper

On politics and social science – the subject-object problem in social science and Foucault’s engaged epistemology

Abstract

This presentation addresses the epistemological problem of the relationship between the subject and the object of knowledge in social science. We aim to show how Michel Foucault's epistemological framework can be used to tackle this problem. Our point is that Foucault's methodology resolves the problem by treating the research process as a practice of intervention into its object: the intervention takes the form of a critique of the researched object that exposes its contingent and arbitrary nature.