Article
To read the full-text of this research, you can request a copy directly from the authors.

Abstract

The free energy principle, an influential framework in computational neuroscience and theoretical neurobiology, starts from the assumption that living systems ensure adaptive exchanges with their environment by minimizing the objective function of variational free energy. Following this premise, it claims to deliver a promising integration of the life sciences. In recent work, Markov Blankets, one of the central constructs of the free energy principle, have been applied to resolve debates central to philosophy (such as demarcating the boundaries of the mind). The aim of this paper is twofold. First, we trace the development of Markov blankets starting from their standard application in Bayesian networks, via variational inference, to their use in the literature on active inference. We then identify a persistent confusion in the literature between the formal use of Markov blankets as an epistemic tool for Bayesian inference, and their novel metaphysical use in the free energy framework to demarcate the physical boundary between an agent and its environment. Consequently, we propose to distinguish between ‘Pearl blankets’ to refer to the original epistemic use of Markov blankets and ‘Friston blankets’ to refer to the new metaphysical construct. Second, we use this distinction to critically assess claims resting on the application of Markov blankets to philosophical problems. We suggest that this literature would do well in differentiating between two different research programs: ‘inference with a model’ and ‘inference within a model’. Only the latter is capable of doing metaphysical work with Markov blankets, but requires additional philosophical premises and cannot be justified by an appeal to the success of the mathematical framework alone.
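The variational free energy invoked in the abstract has a standard form, worth recalling here (standard notation, not specific to this paper: q is the approximate posterior over hidden states z, p the generative model over observations x and hidden states z):

```latex
F[q] \;=\; \mathbb{E}_{q(z)}\!\left[\ln q(z) - \ln p(x, z)\right]
\;=\; \underbrace{D_{\mathrm{KL}}\!\left[q(z)\,\middle\|\,p(z \mid x)\right]}_{\ge 0} \;-\; \ln p(x).
```

Because the KL term is non-negative, F upper-bounds the surprisal -ln p(x); minimizing F therefore both improves the approximate posterior and implicitly scores how well the model accounts for the observations.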


... We see no straightforward relation between operational closure (an organizational statement about how a network of processes actively produces and distinguishes itself) and Markov blankets (a statement about statistical conditional independence between sets of variables). Very few commentators seem to have remarked on this discrepancy (e.g., Bruineberg & Hesp, 2018, p. 38; see also Bruineberg et al., 2021). As we have said, an operationally closed system is open to all kinds of interactions and exchanges with the environment, as long as its organization is not destroyed. ...
... Hence there is nothing in the Markov Blanket that necessarily links it to processes of organismic constitution. This is a discrepancy also noted by Raja et al. (2021), who argue that the choice of where a Markov blanket should be is rather ad hoc and follows the convenience of each case (see also Bruineberg et al., 2021). This is not a problem in itself; it might even be an advantage in some cases. ...
... For recent critical discussions about Markov blankets, see Bruineberg et al. (2021) and Raja et al. (2021). Bruineberg and colleagues argue that a simple statistical idea applied in Bayesian networks and describing how variables may be shielded from variations in other variables has been made to do some heavy conceptual lifting in the FEP framework, being used, as we discuss in this paper, to play the role of boundaries and sensorimotor interfaces. ...
Article
Full-text available
Several authors have made claims about the compatibility between the Free Energy Principle (FEP) and theories of autopoiesis and enaction. Many see these theories as natural partners or as making similar statements about the nature of biological and cognitive systems. We critically examine these claims and identify a series of misreadings and misinterpretations of key enactive concepts. In particular, we notice a tendency to disregard the operational definition of autopoiesis and the distinction between a system’s structure and its organization. Other misreadings concern the conflation of processes of self-distinction in operationally closed systems and Markov blankets. Deeper theoretical tensions underlie some of these misinterpretations. FEP assumes systems that reach a non-equilibrium steady state and are enveloped by a Markov blanket. We argue that these assumptions contradict the historicity of sense-making that is explicit in the enactive approach. Enactive concepts such as adaptivity and agency are defined in terms of the modulation of parameters and constraints of the agent-environment coupling, which entail the possibility of changes in variable and parameter sets, constraints, and in the dynamical laws affecting the system. This allows enaction to address the path-dependent diversity of human bodies and minds. We argue that these ideas are incompatible with the time invariance of non-equilibrium steady states assumed by the FEP. In addition, the enactive perspective foregrounds the enabling and constitutive roles played by the world in sense-making, agency, and development. We argue that this view of transactional and constitutive relations between organisms and environments is a challenge to the FEP.
Once we move beyond superficial similarities, identify misreadings, and examine the theoretical commitments of the two approaches, we reach the conclusion that far from being easily integrated, the FEP, as it stands formulated today, is in tension with the theories of autopoiesis and enaction.
... To claim that it does is to commit the literalist fallacy. Bruineberg et al. (2021) focus on the ontological status of the Markov blanket formalism in the FEP. They argue that much of the literature on the FEP implies that organisms literally instantiate the mathematical structure of Markov blankets. They argue that such a use of the Markov blanket formalism conflates a model with its target system. ...
... We now consider work by Bruineberg et al. (2021) on the Markov blanket formalism underwriting the FEP. They argue that the use of the Markov blanket formalism in the literature on the FEP often takes the formalism (the map) to literally be a property of the territory. ...
... According to Beal (2003), the "Markov blanket for the node (or set of nodes) A is defined as the smallest set of nodes C, such that A is conditionally independent of all other variables not in C, given C." (2003, p. 18) The key point here is that once a Markov blanket has been identified for any given node, e.g., A, this captures all the relevant information needed to infer the state of A. Markov blankets can be used in order to model (in)dependencies between different variables, which allows for an approach to probabilistic reasoning under uncertain circumstances. Bruineberg et al. (2021) ...
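Beal's definition quoted above has a simple graph-theoretic reading: in a directed acyclic Bayesian network, the Markov blanket of a node is its parents, its children, and its children's other parents (co-parents). A minimal sketch of our own (node names and network are illustrative, not from any of the papers discussed):

```python
# Minimal sketch: the Markov blanket of a node in a directed acyclic
# Bayesian network is its parents, its children, and its children's
# other parents (co-parents).

def markov_blanket(node, parents):
    """parents: dict mapping each node to the set of its parent nodes."""
    children = {n for n, ps in parents.items() if node in ps}
    co_parents = set().union(*(parents[c] for c in children)) if children else set()
    return (parents[node] | children | co_parents) - {node}

# Toy network: Rain -> WetGrass <- Sprinkler, Rain -> Traffic
parents = {
    "Rain": set(),
    "Sprinkler": set(),
    "WetGrass": {"Rain", "Sprinkler"},
    "Traffic": {"Rain"},
}

# Conditioning on Rain's blanket screens Rain off from everything else.
print(sorted(markov_blanket("Rain", parents)))
# ['Sprinkler', 'Traffic', 'WetGrass']
```

Note that "Sprinkler" enters Rain's blanket only as a co-parent of "WetGrass": this is the epistemic, inference-supporting sense of a blanket ("Pearl blanket") that the paper distinguishes from its metaphysical use.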
Article
Full-text available
Disagreement about how best to think of the relation between theories and the realities they represent has a longstanding and venerable history. We take up this debate in relation to the free energy principle (FEP) - a contemporary framework in computational neuroscience, theoretical biology and the philosophy of cognitive science. The FEP is very ambitious, extending from the brain sciences to the biology of self-organisation. In this context, some find apparent discrepancies between the map (the FEP) and the territory (target systems) a compelling reason to defend instrumentalism about the FEP. We take this to be misguided. We identify an important fallacy made by those defending instrumentalism about the FEP. We call it the literalist fallacy: this is the fallacy of inferring the truth of instrumentalism based on the claim that the properties of FEP models do not literally map onto real-world, target systems. We conclude that scientific realism about the FEP is a live and tenable option.
... However, most of the accounts discussed here do not make any reference to this concept; hence, I have decided to omit it to keep the discussion simple. It is briefly introduced in the section discussing MM below, and an interested reader is directed to papers that focus on developing this concept, e.g. Parr et al. (2020), and to the critical discussion in Bruineberg et al. (2020). ...
... First, it is necessary for active inference to reduce the space of possibilities and then for attention to actually probe the hypotheses. This ties the account of "fame in the brain" (Dołęga and Dewhurst 2020) back to the AST, providing a mature deflationary theory of consciousness that treats the illusion of phenomenality seriously. ...
... According to the FEP, this division appears when we consider any system as delineated by a Markov blanket, i.e. the set of states or variables that render the internal states of the system conditionally independent from anything else. While Friston argues [e.g. in his monograph (Friston 2019); but see Bruineberg et al. (2020) for critical analysis of this proposal] that the existence of the Markov blanket is necessary for the existence of every "thing" as far as it can be distinguished from everything else, a key insight here comes from the fact that the delineated system can be described in terms of gradient flow on self-information or surprisal (roughly speaking, this is possible as it is the steady-state solution to the Fokker–Planck equation, a standard description of the time evolution of dynamical systems). Together with the very existence of the Markov blanket, this provides the modeler with the possibility of describing the internal states as modeling external states, precisely in the manner described in the introduction to the FEP above. ...
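As a rough guide to the gradient-flow claim in this excerpt (the notation follows the general FEP literature, e.g. Friston 2019, and may differ in detail from the paper under discussion): at a non-equilibrium steady state with density p(x), the expected flow is typically written

```latex
f(x) \;=\; (Q - \Gamma)\,\nabla \mathfrak{I}(x), \qquad \mathfrak{I}(x) = -\ln p(x),
```

where Γ is the diffusion tensor and Q an antisymmetric, solenoidal matrix. The dissipative part -Γ∇𝕴 descends the surprisal 𝕴 while the solenoidal part Q∇𝕴 circulates on its level sets, leaving p(x) invariant; this is the sense in which the steady-state solution of the Fokker–Planck equation licenses the "gradient flow on surprisal" reading.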
Article
Full-text available
The goal of the paper is to review existing work on consciousness within the frameworks of Predictive Processing, Active Inference, and the Free Energy Principle. The emphasis is put on the role played by the precision and complexity of the internal generative model. In the light of those proposals, these two properties appear to be the minimal necessary components for the emergence of conscious experience: a Minimal Unifying Model of consciousness.
... Our interest lies with the possible implications of the FEP for physicalism. We suggest that, regardless of whether the FEP is an instance of targetless or general modeling, its characterization of the guiding assumptions and behavioral implications of the FEP approach. This debate has been part of a broader discussion of whether the FEP is compatible with scientific realism or requires a commitment to instrumentalism (Andrews, 2021; Bruineberg et al., 2021; Kirchhoff, Kiverstein, & Robertson, 2021). ...
... More is needed to address the question of whether the sort of explanations offered by Bayesian neurophysiology extend to, and encompass, all relevant mental phenomena. In addition, questions have been raised about the ontological implications of Markov blankets (Bruineberg et al. 2021). Finally, the abstract mathematical descriptions at the heart of the FEP, including the central notion of a Markov blanket, may be compatible with mutually exclusive accounts of the ontology of consciousness (Beni, 2021). ...
Article
Full-text available
The Free Energy Principle (FEP) states that all biological self-organizing systems must minimize variational free energy. The acceptance of this principle has given rise to a popular and far-reaching theoretical and empirical approach to the study of the brain and living organisms. Despite the popularity of the FEP approach, little discussion has ensued about its ontological status and implications. By understanding physicalism as an interdisciplinary research program that aims to offer compositional explanations of mental phenomena, this paper articulates what it would mean for the FEP approach to be part of research program physicalism and to corroborate a physicalist outlook. In doing so, this paper contributes both to philosophical discussions regarding the FEP approach and to the literature on physicalism. It does the former by explicating the metaphysical standing of the FEP approach. It does the latter by showing how cutting-edge research in the empirical sciences of the mind can inform our attitudes regarding physicalism.
... Interestingly, this can happen for different partitions of a system and at different scales at the same time. While this can lead to a theory where everything might be a system without any further constraints (Bruineberg et al., 2021), unlike other approaches, such as standard IIT implementations (Oizumi et al., 2014), the FEP embodies crucial aspects of "being a system" (or agent) at multiple scales: the possibility that societies, oneself and the cells within one's body can all be seen as agents at the same time. ...
... Other worries, however, still exist for the FEP and active inference, especially about their precise claims. While initially proposed as a theory of brains (Friston, 2010), it has since been used to describe the origins of all known life (Friston, 2013, but see …), and more recently, it has further been introduced as a theory of systems (Friston, 2019, but see Bruineberg et al., 2021). Baltieri et al. (2020) unpack some of these ideas, showing how the process theory, active inference, can be used to model a number of different systems (cognitive or not, living or not), without necessarily having to invoke the more general principle, the FEP. ...
Preprint
Full-text available
Artificial life is a research field studying what processes and properties define life, based on a multidisciplinary approach spanning the physical, natural and computational sciences. Artificial life aims to foster a comprehensive study of life beyond "life as we know it" and towards "life as it could be", with theoretical, synthetic and empirical models of the fundamental properties of living systems. While still a relatively young field, artificial life has flourished as an environment for researchers with different backgrounds, welcoming ideas and contributions from a wide range of subjects. Hybrid Life is an attempt to bring attention to some of the most recent developments within the artificial life community, rooted in more traditional artificial life studies but looking at new challenges emerging from interactions with other fields. In particular, Hybrid Life focuses on three complementary themes: 1) theories of systems and agents, 2) hybrid augmentation, with augmented architectures combining living and artificial systems, and 3) hybrid interactions among artificial and biological systems. After discussing some of the major sources of inspiration for these themes, we will focus on an overview of the works that appeared in Hybrid Life special sessions, hosted by the annual Artificial Life Conference between 2018 and 2022.
... In this paper, we begin the development of a bona fide meta-theory, and philosophy for the usage, of the FEP. A good starting point for this project would be the so-called map-territory fallacy, which some have alleged is committed by the architects of the FEP [9,10]. Indeed, some scholars have argued that FEP-theoretic modelling conflates the metaphorical 'map' (i.e., the scientific model that scientists use to make sense of some target phenomenon) and 'territory' (i.e., the actual natural system that is being modelled). ...
... We propose these few points as the foundations of a philosophy of Bayesian mechanics, which has yet to be constructed. In summary, we have argued that the map-territory fallacy, as it has been leveraged against the FEP (e.g., by [9,10]) simply does not apply to FEP-theoretic modelling: it constitutes a fallacy, which we have called the map-territory fallacy fallacy. Mathematically, there is no ambiguity or conflation of map and territory in FEP-theoretic modelling. ...
Preprint
Full-text available
This paper presents a meta-theory of the usage of the free energy principle (FEP) and examines its scope in the modelling of physical systems. We consider the so-called 'map-territory fallacy' and the fallacious reification of model properties. By showing that the FEP is a consistent, physics-inspired theory of inferences of inferences, we disprove the assertion that the map-territory fallacy contradicts the principled usage of the FEP. As such, we argue that deploying the map-territory fallacy to criticise the use of the FEP and Bayesian mechanics itself constitutes a fallacy: what we call the map-territory fallacy fallacy. In so doing, we emphasise a few key points: the uniqueness of the FEP as a model of particles or agents that model their environments; the restoration of convention to the FEP via its relation to the principle of constrained maximum entropy; the 'Jaynes optimality' of the FEP under this relation; and finally, the way that this meta-theoretical approach to the FEP clarifies its utility and scope as a formal modelling tool. Taken together, these features make the FEP, uniquely, the ideal model of generic systems in statistical physics.
... Roughly, in Pearl's sense the 'Markov blanket' of a focal node is the set of nodes that provide total information about the focal node. However, Markov blankets have taken on a special usage within active inference (Bruineberg et al. 2021). In the sense required here, a Markov blanket can be understood as the set of nodes that 'screen off' the agent from nodes considered external to it. ...
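The "screening off" mentioned in this excerpt can be made concrete numerically. A sketch of our own (the chain structure and coefficients are arbitrary choices for illustration): in a zero-mean Gaussian chain external → blanket → internal, a zero entry in the precision (inverse covariance) matrix between two variables means they are conditionally independent given the rest.

```python
import numpy as np

# Illustrative Gaussian chain: the blanket mediates all dependence
# between external and internal. Coefficients 0.8 and 0.5 are arbitrary.
rng = np.random.default_rng(0)
n = 200_000
external = rng.normal(size=n)
blanket = 0.8 * external + rng.normal(size=n)   # blanket 'sees' external
internal = 0.5 * blanket + rng.normal(size=n)   # internal 'sees' only blanket

cov = np.cov(np.stack([external, blanket, internal]))
precision = np.linalg.inv(cov)

# Marginally, external and internal are clearly correlated ...
print(abs(np.corrcoef(external, internal)[0, 1]) > 0.25)   # True
# ... but the external-internal precision entry is ~0: they are
# conditionally independent given the blanket ("screened off").
print(abs(precision[0, 2]) < 0.05)                         # True
```

This is the purely statistical, epistemic notion; nothing in the computation says anything about physical boundaries, which is precisely the gap the Pearl blanket / Friston blanket distinction is meant to mark.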
... Similarly, it is difficult to evaluate the corresponding general claim because there is not enough understanding of the mathematical theorem and how it maps onto real systems. Recently, however, Beni (2021) and Bruineberg et al. (2021) have critiqued the framework on grounds of its applicability to real systems. We are starting to see critical analysis of active inference from outside the tradition. ...
Article
Full-text available
Over the last fifteen years, an ambitious explanatory framework has been proposed to unify explanations across biology and cognitive science. Active inference, whose most famous tenet is the free energy principle, has inspired excitement and confusion in equal measure. Here, we lay the ground for proper critical analysis of active inference, in three ways. First, we give simplified versions of its core mathematical models. Second, we outline the historical development of active inference and its relationship to other theoretical approaches. Third, we describe three different kinds of claim—labelled mathematical, empirical and general—routinely made by proponents of the framework, and suggest dialectical links between them. Overall, we aim to increase philosophical understanding of active inference so that it may be more readily evaluated. This paper is the Introduction to the Topical Collection “The Free Energy Principle: From Biology to Cognition”.
... The web extends the spider's sensory observations by enabling it to infer external states. Random variables are represented as nodes in a graph, where the shaded nodes represent variables that are observed (e.g., nodes {2, 3, 4, 6, 7} in Fig. 2) and empty ones represent those that are hidden (e.g., node {1} in Fig. 2). The (probabilistic) relationships between such random variables are expressed using edges connecting the nodes (Bruineberg et al., 2020). This interpretation depends upon an interpretation of internal states as encoding Bayesian beliefs about external states (Friston, 2013, p. 4). ...
... While more can be said about Markov blankets, this overview suffices to convey the important points for present purposes. For a more systematic overview of Markov blankets, see Andrews (2020), Bruineberg et al. (2020), or Menary and Gillet (2020). ...
Article
Full-text available
There is a longstanding debate between those who think that cognition extends into the external environment (extended cognition) and those who think it is located squarely within the individual (internalism). Recently, a new actor has emerged on the scene, one that looks to play kingmaker. Predictive processing (PP) says that the mind/brain is fundamentally engaged in a process of minimising the difference between what is predicted about the world and how the world actually is, what is known as 'prediction error minimisation' (PEM). The goal of this paper is to articulate a novel approach to extended cognition using the resources of PP. After outlining two recent proposals from Constant et al. (2020) and Kirchhoff and Kiverstein (2019), I argue that the case for extended cognition can be further developed by interpreting certain elements of the PP story (namely, PEM) as a "mark of the cognitive". The suggestion is that when construed at an 'algorithmic level' PEM offers a direct route to thinking about extended systems as genuine cognitive systems. En route to articulating the proposal, I lay out the core argument, defend the proposal's novelty, and point to several of the advantages of the formulation. Finally, I conclude by taking up two challenges raised by Hohwy (2016, 2018) about the prospects of using PEM to argue for extended cognition.
... The concept of Markov blankets has been used to illustrate the critical precondition for any adaptive system to have some separation and autonomy from the environment [32]. However, Bruineberg and colleagues (2022) have recently pointed out some inaccuracies in the literature regarding the use of the term [35]. The authors propose a distinction between 'Pearl Blankets', the original epistemic use of Markov blankets as a tool for Bayesian inference, and 'Friston Blankets', the metaphysical construct in the FEP framework that demarcates the physical boundary between an agent and its environment [35]. Although this is an important debate in the literature, it is beyond the purpose of this paper, and we will therefore maintain the use of Markov blankets to illustrate our arguments. ...
Article
Full-text available
Osteopaths commonly face complexity and clinical uncertainty in their daily professional practice as primary contact practitioners. In order to effectively deal with complex clinical presentations, osteopaths need to possess well-developed clinical reasoning to understand the individual patient's lived experience of pain and other symptoms and how their problem impacts their personhood and ability to engage with their world. We have recently proposed (En)active inference as an integrative framework for osteopathic care. The enactivist and active inference frameworks underpin our integrative hypothesis. Here, we present a clinically based interpretation of our integrative hypothesis by considering the ecological niche in which osteopathic care occurs. Active inference enables patients and practitioners to disambiguate each other's mental states. The patients' mental states are unobservable and must be inferred based on perceptual cues such as posture, body language, gaze direction and response to touch and hands-on care. A robust therapeutic alliance centred on cooperative communication and shared narratives and the appropriate and effective use of touch and hands-on care enable patients to contextualize their lived experiences. Touch and hands-on care enhance the therapeutic alliance, mental state alignment, and biobehavioural synchrony between patient and practitioner. Therefore, the osteopath-patient dyad provides mental state alignment and opportunities for ecological niche construction. Arguably, this can produce therapeutic experiences which reduce the prominence given to high-level prediction errors, and consequently, the top-down attentional focus on bottom-up sensory prediction errors, thus minimizing free energy. This commentary paper primarily aims to enable osteopaths to critically consider the value of this proposed framework in appreciating the complexities of delivering person-centred care.
... The term was initially introduced in the context of Bayesian networks or graphs [31], and it is also known as the general Markov condition [32]. In the FEP, Markov blankets are used for identifying a set of variables that separate the internal and external states of a system (see [33] for a detailed study on the specific use of the concept of Markov blanket in the FEP). Here, we note that Markov blankets can be easily identified in models defined by directed acyclic Bayesian networks (Fig. 2.A). ...
... We discover that, in the class of linear systems explored, the answer to this question is that the statistical structure required by the FEP only arises in a very narrow class of systems, requiring stringent conditions such as fully symmetric agent-environment interactions that we cannot, in general, expect from living systems [35,37–39]. The generality of the FEP has been questioned in the past due to conceptual issues [51,33] or the existence of counterexamples challenging the idea that perception-action interfaces, Markov blankets and solenoidal decoupling follow from each other [34]. However, to our knowledge, our study is the first that shows that the assumptions of the FEP do not hold for a vast class of systems, namely, linear, weakly coupled systems, except for the limited case of fully symmetric agent-environment interaction. ...
Article
Full-text available
The free energy principle (FEP) states that any dynamical system can be interpreted as performing Bayesian inference upon its surrounding environment. Although, in theory, the FEP applies to a wide variety of systems, there has been almost no direct exploration or demonstration of the principle in concrete systems. In this work, we examine in depth the assumptions required to derive the FEP in the simplest possible set of systems – weakly-coupled non-equilibrium linear stochastic systems. Specifically, we explore (i) how general the requirements imposed on the statistical structure of a system are and (ii) how informative the FEP is about the behaviour of such systems. We discover that two requirements of the FEP – the Markov blanket condition (i.e. a statistical boundary precluding direct coupling between internal and external states) and stringent restrictions on its solenoidal flows (i.e. tendencies driving a system out of equilibrium) – are only valid for a very narrow space of parameters. Suitable systems require an absence of asymmetries in perception-action loops that are highly unusual for living systems interacting with an environment, which are the kind of systems the FEP explicitly sets out to model. More importantly, we observe that a mathematically central step in the argument, connecting the behaviour of a system to variational inference, relies on an implicit equivalence between the dynamics of the average states of a system with the average of the dynamics of those states. This equivalence does not hold in general even for linear systems, since it requires an effective decoupling from the system's history of interactions. These observations are critical for evaluating the generality and applicability of the FEP and indicate the existence of significant problems of the theory in its current form. 
These issues make the FEP, as it stands, not straightforwardly applicable to the simple linear systems studied here and suggest that more development is needed before the theory can be applied to the kind of complex systems that describe living and cognitive processes.
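The symmetry point in this abstract can be illustrated with a toy computation of our own (unit diffusion, a three-variable drift; this is not the authors' parameterisation). For a linear system dx = A x dt + dW, the stationary covariance solves a Lyapunov equation, and the Markov blanket condition holds exactly when the external-internal entry of the stationary precision matrix vanishes:

```python
import numpy as np

def stationary_cov(A, D=None):
    """Solve A @ S + S @ A.T = -D for the stationary covariance S,
    by vectorising the Lyapunov equation (row-major vec convention)."""
    n = A.shape[0]
    D = np.eye(n) if D is None else D
    L = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    return np.linalg.solve(L, -D.reshape(-1)).reshape(n, n)

def external_internal_precision(a, b, c, d):
    """Drift couples external <-> blanket <-> internal; the external
    and internal states are never directly coupled in the drift."""
    A = np.array([[-1.0, a, 0.0],
                  [b, -1.0, c],
                  [0.0, d, -1.0]])
    P = np.linalg.inv(stationary_cov(A))
    return P[0, 2]   # zero iff internal _|_ external given blanket

# Fully symmetric coupling: the blanket condition holds (entry ~ 0).
print(abs(external_internal_precision(0.5, 0.5, 0.5, 0.5)) < 1e-8)  # True
# Asymmetric coupling: the condition generally fails (entry nonzero).
print(abs(external_internal_precision(0.5, 0.2, 0.5, 0.2)) > 1e-4)  # True
```

Even though the drift never couples external and internal states directly, the stationary precision matrix generically acquires a nonzero external-internal entry once the coupling is asymmetric, which is the kind of behaviour the abstract reports for perception-action loops of living systems.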
... The main difference is that, according to them, organisms (and cognitive systems) do not embody, or are not themselves, their own models; rather, they have, or make use of, those models. Bruineberg et al. (2022); Menary and Gillett (2021); but also Facchin (2021), although in this case limited to the extent in which Markov blankets can be used to settle disputes over vehicle externalism. ...
Article
Full-text available
In this paper, by means of a novel use of insights from the literature on scientific modelling, I will argue in favour of an instrumentalist approach to the models that are crucially involved in the study of adaptive systems within the Free-Energy Principle (FEP) framework. I will begin (§2) by offering a general, informal characterisation of FEP. Then (§3), I will argue that the models involved in FEP-theorising are plausibly intended to be isomorphic to their targets. This will allow (§4) to turn the criticisms moved against isomorphism-based accounts of representation towards the FEP modelling practice. Since failure to establish an isomorphism between model and target would result in the former’s failure to represent the latter, and given that it is highly unlikely that FEP-models are ever isomorphic to their targets, maintaining that FEP-models represent their targets as they are, in a realist sense, is unwarranted. Finally (§5), I will consider what implications my argument in favour of an instrumentalist reading of FEP-models has for attempts at making use of the FEP to elaborate an account of what cognition exactly is. My conclusion is that we should not dismiss FEP-based accounts of cognition, as they would still be informative and would further our understanding of the nature of cognition. Nonetheless, the prospects of settling the philosophical debates that sparked the interest in having a “mark of the cognitive” are not good.
... not fulfill. In other words, life phenomena can be quite sensitive to the transient behavior between different steady-state regimes. Consequently, the assumption of a non-equilibrium steady state may miss crucial features in the evolution of life and the developmental stages of a living system. The epistemic status of the FEP is controversial too; Bruineberg et al. (2022) present an overall view of the discussion. See also the commentary (Sánchez-Cañizares, 2022b) on how the stipulation of Markov blankets may help to build an ontology. This topic will be part of the last section of the paper. ...
Article
Full-text available
The Maximum Entropy Production Principle (MEPP) stands out as an overarching principle that rules life phenomena in Nature. However, its explanatory power beyond heuristics remains controversial. On the one hand, the MEPP has been successfully applied principally to non-living systems far from thermodynamic equilibrium. On the other hand, the underlying assumptions to lay the MEPP’s theoretical foundations and range of applicability increase the possibilities of conflicting interpretations. More interestingly, from a metaphysical stance, the MEPP’s philosophical status is hotly debated: does the MEPP passively translate physical information into macroscopic predictions or actively select the physical solution in multistable systems, granting the connection between scientific models and reality? This paper deals directly with this dilemma by discussing natural determination from three angles: (1) Heuristics help natural philosophers to build an ontology. (2) The MEPP’s ontological status may stem from its selection of new forms of causation beyond physicalism. (3) The MEPP’s ontology ultimately depends on the much-discussed question of the ontology of probabilities in an information-theoretic approach and the ontology of macrostates according to the Boltzmannian definition of entropy.
... While our contribution was restricted to a critical technical discussion of one particular theory of a Bayesian mechanics, some responses to our contribution addressed broader ideas of the principle and its surrounding philosophy. These resonate with a number of high-profile, more philosophically-grounded, critiques of the principle that have been published recently [4][5][6]. However, despite the critical nature of aspects of our own work, we hope it will be read as a contribution aiming to advance the field and help set it upon rigorous foundations rather than dismissing it outright. ...
... Importantly, however, active inference does not appear to offer any principled means for arbitrating such boundary disputes (cf. Bruineberg et al., 2021; Clark, 2017a; Facchin, 2021; Kirchhoff & Kiverstein, 2019; Ramstead et al., 2019). Rather, the framework permits the individuation of multiple nested models ranging from the cellular to the societal scale and beyond. ...
Preprint
Full-text available
Embodied cognition-the idea that mental states and processes should be understood in relation to one's bodily constitution and interactions with the world-remains a controversial topic in the cognitive sciences. Recently, however, increasing interest in predictive processing theories amongst proponents and critics of embodiment alike has raised hopes of a reconciliation. This article sets out to appraise the unificatory potential of predictive processing, focusing in particular on embodied formulations of active inference. Our analysis suggests that most active inference accounts invoke weak, potentially trivial conceptions of embodiment; those making stronger claims do so independently of the theoretical commitments of the active inference framework. We suggest that a more compelling version of embodied active inference can be motivated by adopting a diachronic perspective on the way rhythmic physiological activity shapes neural development in utero. According to this visceral afferent training hypothesis, early-emerging physiological processes are essential not only for supporting the biophysical development of neural structures, but also for configuring the cognitive architecture these structures entail. Focusing in particular on the cardiovascular system, we propose three candidate mechanisms through which visceral afferent training might operate: (i) activity-dependent neuronal development, (ii) periodic signal modelling, and (iii) oscillatory network coordination.
... More than two decades after this study, it remains very difficult to trace signals as they traverse multiple nodes of known connectivity in a brain network (see van der Meij and Voytek, 2018; Hodassman et al., 2022). Models that rely on inferring causality linking separate measurements of structure and activation (e.g., Javadzadeh and Hofer, 2021) can be misleading (see, e.g., Mehler and Kording, 2018; Brette, 2019; Bruineberg et al., 2021). ...
Article
Full-text available
Philosophers have long recognized the value of metaphor as a tool that opens new avenues of investigation. By seeing brains as having the goal of representation, the computer metaphor in its various guises has helped systems neuroscience approach a wide array of neuronal behaviors at small and large scales. Here I advocate a complementary metaphor, the internet. Adopting this metaphor shifts our focus from computing to communication, and from seeing neuronal signals as localized representational elements to seeing neuronal signals as traveling messages. In doing so, we can take advantage of a comparison with the internet's robust and efficient routing strategies to understand how the brain might meet the challenges of network communication. I lay out nine engineering strategies that help the internet solve routing challenges similar to those faced by brain networks. The internet metaphor helps us by reframing neuronal activity across the brain as, in part, a manifestation of routing, which may, in different parts of the system, resemble the internet more, less, or not at all. I describe suggestive evidence consistent with the brain's use of internet-like routing strategies and conclude that, even if empirical data do not directly implicate internet-like routing, the metaphor is valuable as a reference point for those investigating the difficult problem of network communication in the brain and in particular the problem of routing.
... In doing so, active inference has been aspiring to become a generalized framework in various fields, ranging from philosophy, psychology and psychiatry to neuroscience, robotics and artificial intelligence (e.g. [23,52,63–67]; 'a theory of every "thing" that can be distinguished from other "things" in a statistical sense' as Friston provocatively puts it [68]; but also note critiques on the scope of current versions of the framework [69–71]). ...
Article
Full-text available
In this article, we analyse social interactions, drawing on diverse points of view, ranging from dialectics, second-person neuroscience and enactivism to dynamical systems, active inference and machine learning. To this end, we define interpersonal attunement as a set of multi-scale processes of building up and materializing social expectations—put simply, anticipating and interacting with others and ourselves. While cultivating and negotiating common ground, via communication and culture-building activities, are indispensable for the survival of the individual, the relevant multi-scale mechanisms have been largely considered in isolation. Here, we argue, collective psychophysiology can lend itself to the fine-tuned analysis of social interactions, without neglecting the individual. On the other hand, an interpersonal mismatch of expectations can lead to a breakdown of communication and social isolation known to negatively affect mental health. In this regard, we review psychopathology in terms of interpersonal misattunement, conceptualizing psychiatric disorders as disorders of social interaction, to describe how individual mental health is inextricably linked to social interaction. By doing so, we foresee avenues for an interpersonalized psychiatry, which moves from a static spectrum of disorders to a dynamic relational space, focusing on how the multi-faceted processes of social interaction can help to promote mental health. This article is part of the theme issue ‘Concepts in interaction: social engagement and inner experiences’.
... One of these discussions is about the realist credentials of FEP's theoretical framework. The discussion is interesting not least because some participants have delved into the issue of realism in light of the model-based nature of science (Andrews 2021; Beni 2021; Bruineberg et al. 2021; Ramstead et al. 2019; Wiese and Friston 2021). The debate reaches new heights in Colombo and Palacios's (2021) expressed scepticism about the plausibility of explanations ensuing from FEP, on grounds of the conveyed mismatch between the physics assumptions of FEP and properties of biological target systems. ...
Article
Full-text available
Richard Levins’s (Am Sci 54(4):421–431, 1966) paper sets a landmark for the significance of scientific model-making in biology. Colombo and Palacios (Biol Philos 36(5):1–26. 10.1007/S10539-021-09818-X, 2021) have recently built their critique of the explanatory power of the Free Energy Principle on Levins’s insight into the relationship between generality, realism, and precision. This paper addresses the issue of the plausibility of biological explanations that are grounded in the Free Energy Principle (FEP) and deals with the question of the realist fortitude of FEP’s theoretical framework. It indicates that what is required for establishing the plausibility of the explanation of a target system given a model of that system is the dosage or the harmony between the generality and accuracy of explanatory models. This would also provide a basis for seeing how scientific realism could be a viable option with respect to FEP.
... The free energy principle addresses this challenge by developing a physics of sentience combining dynamical systems theory with the boundary separating self from nonself. The coupling of the dynamics of the particular partition of states external and internal to a system to the corresponding information geometry of belief updating and inference is carried out by Bruineberg et al. [107]. 6. The social challenge: Human thought is inherently social in ways that cognitive science ignores. ...
Article
Full-text available
Cognition, historically considered a uniquely human capacity, has recently been found to be an ability of all living organisms, from single cells up. This study approaches cognition from an info-computational stance, in which structures in nature are seen as information, and processes (information dynamics) are seen as computation, from the perspective of a cognizing agent. Cognition is understood as a network of concurrent morphological/morphogenetic computations unfolding as a result of self-assembly, self-organization, and autopoiesis of physical, chemical, and biological agents. The present-day human-centric view of cognition still prevailing in major encyclopedias has a variety of open problems. This article considers recent research about morphological computation, morphogenesis, agency, basal cognition, extended evolutionary synthesis, free energy principle, cognition as Bayesian learning, active inference, and related topics, offering new theoretical and practical perspectives on problems inherent to the old computationalist cognitive models which were based on abstract symbol processing, and unaware of actual physical constraints and affordances of the embodiment of cognizing agents. A better understanding of cognition is centrally important for future artificial intelligence, robotics, medicine, and related fields.
... In the target article, Bruineberg et al. (2021) disrupt current debates about the role of Markov blankets in demarcating the boundaries between living systems and their environments. The authors accurately describe the gap between a Markov blanket as a useful property for statistical inference and the more ontologically loaded concept in the FEP, as a boundary within which Bayesian inference occurs. ...
Article
Full-text available
Markov blankets – statistical independences between system and environment – have become popular to describe the boundaries of living systems under Bayesian views of cognition. The intuition behind Markov blankets originates from considering acyclic, atemporal networks. In contrast, living systems display recurrent, nonequilibrium interactions that generate pervasive couplings between system and environment, making Markov blankets highly unusual and restricted to particular cases.
... Note, however, that it is not the mere fact of representing a Markov blanket that makes the 3 N boundary ontologically distinct. Indeed, Bruineberg et al. (2021) have recently criticized the idea of reifying Markov blankets. The ontological distinctiveness of 3 N lies in the nature of the neural system. ...
Article
Full-text available
The program of “neurophenomenal structuralism” is presented as an agenda for a genuine structuralist neuroscience of consciousness that seeks to understand specific phenomenal experiences as strictly relational affairs. The paper covers a broad range of topics. It starts from considerations about neural change detection and relational coding that motivate a solution of the Newman problem of the brain in terms of spatiotemporal relations. Next, phenomenal quality spaces and their Q-structures are discussed. Neurophenomenal structuralism proclaims a homomorphic mapping of the structures of self-organized neural maps in the brain onto Q-structures, and it will be demonstrated how this leads to a new and special version of structural representationalism about phenomenal content. A methodological implication of neurophenomenal structuralism is that it proposes measurement procedures that focus on the relationships between different stimuli (as, for instance, similarity ratings or representational geometry methods). Finally, it will be shown that neurophenomenal structuralism also has strong philosophical implications, as it leads to holism about phenomenal experiences and serves to reject inverted qualia scenarios.
... Even though Prosen never uses such a concept, it is implied in his discussion of Markov blankets. A detailed overview of the Markov-blanket formalism is well beyond the scope of this commentary (see Kirchhoff et al. 2018; Bruineberg et al. 2022; Raja et al. 2021). Suffice it to say that the Markov-blanket formalism is an essential part of the FEP. ...
Article
Full-text available
I sympathize with Prosen’s conviction in integrating enactivism, the free-energy principle, and the extended-mind hypothesis. However, I show that he uses the concept of “boundary” ambiguously. By disambiguating it, I suggest that we can keep both Markov blankets and operational closure as ways of drawing the boundaries of a cognitive system. Nevertheless, from an enactive perspective, neither of those boundaries is a “cognitive” boundary. Cite as: Bogotá J. D. (2022) Why not Both (but also, Neither)? Markov Blankets and the Idea of Enactive-Extended Cognition. Constructivist Foundations 17(3): 233–235. https://constructivist.info/17/3/233
... At the time of writing, the only articles mentioning the free energy principle in Biology & Philosophy are associated with this Topical Collection. It has received more attention from philosophers of cognitive science (Sprevak 2020; Williams 2021; Hohwy 2020; Bruineberg et al. 2021). ...
Article
Full-text available
The free energy principle is notoriously difficult to understand. In this paper, we relate the principle to a framework that philosophers of biology are familiar with: Ruth Millikan’s teleosemantics. We argue that: (i) systems that minimise free energy are systems with a proper function; and (ii) Karl Friston’s notion of implicit modelling can be understood in terms of Millikan’s notion of mapping relations. Our analysis reveals some surprising formal similarities between the two frameworks, and suggests interesting lines of future research. We hope this will aid further philosophical evaluation of the free energy principle.
... To address this question we need to briefly introduce the Markov blanket formalism. The terminology of Markov blankets is borrowed from the literature on causal Bayesian networks (Pearl, 1988; Bruineberg et al., 2022). The Markov blanket for a node in a Bayes network comprises the node's parents, children and parents of its children. ...
Article
Full-text available
Biological agents can act in ways that express a sensitivity to context-dependent relevance. So far it has proven difficult to engineer this capacity for context-dependent sensitivity to relevance in artificial agents. We give this problem the label the “problem of meaning”. The problem of meaning could be circumvented if artificial intelligence researchers were to design agents based on the assumption of the continuity of life and mind. In this paper, we focus on the proposal made by enactive cognitive scientists to design artificial agents that possess sensorimotor autonomy—stable, self-sustaining patterns of sensorimotor interaction that can ground values, norms and goals necessary for encountering a meaningful environment. More specifically, we consider whether the Free Energy Principle (FEP) can provide formal tools for modeling sensorimotor autonomy. There is currently no consensus on how to understand the relationship between enactive cognitive science and the FEP. However, a number of recent papers have argued that the two frameworks are fundamentally incompatible. Some argue that biological systems exhibit historical path-dependent learning that is absent from systems that minimize free energy. Others have argued that a free energy minimizing system would fail to satisfy a key condition for sensorimotor agency referred to as “interactional asymmetry”. These critics question the claim we defend in this paper that the FEP can be used to formally model autonomy and adaptivity. We will argue it is too soon to conclude that the two frameworks are incompatible. There are undeniable conceptual differences between the two frameworks but in our view each has something important and necessary to offer. The FEP needs enactive cognitive science for the solution it provides to the problem of meaning. Enactive cognitive science needs the FEP to formally model the properties it argues to be constitutive of agency. 
Our conclusion will be that active inference models based on the FEP provides a way by which scientists can think about how to address the problems of engineering autonomy and adaptivity in artificial agents in formal terms. In the end engaging more closely with this formalism and its further developments will benefit those working within the enactive framework.
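The Pearl-style definition quoted in the excerpt above (a node's Markov blanket comprises its parents, its children, and its children's other parents) is easy to make concrete. The following sketch is purely illustrative; the toy network and function names are our own assumptions, not anything from the cited works:

```python
# Sketch: the Markov blanket of a node in a Bayesian network, per the
# standard definition (Pearl, 1988): parents + children + co-parents
# (the children's other parents). The toy DAG below is illustrative.

def markov_blanket(node, parents):
    """`parents` maps each node to the set of its parents in the DAG."""
    children = {c for c, ps in parents.items() if node in ps}
    co_parents = {p for c in children for p in parents[c]} - {node}
    return set(parents.get(node, set())) | children | co_parents

# Toy DAG: A -> C, B -> C, C -> D, E -> D
dag = {"A": set(), "B": set(), "C": {"A", "B"}, "D": {"C", "E"}, "E": set()}

print(sorted(markov_blanket("C", dag)))  # ['A', 'B', 'D', 'E']
```

Conditioned on this blanket, C is independent of every remaining node, which is exactly the epistemic role the target article assigns to Pearl blankets.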
... One motivation traces to considering the problem of perception as one of inference about the causes of sensory signals 72,73. The other, exemplified by the free energy principle 74, appeals to fundamental constraints regarding control and regulation that apply to all systems that maintain their organization over time 75–77 (but see ref. 78). Both lead to the notion that the brain implements a process of 'prediction error minimization' 79 that approximates Bayesian inference through the reciprocal exchange of (usually top-down) perceptual predictions and (usually bottom-up) prediction errors 80 (although see ref. 81). ...
Article
Recent years have seen a blossoming of theories about the biological and physical basis of consciousness. Good theories guide empirical research, allowing us to interpret data, develop new experimental techniques and expand our capacity to manipulate the phenomenon of interest. Indeed, it is only when couched in terms of a theory that empirical discoveries can ultimately deliver a satisfying understanding of a phenomenon. However, in the case of consciousness, it is unclear how current theories relate to each other, or whether they can be empirically distinguished. To clarify this complicated landscape, we review four prominent theoretical approaches to consciousness: higher-order theories, global workspace theories, re-entry and predictive processing theories and integrated information theory. We describe the key characteristics of each approach by identifying which aspects of consciousness they propose to explain, what their neurobiological commitments are and what empirical data are adduced in their support. We consider how some prominent empirical debates might distinguish among these theories, and we outline three ways in which theories need to be developed to deliver a mature regimen of theory-testing in the neuroscience of consciousness. There are good reasons to think that the iterative development, testing and comparison of theories of consciousness will lead to a deeper understanding of this most profound of mysteries. Various theories have been developed for the biological and physical basis of consciousness. In this Review, Anil Seth and Tim Bayne discuss four prominent theoretical approaches to consciousness, namely higher-order theories, global workspace theories, re-entry and predictive processing theories and integrated information theory.
... I end this section by making a few relevant points. The first point is that Markov blankets, or the probabilistic barriers that separate the internal and external states (or sensory and active states), are Bayesian network models that set conditional independence between the inside and outside of given state spaces (Bruineberg et al., 2021; Pearl, 1988). Using Markovian models to articulate the FEP has led to interesting discussions (Andrews, 2021; Beni, 2021a; van Es, 2020) (more on this in the next section). ...
Article
This paper is generally concerned with the relationship between the model-based nature of the Free Energy Principle (FEP) and a realist stance on the said models. However, instead of defending realism directly, it starts by pondering the question of the origin of scientific models and asks what makes scientists’ attempt at making representational models of their environment so successful. In search of the answer, the paper develops a cognitive realist take on FEP, by arguing that not only constructing generative models and minimising their conveyed prediction error under FEP provides a basis for explicating the origins of scientific model making, but it also helps with precisifying the notion of similarity in the context of model-based science.
... Monte Carlo methods are applicable to a wide spectrum of phenomena, from the physical processes in the ignition of a nuclear bomb (which was the first application of that method in computer simulations, see Galison, 1996) to 'particle filtering' approaches in Bayesian inference (Bishop, 2006; Murphy, 2012) and evolutionary computing (Holland, 1975, 1992). There is no expectation, though, that these phenomena are connected, let alone unified, by a shared mechanism that would be described by the Monte Carlo method, unless probabilistic structures such as Markov blankets are assumed to be real entities (as under the 'Free Energy Principle'; see p. 19 below and the discussion in Bruineberg et al. 2021). Otherwise, the probabilistic nature of this method does not even convey information on whether the pertinent phenomena themselves are stochastic or deterministic in nature. ...
Article
Full-text available
The problem of epistemic opacity in Artificial Intelligence (AI) is often characterised as a problem of intransparent algorithms that give rise to intransparent models. However, the degrees of transparency of an AI model should not be taken as an absolute measure of the properties of its algorithms but of the model’s degree of intelligibility to human users. Its epistemically relevant elements are to be specified on various levels above and beyond the computational one. In order to elucidate this claim, I first contrast computer models and their claims to algorithm-based universality with cybernetics-style analogue models and their claims to structural isomorphism between elements of model and target system (in: Black, Models and metaphors, 1962). While analogue models aim at perceptually or conceptually accessible model-target relations, computer models give rise to a specific kind of underdetermination in these relations that needs to be addressed in specific ways. I then undertake a comparison between two contemporary AI approaches that, although related, distinctly align with the above modelling paradigms and represent distinct strategies towards model intelligibility: Deep Neural Networks and Predictive Processing. I conclude that their respective degrees of epistemic transparency primarily depend on the underlying purposes of modelling, not on their computational properties.
... organisms and their environments is that they fail to pass the bottleneck of evolutionary theory and give us a misleading picture of living agents and what they are for. Bruineberg et al. (2021) show that one cannot just 'read off' the boundary between agent and environment from the mathematical formalism provided in the theoretical models. Instead, these are ambiguous and depend on additional assumptions by the modeler, thus requiring quite substantive metaphysical supplementation for Markov blankets to do their work. ...
Article
Full-text available
There has been much criticism of the idea that Friston’s free energy principle can unite the life and mind sciences. Here, we argue that perhaps the greatest problem for the totalizing ambitions of its proponents is a failure to recognize the importance of evolutionary dynamics and to provide a convincing adaptive story relating free energy minimization to organismal fitness.
... "We should not confuse the foundations of the real world with the intellectual props that serve to evoke that world on the stage of our thoughts". This quote from Ernst Mach ([1], p.531, translated in [2], p.19), surfacing from the origins of the philosophy of science, connects directly to the target article [3], in which Bruineberg and colleagues discuss how Markov blankets (MBs) should be understood within the wider literature of the free energy principle (FEP, [4]), as well as how 'models' and 'modelling' should be interpreted within the cognitive and brain sciences more generally. ...
Preprint
Full-text available
Bruineberg and colleagues helpfully distinguish between instrumental and ontological interpretations of Markov blankets, exposing the dangers of using the former to make claims about the latter. However, proposing a sharp distinction neglects the value of recognising a continuum spanning from instrumental to ontological. This value extends to the related distinction between ‘being’ and ‘having’ a model.
... A condition for the minimization of free energy is thus a strict separation of the system's interior from its surroundings, that is, the system's ability to distinguish itself from its environment (Friston, 2013). There is a certain divergence between Pearl's "instrumental" interpretation of Markov blankets and Friston's "realist" one (Bruineberg et al., 2020). Pearl's interpretation does not go beyond pure formalism, and Markov blankets themselves are simply a mathematical construct used for inference about (for example) generative models. ...
Article
Full-text available
The dispute over the continuity of life and mind. Arguments for cognitivism: The purpose of this paper is to discuss the non-cognitivist position in the so-called dispute over the continuity/discontinuity of life and mind. In discussing the views of Michael Kirchhoff and Tom Froese, I will point out some difficulties with their position. Next, I will formulate three arguments in favor of the cognitivist alternative, emphasizing the need to appeal to semantic information in explaining these phenomena. According to the non-cognitivist position, there is continuity along the life-mind line, which can be justified by referring to Shannon's concept of syntactic information. Opponents of this thesis, i.e. supporters of cognitivism, claim that the explanation of cognition requires tools other than those used to explain life: first, because the notion of syntactic information does not exhaust the complexity of these phenomena, and, second, because the non-cognitivist position raises many problems and ambiguities. According to cognitivists, when explaining life and mind, one should refer to the concept of semantic information, which non-cognitivists reject. In the Conclusion I will analyze the ambiguities and assumptions related to the thesis of continuity or discontinuity between life and cognitive processes.
... Therefore, the modularity of the mind, as well as any other scientifically informed picture of cognition, would remain always model relative. This assertion speaks directly to the recent growing interest in the non-realist reading of elements of Bayesian networks (Bruineberg et al., 2020; Ramstead et al., 2020; van Es, 2020; van Es & Hipolito, 2020). To be clear, I do not think Hipolito and Kirchhoff (2019) are bound to concede a non-realist reading of DCMs. ...
Article
The paper presents a model-based defence of the partial functional/informational segregation of cognition in the context of the predictive architecture. The paper argues that the model-relativeness of modularity does not need to undermine its tenability. In fact, it holds that using models is indispensable to scientific practice, and it builds its argument about the indispensability of modularity to predictive architecture on the indispensability of scientific models. More specifically to defend the modularity thesis, the paper confutes two counterarguments that lie at the centre of Hipolito and Kirchhoff’s (2019) recent confutation of the modularity thesis. The main insight of the paper is that Hipolito and Kirchhoff’s counterarguments miss the mark because they dismiss a few rudimentary facts about the model-based nature of dynamical causal models and Markov blankets.
... Here, in their "natural environment", Markov blankets have (historically) proven to be an effective tool, capable of reliably performing the mathematical abstractions for which they were developed; in that context, however, they are not permitted to support any useful inference about "real-world" structures. On the other hand, with the recent development of the FEP, the blankets identified by its "explanatory model" (that is, active inference) seem to be ontologically promoted to features of the (real) modelled systems themselves, so as to enable the whole range of metaphysical implications that the FEP's proponents need in order to give a normative "nature" to their analyses and to widen the ambitious scope of their applications (BRUINEBERG et al, 2020). The resulting worry is that Markov blankets may end up becoming merely an unintended strategy of FEP modellers. ...
Chapter
Full-text available
The Free Energy Principle (FEP) has been widely presented as a unified theory of neural functioning, a general principle for the self-organization of biological systems, and (more recently) even as a predictive theory of the dynamics of (most) "things". In addition, Active Inference is commonly taken to be the corollary/process theory directly implied by the FEP, capable of modelling the full range of phenomena ideally targeted by biologists and (neuro)cognitive scientists. In this presentation, we aim to echo and (briefly) synthesize a series of constructive criticisms that currently challenge both of these claims, arguing that the FEP is perhaps not the generalizing principle it is usually claimed to be, and that Active Inference is most likely not the comprehensive corollary/process theory most of its proponents believe it to be. As we intend to show, on a more careful reading the FEP reveals itself as merely an ingenious stratagem for generalizing Bayesian inference to all possible domains of analysis, via the formalism inherent in the use of Markov blankets in its description: a "mathematical instrument" that by itself does not follow from any general and/or evident assumption about physical, biological or cognitive systems (as one would usually expect of "first principles" of this kind), being instead a descriptive device that allows processes and entities to be conceptualized under a very specific kind of model: variational Bayesian inference. In this sense, perhaps the only viable way of currently conceiving the FEP in a sufficiently comprehensive manner is as a formal modelling framework, with Active Inference being only its most widespread instantiation, starting from an unrelated process theory.
And, if these conclusions are coherent, it becomes possible to conclude that any questions concerning the epistemic status of FEP-based models, their empirical content, their predictions, and their more specific relations to the various corollaries and process theories commonly employed by the sciences of mind and life need to be evaluated independently, case by case. In this respect, the present state of the FEP is perhaps best mirrored in the more general landscape of the literature on the use of models: there is very little to be said qualitatively about FEP-derived models as a homogeneous corpus, but there are numerous challenges to be directed at specific instantiations of this modelling framework. Moreover, as it stands, the specialized literature does not always seem to recognize this distinction, often conflating the FEP modelling framework with some of its specific implementations and (therefore) not being sufficiently careful about the ontological costs and epistemic consequences most directly involved in such questions. As an illustrative example, we intend to show how this situation may even be affecting the plausibility of Active Inference as an autonomous process theory, since, in modelling perception and action, it would end up presupposing those processes rather than explaining them; that is, by appealing to the FEP only as a modelling framework, Active Inference no longer seems capable of providing an adequate grounding for the complexity of the (Bayesian) generative models it needs to implement.
... The FEP builds on the notion that human brains, like all living systems, can be thought of as "trying" to minimize their surprisal through representing an optimal world model and acting on it (Friston, 2010). At its core, the FEP is closely related to Bayesianism (Aitchison and Lengyel, 2017) but incorporates a (variable) host of additional assumptions (Gershman, 2019;Bruineberg et al., 2020), the most important arguably being the explicit representation of prediction errors at all stages of perception and action, termed predictive coding (PC, Rao and Ballard, 1999; see Aitchison and Lengyel, 2017 for PC schemes in other contexts). According to PC, predictions descend the cortical hierarchy where they suppress incoming bottom-up signals leading to the representation of prediction errors. ...
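The predictive coding (PC) scheme summarized in this context can be sketched in a few lines. The following is a minimal illustrative toy, not the Rao and Ballard (1999) model: a single higher-level estimate predicts its input, and the residual prediction error drives the update of that estimate.

```python
# Minimal predictive-coding sketch (illustrative only): a top-down
# prediction suppresses the incoming signal, leaving a prediction error
# that ascends and drives the belief update.
def predictive_coding_step(mu, observation, learning_rate=0.1):
    prediction = mu                      # top-down prediction of the input
    error = observation - prediction     # bottom-up prediction error
    mu = mu + learning_rate * error      # error-driven belief update
    return mu, error

mu = 0.0
for _ in range(100):
    mu, err = predictive_coding_step(mu, observation=1.0)
# mu converges toward the observed value, so the prediction error is suppressed
```

Repeated over the iterations, the estimate approaches the observation and the represented error shrinks toward zero, which is the sense in which predictions "suppress" bottom-up signals.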
Article
Full-text available
Within the methodologically diverse interdisciplinary research on the minimal self, we identify two movements with seemingly disparate research agendas – cognitive science and cognitive (developmental) robotics. Cognitive science, on the one hand, devises rather abstract models which can predict and explain human experimental data related to the minimal self. Incorporating the established models of cognitive science and ideas from artificial intelligence, cognitive robotics, on the other hand, aims to build embodied learning machines capable of developing a self “from scratch” similar to human infants. The epistemic promise of the latter approach is that, at some point, robotic models can serve as a testbed for directly investigating the mechanisms that lead to the emergence of the minimal self. While both approaches can be productive for creating causal mechanistic models of the minimal self, we argue that building a minimal self is different from understanding the human minimal self. Thus, one should be cautious when drawing conclusions about the human minimal self based on robotic model implementations and vice versa. We further point out that incorporating constraints arising from different levels of analysis will be crucial for creating models that can predict, generate, and causally explain behavior in the real world.
... This strict distinction between hidden and motor units is made for conceptual clarity; none of the measures outlined below depend on it. While the sensor and motor units of an MB thus constitute a Markov Blanket in the traditional, causal sense [20], they are not Markov Blankets as required according to Friston's free energy principle (FEP) formalism [21], because MBs are not self-organizing (see also [22]). ...
Article
Full-text available
Should the internal structure of a system matter when it comes to autonomy? While there is still no consensus on a rigorous, quantifiable definition of autonomy, multiple candidate measures and related quantities have been proposed across various disciplines, including graph-theory, information-theory, and complex system science. Here, I review and compare a range of measures related to autonomy and intelligent behavior. To that end, I analyzed the structural, information-theoretical, causal, and dynamical properties of simple artificial agents evolved to solve a spatial navigation task, with or without a need for associative memory. By contrast to standard artificial neural networks with fixed architectures and node functions, here, independent evolution simulations produced successful agents with diverse neural architectures and functions. This makes it possible to distinguish quantities that characterize task demands and input-output behavior, from those that capture intrinsic differences between substrates, which may help to determine more stringent requisites for autonomous behavior and the means to measure it.
... Existing work on active inference and the free energy principle has previously proposed an intimate link between action generation and perceptual inference (Friston, 2010; Friston et al., 2015). However, in doing so, it has ended up conflating expectations and desires (Yon et al., 2020) and adopting technical terms like 'Markov blankets' under problematic definitions (Bruineberg et al., 2021), not infrequently causing deep confusion (Freed, 2010). In the present work, I strive to avoid these issues by describing the shared computational machinery underlying perception and action in novel terms. ...
Preprint
Full-text available
The idea that the brain is a probabilistic (Bayesian) inference machine, continuously trying to figure out the hidden causes of its inputs, has become very influential in cognitive (neuro)science over recent decades. Here I present a relatively straightforward generalization of this idea: the primary computational task that the brain is faced with is to track the probabilistic structure of observations themselves, without recourse to hidden states. Taking this starting point seriously turns out to have considerable explanatory power, and several key ideas are developed from it: (1) past experience, encoded in prior expectations, has an influence over the future that is analogous to regularization as known from machine learning; (2) action generation (interpreted as constraint satisfaction) is a special case of such regularization; (3) the concept of attractors in dynamical systems provides a useful lens through which prior expectations, regularization, and action induction can be viewed; these thus appear as different perspectives on the same phenomenon; (4) the phylogenetically ancient imperative of acting to ensure and thereby observe conditions beneficial for survival is likely the same as that which underlies perceptual inference. The Bayesian brain hypothesis has been touted as promising to deliver a "unified science of mind and action". In this paper, I sketch an informal step towards fulfilling that promise, while avoiding some pitfalls that other such attempts have fallen prey to.
... One motivation traces to considering the problem of perception as one of inference about the causes of sensory signals 72,73 . The other – exemplified by the free energy principle 74 – appeals to fundamental constraints regarding control and regulation that apply to all systems that maintain their organization over time 75–77 (but see ref. 78 ). Both lead to the notion that the brain implements a process of 'prediction error minimization' 79 that approximates Bayesian inference through the reciprocal exchange of (usually top-down) perceptual predictions and (usually bottom-up) prediction errors 80 (although see ref. 81 ). ...
... However, PP's ontological and explanatory status is one of the more contentious issues in philosophy of mind and cognitive science (see e.g., Aitchison & Lengyel, 2017;Bruineberg et al., 2020;Colombo & Wright, 2017;Heilbron & Chait, 2018). Hohwy and Seth acknowledge this when they point out that "the PP framework can be cast at different levels of abstraction which make different claims about the underlying mechanism" (Hohwy & Seth, 2020, p. 15). ...
Article
Full-text available
The predictive processing framework has gained significant popularity across disciplines investigating the mind and brain. In this article we critically examine two of the recently made claims about the kind of headway that the framework can make in the neuroscientific and philosophical investigation of consciousness. Firstly, we argue that predictive processing is unlikely to yield significant breakthroughs in the search for the neural correlates of consciousness as it is still too vague to individuate neural mechanisms at a fine enough scale. Despite its unifying ambitions, the framework harbors a diverse family of competing computational models which rely on different assumptions and are under-constrained by neurological data. Secondly, we argue that the framework is also ill suited to provide a unifying theory of consciousness. Here, we focus on the tension between the claim that predictive processing is compatible with all of the leading neuroscientific models of consciousness with the fact that most attempts explaining consciousness within the framework rely heavily on external assumptions.
... The organism's interaction with the environment – how the organism garners evidence for its existence by actively searching the environment and how it decreases the free energy by acting on the environment – can be modelled by using Markov blankets. The question of whether Markov blankets are embodied (or real/physical) entities within organisms or just abstract modelling tools has been the subject of some debate (Bruineberg et al. 2020; van Es and Hipólito 2020; Beni 2021). ...
Article
Full-text available
The paper addresses the issue of theory-ladenness of observation/experimentation. Motivated by a naturalistic reading of Thomas Kuhn's insights into the same topic, I draw on cognitive neuroscience (predictive coding under Free Energy Principle) to scrutinise theory-ladenness. I equate theory-ladenness with the cognitive penetrability of perceptual inferences and argue that strong theory-ladenness prevails only under uncertain circumstances. This understanding of theory-ladenness is in line with Thomas Kuhn's view on the same subject as well as a cognitive version of modest realism rather than downright antirealism.
Chapter
This paper’s aim is twofold: on the one hand, to provide an overview of the state of the art of a particular construct used in Bayesian networks, i.e. Markov blankets (MBs), focusing on their relationship with the cognitive theories of the free energy principle (FEP) and active inference. On the other hand, to sketch how these concepts can be practically applied to artificial intelligence (AI), with special regard to their use in the field of sustainable development. The proposal of this work, indeed, is that understanding exactly to what extent MBs may be framed in the context of the FEP and active inference could be useful for implementing tools that support decision-making processes for addressing sustainability. Conversely, examining how these tools relate to those theoretical frameworks may help to shed some light on the debate about the FEP, active inference, and their linkages with MBs, which still remains to be clarified. For the above purposes, the paper is organized as follows: after a general introduction, Sect. 2 explains what an MB is, and how it is related to the concepts of the FEP and active inference. Sect. 3 then focuses on how MBs, together with the FEP and active inference, are employed in the field of AI. On these grounds, Sect. 4 explores whether MBs, the FEP, and active inference can be useful for facing the issues related to sustainability.
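As a concrete illustration of the Pearl-style notion this chapter starts from: in a Bayesian network, the Markov blanket of a node consists of its parents, its children, and the other parents of those children. A minimal sketch follows; the sprinkler graph is a standard textbook example, not one drawn from the chapter itself.

```python
# Compute a Pearl-style Markov blanket from a DAG given as a
# node -> set-of-parents mapping (example graph is illustrative).
def markov_blanket(node, parents):
    """Return the Markov blanket of `node`: parents, children, co-parents."""
    children = {c for c, ps in parents.items() if node in ps}
    co_parents = {p for c in children for p in parents[c]} - {node}
    return set(parents.get(node, set())) | children | co_parents

dag = {
    "rain":      set(),
    "sprinkler": set(),
    "wet_grass": {"rain", "sprinkler"},
    "slippery":  {"wet_grass"},
}
print(sorted(markov_blanket("rain", dag)))  # ['sprinkler', 'wet_grass']
```

Conditional on its blanket, a node is independent of every other variable in the network, which is precisely the epistemic role the target article attributes to Pearl blankets.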
Article
This paper draws on the resources of computational neuroscience (an account of active inference under the free energy principle) to address Bas van Fraassen's bad lot objection to the inference to the best explanation (IBE). The general assumption of this paper is that IBE is a finessed form of active inferences that self‐organising systems perform to maximise the chance of their survival. Under this assumption, the paper aims to establish the following points: first, the capacity to learn to perform explanatory inferences comes with evolutionary privileges; second, adaptive actions guide beliefs and beliefs are action‐oriented; and third, IBE is not inconsistent with (approximate) Bayesianism but plays a heuristic role to it.
Article
Full-text available
How do intelligent agents spawn and exploit integrated processing regimes spanning brain, body, and world? The answer may lie in the ability of the biological brain to select actions and policies in the light of counterfactual predictions—predictions about what kinds of futures will result if such-and-such actions are launched. Appeals to the minimization of ‘counterfactual prediction errors’ (the ones that would result under various scenarios) already play a leading role in attempts to apply the basic toolkit of the neurocomputational theory known as ‘predictive processing’ to higher cognitive functions such as policy selection and planning. In this paper, I show that this also leads naturally to the discovery and use of extended processing regimes defined across heterogeneous mixtures of biological and non-biological resources. This solves a long-standing puzzle concerning the ‘recruitment’ of the right non-neural processing resources at the right time. It reveals how (and why) human brains spawn and maintain extended human minds.
Article
Full-text available
Researchers recognize the affinity of habits-as-heuristics and habits-as-routines. This paper argues that the affinity should not be surprising, as both kinds of habits are the outcome of rational choice. The paper finds that the dual process theory, once reconstructed as based on rational choice, reveals that the affinity runs deep, as threefold parallelism: i) the cognitive economy responsible for habits-as-heuristics parallels what this paper calls the "physiological economy" responsible for habits-as-routines; ii) the occasional slipup of heuristics generated by the cognitive economy parallels the occasional slipup of routines of the physiological economy; and iii) the breakdown of heuristics of the cognitive economy parallels the breakdown of routines of the physiological economy. The rationality-based dual process theory can explain-whereas the single process theory cannot-why slipups do not induce the decision makers to abandon the pertinent habit, but breakdowns do.
Article
For most of the twentieth century, consciousness was the elephant in the living room of science – we all knew it was there, but no-one wanted to talk about it. Aspirations for a science of consciousness faced an array of formidable philosophical, theoretical and methodological challenges, thought by many to be insurmountable. In recent decades, pessimism has been replaced by increasing optimism, as scholars and researchers from many fields, including psychology, neuroscience, philosophy, AI and evolutionary biology have focused their trans-disciplinary efforts on understanding the enigma of how and why we are conscious of ourselves and the world around us.
Article
Full-text available
The active inference framework, and in particular its recent formulation as a partially observable Markov decision process (POMDP), has gained increasing popularity in recent years as a useful approach for modeling neurocognitive processes. This framework is highly general and flexible in its ability to be customized to model any cognitive process, as well as simulate predicted neuronal responses based on its accompanying neural process theory. It also affords both simulation experiments for proof of principle and behavioral modeling for empirical studies. However, there are limited resources that explain how to build and run these models in practice, which limits their widespread use. Most introductions assume a technical background in programming, mathematics, and machine learning. In this paper we offer a step-by-step tutorial on how to build POMDPs, run simulations using standard MATLAB routines, and fit these models to empirical data. We assume a minimal background in programming and mathematics, thoroughly explain all equations, and provide exemplar scripts that can be customized for both theoretical and empirical studies. Our goal is to provide the reader with the requisite background knowledge and practical tools to apply active inference to their own research. We also provide optional technical sections and multiple appendices, which offer the interested reader additional technical details. This tutorial should provide the reader with all the tools necessary to use these models and to follow emerging advances in active inference research.
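The core belief-updating step in such POMDP formulations can be illustrated outside MATLAB. The sketch below is not taken from the tutorial's routines; the likelihood matrix and prior are invented for illustration. It shows the discrete Bayes rule that underlies state estimation in these models: a likelihood matrix A mapping hidden states to observations combines with a prior over states to yield a posterior.

```python
import numpy as np

# Illustrative discrete belief update of the kind used in POMDP-based
# active inference models (matrices are hypothetical examples).
A = np.array([[0.9, 0.2],    # p(obs=0 | state)
              [0.1, 0.8]])   # p(obs=1 | state)
prior = np.array([0.5, 0.5]) # p(state)

def update_belief(prior, observation):
    likelihood = A[observation]          # row selected by the observation
    posterior = likelihood * prior       # unnormalized Bayes rule
    return posterior / posterior.sum()   # normalize to a distribution

posterior = update_belief(prior, observation=1)
# observing obs=1 shifts the belief toward the second hidden state
```

In full active inference models this update is one ingredient among several (policies, expected free energy, learning), but the same likelihood-times-prior structure recurs throughout.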
Article
Full-text available
Bruineberg and colleagues argue that a realist interpretation of Markov blankets inadvertently relies upon unfounded assumptions. However, insofar as their diagnosis is accurate, their prescribed instrumentalism may ultimately prove insufficient as a complete remedy. Drawing upon a process-based perspective on living systems, we suggest a potential way to avoid some of the assumptions behind problems described by Bruineberg and colleagues.
Preprint
Enactivism is a major research programme in the philosophy of perception. Yet its metaphysical status is unclear, since it is claimed to avoid both idealism and realism yet still has aspects of both within it. One attempt to solve this conundrum is based on the fusion of enactivism with phenomenology and the mathematical concept of symmetry breaking (Moss Brender, 2013). I suggest this is not entirely successful and propose it needs the addition of a multi-level, non-reductive metaphysics (for example, Informational Structural Realism). The processes we commonly call ‘perception’ are causal transfers of information at certain levels in the hierarchy of meaningful structures that comprise physical reality. Phenomenologists could use the word ‘perception’ metaphorically across all levels, although realists need not do so.
Chapter
Full-text available
This collection of essays explores the metaphysical thesis that the living world is not ontologically made up of substantial particles or things, as has often been assumed, but is rather constituted by processes. The biological domain is organized as an interdependent hierarchy of processes, which are stabilized and actively maintained at different timescales. Even entities that intuitively appear to be paradigms of things, such as organisms, are actually better understood as processes. Unlike previous attempts to articulate processual views of biology, which have tended to use Alfred North Whitehead’s panpsychist metaphysics as a foundation, this book takes a naturalistic approach to metaphysics. It submits that the main motivations for replacing an ontology of substances with one of processes are to be looked for in the empirical findings of science. Biology provides compelling reasons for thinking that the living realm is fundamentally dynamic and that the existence of things is always conditional on the existence of processes. The phenomenon of life cries out for theories that prioritize processes over things, and it suggests that the central explanandum of biology is not change but rather stability—or, more precisely, stability attained through constant change. This multicontributor volume brings together philosophers of science and metaphysicians interested in exploring the consequences of a processual philosophy of biology. The contributors draw on an extremely wide range of biological case studies and employ a process perspective to cast new light on a number of traditional philosophical problems such as identity, persistence, and individuality.
Article
Full-text available
The free energy principle, and its corollary active inference, constitute a bio-inspired theory that assumes biological agents act to remain in a restricted set of preferred states of the world, i.e., they minimize their free energy. Under this principle, biological agents learn a generative model of the world and plan actions in the future that will maintain the agent in a homeostatic state that satisfies its preferences. This framework lends itself to being realized in silico, as it incorporates aspects that make it computationally affordable, such as variational inference and amortized planning. In this work, we investigate the use of deep learning to design and realize artificial agents based on active inference, providing a deep-learning-oriented presentation of the free energy principle, surveying works that are relevant in both machine learning and active inference areas, and discussing the design choices involved in the implementation process. This manuscript probes newer perspectives for the active inference framework, grounding its theoretical aspects in more pragmatic affairs, offering a practical guide to active inference newcomers and a starting point for deep learning practitioners who would like to investigate implementations of the free energy principle.
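The variational quantity these agents minimize can be written down directly for a discrete model. In the minimal sketch below (the two-state generative model is invented for illustration), the free energy F = E_q[ln q(s) − ln p(o, s)] is smallest when the approximate posterior q equals the exact posterior, at which point F equals the negative log evidence −ln p(o).

```python
import numpy as np

# Variational free energy for a toy discrete generative model
# (the likelihood matrix and prior are illustrative, not from the paper).
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])          # p(o | s)
prior = np.array([0.5, 0.5])        # p(s)

def free_energy(q, obs):
    """F = E_q[ln q(s) - ln p(o, s)] for approximate posterior q."""
    joint = A[obs] * prior          # p(o, s) for the observed o
    return float(np.sum(q * (np.log(q) - np.log(joint))))

obs = 1
posterior = A[obs] * prior / (A[obs] * prior).sum()
# At the exact posterior, F attains its minimum, -ln p(o);
# any other q (e.g. a uniform belief) yields a larger F.
```

This is the sense in which minimizing free energy performs approximate Bayesian inference: driving F down drives q toward the true posterior while bounding the model evidence.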
Article
Full-text available
Several authors have made claims about the compatibility between the Free Energy Principle (FEP) and theories of autopoiesis and enaction. Many see these theories as natural partners or as making similar statements about the nature of biological and cognitive systems. We critically examine these claims and identify a series of misreadings and misinterpretations of key enactive concepts. In particular, we notice a tendency to disregard the operational definition of autopoiesis and the distinction between a system’s structure and its organization. Other misreadings concern the conflation of processes of self-distinction in operationally closed systems and Markov blankets. Deeper theoretical tensions underlie some of these misinterpretations. FEP assumes systems that reach a non-equilibrium steady state and are enveloped by a Markov blanket. We argue that these assumptions contradict the historicity of sense-making that is explicit in the enactive approach. Enactive concepts such as adaptivity and agency are defined in terms of the modulation of parameters and constraints of the agent-environment coupling, which entail the possibility of changes in variable and parameter sets, constraints, and in the dynamical laws affecting the system. This allows enaction to address the path-dependent diversity of human bodies and minds. We argue that these ideas are incompatible with the time invariance of non-equilibrium steady states assumed by the FEP. In addition, the enactive perspective foregrounds the enabling and constitutive roles played by the world in sense-making, agency, development. We argue that this view of transactional and constitutive relations between organisms and environments is a challenge to the FEP. 
Once we move beyond superficial similarities, identify misreadings, and examine the theoretical commitments of the two approaches, we reach the conclusion that far from being easily integrated, the FEP, as it stands formulated today, is in tension with the theories of autopoiesis and enaction.
Article
Full-text available
This work considers a class of canonical neural networks comprising rate coding models, wherein neural activity and plasticity minimise a common cost function—and plasticity is modulated with a certain delay. We show that such neural networks implicitly perform active inference and learning to minimise the risk associated with future outcomes. Mathematical analyses demonstrate that this biological optimisation can be cast as maximisation of model evidence, or equivalently minimisation of variational free energy, under the well-known form of a partially observed Markov decision process model. This equivalence indicates that the delayed modulation of Hebbian plasticity—accompanied with adaptation of firing thresholds—is a sufficient neuronal substrate to attain Bayes optimal inference and control. We corroborated this proposition using numerical analyses of maze tasks. This theory offers a universal characterisation of canonical neural networks in terms of Bayesian belief updating and provides insight into the neuronal mechanisms underlying planning and adaptive behavioural control.
Article
Full-text available
Disagreement about how best to think of the relation between theories and the realities they represent has a longstanding and venerable history. We take up this debate in relation to the free energy principle (FEP) - a contemporary framework in computational neuroscience, theoretical biology and the philosophy of cognitive science. The FEP is very ambitious, extending from the brain sciences to the biology of self-organisation. In this context, some find apparent discrepancies between the map (the FEP) and the territory (target systems) a compelling reason to defend instrumentalism about the FEP. We take this to be misguided. We identify an important fallacy made by those defending instrumentalism about the FEP. We call it the literalist fallacy: this is the fallacy of inferring the truth of instrumentalism based on the claim that the properties of FEP models do not literally map onto real-world, target systems. We conclude that scientific realism about the FEP is a live and tenable option.
Article
Full-text available
Over the last fifteen years, an ambitious explanatory framework has been proposed to unify explanations across biology and cognitive science. Active inference, whose most famous tenet is the free energy principle, has inspired excitement and confusion in equal measure. Here, we lay the ground for proper critical analysis of active inference, in three ways. First, we give simplified versions of its core mathematical models. Second, we outline the historical development of active inference and its relationship to other theoretical approaches. Third, we describe three different kinds of claim – labelled mathematical, empirical and general – routinely made by proponents of the framework, and suggest dialectical links between them. Overall, we aim to increase philosophical understanding of active inference so that it may be more readily evaluated. This is a manuscript draft of the Introduction to the Topical Collection "The Free Energy Principle: From Biology to Cognition", forthcoming in Biology & Philosophy.
Article
Full-text available
This paper develops a Bayesian mechanics for adaptive systems. First, we model the interface between a system and its environment with a Markov blanket. This affords conditions under which states internal to the blanket encode information about external states. Second, we introduce dynamics and represent adaptive systems as Markov blankets at steady state. This allows us to identify a wide class of systems whose internal states appear to infer external states, consistent with variational inference in Bayesian statistics and theoretical neuroscience. Finally, we partition the blanket into sensory and active states. It follows that active states can be seen as performing active inference and well-known forms of stochastic control (such as PID control), which are prominent formulations of adaptive behaviour in theoretical biology and engineering.
Article
Full-text available
The free energy principle (FEP) states that any dynamical system can be interpreted as performing Bayesian inference upon its surrounding environment. Although, in theory, the FEP applies to a wide variety of systems, there has been almost no direct exploration or demonstration of the principle in concrete systems. In this work, we examine in depth the assumptions required to derive the FEP in the simplest possible set of systems – weakly-coupled non-equilibrium linear stochastic systems. Specifically, we explore (i) how general the requirements imposed on the statistical structure of a system are and (ii) how informative the FEP is about the behaviour of such systems. We discover that two requirements of the FEP – the Markov blanket condition (i.e. a statistical boundary precluding direct coupling between internal and external states) and stringent restrictions on its solenoidal flows (i.e. tendencies driving a system out of equilibrium) – are only valid for a very narrow space of parameters. Suitable systems require an absence of asymmetries in perception-action loops that are highly unusual for living systems interacting with an environment, which are the kind of systems the FEP explicitly sets out to model. More importantly, we observe that a mathematically central step in the argument, connecting the behaviour of a system to variational inference, relies on an implicit equivalence between the dynamics of the average states of a system with the average of the dynamics of those states. This equivalence does not hold in general even for linear systems, since it requires an effective decoupling from the system's history of interactions. These observations are critical for evaluating the generality and applicability of the FEP and indicate the existence of significant problems of the theory in its current form. 
These issues make the FEP, as it stands, not straightforwardly applicable to the simple linear systems studied here, and suggest that more development is needed before the theory can be applied to the kind of complex systems that describe living and cognitive processes.
Article
Full-text available
Information theory provides an interdisciplinary method to understand important phenomena in many research fields ranging from astrophysical and laboratory fluids/plasmas to biological systems. In particular, information geometric theory enables us to envision the evolution of non-equilibrium processes in terms of a (dimensionless) distance by quantifying how information unfolds over time as a probability density function (PDF) evolves in time. Here, we discuss some recent developments in information geometric theory focusing on time-dependent dynamic aspects of non-equilibrium processes (e.g., time-varying mean value, time-varying variance, or temperature, etc.) and their thermodynamic and physical/biological implications. We compare different distances between two given PDFs and highlight the importance of a path-dependent distance for a time-dependent PDF. We then discuss the role of the information rate Γ = dL/dt and relative entropy in non-equilibrium thermodynamic relations (entropy production rate, heat flux, dissipated work, non-equilibrium free energy, etc.), and various inequalities among them. Here, L is the information length representing the total number of statistically distinguishable states a PDF evolves through over time. We explore the implications of a geodesic solution in information geometry for self-organization and control.
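The information length can be computed numerically for a simple path of PDFs. In this illustrative sketch (grid resolution and parameters are my own choices, not the paper's), a unit-variance Gaussian has its mean translated by 2, so the accumulated information length should approach the analytic value Δμ/σ = 2.

```python
import numpy as np

# Numerically integrate the information rate Gamma = sqrt(E[(d ln p/dt)^2])
# along a path of Gaussians whose mean drifts at unit speed (illustrative).
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
sigma, dt = 1.0, 1e-3

def gaussian(mu):
    p = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
    return p / (p.sum() * dx)          # normalize on the grid

L = 0.0
for mu in np.arange(0.0, 2.0, dt):
    p, p_next = gaussian(mu), gaussian(mu + dt)
    dlnp_dt = (np.log(p_next) - np.log(p)) / dt        # finite difference
    gamma = np.sqrt(np.sum(p * dlnp_dt ** 2) * dx)     # information rate
    L += gamma * dt                                    # accumulate distance

# For a pure mean shift of a Gaussian, L approaches (total shift)/sigma,
# i.e. about 2.0 here.
```

The result illustrates the "number of statistically distinguishable states" reading of L: shifting the mean by one standard deviation contributes one unit of information length, regardless of how fast the shift occurs.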
Article
Full-text available
How can the free energy principle contribute to research on neural correlates of consciousness, and to the scientific study of consciousness more generally? Under the free energy principle, neural correlates should be defined in terms of neural dynamics, not neural states, and should be complemented by research on computational correlates of consciousness – defined in terms of probabilities encoded by neural states. We argue that these restrictions brighten the prospects of a computational explanation of consciousness, by addressing two central problems. The first is to account for consciousness in the absence of sensory stimulation and behaviour. The second is to allow for the possibility of systems that implement computations associated with consciousness, without being conscious, which requires differentiating between computational systems that merely simulate conscious beings and computational systems that are conscious in and of themselves. Given the notion of computation entailed by the free energy principle, we derive constraints on the ascription of consciousness in controversial cases (e.g., in the absence of sensory stimulation and behaviour). We show that this also has implications for what it means to be, as opposed to merely simulate a conscious system.
Article
Full-text available
In this treatment of random dynamical systems, we consider the existence—and identification—of conditional independencies at nonequilibrium steady-state. These independencies underwrite a particular partition of states, in which internal states are statistically secluded from external states by blanket states. The existence of such partitions has interesting implications for the information geometry of internal states. In brief, this geometry can be read as a physics of sentience, where internal states look as if they are inferring external states. However, the existence of such partitions—and the functional form of the underlying densities—have yet to be established. Here, using the Lorenz system as the basis of stochastic chaos, we leverage the Helmholtz decomposition—and polynomial expansions—to parameterise the steady-state density in terms of surprisal or self-information. We then show how Markov blankets can be identified—using the accompanying Hessian—to characterise the coupling between internal and external states in terms of a generalised synchrony or synchronisation of chaos. We conclude by suggesting that this kind of synchronisation may provide a mathematical basis for an elemental form of (autonomous or active) sentience in biology.
Article
Full-text available
Scientific thinking about the minds of humans and other animals has been transformed by the idea that the brain is Bayesian. A cornerstone of this idea is that agents set the balance between prior knowledge and incoming evidence based on how reliable or ‘precise’ these different sources of information are — lending the most weight to that which is most reliable. This concept of precision has crept into several branches of cognitive science and is a lynchpin of emerging ideas in computational psychiatry — where unusual beliefs or experiences are explained as abnormalities in how the brain estimates precision. But what precisely is precision? In this Primer we explain how precision has found its way into classic and contemporary models of perception, learning, self-awareness, and social interaction. We also chart how ideas around precision are beginning to change in radical ways, meaning we must get more precise about how precision works.
Article
Full-text available
Two striking claims are advanced on behalf of the free energy principle (FEP) in cognitive science and philosophy: (i) that it identifies a condition of the possibility of existence for self-organising systems; and (ii) that it has important implications for our understanding of how the brain works, defining a set of process theories—roughly, theories of the structure and functions of neural mechanisms—consistent with the free energy minimising imperative that it derives as a necessary feature of all self-organising systems. I argue that the conjunction of claims (i) and (ii) rests on a fallacy of equivocation. The FEP can be interpreted in two ways: as a claim about how it is possible to redescribe the existence of self-organising systems (the Descriptive FEP ), and as a claim about how such systems maintain their existence (the Explanatory FEP ). Although the Descriptive FEP plausibly does identify a condition of the possibility of existence for self-organising systems, it has no important implications for our understanding of how the brain works. Although the Explanatory FEP would have such implications if it were true, it does not identify a condition of the possibility of existence for self-organising systems. I consider various ways of responding to this conclusion, and I explore its implications for the role and importance of the FEP in cognitive science and philosophy.
Article
Full-text available
In theoretical biology, we are often interested in random dynamical systems—like the brain—that appear to model their environments. This can be formalized by appealing to the existence of a (possibly non-equilibrium) steady state, whose density preserves a conditional independence between a biological entity and its surroundings. From this perspective, the conditioning set, or Markov blanket, induces a form of vicarious synchrony between creature and world—as if one were modelling the other. However, this results in an apparent paradox. If all conditional dependencies between a system and its surroundings depend upon the blanket, how do we account for the mnemonic capacity of living systems? It might appear that any shared dependence upon past blanket states violates the independence condition, as the variables on either side of the blanket now share information not available from the current blanket state. This paper aims to resolve this paradox, and to demonstrate that conditional independence does not preclude memory. Our argument rests upon drawing a distinction between the dependencies implied by a steady state density, and the density dynamics of the system conditioned upon its configuration at a previous time. The interesting question then becomes: What determines the length of time required for a stochastic system to ‘forget’ its initial conditions? We explore this question for an example system, whose steady state density possesses a Markov blanket, through simple numerical analyses. We conclude with a discussion of the relevance for memory in cognitive systems like us.
Article
Full-text available
Biehl et al. (2021) present some interesting observations on an early formulation of the free energy principle. We use these observations to scaffold a discussion of the technical arguments that underwrite the free energy principle. This discussion focuses on solenoidal coupling between various (subsets of) states in sparsely coupled systems that possess a Markov blanket—and the distinction between exact and approximate Bayesian inference, implied by the ensuing Bayesian mechanics.
Article
Full-text available
Active inference is an increasingly prominent paradigm in theoretical biology. It frames the dynamics of living systems as if they were solving an inference problem. This rests upon their flow towards some (non-equilibrium) steady state—or equivalently, their maximisation of the Bayesian model evidence for an implicit probabilistic model. For many models, these self-evidencing dynamics manifest as messages passed among elements of a system. Such messages resemble synaptic communication at a neuronal network level but could also apply to other network structures. This paper attempts to apply the same formulation to biochemical networks. The chemical computation that occurs in regulation of metabolism relies upon sparse interactions between coupled reactions, where enzymes induce conditional dependencies between reactants. We will see that these reactions may be viewed as the movement of probability mass between alternative categorical states. When framed in this way, the master equations describing such systems can be reformulated in terms of their steady-state distribution. This distribution plays the role of a generative model, affording an inferential interpretation of the underlying biochemistry. Finally, we see that—in analogy with computational neurology and psychiatry—metabolic disorders may be characterized as false inference under aberrant prior beliefs.
Article
Full-text available
According to the free energy principle, life is an “inevitable and emergent property of any (ergodic) random dynamical system at non-equilibrium steady state that possesses a Markov blanket” (Friston 2013). Formulating a principle for the life sciences in terms of concepts from statistical physics, such as random dynamical system, non-equilibrium steady state and ergodicity, places substantial constraints on the theoretical and empirical study of biological systems. Thus far, however, the physics foundations of the free energy principle have received hardly any attention. Here, we start to fill this gap and analyse some of the challenges raised by applications of statistical physics for modelling biological targets. Based on our analysis, we conclude that model-building grounded in the free energy principle exacerbates a trade-off between generality and realism, because of a fundamental mismatch between its physics assumptions and the properties of actual biological targets.
Article
Full-text available
The idea that our perceptions in the here and now are influenced by prior events and experiences has recently received substantial support and attention from the proponents of the Predictive Processing (PP) and Active Inference framework in philosophy and computational neuroscience. In this paper we look at how perceptual experiences get off the ground from the outset, in utero. One basic yet overlooked aspect of current PP approaches is that human organisms first develop within another human body. Crucially, while not all humans will have the experience of being pregnant or carrying a baby, the experience of being carried and growing within another person’s body is universal. Specifically, we focus on the development of minimal selfhood in utero as a process of co-embodiment and co-homeostasis, and highlight their close relationship. We conclude with some implications for several critical questions fuelling current debates on the nature of conscious experiences, minimal self and social cognition.
Article
Full-text available
From birth to 15 months infants and caregivers form a fundamentally intersubjective, dyadic unit within which the infant’s ability to recognize gender/sex in the world develops. Between about 18 and 36 months the infant accumulates an increasingly clear and subjective sense of self as female or male. We know little about how the precursors to gender/sex identity form during the intersubjective period, nor how they transform into an independent sense of self by 3 years of age. In this Theory and Hypothesis article I offer a general framework for thinking about this problem. I propose that through repetition and patterning, the dyadic interactions in which infants and caregivers engage imbue the infant with an embodied, i.e., sensori-motor understanding of gender/sex. During this developmental period (which I label Phase 1) gender/sex is primarily an intersubjective project. From 15 to 18 months (which I label Phase 2) there are few reports of newly appearing gender/sex behavioral differences, and I hypothesize that this absence reflects a period of developmental instability during which there is a transition from gender/sex as primarily inter-subjective to gender/sex as primarily subjective. Beginning at 18 months (i.e., the start of Phase 3), a toddler’s subjective sense of self as having a gender/sex emerges, and it solidifies by 3 years of age. I propose a dynamic systems perspective to track how infants first assimilate gender/sex information during the intersubjective period (birth to 15 months); then explore what changes might occur during a hypothesized phase transition (15 to 18 months); and finally review the emergence and initial stabilization of individual subjectivity, the period from 18 to 36 months. The critical questions explored focus on how to model and translate data from very different experimental disciplines, especially neuroscience, physiology, developmental psychology and cognitive development.
I close by proposing the formation of a research consortium on gender/sex development during the first 3 years after birth.
Article
Full-text available
The free energy principle (FEP) has been presented as a unified brain theory, as a general principle for the self-organization of biological systems, and most recently as a principle for a theory of every thing. Additionally, active inference has been proposed as the process theory entailed by FEP that is able to model the full range of biological and cognitive events. In this paper, we challenge these two claims. We argue that FEP is not the general principle it is claimed to be, and that active inference is not the all-encompassing process theory it is purported to be either. The core aspects of our argumentation are that (i) FEP is just a way to generalize Bayesian inference to all domains by the use of a Markov blanket formalism, a generalization we call the Markov blanket trick; and that (ii) active inference presupposes successful perception and action instead of explaining them.
Article
Full-text available
We summarize the original formulation of the free energy principle and highlight some technical issues. We discuss how these issues affect related results involving generalised coordinates and, where appropriate, mention consequences for, and reveal previously unacknowledged differences from, newer formulations of the free energy principle. In particular, we reveal that various definitions of the "Markov blanket" proposed in different works are not equivalent. We show that crucial steps in the free energy argument, which involve rewriting the equations of motion of systems with Markov blankets, are not generally correct without additional (previously unstated) assumptions. We prove by counterexamples that the original free energy lemma, when taken at face value, is wrong. We show further that this free energy lemma, when it does hold, implies the equality of variational density and ergodic conditional density. The interpretation in terms of Bayesian inference hinges on this point, and we hence conclude that it is not sufficiently justified. Additionally, we highlight that the variational densities presented in newer formulations of the free energy principle and lemma are parametrised by different variables than in older works, leading to a substantially different interpretation of the theory. Note that we only highlight some specific problems in the discussed publications. These problems do not rule out conclusively that the general ideas behind the free energy principle are worth pursuing.
Article
Full-text available
A weak version of life-mind continuity thesis entails that every living system also has a basic mind (with a non-representational form of intentionality). The strong version entails that the same concepts that are sufficient to explain basic minds (with non-representational states) are also central to understanding non-basic minds (with representational states). We argue that recent work on the free energy principle supports the following claims with respect to the life-mind continuity thesis: (i) there is a strong continuity between life and mind; (ii) all living systems can be described as if they had representational states; (iii) the ’as-if representationality’ entailed by the free energy principle is central to understanding both basic forms of intentionality and intentionality in non-basic minds. In addition to this, we argue that the free energy principle also renders realism about computation and representation compatible with a strong life-mind continuity thesis (although the free energy principle does not entail computational and representational realism). In particular, we show how representationality proper can be grounded in ’as-if representationality’.
Article
Full-text available
The Free Energy Principle (FEP) is currently one of the most promising frameworks with which to address a unified explanation of life-related phenomena. With powerful formalism that embeds a small set of assumptions, it purports to deal with complex adaptive dynamics ranging from mere unicellular organisms to complex cultural manifestations. The FEP has received increased attention in disciplines that study life, including some critique regarding its overall explanatory power and its true potential as a grand unifying theory (GUT). Recently, FEP theorists presented a contribution with the main tenets of their framework, together with possible philosophical interpretations, which lean towards so-called Markovian Monism (MM). The present paper assumes some of the abovementioned critiques, rejects the arguments advanced to invalidate the FEP’s potential to be a GUT, and overcomes criticism thereof by reviewing FEP theorists’ newly minted metaphysical commitment, namely MM. Specifically, it shows that this philosophical interpretation of the FEP argues circularly and only delivers what it initially assumes, i.e., a dual information geometry that allegedly explains epistemic access to the world based on prior dual assumptions. The origin of this circularity can be traced back to a physical description contingent on relative system-environment separation. However, the FEP itself is not committed to MM, and as a scientific theory it delivers more than what it assumes, serving as a heuristic unification principle that provides epistemic advancement for the life sciences.
Article
Full-text available
Recent characterisations of self-organising systems depend upon the presence of a ‘Markov blanket’: a statistical boundary that mediates the interactions between the inside and outside of a system. We leverage this idea to provide an analysis of partitions in neuronal systems. This is applicable to brain architectures at multiple scales, enabling partitions into single neurons, brain regions, and brain-wide networks. This treatment is based upon the canonical micro-circuitry used in empirical studies of effective connectivity, so as to speak directly to practical applications. The notion of effective connectivity depends upon the dynamic coupling between functional units, whose form recapitulates that of a Markov blanket at each level of analysis. The nuance afforded by partitioning neural systems in this way highlights certain limitations of ‘modular’ perspectives of brain function that only consider a single level of description.
Article
Full-text available
The Free Energy Principle underlies a unifying framework that integrates theories of the origins of life, cognition, and action. Recently, the FEP has been developed into a Markovian monist perspective (Friston et al. in BC 102: 227–260, 2020). This paper expresses scepticism about the validity of arguments for Markovian monism. The critique is based on the assumption that Markovian models are scientific models, and that while we may defend ontological theories about the nature of scientific models, we cannot read off metaphysical theses about the nature of target systems (self-organising conscious systems, in the present context) from our theories of the nature of scientific models (Markov blankets). The paper draws attention to different ways of understanding Markovian models: as material entities, fictional entities, and mathematical structures. I argue that none of these interpretations contributes to the defence of a metaphysical stance (either in terms of neutral monism or reductive physicalism). This is because scientific representation is a sophisticated process, and properties of Markovian models—such as the property of being neither physical nor mental—cannot easily be projected onto their targets to determine the targets’ ontological properties.
Article
Full-text available
Active inference is a physics of life process theory of perception, action and learning that is applicable to natural and artificial agents. In this paper, active inference theory is related to different types of practice in social organization. Here, the term social organization is used to clarify that this paper does not encompass organization in biological systems. Rather, the paper addresses active inference in social organization that utilizes industrial engineering, quality management, and artificial intelligence alongside human intelligence. Social organization referred to in this paper can be in private companies, public institutions, other for-profit or not-for-profit organizations, and any combination of them. The relevance of active inference theory is explained in terms of variational free energy, prediction errors, generative models, and Markov blankets. Active inference theory is most relevant to the social organization of work that is highly repetitive. By contrast, there are more challenges involved in applying active inference theory for social organization of less repetitive endeavors such as one-of-a-kind projects. These challenges need to be addressed in order for active inference to provide a unifying framework for different types of social organization employing human and artificial intelligence.
Article
Full-text available
Habitual actions unfold without conscious deliberation or reflection, and yet often seem to be intelligently adjusted to situational intricacies. A question arises, then, as to how it is that habitual actions can exhibit this form of intelligence, while falling outside the domain of paradigmatically intentional actions. Call this the intelligence puzzle of habits. This puzzle invites three standard replies. Some stipulate that habits lack intelligence and contend that the puzzle is ill-posed. Others hold that habitual actions can exhibit intelligence because they are guided by automatic yet rational, propositional processes. Others still suggest that habits guide intelligent behaviour without involving propositional states by shaping perception in action-soliciting ways. We develop an alternative fourth answer based on John Dewey’s pragmatist account of habit. We argue that habits promote intelligent behaviour by shaping perception, by forming an interrelated network among themselves, and by cooperating with the environment.
Article
Full-text available
Climate change, biodiversity loss, and other major social and environmental problems pose severe risks. Progress has been inadequate and scientists, global policy experts, and the general public increasingly conclude that transformational change is needed across all sectors of society in order to improve and maintain social and ecological wellbeing. At least two paths to transformation are conceivable: (1) reform of and innovation within existing societal systems (e.g., economic, legal, and governance systems); and (2) the de novo development of and migration to new and improved societal systems. This paper is the final in a three-part series of concept papers that together outline a novel science-driven research and development program aimed at the second path. It summarizes literature to build a narrative on the topic of de novo design of societal systems. The purpose is to raise issues, suggest design possibilities, and highlight directions and questions that could be explored in the context of this or any R&D program aimed at new system design. This paper does not present original research, but rather provides a synthesis of selected ideas from the literature. Following other papers in the series, a society is viewed as a superorganism and its societal systems as a cognitive architecture. Accordingly, a central goal of design is to improve the collective cognitive capacity of a society, rendering it more capable of achieving and sustainably maintaining vitality. Topics of attention, communication, self-identity, power, and influence are discussed in relation to societal cognition and system design. A prototypical societal system is described, and some design considerations are highlighted.
Article
Full-text available
The free energy principle (FEP) purports to provide a unifying principle for the biological and cognitive sciences. It states that for a system to maintain non-equilibrium steady-state with its environment it must minimise its (information-theoretic) free energy. Under the FEP, to minimise free energy is equivalent to engaging in approximate Bayesian inference. According to the FEP, therefore, inference is at the explanatory base of biology and cognition. In this paper, we discuss a specific challenge to this inferential formulation of adaptive self-organisation. We call it the universal ethology challenge: it states that the FEP cannot unify biology and cognition, for life itself (or adaptive self-organisation) does not require inferential routines to select adaptive solutions to environmental pressures (as mandated by the FEP). We show that it is possible to overcome the universal ethology challenge by providing a cautious and exploratory treatment of inference under the FEP. We conclude that there are good reasons for thinking that the FEP can unify biology and cognition under the notion of approximate Bayesian inference, even if further challenges must be addressed to properly draw such a conclusion.
Article
Full-text available
The free energy principle (FEP) purports to provide a single principle for the organizational dynamics of living systems, including their cognitive profiles. It states that for a system to maintain non-equilibrium steady-state with its environment it must minimise its free energy. It is said to be entirely scale-free, applying to anything from particles to organisms, and interactive machines, spanning from the abiotic to the biotic. Because the FEP is so general in its application, it is for this reason that one might wonder in what sense this framework captures anything specific to biological characteristics, if details at all. We take steps to correct for this here. We do so by taking up a distinct challenge that the FEP must overcome if it is to be of interest to those working in the biological sciences. We call this the pebble challenge: it states that the FEP cannot capture the organisational principles specific to biology, for its formalisms apply equally well to pebbles. We progress in solving the pebble challenge by articulating how the notion of ‘autonomy as precarious operational closure’ from the enactive literature can be unpacked within the FEP. This enables the FEP to delineate between the abiotic and the biotic; avoiding the pebble challenge that keeps it out of touch with the living systems we encounter in the world and is of interest to the sciences of life and mind.
Article
Full-text available
Active inference is a first principle account of how autonomous agents operate in dynamic, nonstationary environments. This problem is also considered in reinforcement learning, but limited work exists on comparing the two approaches on the same discrete-state environments. In this letter, we provide (1) an accessible overview of the discrete-state formulation of active inference, highlighting natural behaviors in active inference that are generally engineered in reinforcement learning, and (2) an explicit discrete-state comparison between active inference and reinforcement learning on an OpenAI gym baseline. We begin by providing a condensed overview of the active inference literature, in particular viewing the various natural behaviors of active inference agents through the lens of reinforcement learning. We show that by operating in a pure belief-based setting, active inference agents can carry out epistemic exploration—and account for uncertainty about their environment—in a Bayes-optimal fashion. Furthermore, we show that the reliance on an explicit reward signal in reinforcement learning is removed in active inference, where reward can simply be treated as another observation we have a preference over; even in the total absence of rewards, agent behaviors are learned through preference learning. We make these properties explicit by showing two scenarios in which active inference agents can infer behaviors in reward-free environments compared to both Q-learning and Bayesian model-based reinforcement learning agents and by placing zero prior preferences over rewards and learning the prior preferences over the observations corresponding to reward. We conclude by noting that this formalism can be applied to more complex settings (e.g., robotic arm movement, Atari games) if appropriate generative models can be formulated. 
In short, we aim to demystify the behavior of active inference agents by presenting an accessible discrete state-space and time formulation, and demonstrate these behaviors in an OpenAI gym environment, alongside reinforcement learning agents.
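As a point of reference for the Q-learning agents this abstract compares against, here is a minimal, illustrative tabular Q-learning sketch. The 5-state corridor environment is a hypothetical stand-in invented for this illustration, not the OpenAI gym baseline used in the paper:

```python
import numpy as np

# Tabular Q-learning on a toy 5-state corridor: reward 1.0 for reaching
# the rightmost state. Actions: 0 = left, 1 = right.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
alpha, gamma_d = 0.5, 0.9          # learning rate and discount factor
Q = np.zeros((n_states, n_actions))

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r, s2 == n_states - 1

for _ in range(300):                # episodes under a random behaviour policy
    s, done = 0, False
    while not done:
        a = int(rng.integers(n_actions))            # purely exploratory
        s2, r, done = step(s, a)
        # Off-policy TD update toward r + gamma * max_a' Q(s', a')
        Q[s, a] += alpha * (r + gamma_d * np.max(Q[s2]) - Q[s, a])
        s = s2

print(np.argmax(Q[:4], axis=1))    # greedy policy: 'right' in every state
```

Note the contrast the abstract draws: this agent needs the explicit scalar reward signal in `step`, whereas an active inference agent would encode the goal as a prior preference over observations.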
Article
Full-text available
The aim of this paper is to leverage the free-energy principle and its corollary process theory, active inference, to develop a generic, generalizable model of the representational capacities of living creatures; that is, a theory of phenotypic representation. Given their ubiquity, we are concerned with distributed forms of representation (e.g., population codes), whereby patterns of ensemble activity in living tissue come to represent the causes of sensory input or data. The active inference framework rests on the Markov blanket formalism, which allows us to partition systems of interest, such as biological systems, into internal states, external states, and the blanket (active and sensory) states that render internal and external states conditionally independent of each other. In this framework, the representational capacity of living creatures emerges as a consequence of their Markovian structure and nonequilibrium dynamics, which together entail a dual-aspect information geometry. This entails a modest representational capacity: internal states have an intrinsic information geometry that describes their trajectory over time in state space, as well as an extrinsic information geometry that allows internal states to encode (the parameters of) probabilistic beliefs about (fictive) external states. Building on this, we describe here how, in an automatic and emergent manner, information about stimuli can come to be encoded by groups of neurons bound by a Markov blanket; what is known as the neuronal packet hypothesis. As a concrete demonstration of this type of emergent representation, we present numerical simulations showing that self-organizing ensembles of active inference agents sharing the right kind of probabilistic generative model are able to encode recoverable information about a stimulus array.
Article
Full-text available
Active inference is a normative principle underwriting perception, action, planning, decision-making and learning in biological or artificial agents. From its inception, its associated process theory has grown to incorporate complex generative models, enabling simulation of a wide range of complex behaviours. Due to successive developments in active inference, it is often difficult to see how its underlying principle relates to process theories and practical implementation. In this paper, we try to bridge this gap by providing a complete mathematical synthesis of active inference on discrete state-space models. This technical summary provides an overview of the theory, derives neuronal dynamics from first principles and relates this dynamics to biological processes. Furthermore, this paper provides a fundamental building block needed to understand active inference for mixed generative models; allowing continuous sensations to inform discrete representations. This paper may be used as follows: to guide research towards outstanding challenges, a practical guide on how to implement active inference to simulate experimental behaviour, or a pointer towards various in-silico neurophysiological responses that may be used to make empirical predictions.
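In the simplest case, the discrete state-space belief updates described in this abstract reduce to Bayes' rule computed in log space. The following minimal sketch (the two-state likelihood matrix A and prior D are invented for this illustration) also checks the defining property of variational free energy, F ≥ −ln P(o), with equality at the exact posterior:

```python
import numpy as np

def softmax(v):
    v = v - v.max()                 # numerically stable normalisation
    e = np.exp(v)
    return e / e.sum()

# Hypothetical two-state generative model:
# A[o, s] = P(o | s), D[s] = prior over hidden states.
A = np.array([[0.9, 0.2],           # P(o=0 | s)
              [0.1, 0.8]])          # P(o=1 | s)
D = np.array([0.5, 0.5])

o = 1                               # observed outcome
q = softmax(np.log(A[o]) + np.log(D))   # posterior belief over hidden states

def free_energy(q, o):
    # F = E_q[ln q - ln P(o, s)] >= -ln P(o), with equality at the exact posterior
    return np.sum(q * (np.log(q) - np.log(A[o] * D)))

print(np.round(q, 3))                                     # [0.111 0.889]
print(np.isclose(free_energy(q, o), -np.log(A[o] @ D)))   # True
```

The full process theory adds transition dynamics, policies, and expected free energy on top of this elementary update, but the softmax-of-log-terms form recurs throughout the derivations the paper synthesises.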
Chapter
Full-text available
This book evaluates the potential of the pragmatist notion of habit possesses to influence current debates at the crossroads between philosophy, cognitive sciences, neurosciences, and social theory. It deals with the different aspects of the pragmatic turn involved in 4E cognitive science and traces back the roots of such a pragmatic turn to both classical and contemporary pragmatism. Written by renowned philosophers, cognitive scientists, neuroscientists, and social theorists, this volume fills the need for an interdisciplinary account of the role of 'habit'. Researchers interested in the philosophy of mind, cognitive science, neuroscience, psychology, social theory, and social ontology will need this book to fully understand the pragmatist turn in current research on mind, action and society.
Article
Full-text available
We formalize the Gaia hypothesis about the Earth climate system using advances in theoretical biology based on the minimization of variational free energy. This amounts to the claim that the non-equilibrium steady-state dynamics that underwrite our climate depend on the Earth system possessing a Markov blanket. Our formalization rests on how the metabolic rates of the biosphere (understood as the Markov blanket's internal states) change with respect to solar radiation at the Earth's surface (i.e. external states), through changes in greenhouse and albedo effects (i.e. active states) and ocean-driven global temperature changes (i.e. sensory states). Describing the interaction between the metabolic rates and solar radiation as climatic states in a Markov blanket amounts to describing the dynamics of the internal states as actively inferring external states. This underwrites a climatic non-equilibrium steady state through free energy minimization, and thus a form of planetary autopoiesis.
Chapter
Full-text available
Abstract: In this chapter we assess the role that Markov blankets can play in the debate concerning the boundaries of the mind. We distinguish between two different ways in which Markov blankets can be construed: the first is a purely heuristic and instrumental version of Markov blankets derived from the work of Judea Pearl; the second is an ontological version derived from the work of Karl Friston. The literature as it stands does not always acknowledge these distinct versions, often conflates them, and is not sufficiently careful about the costs and consequences of holding either of them. We raise a number of problems and issues that require resolving before the Markov blanket construct can be useful in helping to decide the question of where the boundaries of the mind lie.
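The heuristic, Pearlian construal referred to above has a precise graph-theoretic definition: the Markov blanket of a node in a Bayesian network is the union of its parents, its children, and its children's other parents. A minimal sketch on a toy DAG (the node names and edges are illustrative, not drawn from the chapter):

```python
# Pearl's Markov blanket of a node X in a DAG:
# parents(X) ∪ children(X) ∪ other parents of X's children.
# Toy DAG given as a list of directed edges (illustrative).
edges = [("A", "C"), ("B", "C"), ("C", "D"), ("E", "D")]

def markov_blanket(node, edges):
    parents = {u for u, v in edges if v == node}
    children = {v for u, v in edges if u == node}
    coparents = {u for u, v in edges if v in children and u != node}
    return parents | children | coparents

# Blanket of C: its parents {A, B}, its child {D},
# and D's other parent {E}.
print(sorted(markov_blanket("C", edges)))  # → ['A', 'B', 'D', 'E']
```

Conditioned on this set, the node is independent of every other variable in the network, which is what makes the blanket an epistemic shield in inference; whether the same structure can mark a physical boundary is exactly what the ontological construal adds.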
Article
Full-text available
In this paper, I suggest that some tales (or narratives) developed in the literature of embodied and radical embodied cognitive science can contribute to the solution of two longstanding issues in the cognitive neuroscience of perception and action. The two issues are (i) the fundamental problem of perception, or how to bridge the gap between sensations and the environment, and (ii) the fundamental problem of motor control, or how to better characterize the relationship between brain activity and behavior. In both cases, I propose that cognitive neuroscience could incorporate embodied insights, coming from the sensorimotor approach to perception and action and from ecological psychology, to advance the solution of each issue without the need for abandoning or substantially revising its core assumptions. Namely, cognitive neuroscience could incorporate the forgotten tales of embodiment without undergoing a complete revolution. In this sense, I am proposing not a call but a farewell to arms.
Article
In this paper we show the identification between stochastic optimal control computation and probabilistic inference on a graphical model for a certain class of control problems. We refer to these problems as Kullback-Leibler (KL) control problems. We illustrate how KL control can be used to model a multi-agent cooperative game for which optimal control can be approximated using belief propagation when exact inference is infeasible.
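The identification between control and inference can be made concrete in a small example. In the KL-control setting, the exponentiated cost-to-go (the "desirability") obeys a linear backward recursion over the passive dynamics, which is exactly a message-passing computation. The sketch below is illustrative, assuming a toy 3-state chain with made-up passive dynamics `P`, state costs `q`, and horizon `T` (none of these come from the paper):

```python
import numpy as np

# KL control on a toy 3-state chain (all numbers illustrative).
# Passive dynamics P[x, x'] and state costs q(x); the desirability
# z(x) = exp(-cost-to-go) obeys a *linear* backward recursion,
# i.e. an inference-style message pass.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
q = np.array([1.0, 1.0, 0.0])   # state 2 is cheap (the "goal")
T = 10                          # planning horizon

z = np.ones(3)                  # terminal desirability
for _ in range(T):
    z = np.exp(-q) * (P @ z)    # linear Bellman backup

# Optimal controlled transitions: u*(x'|x) ∝ P[x, x'] * z(x')
u = P * z                       # broadcast z over columns
u /= u.sum(axis=1, keepdims=True)
print(u.round(3))               # rows tilt probability toward state 2
```

The controlled dynamics `u` reweight the passive dynamics toward desirable states, which is the sense in which "solving the control problem" and "doing inference in the graphical model" coincide for this class of problems.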
Article
Causal reasoning is a crucial part of science and human intelligence. In order to discover causal relationships from data, we need structure discovery methods. We provide a review of background theory and a survey of methods for structure discovery. We primarily focus on modern, continuous optimization methods, and provide reference to further resources such as benchmark datasets and software packages. Finally, we discuss the assumptive leap required to take us from structure to causality.
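Among the modern continuous-optimization methods surveyed, a common move is to recast the combinatorial acyclicity requirement as a smooth constraint on a weighted adjacency matrix. As one well-known instance (the NOTEARS formulation, named here as an example rather than taken from this article), acyclicity holds exactly when h(W) = tr(exp(W ∘ W)) − d = 0. A minimal sketch with toy matrices:

```python
import numpy as np
from scipy.linalg import expm

# NOTEARS-style smooth acyclicity score for a weighted adjacency matrix W:
# h(W) = tr(exp(W ∘ W)) - d, which equals 0 iff the graph of W is acyclic.
def acyclicity(W):
    d = W.shape[0]
    return np.trace(expm(W * W)) - d   # W * W is the elementwise square

dag = np.array([[0.0, 1.0],            # edge 0 -> 1 only: acyclic
                [0.0, 0.0]])
cycle = np.array([[0.0, 1.0],          # 0 -> 1 -> 0: a directed cycle
                  [1.0, 0.0]])

print(acyclicity(dag))                 # ≈ 0 (acyclic)
print(acyclicity(cycle))               # > 0 (penalizes the cycle)
```

Because h is differentiable in W, the structure search can be posed as a continuous program, h(W) = 0 enforced via penalties, rather than a search over the super-exponential space of DAGs; the remaining "assumptive leap" from the recovered structure to causality is what the article's closing discussion addresses.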
Article
This article advertises a new account of computational implementation. According to the resemblance account, implementation is a matter of resembling a computational architecture. The resemblance account departs from previous theories by denying that computational architectures are exhausted by their formal, mathematical features. Instead, they are taken to be permeated with causality, spatiotemporality, and other nonmathematical features. I argue that this approach comports well with computer scientific practice and offers a novel response to so-called triviality arguments.