Gabriel René’s scientific contributions


Publications (4)


Designing ecosystems of intelligence from first principles
  • Article

January 2024 · 135 Reads · 39 Citations

Collective Intelligence

Karl J Friston · [...] · Gabriel René

This white paper lays out a vision of research and development in the field of artificial intelligence for the next decade (and beyond). Its denouement is a cyber-physical ecosystem of natural and synthetic sense-making, in which humans are integral participants—what we call “shared intelligence.” This vision is premised on active inference, a formulation of adaptive behavior that can be read as a physics of intelligence, and which inherits from the physics of self-organization. In this context, we understand intelligence as the capacity to accumulate evidence for a generative model of one’s sensed world—also known as self-evidencing. Formally, this corresponds to maximizing (Bayesian) model evidence, via belief updating over several scales, that is, inference, learning, and model selection. Operationally, this self-evidencing can be realized via (variational) message passing or belief propagation on a factor graph. Crucially, active inference foregrounds an existential imperative of intelligent systems; namely, curiosity or the resolution of uncertainty. This same imperative underwrites belief sharing in ensembles of agents, in which certain aspects (i.e., factors) of each agent’s generative world model provide a common ground or frame of reference. Active inference plays a foundational role in this ecology of belief sharing—leading to a formal account of collective intelligence that rests on shared narratives and goals. We also consider the kinds of communication protocols that must be developed to enable such an ecosystem of intelligences and motivate the development of a shared hyper-spatial modeling language and transaction protocol, as a first—and key—step towards such an ecology.
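The "belief sharing over a common frame of reference" idea can be made concrete with a small numerical sketch. The snippet below is illustrative only and not taken from the paper: two agents hold categorical beliefs about the same shared latent factor, each updates from its own observation, and their posteriors are then pooled into a shared belief (the product-of-experts pooling rule is an assumption, as are all the numbers).

```python
# Minimal sketch (not from the paper): two agents with categorical generative
# models over a shared latent factor update their beliefs from private
# observations and then pool them, illustrating "belief sharing" over a
# common frame of reference. Variable names (A, D) follow standard discrete
# active inference conventions; the pooling rule and values are assumptions.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def update_belief(A, prior, obs_idx):
    """Bayesian belief update for one agent.
    A: likelihood P(o | s); prior: P(s); obs_idx: index of the observed outcome."""
    log_posterior = np.log(A[obs_idx, :] + 1e-16) + np.log(prior + 1e-16)
    return softmax(log_posterior)

# Shared latent factor with 3 states; each agent has its own likelihood (sensor)
D = np.ones(3) / 3                       # common prior over the shared factor
A1 = np.array([[0.8, 0.1, 0.1],
               [0.1, 0.8, 0.1],
               [0.1, 0.1, 0.8]])         # agent 1: reliable sensor
A2 = np.array([[0.50, 0.25, 0.25],
               [0.25, 0.50, 0.25],
               [0.25, 0.25, 0.50]])      # agent 2: noisier sensor

q1 = update_belief(A1, D, obs_idx=0)     # each agent infers from its own data
q2 = update_belief(A2, D, obs_idx=0)

# Belief sharing: pool posteriors over the shared factor (product of experts,
# dividing out the common prior so it is not counted twice)
shared = softmax(np.log(q1 + 1e-16) + np.log(q2 + 1e-16) - np.log(D + 1e-16))
print(q1, q2, shared)
```

Running this shows the pooled belief is sharper than either agent's belief alone, which is the intuition behind collective self-evidencing over a shared factor.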


Designing Explainable Artificial Intelligence with Active Inference: A Framework for Transparent Introspection and Decision-Making

November 2023 · 80 Reads · 14 Citations

Communications in Computer and Information Science

This paper investigates the prospect of developing human-interpretable, explainable artificial intelligence (AI) systems based on active inference and the free energy principle. We first provide a brief overview of active inference, and in particular, of how it applies to the modeling of decision-making, introspection, as well as the generation of overt and covert actions. We then discuss how active inference can be leveraged to design explainable AI systems, namely, by allowing us to model core features of “introspective” processes and by generating useful, human-interpretable models of the processes involved in decision-making. We propose an architecture for explainable AI systems using active inference. This architecture foregrounds the role of an explicit hierarchical generative model, the operation of which enables the AI system to track and explain the factors that contribute to its own decisions, and whose structure is designed to be interpretable and auditable by human users. We outline how this architecture can integrate diverse sources of information to make informed decisions in an auditable manner, mimicking or reproducing aspects of human-like consciousness and introspection. Finally, we discuss the implications of our findings for future research in AI, and the potential ethical considerations of developing AI systems with (the appearance of) introspective capabilities.
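The auditability claim can be illustrated with a toy logging scheme. In the sketch below, each decision step stores quantities that a discrete active inference agent already computes (state posterior, outcome preferences, expected free energy per policy) and renders them as a human-readable explanation. The class and method names are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of the "auditable decision" idea: every decision step
# records the beliefs, preferences, and policy evaluations that produced it,
# so a human can ask the agent why it acted. Names (DecisionRecord, explain)
# are illustrative only.
from dataclasses import dataclass
import numpy as np

@dataclass
class DecisionRecord:
    state_belief: np.ndarray          # posterior over latent states, Q(s)
    preferences: np.ndarray           # prior preferences over outcomes (C vector)
    expected_free_energy: np.ndarray  # G, one value per candidate policy
    chosen_policy: int

    def explain(self, state_names, policy_names):
        s = state_names[int(self.state_belief.argmax())]
        g = ", ".join(f"{n}: {v:.2f}" for n, v in
                      zip(policy_names, self.expected_free_energy))
        return (f"I believe the world is most likely in state '{s}' "
                f"(confidence {self.state_belief.max():.2f}). "
                f"Expected free energy per policy was [{g}], so I chose "
                f"'{policy_names[self.chosen_policy]}' because it best "
                f"balances my preferences against uncertainty.")

record = DecisionRecord(
    state_belief=np.array([0.10, 0.85, 0.05]),
    preferences=np.array([0.0, 2.0]),
    expected_free_energy=np.array([1.3, 0.4]),
    chosen_policy=1,
)
print(record.explain(["dark", "lit", "unknown"], ["stay", "move"]))
```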


Figure 1: A basic generative model for precision-weighted perceptual inference. This figure depicts an elementary generative model that is capable of performing precision-weighted perceptual inference. States are depicted as circles and denoted in lowercase: observable states or outcomes are denoted o and latent states (which need to be inferred) are denoted s. Parameters are depicted as squares and denoted in uppercase. The likelihood mapping A relates outcomes to the states that cause them, whereas D harnesses our prior beliefs about states, independent of how they are sampled. The precision term γ controls the precision or weighting assigned to elements of the likelihood, and implements attention as precision-weighting. Figure from [61].
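Read as an update rule, this caption corresponds to the standard discrete-state scheme in which the state posterior is a softmax of the log prior plus the precision-weighted log likelihood. The toy numbers below are assumptions, chosen only to show how the precision term γ sharpens or flattens inference.

```python
# Minimal sketch of precision-weighted perceptual inference for a single
# discrete outcome, following the standard discrete-state formulation:
#   Q(s) = softmax( ln D + gamma * ln A[o, :] )
# A maps states to outcomes, D is the prior over states, and gamma scales
# (i.e., attends to) the likelihood. All values are illustrative.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

A = np.array([[0.9, 0.2],     # P(o | s): rows are outcomes, columns are states
              [0.1, 0.8]])
D = np.array([0.5, 0.5])      # prior beliefs about states
o = 0                         # index of the observed outcome

for gamma in (0.5, 1.0, 4.0): # low vs. high precision (attention)
    Q = softmax(np.log(D) + gamma * np.log(A[o, :]))
    print(f"gamma={gamma}: Q(s)={np.round(Q, 3)}")
```

Higher γ makes the posterior commit more strongly to the state most consistent with the observation; lower γ keeps it closer to the prior.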
Figure 2: A generative model for policy selection. This figure depicts a more sophisticated generative model that is apt for planning and the selection of actions in the future. The basic model depicted in Figure 1 has now been expanded to include beliefs about the current course of action or policy (denoted π̄), as well as the B, C, E, F, and G parameters. This kind of model generates a time series of states (s₁, s₂, etc.) and outcomes (o₁, o₂, etc.). The state transition parameter B encodes the transition probabilities between states over time, independently of the way they are sampled. B, C, E, F, and G enter into the selection of beliefs about courses of action, a.k.a. policies. The C vector specifies preferred or expected outcomes and enters into the calculation of the variational (F) and expected (G) free energies. The E vector specifies a prior preference for specific courses of action. Figure from [61].
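A minimal numerical sketch of this policy-selection scheme, assuming the usual discrete-time formulation: policies are scored by their expected free energy G (here reduced to risk, the divergence of predicted outcomes from the preferred outcomes C, ignoring ambiguity for brevity) and combined with the habit prior E and variational free energy F via a softmax. All values are illustrative.

```python
# Minimal sketch of policy selection, assuming the standard discrete-time
# scheme this caption refers to: the policy posterior is
#   Q(pi) = softmax( ln E - F - G )
# with G reduced here to "risk" (KL divergence of predicted outcomes from
# the preferred outcomes C). Numbers are toy values.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

A = np.array([[0.9, 0.1],           # likelihood P(o | s)
              [0.1, 0.9]])
B = {                               # transitions P(s' | s, action)
    "stay": np.array([[1.0, 0.0], [0.0, 1.0]]),
    "move": np.array([[0.0, 1.0], [1.0, 0.0]]),
}
C = softmax(np.array([0.0, 3.0]))   # preferred outcomes (prefers outcome 1)
E = np.array([0.5, 0.5])            # habit prior over the two policies
F = np.array([0.0, 0.0])            # variational free energy per policy (flat here)
q_s = np.array([0.9, 0.1])          # current posterior over states

G = np.zeros(2)
for i, policy in enumerate(["stay", "move"]):
    q_s_next = B[policy] @ q_s                  # predicted next state
    q_o_next = A @ q_s_next                     # predicted outcomes
    G[i] = np.sum(q_o_next * (np.log(q_o_next + 1e-16) - np.log(C)))  # risk

q_pi = softmax(np.log(E) - F - G)
print("Expected free energy per policy:", np.round(G, 3))
print("Policy posterior Q(pi):         ", np.round(q_pi, 3))
```

With these toy values the agent believes it is in the non-preferred state, so "move" has lower expected free energy and dominates the policy posterior.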
Designing explainable artificial intelligence with active inference: A framework for transparent introspection and decision-making
  • Preprint
  • File available

June 2023 · 603 Reads · 1 Citation

This paper investigates the prospect of developing human-interpretable, explainable artificial intelligence (AI) systems based on active inference and the free energy principle. We first provide a brief overview of active inference, and in particular, of how it applies to the modeling of decision-making, introspection, as well as the generation of overt and covert actions. We then discuss how active inference can be leveraged to design explainable AI systems, namely, by allowing us to model core features of "introspective" processes and by generating useful, human-interpretable models of the processes involved in decision-making. We propose an architecture for explainable AI systems using active inference. This architecture foregrounds the role of an explicit hierarchical generative model, the operation of which enables the AI system to track and explain the factors that contribute to its own decisions, and whose structure is designed to be interpretable and auditable by human users. We outline how this architecture can integrate diverse sources of information to make informed decisions in an auditable manner, mimicking or reproducing aspects of human-like consciousness and introspection. Finally, we discuss the implications of our findings for future research in AI, and the potential ethical considerations of developing AI systems with (the appearance of) introspective capabilities.


Figure 1: Belief updating on a statistical manifold.
Designing Ecosystems of Intelligence from First Principles

December 2022 · 924 Reads · 7 Citations

This white paper lays out a vision of research and development in the field of artificial intelligence for the next decade (and beyond). Its denouement is a cyber-physical ecosystem of natural and synthetic sense-making, in which humans are integral participants—what we call "shared intelligence". This vision is premised on active inference, a formulation of adaptive behavior that can be read as a physics of intelligence, and which inherits from the physics of self-organization. In this context, we understand intelligence as the capacity to accumulate evidence for a generative model of one's sensed world—also known as self-evidencing. Formally, this corresponds to maximizing (Bayesian) model evidence, via belief updating over several scales: i.e., inference, learning, and model selection. Operationally, this self-evidencing can be realized via (variational) message passing or belief propagation on a factor graph. Crucially, active inference foregrounds an existential imperative of intelligent systems; namely, curiosity or the resolution of uncertainty. This same imperative underwrites belief sharing in ensembles of agents, in which certain aspects (i.e., factors) of each agent's generative world model provide a common ground or frame of reference. Active inference plays a foundational role in this ecology of belief sharing—leading to a formal account of collective intelligence that rests on shared narratives and goals. We also consider the kinds of communication protocols that must be developed to enable such an ecosystem of intelligences and motivate the development of a shared hyper-spatial modeling language and transaction protocol, as a first—and key—step towards such an ecology.

Citations (4)


... This describes the exploration-exploitation tradeoff inherent to agent-based systems. For more details about the FEP, the interested reader can check the following references [Friston et al. (2016), Kirchhoff et al. (2018), Friston et al. (2009), Friston et al. (2024b)]. ...

Reference:

Distributed Intelligence in the Computing Continuum with Active Inference
Designing ecosystems of intelligence from first principles
  • Citing Article
  • January 2024

Collective Intelligence

... A viable alternative is to learn a model of the environment [93], e.g., with Bayesian non-parametrics [18]; however, these approaches are still computationally demanding. Albarracin et al. described how active inference may find an answer to the black box problem [94], and we further showed how different elements of an active inference agent have practical and interpretable meanings. In this view, optimization of parameters in hybrid models could be an effective alternative to deep RL algorithms, or other approaches in active inference relying on the use of neural networks as generative models. ...

Designing Explainable Artificial Intelligence with Active Inference: A Framework for Transparent Introspection and Decision-Making
  • Citing Chapter
  • November 2023

Communications in Computer and Information Science

... Obviously, it is not possible to say that disembodied entities such as LLMs could never achieve understanding or agency in some sense. The production of systems able to describe the reasons for their decisions (i.e., explainability) in natural language, as a pragmatic premise to overt discourse, is a major avenue for AIs' development (Parr & Pezzulo, 2021;Albarracin et al., 2023;Chalmers, 2023). However, even if LLMs could be considered bona fide cognitive agents, this (as we noted earlier) does very little to undermine the position of 4E and AIF accounts of cognition. ...

Designing explainable artificial intelligence with active inference: A framework for transparent introspection and decision-making

... Whereas, in the current model, the parameter values were set by hand, the model naturally lends itself to learning the parameters from data (see Wei et al., 2023b for our earlier work in this direction). The design space for incorporating techniques from contemporary generative AI in active inference models is large and the exploration of these possibilities has only begun (see e.g., Fountas et al., 2020; Tschantz et al., 2020; Lanillos et al., 2021; Friston et al., 2022; Mazzaglia et al., 2022). However, regardless of the implementation, behavioral models developed based on the active inference principles outlined above are fundamentally explainable and interpretable, which is one of their key potential advantages compared to existing black-box approaches for agent modeling (Albarracin et al., 2023). ...

Designing Ecosystems of Intelligence from First Principles