Christopher L. Buckley's research while affiliated with University of Sussex and other places
What is this page?
This page lists the scientific contributions of an author who either does not have a ResearchGate profile or has not yet added these contributions to their profile.
It was automatically created by ResearchGate to provide a record of this author's body of work. We create such pages to advance our goal of creating and maintaining the most comprehensive scientific repository possible. In doing so, we process publicly available (personal) data relating to the author as a member of the scientific community.
If you're a ResearchGate member, you can follow this page to keep up with this author's work.
If you are this author, and you don't want us to display this page anymore, please let us know.
Publications (116)
Capsule networks are a neural network architecture specialized for visual scene recognition. Features and pose information are extracted from a scene and then dynamically routed through a hierarchy of vector-valued nodes called ‘capsules’ to create an implicit scene graph, with the ultimate aim of learning vision directly as inverse graphics. Despi...
An open question in the study of emergent behaviour in multi-agent Bayesian systems is the relationship, if any, between individual and collective inference. In this paper we explore the correspondence between generative models that exist at two distinct scales, using spin glass models as a sandbox system to investigate this question. We show that...
We present a message passing interpretation of planning under Active Inference. Specifically, we show how the Active Inference planning procedure can be broken into a (partial) message passing sweep over a graph, followed by local computations of a cost functional (the Expected Free Energy). Using Forney-style Factor Graphs, we then proceed to show...
Predictive Coding Networks (PCNs) aim to learn a generative model of the world. Given observations, this generative model can then be inverted to infer the causes of those observations. However, when training PCNs, a noticeable pathology is often observed where inference accuracy peaks and then declines with further training. This cannot be explain...
Recent work has uncovered close links between classical reinforcement learning (RL) algorithms, Bayesian filtering, and Active Inference which let us understand value functions in terms of Bayesian posteriors. An alternative, but less explored, model-free RL algorithm is the successor representation, which expresses the value function in terms of...
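For orientation, here is a minimal numpy sketch of the successor representation that this abstract alludes to: under a fixed policy, the value function factorises into expected discounted future state occupancies and per-state rewards. The transition matrix, rewards and discount factor below are illustrative assumptions, not values from the paper.

import numpy as np

# Illustrative fixed-policy transition matrix, reward vector and discount
# factor (hypothetical values chosen only for this sketch).
P = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.9, 0.1],
              [0.1, 0.0, 0.9]])
r = np.array([0.0, 0.0, 1.0])
gamma = 0.95

# Successor representation: expected discounted future occupancy of each
# state, given the current state, under the fixed policy.
M = np.linalg.inv(np.eye(3) - gamma * P)

# The value function is the successor representation applied to the rewards.
V = M @ r
print(V)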
Bayesian theories of biological and brain function speculate that Markov blankets (a conditional independence separating a system from external states) play a key role for facilitating inference-like behaviour in living systems. Although it has been suggested that Markov blankets are commonplace in sparsely connected, nonequilibrium complex systems...
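As a point of reference for the conditional independence mentioned above (my gloss, not a quotation from the paper), a Markov blanket b renders internal states \mu and external states \eta conditionally independent:

p(\mu, \eta \mid b) \;=\; p(\mu \mid b)\, p(\eta \mid b)

so that, once the blanket (typically sensory and active) states are known, internal and external states carry no further information about one another.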
The optomotor response (OMR) is central to the locomotory behavior in diverse animal species including insects, fish and mammals. Furthermore, the study of the OMR in larval zebrafish has become a key model system for investigating the neural basis of sensorimotor control. However, a comprehensive understanding of the underlying control algorithms...
Language models (LMs) are pretrained to imitate internet text, including content that would violate human preferences if generated by an LM: falsehoods, offensive comments, personally identifiable information, low-quality or buggy code, and more. Here, we explore alternative objectives for pretraining LMs in a way that also guides them to generate...
This white paper lays out a vision of research and development in the field of artificial intelligence for the next decade (and beyond). Its denouement is a cyber-physical ecosystem of natural and synthetic sense-making, in which humans are integral participants – what we call 'shared intelligence'. This vision is premised on active...
Markov blankets – statistical independences between system and environment – have become popular to describe the boundaries of living systems under Bayesian views of cognition. The intuition behind Markov blankets originates from considering acyclic, atemporal networks. In contrast, living systems display recurrent, nonequilibrium interactions that...
Capsule networks are a neural network architecture specialized for visual scene recognition. Features and pose information are extracted from a scene and then dynamically routed through a hierarchy of vector-valued nodes called 'capsules' to create an implicit scene graph, with the ultimate aim of learning vision directly as inverse graphics. Despi...
Predictive Coding Networks (PCNs) aim to learn a generative model of the world. Given observations, this generative model can then be inverted to infer the causes of those observations. However, when training PCNs, a noticeable pathology is often observed where inference accuracy peaks and then declines with further training. This cannot be explain...
Bayesian theories of biological and brain function speculate that Markov blankets (a conditional independence separating a system from external states) play a key role for facilitating inference-like behaviour in living systems. Although it has been suggested that Markov blankets are commonplace in sparsely connected, nonequilibrium complex systems...
Recent work has uncovered close links between classical reinforcement learning algorithms, Bayesian filtering, and Active Inference which let us understand value functions in terms of Bayesian posteriors. An alternative, but less explored, model-free RL algorithm is the successor representation, which expresses the value function in terms...
An open question in the study of emergent behaviour in multi-agent Bayesian systems is the relationship, if any, between individual and collective inference. In this paper we explore the correspondence between generative models that exist at two distinct scales, using spin glass models as a sandbox system to investigate this question. We show that...
Reinforcement learning (RL) is frequently employed in fine-tuning large language models (LMs), such as GPT-3, to penalize them for undesirable features of generated sequences, such as offensiveness, social bias, harmfulness or falsehood. The RL formulation involves treating the LM as a policy and updating it to maximise the expected value of a rewa...
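The RL formulation sketched above is typically implemented with a KL-regularised objective; as a hedged reminder of that standard form (notation mine, not taken from the paper):

J(\theta) \;=\; \mathbb{E}_{x \sim \pi_\theta}\!\left[R(x)\right] \;-\; \beta\, D_{\mathrm{KL}}\!\left[\pi_\theta \,\|\, \pi_0\right]

where \pi_\theta is the fine-tuned LM treated as a policy, \pi_0 is the pretrained LM, R is a reward model penalising the undesirable features listed above, and \beta controls how strongly the policy is kept close to the original distribution.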
Backpropagation of error (backprop) is a powerful algorithm for training machine learning architectures through end-to-end differentiation. Recently it has been shown that backprop in multilayer perceptrons (MLPs) can be approximated using predictive coding, a biologically plausible process theory of cortical computation that relies solely on local...
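To make the idea of purely local updates concrete, here is a minimal numpy sketch of a predictive coding network of the kind described: hidden activities relax on local prediction errors before a local, Hebbian-like weight update. Layer sizes, learning rates and the number of relaxation steps are illustrative assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return np.tanh(x)            # activation function

def df(x):
    return 1.0 - np.tanh(x)**2   # its derivative

# Illustrative network: 4 inputs, two hidden layers of 8 units, 2 outputs.
sizes = [4, 8, 8, 2]
W = [rng.normal(0, 0.1, (sizes[l + 1], sizes[l])) for l in range(len(sizes) - 1)]

def pc_step(x_in, target, lr_x=0.05, lr_w=0.01, n_infer=50):
    L = len(W)
    # Initialise activities with a forward pass, then clamp the output.
    x = [x_in]
    for l in range(L):
        x.append(W[l] @ f(x[l]))
    x[L] = target.copy()

    # Inference phase: relax hidden activities on local prediction errors.
    for _ in range(n_infer):
        eps = [x[l + 1] - W[l] @ f(x[l]) for l in range(L)]
        for l in range(1, L):
            x[l] += lr_x * (-eps[l - 1] + df(x[l]) * (W[l].T @ eps[l]))

    # Learning phase: purely local weight updates from the settled errors.
    eps = [x[l + 1] - W[l] @ f(x[l]) for l in range(L)]
    for l in range(L):
        W[l] += lr_w * np.outer(eps[l], f(x[l]))

# Hypothetical usage on a random input/target pair.
pc_step(rng.normal(size=4), np.array([1.0, -1.0]))

Once the inference phase has (approximately) settled, these local weight updates approximate, up to sign and scaling, the gradients that backprop would compute for the same network.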
Predictive coding is an influential model of cortical neural activity. It proposes that perceptual beliefs are furnished by sequentially minimising "prediction errors" - the differences between predicted and observed data. Implicit in this proposal is the idea that perception requires multiple cycles of neural activity. This is at odds with evidenc...
The optomotor response (OMR) is central to the locomotory behavior in diverse animal species including insects, fish and mammals. Furthermore, the study of the OMR in larval zebrafish has become a key model system for investigating the neural basis of sensorimotor control. However, a comprehensive understanding of the underlying control algorithms...
The free energy principle (FEP) states that any dynamical system can be interpreted as performing Bayesian inference upon its surrounding environment. Although, in theory, the FEP applies to a wide variety of systems, there has been almost no direct exploration or demonstration of the principle in concrete systems. In this work, we examine in depth...
The truly surprising thing about evolution is not how it makes individuals better adapted to their environment, but how it makes individuals. All individuals are made of parts that used to be individuals themselves, e.g., multicellular organisms from unicellular organisms. In such evolutionary transitions in individuality, the organised structure o...
The adaptive regulation of bodily and interoceptive parameters, such as body temperature, thirst and hunger is a central problem for any biological organism. Here, we present a series of simulations using the framework of active inference to formally characterize interoceptive control and some of its dysfunctions. We start from the premise that the...
Markov blankets – statistical independences between system and environment – have become popular to describe the boundaries of living systems under Bayesian views of cognition. The intuition behind Markov blankets originates from considering acyclic, atemporal networks. In contrast, living systems display recurrent interactions that generate pervasive...
Active inference is a mathematical framework which originated in computational neuroscience as a theory of how the brain implements action, perception and learning. Recently, it has been shown to be a promising approach to the problems of state-estimation and control under uncertainty, as well as a foundation for the construction of goal-driven beh...
In cognitive science, behaviour is often separated into two types. Reflexive control is habitual and immediate, whereas reflective is deliberative and time consuming. We examine the argument that Hierarchical Predictive Coding (HPC) can explain both types of behaviour as a continuum operating across a multi-layered network, removing the need for se...
The Free-Energy-Principle (FEP) is an influential and controversial theory which postulates a deep and powerful connection between the stochastic thermodynamics of self-organization and learning through variational inference. Specifically, it claims that any self-organizing system which can be statistically separated from its environment, and which...
Predictive coding offers a potentially unifying account of cortical function -- postulating that the core function of the brain is to minimize prediction errors with respect to a generative model of the world. The theory is closely related to the Bayesian brain framework and, over the last two decades, has gained substantial influence in the fields...
The Free Energy Principle (FEP) states that any dynamical system can be interpreted as performing Bayesian inference upon its surrounding environment. Although the FEP applies in theory to a wide variety of systems, there has been almost no direct exploration of the principle in concrete systems. In this paper, we examine in depth the assumptions r...
The exploration-exploitation trade-off is central to the description of adaptive behaviour in fields ranging from machine learning, to biology, to economics. While many approaches have been taken, one approach to solving this trade-off has been to equip agents with, or propose that they possess, an intrinsic 'exploratory drive', which is often implemented in te...
The Kalman filter is a fundamental filtering algorithm that fuses noisy sensory data, a previous state estimate, and a dynamics model to produce a principled estimate of the current state. It assumes, and is optimal for, linear models and white Gaussian noise. Due to its relative simplicity and general effectiveness, the Kalman filter is widely use...
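A minimal numpy sketch of the predict/update cycle described above; the constant-velocity model and the noise covariances are illustrative assumptions, not taken from the paper.

import numpy as np

def kalman_step(x, P, z, A, C, Q, R):
    # Predict: propagate the previous estimate through the dynamics model.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update: fuse the prediction with the noisy observation z.
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# Hypothetical constant-velocity example (illustrative values only).
A = np.array([[1.0, 1.0], [0.0, 1.0]])       # state: position and velocity
C = np.array([[1.0, 0.0]])                   # only position is observed
Q = 0.01 * np.eye(2)                         # process noise covariance
R = np.array([[0.5]])                        # observation noise covariance
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, np.array([1.2]), A, C, Q, R)
print(x)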
The adaptive regulation of bodily and interoceptive parameters, such as body temperature, thirst and hunger is a central problem for any biological organism. Here, we present a series of simulations using the framework of Active Inference to formally characterize interoceptive control and some of its dysfunctions. We start from the premise that the...
The expected free energy (EFE) is a central quantity in the theory of active inference. It is the quantity that all active inference agents are mandated to minimize through action, and its decomposition into extrinsic and intrinsic value terms is key to the balance of exploration and exploitation that active inference agents evince. Despite its imp...
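For background on the decomposition mentioned above (my notation, in the form most commonly quoted in the active inference literature, not a derivation from this paper), the expected free energy of a policy \pi is typically split as

G(\pi) \;\approx\; -\underbrace{\mathbb{E}_{q(o \mid \pi)}\!\left[\ln \tilde p(o)\right]}_{\text{extrinsic (pragmatic) value}} \;-\; \underbrace{\mathbb{E}_{q(o \mid \pi)}\, D_{\mathrm{KL}}\!\left[q(s \mid o, \pi) \,\|\, q(s \mid \pi)\right]}_{\text{intrinsic (epistemic) value}}

where \tilde p(o) encodes the agent's preferences over outcomes, so that minimising G balances seeking preferred observations against seeking information about hidden states.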
Background: Selective Plane Illumination Microscopy (SPIM) is a fluorescence imaging technique that allows volumetric imaging at high spatio-temporal resolution to monitor neural activity in live organisms such as larval zebrafish. A major challenge in the construction of a custom SPIM microscope using a scanned laser beam is the control and synchro...
In cognitive science, behaviour is often separated into two types. Reflexive control is habitual and immediate, whereas reflective is deliberative and time consuming. We examine the argument that Hierarchical Predictive Coding (HPC) can explain both types of behaviour as a continuum operating across a multi-layered network, removing the need for se...
Active Inference (AIF) is an emerging framework in the brain sciences which suggests that biological agents act to minimise a variational bound on model evidence. Control-as-Inference (CAI) is a framework within reinforcement learning which casts decision making as a variational inference problem. While these frameworks both consider action selecti...
The recently proposed Activation Relaxation (AR) algorithm provides a simple and robust approach for approximating the backpropagation of error algorithm using only local learning rules. Unlike competing schemes, it converges to the exact backpropagation gradients, and utilises only a single type of computational unit and a single backwards relaxat...
Predictive coding is an influential theory of cortical function which posits that the principal computation the brain performs, which underlies both perception and learning, is the minimization of prediction errors. While motivated by high-level notions of variational inference, detailed neurophysiological models of cortical microcircuits which can...
Can the powerful backpropagation of error (backprop) learning algorithm be formulated in a manner suitable for implementation in neural circuitry? The primary challenge is to ensure that any candidate formulation uses only local information, rather than relying on global (error) signals, as in orthodox backprop. Recently several algor...
The field of reinforcement learning can be split into model-based and model-free methods. Here, we unify these approaches by casting model-free policy optimisation as amortised variational inference, and model-based planning as iterative variational inference, within a 'control as hybrid inference' (CHI) framework. We present an implementation of C...
Active Inference (AIF) is an emerging framework in the brain sciences which suggests that biological agents act to minimise a variational bound on model evidence. Control-as-Inference (CAI) is a framework within reinforcement learning which casts decision making as a variational inference problem. While these frameworks both consider action selecti...
Active inference introduces a theory describing action-perception loops via the minimisation of variational (and expected) free energy or, under simplifying assumptions, (weighted) prediction error. Recently, active inference has been proposed as part of a new and unifying framework in the cognitive sciences: predictive processing. Predictive proce...
Selective Plane Illumination Microscopy (SPIM) is a fluorescence imaging technique that allows volumetric imaging at high spatio-temporal resolution to monitor neural activity in live organisms such as larval zebrafish. A major challenge in the construction of a custom SPIM microscope is the control and synchronization of the various hardware compo...
There are several ways to categorise reinforcement learning (RL) algorithms, such as either model-based or model-free, policy-based or planning-based, on-policy or off-policy, and online or offline. Broad classification schemes such as these help provide a unified perspective on disparate techniques and can contextualise and guide the development o...
Backpropagation of error (backprop) is a powerful algorithm for training machine learning architectures through end-to-end differentiation. However, backprop is often criticised for lacking biological plausibility. Recently, it has been shown that backprop in multilayer-perceptrons (MLPs) can be approximated using predictive coding, a biologically-...
Linear Quadratic Gaussian (LQG) control is a framework first introduced in control theory that provides an optimal solution to linear problems of regulation in the presence of uncertainty. This framework combines Kalman-Bucy filters for the estimation of hidden states with Linear Quadratic Regulators for the control of their dynamics. Nowadays, LQG...
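For orientation (the standard textbook statement, not a result from this paper), the LQG problem combines linear-Gaussian dynamics and observations with a quadratic cost, and its solution separates into an optimal estimator and an optimal regulator:

\dot{x} = A x + B u + w, \qquad y = C x + v, \qquad w \sim \mathcal{N}(0, W),\; v \sim \mathcal{N}(0, V)

J = \mathbb{E}\!\left[\int_0^T \left(x^\top Q x + u^\top R u\right) dt\right], \qquad u^* = -K \hat{x}

where \hat{x} is the estimate produced by the Kalman-Bucy filter and K is the Linear Quadratic Regulator gain; by the separation principle, the filter and the regulator can be designed independently.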
This short letter is a response to a recent Forum article in Trends in Cognitive Sciences, by Sun and Firestone, which reprises the so-called 'Dark Room Problem' as a challenge to the explanatory value of predictive processing and free-energy-minimisation frameworks for cognitive science. Among many possible responses to Sun and Firestone, we expla...
Converging theories suggest that organisms learn and exploit probabilistic models of their environment. However, it remains unclear how such models can be learned in practice. The open-ended complexity of natural environments means that it is generally infeasible for organisms to model their environment comprehensively. Alternatively, action-orient...
The Expected Free Energy (EFE) is a central quantity in the theory of active inference. It is the quantity that all active inference agents are mandated to minimize through action, and its decomposition into extrinsic and intrinsic value terms is key to the balance of exploration and exploitation that active inference agents evince. Despite its imp...
The central tenet of reinforcement learning (RL) is that agents seek to maximize the sum of cumulative rewards. In contrast, active inference, an emerging framework within cognitive and computational neuroscience, proposes that agents act to maximize the evidence for a biased generative model. Here, we illustrate how ideas from active inference can...
The Bayesian brain hypothesis, predictive processing, and variational free energy minimisation are typically used to describe perceptual processes based on accurate generative models of the world. However, generative models need not be veridical representations of the environment. We suggest that they can (and should) be used to describe sensorimot...
In reinforcement learning (RL), agents often operate in partially observed and uncertain environments. Model-based RL suggests that this is best achieved by learning and exploiting a probabilistic model of the world. 'Active inference' is an emerging normative framework in cognitive and computational neuroscience that offers a unifying account of h...
The neural circuit linking the basal ganglia, the cerebellum and the cortex through the thalamus plays an essential role in motor and cognitive functions. However, how such functions are realized by multiple loop circuits with neurons of multiple types is still unknown. In order to investigate the dynamic nature of the whole-brain network, we built...
Converging theories suggest that organisms learn and exploit probabilistic models of their environment. However, it remains unclear how such models can be learned in practice. The open-ended complexity of natural environments means that it is generally infeasible for organisms to model their environment comprehensively. Alternatively, action-orient...
In psychology and neuroscience it is common to describe cognitive systems as input/output devices where perceptual and motor functions are implemented in a purely feedforward, open-loop fashion. On this view, perception and action are often seen as encapsulated modules with limited interaction between them. While embodied and enactive approaches...
The Bayesian brain hypothesis, predictive processing and variational free energy minimisation are typically used to describe perceptual processes based on accurate generative models of the world. However, generative models need not be veridical representations of the environment. We suggest that they can (and should) be used to describe sensorimoto...
In psychology and neuroscience it is common to describe cognitive systems as input/output devices where perceptual and motor functions are implemented in a purely feedforward, open-loop fashion. On this view, perception and action are often seen as encapsulated modules with limited interaction between them. While embodied and enactive approaches to...
In the past few decades, probabilistic interpretations of brain functions have become widespread in cognitive science and neuroscience. In particular, the free energy principle and active inference are increasingly popular theories of cognitive functions that claim to offer a unified understanding of life and cognition within a general mathematical...
In the past few decades, probabilistic interpretations of brain functions have become widespread in cognitive science and neuroscience. In particular, the free energy principle and active inference are increasingly popular theories of cognitive functions that claim to offer a unified understanding of life and cognition within a general mathematical...
Several species of insects have become model systems for studying learning and memory formation. Although many studies focus on freely moving animals, studies implementing classical conditioning paradigms with harnessed insects have been important for investigating the exact cues that individuals learn and the neural mechanisms underlying learning...
In the past few decades, probabilistic interpretations of brain functions have become widespread in cognitive science and neuroscience. The Bayesian brain hypothesis, predictive coding, the free energy principle and active inference are increasingly popular theories of cognitive functions that claim to unify understandings of life and cognition wit...
The assumption that action and perception can be investigated independently is entrenched in theories, models and experimental approaches across the brain and mind sciences. In cognitive science, this has been a central point of contention between computationalist and 4Es (enactive, embodied, extended and embedded) theories of cognition, with the f...
In the past few decades, probabilistic interpretations of brain functions have become widespread in cognitive science and neuroscience. The Bayesian brain hypothesis, predictive coding, the free energy principle and active inference are increasingly popular theories of cognitive functions that claim to unify understandings of life and cognition wit...
Alternate schemes for sensory feedback in the whisker system. (DOCX)
During active behaviours like running, swimming, whisking or sniffing, motor actions shape sensory input and sensory percepts guide future motor commands. Ongoing cycles of sensory and motor processing constitute a closed-loop feedback system which is central to motor control and, it has been argued, for perceptual processes. This closed-loop feedb...
Wood ants are a model system for studying visual learning and navigation. They can forage for food and navigate to their nests effectively by forming memories of visual features in their surrounding environment. Previous studies of freely behaving ants have revealed many of the behavioural strategies and environmental features necessary for success...
Using an extracellular medium with a high potassium/low magnesium concentration and the addition of 4-AP, we induced epileptiform activity in combined hippocampus/entorhinal cortex slices of the rat brain [1]. In this in vitro model of temporal lobe epilepsy, we observed repeating sequences of interictal discharge (IID) regimes and seizure-like e...
Active inference is emerging as a possible unifying theory of perception and action in cognitive and computational neuroscience. On this theory, perception is a process of inferring the causes of sensory data by minimising the error between actual sensations and those predicted by an inner generative (probabilistic) model. Action on the othe...
The 'free energy principle' (FEP) has been suggested to provide a unified theory of the brain, integrating data and theory relating to action, perception, and learning. The theory and implementation of the FEP combines insights from Helmholtzian 'perception as inference', machine learning theory, and statistical thermodynamics. Here, we provide a d...
The mechanisms of variation, selection and inheritance, on which evolution by natural selection depends, are not fixed over evolutionary time. Current evolutionary biology is increasingly focussed on understanding how the evolution of developmental organisations modifies the distribution of phenotypic variation, the evolution of ecological relation...
Brain state regulates sensory processing and motor control for adaptive behavior. Internal mechanisms of brain state control are well studied, but the role of external modulation from the environment is not well understood. Here, we examined the role of closed-loop environmental (CLE) feedback, in comparison to open-loop sensory input, on brain sta...
Research on the so-called "free-energy principle" (FEP) in cognitive neuroscience is becoming increasingly high-profile. To date, introductions to this theory have proved difficult for many readers to follow, but it depends mainly upon two relatively simple ideas: firstly that normative or teleological values can be expressed as probability distri...
Coherent behaviour emerges from mutual interaction between the brain, body and environment across multiple timescales and not from within the brain alone [1,2]. For example, sensation is actively shaped by dynamical interaction of the brain and environment through motor actions such as sniffing, saccading, and touching. The onset of active sensing i...
Oscillating neuronal circuits, known as central pattern generators (CPGs), are responsible for generating rhythmic behaviours such as walking, breathing and chewing. The CPG model alone however does not account for the ability of animals to adapt their future behaviour to changes in the sensory environment that signal reward. Here, using multi-elec...
NRP vs. remaining inter-cycle-interval, all durations and conditions. NRP plotted against the remaining inter-cycle interval (ICI) for all NRP and ICI durations and conditions shows a significant correlation (r = 0.71, p<0.001, n = 138). The solid line represents the best-fit linear regression. (TIF)
The feeding circuitry of Lymnaea stagnalis. Feeding in Lymnaea is generated by the CPG circuit in the paired buccal ganglia. The basic 3-phase pattern (radula protraction, rasp and swallow) is produced by the three CPG interneuron types N1, N2 and N3, which entrain a larger pool of different B type motor neurons. A full feeding cycle is initiated w...
Simple distributed strategies that modify the behaviour of selfish individuals in a manner that enhances cooperation or global efficiency have proved difficult to identify. We consider a network of selfish agents who each optimise their individual utilities by coordinating (or anti-coordinating) with their neighbours, to maximise the pay-offs from...
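To illustrate the kind of system described above, here is a short numpy sketch (a toy construction of mine, not the paper's model) of selfish best-response dynamics in a coordination game on a random network:

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: n agents on a random undirected network, each holding a
# binary state and earning +1 for every neighbour it coordinates with.
n = 20
adj = rng.random((n, n)) < 0.2
adj = np.triu(adj, 1)
adj = adj | adj.T
state = rng.choice([-1, 1], size=n)

def payoff(i, s):
    # Utility of agent i if it adopts state s: neighbours sharing that state.
    return np.sum(adj[i] & (state == s))

# Selfish best-response dynamics: a randomly chosen agent switches to
# whichever state maximises its own payoff given its neighbours.
for _ in range(500):
    i = rng.integers(n)
    state[i] = 1 if payoff(i, 1) >= payoff(i, -1) else -1

# A crude measure of global efficiency: the number of coordinated edges.
coordinated = np.sum(adj & (state[:, None] == state[None, :])) // 2
print(coordinated)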
The natural energy minimization behavior of a dynamical system can be interpreted as a simple optimization process, finding a locally optimal resolution of problem constraints. In human problem solving, high-dimensional problems are often made much easier by inferring a low-dimensional model of the system in which search is more effective. But this...
Significant insights into the dynamics of neuronal populations have been gained in the olfactory system where rich spatio-temporal dynamics is observed during, and following, exposure to odours. It is now widely accepted that odour identity is represented in terms of stimulus-specific rate patterning observed in the cells of the antennal lobe (AL)....
We present a systematic multiscale reduction of a biologically plausible model of the inhibitory neuronal network of the pheromone system of the moth. Starting from a Hodgkin-Huxley conductance based model we adiabatically eliminate fast variables and quantitatively reduce the model to mean field equations. We then prove analytically that the netwo...
The natural energy minimisation behaviour of a dynamical system can be interpreted as a simple optimisation process, finding a locally optimal resolution of problem constraints. In human problem solving, high-dimensional problems are often made much easier by inferring a low-dimensional model of the system in which search is more effective. But thi...
The conditions under which numerous independently motivated components or agents in a complex system create globally efficient structures, behaviours or functions remain a fundamental open question for domains such as ecology, sociology, economics, organismic biology and many others. Here we show that if agents modify their relationships with othe...
Simple distributed strategies that modify the behavior of selfish individuals in a manner that enhances cooperation or global efficiency have proved difficult to identify. We consider a network of selfish agents who each optimize their individual utilities by coordinating (or anticoordinating) with their neighbors, to maximize the payoffs from rand...