Christopher L Buckley

University of Sussex, Brighton, England, United Kingdom

Publications (28) · 35.49 Total Impact

  • Source
    ABSTRACT: The mechanisms of variation, selection and inheritance, on which evolution by natural selection depends, are not fixed over evolutionary time. Current evolutionary biology is increasingly focussed on understanding how the evolution of developmental organisations modifies the distribution of phenotypic variation, the evolution of ecological relationships modifies the selective environment, and the evolution of reproductive relationships modifies the heritability of the evolutionary unit. The major transitions in evolution, in particular, involve radical changes in developmental, ecological and reproductive organisations that instantiate variation, selection and inheritance at a higher level of biological organisation. However, current evolutionary theory is poorly equipped to describe how these organisations change over evolutionary time and especially how that results in adaptive complexes at successive scales of organisation (the key problem is that evolution is self-referential, i.e. the products of evolution change the parameters of the evolutionary process). Here we first reinterpret the central open questions in these domains from a perspective that emphasises the common underlying themes. We then synthesise the findings from a developing body of work that is building a new theoretical approach to these questions by converting well-understood theory and results from models of cognitive learning. Specifically, connectionist models of memory and learning demonstrate how simple incremental mechanisms, adjusting the relationships between individually-simple components, can produce organisations that exhibit complex system-level behaviours and improve the adaptive capabilities of the system. We use the term “evolutionary connectionism” to recognise that, by functionally equivalent processes, natural selection acting on the relationships within and between evolutionary entities can result in organisations that produce complex system-level behaviours in evolutionary systems and modify the adaptive capabilities of natural selection over time. We review the evidence supporting the functional equivalences between the domains of learning and of evolution, and discuss the potential for this to resolve conceptual problems in our understanding of the evolution of developmental, ecological and reproductive organisations and, in particular, the major evolutionary transitions.
    Full-text · Article · Dec 2015 · Evolutionary Biology
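    The formal parallel between selection and learning can be made concrete in a toy linear model (an illustration of the general idea, not a model from the paper): if a phenotype p = Bg develops from a genotype g through an interaction matrix B, the selection gradient on each interaction B_ij is the product of the target value t_i and the input activity g_j, which is exactly the outer-product form of Hebb's rule.

    ```python
    import numpy as np

    # Toy sketch: selection acting on developmental interactions takes the same
    # outer-product form as Hebbian learning. B, g and t are hypothetical
    # placeholders, not quantities from the paper.
    rng = np.random.default_rng(0)
    n = 5
    g = rng.standard_normal(n)         # "genotype" (input activity)
    t = rng.standard_normal(n)         # selective target for the phenotype
    B = rng.standard_normal((n, n))    # developmental interaction matrix

    def fitness(B):
        p = B @ g                      # linear development: phenotype from genotype
        return t @ p                   # fitness = alignment of phenotype with target

    # Numerical selection gradient dF/dB_ij ...
    eps = 1e-6
    grad = np.zeros_like(B)
    for i in range(n):
        for j in range(n):
            Bp = B.copy()
            Bp[i, j] += eps
            grad[i, j] = (fitness(Bp) - fitness(B)) / eps

    # ... equals the Hebbian outer product of output target and input activity.
    print(np.allclose(grad, np.outer(t, g), atol=1e-4))   # True
    ```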
  • Source
    ABSTRACT: Oscillating neuronal circuits, known as central pattern generators (CPGs), are responsible for generating rhythmic behaviours such as walking, breathing and chewing. The CPG model alone however does not account for the ability of animals to adapt their future behaviour to changes in the sensory environment that signal reward. Here, using multi-electrode array (MEA) recording in an established experimental model of centrally generated rhythmic behaviour we show that the feeding CPG of Lymnaea stagnalis is itself associated with another, and hitherto unidentified, oscillating neuronal population. This extra-CPG oscillator is characterised by high population-wide activity alternating with population-wide quiescence. During the quiescent periods the CPG is refractory to activation by food-associated stimuli. Furthermore, the duration of the refractory period predicts the timing of the next activation of the CPG, which may be minutes into the future. Rewarding food stimuli and dopamine accelerate the frequency of the extra-CPG oscillator and reduce the duration of its quiescent periods. These findings indicate that dopamine adapts future feeding behaviour to the availability of food by significantly reducing the refractory period of the brain's feeding circuitry.
    Full-text · Article · Jul 2012 · PLoS ONE
  • Source
    Richard A. Watson · Rob Mills · C.L. Buckley
    ABSTRACT: The natural energy minimization behavior of a dynamical system can be interpreted as a simple optimization process, finding a locally optimal resolution of problem constraints. In human problem solving, high-dimensional problems are often made much easier by inferring a low-dimensional model of the system in which search is more effective. But this is an approach that seems to require top-down domain knowledge, not one amenable to the spontaneous energy minimization behavior of a natural dynamical system. However, in this article we investigate the ability of distributed dynamical systems to improve their constraint resolution ability over time by self-organization. We use a “self-modeling” Hopfield network with a novel type of associative connection to illustrate how slowly changing relationships between system components can result in a transformation into a new system which is a low-dimensional caricature of the original system. The energy minimization behavior of this new system is significantly more effective at globally resolving the original system constraints. This model uses only very simple, and fully distributed, positive feedback mechanisms that are relevant to other “active linking” and adaptive networks. We discuss how this neural network model helps us to understand transformations and emergent collective behavior in various non-neural adaptive networks such as social, genetic and ecological networks.
    Full-text · Article · Aug 2011 · Adaptive Behavior
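    A minimal sketch of the mechanism described above (network size, learning rate and schedule are assumptions, not the paper's settings): a Hopfield-style network repeatedly relaxes from random initial states while a slow Hebbian update is applied to a learned component of its weights; later relaxations settle into configurations with lower energy under the original constraints.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 50
    # Fixed random symmetric constraints: the "original system".
    W = rng.standard_normal((n, n))
    W = (W + W.T) / 2
    np.fill_diagonal(W, 0)
    L = np.zeros_like(W)               # slowly learned associative component
    alpha = 0.0005                     # Hebbian learning rate (assumed value)

    def relax(weights, steps=2000):
        """Asynchronous descent to a local attractor of the given weights."""
        s = rng.choice([-1.0, 1.0], size=n)
        for _ in range(steps):
            i = rng.integers(n)
            s[i] = 1.0 if weights[i] @ s >= 0 else -1.0
        return s

    def energy(s):
        return -0.5 * s @ W @ s        # energy w.r.t. the ORIGINAL constraints

    energies = []
    for episode in range(200):
        s = relax(W + L)               # relax on constraints plus learned memory
        L += alpha * np.outer(s, s)    # Hebbian reinforcement of the visited attractor
        np.fill_diagonal(L, 0)
        energies.append(energy(s))

    # Later relaxations should reach lower original-system energy than early ones.
    print(np.mean(energies[:20]), np.mean(energies[-20:]))
    ```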
  • Source
    Christopher L Buckley · Thomas Nowotny
    ABSTRACT: Significant insights into the dynamics of neuronal populations have been gained in the olfactory system where rich spatio-temporal dynamics is observed during, and following, exposure to odours. It is now widely accepted that odour identity is represented in terms of stimulus-specific rate patterning observed in the cells of the antennal lobe (AL). Here we describe a nonlinear dynamical framework inspired by recent experimental findings which provides a compelling account of both the origin and the function of these dynamics. We start by analytically reducing a biologically plausible conductance based model of the AL to a quantitatively equivalent rate model and construct conditions such that the rate dynamics are well described by a single globally stable fixed point (FP). We then describe the AL's response to an odour stimulus as rich transient trajectories between this stable baseline state (the single FP in absence of odour stimulation) and the odour-specific position of the single FP during odour stimulation. We show how this framework can account for three phenomena that are observed experimentally. First, for an inhibitory period often observed immediately after an odour stimulus is removed. Second, for the qualitative differences between the dynamics in the presence and the absence of odour. Lastly, we show how it can account for the invariance of a representation of odour identity to both the duration and intensity of an odour stimulus. We compare and contrast this framework with the currently prevalent nonlinear dynamical framework of 'winnerless competition' which describes AL dynamics in terms of heteroclinic orbits. This article is part of a Special Issue entitled "Neural Coding".
    Full-text · Article · Jul 2011 · Brain research
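    The dynamical picture can be sketched with a generic firing-rate model (the connectivity, time constant and odour input pattern below are placeholders, not the paper's reduced AL equations): a single globally stable fixed point whose position shifts under odour input, with the observed responses being transients between the baseline and odour-evoked fixed points.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 20
    tau = 0.05                                   # time constant in seconds (assumed)
    W = rng.standard_normal((n, n)) * 0.2 / np.sqrt(n)   # weak coupling: single stable FP

    def simulate(T, dt, stim):
        """Integrate the rate equations tau * dr/dt = -r + tanh(W r + I(t))."""
        r = np.zeros(n)
        traj = []
        for k in range(int(T / dt)):
            I = stim(k * dt)
            r = r + dt / tau * (-r + np.tanh(W @ r + I))
            traj.append(r.copy())
        return np.array(traj)

    odour = rng.uniform(0.0, 1.0, n)             # odour-specific input (placeholder)
    stim = lambda t: odour if 0.5 <= t < 1.5 else np.zeros(n)

    traj = simulate(T=2.5, dt=0.001, stim=stim)
    # Baseline before odour, an odour-specific fixed point during stimulation,
    # and a return to baseline afterwards; the transients between them are the
    # "rich trajectories" of the abstract.
    for t in (0.4, 1.4, 2.4):
        print(t, np.linalg.norm(traj[int(t * 1000)]))
    ```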
  • Source
    Christopher L Buckley · Thomas Nowotny
    ABSTRACT: We present a systematic multiscale reduction of a biologically plausible model of the inhibitory neuronal network of the pheromone system of the moth. Starting from a Hodgkin-Huxley conductance based model we adiabatically eliminate fast variables and quantitatively reduce the model to mean field equations. We then prove analytically that the network's ability to operate on signal amplitudes across several orders of magnitude is optimal when a disinhibitory mode is close to losing stability and the network dynamics are close to bifurcation. This has the potential to extend the idea that optimal dynamic range in the brain arises as a critical phenomenon of phase transitions in excitable media to brain regions that are dominated by inhibition or have slow dynamics.
    Full-text · Article · Jun 2011 · Physical Review Letters
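    The optimal-dynamic-range claim can be illustrated with a one-variable mean-field toy (a generic excitable unit, not the paper's moth-derived equations; all parameters are assumptions): the range of stimulus amplitudes mapped onto distinguishable steady-state responses grows as the coupling approaches the point where the quiescent state loses stability.

    ```python
    import numpy as np

    def response_curve(g, stimuli, iters=3000):
        """Steady state of the mean-field map r -> tanh(g*r + s), per stimulus s."""
        r = np.zeros_like(stimuli)
        for _ in range(iters):
            r = np.tanh(g * r + stimuli)
        return r

    def dynamic_range(g):
        """Kinouchi-Copelli style measure: 10 * log10(s90 / s10)."""
        stimuli = np.logspace(-6, 1, 400)
        resp = response_curve(g, stimuli)
        resp = resp / resp.max()
        s10 = stimuli[np.searchsorted(resp, 0.1)]
        s90 = stimuli[np.searchsorted(resp, 0.9)]
        return 10 * np.log10(s90 / s10)

    # The quiescent state r = 0 loses stability at g = 1; the measured dynamic
    # range grows as the coupling approaches that bifurcation.
    for g in (0.2, 0.6, 0.9, 0.99):
        print(g, round(dynamic_range(g), 1))
    ```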
  • Source
    Richard A Watson · Rob Mills · C L Buckley
    ABSTRACT: In some circumstances complex adaptive systems composed of numerous self-interested agents can self-organize into structures that enhance global adaptation, efficiency, or function. However, the general conditions for such an outcome are poorly understood and present a fundamental open question for domains as varied as ecology, sociology, economics, organismic biology, and technological infrastructure design. In contrast, sufficient conditions for artificial neural networks to form structures that perform collective computational processes such as associative memory/recall, classification, generalization, and optimization are well understood. Such global functions within a single agent or organism are not wholly surprising, since the mechanisms (e.g., Hebbian learning) that create these neural organizations may be selected for this purpose; but agents in a multi-agent system have no obvious reason to adhere to such a structuring protocol or produce such global behaviors when acting from individual self-interest. However, Hebbian learning is actually a very simple and fully distributed habituation or positive feedback principle. Here we show that when self-interested agents can modify how they are affected by other agents (e.g., when they can influence which other agents they interact with), then, in adapting these inter-agent relationships to maximize their own utility, they will necessarily alter them in a manner homologous with Hebbian learning. Multi-agent systems with adaptable relationships will thereby exhibit the same system-level behaviors as neural networks under Hebbian learning. For example, improved global efficiency in multi-agent systems can be explained by the inherent ability of associative memory to generalize by idealizing stored patterns and/or creating new combinations of subpatterns. Thus distributed multi-agent systems can spontaneously exhibit adaptive global behaviors in the same sense, and by the same mechanism, as with the organizational principles familiar in connectionist models of organismic learning.
    Full-text · Article · May 2011 · Artificial Life
  • Source
    ABSTRACT: Simple distributed strategies that modify the behavior of selfish individuals in a manner that enhances cooperation or global efficiency have proved difficult to identify. We consider a network of selfish agents who each optimize their individual utilities by coordinating (or anticoordinating) with their neighbors, to maximize the payoffs from randomly weighted pairwise games. In general, agents will opt for the behavior that is the best compromise (for them) of the many conflicting constraints created by their neighbors, but the attractors of the system as a whole will not maximize total utility. We then consider agents that act as creatures of habit by increasing their preference to coordinate (anticoordinate) with whichever neighbors they are coordinated (anticoordinated) with at present. These preferences change slowly while the system is repeatedly perturbed, so that it settles to many different local attractors. We find that under these conditions, with each perturbation there is a progressively higher chance of the system settling to a configuration with high total utility. Eventually, only one attractor remains, and that attractor is very likely to maximize (or almost maximize) global utility. This counterintuitive result can be understood using theory from computational neuroscience; we show that this simple form of habituation is equivalent to Hebbian learning, and the improved optimization of global utility that is observed results from well-known generalization capabilities of associative memory acting at the network scale. This causes the system of selfish agents, each acting individually but habitually, to collectively identify configurations that maximize total utility.
    Full-text · Article · May 2011 · Artificial Life
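    This setup lends itself to a compact sketch (sizes, rates and the perturbation schedule are assumptions; it mirrors the associative mechanism of the self-modeling network above, recast in game terms): selfish agents best-respond to randomly weighted pairwise games while their preferences drift slowly toward whatever alignments they currently hold, and total utility at the settled attractors rises over successive perturbations.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 40
    Omega = rng.standard_normal((n, n))
    Omega = (Omega + Omega.T) / 2
    np.fill_diagonal(Omega, 0)         # randomly weighted pairwise games
    P = np.zeros_like(Omega)           # slowly acquired habits (preferences)
    alpha = 0.001                      # habituation rate (assumed value)

    def settle(weights):
        """Agents repeatedly best-respond until the system reaches an attractor."""
        s = rng.choice([-1.0, 1.0], size=n)
        for _ in range(50 * n):
            i = rng.integers(n)
            s[i] = 1.0 if weights[i] @ s >= 0 else -1.0   # coordinate/anticoordinate
        return s

    def total_utility(s):
        return 0.5 * s @ Omega @ s     # summed payoffs from the ORIGINAL games

    utilities = []
    for perturbation in range(300):    # each perturbation restarts from a random state
        s = settle(Omega + P)          # selfish play under games plus habits
        P += alpha * np.outer(s, s)    # creatures of habit: reinforce current alignments
        np.fill_diagonal(P, 0)
        utilities.append(total_utility(s))

    # Later settlements should reach higher total utility than early ones.
    print(np.mean(utilities[:30]), np.mean(utilities[-30:]))
    ```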
  • Source
    Richard A. Watson · C. L. Buckley · Rob Mills
    ABSTRACT: When a dynamical system with multiple point attractors is released from an arbitrary initial condition, it will relax into a configuration that locally resolves the constraints or opposing forces between interdependent state variables. However, when there are many conflicting interdependencies between variables, finding a configuration that globally optimizes these constraints by this method is unlikely or may take many attempts. Here, we show that a simple distributed mechanism can incrementally alter a dynamical system such that it finds lower energy configurations, more reliably and more quickly. Specifically, when Hebbian learning is applied to the connections of a simple dynamical system undergoing repeated relaxation, the system will develop an associative memory that amplifies a subset of its own attractor states. This modifies the dynamics of the system such that its ability to find configurations that minimize total system energy, and globally resolve conflicts between interdependent variables, is enhanced. Moreover, we show that the system is not merely “recalling” low energy states that have been previously visited but “predicting” their location by generalizing over local attractor states that have already been visited. This “self-modeling” framework, i.e., a system that augments its behavior with an associative memory of its own attractors, helps us better understand the conditions under which a simple locally mediated mechanism of self-organization can promote significantly enhanced global resolution of conflicts between the components of a complex adaptive system. We illustrate this process in random and modular network constraint problems equivalent to graph coloring and distributed task allocation problems. © 2010 Wiley Periodicals, Inc. Complexity 16: 17–26, 2011.
    Full-text · Article · May 2011 · Complexity
  • Source
    L Barnett · C L Buckley · S Bullock
    ABSTRACT: One of the central challenges facing modern neuroscience is to explain the ability of the nervous system to coherently integrate information across distinct functional modules in the absence of a central executive. To this end, Tononi et al. [Proc. Natl. Acad. Sci. USA. 91, 5033 (1994)] proposed a measure of neural complexity that purports to capture this property based on mutual information between complementary subsets of a system. Neural complexity, so defined, is one of a family of information theoretic metrics developed to measure the balance between the segregation and integration of a system's dynamics. One key question arising for such measures involves understanding how they are influenced by network topology. Sporns et al. [Cereb. Cortex 10, 127 (2000)] employed numerical models in order to determine the dependence of neural complexity on the topological features of a network. However, a complete picture has yet to be established. While De Lucia et al. [Phys. Rev. E 71, 016114 (2005)] made the first attempts at an analytical account of this relationship, their work utilized a formulation of neural complexity that, we argue, did not reflect the intuitions of the original work. In this paper we start by describing weighted connection matrices formed by applying a random continuous weight distribution to binary adjacency matrices. This allows us to derive an approximation for neural complexity in terms of the moments of the weight distribution and elementary graph motifs. In particular, we explicitly establish a dependency of neural complexity on cyclic graph motifs.
    Full-text · Article · Apr 2011 · Physical Review E
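    For the Gaussian model used in this line of work, the TSE measure can be computed directly from a covariance matrix. A small sketch (connection density and weight scale are assumptions; exhaustive subset enumeration is only feasible for small n):

    ```python
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(4)
    n = 8
    # Gaussian linear model x = W x + noise; stationary covariance of the
    # standard Tononi-Sporns-Edelman construction (density/weights assumed).
    W = (rng.random((n, n)) < 0.3) * (0.9 / n)
    np.fill_diagonal(W, 0)
    A = np.linalg.inv(np.eye(n) - W)
    Sigma = A @ A.T                    # covariance for unit-variance noise

    def H(idx):
        """Differential entropy of the Gaussian marginal on index set idx."""
        sub = Sigma[np.ix_(idx, idx)]
        _, logdet = np.linalg.slogdet(sub)
        return 0.5 * (len(idx) * np.log(2 * np.pi * np.e) + logdet)

    Hn = H(list(range(n)))
    # Neural complexity: average mutual information between subsets of size k
    # and their complements, summed over k = 1 .. n/2.
    C = 0.0
    for k in range(1, n // 2 + 1):
        mi = [H(list(S)) + H([i for i in range(n) if i not in S]) - Hn
              for S in combinations(range(n), k)]
        C += np.mean(mi)

    print(round(C, 4))                 # TSE neural complexity of this network
    ```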
  • Source
    ABSTRACT: For some moth species, especially those closely interrelated and sympatric, recognizing a specific pheromone component concentration ratio is essential for males to successfully locate conspecific females. We propose and determine the properties of a minimalist competition-based feed-forward neuronal model capable of detecting a certain ratio of pheromone components independently of overall concentration. This model represents an elementary recognition unit for the ratio of binary mixtures which we propose is entirely contained in the macroglomerular complex (MGC) of the male moth. A set of such units, along with projection neurons (PNs), can provide the input to higher brain centres. We found that (1) accuracy is mainly achieved by maintaining a certain ratio of connection strengths between olfactory receptor neurons (ORNs) and local neurons (LNs), and much less by the properties of the interconnections between the competing LNs themselves; an exception to this rule is that it is beneficial if connections between generalist LNs (i.e. excited by either pheromone component) and specialist LNs (i.e. excited by one component only) have the same strength as the reciprocal specialist-to-generalist connections; (2) successful ratio recognition is achieved using latency-to-first-spike in the LN populations, which, in contrast to expectations for a population rate code, leads to a broadening of responses at higher overall concentrations, consistent with experimental observations; and (3) longer periods of competition between the LNs do not lead to higher recognition accuracy.
    Full-text · Article · Feb 2011 · PLoS ONE
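    An elementary version of such a recognition unit can be sketched as follows (weights and LIF parameters are illustrative assumptions, not the paper's model, and the LN competition is reduced to a latency comparison): two LNs receive differently weighted ORN input for components A and B, and the identity of the first LN to spike depends only on the component ratio, not on overall concentration.

    ```python
    import numpy as np

    # Toy ratio-recognition unit: two "specialist" LNs receive differently
    # weighted ORN input from pheromone components A and B; competition is
    # reduced to a comparison of latency-to-first-spike.
    tau, theta = 0.02, 1.0             # LIF time constant (s) and firing threshold

    def latency(drive):
        """Time to first spike of a leaky integrate-and-fire unit, constant drive."""
        return np.inf if drive <= theta else tau * np.log(drive / (drive - theta))

    W = np.array([[3.0, 1.0],          # LN1: tuned to A-dominant mixtures
                  [1.0, 3.0]])         # LN2: tuned to B-dominant mixtures

    for conc in (1.0, 10.0, 100.0):    # overall concentration varies 100-fold
        for frac_A in (0.8, 0.6, 0.3): # fraction of component A in the mixture
            x = conc * np.array([frac_A, 1.0 - frac_A])
            lats = [latency(d) for d in W @ x]
            winner = np.argmin(lats) + 1
            print(conc, frac_A, f"first spike: LN{winner}",
                  [round(l, 4) for l in lats])
    # The winning LN depends only on the component ratio, while absolute
    # latencies shrink as concentration rises (responses broaden).
    ```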
  • Source
    ABSTRACT: Spatial, temporal, and modulatory factors affecting the evolvability of GasNets, a style of artificial neural network incorporating an analogue of volume signalling, are investigated. The focus of the article is a comparative study of variants of the GasNet, implementing various spatial, temporal, and modulatory constraints, used as control systems in an evolutionary robotics task involving visual discrimination. The results of the study are discussed in the context of related research. © 2010 Wiley Periodicals, Inc. Complexity 16: 35–44, 2010.
    Full-text · Article · Nov 2010 · Complexity
  • Source
    Christopher L. Buckley · Seth Bullock · Lionel Barnett
    ABSTRACT: To gain a deeper understanding of the impact of spatial embedding on the dynamics of complex systems we employ a measure of interaction complexity developed within neuroscience using the tools of statistical information theory. We apply this measure to a set of simple network models embedded within Euclidean spaces of varying dimensionality in order to characterise the way in which the constraints imposed by low-dimensional spatial embedding contribute to the dynamics (rather than the structure) of complex systems. We demonstrate that strong spatial constraints encourage high intrinsic complexity, and discuss the implications for complex systems in general.
    Full-text · Article · Nov 2010 · Complexity
  • No preview · Article · Jan 2010
  • No preview · Article · Jan 2010
  • Source
    ABSTRACT: This document is itself an extended abstract.
    Full-text · Article · Jan 2010
  • Source
    Seth Bullock · Christopher L Buckley
    ABSTRACT: Architectural design is typically limited by the constraints imposed by physical space. If and when opportunities to attenuate or extinguish these limits arise, should they be seized? Here it is argued that the limiting influence of spatial embedding should not be regarded as a frustrating "tyranny" to be escaped wherever possible, but as a welcome enabling constraint to be leveraged. Examples from the natural world are presented, and an appeal is made to some recent results on complex systems and measures of interaction complexity.
    Full-text · Article · Nov 2009 · Technoetic Arts a Journal of Speculative Research
  • Source
    L Barnett · C L Buckley · S Bullock
    ABSTRACT: Tononi [Proc. Natl. Acad. Sci. U.S.A. 91, 5033 (1994)] proposed a measure of neural complexity based on mutual information between complementary subsystems of a given neural network, which has attracted much interest in the neuroscience community and beyond. We develop an approximation of the measure for a popular Gaussian model which, applied to a continuous-time process, elucidates the relationship between the complexity of a neural system and its structural connectivity. Moreover, the approximation is accurate for weakly coupled systems and computationally cheap, scaling polynomially with system size in contrast to the full complexity measure, which scales exponentially. We also discuss connectivity normalization and resolve some issues stemming from an ambiguity in the original Gaussian model.
    Full-text · Article · Jun 2009 · Physical Review E
  • No preview · Article · Jan 2009
  • Source
    Christopher L Buckley · Thomas Nowotny
    Full-text · Article · Jan 2009 · BMC Neuroscience
  • Source
    Richard A. Watson · C. L. Buckley · Rob Mills
    ABSTRACT: In neural networks, two specific dynamical behaviours are well known: 1) Networks naturally find patterns of activation that locally minimise constraints among interactions. This can be understood as the local minimisation of an energy or potential function, or the optimisation of an objective function. 2) In distinct scenarios, Hebbian learning can create new interactions that form associative memories of activation patterns. In this paper we show that these two behaviours have a surprising interaction – that learning of this type significantly improves the ability of a neural network to find configurations that satisfy constraints/perform effective optimisation. Specifically, the network develops a memory of the attractors that it has visited, but importantly, is able to generalise over previously visited attractors to increase the basin of attraction of superior attractors before they are visited. The network is ultimately transformed into a different network that has only one basin of attraction, but this attractor corresponds to a configuration that is very low energy in the original network. The new network thus finds optimised configurations that were unattainable (had exponentially small basins of attraction) in the original network dynamics.
    Full-text · Article · Jan 2009

Publication Stats

179 Citations
35.49 Total Impact Points

Institutions

  • 2010-2012
    • University of Sussex
      • School of Engineering and Informatics
      • Department of Informatics
      • Centre for Computational Neuroscience and Robotics
      Brighton, England, United Kingdom
  • 2007-2008
    • University of Southampton
      • Department of Electronics and Computer Science (ECS)
      Southampton, England, United Kingdom
  • 2005
    • University of Leeds
      • School of Computing
      Leeds, England, United Kingdom