EDITORIAL
published: 19 December 2013
doi: 10.3389/fncom.2013.00182
Spiking neural network connectivity and its potential for temporal sensory processing and variable binding
Julie Wall1* and Cornelius Glackin2
1Multimedia and Vision Research Group, School of Electronic Engineering and Computer Science, Queen Mary, University of London, London, UK
2Adaptive Systems Research Group, Department of Computer Science, University of Hertfordshire, Hatfield, Hertfordshire, UK
*Correspondence: julie.wall@qmul.ac.uk
Edited by:
Misha Tsodyks, Weizmann Institute of Science, Israel
Keywords: cell assembly, spiking neural network, spike timing, biological neurons, learning, connectivity, sensory processing
The most biologically-inspired artificial neurons are those of the third generation, and are termed spiking neurons, as individual pulses or spikes are the means by which stimuli are communicated. In essence, a spike is a short-term change in electrical potential and is the basis of communication between biological neurons. Unlike previous generations of artificial neurons, spiking neurons operate in the temporal domain, and exploit time as a resource in their computation. In 1952, Alan Lloyd Hodgkin and Andrew Huxley produced the first model of a spiking neuron; their model describes the complex electro-chemical process that enables spikes to propagate through, and hence be communicated by, spiking neurons. Since this time, improvements in experimental procedures in neurobiology, particularly with in vivo experiments, have provided an increasingly detailed understanding of biological neurons. For example, it is now well understood that the propagation of spikes between neurons requires neurotransmitter, which is typically of limited supply; when the supply is exhausted, neurons become unresponsive. The morphology of neurons and the number of receptor sites, amongst many other factors, mean that neurons consume their supply of neurotransmitter at different rates. This in turn produces variations over time in the responsiveness of neurons, yielding various computational capabilities. Such improvements in the understanding of the biological neuron have culminated in a wide range of neuron models, ranging from the computationally efficient to the biologically realistic. These models enable the modeling of neural circuits found in the brain.
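To make these ideas concrete, the following sketch combines a leaky integrate-and-fire neuron, one of the computationally efficient models alluded to above, with a crude model of transmitter depletion. It is an illustration under assumed parameter values, not a model taken from any of the papers in this topic; all names and constants are our own.

# A minimal sketch (illustrative assumptions throughout) of a leaky
# integrate-and-fire neuron driven through a depressing synapse,
# showing two of the ideas above: spikes as the unit of communication,
# and a finite neurotransmitter supply that makes responsiveness fade.

DT = 0.1          # simulation time step (ms)
TAU_M = 20.0      # membrane time constant (ms)
V_REST = -70.0    # resting potential (mV)
V_TH = -54.0      # spike threshold (mV)
V_RESET = -70.0   # post-spike reset potential (mV)

# Short-term depression: r is the fraction of transmitter available;
# each presynaptic spike consumes a fraction U of it, and the supply
# recovers toward 1 with time constant TAU_REC.
U, TAU_REC, W = 0.4, 800.0, 30.0

pre_steps = set(range(100, 5000, 200))  # one input spike every 20 ms

v, r, out_spikes = V_REST, 1.0, []
for step in range(5000):                # simulate 500 ms
    if step in pre_steps:
        v += W * r                      # synaptic kick scales with supply
        r -= U * r                      # consume transmitter
    r += DT * (1.0 - r) / TAU_REC       # transmitter slowly recovers
    v += DT * (V_REST - v) / TAU_M      # leaky integration toward rest
    if v >= V_TH:                       # threshold crossing: emit a spike
        out_spikes.append(step * DT)
        v = V_RESET

# Early inputs drive output spikes; as r is depleted the same input
# no longer reaches threshold, so the neuron falls silent.
print(out_spikes)

Run as written, the neuron responds to the first few input spikes and then stops firing as the transmitter variable r is run down faster than it recovers, which is exactly the time-varying responsiveness described above.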
In recent years, much of the focus in neuron modeling has moved to the study of the connectivity of spiking neural networks. Spiking neural networks provide a vehicle for understanding, from a computational perspective, aspects of the brain's neural circuitry. This understanding can then be used to tackle some of the historically intractable issues with artificial neurons, such as scalability and the lack of variable binding. Current knowledge of the feed-forward, lateral, and recurrent connectivity of spiking neurons, and of the interplay between excitatory and inhibitory neurons, is beginning to shed light on these issues through an improved understanding of the temporal processing capabilities and synchronous behavior of biological neurons. This research topic spans current research from neuron models to spiking neural networks and their application to interesting and current computational problems. The research papers submitted to this topic can be categorized into the following major areas: more efficient neuron modeling; lateral and recurrent spiking neural network connectivity; exploitation of biological neural circuitry by means of spiking neural networks; optimization of spiking neural networks; and spiking neural networks for sensory processing.
Moujahid and d'Anjou (2012) stimulate a model of the giant squid axon with simulated spikes to develop new insights toward more biologically relevant models of neurons. They observe that temperature mediates the efficiency of action potentials by reducing the overlap between the sodium and potassium currents during the ion exchange, and with it the subsequent energy consumption.
The original research article by Dockendorf and Srinivasa (2013) falls into the area of lateral and recurrent spiking neural network connectivity. It presents a recurrent spiking model capable of learning episodes featuring missing and noisy data. The presented topology provides a means of recalling previously encoded patterns, where high-frequency inhibition promotes the stability of the network. Kaplan et al. (2013) also investigated the use of recurrent spiking connectivity in their work on motion-based prediction and the issue of missing data. They address how anisotropic connectivity patterns that take the tuning properties of neurons into account can efficiently predict the trajectory of a disappearing moving stimulus, and they demonstrate and test this by simulating the network's response in a moving-dot blanking experiment.
Garrido et al. (2013) investigate how systematic modifications of synaptic weights can exert close control over the timing of spike transmissions. They demonstrate this using a network of leaky integrate-and-fire spiking neurons to simulate cells of the cerebellar granular layer. Börgers and Walker (2013) investigate simulations of excitatory pyramidal cells and inhibitory interneurons, which interact to exhibit the gamma rhythms observed in the hippocampus and neocortex; they focus on how inhibitory interneurons maintain synchrony using gap junctions. Similarly, Ponulak and Hopfield (2013) take inspiration from the neural structure of the hippocampus to hypothesize about the problem of spatial navigation. Their topology encodes the spatial environment through an exploratory phase which uses “place” cells to reflect all possible trajectory boundaries and environmental constraints. Subsequently, a wave-propagation process maps the trajectory between the target, or multiple targets, and the current location by altering the synaptic connectivity of the aforementioned “place” cells in a single pass; a toy sketch of this wavefront idea appears after this paragraph. A novel viewpoint on the state of the art in exploiting biological neural circuitry by means of spiking neural networks is provided by Aimone and Weick (2013). Their paper gives a thorough and comprehensive review of modeling
cortical damage due to stroke. They argue that a theoretical understanding of the damaged cortical area post-disease is vital, while taking into account current thinking on models of adult neurogenesis.
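The wave-propagation step in Ponulak and Hopfield (2013) has a compact classical analog. The following toy sketch, an illustration under our own simplifying assumptions rather than the authors' spiking implementation, propagates a breadth-first wavefront from the target cells over a grid of “place” cells, records arrival times in place of synaptic changes, and reads the route out by descending the arrival-time gradient; the grid, the arrival dictionary, and the path_from helper are all hypothetical names.

from collections import deque

# Toy wavefront path planning: a wave spreads outward from the
# target(s) through a grid of "place cells"; each cell records the
# wave's arrival time; the path is read out by descending the
# arrival-time gradient from the current location to the target.

WALL = '#'
grid = ["..........",
        "..######..",
        "..#....#..",
        "..#.T..#..",
        "..#....#..",
        ".........."]

rows, cols = len(grid), len(grid[0])
targets = [(r, c) for r in range(rows) for c in range(cols)
           if grid[r][c] == 'T']

# Wavefront phase: breadth-first propagation from all targets at once
# (multiple competing targets are handled for free, as in the paper).
arrival = {t: 0 for t in targets}
frontier = deque(targets)
while frontier:
    r, c = frontier.popleft()
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if (0 <= nr < rows and 0 <= nc < cols
                and grid[nr][nc] != WALL and (nr, nc) not in arrival):
            arrival[(nr, nc)] = arrival[(r, c)] + 1
            frontier.append((nr, nc))

# Readout phase: from the start, always step to the neighbor with the
# smallest arrival time; this plays the role of following the vector
# field of synaptic changes left behind by the wave.
def path_from(start):
    path = [start]
    while arrival[path[-1]] > 0:
        r, c = path[-1]
        path.append(min(((r + dr, c + dc)
                         for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                         if (r + dr, c + dc) in arrival),
                        key=arrival.get))
    return path

print(path_from((0, 0)))  # shortest route around the walls to 'T'

The paper's contribution is to realize this wavefront-and-gradient computation with spiking activity and realistic, local synaptic plasticity rules in a single passage of the wave, which also makes it attractive for parallel neuromorphic hardware.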
One of the issues with modeling large-scale spiking neural networks is the lack of tools to analyze such a large parameter space, as Buice and Chow (2013) discuss in their hypothesis and theory article. They propose a possible approach which combines mean-field theory with information about spiking correlations, thus reducing the complexity to that of a more comprehensible rate-like description. Demonstrations of spiking neural networks for sensory processing include the work of Srinivasa and Jiang (2013). Their research develops spiking neuron models, initially assembled into an unstructured map topology. The authors show how the combination of self-organized and STDP-based continuous learning can provide both the initial formation and the ongoing maintenance of orientation and ocular dominance maps of the kind commonly found in the visual cortex.
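Since spike-timing-dependent plasticity (STDP) carries much of the technical weight here, a minimal pair-based version of the rule may help. This sketch uses illustrative parameters of our own choosing and omits the inhibitory plasticity and homeostasis that Srinivasa and Jiang (2013) also rely on.

import math

# Pair-based STDP: a synapse is potentiated when the presynaptic
# spike precedes the postsynaptic one, and depressed otherwise, with
# exponentially decaying influence of the spike-time difference.

A_PLUS, A_MINUS = 0.01, 0.012     # learning rates (slight bias to depression)
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # plasticity time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post: potentiate (causal pairing)
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    else:         # post before pre: depress (anti-causal pairing)
        return -A_MINUS * math.exp(dt / TAU_MINUS)

print(stdp_dw(10.0, 15.0))   # ~ +0.0078: causal pairing strengthens
print(stdp_dw(15.0, 10.0))   # ~ -0.0093: anti-causal pairing weakens

Applied continuously across a network, updates of this form let correlated inputs carve out and then maintain structured maps, which is the mechanism the paper builds on.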
It is clear that research on spiking neural networks has expanded beyond computational models of individual neurons and now encompasses large-scale networks which aim to model the behavior of whole neural regions. This has resulted in a diverse and exciting field of research with many perspectives and a multitude of potential applications.
REFERENCES
Aimone, J. B., and Weick, J. P. (2013). Perspectives for computational modeling of cell replacement for neurological disorders. Front. Comput. Neurosci. 7:150. doi: 10.3389/fncom.2013.00150
Börgers, C., and Walker, B. (2013). Toggling between gamma-frequency activity and suppression of cell assemblies. Front. Comput. Neurosci. 7:33. doi: 10.3389/fncom.2013.00033
Buice, M. A., and Chow, C. C. (2013). Generalized activity equations for spiking neural network dynamics. Front. Comput. Neurosci. 7:162. doi: 10.3389/fncom.2013.00162
Dockendorf, K., and Srinivasa, N. (2013). Learning and prospective recall of noisy spike pattern episodes. Front. Comput. Neurosci. 7:80. doi: 10.3389/fncom.2013.00080
Garrido, J. A., Ros, E., and D'Angelo, E. (2013). Spike timing regulation on the millisecond scale by distributed synaptic plasticity at the cerebellum input stage: a simulation study. Front. Comput. Neurosci. 7:64. doi: 10.3389/fncom.2013.00064
Kaplan, B. A., Lansner, A., Masson, G. S., and Perrinet, L. U. (2013). Anisotropic connectivity implements motion-based prediction in a spiking neural network. Front. Comput. Neurosci. 7:112. doi: 10.3389/fncom.2013.00112
Moujahid, A., and d'Anjou, A. (2012). Metabolic efficiency with fast spiking in the squid axon. Front. Comput. Neurosci. 6:95. doi: 10.3389/fncom.2012.00095
Ponulak, F. J., and Hopfield, J. J. (2013). Rapid, parallel path planning by propagating wavefronts of spiking neural activity. Front. Comput. Neurosci. 7:98. doi: 10.3389/fncom.2013.00098
Srinivasa, N., and Jiang, Q. (2013). Stable learning of functional maps in self-organizing spiking neural networks with continuous synaptic plasticity. Front. Comput. Neurosci. 7:10. doi: 10.3389/fncom.2013.00010
Received: 14 November 2013; accepted: 02 December 2013; published online: 19 December 2013.
Citation: Wall J and Glackin C (2013) Spiking neural network connectivity and its potential for temporal sensory processing and variable binding. Front. Comput. Neurosci. 7:182. doi: 10.3389/fncom.2013.00182
This article was submitted to the journal Frontiers in Computational Neuroscience.
Copyright © 2013 Wall and Glackin. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.