Figure 1 (available under a Creative Commons Attribution 4.0 International license).
Illustration of the neuron structure. The afferent neurons are connected to the postsynaptic neuron through synapses, and each spike emitted by an afferent neuron triggers a postsynaptic current (PSC). The membrane potential of the postsynaptic neuron is the weighted sum of the incoming PSCs from all afferent neurons. The yellow neuron denotes the instructor, which is used for learning.
Source publication
A new learning rule (Precise-Spike-Driven (PSD) Synaptic Plasticity) is proposed for processing and memorizing spatiotemporal patterns. PSD is a supervised learning rule that is analytically derived from the traditional Widrow-Hoff rule and can be used to train neurons to associate an input spatiotemporal spike pattern with a desired spike train. S...
Similar publications
In this paper, a Differential Evolution (DE) algorithm for solving multiobjective optimization problems is presented to address the problem of tuning Artificial Neural Network (ANN) parameters. The multiobjective evolutionary algorithm used in this study is Differential Evolution, while the ANN used is the Three-Term Backpropagation network (TBP). The propo...
Modular knowledge development in neural networks has the potential to support robust decisions given sudden changes in the environment or the data during real-time implementation. It can also provide a means to address robustness in decision making when certain features of the data are missing after the training stage. In this paper, we present a multi...
An important issue in the design of gene selection algorithms for microarray data analysis is the formation of a suitable criterion function for measuring the relevance between different gene expressions. Mutual Information (MI) is a widely used criterion function, but it calculates the relevance over the entire set of samples only once, which cannot exactly ide...
Multi-Valued Neuron (MVN) was proposed for pattern classification. It operates with complex-valued inputs, outputs, and weights, and its learning algorithm is based on an error-correcting rule. The activation function of the MVN is not differentiable; therefore, we cannot apply backpropagation when constructing multilayer structures. In this paper, we pr...
The growing interest in multiway data analysis and deep learning has made tensor factorization (TF) and neural networks (NN) crucial topics. Conventionally, the NN model is estimated from a set of one-way observations. Such a vectorized NN does not generalize to learning representations from multiway observations. The classification per...
Citations
... However, the Tempotron could only memorize a certain number of spatio-temporal patterns, about three times the number of the network's synapses. To process and memorize more spatio-temporal patterns, the Precise-Spike-Driven (PSD) synaptic plasticity method takes advantage of concrete spike timing and employs the error between the actual output spike train and the target spike train to control weight updates (Yu et al. 2013). Positive errors trigger long-term potentiation, while negative errors contribute to long-term depression. ...
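In rough form (a sketch of the rule described in Yu et al. 2013; the symbols follow the usual convention rather than the paper's exact notation), the PSD weight change for afferent i is the instantaneous output error multiplied by the convolved input trace:

\frac{\mathrm{d}w_i(t)}{\mathrm{d}t} = \eta\,\bigl[s_d(t) - s_o(t)\bigr]\,(s_i * \kappa)(t)

where s_d, s_o, and s_i are the desired, actual, and input spike trains, κ is the PSC kernel, and η the learning rate; a desired spike without an actual output spike gives a positive error (potentiation), and an actual spike without a desired one gives a negative error (depression).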
Spiking neural networks (SNNs) have manifested remarkable advantages in power consumption and their event-driven property during the inference process. To take full advantage of low power consumption and further improve the efficiency of these models, pruning methods have been explored to find sparse SNNs without redundant connections after training. However, parameter redundancy still hinders the efficiency of SNNs during training. In the human brain, the rewiring process of neural networks is highly dynamic, while synaptic connections remain relatively sparse during brain development. Inspired by this, here we propose an efficient evolutionary structure learning (ESL) framework for SNNs, named ESL-SNNs, to implement sparse SNN training from scratch. The pruning and regeneration of synaptic connections in SNNs evolve dynamically during learning, yet keep the structural sparsity at a certain level. As a result, the ESL-SNNs can search for optimal sparse connectivity by exploring all possible parameters across time. Our experiments show that the proposed ESL-SNNs framework is able to learn SNNs with sparse structures effectively while incurring only a limited loss in accuracy. The ESL-SNNs achieve merely 0.28% accuracy loss with 10% connection density on the DVS-Cifar10 dataset. Our work presents a brand-new approach for sparse training of SNNs from scratch with biologically plausible evolutionary mechanisms, closing the gap in expressibility between sparse training and dense training. Hence, it has great potential for lightweight SNN training and inference with low power consumption and small memory usage.
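As an illustration of the prune-and-regrow idea behind such evolutionary structure learning, here is a minimal sketch that assumes magnitude-based pruning and random regrowth at a fixed density; the names and criteria are illustrative, not necessarily the authors' exact procedure.

```python
import numpy as np

def evolve_connections(weights, mask, prune_frac=0.1, rng=None):
    """One prune-and-regrow step at constant connection density (illustrative sketch)."""
    rng = rng or np.random.default_rng()
    active = np.flatnonzero(mask)
    n_prune = int(prune_frac * active.size)
    # Prune the weakest active connections (smallest weight magnitude).
    weakest = active[np.argsort(np.abs(weights[active]))[:n_prune]]
    mask[weakest] = 0
    # Regrow the same number of connections at random inactive sites,
    # so overall sparsity stays fixed while the structure evolves.
    inactive = np.flatnonzero(mask == 0)
    regrow = rng.choice(inactive, size=n_prune, replace=False)
    mask[regrow] = 1
    weights[regrow] = 0.0  # newly grown connections start from zero
    return weights, mask
```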
... Zheng et al. proposed a threshold-dependent batch normalization (tdBN) method based on the emerging spatio-temporal backpropagation, termed "STBP-tdBN", enabling direct training of very deep SNNs [28]. In addition, there are other learning algorithms, such as the Remote Supervised Method (ReSuMe) [29], Spike Pattern Association Neuron (SPAN) [30], and Precise-Spike-Driven plasticity (PSD) [31]. Recently, some conversion-based approaches have achieved excellent accuracy [32]. ...
... The PSD learning rule proposed by Yu et al. [31] to learn precise timings for spatiotemporal spike patterns is summarized in this section. ...
... PSD modifies the weights so that the actual output spike train gradually converges toward the target spike train. As discussed in [31], when using a single exponential decay as the PSC kernel, the PSD learning rule is similar to the ReSuMe rule. This similarity stems from their common origin in the WH rule. ...
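For concreteness, a commonly used double-exponential PSC kernel and the single-exponential simplification referred to here can be sketched as (time constants are illustrative):

\kappa(t) = V_0\,\bigl[e^{-t/\tau_s} - e^{-t/\tau_f}\bigr]\,H(t), \qquad \kappa_{\exp}(t) = e^{-t/\tau_s}\,H(t)

with H(t) the Heaviside step function and τ_s > τ_f the slow and fast time constants; using the single exponential makes the convolved input trace match the exponential learning window used in ReSuMe, hence the similarity noted above.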
Spiking neural networks (SNNs) use spikes to communicate between neurons, leading to biologically plausible implementations. Considering spikes as events, SNNs are inherently suitable for processing address event representation (AER) data. Despite the progress in event-driven methods for AER data, there is little study on the relationship between time-driven and event-driven algorithms, which is required to gain insight into SNNs. In this paper, an in-depth analysis of time-driven and event-driven algorithms is given. A same-timestamp problem in event-driven simulation, which may lead to an erroneous spike, is found and solved in a simple, efficacious way. An event-driven learning algorithm is proposed, which is efficient and compatible with a multitude of spike-based plasticity mechanisms. Leaky integrate-and-fire neurons with precise-spike-driven synaptic plasticity are used to demonstrate the properties of the proposed event-driven algorithm and to conduct experiments on two AER datasets (MNIST-DVS and AER Poker Card) and the MNIST dataset. The results show that the event-driven simulation is always faster than the time-driven simulation, and the proposed algorithm achieves accuracy similar to other conventional time-driven methods.
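A minimal sketch of what event-driven (as opposed to time-driven) simulation of a leaky integrate-and-fire neuron looks like; this is a generic illustration, not the paper's algorithm, and it does not handle the same-timestamp issue mentioned in the abstract.

```python
import math

class EventDrivenLIF:
    """Membrane potential is updated only when an input spike event arrives,
    using the closed-form exponential decay between events instead of
    stepping the simulation at every timestep."""

    def __init__(self, tau_m=20.0, threshold=1.0):
        self.tau_m = tau_m
        self.threshold = threshold
        self.v = 0.0
        self.t_last = 0.0

    def receive_spike(self, t, weight):
        # Decay analytically over the interval since the previous event.
        self.v *= math.exp(-(t - self.t_last) / self.tau_m)
        self.t_last = t
        self.v += weight
        if self.v >= self.threshold:
            self.v = 0.0   # reset after firing
            return True    # output spike event at time t
        return False
```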
... The number of patterns that a neuron can learn to classify depends on their length, the time constants of the neurons and the synaptic inputs [12]. While in the Tempotron the action potential is allowed to occur anywhere during the time of the learned patterns, it was later shown that neurons can be forced to fire also at a specific time [10,13] during a specific pattern's presence, which can be achieved by several more or less realistic synaptic mechanisms [14,15]. Both the Tempotron and the Chronotron employ supervised learning mechanisms based on label and time, respectively. ...
Spiking model neurons can be set up to respond selectively to specific spatio-temporal spike patterns by optimization of their input weights. It is unknown, however, whether existing synaptic plasticity mechanisms can achieve this temporal mode of neuronal coding and computation. Here it is shown that changes of synaptic efficacies which tend to balance excitatory and inhibitory synaptic inputs can make neurons sensitive to particular input spike patterns. Simulations demonstrate that a combination of Hebbian mechanisms, hetero-synaptic plasticity and synaptic scaling is sufficient for self-organizing sensitivity to spatio-temporal spike patterns that repeat in the input. In networks, inclusion of hetero-synaptic plasticity that depends on the pre-synaptic neurons leads to specialization and faithful representation of pattern sequences by a group of target neurons. Pattern detection is robust against a range of distortions and noise. The proposed combination of Hebbian mechanisms, hetero-synaptic plasticity and synaptic scaling is found to protect the memories for specific patterns from being overwritten by ongoing learning during extended periods when the patterns are not present. This suggests a novel explanation for the long-term robustness of memory traces despite ongoing activity with substantial synaptic plasticity. Taken together, our results promote the plausibility of precise temporal coding in the brain.
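As a rough illustration of the synaptic-scaling ingredient mentioned here (a generic multiplicative form, not the paper's exact rule), all of a neuron's incoming weights are nudged so that its long-term firing rate approaches a target value:

```python
def synaptic_scaling(weights, firing_rate, target_rate, gain=0.01):
    """Multiplicative synaptic scaling (generic sketch): scale incoming weights
    up when the neuron fires below its target rate, down when it fires above."""
    factor = 1.0 + gain * (target_rate - firing_rate) / target_rate
    return [w * factor for w in weights]
```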
... SPAN [25] uses convolving kernels to convert the input, actual, and desired spike trains into analog signals. Unlike SPAN, precise-spike-driven (PSD) synaptic plasticity [99] only convolves the input spike trains. ...
... Finally, a summary of the learning rules for synaptic weights is given in equations (9), (11), and (13) corresponding to each layer of the RSNNs. ...
... For a long time there had been doubts about whether spiking neural networks (SNNs) can be trained by gradient descent, due to the non-differentiable jumps in membrane potential when spikes occur. Using approximations and simplifying assumptions, and building up from single-spike, single-layer settings to more complex scenarios, gradient-based learning in spiking neural networks has gradually been developed over the last 20 years, including the early SpikeProp algorithm [2] and its variants [22,3,36,35,25], also applied to deeper networks [21,33,29], the Chronotron [9], the (multispike) tempotron [13,28,12,8], the Widrow-Hoff rule-based ReSuMe algorithm [27,31,41] and PSD [38], as well as the SPAN algorithm [23,24] and SLAYER [30]. Other approaches have tried to relate back-propagation to phenomenological learning rules such as STDP [32], to enable gradient descent by removing the abstraction of instantaneous spikes [14], or to use probabilistic interpretations to obtain smooth gradients [7]. ...
In a recent paper, Wunderlich and Pehle introduced the EventProp algorithm, which enables training spiking neural networks by gradient descent on exact gradients. In this paper we present extensions of EventProp to support a wider class of loss functions and an implementation in the GPU-enhanced neuronal networks framework which exploits sparsity. The GPU acceleration allows us to test EventProp extensively on more challenging learning benchmarks. We find that EventProp performs well on some tasks, but for others there are issues where learning is slow or fails entirely. Here, we analyse these issues in detail and discover that they relate to the use of the exact gradient of the loss function, which by its nature does not provide information about loss changes due to spike creation or spike deletion. Depending on the details of the task and loss function, descending the exact gradient with EventProp can lead to the deletion of important spikes and so to an inadvertent increase of the loss and a decrease of classification accuracy, and hence a failure to learn. In other situations, the lack of knowledge about the benefits of creating additional spikes can lead to a lack of gradient flow into earlier layers, slowing down learning. We eventually present a first glimpse of a solution to these problems in the form of `loss shaping', where we introduce a suitable weighting function into an integral loss to increase gradient flow from the output layer towards earlier layers.
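In the notation suggested by the abstract, loss shaping amounts to inserting a weighting function w(t) into an integral loss over the trial; this is a sketch of the general form only, not the specific weighting used in the paper:

L = \int_0^T w(t)\,\ell\bigl(V(t), t\bigr)\,\mathrm{d}t

where ℓ is the per-time loss on the output (for example, on the membrane voltage V), and choosing w(t) appropriately increases gradient flow from the output layer towards earlier layers.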
... Similarly, PSD (Precise-Spike-Driven) [98] is also derived from the Widrow-Hoff rule after transforming spikes into analog signals, but differs from SPAN in which spike trains need to be transformed. Unlike the method used in SPAN, PSD only convolves the input spike trains, s̃_i(t) = s_i(t) * κ(t), where κ(t) is the kernel function. ...
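A minimal sketch of this convolution step (names and the double-exponential kernel parameters are illustrative assumptions): each input spike train, given as a list of spike times, is turned into an analog trace by summing causal kernel responses.

```python
import numpy as np

def convolve_spike_train(spike_times, t_grid, tau_s=10.0, tau_f=2.5):
    """Convert a spike train into an analog trace s~_i(t) = (s_i * kappa)(t)
    using a causal double-exponential kernel (illustrative parameters, in ms)."""
    trace = np.zeros_like(t_grid)
    for t_sp in spike_times:
        dt = np.maximum(t_grid - t_sp, 0.0)   # kernel is zero before the spike
        trace += np.exp(-dt / tau_s) - np.exp(-dt / tau_f)
    return trace

# Example: analog trace of a train with spikes at 5 ms and 30 ms.
t = np.arange(0.0, 100.0, 1.0)
trace = convolve_spike_train([5.0, 30.0], t)
```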
Artificial Neural Network (ANN) has served as an important pillar of machine learning and played a crucial role in fueling the robust artificial intelligence (AI) revival experienced in the last few years. Inspired by the biological brain architecture of living things, the ANN has shown widespread success in pattern recognition, data analysis and classification tasks. Among the many models of neural networks conceptualized and developed over the years, the Spiking Neural Network (SNN), which was initiated in 1996, has shown great promise in the current push towards compact embedded AI. By combining both spatial and temporal information as features in the training and testing process, many inherent shortcomings of traditional ANNs can be overcome. With temporal features and event-driven updating of the network, SNNs hold the potential of improving computational and energy efficiency. In SNNs, the most basic signal carrier element is the spike, bringing about a revolution in neural network weight updating compared to the traditional methods widely applied in ANNs. In the literature, numerous SNN weight-updating algorithms have been developed in recent years. With the active and dynamic research work on SNNs, a consolidation of the state-of-the-art SNN research is beneficial. This paper is aimed at reviewing and surveying the current status of research pertaining to SNNs, in particular highlighting the various novel SNN training techniques coupled with an objective comparison of the techniques. SNN applications and associated neuromorphic hardware systems are also covered, along with some thoughts on the challenges in developing new SNN training algorithms and a discussion of potential future research trends.
... Since the training process does not occur in the SNN itself, this type cannot fully reflect the strong physiological plausibility of SNNs. The second type is based on error backpropagation to obtain higher model accuracy, such as the Tempotron (Gütig and Sompolinsky, 2006), PSD (Yu et al., 2013), ReSuMe (Ponulak, 2005), MST (Gütig, 2016), EMLC (Yu et al., 2020), MPD-AL, SpikeProp (Bohte et al., 2000), STCA (Gu et al., 2019), etc. These supervised learning methods mainly calculate the difference between the voltage of the output neuron at target time points and the threshold value to change the weights (Xie et al., 2016). ...
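One illustrative way to write such a voltage-threshold rule (an assumed generic form, not necessarily the exact rule of Xie et al., 2016): at each target time t_d, each synapse is changed in proportion to how far the membrane potential falls short of the threshold, weighted by that synapse's input trace:

\Delta w_i = \eta\,\bigl[\vartheta - V(t_d)\bigr]\,\tilde{s}_i(t_d)

where ϑ is the firing threshold and s̃_i the convolved input of synapse i.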
Spiking neural network (SNN) is considered to be the brain-like model that best conforms to the biological mechanisms of the brain. Due to the non-differentiability of the spike, training methods for SNNs are still incomplete. This paper proposes a supervised learning method for SNNs based on associative learning: ALSA. The method is based on the associative learning mechanism, and its realization is similar to the animal conditioned-reflex process, giving it strong physiological plausibility. This method uses improved spike-timing-dependent plasticity (STDP) rules, combined with a teacher layer to induce spikes in neurons, to strengthen synaptic connections between input spike patterns and specified output neurons, and to weaken synaptic connections between unrelated patterns and unrelated output neurons. Based on ALSA, this paper also completes supervised learning classification tasks on the IRIS dataset and the MNIST dataset, achieving 95.7% and 91.58% recognition accuracy, respectively, which demonstrates that ALSA is a feasible supervised learning method for SNNs. The innovation of this paper is to establish a biologically plausible supervised learning method for SNNs, based on STDP learning rules and the associative learning mechanism that exists widely in animal training.
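For reference, the pair-based STDP update that teacher-driven schemes like this build on can be sketched as follows; this is generic textbook STDP, not ALSA's improved rule, and the parameter values are illustrative.

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP: potentiate when the pre-spike precedes the post-spike,
    depress when it follows (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)

# A teacher-induced post-spike at 25 ms potentiates a synapse whose
# pre-spike arrived at 20 ms: stdp_dw(20.0, 25.0) > 0.
```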
... Method (ReSuMe) [118], Chronotron [187], and the Precise-Spike-Driven Synaptic Plasticity (PSD) rule [188] have been designed to supervise SNNs to output desired trains of spikes. ...
... The PSD rule has been proposed for training SNNs to associate input patterns with a desired output firing pattern, but unlike Chronotron's E-learning, it does not use gradient descent. Under certain conditions it resembles ReSuMe [188], which can be attributed to the fact that they were both developed as different interpretations of the Widrow-Hoff rule [189]. The PSD rule was derived by converting spiking patterns into analogue signals. ...
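The shared starting point is the Widrow-Hoff rule; in sketch form, PSD is obtained by substituting spike trains and their filtered versions for the analog quantities (the notation is generic, not the exact notation of [188]):

\Delta w_i = \eta\,(y_d - y_o)\,x_i, \qquad x_i \to (s_i * \kappa)(t),\; y_d \to s_d(t),\; y_o \to s_o(t)

where y_d and y_o are the desired and actual outputs and x_i the input of synapse i; the substitution yields the error-driven, spike-timing form of PSD sketched earlier.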
Spiking neural networks aspire to mimic the brain more closely than traditional artificial neural networks. They are characterised by a spike-like activation function inspired by the shape of an action potential in biological neurons. Spiking networks remain a niche area of research, perform worse than traditional artificial networks, and their real-world applications are limited. We hypothesised that neuroscience-inspired spiking neural networks with spike-timing-dependent plasticity demonstrate useful learning capabilities. Our objective was to identify features which play a vital role in information processing in the brain but are not commonly used in artificial networks, to implement them in spiking networks without copying the constraints that apply to living organisms, and to characterise their effect on data processing. The networks we created are not brain models; our approach can be labelled as artificial life. We performed a literature review and selected features such as local weight updates, neuronal sub-types, modularity, homeostasis and structural plasticity. We used the review as a guide for developing the consecutive iterations of the network, and eventually a whole evolutionary developmental system. We analysed the model's performance on clustering of spatio-temporal data. Our results show that combining evolution and unsupervised learning leads to faster convergence on optimal solutions and better stability of fit solutions than either approach separately. The choice of fitness definition affects the network's performance on fitness-related and unrelated tasks. We found that neuron-type-specific weight homeostasis can be used to stabilise the networks, thus enabling longer training. We also demonstrated that networks with a rudimentary architecture can evolve developmental rules which improve their fitness. This interdisciplinary work provides contributions to three fields: it proposes novel artificial intelligence approaches, tests the possible role of the selected biological phenomena in information processing in the brain, and explores the evolution of learning in an artificial life system.
... Supervised learning, which maps inputs to outputs based on instructive signals such as input-output pairs, has an important role in shaping brain function by calibrating sensorimotor networks and establishing networks that support certain cognitive skills [9]. Inspired by neuroscience, supervised learning rules for SNNs such as ReSuMe [10], the Tempotron [11], and precise-spike-driven (PSD) plasticity [12] have been proposed and successfully deployed in tasks such as image recognition [13] and sensory data processing [14]. Among them, the PSD rule is both computationally efficient and biologically plausible. ...
... A positive or negative post-spike was generated by a function generator in the Agilent B1500 and fed back to the 1T1E synapse based on a comparison of the output spike and the teacher spike. Fig. 2(a) shows the three scenarios of PSD learning in associating an input spatiotemporal spike pattern with a desired spike train [12]. If a synapse transmits a pre-spike shortly before the teacher spike or the output spike, the synaptic weight update is driven by the difference between the teacher spike and the output spike. ...
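Those three scenarios can be sketched in code as follows; this is a simplified illustration of the sign of the update only, with illustrative names, while the actual magnitude follows the PSD kernel.

```python
def psd_update_sign(teacher_spike, output_spike, input_trace, eta=0.01):
    """Sign of the PSD weight update for one synapse at a given instant,
    determined by which of the teacher/output spikes occurred (sketch)."""
    if teacher_spike and not output_spike:
        return +eta * input_trace   # desired spike missing: potentiate
    if output_spike and not teacher_spike:
        return -eta * input_trace   # spurious output spike: depress
    return 0.0                      # both or neither: no net change
```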
Spiking neural networks (SNNs) are a powerful and efficient information processing approach. However, to deploy SNNs on resource-constrained edge systems, compact and low-power synapses are required, posing a significant challenge to conventional silicon-based digital circuits in terms of area and energy efficiency. In this study, electrolyte-gated transistors (EGTs) paired with conventional transistors were used as the building blocks to implement SNNs. The one-transistor one-EGT (1T1E) synapse features heterosynaptic plasticity, which provides a flexible and efficient way to implement supervised learning via spike-timing-dependent plasticity. Based on this method, an SNN with spatiotemporal coding was implemented to recognize handwritten alphabets, demonstrating 98.3% accuracy at a 10% noise level with 5 fJ per synaptic transmission and 1.05 pJ per synaptic programming. These results pave the way for energy-efficient neuromorphic computing in the future.