Illustration of the neuron structure.
The afferent neurons connect to the postsynaptic neuron through synapses. Each spike emitted by an afferent neuron triggers a postsynaptic current (PSC). The membrane potential of the postsynaptic neuron is a weighted sum of the incoming PSCs from all afferent neurons. The yellow neuron denotes the instructor, which is used for learning.


Source publication
Article
Full-text available
A new learning rule (Precise-Spike-Driven (PSD) Synaptic Plasticity) is proposed for processing and memorizing spatiotemporal patterns. PSD is a supervised learning rule that is analytically derived from the traditional Widrow-Hoff rule and can be used to train neurons to associate an input spatiotemporal spike pattern with a desired spike train. S...
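The abstract describes the PSD rule only at a high level. Below is a minimal NumPy sketch of a Widrow-Hoff-style update applied to PSC-convolved inputs, in the spirit of that description; the time step, kernel time constants, learning rate, and variable names are illustrative assumptions, not the authors' exact formulation or parameter values.

```python
import numpy as np

# Sketch only: kernel constants, time step and learning rate are assumed values.
dt, T = 1.0, 200                      # time step (ms) and number of time bins
tau_s, tau_f = 10.0, 2.5              # assumed slow/fast decay constants of the PSC kernel (ms)
t = np.arange(T) * dt
psc_kernel = np.exp(-t / tau_s) - np.exp(-t / tau_f)   # double-exponential PSC kernel

def psd_delta_w(input_spikes, desired_spikes, actual_spikes, lr=0.01):
    """Widrow-Hoff-style update: error(t) = desired(t) - actual(t), applied to PSC traces.

    input_spikes            : (n_afferents, T) binary presynaptic spike raster
    desired/actual_spikes   : (T,) binary output spike trains
    """
    # PSC trace of each afferent: its spike train convolved with the kernel
    psc = np.array([np.convolve(s, psc_kernel)[:T] for s in input_spikes])
    err = desired_spikes - actual_spikes          # +1 -> potentiation, -1 -> depression
    return lr * psc @ err                          # one weight change per afferent, summed over time

# Toy usage with random afferent spikes and hand-picked output spike times.
rng = np.random.default_rng(0)
x = (rng.random((5, T)) < 0.02).astype(float)
d = np.zeros(T); d[[50, 120]] = 1.0               # desired output spikes
o = np.zeros(T); o[60] = 1.0                      # actual output spike
print(psd_delta_w(x, d, o))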

Similar publications

Chapter
Full-text available
In this paper, a Differential Evolution (DE) algorithm for solving multiobjective optimization problems is presented to address the problem of tuning Artificial Neural Network (ANN) parameters. The multiobjective evolutionary algorithm used in this study is Differential Evolution, while the ANN used is the Three-Term Backpropagation network (TBP). The propo...
Conference Paper
Full-text available
Modular knowledge development in neural networks has the potential to provide robust decisions given sudden changes in the environment or the data during real-time implementation. It can also provide a means to address robustness in decision making when certain features of the data are missing after the training stage. In this paper, we present a multi...
Conference Paper
Full-text available
An important issue in the design of gene selection algorithms for microarray data analysis is the formation of a suitable criterion function for measuring the relevance between different gene expressions. Mutual Information (MI) is a widely used criterion function, but it calculates the relevance over the entire set of samples only once, which cannot exactly ide...
Article
Full-text available
The Multi-Valued Neuron (MVN) was proposed for pattern classification. It operates with complex-valued inputs, outputs, and weights, and its learning algorithm is based on an error-correcting rule. The activation function of the MVN is not differentiable; therefore, we cannot apply backpropagation when constructing multilayer structures. In this paper, we pr...
Article
Full-text available
The growing interest in multiway data analysis and deep learning has made tensor factorization (TF) and neural networks (NN) crucial topics. Conventionally, the NN model is estimated from a set of one-way observations. Such a vectorized NN is not generalized for learning the representation from multiway observations. The classification per...

Citations

... The training step determines the actual output neuron's membrane potential change to match the predicted value. However, the neurons in this method do not have that capability; 3) Learning algorithms based on spike-timing-dependent plasticity (STDP), such as the Hebbian learning algorithm [37]; 4) Algorithms for remote supervised learning, such as the ReSuMe algorithm [38]; 5) The Spike Pattern Association Neuron (SPAN) algorithm, for example the supervised learning technique based on spike-train convolution [39], and the precise-spike-driven (PSD) algorithm [40]. ...
Article
Full-text available
With their capabilities of self-learning, associative memory, high-speed search for optimal solutions, strong nonlinear fitting, and mapping of arbitrarily complex nonlinear relations, neural networks have made remarkable advances and achieved broad application over the last half-century. As one of the most prominent methods in artificial intelligence, neural networks are developing toward high computational speed and low power consumption. Due to the inherent limitations of electronic devices, it may be difficult for electronically implemented neural networks to improve these two aspects further. Optical neural networks can combine optoelectronic techniques with neural network models to provide ways to break this bottleneck. This paper outlines optical neural networks of the feedforward, recurrent, and spiking types to give a clearer picture of the history, frontiers, and future of optical neural networks. The framework covers neural networks in optical communication with serial and parallel setups. The graphene-based laser structure for fiber-optic communication is discussed. Different modulation schemes for photonic neural networks are compared in the context of the genetic algorithm and particle swarm optimization. In addition, the performance comparison of conventional and time-domain photonic neural networks, with and without noise, is also elaborated. The challenges and future trends of optical neural networks regarding scaling and applications of in situ training and nonlinear computing are then discussed.
... Remote supervised method (ReSuMe): To train synaptic weights so that neurons fire spikes at desired time steps, STDP-based supervised learning rules have been proposed (Ponulak and Kasiński, 2010; Mohemmed et al., 2013; Xu et al., 2013b; Yu et al., 2013; Zhang et al., 2017, 2018). The main difference between supervised learning and STDP rules is that supervised methods quantify spike timing errors to precisely predict the desired spike timings. ...
... Supervised learning with a kernel function: Other STDP-based supervised synaptic learning rules try to transform discrete spike trains into continuous-valued trains with a kernel function κ(t) (Mohemmed et al., 2013; Yu et al., 2013). In the Spike Pattern Association Neuron (SPAN) method (Mohemmed et al., 2013), the authors convolve all spike trains, i.e., input, output, and desired spike trains, with an alpha kernel so that gradient descent can be used to minimize the timing error. ...
... where t_i^f is the spike timing at a pre-synaptic neuron, t_d^f (or t_o^f) is the desired (or actual output) spike timing at the post-synaptic neuron, and τ is the decay constant. Instead of convolving all the spike trains, the Precise Spike-Driven plasticity rule (PSD; Yu et al., 2013) only convolves the input spike train with a kernel function having two independent decay constants. The synaptic learning rule of PSD can be expressed as ...
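To make the kernel-convolution idea in the two excerpts above concrete, the sketch below turns a list of discrete spike times into a continuous-valued trace using either an alpha kernel (as in SPAN) or a kernel with two independent decay constants (as in PSD). The time constants, normalization, and function names are illustrative assumptions rather than the published definitions.

```python
import numpy as np

# Illustrative kernels; the time constants and normalizations are assumptions.
def alpha_kernel(t, tau=5.0):
    # SPAN-style alpha kernel: kappa(t) = (t/tau) * exp(1 - t/tau) for t >= 0, else 0
    return np.where(t >= 0, (t / tau) * np.exp(1 - t / tau), 0.0)

def double_exp_kernel(t, tau_s=10.0, tau_f=2.5):
    # PSD-style kernel with two independent decay constants (slow and fast)
    return np.where(t >= 0, np.exp(-t / tau_s) - np.exp(-t / tau_f), 0.0)

def spike_train_to_trace(spike_times, kernel, T=200.0, dt=1.0):
    """Convolve discrete spike times with a kernel to obtain a continuous-valued trace."""
    t = np.arange(0.0, T, dt)
    trace = np.zeros_like(t)
    for tf in spike_times:
        trace += kernel(t - tf)        # each spike contributes one time-shifted kernel
    return t, trace

# Usage: the same spike train rendered with the two kernel choices.
t, span_trace = spike_train_to_trace([20.0, 70.0, 110.0], alpha_kernel)
t, psd_trace = spike_train_to_trace([20.0, 70.0, 110.0], double_exp_kernel)
```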
Article
Full-text available
Recent developments in artificial neural networks and their learning algorithms have enabled new research directions in computer vision, language modeling, and neuroscience. Among various neural network algorithms, spiking neural networks (SNNs) are well-suited for understanding the behavior of biological neural circuits. In this work, we propose to guide the training of a sparse SNN in order to replace a sub-region of a cultured hippocampal network with limited hardware resources. To verify our approach with a realistic experimental setup, we record spikes of cultured hippocampal neurons with a microelectrode array (in vitro). The main focus of this work is to dynamically cut unimportant synapses during SNN training on the fly so that the model can be realized on resource-constrained hardware, e.g., implantable devices. To do so, we adopt a simple STDP learning rule to easily select important synapses that impact the quality of spike timing learning. By combining the STDP rule with online supervised learning, we can precisely predict the spike pattern of the cultured network in real-time. The reduction in the model complexity, i.e., the reduced number of connections, significantly reduces the required hardware resources, which is crucial in developing an implantable chip for the treatment of neurological disorders. In addition to the new learning algorithm, we prototype a sparse SNN hardware on a small FPGA with pipelined execution and parallel computing to verify the possibility of real-time replacement. As a result, we can replace a sub-region of the biological neural circuit within 22 μs using 2.5 × fewer hardware resources, i.e., by allowing 80% sparsity in the SNN model, compared to the fully-connected SNN model. With energy-efficient algorithms and hardware, this work presents an essential step toward real-time neuroprosthetic computation.
... ReSuMe and SuperSpike algorithms [140,165,223,225]. In [122], a different point of view was taken, in which synaptic parameters were considered as tunable delay elements. ...
... Using the difference between the desired and actual output at each time instant as a factor to modulate STDP updates allows minimizing the error given by the squared difference between the desired and actual output spike counts at every time instant. This kind of modulated STDP emerges in some of the works mentioned above [50,140,223,225], as well as in another interesting approach, called BP-STDP [197]. A closer look at the latter approach can help us gain insight into this desired-minus-actual quantity. ...
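The following toy sketch illustrates the kind of error-modulated STDP described in this excerpt: an STDP-like weight change is scaled by the instantaneous difference between the desired and actual output. The bin size, eligibility window, and learning rate are assumptions made for illustration and do not reproduce the exact formulation of any of the cited works.

```python
import numpy as np

def modulated_stdp_step(w, pre_spikes, desired, actual, t, window=20, lr=0.005):
    """Scale an STDP-like weight change by the instantaneous error desired[t] - actual[t]."""
    err = desired[t] - actual[t]                        # +1 (missing spike), 0, or -1 (extra spike)
    if err == 0:
        return w                                        # no update when output matches the target
    lo = max(0, t - window)
    eligibility = pre_spikes[:, lo:t + 1].sum(axis=1)   # recent presynaptic activity per synapse
    return w + lr * err * eligibility                   # err > 0 -> potentiation, err < 0 -> depression

# Toy usage over one pattern presentation with assumed spike rasters.
rng = np.random.default_rng(1)
pre = (rng.random((8, 100)) < 0.05).astype(float)
des = np.zeros(100); des[40] = 1.0
act = np.zeros(100); act[55] = 1.0
w = np.zeros(8)
for step in range(100):
    w = modulated_stdp_step(w, pre, des, act, step)
```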
Preprint
Full-text available
For a long time, the fields of biology and neuroscience have been a great source of inspiration for computer scientists working toward the development of Artificial Intelligence (AI) technologies. This survey aims at providing a comprehensive review of recent biologically inspired approaches for AI. After introducing the main principles of computation and synaptic plasticity in biological neurons, we provide a thorough presentation of Spiking Neural Network (SNN) models, and we highlight the main challenges related to SNN training, where traditional backprop-based optimization is not directly applicable. We then discuss recent bio-inspired training methods, which pose themselves as alternatives to backprop, both for traditional and spiking networks. Such Bio-Inspired Deep Learning (BIDL) approaches aim at advancing the computational capabilities and biological plausibility of current models.
... This RC model with an MLP architecture produces a high attack detection rate (99%) and shows strong robustness against various attack types. Later, the author extended the pioneering work to the more challenging dynamic attack detection in smart grids [231], where a bio-inspired learning rule called precise-spike-driven (PSD) synaptic plasticity [276] is used for spiking reservoir training. ...
Preprint
Full-text available
Reservoir computing (RC), first applied to temporal signal processing, is a recurrent neural network in which neurons are randomly connected. Once initialized, the connection strengths remain unchanged. Such a simple structure turns RC into a non-linear dynamical system that maps low-dimensional inputs into a high-dimensional space. The model's rich dynamics, linear separability, and memory capacity then enable a simple linear readout to generate adequate responses for various applications. RC spans areas far beyond machine learning, since it has been shown that the complex dynamics can be realized in various physical hardware implementations and biological devices. This yields greater flexibility and shorter computation time. Moreover, the neuronal responses triggered by the model's dynamics shed light on understanding brain mechanisms that also exploit similar dynamical processes. While the literature on RC is vast and fragmented, here we conduct a unified review of RC's recent developments from machine learning to physics, biology, and neuroscience. We first review the early RC models, and then survey the state-of-the-art models and their applications. We further introduce studies on modeling the brain's mechanisms by RC. Finally, we offer new perspectives on RC development, including reservoir design, coding frameworks unification, physical RC implementations, and interaction between RC, cognitive neuroscience and evolution.
... It weakens synapses through the anti-STDP process and strengthens synapses through the STDP process at the times of output spikes. PSD [29] and SPAN [30,31] convolve discrete spike firing times into real-valued traces with a kernel function and then use the Widrow-Hoff rule to adjust synaptic weights. FILT [32], nSTK [33] and the I-learning rule of the Chronotron [24] are other Widrow-Hoff-based learning methods, and all apply the Widrow-Hoff rule to SNs in a specific manner. ...
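As an illustration of the anti-STDP/STDP scheme described in the first sentence of this excerpt (ReSuMe-style learning), the sketch below potentiates at desired spike times and depresses at actual output spike times, each weighted by an exponentially decaying trace of presynaptic activity. The trace time constant and learning rate are assumed values; this is a simplified stand-in, not the exact ReSuMe update.

```python
import numpy as np

def resume_like_update(pre_spikes, desired_times, actual_times, dt=1.0, tau=10.0, lr=0.01):
    """Potentiate at desired output spikes, depress (anti-STDP) at actual output spikes."""
    n_pre, T = pre_spikes.shape
    trace = np.zeros(n_pre)            # presynaptic eligibility trace (assumed exponential decay)
    dw = np.zeros(n_pre)
    desired, actual = set(desired_times), set(actual_times)
    for t in range(T):
        trace = trace * np.exp(-dt / tau) + pre_spikes[:, t]
        if t in desired:
            dw += lr * trace           # STDP-like strengthening at desired output spike times
        if t in actual:
            dw -= lr * trace           # anti-STDP weakening at actual output spike times
    return dw

# Toy usage with a random presynaptic raster and hand-picked output spike times.
pre = (np.random.default_rng(2).random((6, 150)) < 0.03).astype(float)
print(resume_like_update(pre, desired_times=[60, 120], actual_times=[75]))
```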
Article
Full-text available
Supervised learning of spiking neurons is an effective simulation method to explore the learning mechanism of real neurons. Desired output spike trains are often used as supervised signals to control the synaptic strength adjustment of neurons for precise emission. The goal of supervised learning is also to allow spiking neurons to enter the desired running and firing state. The running process of a spiking neuron is a continuous process, but because of absolute refractory periods, it is regarded as several running segments. Based on this segmental characteristic, a new supervised learning strategy for spiking neurons is proposed to expand the action mode of supervised signals in supervised learning. Desired output spikes are used to actively regulate the running segments and make them more efficient in achieving the desired running and firing state. Supervised signals actively regulate the running process of neurons and are more comprehensively involved in the learning process than simply participating in adjusting synaptic weights. Based on two weight adjustment mechanisms of spiking neurons, two new specific supervised learning methods are proposed. The experimental results obtained using various settings indicate that the two learning methods achieve higher learning performance, demonstrating the effectiveness of the new learning strategy.
... However, the Tempotron could only memorize a certain number of spatio-temporal patterns, about three times the number of the network's synapses. To process and memorize more spatio-temporal patterns, the Precise-spike-driven (PSD) synaptic plasticity method takes advantage of the concrete spike timing and employs the error between the actual output spike train and the target spike train to control weight updates (Yu et al. 2013). Positive errors trigger long-term potentiation, while negative errors trigger long-term depression. ...
Article
Full-text available
Spiking neural networks (SNNs) have manifested remarkable advantages in power consumption and event-driven property during the inference process. To take full advantage of low power consumption and improve the efficiency of these models further, the pruning methods have been explored to find sparse SNNs without redundancy connections after training. However, parameter redundancy still hinders the efficiency of SNNs during training. In the human brain, the rewiring process of neural networks is highly dynamic, while synaptic connections maintain relatively sparse during brain development. Inspired by this, here we propose an efficient evolutionary structure learning (ESL) framework for SNNs, named ESL-SNNs, to implement the sparse SNN training from scratch. The pruning and regeneration of synaptic connections in SNNs evolve dynamically during learning, yet keep the structural sparsity at a certain level. As a result, the ESL-SNNs can search for optimal sparse connectivity by exploring all possible parameters across time. Our experiments show that the proposed ESL-SNNs framework is able to learn SNNs with sparse structures effectively while reducing the limited accuracy. The ESL-SNNs achieve merely 0.28% accuracy loss with 10% connection density on the DVS-Cifar10 dataset. Our work presents a brand-new approach for sparse training of SNNs from scratch with biologically plausible evolutionary mechanisms, closing the gap in the expressibility between sparse training and dense training. Hence, it has great potential for SNN lightweight training and inference with low power consumption and small memory usage.
... Recently, researchers have explored using DSNNs for fault diagnosis in manipulators [31]. Some of them are developing DSNN-based fault diagnosis algorithms that improve practical applicability, minimize resource consumption, and maintain the capabilities of existing DL algorithms [32]. ...
Article
Full-text available
Deep neural networks (DNNs) have shown high accuracy in fault diagnosis, but they struggle to effectively capture changes over time in multivariate time-series data and suffer from resource consumption issues. Spike deep belief networks (spike-DBNs) address these limitations by capturing changes in time-varying signals and reducing resource consumption, but they sacrifice accuracy. To overcome these limitations, we propose integrating an event-driven approach into spike-DBNs through the Latency-Rate coding method and the reward-STDP learning rule. The encoding method enhances the event representation capability, while the learning rule focuses on the global behavior of spiking neurons triggered by events. Our proposed method not only maintains low resource consumption but also improves the fault diagnosis ability of spike-DBNs. We conducted a series of experiments to verify our model's performance, and the results demonstrate that our proposed method improves the accuracy of fault classification of manipulators and reduces learning time by nearly 76% compared to spike-CNN under the same conditions.
... Zheng et al. proposed a threshold-dependent batch normalization (tdBN) method based on the emerging spatio-temporal backpropagation, termed "STBP-tdBN", enabling direct training of a very deep SNN [28]. In addition, there are other learning algorithms, such as the Remote Supervised Method (ReSuMe) [29], Spike Pattern Association Neuron (SPAN) [30] and Precise-Spike-Driven plasticity (PSD) [31]. Recently, some conversion-based approaches have achieved excellent accuracy [32]. ...
... The PSD learning rule proposed by Yu et al. [31] to learn precise timings for spatiotemporal spike patterns is summarized in this section. ...
... PSD modifies the weights so that the actual output spike train gradually converges toward the target spike train. As discussed in [31], when using the single exponential decay as the PSC kernel, the PSD learning rule is similar to the ReSuMe rule. This similarity results from the common WH rule. ...
Article
Full-text available
Spiking neural networks (SNNs) use spikes to communicate between neurons, leading to biologically plausible implementations. Considering spikes as events, SNNs are inherently suitable for processing address event representation (AER) data. Despite the progress in event-driven methods for AER data, there is little study on the relationship between time-driven and event-driven algorithms, which is required to gain insight into the understanding of SNNs. In this paper, an in-depth analysis of time-driven and event-driven algorithms is given. A same-timestamp problem in event-driven simulation, which may lead to an erroneous spike, is identified and solved in a simple and effective way. An event-driven learning algorithm is proposed, which is efficient and compatible with a multitude of spike-based plasticity mechanisms. Leaky integrate-and-fire neurons with precise-spike-driven synaptic plasticity are used to demonstrate the properties of the proposed event-driven algorithm and to conduct experiments on two AER datasets (MNIST-DVS and AER Poker Card) and the MNIST dataset. The results show that the event-driven simulation is always faster than the time-driven simulation, and the proposed algorithm achieves similar accuracy to other conventional time-driven methods.
... The number of patterns that a neuron can learn to classify depends on their length, the time constants of the neurons and the synaptic inputs [12]. While in the Tempotron the action potential is allowed to occur anywhere during the time of the learned patterns, it was later shown that neurons can also be forced to fire at a specific time [10,13] during a specific pattern's presence, which can be achieved by several more or less realistic synaptic mechanisms [14,15]. Both the Tempotron and the Chronotron employ supervised learning mechanisms based on label and time, respectively. ...
Article
Full-text available
Spiking model neurons can be set up to respond selectively to specific spatio-temporal spike patterns by optimization of their input weights. It is unknown, however, if existing synaptic plasticity mechanisms can achieve this temporal mode of neuronal coding and computation. Here it is shown that changes of synaptic efficacies which tend to balance excitatory and inhibitory synaptic inputs can make neurons sensitive to particular input spike patterns. Simulations demonstrate that a combination of Hebbian mechanisms, hetero-synaptic plasticity and synaptic scaling is sufficient for self-organizing sensitivity for spatio-temporal spike patterns that repeat in the input. In networks inclusion of hetero-synaptic plasticity that depends on the pre-synaptic neurons leads to specialization and faithful representation of pattern sequences by a group of target neurons. Pattern detection is robust against a range of distortions and noise. The proposed combination of Hebbian mechanisms, hetero-synaptic plasticity and synaptic scaling is found to protect the memories for specific patterns from being overwritten by ongoing learning during extended periods when the patterns are not present. This suggests a novel explanation for the long term robustness of memory traces despite ongoing activity with substantial synaptic plasticity. Taken together, our results promote the plausibility of precise temporal coding in the brain.