Reservoir-based evolving spiking neural network
for spatio-temporal pattern recognition
Stefan Schliebs1, Haza Nuzly Abdull Hamed1,2, and Nikola Kasabov1,3
1KEDRI, Auckland University of Technology, New Zealand
WWW home page: www.kedri.info
2Soft Computing Research Group, Universiti Teknologi Malaysia
81310 UTM Johor Bahru, Johor, Malaysia
3Institute for Neuroinformatics, ETH and University of Zurich, Switzerland
Abstract. Evolving spiking neural networks (eSNN) are computational models
that are trained in a one-pass mode from streams of data. They evolve their
structure and functionality from incoming data. The paper presents an extension
of eSNN called reservoir-based eSNN (reSNN) that allows efficient processing
of spatio-temporal data. By classifying the response of a recurrent spiking neural
network that is stimulated by a spatio-temporal input signal, the eSNN acts as
a readout function for a Liquid State Machine. The classification characteristics
of the extended eSNN are illustrated and investigated using the LIBRAS sign
language dataset. The paper provides some practical guidelines for configuring
the proposed model and shows a competitive classification performance in the
obtained experimental results.
Keywords: Spiking Neural Networks, Evolving Systems, Spatio-Temporal Pattern Recognition

1 Introduction
The desire to better understand the remarkable information processing capabilities of
the mammalian brain has led to the development of more complex and biologically
plausible connectionist models, namely spiking neural networks (SNN); see [3] for
a comprehensive standard text on the material. These models use trains of spikes as
internal information representation rather than continuous variables. Nowadays, many
studies attempt to use SNN for practical applications, some of them demonstrating very
promising results in solving complex real world problems.
An evolving spiking neural network (eSNN) architecture was proposed in [18]. The
eSNN belongs to the family of Evolving Connectionist Systems (ECoS), which was
first introduced in [9]. ECoS-based methods represent a class of constructive ANN
algorithms that modify both the structure and the connection weights of the network as part
of the training process. Due to the evolving nature of the network and the employed
fast one-pass learning algorithm, the method is able to accumulate information as it
becomes available, without the requirement of retraining the network on previously
presented data. The review in [17] summarises the latest developments in ECoS-related
research; we refer to [13] for a comprehensive discussion of the eSNN classification method.

Fig. 1: Architecture of the extended eSNN capable of processing spatio-temporal data.
The colored (dashed) boxes indicate the novel parts in the original eSNN architecture.
The eSNN classifier learns the mapping from a single data vector to a specified class
label. It is mainly suitable for the classification of time-invariant data. However, many
data volumes are continuously updated, adding an additional time dimension to the data
sets. In [14], the authors outlined an extension of eSNN to reSNN which principally
enables the method to process spatio-temporal information. Following the principle of
a Liquid State Machine (LSM) [10], the extension includes an additional layer in the
network architecture, i.e. a recurrent SNN acting as a reservoir. The reservoir transforms
a spatio-temporal input pattern into a single high-dimensional network state which in
turn can be mapped into a desired class label by the one-pass learning algorithm of eSNN.
In this paper, the reSNN extension presented in [14] is implemented and its suitability
as a classification method is analyzed in computer simulations. We use a well-known
real-world data set, the LIBRAS sign language data set [2], in order to allow an
independent comparison with related techniques. The goal of the study is to gain some
general insights into the working of the reservoir based eSNN classification and to de-
liver a proof of concept of its feasibility.
2 Spatio-temporal pattern recognition with reSNN
The reSNN classification method is built upon a simplified integrate-and-fire neural
model first introduced in [16] that mimics the information processing of the human
eye. We refer to [13] for a comprehensive description and analysis of the method. The
proposed reSNN is illustrated in Figure 1. The novel parts in the architecture are indi-
cated by the highlighted boxes. We outline the working of the method by explaining the
diagram from left to right.
Spatio-temporal data patterns are presented to the reSNN system in the form of an
ordered sequence of real-valued data vectors. In a first step, each real value of a data
vector is transformed into a spike train using a population encoding. This encoding
distributes a single input value to multiple neurons. Our implementation is based on arrays
of receptive fields as described in [1]. Receptive fields allow the encoding of continuous
values by using a collection of neurons with overlapping sensitivity profiles.
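As an illustration, the receptive-field encoding can be sketched as follows. This is a minimal sketch assuming the Gaussian-field scheme of [1]; the input range [0, 1], the 10 ms encoding window and the function name are our own illustrative choices.

```python
import numpy as np

def population_encode(value, n_fields=30, beta=1.5, i_min=0.0, i_max=1.0, t_max=10.0):
    """Encode one real value into firing times of `n_fields` input neurons
    with overlapping Gaussian receptive fields. A strongly excited neuron
    fires early, a weakly excited one late. Returns firing times in ms."""
    m = np.arange(1, n_fields + 1)
    # Field centers are spread evenly over (and slightly beyond) the input range.
    centers = i_min + (2 * m - 3) / 2.0 * (i_max - i_min) / (n_fields - 2)
    width = (i_max - i_min) / (beta * (n_fields - 2))
    excitation = np.exp(-0.5 * ((value - centers) / width) ** 2)  # in (0, 1]
    return t_max * (1.0 - excitation)  # high excitation -> early spike time
```

For value = 0.5, the neurons whose centers lie near 0.5 fire first, while fields far from the input remain almost silent, firing close to t_max.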
As a result of the encoding, input neurons spike at predefined times according to the
presented data vectors. The input spike trains are then fed into a spatio-temporal filter
which accumulates the temporal information of all input signals into a single high-
dimensional intermediate liquid state. The filter is implemented in the form of a liquid or a
reservoir [10], i.e. a recurrent SNN, for which the eSNN acts as a readout function. The
one-pass learning algorithm of eSNN is able to learn the mapping of the liquid state into
a desired class label. The learning process successively creates a repository of trained
output neurons during the presentation of training samples. For each training sample a
new neuron is trained and then compared to the ones already stored in the repository of
the same class. If a trained neuron is considered to be too similar (in terms of its weight
vector) to the ones in the repository (according to a specified similarity threshold), the
neuron will be merged with the most similar one. Otherwise the trained neuron is added
to the repository as a new output neuron for this class. The merging is implemented as
the (running) average of the connection weights and the (running) average of the two
firing thresholds. Because of the incremental evolution of output neurons, it is possible
to accumulate information and knowledge as they become available from the input data
stream. Hence, a trained network is able to learn new data and new classes without
the need of retraining on already learned samples. We refer to [18] for a more detailed
description of the learning employed in eSNN.
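The evolve-or-merge logic described above can be sketched as follows. This is a simplified illustration: the actual eSNN derives the weight vector from the spike rank order of the sample and also maintains (and merges) a firing threshold per neuron, both of which are omitted here.

```python
import numpy as np

def esnn_train(samples, labels, sim_threshold=0.1):
    """One-pass training sketch: for every sample a candidate output neuron
    (here just its weight vector) is created and either merged with the most
    similar neuron of the same class or added to the repository."""
    repository = {}  # class label -> list of [weight_vector, merge_count]
    for x, y in zip(samples, labels):
        w = np.asarray(x, dtype=float)          # candidate neuron
        neurons = repository.setdefault(y, [])
        if neurons:
            dists = [np.linalg.norm(w - n[0]) for n in neurons]
            j = int(np.argmin(dists))
            if dists[j] < sim_threshold:
                n = neurons[j]                  # merge: running average
                n[0] = (n[0] * n[1] + w) / (n[1] + 1)
                n[1] += 1
                continue
        neurons.append([w, 1])                  # evolve a new output neuron
    return repository
```

Because every sample is seen exactly once, new samples and even new classes can be added later without retraining on earlier data, as described above.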
The reservoir is constructed of Leaky Integrate-and-Fire (LIF) neurons with exponential
synaptic currents. This neural model is based on the idea of an electrical circuit con-
taining a capacitor with capacitance C and a resistor with a resistance R, where both
C and R are assumed to be constant. The dynamics of a neuron i are then described by
the following differential equation:

    τ_m du_i(t)/dt = −u_i(t) + R I_syn,i(t)        (1)

The constant τ_m = RC is called the membrane time constant of the neuron. Whenever
the membrane potential u_i crosses the threshold ϑ from below, the neuron fires a spike
and its potential is reset to a reset potential u_r. We use an exponential synaptic current
for a neuron i, modeled by

    τ_s dI_syn,i(t)/dt = −I_syn,i(t)        (2)

with τ_s being a synaptic time constant.
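A minimal Euler-integration sketch of Eqs. (1) and (2) for a single neuron is given below. The parameter defaults follow the reservoir configuration stated later in this section; treating the input as per-step increments of the synaptic current is our own simplification.

```python
import numpy as np

def simulate_lif(i_inc, dt=0.1, tau_m=30.0, tau_s=10.0, R=1.0,
                 theta=5.0, u_reset=0.0, t_ref=5.0):
    """Integrate the LIF neuron with exponential synaptic current.
    `i_inc` holds the current increment (nA) arriving at each time step of
    size `dt` (ms); returns the list of spike times in ms."""
    u, i_syn, spikes, ref_until = 0.0, 0.0, [], -1.0
    for k, inc in enumerate(i_inc):
        t = k * dt
        i_syn += inc - dt * i_syn / tau_s        # Eq. (2): exponential decay
        if t >= ref_until:                       # outside refractory period
            u += dt * (-u + R * i_syn) / tau_m   # Eq. (1): membrane dynamics
            if u >= theta:                       # threshold crossed from below
                spikes.append(t)
                u = u_reset                      # reset potential
                ref_until = t + t_ref            # refractory period
    return spikes
```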
In our experiments we construct a liquid having a small-world inter-connectivity
pattern as described in [10]. A recurrent SNN is generated by aligning 100 neurons in a
three-dimensional grid of size 4 × 5 × 5. Two neurons A and B in this grid are connected
with the connection probability
    P(A,B) = C × e^(−(d(A,B)/λ)²)        (3)

where d(A,B) denotes the Euclidean distance between the two neurons and λ controls
the density of connections; λ = 2 was used in all simulations. Parameter C
depends on the types of the neurons. We distinguish between excitatory (ex) and inhibitory
(inh) neurons, resulting in the following values for C: C_ex−ex = 0.3, C_ex−inh = 0.2,
C_inh−ex = 0.5 and C_inh−inh = 0.1. The network contained 80% excitatory and
20% inhibitory neurons. The connection weights were randomly drawn from a uniform
distribution and scaled to the interval [−8, 8] nA. The neural parameters were set to
τ_m = 30 ms, τ_s = 10 ms, ϑ = 5 mV, u_r = 0 mV. Furthermore, a refractory period of
5 ms and a synaptic transmission delay of 1 ms were used.
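The wiring procedure can be sketched as follows (a hypothetical re-implementation using the stated grid size, connection probabilities and weight range; the function name and the random seed are our own):

```python
import itertools, math, random

def build_reservoir(shape=(4, 5, 5), lam=2.0, exc_ratio=0.8, seed=1):
    """Wire the reservoir: each ordered pair of grid neurons (A, B) is
    connected with probability C * exp(-(d(A, B) / lambda)**2), where C
    depends on the excitatory/inhibitory types of the two neurons."""
    rng = random.Random(seed)
    C = {("ex", "ex"): 0.3, ("ex", "inh"): 0.2,
         ("inh", "ex"): 0.5, ("inh", "inh"): 0.1}
    coords = list(itertools.product(*(range(s) for s in shape)))
    kinds = ["ex" if rng.random() < exc_ratio else "inh" for _ in coords]
    edges = []
    for i, a in enumerate(coords):
        for j, b in enumerate(coords):
            if i == j:
                continue
            d = math.dist(a, b)  # Euclidean distance on the 3D grid
            if rng.random() < C[(kinds[i], kinds[j])] * math.exp(-(d / lam) ** 2):
                # connection weight drawn uniformly from [-8, 8] nA
                edges.append((i, j, rng.uniform(-8.0, 8.0)))
    return coords, kinds, edges
```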
Using this configuration, the recorded liquid states did not exhibit the undesired
behavior of over-stratification and pathological synchrony, effects that are common
for randomly generated liquids [11, 12]. For the simulation of the reservoir we used the
SNN simulator Brian [4].
In order to investigate the suitability of the reservoir-based eSNN classification method,
we have studied its behavior on a spatio-temporal real-world data set. In the next
sections, we present the LIBRAS sign language data, explain the experimental setup and
discuss the obtained results.
LIBRAS is the acronym for LIngua BRAsileira de Sinais, which is the official Brazil-
ian sign language. There are 15 hand movements (signs) in the dataset to be learned
and classified. The movements are obtained from recorded video of four different peo-
ple performing the movements in two sessions. In total 360 videos have been recorded,
each video showing one movement lasting for about seven seconds. From the videos 45
frames uniformly distributed over the seven seconds have then been extracted. In each
frame, the centroid pixels of the hand are used to determine the movement. All samples
have been organized in ten sub-datasets, each representing a different classification sce-
nario. More comprehensive details about the dataset can be found in . The data can
be obtained from the UCI machine learning repository.
from three different people. This dataset is balanced consisting of 270 videos with 18
samples for each of the 15 classes. An illustration of the dataset is given in Figure 2.
The diagrams show a single sample of each class.
As described in Section 2, a population encoding has been applied to transform the data
into spike trains. This method is characterized by the number of receptive fields used
for the encoding along with the width β of the Gaussian receptive fields. After some
initial experiments, we decided to use 30 receptive fields and a width of β = 1.5. More
details of the method can be found in [1].
Fig. 2: The LIBRAS data set. A single sample of each of the 15 classes is shown:
curved swing, horizontal swing, vertical swing, anti-clockwise arc, clockwise arc,
circle, horizontal straight-line, vertical straight-line, tremble, horizontal zigzag,
vertical zigzag, horizontal wavy, vertical wavy, face-up curve and face-down curve.
The color indicates the time frame of a given data point (black/white corresponds to
earlier/later time points).
In order to perform a classification of the input sample, the state of the liquid at
a given time t has to be read out from the reservoir. How such a liquid state is defined
is critical for the working of the method. In this study we investigate three
different types of readouts. We call the first type a cluster readout. The neurons in the
reservoir are first grouped into clusters and then the population activity of the neurons
belonging to the same cluster is determined. The population activity was defined in [3]
and is the ratio of neurons being active in a given time interval [t − ∆ct, t]. Initial
experiments suggested using 25 clusters and a time window of ∆ct = 10 ms.
Since our reservoir contains 100 neurons simulated over a time period of T = 300 ms,
T/∆ct = 30 readouts can be extracted for a specific input data sample, each of them
corresponding to a single vector with 25 continuous elements. Similar readouts have
also been employed in related studies [11, 12].
The second readout is principally very similar to the first one. In the interval
[t − ∆ft, t] we determine the firing frequency of all neurons in the reservoir. Given
our reservoir setup, this frequency readout produces a single vector with 100 continuous
elements. We used a time window of ∆ft = 30 ms, resulting in the extraction of T/∆ft =
10 readouts for a specific input data sample.
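Assuming the reservoir response is recorded as one list of firing times (in ms) per neuron, the frequency readout can be sketched as:

```python
def frequency_readout(spike_trains, t, window=30.0):
    """Count the spikes of every reservoir neuron in [t - window, t],
    yielding one readout value per neuron (spike counts per window)."""
    return [sum(1 for s in train if t - window <= s <= t)
            for train in spike_trains]
```

Calling `frequency_readout(trains, t=120.0)` on the 100 recorded spike trains then produces the 100-element readout vector for readout time t = 120 ms.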
Fig. 3: Classification accuracy of eSNN for the three readouts extracted at different times
during the simulation of the reservoir (top row of diagrams). The best accuracy obtained
is marked with a small (red) circle. For the marked time points (t = 40 ms, 120 ms and
130 ms), the readouts of all 270 samples of the data are shown (bottom row).

Finally, in the analog readout, every spike is convolved by a kernel function that
transforms the spike train of each neuron in the reservoir into a continuous analog signal.
Many possibilities for such a kernel function exist, such as Gaussian and exponential
kernels. In this study, we use the alpha kernel α(t) = e τ⁻¹ t e^(−t/τ) Θ(t), where
Θ(t) refers to the Heaviside function and parameter τ = 10 ms is a time constant. The
convolved spike trains are then sampled using a time step of ∆at = 10 ms, resulting in
100 time series, one for each neuron in the reservoir. In these series, the data points
at time t represent the readout for the presented input sample. A very similar readout
was used in [15] for a speech recognition problem. Due to the sampling interval ∆at,
T/∆at = 30 different readouts for a specific input data sample can be extracted during
the simulation of the reservoir.
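Under the same assumption of recorded firing times per neuron, the analog readout can be sketched as follows; the kernel is the alpha kernel from above, whose peak value is normalised to 1 at t = τ.

```python
import numpy as np

def analog_readout(spike_trains, t, tau=10.0):
    """Convolve every neuron's spike train with the alpha kernel
    a(s) = e * (s / tau) * exp(-s / tau) for s >= 0 (Heaviside) and
    sample the resulting analog signal at readout time t (ms)."""
    state = np.zeros(len(spike_trains))
    for i, train in enumerate(spike_trains):
        s = t - np.asarray(train, dtype=float)  # time since each spike
        s = s[s >= 0.0]                         # only past spikes contribute
        state[i] = np.sum(np.e * (s / tau) * np.exp(-s / tau))
    return state
```

A spike fired exactly τ = 10 ms before the readout time contributes its maximal value of 1, while much older or silent neurons contribute values close to 0.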
All readouts extracted at a given time have been fed to the standard eSNN for
classification. Based on preliminary experiments, some initial eSNN parameters were chosen.
We set the modulation factor m = 0.99, the proportion factor c = 0.46 and the similarity
threshold s = 0.01. Using this setup we classified the extracted liquid states over all
possible readout times.
The evolution of the accuracy over time for each of the three readout methods is
presented in Figure 3. Clearly, the cluster readout is the least suitable among the
tested readouts. The best accuracy found is 60.37% for the readout extracted at time 40 ms
(we note that the average accuracy of a random classifier on this 15-class problem is
around 6.7%), cf. the marked time point in the upper left diagram of the figure. The
readouts extracted at time 40 ms are presented in the lower left diagram. A row in this
diagram is the readout vector of one of the 270 samples, the color indicating the real
value of the elements in that vector. The samples are ordered to allow a visual
discrimination of the 15 classes. The first 18 rows belong to class 1 (curved swing), the
next 18 rows to class 2 (horizontal swing), and so on. Given the extracted readout
vector, it is possible
to even visually distinguish between certain classes of samples. However, there are also
significant similarities between classes of readout vectors visible, which clearly have a
negative impact on the classification accuracy.
The situation improves when the frequency readout is used, resulting in a maximum
classification accuracy of 78.51% for the readout vector extracted at time 120 ms, cf.
middle top diagram in Figure 3. We also note the visibly better discrimination ability
of the classes of readout vectors in the middle lower diagram: the intra-class distance
between samples belonging to the same class is small, while the inter-class distance
between samples of different classes is large. However, the best accuracy was achieved using the
analog readout extracted at time 130ms (right diagrams in Figure 3). Patterns of differ-
ent classes are clearly distinguishable in the readout vectors resulting in a good classi-
fication accuracy of 82.22%.
3.4 Parameter and feature optimization of reSNN
The previous section demonstrated that many parameters of the reSNN need
to be optimized in order to achieve satisfactory results (the results shown in Figure 3
are only as good as the chosen parameters). Here, in order to further
improve the classification accuracy of the analog readout vector classification, we have
optimized the parameters of the eSNN classifier along with the input features (the vector
elements that represent the state of the reservoir) using the Dynamic Quantum-inspired
Particle Swarm Optimization (DQiPSO) [5]. The readout vectors are extracted at
time 130 ms, since this time point yielded the most promising classification accuracy.
For the DQiPSO, 20 particles were used, consisting of eight update, three filter,
three random, three embed-in and three embed-out particles. Parameters c1 and c2, which
control the exploration corresponding to the global best (gbest) and the personal best
(pbest) respectively, were both set to 0.05. The inertia weight was set to w = 2. See [5]
for further details on these parameters and the working of DQiPSO. We used 18-fold
cross-validation and results were averaged over 500 iterations in order to estimate the
classification accuracy of the model.
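For illustration only, the principle of swarm-based feature-subset search can be sketched with a plain binary PSO. This is not the DQiPSO of [5], which additionally uses quantum-inspired probability updates and the different particle types listed above; the update probabilities below are our own arbitrary choices.

```python
import random

def binary_pso(fitness, n_bits, n_particles=20, iters=50, seed=1):
    """Maximise `fitness` over 0/1 masks (e.g. feature subsets) with a
    crude binary PSO: every bit is pulled toward the personal best and the
    global best, with a small chance of a random flip for exploration."""
    rng = random.Random(seed)
    pos = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pscore = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pscore[i])  # global best index
    for _ in range(iters):
        for i, p in enumerate(pos):
            for d in range(n_bits):
                r = rng.random()
                if r < 0.4:
                    p[d] = pbest[i][d]          # attraction to personal best
                elif r < 0.8:
                    p[d] = pbest[g][d]          # attraction to global best
                else:
                    p[d] = rng.randint(0, 1)    # random exploration
            s = fitness(p)
            if s > pscore[i]:
                pbest[i], pscore[i] = p[:], s
                if s > pscore[g]:
                    g = i
    return pbest[g], pscore[g]
```

A feature mask returned by such a search would select which elements of the readout vector are fed to the eSNN classifier.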
The evolution of the accuracy obtained from the global best particle during the
PSO optimization process is presented in Figure 4a. The optimization clearly improves
the classification abilities of eSNN. After the DQiPSO optimization an accuracy of
88.59% (±2.34%) is achieved. In comparison to our previous experiments [6] on this
dataset, the time-delay eSNN performs very similarly, reporting an accuracy of 88.15%
(±6.26%). The test accuracy of an MLP under the same conditions of training and
testing was found to be 82.96% (±5.39%).
Figure 4b presents the evolution of the selected features during the optimization
process. The color of a point in this diagram reflects how often a specific feature was
selected at a certain generation: the lighter the color, the more often the corresponding
feature was selected. It can clearly be seen that a large number of features have been
discarded during the evolutionary process. The pattern of relevant features matches the
elements of the readout vector having larger values, cf. the dark points in Figure 3
compared to the selected features in Figure 4.

Fig. 4: Evolution of (a) the classification accuracy and (b) the feature subsets, based
on the global best solution during the optimization with DQiPSO.
4 Conclusion and future directions
This study has proposed an extension of the eSNN architecture, called reSNN, that
enables the method to process spatio-temporal data. Using a reservoir computing ap-
proach, a spatio-temporal signal is projected into a single high-dimensional network
state that can be learned by the eSNN training algorithm. We conclude from the exper-
imental analysis that the suitable setup of the reservoir is not an easy task and future
studies should identify ways to automate or simplify that procedure. However, once
the reservoir is configured properly, the eSNN is shown to be an efficient classifier of
the liquid states extracted from the reservoir. Satisfactory classification results could be
achieved that compare well with related machine learning techniques applied to the
same data set in previous studies. Future directions include the development of new
learning algorithms for the reservoir of the reSNN and the application of the method
to other spatio-temporal real-world problems such as video or audio pattern recognition
tasks. Furthermore, we intend to develop an implementation on specialised SNN
hardware [7, 8] to allow the classification of spatio-temporal data streams in real time.
Acknowledgements. The work on this paper has been supported by the Knowledge
Engineering and Discovery Research Institute (KEDRI, www.kedri.info). One of the
authors, NK, has been supported by a Marie Curie International Incoming Fellowship
within the FP7 European Framework Programme under the project "EvoSpike", hosted
by the Neuromorphic Cognitive Systems Group of the Institute for Neuroinformatics of
the ETH and the University of Zürich.
References

1. Bohte, S.M., Kok, J.N., Poutré, J.A.L.: Error-backpropagation in temporally encoded net-
works of spiking neurons. Neurocomputing 48(1–4), 17–37 (2002)
2. Dias, D.B., Madeo, R.C.B., Rocha, T., Bíscaro, H.H., Peres, S.M.: Hand movement recog-
nition for Brazilian sign language: A study using distance-based neural networks. In: Neural
Networks, 2009. IJCNN 2009. International Joint Conference on. pp. 697–704 (2009)
3. Gerstner, W., Kistler, W.M.: Spiking Neuron Models: Single Neurons, Populations, Plastic-
ity. Cambridge University Press, Cambridge, MA (2002)
4. Goodman, D., Brette, R.: Brian: a simulator for spiking neural networks in python. BMC
Neuroscience 9(Suppl 1), P92 (2008)
5. Hamed, H., Kasabov, N., Shamsuddin, S.: Probabilistic evolving spiking neural network op-
timization using dynamic quantum-inspired particle swarm optimization. Australian Journal
of Intelligent Information Processing Systems 11(01), 23–28 (2010)
6. Hamed, H., Kasabov, N., Shamsuddin, S., Widiputra, H., Dhoble, K.: An extended evolving
spiking neural network model for spatio-temporal pattern classification. In: 2011 Interna-
tional Joint Conference on Neural Networks. pp. 2653–2656 (2011)
7. Indiveri, G., Chicca, E., Douglas, R.: Artificial cognitive systems: From VLSI networks of
spiking neurons to neuromorphic cognition. Cognitive Computation 1, 119–127 (2009)
8. Indiveri, G., Stefanini, F., Chicca, E.: Spike-based learning with a generalized integrate and
fire silicon neuron. In: International Symposium on Circuits and Systems, ISCAS’10. pp.
1951–1954. IEEE (2010)
9. Kasabov, N.: The ECOS framework and the ECO learning method for evolving connectionist
systems. JACIII 2(6), 195–202 (1998)
10. Maass, W., Natschläger, T., Markram, H.: Real-time computing without stable states: A
new framework for neural computation based on perturbations. Neural Computation 14(11),
2531–2560 (2002)
11. Norton, D., Ventura, D.: Preparing more effective liquid state machines using hebbian learn-
ing. In: International Joint Conference on Neural Networks, IJCNN 2006. pp. 4243–4248.
IEEE, Vancouver, BC (2006)
12. Norton, D., Ventura, D.: Improving liquid state machines through iterative refinement of the
reservoir. Neurocomputing 73(16-18), 2893 – 2904 (2010)
13. Schliebs, S., Defoin-Platel, M., Worner, S., Kasabov, N.: Integrated feature and parameter
optimization for an evolving spiking neural network: Exploring heterogeneous probabilistic
models. Neural Networks 22(5-6), 623 – 632 (2009)
14. Schliebs, S., Nuntalid, N., Kasabov, N.: Towards spatio-temporal pattern recognition using
evolving spiking neural networks. In: Wong, K., Mendis, B., Bouzerdoum, A. (eds.) Neural
Information Processing. Theory and Algorithms, Lecture Notes in Computer Science, vol.
6443, pp. 163–170. Springer Berlin / Heidelberg (2010)
15. Schrauwen, B., D'Haene, M., Verstraeten, D., Campenhout, J.V.: Compact hardware liquid
state machines on FPGA for real-time speech recognition. Neural Networks 21(2–3), 511–523
(2008)
16. Thorpe, S.J.: How can the human visual system process a natural scene in under 150ms? On
the role of asynchronous spike propagation. In: ESANN. D-Facto public (1997)
17. Watts, M.: A decade of Kasabov's evolving connectionist systems: A review. IEEE Transac-
tions on Systems, Man, and Cybernetics, Part C: Applications and Reviews 39(3), 253–269
(2009)
18. Wysoski, S.G., Benuskova, L., Kasabov, N.K.: Adaptive learning procedure for a network of
spiking neurons and visual pattern recognition. In: Advanced Concepts for Intelligent Vision
Systems. pp. 1133–1142. Springer, Berlin / Heidelberg (2006)