Extending the Neural Engineering Framework
for Nonideal Silicon Synapses
Aaron R. Voelker, Ben V. Benjamin, Terrence C. Stewart, Kwabena Boahen and Chris Eliasmith
{arvoelke, tcstewar, celiasmith} {benvb, boahen}
Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada.
Bioengineering and Electrical Engineering, Stanford University, Stanford, CA, U.S.A.
Abstract—The Neural Engineering Framework (NEF) is a
theory for mapping computations onto biologically plausible
networks of spiking neurons. This theory has been applied to a
number of neuromorphic chips. However, within both silicon and
real biological systems, synapses exhibit higher-order dynamics
and heterogeneity. To date, the NEF has not explicitly addressed
how to account for either feature. Here, we analytically extend
the NEF to directly harness the dynamics provided by heteroge-
neous mixed-analog-digital synapses. This theory is successfully
validated by simulating two fundamental dynamical systems in
Nengo using circuit models validated in SPICE. Thus, our work
reveals the potential to engineer robust neuromorphic systems
with well-defined high-level behaviour that harness the low-
level heterogeneous properties of their physical primitives with
millisecond resolution.
I. INTRODUCTION

The field of neuromorphic engineering is concerned with building specialized hardware to emulate the functioning of the nervous system [1]. The Neural Engineering Framework (NEF; [2]) complements this goal with a theory for "compiling" dynamical systems onto spiking neural networks,
and has been used to develop the largest functioning model
of the human brain, capable of performing various perceptual,
cognitive, and motor tasks [3]. This theory allows one to map
an algorithm, expressed in software [4], onto some neural
substrate realized in silicon [5]. The NEF has been applied to
neuromorphic chips including Neurogrid [5], [6] and a VLSI
prototype from ETH Zurich [7].
However, the NEF assumes that the postsynaptic current
(PSC) induced by a presynaptic spike is modelled by a
first-order lowpass filter (LPF); that is, each incoming spike is modelled as an impulse convolved with an exponentially decaying impulse-response. Furthermore, the exponential time-constant is assumed to be the same for all synapses within
the same population. In silicon, synapses are neither first-order nor homogeneous, and spikes are not represented by impulses.¹ Synapse circuits have parasitic elements that result in higher-order dynamics, transistor mismatch introduces
variability from circuit to circuit, and spikes are represented by
pulses with finite width and height. Previously, these features
restricted the overall accuracy of the NEF within neuromorphic
hardware (e.g., in [5], [6], [7]).
The silicon synapses that we study here are mixed-analog-
digital designs that implement a pulse-extender [9] and a first-
order LPF [10], modelled as a second-order LPF to account
¹ These statements also hold for real biological systems [8].
for parasitic capacitances. We also account for the variability
(i.e., heterogeneity) introduced by transistor mismatch in the
extended pulse’s width and height, and in the LPF’s two time-
constants. In §II, we demonstrate how to extend the NEF to
directly harness these features for system-level computation.
This extension is tested by software simulation in §IV using
the circuit models described in §III.
II. NEURAL ENGINEERING FRAMEWORK

The NEF consists of three principles for describing neural computation: representation, transformation, and dynamics [2].
This framework enables the mapping of dynamical systems
onto recurrently connected networks of spiking neurons. We
begin by providing a self-contained overview of these three
principles using an ideal first-order LPF. We then extend these principles to the heterogeneous pulse-extended second-order LPF, and show how this maps onto a target neuromorphic architecture.
A. Principle 1 – Representation
The first NEF principle states that a vector $x(t) \in \mathbb{R}^k$ may be encoded into the spike-trains $\delta_i$ of $n$ neurons with rates:

$$r_i(x) = G_i\left[\alpha_i\, e_i \cdot x(t) + \beta_i\right], \quad i = 1 \ldots n, \qquad (1)$$

where $G_i$ is a neuron model whose input current to the soma is the linear encoding $\alpha_i\, e_i \cdot x(t) + \beta_i$, with gain $\alpha_i > 0$, unit-length encoding vector $e_i$ (the row-vectors of $E \in \mathbb{R}^{n \times k}$), and bias current $\beta_i$. The state $x(t)$ is typically decoded from spike-trains (see Principle 2) by convolving them with a first-order LPF that models the PSC triggered by spikes arriving at the synaptic cleft. We denote this filter as $h(t)$ in the time-domain and as $H(s)$ in the Laplace domain:

$$h(t) = \frac{1}{\tau} e^{-t/\tau} \quad \Longleftrightarrow \quad H(s) = \frac{1}{\tau s + 1}. \qquad (2)$$

Traditionally, the same time-constant is used for all synapses projecting to a given population.²
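As a concrete sketch of Principle 1, the encoding (1) and the synapse (2) amount to a few lines of NumPy. The rectified-linear nonlinearity below is a stand-in for the neuron model $G_i$ (it is not the silicon soma described later), and all parameter names are illustrative:

```python
import numpy as np

def encode_rates(x, E, alpha, beta):
    """Principle 1, eq. (1): r_i(x) = G_i[alpha_i <e_i, x> + beta_i].

    x     : (k,) represented state vector
    E     : (n, k) unit-length encoding vectors (rows e_i)
    alpha : (n,) gains;  beta : (n,) bias currents
    G is a rectified-linear stand-in for the neuron model."""
    J = alpha * (E @ x) + beta       # somatic input currents
    return np.maximum(J, 0)          # G[J]

def lowpass(t, tau):
    """Eq. (2): first-order synapse h(t) = exp(-t/tau)/tau for t >= 0."""
    return np.where(t >= 0, np.exp(-t / tau) / tau, 0.0)
```

Since $h$ integrates to one, the filter preserves the mean of its input while smoothing spikes into PSC-like traces.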
B. Principle 2 – Transformation
The second principle is concerned with decoding some desired vector function $f : S \to \mathbb{R}^k$ of the represented vector. Here, $S$ is the domain of the vector $x(t)$ represented via Principle 1—typically the unit $k$-cube or the unit $k$-ball. Let $r_i(x)$ denote the expected firing-rate of the $i$th neuron in
² See [11] for a recent exception.
Fig. 1. Standard Principle 3 (see (6)) mapped onto an ideal architecture to implement a general nonlinear dynamical system (see (5)). The state-vector $x$ is encoded in a population of neurons via Principle 1. The required signal $w$ is approximated by $\tau u$ plus the recurrent decoders for $\tau f(x) + x$ applied to $\delta$, such that the first-order LPF correctly outputs $x$. The output vector $y$ is approximated using the decoders $D^{g(x)}$.
response to a constant input $x$ encoded via (1). To account for noise from spiking and extrinsic sources of variability, we introduce the noise term $\eta \sim \mathcal{N}(0, \sigma^2)$. Then the matrix $D^{f(x)} \in \mathbb{R}^{n \times k}$ that optimally decodes $f(x)$ from the spike-trains $\delta$ encoding $x$ is obtained by solving the following problem (via regularized least-squares):

$$D^{f(x)} = \arg\min_{D \in \mathbb{R}^{n \times k}} \int_S \left\| f(x) - \sum_{i=1}^{n} \left(r_i(x) + \eta\right) d_i \right\|_2^2 \, dx, \qquad (3)$$

where $d_i$ are the rows of $D$. The decoded estimate is then the filtered spike-trains weighted by these decoders:

$$\sum_{i=1}^{n} (\delta_i * h)(t)\, d_i^{f(x)} \approx (f(x) * h)(t). \qquad (4)$$

The quantity in (4) may then be encoded via Principle 1 to complete the connection between two populations of neurons.³
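In practice (3) reduces to a ridge regression over sampled rates. A minimal sketch follows; scaling the ridge term by the number of samples and the maximum rate is one common convention (an assumption here, not a detail specified above):

```python
import numpy as np

def solve_decoders(A, F, noise_std=0.1):
    """Eq. (3): D = argmin_D E_eta || F - (A + eta) D ||^2, solved in
    expectation over eta ~ N(0, sigma^2) as a ridge regression.

    A : (s, n) firing rates r_i(x) at s sample points of S
    F : (s, k) target values f(x)
    noise_std : sigma expressed as a fraction of the maximum rate."""
    s, n = A.shape
    sigma = noise_std * np.max(A)
    G = A.T @ A + s * sigma**2 * np.eye(n)   # regularized Gram matrix
    return np.linalg.solve(G, A.T @ F)       # (n, k) decoders

# Usage: decode the identity f(x) = x from rectified-linear tuning curves.
rng = np.random.default_rng(1)
xs = np.linspace(-1, 1, 200).reshape(-1, 1)      # sample S = [-1, 1]
e = rng.choice([-1.0, 1.0], size=50)             # 1D encoders
alpha = rng.uniform(0.5, 2.0, size=50)
beta = rng.uniform(-1.0, 1.0, size=50)
A = np.maximum(alpha * (xs * e) + beta, 0)       # (200, 50) rates
D = solve_decoders(A, xs, noise_std=0.01)
rmse = np.sqrt(np.mean((A @ D - xs) ** 2))
```

With a few dozen heterogeneous tuning curves the reconstruction error is already small, which is what makes the population code useful for transformation.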
C. Principle 3 – Dynamics
The third principle addresses the problem of implementing the following nonlinear dynamical system:

$$\dot{x} = f(x) + u, \quad u(t) \in \mathbb{R}^k, \qquad y = g(x). \qquad (5)$$
Since we take the synapse (2) to be the dominant source
of dynamics for the represented vector [2, p. 327], we must
essentially “convert” (5) into an equivalent system where the
integrator is replaced by a first-order LPF. This transformation
is accomplished by driving the filter $h(t)$ with:

$$w := \tau \dot{x} + x = (\tau f(x) + x) + (\tau u) \qquad (6)$$
$$\implies (w * h)(t) = x(t), \qquad (7)$$

so that convolution with $h(t)$ achieves the desired integration.
Therefore, the problem reduces to representing x(t)in a
population of neurons using Principle 1, while recurrently
decoding w(t)using the methods of Principle 2 (Fig. 1).
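Principle 3 can be checked numerically without any neurons: drive a discretized first-order LPF with $w = \tau f(x) + x + \tau u$, feed the LPF's own state back as $x$, and the filter behaves as the integrator in (5). A sketch with perfect decoding assumed:

```python
import numpy as np

def simulate_principle3(f, u, tau=0.1, dt=1e-4, T=2.0):
    """Eqs. (5)-(7): the LPF output tracks x, where x_dot = f(x) + u(t).
    f : scalar function of the state; u : scalar function of time."""
    a = np.exp(-dt / tau)        # exact zero-order-hold update of eq. (2)
    x = 0.0
    xs = []
    for i in range(int(T / dt)):
        w = tau * f(x) + x + tau * u(i * dt)   # eq. (6)
        x = a * x + (1 - a) * w                # first-order LPF
        xs.append(x)
    return np.array(xs)
```

For the pure integrator ($f = 0$, constant $u = 0.5$) the output after 2 s is close to 1.0, and for $f(x) = -5x$ with $u = 5$ it settles near the fixed point $x = 1$, exactly as (5) prescribes.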
D. Extensions to Silicon Synapses
Consider an array of $m$ heterogeneous pulse-extended second-order LPFs (in the Laplace domain):

$$H_j(s) = \frac{\gamma_j \left(1 - e^{-\epsilon_j s}\right) s^{-1}}{(\tau_{j,1} s + 1)(\tau_{j,2} s + 1)}, \quad j = 1 \ldots m, \qquad (8)$$

where $\epsilon_j$ is the width of the extended pulse, $\gamma_j$ is the height of the extended pulse, and $\tau_{j,1}, \tau_{j,2}$ are the two time-constants of the LPF. $H_j(s)$, whose circuit is described in
³ The effective weight-matrix in this case is $W = E\, D^{f(x)\,T}$.
Fig. 2. Using extended Principle 3 (see (11)) to implement a general nonlinear dynamical system (see (5)) on a neuromorphic architecture. The matrix representation $\Phi$ is linearly transformed by $\Gamma_j$ to drive the $j$th synapse. Dashed lines surround the silicon-neuron array that filters and encodes $x$ into spike-trains (see Fig. 3).
§III, is an extended pulse $\gamma_j \left(1 - e^{-\epsilon_j s}\right) s^{-1}$ convolved with a second-order LPF $\left((\tau_{j,1} s + 1)(\tau_{j,2} s + 1)\right)^{-1}$. These higher-
order effects result in incorrect dynamics when the NEF is
applied using the standard Principle 3 (e.g., in [5], [6], [7]),
as shown in §IV.
From (7), observe that we must drive the $j$th synapse with some signal $w_j(t)$ that satisfies the following ($W(s)$ denotes the Laplace transform of $w(t)$, and we omit $j$ for clarity):

$$W(s)\left(1 - e^{-\epsilon s}\right) s^{-1} = \gamma^{-1}\left(1 + (\tau_1 + \tau_2)s + \tau_1\tau_2 s^2\right) X(s). \qquad (9)$$

To solve for $w$ in practice, we first substitute $1 - e^{-\epsilon s} = \epsilon s - (\epsilon^2 s^2)/2 + O(\epsilon^3 s^3)$, and then convert back to the time-domain to obtain the following approximation:

$$w = (\epsilon\gamma)^{-1}\left(x + (\tau_1 + \tau_2)\dot{x} + \tau_1\tau_2 \ddot{x}\right) + (\epsilon/2)\,\dot{w}. \qquad (10)$$

Next, we differentiate both sides of (10):

$$\dot{w} = (\epsilon\gamma)^{-1}\left(\dot{x} + (\tau_1 + \tau_2)\ddot{x} + \tau_1\tau_2 \dddot{x}\right) + (\epsilon/2)\,\ddot{w},$$

and substitute this $\dot{w}$ back into (10) to obtain:

$$w = (\epsilon\gamma)^{-1}\left(x + (\tau_1 + \tau_2 + \epsilon/2)\,\dot{x} + \left(\tau_1\tau_2 + (\epsilon/2)(\tau_1 + \tau_2)\right)\ddot{x}\right) + (\epsilon\gamma)^{-1}(\epsilon/2)\,\tau_1\tau_2 \dddot{x} + (\epsilon^2/4)\,\ddot{w}.$$

Finally, we make the approximation $(\epsilon\gamma)^{-1}(\epsilon/2)\,\tau_1\tau_2 \dddot{x} + (\epsilon^2/4)\,\ddot{w} \approx 0$, which yields the following solution to (9):

$$w_j = \Phi\,\Gamma_j, \quad \Phi := \left[x \;\; \dot{x} \;\; \ddot{x}\right], \quad \Gamma_j := (\epsilon_j \gamma_j)^{-1} \begin{bmatrix} 1 \\ \tau_{j,1} + \tau_{j,2} + \epsilon_j/2 \\ \tau_{j,1}\tau_{j,2} + (\epsilon_j/2)(\tau_{j,1} + \tau_{j,2}) \end{bmatrix}, \qquad (11)$$
where $\epsilon_j \gamma_j$ is the area of the extended pulse.⁴ We compute the time-varying matrix representation $\Phi$ in the recurrent connection (plus an input transformation) via Principle 2 (similar
⁴ This form for (6) is $\Phi = [x \;\; \dot{x}]$ and $\Gamma_j = [1, \tau]^T$. Further generalizations are explored in [12].
Fig. 3. Silicon synapse and soma. The synapse consists of a pulse-extender ($M_{P1,2}$) and a LPF ($M_{L1-6}$). The pulse-extender converts subnanosecond digital-pulses—representing input spikes—into submillisecond current-pulses ($I_{PE}$). Then the LPF filters these current-pulses to produce the synapse's output ($I_{syn}$). The soma integrates this output current on a capacitor ($C_S$) and generates a subnanosecond digital-pulse—representing an output spike—through positive-feedback ($M_{S2-5}$). This pulse is followed by a reset pulse, which discharges the capacitor. This schematic has been simplified for clarity.
to Fig. 1). We then drive the jth synapse with its time-invariant
linear transformation Γj(Fig. 2).
A more refined solution may be found by expanding $1 - e^{-\epsilon s}$ to the third order, which adds $\epsilon^2/12$ to the third coordinate of $\Gamma_j$.⁵ However, this refinement does not improve our results. Some remaining details are addressed in §IV.
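The mapping (11) from per-synapse parameters to the drive transform is a one-liner, and a frequency-domain check confirms that $H_j(s)\,(\Gamma_{j,1} + \Gamma_{j,2} s + \Gamma_{j,3} s^2) \approx 1$ at low frequencies. The numbers below are taken near the distribution means reported in Fig. 4 (with $\gamma$ converted to s⁻¹); they are illustrative, not fitted values:

```python
import numpy as np

def gamma_j(eps, gam, tau1, tau2):
    """Eq. (11): Gamma_j for a pulse-extended second-order LPF with pulse
    width eps (s), height gam (1/s), and time-constants tau1, tau2 (s).
    Driving synapse j with Phi @ Gamma_j, where Phi = [x, x_dot, x_ddot],
    approximately inverts H_j(s)."""
    return np.array([
        1.0,
        tau1 + tau2 + eps / 2,
        tau1 * tau2 + (eps / 2) * (tau1 + tau2),
    ]) / (eps * gam)

def H(s, eps, gam, tau1, tau2):
    """Eq. (8): the pulse-extended second-order LPF."""
    return gam * (1 - np.exp(-eps * s)) / (s * (tau1*s + 1) * (tau2*s + 1))

eps, gam, t1, t2 = 4e-4, 1e3, 31e-3, 8e-4      # near the Fig. 4 means
g = gamma_j(eps, gam, t1, t2)
s = 2j * np.pi * 10                             # evaluate at 10 Hz
loop = H(s, eps, gam, t1, t2) * (g[0] + g[1]*s + g[2]*s**2)
```

As a sanity check of footnote 4, shrinking the pulse while holding its area $\epsilon\gamma$ at one recovers the ideal first-order transform $[1, \tau, \approx 0]$.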
III. SILICON SYNAPSE AND SOMA

We now describe the silicon synapse and soma circuits (Fig. 3) that we use to validate our extensions to the NEF.
An incoming pulse discharges $C_P$, which $I_\epsilon$ subsequently charges [9]. As a result, $M_{P2}$ turns on momentarily, producing an output current-pulse $I_{PE}$ whose width $\epsilon$ and height $\gamma$ are set by $V_\gamma$, the gate-voltage at which $M_{P2}$ can no longer pass $I_\gamma$. This current-pulse is filtered to obtain the synapse's output, $I_{syn}$, whose dynamics obeys:

$$\tau_1 \frac{dI_{syn}}{dt} + I_{syn} = A\, I_{PE},$$

where $\tau_1 = C_{L1} U_T / I_{\tau_1}$, $A = I_{\tau_2}/I_{\tau_1}$, and $U_T$ is the thermal voltage. The above assumes all transistors operate in the subthreshold region and ignores all parasitic capacitances [10]. If we include the parasitic capacitance $C_{L2}$, a small-signal analysis reveals second-order dynamics:

$$\tau_1 \tau_2 \frac{d^2 I_{syn}}{dt^2} + (\tau_1 + \tau_2)\frac{dI_{syn}}{dt} + I_{syn} = A\, I_{PE},$$
where $\tau_2 = C_{L2} U_T / (\kappa I_{\tau_2})$, and $\kappa$ is the subthreshold-slope
coefficient. This second-order LPF and the aforementioned
pulse-extender are modelled together in the Laplace domain
by (8) after scaling by A.
The dynamics of the soma are described by:

$$C_S \frac{dV}{dt} = I_m + I_{syn},$$

where $I_m$ is the current in the positive-feedback loop, assuming all transistors operate in the subthreshold region [13], [14].
⁵ In general, the coefficients $1$, $\epsilon/2$, $\epsilon^2/12$, and so on, correspond to the Padé approximants of $\sum_{i=0}^{q} \frac{(\epsilon s)^i}{(i+1)!}$.
Fig. 4. Log-normally distributed parameters for the silicon synapses. $\tau_1$ ($\mu \pm \sigma = 31 \pm 6.4$ ms) and $\tau_2$ ($0.8 \pm 0.11$ ms) are the two time-constants of the second-order LPF; $\epsilon$ ($0.4 \pm 0.06$ ms) and $\gamma$ ($1.0 \pm 0.29$ ms⁻¹) are the widths and heights of the extended pulse, respectively (see (8)).
This equation may be solved to obtain the trajectory of $I_m$, and hence the steady-state spike-rate:

$$r(I_{syn}) = \frac{\kappa I_{syn}}{C_S U_T} \left[\ln\left(\frac{I_{syn}/I_0 + 1}{I_{syn}/I_{thr} + 1}\right)\right]^{-1},$$

where $I_0$ and $I_{thr}$ are the values of $I_m$ at reset and threshold, respectively. These values correspond to the leakage current of the transistors and the peak short-circuit current of the inverter, respectively.
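The steady-state rate expression can be evaluated directly. All numeric parameter values below are illustrative placeholders, not the fabricated circuit's values:

```python
import numpy as np

def soma_rate(I_syn, kappa=0.7, C_S=1e-12, U_T=25.85e-3,
              I_0=1e-12, I_thr=1e-9):
    """Steady-state spike-rate of the silicon soma (Sec. III):
    r = (kappa*I_syn / (C_S*U_T)) * [ln((I_syn/I_0 + 1)/(I_syn/I_thr + 1))]^-1.
    kappa: subthreshold-slope coefficient; U_T: thermal voltage (V);
    I_0, I_thr: feedback current at reset and threshold (A)."""
    return (kappa * I_syn / (C_S * U_T)) / np.log(
        (I_syn / I_0 + 1) / (I_syn / I_thr + 1))
```

The rate is positive and increases monotonically with $I_{syn}$, which is the tuning-curve shape the decoders in (3) are fit against.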
IV. RESULTS

We proceed by validating the extended NEF neuromorphic
architecture (see Fig. 2) implemented using the circuit models
(see §III) on two fundamental dynamical systems: an integrator
and a controlled oscillator. We use Nengo 2.3.1 [4] to simulate
this neuromorphic system with a time-step of 50 µs. Test data
is sampled independently of the training data used to optimize
(3) via regularized least-squares. For comparison with (5)—
simulated via Euler’s method—spike-trains are filtered using
(2) with τ= 10 ms. For each trial, the somatic parameters
and synaptic parameters (Fig. 4) are randomly sampled from
distributions generated using a model of transistor mismatch
(validated in SPICE). These parameters determine each linear
transformation Γj, as defined in (11).
A. Integrator
Consider the one-dimensional integrator, $\dot{x} = u$, $y = x$.
We represent the scalar x(t)in a population of 512 modelled
silicon neurons. To compute the recurrent function, we use
(11), and assume $\ddot{x} = \dot{u}$ is given. To be specific, we optimize (3) for $D^x$, and then add $u$ and $\dot{u}$ to the second and third coordinates of the representation, respectively, to decode:

$$\Phi = [x,\; u,\; \dot{u}].$$
To evaluate the impact of each extension to the NEF, we
simulate our network under five conditions: using standard
Principle 3, accounting for second-order dynamics, accounting
for the pulse-extender, accounting for the transistor mismatch
in τ1, and our full extension (Fig. 5). We find that the last
reduces the error by 63%, relative to the first, across a wide
range of input frequencies (5–50 Hz).
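The feedforward core of this experiment can be reproduced in a few lines: simulate one pulse-extended second-order LPF (eq. (8)) as a sliding-window integral cascaded with two first-order stages, drive it with $\Phi\,\Gamma_j$ where $\Phi = [x, u, \dot{u}]$, and check that the output recovers $x$. This sketch isolates the inversion in (11), with no spiking and perfect decoding, using parameters near the Fig. 4 means:

```python
import numpy as np

def simulate_synapse(w, dt, eps, gam, tau1, tau2):
    """Eq. (8): a gain-gam running integral over a window eps (the
    extended pulse) cascaded with two first-order lowpass stages."""
    n_win = max(1, round(eps / dt))
    buf = np.zeros(n_win)                # sliding window of recent input
    a1, a2 = np.exp(-dt / tau1), np.exp(-dt / tau2)
    y1 = y2 = 0.0
    out = np.empty_like(w)
    for i, wi in enumerate(w):
        buf[i % n_win] = wi
        p = gam * buf.sum() * dt         # pulse-extender stage
        y1 = a1 * y1 + (1 - a1) * p      # first lowpass stage
        y2 = a2 * y2 + (1 - a2) * y1     # second lowpass stage
        out[i] = y2
    return out

# Integrator x_dot = u with u = sin(2*pi*f*t), so x = (1 - cos)/(2*pi*f).
dt, f = 5e-5, 5.0
t = np.arange(0, 1.0, dt)
u = np.sin(2 * np.pi * f * t)
x = (1 - np.cos(2 * np.pi * f * t)) / (2 * np.pi * f)
eps, gam, t1, t2 = 4e-4, 1e3, 31e-3, 8e-4
G = np.array([1.0, t1 + t2 + eps/2,
              t1*t2 + (eps/2)*(t1 + t2)]) / (eps * gam)     # eq. (11)
w = G[0]*x + G[1]*u + G[2]*(2*np.pi*f)*np.cos(2*np.pi*f*t)  # Phi @ Gamma_j
x_hat = simulate_synapse(w, dt, eps, gam, t1, t2)
```

After the initial transient (a few multiples of $\tau_1$), the synapse output tracks the ideal integral to within a few tenths of a percent, confirming that (11) inverts the nonideal dynamics.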
B. Controlled 2D Oscillator
Consider a controllable two-dimensional oscillator,
$$\dot{x} = f(x) + u, \quad y = x,$$
$$f(x_1, x_2, x_3) = \left[-\omega x_3 x_2,\; \omega x_3 x_1,\; 0\right]^T,$$
Fig. 5. Effect of each NEF extension, applied to a simulated integrator given frequencies ranging from 5–50 Hz (mean spike-rate 143 Hz). The standard approach (Principle 3) achieves a normalized RMSE of 0.203 with 95% CI of [0.189, 0.216] compared to the ideal, while our extension (Full) achieves 0.073 with 95% CI of [0.067, 0.080], a 63% reduction in error, averaged across 25 trials and 10 frequencies. The largest improvement comes from accounting for the second-order dynamics, while a smaller improvement comes from accounting for the pulse-extended dynamics. Accounting for the transistor mismatch on its own is counter-productive.
Fig. 6. Output of the controlled 2D oscillator with $\omega = 5$ Hz (mean spike-rate 140 Hz). The control ($x_3^*$) is changed from 0.5 to −0.5 at 1 s to reverse the direction of oscillation. The standard Principle 3 ($\tilde{y}$) achieves a normalized RMSE of 0.188 with 95% CI of [0.166, 0.210] compared to the ideal ($y$), while our extension ($\hat{y}$) achieves 0.050 with 95% CI of [0.040, 0.063], a 73% reduction in error, averaged across 25 trials.
where $\omega$ is the angular frequency in radians per second, $x_3$ controls this frequency multiplicatively, and $x_3^*$ is the fixed-point target supplied via the input $u_3 = x_3^* - x_3$. The inputs $u_1$ and $u_2$ initiate the oscillation with a brief impulse. We represent
the three-dimensional state-vector $x(t)$ in a population of 2048 modelled silicon neurons.⁶ To compute the recurrent function, we again use (11). For this example, $u(t) \approx 0$ for most $t$ (apart from initial transients and changes to the target $x_3^*$), and so $\ddot{x} \approx J_f(x) \cdot f(x)$ (where $J_f$ denotes the Jacobian of $f$). We then optimize (3) for $D^x$, $D^{f(x)}$, and $D^{J_f(x) \cdot f(x)}$, and add $u$ to the second column of the matrix representation to decode:

$$\Phi = \left[x,\; f(x) + u,\; J_f(x) \cdot f(x)\right].$$
We find that this solution reduces the error by 73% relative to
the standard Principle 3 solution (Fig. 6).
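The recurrent quantities for this example are simple closed forms. A sketch of $f$ and $J_f(x) \cdot f(x)$, assuming the rotation convention $f = [-\omega x_3 x_2,\, \omega x_3 x_1,\, 0]$, with a finite-difference check that $J_f f$ matches $\frac{d}{dt} f(x(t))$ along a trajectory with $u = 0$:

```python
import numpy as np

OMEGA = 2 * np.pi * 5   # 5 Hz, as in Fig. 6

def f_osc(x, omega=OMEGA):
    """Controlled 2D oscillator: x3 scales the rotation rate of (x1, x2)."""
    x1, x2, x3 = x
    return np.array([-omega * x3 * x2, omega * x3 * x1, 0.0])

def jf_f(x, omega=OMEGA):
    """J_f(x) @ f(x): the x_ddot block of Phi, valid while u ~= 0."""
    x1, x2, x3 = x
    Jf = np.array([[0.0,         -omega * x3, -omega * x2],
                   [omega * x3,   0.0,         omega * x1],
                   [0.0,          0.0,         0.0]])
    return Jf @ f_osc(x, omega)
```

With $u \approx 0$ we have $\dot{x} = f(x)$, so $\ddot{x} = J_f(x)\dot{x} = J_f(x)f(x)$, which is exactly what the third block of $\Phi$ decodes.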
V. CONCLUSION

We have provided a novel extension to the NEF that directly harnesses the dynamics of heterogeneous pulse-extended second-order LPFs. This theory is validated by software simulation of a neuromorphic system, using circuit models with
⁶ We use this many neurons in order to minimize the noise from spiking.
parameter variability validated in SPICE, for two fundamental
examples: an integrator and a controlled oscillator. When
compared to the previous standard approach, our extension is
shown to reduce the error by 63% and 73% for the integrator
and oscillator, respectively. Thus, our theory enables a more
accurate mapping of nonlinear dynamical systems onto a
recurrently connected neuromorphic architecture using non-
ideal silicon synapses. Furthermore, we derive our theory in-
dependently of the particular neuron model and encoding. This
advance helps pave the way toward understanding how non-
ideal physical primitives may be systematically analyzed and
then subsequently exploited to support useful computations in
neuromorphic hardware.
ACKNOWLEDGMENTS

This work was supported by CFI and OIT infrastructure,
the Canada Research Chairs program, NSERC Discovery grant
261453, ONR grants N000141310419 and N0001415l2827,
and NSERC CGS-D funding. The authors thank Wilten Nicola
for inspiring (9) with a derivation for the double-exponential.
REFERENCES

[1] C. Mead, Analog VLSI and Neural Systems. Boston, MA: Addison-Wesley, 1989.
[2] C. Eliasmith and C. H. Anderson, Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems. MIT Press, 2003.
[3] C. Eliasmith, T. C. Stewart, X. Choo, T. Bekolay, T. DeWolf, Y. Tang, and
D. Rasmussen, “A large-scale model of the functioning brain,” Science,
vol. 338, no. 6111, pp. 1202–1205, 2012.
[4] T. Bekolay, J. Bergstra, E. Hunsberger, T. DeWolf, T. C. Stewart,
D. Rasmussen, X. Choo, A. R. Voelker, and C. Eliasmith, “Nengo: A
Python tool for building large-scale functional brain models," Frontiers in Neuroinformatics, vol. 7, 2013.
[5] S. Choudhary, S. Sloan, S. Fok, A. Neckar, E. Trautmann, P. Gao,
T. Stewart, C. Eliasmith, and K. Boahen, “Silicon neurons that compute,”
in International Conference on Artificial Neural Networks, vol. 7552.
Springer, 2012, pp. 121–128.
[6] S. Menon, S. Fok, A. Neckar, O. Khatib, and K. Boahen, “Controlling
articulated robots in task-space with spiking silicon neurons,” in 5th
IEEE RAS/EMBS International Conference on Biomedical Robotics and
Biomechatronics. IEEE, 2014, pp. 181–186.
[7] F. Corradi, C. Eliasmith, and G. Indiveri, “Mapping arbitrary mathemati-
cal functions and dynamical systems to neuromorphic VLSI circuits for
spike-based neural computation,” in 2014 IEEE International Symposium
on Circuits and Systems (ISCAS). IEEE, 2014, pp. 269–272.
[8] A. Destexhe, Z. F. Mainen, and T. J. Sejnowski, “Synaptic currents,
neuromodulation and kinetic models,” The handbook of brain theory
and neural networks, vol. 66, pp. 617–648, 1995.
[9] J. V. Arthur and K. Boahen, “Recurrently connected silicon neurons with
active dendrites for one-shot learning,” in International Joint Conference
on Neural Networks (IJCNN), vol. 3. IEEE, 2004, pp. 1699–1704.
[10] W. Himmelbauer and A. G. Andreou, “Log-domain circuits in subthres-
hold MOS,” in Circuits and Systems, 1997. Proceedings of the 40th
Midwest Symposium on, vol. 1. IEEE, 1997, pp. 26–30.
[11] K. E. Friedl, A. R. Voelker, A. Peer, and C. Eliasmith, “Human-inspired
neurorobotic system for classifying surface textures by touch,” Robotics
and Automation Letters, vol. 1, no. 1, pp. 516–523, 2016.
[12] A. R. Voelker and C. Eliasmith, “Improving spiking dynamical net-
works: Accurate delays, higher-order synapses, and time cells,” 2017,
Manuscript in preparation.
[13] E. Culurciello, R. Etienne-Cummings, and K. Boahen, “A biomorphic
digital image sensor," IEEE Journal of Solid-State Circuits, vol. 38, no. 2,
pp. 281–294, 2003.
[14] P. Gao, B. V. Benjamin, and K. Boahen, “Dynamical system guided
mapping of quantitative neuronal models onto neuromorphic hardware,"
IEEE Transactions on Circuits and Systems, vol. 59, no. 10, pp. 2383–
2394, 2012.
... Similar to other neuromorphic hardware platforms software is an essential component to make complex hardware systems accessible to users, e.g., GraphCore [32], Loihi [33,34,35], Neurogrid [36,37], SpiNNaker [38,39,40], Tianjic [41], and TrueNorth [42]. A recent publication covering the older BrainScaleS-1 (BSS-1) platform shortly compares software approaches of multiple neuromorphic systems [43]. ...
Full-text available
We present the BrainScaleS-2 mobile system as a compact analog inference engine based on the BrainScaleS-2 ASIC and demonstrate its capabilities at classifying a medical electrocardiogram dataset. The analog network core of the ASIC is utilized to perform the multiply-accumulate operations of a convolutional deep neural network. At a system power consumption of 5.6W, we measure a total energy consumption of 192 μJ for the ASIC and achieve a classification time of 276 μs per electrocardiographic patient sample. Patients with atrial fibrillation are correctly identified with a detection rate of (93.7 ± 0.7)% at (14.0 ± 1.0)% false positives. The system is directly applicable to edge inference applications due to its small size, power envelope, and flexible I/O capabilities. It has enabled the BrainScaleS-2 ASIC to be operated reliably outside a specialized lab setting. In future applications, the system allows for a combination of conventional machine learning layers with online learning in spiking neural networks on a single neuromorphic platform.
... SpiNNaker and BrainScaleS use a simulator-independent Python wrapper, PyNN (Andrew et al., 2009). Sequential mapping is used in SpiNNaker while neural engineering framework (NEF) is used for Neurogrid (Voelker et al., 2017). Neutrams (Ji et al., 2016) implements an optimized mapping technique based on the graph partition problem: Kernighan-Lin (KL) partitioning strategy for network on chip (NoC). ...
Full-text available
The hardware-software co-optimization of neural network architectures is a field of research that emerged with the advent of commercial neuromorphic chips, such as the IBM TrueNorth and Intel Loihi. Development of simulation and automated mapping software tools in tandem with the design of neuromorphic hardware, whilst taking into consideration the hardware constraints, will play an increasingly significant role in deployment of system-level applications. This paper illustrates the importance and benefits of co-design of convolutional neural networks (CNN) that are to be mapped onto neuromorphic hardware with a crossbar array of synapses. Toward this end, we first study which convolution techniques are more hardware friendly and propose different mapping techniques for different convolutions. We show that, for a seven-layered CNN, our proposed mapping technique can reduce the number of cores used by 4.9–13.8 times for crossbar sizes ranging from 128 × 256 to 1,024 × 1,024, and this can be compared to the toeplitz method of mapping. We next develop an iterative co-design process for the systematic design of more hardware-friendly CNNs whilst considering hardware constraints, such as core sizes. A python wrapper, developed for the mapping process, is also useful for validating hardware design and studies on traffic volume and energy consumption. Finally, a new neural network dubbed HFNet is proposed using the above co-design process; it achieves a classification accuracy of 71.3% on the IMAGENET dataset (comparable to the VGG-16) but uses 11 times less cores for neuromorphic hardware with core size of 1,024 × 1,024. We also modified the HFNet to fit onto different core sizes and report on the corresponding classification accuracies. Various aspects of the paper are patent pending.
... Overall, NEF's functional approach is stated by a set of principles for constructing the neural models, which involves representation, transformation and dynamics (Sharma et al. 2016;Voelker et al. 2017). Following these principles, the components of a model are the brain-area-inspired functional elements, and its structural pattern is formed by their connections (information flow) as well as the organization of components into hierarchies, subsystems or central mechanisms. ...
Full-text available
In the field of Artificial Intelligence (AI), efforts to achieve human-like behavior have taken very different paths through time. Cognitive Architectures (CAs) differentiate from traditional AI approaches, due to their intention to model cognitive and behavioral processes by understanding the brain’s structure and their functionalities in a natural way. However, the development of distinct CAs has not been easy, mainly because there is no consensus on the theoretical basis, assumptions or even purposes for their creation nor how well they reflect human function. In consequence, there is limited information about the methodological aspects to construct this type of models. To address this issue, some initial statements are established to contextualize about the origins and directions of cognitive architectures and their development, which help to outline perspectives, approaches and objectives of this work, supported by a brief study of methodological strategies and historical aspects taken by some of the most relevant architectures to propose a methodology which covers general perspectives for the construction of CAs. This proposal is intended to be flexible, focused on use-case tasks, but also directed by theoretic paradigms or manifestos. A case study between cognitive functions is then detailed, using visual perception and working memory to exemplify the proposal’s assumptions, postulates and binding tools, from their meta-architectural conceptions to validation. Finally, the discussion addresses the challenges found at this stage of development and future work directions.
... Sequential mapping is used in SpiNNaker. Neural engineering framework (NEF) is developed for Neurogrid Voelker et al. (2017). Neutrams Ji et al. (2016) addresses an optimized mapping technique based on graph partition problem: Kernighan-Lin (KL) partitioning strategy for network on chip (NoC). ...
Full-text available
The hardware-software co-optimization of neural network architectures is becoming a major stream of research especially due to the emergence of commercial neuromorphic chips such as the IBM Truenorth and Intel Loihi. Development of specific neural network architectures in tandem with the design of the neuromorphic hardware considering the hardware constraints will make a huge impact in the complete system level application. In this paper, we study various neural network architectures and propose one that is hardware-friendly for a neuromorphic hardware with crossbar array of synapses. Considering the hardware constraints, we demonstrate how one may design the neuromorphic hardware so as to maximize classification accuracy in the trained network architecture, while concurrently, we choose a neural network architecture so as to maximize utilization in the neuromorphic cores. We also proposed a framework for mapping a neural network onto a neuromorphic chip named as the Mapping and Debugging (MaD) framework. The MaD framework is designed to be generic in the sense that it is a Python wrapper which in principle can be integrated with any simulator tool for neuromorphic chips.
... The Neural Engineering Framework [29,35,36] is a generalized computational framework used for modelling large and complex neural systems. In NEF, we can create ensembles of spiking neuron models, represent values on these neural ensembles and solve for synaptic connection weights for computing functions using them. ...
Neuromorphic computing is looked at as one of the promising alternatives to the traditional von Neumann architecture. In this paper, we consider the problem of doing arithmetic on neuromorphic systems and propose an architecture for doing IEEE 754 compliant addition on a neuromorphic system. A novel encoding scheme is also proposed for reducing the inter-neural ensemble error. The complex task of floating point addition is divided into sub-tasks such as exponent alignment, mantissa addition and overflow-underflow handling. We use a cascaded approach to add the two mantissas of the given floating-point numbers and then apply our encoding scheme to reduce the error produced in this approach. Overflow and underflow are handled by approximating on XOR logic. Implementation of sub-components like right shifter and multiplexer are also specified.
... Sequential mapping is used in SpiNNaker. Neural engineering framework (NEF) is developed for Neurogrid [10]. Neutrams [11] addresses an optimized mapping technique based on graph partition problem: Kernighan-Lin (KL) partitioning strategy for network on chips. ...
Full-text available
Neuromorphic systems or dedicated hardware for neuromorphic computing is getting popular with the advancement in research on different device materials for synapses, especially in crossbar architecture and also algorithms specific or compatible to neuromorphic hardware. Hence, an automated mapping of any deep neural network onto the neuromorphic chip with crossbar array of synapses and an efficient debugging framework is very essential. Here, mapping is defined as the deployment of a section of deep neural network layer onto a neuromorphic core and the generation of connection lists among population of neurons to specify the connectivity between various neuromorphic cores on the neuromorphic chip. Debugging is the verification of computations performed on the neuromorphic chip during inferencing. Together the framework becomes Mapping and Debugging (MaD) framework. MaD framework is quite general in usage as it is a Python wrapper which can be integrated with almost every simulator tools for neuromorphic chips. This paper illustrates the MaD framework in detail, considering some optimizations while mapping onto a single neuromorphic core. A classification task on MNIST and CIFAR-10 datasets are considered for test case implementation of MaD framework.
BrainScaleS-1 is a wafer-scale mixed-signal accelerated neuromorphic system targeted for research in the fields of computational neuroscience and beyond-von-Neumann computing. Here we present the BrainScaleS Operating System (BrainScaleS OS): the software stack gives users the possibility to emulate networks described in the high-level network description language PyNN with minimal knowledge of the system, as well as expert usage facilitated by allowing access to the system at any depth of the stack. BrainScaleS OS has been used extensively in the commissioning and calibration of BrainScaleS-1 as well as in various neuromorphic experiments, e.g., rate-based deep learning, accelerated physical emulation of Bayesian inference, solving of SAT problems, and others. The tolerance to faults of individual components of the neuromorphic system is reflected in the mapping process based on information stored in an availability database. We evaluate the robustness and compensation mechanisms of the system and software stack. The software stack is designed with performance in mind, with its core implemented in C++ and most user-facing API wrapped automatically to Python. The implemented multi-FPGA orchestration allows for parallel configuration and synchronized experiments facilitating wafer-scale experiments. The initial configuration of a wafer-scale experiment with hundreds of neuromorphic ASICs is performed in a fraction of a minute. Subsequent experiments, that potentially change only a subset of parameters, can be executed with rates of typically 10Hz. The bandwidth from the host machine to the neuromorphic system is fully utilized starting from a quarter of the system’s FPGA count. Operation and development methodologies implemented for the BrainScaleS-1 neuromorphic architecture are presented and the individual components of BrainScaleS OS constituting the software stack for BrainScaleS-1 platform operation are detailed.
Neuromorphic computing describes the use of VLSI systems to mimic neuro-biological architectures and is regarded as a promising alternative to the traditional von Neumann architecture. Any new computing architecture needs a system that can perform floating-point arithmetic. In this paper, we describe a neuromorphic system that performs IEEE 754-compliant floating-point multiplication. The complex process of multiplication is divided into smaller sub-tasks performed by four components: an Exponent Adder, a Bias Subtractor, a Mantissa Multiplier, and a Sign OF/UF unit. We study the effect of the number of neurons per bit on accuracy and bit error rate, and estimate the optimal number of neurons needed for each component.
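The decomposition named in this abstract can be sketched in conventional software. The following minimal illustration (function names are ours, not from the paper) shows the sub-tasks for normal float32 operands: sign XOR, exponent addition with bias subtraction, and mantissa multiplication with renormalization. Rounding, subnormals, and overflow/underflow handling are omitted.

```python
import struct

def fp32_fields(x):
    """Unpack an IEEE 754 single-precision float into (sign, exponent, mantissa)."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

def fp32_multiply_fields(a, b):
    """Multiply two normal float32 values field-by-field, mirroring the
    sub-tasks in the abstract: Sign unit, Exponent Adder + Bias Subtractor,
    and Mantissa Multiplier (with implicit leading 1) plus renormalization."""
    sa, ea, ma = fp32_fields(a)
    sb, eb, mb = fp32_fields(b)
    sign = sa ^ sb                               # Sign unit: XOR of sign bits
    exp = ea + eb - 127                          # Exponent Adder + Bias Subtractor
    prod = ((1 << 23) | ma) * ((1 << 23) | mb)   # Mantissa Multiplier: 1.m_a * 1.m_b
    if prod >> 47:                               # product in [2, 4): shift right, bump exponent
        exp += 1
        man = (prod >> 24) & 0x7FFFFF
    else:                                        # product in [1, 2): already normalized
        man = (prod >> 23) & 0x7FFFFF
    return sign, exp, man
```

For example, `fp32_multiply_fields(1.5, 2.0)` yields the same field triple as `fp32_fields(3.0)`; both truncation paths of the normalization step are exercised by products landing in [1, 2) versus [2, 4).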
Dynamical systems are universal computers. They can perceive stimuli, remember, learn from feedback, plan sequences of actions, and coordinate complex behavioural responses. The Neural Engineering Framework (NEF) provides a general recipe to formulate models of such systems as coupled sets of nonlinear differential equations and compile them onto recurrently connected spiking neural networks – akin to a programming language for spiking models of computation. The Nengo software ecosystem supports the NEF and compiles such models onto neuromorphic hardware. In this thesis, we analyze the theory driving the success of the NEF, and expose several core principles underpinning its correctness, scalability, completeness, robustness, and extensibility. We also derive novel theoretical extensions to the framework that enable it to far more effectively leverage a wide variety of dynamics in digital hardware, and to exploit the device-level physics in analog hardware. At the same time, we propose a novel set of spiking algorithms that recruit an optimal nonlinear encoding of time, which we call the Delay Network (DN). Backpropagation across stacked layers of DNs dramatically outperforms stacked Long Short-Term Memory (LSTM) networks—a state-of-the-art deep recurrent architecture—in accuracy and training time, on a continuous-time memory task, and a chaotic time-series prediction benchmark. The basic component of this network is shown to function on state-of-the-art spiking neuromorphic hardware including Braindrop and Loihi. This implementation approaches the energy-efficiency of the human brain in the former case, and the precision of conventional computation in the latter case.
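As a rough companion to this abstract, the following sketch constructs the linear state space underlying the Delay Network, using the closed-form coefficients reported in the Delay Network / LMU literature (the exact sign convention here is an assumption of this sketch, not quoted from the thesis). The system dx/dt = Ax + Bu approximates a pure delay of theta seconds, with x encoding a rolling window of the input's recent history.

```python
import numpy as np

def delay_network_ss(order, theta):
    """State-space matrices (A, B) approximating a delay of `theta` seconds
    with `order` state dimensions, following the closed-form coefficients
    used in the Delay Network / LMU literature (assumed convention):
        a_ij = (2i+1)/theta * (-1 if i < j else (-1)^(i-j+1))
        b_i  = (2i+1)/theta * (-1)^i
    """
    q = np.arange(order)
    scale = (2 * q + 1)[:, None] / theta           # row-wise factor (2i+1)/theta
    i, j = np.meshgrid(q, q, indexing="ij")
    A = np.where(i < j, -1.0, (-1.0) ** (i - j + 1)) * scale
    B = ((-1.0) ** q)[:, None] * scale
    return A, B

# A 4-dimensional approximation of a 0.5 s delay; the eigenvalues of A
# lie in the left half-plane, so the approximation is a stable LTI system.
A, B = delay_network_ss(order=4, theta=0.5)
```

For `order=2, theta=1` this yields A = [[-1, -1], [3, -3]] and B = [[1], [-3]] under the assumed convention.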
Researchers building spiking neural networks face the challenge of improving the biological plausibility of their model networks while maintaining the ability to quantitatively characterize network behavior. In this work, we extend the theory behind the neural engineering framework (NEF), a method of building spiking dynamical networks, to permit the use of a broad class of synapse models while maintaining prescribed dynamics up to a given order. This theory improves our understanding of how low-level synaptic properties alter the accuracy of high-level computations in spiking dynamical networks. For completeness, we provide characterizations for both continuous-time (i.e., analog) and discrete-time (i.e., digital) simulations. We demonstrate the utility of these extensions by mapping an optimal delay line onto various spiking dynamical networks using higher-order models of the synapse. We show that these networks nonlinearly encode rolling windows of input history, using a scale-invariant representation, with accuracy depending on the frequency content of the input signal. Finally, we reveal that these methods provide a novel explanation of time cell responses during a delay task, which have been observed throughout hippocampus, striatum, and cortex.
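The baseline that this extension generalizes is the NEF's standard dynamics mapping ("Principle 3") for a first-order low-pass synapse h(s) = 1/(tau*s + 1): the desired system dx/dt = Ax + Bu is realized by replacing the recurrent and input matrices with A' = tau*A + I and B' = tau*B, so that filtering by the synapse recovers the target dynamics. A minimal sketch (function name ours):

```python
import numpy as np

def map_dynamics_first_order(A, B, tau):
    """Standard NEF Principle-3 mapping for a first-order low-pass synapse
    h(s) = 1/(tau*s + 1): returns (A', B') = (tau*A + I, tau*B) such that
    the synapse-filtered recurrent system implements dx/dt = A x + B u.
    (The cited work extends this to higher-order synapse models.)"""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    A_prime = tau * A + np.eye(A.shape[0])
    B_prime = tau * B
    return A_prime, B_prime

# Example: a pure integrator dx/dt = u (A = 0, B = 1) becomes
# A' = I (hold the state through the synapse) and B' = tau * B.
A_p, B_p = map_dynamics_first_order([[0.0]], [[1.0]], tau=0.1)
```

The same function applies unchanged to multidimensional systems, e.g., a harmonic oscillator with A = [[0, -1], [1, 0]].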
Giving robots the ability to classify surface textures requires appropriate sensors and algorithms. Inspired by the biology of human tactile perception, we implement a neurorobotic texture classifier with a recurrent spiking neural network, using a novel semi-supervised approach for classifying dynamic stimuli. Input to the network is supplied by accelerometers mounted on a robotic arm. The sensor data is encoded by a heterogeneous population of neurons, modeled to match the spiking activity of mechanoreceptor cells. This activity is convolved by a hidden layer using bandpass filters to extract nonlinear frequency information from the spike trains. The resulting high-dimensional feature representation is then continuously classified using a neurally implemented support vector machine. We demonstrate that our system classifies 18 metal surface textures scanned in two opposite directions at a constant velocity. We also demonstrate that our approach significantly improves upon a baseline model that does not use the described feature extraction. This method can be performed in real-time using neuromorphic hardware, and can be extended to other applications that process dynamic stimuli online.
Emulating how humans coordinate articulated limbs within the brain's power budget promises to accelerate progress in building autonomous biomimetic robots. Here, we used a neuromorphic approach (low-power analog silicon spiking neurons) to control an articulated robot in real time. We obtained a closed-form control function that computes robot motor torques given the robot's joint angles (state) and desired end-effector forces; factorized the function into a set of sub-functions over five unique three-dimensional domains; and regressed each sub-function onto the steady-state spiking responses of one out of five silicon spiking-neuron pools. The spiking pools controlled a three-degree-of-freedom robot's motor torques in real time and performed reaches to arbitrary locations in space with less than 2 cm root-mean-square trajectory-tracking error (relative to an analytical controller). The controller is compliant and can draw shapes with a pen on a dynamically perturbed surface while remaining stable. Using force control resulted in linear responses to perturbations in end-effector coordinates (task space), which effectively filtered noise due to neuron spikes. Factorizing the controller reduced the neural regression's complexity to cubic in the dynamic range of the robot's state and desired forces. Doing so made acquiring spiking responses for regression tractable in time (∼2-3 min), and enabled reliable trajectory tracking with only 1280 neurons. This is the first time a neuromorphic system has achieved real-time manipulation for an articulated robot with three or more degrees of freedom.
Brain-inspired, spike-based computation in electronic systems is being investigated for developing alternative, non-conventional computing technologies. The Neural Engineering Framework provides a method for programming these devices to implement computation. In this paper we apply this approach to perform arbitrary mathematical computation using a mixed-signal analog/digital neuromorphic multi-neuron VLSI chip. This is achieved by means of a network of spiking neurons with multiple weighted connections. The synaptic weights are stored in a 4-bit on-chip programmable SRAM block. We propose a parallel event-based method for appropriately calibrating the synaptic weights, and demonstrate the method by encoding and decoding arbitrary mathematical functions and by implementing dynamical systems via recurrent connections.
Neuroscience currently lacks a comprehensive theory of how cognitive processes can be implemented in a biological substrate. The Neural Engineering Framework (NEF) proposes one such theory, but has not yet gathered significant empirical support, partly due to the technical challenge of building and simulating large-scale models with the NEF. Nengo is a software tool that can be used to build and simulate large-scale models based on the NEF; currently, it is the primary resource for both teaching how the NEF is used, and for doing research that generates specific NEF models to explain experimental data. Nengo 1.4, which was implemented in Java, was used to create Spaun, the world's largest functional brain model (Eliasmith et al., 2012). Simulating Spaun highlighted limitations in Nengo 1.4's ability to support model construction with simple syntax, to simulate large models quickly, and to collect large amounts of data for subsequent analysis. This paper describes Nengo 2.0, which is implemented in Python and overcomes these limitations. It uses simple and extendable syntax, simulates a benchmark model on the scale of Spaun 50 times faster than Nengo 1.4, and has a flexible mechanism for collecting simulation results.
We present an approach to map neuronal models onto neuromorphic hardware using mathematical insights from dynamical systems theory. Quantitatively accurate mappings are important for neuromorphic systems to both leverage and extend existing theoretical and numerical cortical modeling results. In the present study, we first calibrate the on-chip bias generators on our custom hardware. Then, taking advantage of the hardware's high-throughput spike communication, we rapidly estimate key mapping parameters with a set of linear relationships for static inputs derived from dynamical systems theory. We apply this mapping procedure to three different chips, and show close matching to the neuronal model and between chips: the Jensen–Shannon divergence was reduced to at least one tenth that of the shuffled control. We confirm that our mapping procedure generalizes to dynamic inputs: silicon neurons match spike timings of a simulated neuron with a standard deviation of 3.4% of the average inter-spike interval. Index Terms: Dynamical systems, neural simulation, neuromorphic engineering, quadratic integrate-and-fire model, silicon neuron.
We use neuromorphic chips to perform arbitrary mathematical computations for the first time. Static and dynamic computations are realized with heterogeneous spiking silicon neurons by programming their weighted connections. Using 4K neurons with 16M feed-forward or recurrent synaptic connections, formed by 256K local arbors, we communicate a scalar stimulus, quadratically transform its value, and compute its time integral. Our approach provides a promising alternative for extremely power-constrained embedded controllers, such as fully implantable neuroprosthetic decoders.
1. A Neural Processor for Maze Solving
2. Resistive Fuses: Analog Hardware for Detecting Discontinuities in Early Vision
3. CMOS Integration of Herault-Jutten Cells for Separation of Sources
4. Circuit Models of Sensory Transduction in the Cochlea
5. Issues in Analog VLSI and MOS Techniques for Neural Computing
6. Design and Fabrication of VLSI Components for a General Purpose Analog Neural Computer
7. A Chip that Focuses an Image on Itself
8. A Foveated Retina-Like Sensor Using CCD Technology
9. Cooperative Stereo Matching Using Static and Dynamic Image Features
10. Adaptive Retina