REVIEW

Modular neuron comprises of memristor-based synapse

Jafar Shamsi¹ · Amirali Amirsoleimani² · Sattar Mirzakuchaki¹ · Majid Ahmadi²

¹ Iran University of Science and Technology, Tehran, Iran
² University of Windsor, Windsor, Canada
Corresponding author: Amirali Amirsoleimani (amirali_ama@yahoo.com)

Received: 4 September 2014 / Accepted: 19 August 2015
© The Natural Computing Applications Forum 2015
Abstract: The design of an analog modular neuron based on memristors is proposed here. Since neural networks are built by repeating basic blocks called neurons, modular neurons are essential for neural network hardware. In this work, modularity of the neuron is achieved through the distributed-neuron structure. The major challenges in implementing the synaptic operation are weight programmability, multiplication of the weight by the input signal, and nonvolatile weight storage; the memristor bridge synapse addresses all of these challenges. The proposed neuron is therefore a modular neuron based on the distributed-neuron structure that uses the memristor bridge synapse for its synaptic operations. To verify its operation, the proposed neuron is used in a real-world neural network application, and the network is trained with an off-chip method. The results show 86.7 % correct classification and a mean square error of about 0.0695 for a 4-5-3 neural network built from the proposed modular neuron.
Keywords: Artificial neural network · Synaptic operation · Distributed neuron · Memristor bridge synapse
1 Introduction
Digital, analog, and mixed-signal circuits are the possible approaches for implementing artificial neural network (ANN) hardware [1]. Although a digital implementation has higher accuracy, it is slow and consumes considerable power and area. An analog implementation, on the other hand, is fast and meets the criterion of minimal area; the main drawback of analog circuits is their low accuracy due to device mismatch [2]. Analog circuits offer advantages such as low area and low power consumption, and some mathematical operations are obtained from inherent physical processes, such as collecting currents at a node to perform summation. In other words, by Kirchhoff's current law (KCL), the output current of a node is the sum of its input currents, which can be used to implement the summation operation in analog circuits.
One of the main challenges in analog ANN circuits is the implementation of programmable synapses, whose weight values are adjusted through programming. Resistors, capacitors, and floating gates have been used to implement synaptic weights in analog circuits [3]. Each of these elements has drawbacks. The static behavior of a resistor makes it impossible to change or train. A capacitor cannot store weights for long periods because of its permanent charge leakage. Floating-gate transistors are a successful alternative for realizing the synapses of neurons in analog circuits and have been used to implement Hebbian-based and STDP learning rules; low area consumption in STDP implementations is another advantage of floating gates. Their main drawback is their high nonlinearity, which complicates weight multiplication, and adjusting the synaptic weight requires extra circuitry to drive tunneling and hot-electron injection [4]. The emergence of the memristor as a nonvolatile resistor opens a new vision for neural network hardware implementation. The memristor was postulated by Chua in 1971 [5], and in 2008 this missing element was realized at HP laboratories [6]. The memristor, or memory
resistor, is a two-terminal element whose memristance depends on the amount and polarity of the charge that has passed through it. Several memristor-based neural network circuits have been presented in the literature since its discovery, which highlights the impact of the memristor's emergence on neural network hardware implementation. In [7] the similarity between a memristor and a biological synapse was investigated. In [8] a cellular neural network (CNN) based on memristors was implemented, and in [9] a hybrid CMOS-memristor CNN was designed. The memristor bridge circuit presented in [10] comprises four memristors and three transistors; it can produce negative, zero, and positive synaptic weights and is used in this work as the synapse of the proposed modular neuron.
Moreover, in the distributed neurons introduced in [11], there is an activation function for each input signal rather than one activation function for all of a neuron's input signals. Each activation function is implemented by a nonlinear load, and their outputs are connected together to form a multi-input neuron. Modularity is one of the significant features of distributed neurons [11, 12]. Since a neural network is composed of replicated basic elements, the neuron and the synapse, modular neurons are essential for neural network hardware.
In this paper, by combining the benefits of the memristor bridge circuit and the distributed neuron, a modular neuron comprising a memristor-based synapse is proposed. The memristor bridge circuit performs the synaptic operation, and the distributed-neuron structure makes the circuit modular. The paper is organized as follows. Memristor theory and its basic relations are reviewed in Sect. 2. The proposed circuit and its HSPICE® simulations are presented in Sect. 3. In Sect. 4, an off-chip method is used to train a neural network based on the proposed modular neuron, and its simulation results are presented. Concluding remarks follow in Sect. 5.
2 Memristor
The memristor is a metal–insulator–metal (MIM) structure. The HP memristor consists of two platinum (Pt) electrodes with a titanium dioxide (TiO2) layer between them. One side of the insulator layer is doped with oxygen vacancies (TiO2−x) and therefore has higher conductivity than the other side. The linear model of the HP memristor structure is shown in Fig. 1a. The internal state variable x of the memristor marks the position of the boundary between the doped and undoped regions; as x decreases, the conductance of the channel decreases proportionally.
The memristor establishes a relationship between flux and charge:

$$M(q) = \frac{d\varphi}{dq} \qquad (1)$$

where M(q) is the memristance of the memristor. When the charge–flux relation is flux-controlled, the equation becomes

$$W(\varphi) = \frac{dq}{d\varphi} \qquad (2)$$

where W(φ) is the memductance of the memristor. Therefore, the current–voltage relationship is determined by

$$V = M(q)\, I \qquad (3)$$

$$I = W(\varphi)\, V \qquad (4)$$
Fig. 1 (a) HP's Pt/TiO2/Pt memristor structure for the linear model in [13]. The thin-film length is D, and x is the position of the boundary region along the TiO2 film. The resistance of the doped region (TiO2−x) is represented by an equivalent resistor R_ON, and the resistance of the undoped region (TiO2) by an equivalent resistor R_OFF. (b) The Pt/TiO2/Pt memristor structure for the physical model in [21]; the tunneling distance is denoted by w

In [13] a linear boundary drift model is introduced by applying a window function. In this model the velocity of the boundary region between the doped and undoped zones of the TiO2 thin film is determined by
$$v_D = \eta\,\frac{\mu_D R_{ON}}{D}\, i(t)\, f(x) \qquad (5)$$

where η and μ_D are the polarity of the memristor and the mobility of the dopants, respectively, and D and R_ON are the thickness and the lowest resistance of the memristor device. The current i(t) is the current passing through the memristor, and the window function f(x) inserts nonlinearity into the memristor's behavior [14]. This window function is

$$f(x) = 1 - \left(\frac{2x}{D} - 1\right)^{2p} \qquad (6)$$

where p is a parameter that sets the nonlinearity of the memristor model. The memristance of the memristor is defined by

$$M(x) = R_{ON}\,\frac{x}{D} + R_{OFF}\left(1 - \frac{x}{D}\right) \qquad (7)$$
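To make the linear model concrete, the following Python sketch integrates Eqs. (5)–(7) with the window function of [14]. The device parameters are taken from the Fig. 2a caption; the sinusoidal drive (the paper's Fig. 2a uses a triangular wave) and the time step are assumptions made for brevity.

```python
import numpy as np

# Device parameters from the Fig. 2a caption (Pt/TiO2/Pt, linear model [13])
D, R_ON, R_OFF = 3e-9, 100.0, 200e3     # film thickness (m), resistance bounds (ohm)
MU_D, P, ETA = 1e-14, 2, 1              # dopant mobility, window power, polarity
R_INIT = 100e3                          # initial memristance (ohm)

def f(x):
    # Joglekar window, Eq. (6): f(x) = 1 - (2x/D - 1)^(2p)
    return 1.0 - (2.0 * x / D - 1.0) ** (2 * P)

def M(x):
    # Eq. (7): M(x) = R_ON * x/D + R_OFF * (1 - x/D)
    return R_ON * (x / D) + R_OFF * (1.0 - x / D)

# boundary position that realizes the initial memristance
x = D * (R_OFF - R_INIT) / (R_OFF - R_ON)

dt = 1e-7                               # integration step (s), assumed
t = np.arange(0.0, 2e-3, dt)            # two periods of the 1 kHz drive
v = 1.0 * np.sin(2 * np.pi * 1e3 * t)   # 1 V sinusoid (triangular in the paper)

m_min = m_max = M(x)
for vk in v:
    i = vk / M(x)                       # instantaneous current through the device
    x += ETA * (MU_D * R_ON / D) * i * f(x) * dt   # Eq. (5): boundary drift
    x = min(max(x, 0.0), D)             # keep the boundary inside the film
    m_min, m_max = min(m_min, M(x)), max(m_max, M(x))

print(f"memristance swings between {m_min:.4g} and {m_max:.4g} ohm")
```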
Although the model above exhibits the basic characteristics of a memristor device, it cannot reproduce the behavior of real devices [20]. In [21] a model based on the tunneling phenomenon is presented for the structure shown in Fig. 1b. While this model agrees more closely with experimental data extracted from a real memristor device, it is computationally complicated. A simplified model, based on the same physical assumptions as [21], is presented in [22]; this model introduces threshold currents for the memristor device. In this model the state variable is defined by
$$\frac{dw(t)}{dt} = \begin{cases} k_{OFF}\left(\dfrac{i(t)}{i_{OFF}}\right)^{\alpha_{OFF}} f_{OFF}(w), & 0 < i_{OFF} < i \\[4pt] 0, & i_{ON} < i < i_{OFF} \\[4pt] k_{ON}\left(\dfrac{i(t)}{i_{ON}}\right)^{\alpha_{ON}} f_{ON}(w), & i < i_{ON} < 0 \end{cases} \qquad (8)$$
where i_ON and i_OFF are the current thresholds and k_OFF, k_ON, α_OFF, and α_ON are constants. The effective electric tunnel width, which serves as the internal state variable, is represented by w. The window functions f_ON and f_OFF limit the internal state variable to the interval [w_ON, w_OFF]. As in [21], f_ON and f_OFF may take different values to capture the asymmetric dependence on the state variable. The i–v relationship in this model is

$$v(t) = R_{ON}\, e^{\left(\lambda/(w_{OFF}-w_{ON})\right)(w - w_{ON})}\, i(t) \qquad (9)$$

where λ = ln(R_OFF/R_ON) is a fitting parameter given by the ratio of the highest-resistance state (HRS) to the lowest-resistance state (LRS) of the device. The Pt/TiO2/Pt memristor device simulated in [23] is used for the simulation of the memristor bridge synapse in the modular neuron. The i–v curve and memristance of the simulated memristor are shown in Fig. 2 for both the linear memristor model [13] with the window function of [14] and the TEAM model of [22], computed in MATLAB®.
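As a minimal sketch of the TEAM relations used above, the following Python fragment encodes the threshold-gated state derivative of Eq. (8) and the exponential resistance law of Eq. (9). The state bounds w_ON and w_OFF and the ideal window functions (f_ON = f_OFF = 1) are assumptions; the k, α, and threshold values follow the Fig. 2b caption, with i_ON treated as a magnitude.

```python
import numpy as np

# TEAM model parameters from the Fig. 2b caption; w_ON/w_OFF are assumed
K_OFF, K_ON = 8e-13, -8e-13        # state-velocity coefficients (nm/s)
A_OFF, A_ON = 3, 3                 # exponents
I_OFF, I_ON = 115e-6, 8.9e-6       # current thresholds (A), as magnitudes
R_ON, R_OFF = 100.0, 200e3         # resistance bounds (ohm)
W_ON, W_OFF = 0.0, 3.0             # state bounds (nm), assumed
LAM = np.log(R_OFF / R_ON)         # fitting parameter lambda of Eq. (9)

def dw_dt(i):
    """Eq. (8) with ideal windows: the state is frozen between the thresholds."""
    if i > I_OFF:
        return K_OFF * (i / I_OFF) ** A_OFF
    if i < -I_ON:
        return K_ON * (abs(i) / I_ON) ** A_ON
    return 0.0

def resistance(w):
    """Eq. (9): R(w) = R_ON * exp(lambda * (w - w_ON) / (w_OFF - w_ON))."""
    return R_ON * np.exp(LAM * (w - W_ON) / (W_OFF - W_ON))

# sample the threshold-gated dynamics at a few drive currents
for i in (5e-6, 50e-6, 0.5e-3, 1.5e-3):
    print(f"i = {i:.1e} A -> dw/dt = {dw_dt(i):.3e} nm/s")
```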
3 Proposed design
Artificial neural networks consist of massively connected neurons. The McCulloch–Pitts model of a neuron is shown in Fig. 3a. The main parts of this model are (1) synapses that multiply their weights by the input signals x_i, (2) a summation operation that sums the weighted input signals, and (3) an activation function applied to the result of the summation. The neuron output relation is

$$O = \sum_{i=1}^{n} w_i x_i + \theta \qquad (10)$$

where θ is the bias, w_i is a synapse weight, x_i is an input signal, and O is the output of the neuron. The equivalent block diagram of the distributed-neuron structure is shown in Fig. 3b. An N-input neuron consists of N neurons, each with one input and one output, whose outputs are connected together [11]. The modularity of the distributed neuron is an obvious feature, as it is built from N similar neurons. All signals in this block diagram are voltages or currents. The synapse block multiplies the weights by the input signals. The voltage-to-current convertor then changes the weighted input voltage signals to currents; this conversion is used because current summation is simple to implement. At this stage the current signals are summed and then driven into a nonlinear load that behaves like an activation function, as sketched below.
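A behavioral sketch of this structure, with tanh standing in for the hardware nonlinear load (an assumption), shows that the single-input modules reproduce a conventional multi-input neuron once their outputs are tied together:

```python
import numpy as np

def neuron(x, w, theta=0.0):
    # Lumped McCulloch-Pitts sum of Eq. (10), O = sum_i w_i*x_i + theta,
    # followed by a tanh stand-in for the nonlinear load (an assumption).
    return np.tanh(float(np.dot(w, x)) + theta)

def distributed_neuron(x, w, theta=0.0):
    # Distributed form: one single-input module per input; the tied output
    # node sums the module currents (KCL) before the shared nonlinearity.
    module_currents = [wi * xi for wi, xi in zip(w, x)]
    return np.tanh(sum(module_currents) + theta)

x = [0.3, -0.5, 0.8, 0.1]
w = [1.0, 0.5, -0.25, 0.75]
assert abs(neuron(x, w) - distributed_neuron(x, w)) < 1e-12
print(f"neuron output: {neuron(x, w):+.4f}")
```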
The memristor bridge circuit [10] is used to implement the synapse of the proposed modular neuron. This circuit resembles the famous Wheatstone bridge; as shown in Fig. 4, it is built from four memristors with predefined polarities.

The synaptic operation includes the ability to program the synaptic weight, to store it in a nonvolatile way, and to multiply it by the input signal; all three characteristics are fulfilled by the memristor bridge synapse. To program the synapse, an input signal is applied for a specific time proportional to the weight intended for the synapse. Since the memristor is a history-dependent device, the allocated weight is stored in the memristor bridge circuit. The relationship between the output and input voltages of the synapse circuit is given by [10]

$$v_{out} = \beta\, V_{in} \qquad (11)$$

$$\beta = \frac{M_2}{M_1 + M_2} - \frac{M_4}{M_3 + M_4} \qquad (12)$$
Fig. 2 (a) The linear boundary drift model [13] is used to simulate the i–v curve (top) and the memristance versus voltage (bottom) of the Pt/TiO2/Pt memristor. The model parameters were set to D = 3 nm, R_init = 100 kΩ, R_ON = 100 Ω, R_OFF = 200 kΩ, μ_D = 10⁻¹⁴ m² s⁻¹ V⁻¹, and p = 2; the curves are for a 1 V, 1 kHz triangular input voltage. (b) The TEAM model [22] is used to simulate the i–v curve (top) and the memristance versus voltage (bottom) of the Pt/TiO2/Pt memristor. The model parameters were set to α_ON = 3, α_OFF = 3, k_ON = −8e−13 nm/s, k_OFF = 8e−13 nm/s, i_ON = 8.9 μA, and i_OFF = 115 μA; the curves are for a 3 mA, 1 Hz sinusoidal input current
Fig. 3 Neuron and synapse. (a) McCulloch–Pitts model. (b) Equivalent block diagram of the distributed neuron
where β is the synaptic weight and M_1, M_2, M_3, and M_4 are the memristances of the memristors. In the training phase, different input signals are used to program the synapse depending on the memristor model. With the linear boundary drift model, for example, the width of the voltage pulse applied to the synapse is proportional to the weight intended for the synapse, whereas with the TEAM model the pulse width is not proportional to the intended weight. The memristance behavior and the weight of the proposed synapse during the programming phase are shown in Fig. 5; Fig. 5a, b shows the simulation based on the linear boundary drift model and the TEAM model, respectively. As can be seen in Fig. 5, for the memristor bridge synapse simulated with the linear model of the Pt/TiO2/Pt device used in [13], M_1 and M_4 change their states from HRS to LRS while M_2 and M_3 change from LRS to HRS; in this case a 2 V pulse is applied as the input signal. In the other case, applying a ramp input current with a slope of 0.16e−03 A/s to a memristor bridge simulated with the TEAM model, the resistive states of M_2 and M_3 change very little. Although the memristances of M_2 and M_3 do not change considerably, the changing memristances of M_1 and M_4 sweep the weight β of the memristor bridge circuit from −1 to 1, as illustrated in the sketch below.
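A minimal numerical check of Eq. (12), using the R_ON and R_OFF values of the simulated Pt/TiO2/Pt device, confirms that the bridge reaches weights near ±1 at the extreme memristance assignments and exactly zero when balanced:

```python
def bridge_weight(m1, m2, m3, m4):
    # Eq. (12): beta = M2/(M1 + M2) - M4/(M3 + M4)
    return m2 / (m1 + m2) - m4 / (m3 + m4)

R_ON, R_OFF = 100.0, 200e3
print(bridge_weight(R_ON, R_OFF, R_OFF, R_ON))   # ~ +1 (M1, M4 in LRS)
print(bridge_weight(R_OFF, R_ON, R_ON, R_OFF))   # ~ -1 (M2, M3 in LRS)
print(bridge_weight(R_ON, R_ON, R_ON, R_ON))     #   0 (balanced bridge)
# Eq. (11): a 0.5 V operation-phase pulse then reads out as beta * 0.5 V
```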
As the input current ramp is applied to the memristor bridge circuit, the M_1 and M_4 devices begin to change their memristances once the current through them reaches i_ON (8.9 μA). While the current is above i_ON, M_1 and M_4 keep changing, but M_2 and M_3 do not change until the current reaches i_OFF (115 μA); for the current to reach i_OFF, M_1 and M_4 must change their resistive state completely. The branch currents then exceed i_OFF, which causes M_2 and M_3 to change their memristances as well. This method has a drawback in the initial moments of programming with the TEAM model. For example, as can be seen in Fig. 5b, after 2 s the voltages across M_1 and M_4 exceed 100 V, which is not practicable and would likely cause breakdown and device failure. To solve this problem, in order to program memristors that follow
Fig. 4 Memristor bridge synapse circuit [10]
Fig. 5 (a) Programming of the memristor bridge synapse based on the linear boundary drift model [13] with the window function of [14]; the input signal is a 2 V pulsed voltage. (b) Programming of the memristor bridge synapse based on the TEAM model [22]; the input signal is a ramp current with a slope of 0.16e−03 A/s. The top panels show the input signal applied to the bridge synapse, the middle panels show the memristance of M_1 and M_4 versus that of M_2 and M_3, and the bottom panels show the weight of the memristor bridge synapse
the TEAM model, another approach is utilized. When the TEAM model is used to simulate the memristor bridge synapse, the weight of the synapse does not change during the initial moments of programming, while the memristances of M_1 and M_4 start to decrease; in addition, a high voltage is needed to obtain the desired change in the synapse weight at this stage, as can be seen in the red zone of Fig. 5b. Therefore, omitting this zone from the programming procedure not only reduces the programming time of the synapse but also removes the need for a considerably high programming voltage at this stage.

To omit this zone, the initial memristances of all the memristors are first set to R_ON by a RESET procedure. In the next step, to generate a positive or negative weight, a positive or negative ramp current, respectively, is applied to the synapse. The proposed programming technique is shown in Fig. 6a, b for negative and positive weight programming of the synapse with the TEAM model, respectively.
In the RESET procedure, to initialize all memristor devices to R_ON, the same voltage is applied to every memristor. For this purpose a negative voltage (−5.5 V) is applied to node A and a positive voltage (+5.5 V) to node B of the memristor bridge synapse in Fig. 4, while the input node of the synapse is connected to ground. In this way 5.5 V DC is applied across each of the four memristors. The RESET procedure is performed only once, as the first step of each programming procedure. The RESET procedure is shown in Fig. 7 for the memristor device simulated by the TEAM model with different initial memristances; as can be seen, all devices reach R_ON by the end of the applied DC voltage, regardless of their initial memristance.
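The RESET-then-ramp procedure can be sketched at the weight level with Eq. (12): after RESET all four memristances sit at R_ON (β = 0), and a ramp of one polarity then moves one memristor pair monotonically toward R_OFF. The linear state sweep below, and the assignment of which pair moves for which polarity, are illustrative assumptions rather than the TEAM dynamics:

```python
import numpy as np

R_ON, R_OFF = 100.0, 200e3

def beta(m1, m2, m3, m4):
    # Eq. (12): beta = M2/(M1 + M2) - M4/(M3 + M4)
    return m2 / (m1 + m2) - m4 / (m3 + m4)

for s in np.linspace(0.0, 1.0, 5):        # s: normalized programming progress
    m = R_ON + s * (R_OFF - R_ON)         # the pair driven by the ramp
    pos = beta(R_ON, m, m, R_ON)          # M2, M3 driven -> positive weight
    neg = beta(m, R_ON, R_ON, m)          # M1, M4 driven -> negative weight
    print(f"s = {s:.2f}: beta+ = {pos:+.3f}, beta- = {neg:+.3f}")
```

Starting from the balanced all-R_ON state, both branches sweep monotonically from 0 toward +1 or −1, which is the dead-zone-free behavior the proposed technique targets.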
In the operation phase, a narrow pulse is applied to the memristor bridge synapse because the memristor's state must not change in this phase. Here a pulse width of 10 ns is used, since it does not change the memristor's state. When such a narrow pulse is applied, the memristor acts as a resistor, and thus a resistor is used in place of the memristor in operation-phase simulations.

The operation of the memristor bridge synapse is simulated for five different weights in Fig. 8. The linear relationship between the output and input voltages of the synapse can be seen for the different weights.
Considering the neuron structure in Fig. 3b, after the synaptic weight multiplies the input signal, the resulting voltage must be converted to a current by the voltage-to-current convertor shown in Fig. 9. Its input is differential because the output of the synapse is differential.

The CMOS transistor parameters in 180 nm CMOS technology, the resistor values, and the voltage source values are given in Table 1. The output current is related to the input voltage by
Fig. 6 (a) Programming the memristor bridge synapse to a negative weight with the proposed technique, using the TEAM model [22]. (b) Programming the memristor bridge synapse to a positive weight with the proposed technique, using the TEAM model [22]. The input signal is a ramp current with a slope of 0.16e−03 A/s. In the top panels the dashed green curves show the input current applied to the bridge synapse, the middle panels show the memristance of M_1 and M_4 versus that of M_2 and M_3, and the bottom panels show the weight of the memristor bridge synapse (color figure online)
$$I_{out} = \left(\frac{g_m}{1 + g_m R}\right) v_{in} \qquad (13)$$

where R is the resistance at the sources of Q_1 and Q_2 and g_m is the transconductance of Q_1 and Q_2; note that for g_m R ≫ 1 the convertor gain approaches 1/R. A nonlinear load is used as the output load, and the combination of the voltage-to-current convertor and the nonlinear load serves as the activation function.
The activation function is one of the most important ingredients of a neural network: it defines the output of the neuron for a given weighted input. After the signal conversion, the current drives the nonlinear load that realizes the activation function. Several activation functions are used in neurons, such as the hyperbolic tangent, the step function, and the sigmoid; the sigmoid and hyperbolic tangent are S-shaped nonlinear functions. In [12] a nonlinear load is used to implement the activation function of a distributed neuron; it comprises two PMOS/NMOS transistors and one resistor and requires four bias voltages. In [15] the nonlinear load of [12] was revised by replacing two resistors with two transistors; the revised load consumes less power and needs three bias voltages to implement the sigmoid function. In [16] the number of sources was reduced to one for a sigmoid activation function.

Here a novel nonlinear load is used to implement a function similar to the hyperbolic tangent. The circuit is shown in Fig. 10, and the transistor parameters for 180 nm CMOS technology are given in Table 2. The proposed nonlinear load is fed by its input current, which is the output of the voltage-to-current convertor, so it requires no bias voltage. The proposed neuron operates in the four regions summarized in Table 3; the current in each region is determined by the operation of the transistors in that region.

In region II, a low input current causes a low positive output voltage, and as a result Q_6 and Q_7 are in the triode and off regions, respectively. The output voltage in this region is defined as follows:
Fig. 7 RESET procedure for the Pt/TiO2/Pt device simulated by the TEAM model [23] with different initial memristances (R_init = R_OFF, 0.9·R_OFF, 0.8·R_OFF, 0.7·R_OFF, and 0.6·R_OFF); the top panel shows the applied voltage and the bottom panel the memristance versus time
Fig. 8 V_out versus V_in of the memristor bridge synapse for synaptic weights of −1, −0.5, 0, 0.5, and 1, where each input voltage value is applied as a narrow pulse in the operation phase; a pulse width of 10 ns is used here
Fig. 9 Voltage to current convertor circuit
$$I_{in} = \frac{V_{out}}{R_T} \qquad (14)$$

where R_T is the resistance of Q_6 in the triode region. Region III is similar to region II, with Q_7 and Q_6 in the triode and off regions, respectively.

The output voltage increases with the input current until it reaches V_tn (the threshold voltage of Q_6). Consequently, a further increase of the input current moves the operation from region III to region IV. In this region Q_6 is saturated, Q_7 is off, and the current is

$$I_{in} = \frac{1}{2}\, k_n S_n (V_{out} - V_{tn})^2 (1 + \lambda V_{out}) \qquad (15)$$

where S_n is defined as W_6/L_6.

Decreasing the input current to negative values (reversing the current direction) until the output voltage reaches V_tp (the threshold voltage of Q_7) moves the circuit from region II to region I. In this region Q_7 is saturated, Q_6 is off, and the current is

$$I_{in} = \frac{1}{2}\, k_p S_p (V_{out} - V_{tp})^2 (1 + \lambda V_{out}) \qquad (16)$$

where S_p is defined as W_7/L_7.

To solve (14)–(16) for V_out, the channel-length modulation effect is ignored (λ = 0) for ease of calculation. The following equations give the output voltage in each region [Eqs. (17)–(19) correspond to regions I, II (or III), and IV, respectively]:

$$V_{out} = V_{tp} + \sqrt{2 I_{in} / (k_p S_p)} \qquad (17)$$

$$V_{out} = R_T I_{in} \qquad (18)$$

$$V_{out} = V_{tn} + \sqrt{2 I_{in} / (k_n S_n)} \qquad (19)$$
In Fig. 11 the output voltage of the activation function is plotted against the input current.
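The piecewise activation of Eqs. (17)–(19) can be evaluated directly; note that the published Eq. (17) would take the square root of a negative current in region I, so the sketch below uses the current magnitude with a negative branch, which we read as the intended behavior. The threshold voltages, k·S products, and R_T value are placeholders, not extracted device data:

```python
import numpy as np

V_TN, V_TP = 0.4, -0.4           # Q6/Q7 threshold voltages (V), assumed
KN_SN = 1e-3 * (4.5 / 0.18)      # k_n * S_n with S_n = W6/L6, k_n assumed
KP_SP = 0.5e-3 * (10 / 0.18)     # k_p * S_p with S_p = W7/L7, k_p assumed
R_T = 2e4                        # triode-region resistance (ohm), assumed

def v_out(i_in):
    """Map the input current to the output voltage region by region."""
    v_lin = R_T * i_in                              # Eq. (18), regions II/III
    if V_TP < v_lin < V_TN:
        return v_lin
    if i_in > 0:                                    # region IV: Q6 saturated
        return V_TN + np.sqrt(2 * i_in / KN_SN)     # Eq. (19)
    return V_TP - np.sqrt(2 * abs(i_in) / KP_SP)    # Eq. (17), magnitude form

for i in (-20e-6, -5e-6, 0.0, 5e-6, 20e-6):
    print(f"I_in = {i:+.1e} A -> V_out = {v_out(i):+.3f} V")
```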
As can be seen in Fig. 12, connecting the units described above creates a modular neuron comprising a memristor-based synapse. The input voltage is weighted by the memristor bridge synapse and transformed to a current by the voltage-to-current convertor; the resulting current then flows through the nonlinear load. In Fig. 13 the output of the proposed neuron is depicted for a pulsed 0.5 V input voltage of 10 ns width with zero, negative, and positive weights. This
Table 1 Parameters of the transistors, resistors, and voltage sources of the voltage-to-current convertor circuit

Transistor parameters (μm): W1/L1 = 1.5/0.2, W2/L2 = 1.5/0.2, W3/L3 = 1.7/0.3, W4/L4 = 1.7/0.3, W5/L5 = 9/0.5
Resistors (kΩ): R_S1 = 2, R_S2 = 2
Voltage sources (V): VDD = 0.9, VSS = −0.9
Fig. 10 Proposed passive nonlinear load
Table 2 Transistor parameters of the nonlinear load

Transistor parameters (μm): W6/L6 = 4.5/0.18, W7/L7 = 10/0.18
Table 3 Operation regions of the neuron circuit

Region   V_out                  Q_6         Q_7
I        V_out < V_tp           OFF         Saturation
II       V_tp < V_out < 0       OFF         Triode
III      0 < V_out < V_tn       Triode      OFF
IV       V_tn < V_out           Saturation  OFF
Fig. 11 i–v curve of the nonlinear load acting as an activation function
neuron has one input and one output, and modularity is one of the main features of this type of neuron.

This means that to create a multiple-input, one-output neuron, the modular neuron is repeated in parallel and the outputs of the modular neurons are connected together; a distributed neuron is thus produced from the proposed modular neuron. The output current, the summation of all the module output currents, is applied to each nonlinear load and is determined by

$$I_{SUM} = \sum_{k=1}^{N} \left(\frac{g_m}{1 + g_m R}\right) w_k\, v_{in_k} \qquad (20)$$

where I_SUM is the current collected at the output node, N is the number of modular neurons connected together, w_k is the synapse weight, and v_in_k is the input voltage of the k-th modular neuron.
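A one-line numerical reading of Eq. (20), with placeholder g_m and R values, shows how the tied module outputs behave as a weighted sum at the shared node:

```python
import numpy as np

def i_sum(v_in, w, gm=1e-3, R=1e3):
    # Eq. (20): I_SUM = sum_k (gm / (1 + gm*R)) * w_k * v_in_k; gm and R
    # are the V-to-I convertor parameters of Eq. (13), values assumed here.
    g = gm / (1.0 + gm * R)
    return g * float(np.dot(w, v_in))   # KCL at the shared output node

w = [0.8, -0.3, 0.5, -1.0]              # bridge weights in [-1, 1]
v_in = [0.5, 0.5, -0.5, 0.2]            # module input voltages (V)
print(f"I_SUM = {i_sum(v_in, w):.3e} A")
```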
4 Training and simulation results
To test the operation of the proposed neuron, it is used in a real neural network application. A data set that models psychological experimental results is used as a classification problem to test its applicability in real neural networks [17]. The data set has three classes and four attributes: each sample is classified as having the balance scale tip to the right, tip to the left, or remain balanced, and the attributes are the left weight, the left distance, the right weight, and the right distance. There are 625 samples, divided into 500 training samples and 125 test samples.
The proposed neural network structure has four inputs, five neurons in the hidden layer, and three neurons in the output layer, as shown in Fig. 14. Each neuron is a distributed-structure modular neuron, so there are N modular neurons per neuron, where N is the number of inputs; this results in 20 modular neurons in the hidden layer and 15 in the output layer. A shape-only sketch of this network is given below.
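The following Python sketch of the 4-5-3 topology clarifies the module count: each of the five hidden neurons is assembled from four single-input modules and each of the three output neurons from five. The random weights, the tanh stand-in for the nonlinear load, and the class ordering taken from Fig. 14 are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.uniform(-1, 1, (5, 4))   # hidden layer: 5 x 4 = 20 bridge modules
W2 = rng.uniform(-1, 1, (3, 5))   # output layer: 3 x 5 = 15 bridge modules

def forward(x):
    h = np.tanh(W1 @ x)           # each hidden neuron sums 4 module currents
    return np.tanh(W2 @ h)        # each output neuron sums 5 module currents

x = np.array([0.2, -0.4, 0.7, 0.1])      # four attributes, assumed pre-scaled
scores = forward(x)
classes = ["Right", "Balanced", "Left"]  # ordering per Fig. 14
print(scores, "->", classes[int(np.argmax(scores))])
```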
There are three methods for training a hardware neural network (HNN): on-chip, off-chip, and chip-in-the-loop [18]. In the on-chip method, extra training circuitry on the chip calculates and updates the synaptic weights; its drawback is the area consumed by circuits that are required only for the infrequent training phase. In the off-chip method, all training calculations are performed on a computer and the final weights are downloaded to the chip.
Fig. 12 Proposed modular neuron
Fig. 13 Output of the modular neuron simulated for a 1 V input pulse of 10 ns duration with positive, zero, and negative synaptic weight values (weights from −1 to 1 in steps of 0.25)
Because the training calculations are carried out with high precision on the computer, the chip then performs worse than expected. In the chip-in-the-loop method, initial weights are downloaded to the synapses on the chip and the outputs of the on-chip neural network are returned to the computer; the synaptic weights are then recalculated and updated on the computer based on the chip outputs. This procedure is repeated until the expected performance of the on-chip neural network is achieved.
The off-chip method is applied to train the neural network; all of the training procedure is performed on the host computer, using the training data set. As mentioned, chip performance degradation is the drawback of off-chip training. To overcome this problem, a training method based on modified chip-in-the-loop training [3] and HSPICE–MATLAB co-simulation [19] is used here. Modified chip-in-the-loop training has three main steps: (1) the neural network is trained on the computer and the synaptic weights are downloaded to the chip, (2) the outputs of all neurons are captured from the chip and stored in computer memory, and (3) each layer of the neural network is trained separately in chip-in-the-loop fashion (using the host computer and the chip). The major drawback of this method is that the outputs of all of the chip's neurons are required, which needs special consideration when designing the chip. This algorithm is used in this work, but HSPICE® simulation results are used instead of chip outputs; HSPICE–MATLAB co-simulation is used here for training the neural network.
In the first epoch, the neural network is trained in MATLAB® using the back-propagation algorithm. The synaptic weight values are then mapped to the [−1, 1] domain in order to apply them to the memristor bridge synapse circuits. An HSPICE® simulation is performed, and the outputs of all neurons of the network are returned to MATLAB®. After this step, in chip-in-the-loop fashion, the back-propagation algorithm is applied to each layer of the network separately; the calculated synaptic weights are mapped to [−1, 1] and applied to the memristor bridge synapses. This procedure is repeated in each epoch until the expected performance of the chip is achieved. Figure 15 displays the steps of the training method. Since all training computations are performed on the computer, this can be considered an off-chip training method. Using this method, the proposed neural network is trained in thirty-eight epochs.
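The paper does not specify how the trained weights are mapped into [−1, 1] before being loaded onto the bridge synapses; a simple per-layer scaling by the largest weight magnitude, sketched below, is one plausible reading and is an assumption:

```python
import numpy as np

def map_to_unit_interval(W):
    # Scale a layer's weight matrix so every entry lies in [-1, 1];
    # the choice of per-layer max-magnitude scaling is an assumption.
    m = np.max(np.abs(W))
    return W / m if m > 0 else W

W = np.array([[2.3, -0.7], [-4.1, 1.5]])
print(map_to_unit_interval(W))   # all entries now within the bridge range
```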
After training is complete, the produced weights are applied to the proposed neural network modeled in HSPICE®, using the test data set. To analyze the network's performance, the outputs extracted from the HSPICE® simulation are fed back to MATLAB® for comparison with the expected outputs in the data set. The mean square error (MSE) of the simulation is 0.0695, and the recognition rate is 86.7 %. A summary of the proposed neural network and its simulation results is given in Table 4.
Fig. 14 ANN architecture for classification (inputs X1–X4, hidden neurons N1–N5, and output neurons N6–N8 for the Right, Balanced, and Left classes)
Fig. 15 Training flowchart of the HSPICE–MATLAB co-simulation: back-propagation training in MATLAB; mapping the weights to [−1, 1], applying them to the memristor bridge synapses, and simulating in HSPICE; returning the simulation results to MATLAB; per-layer back-propagation in MATLAB; and repeating until the expected performance is reached
5 Conclusion
In this paper, a modular neuron is designed using a memristor-based synapse and is used in a real neural network application. The proposed neural cell includes three main units: a memristor bridge synapse, a voltage-to-current convertor, and a nonlinear load. The memristor bridge synapse enables the neuron to realize negative, zero, and positive weights; the voltage-to-current convertor changes the weighted voltage signals to currents; and the currents are then summed and driven through a nonlinear load acting as the activation function. The off-chip learning method is used to test the applicability of the proposed neuron in a real neural network. The results show an 86.7 % recognition rate and a 0.0695 mean square error for its learning.
References
1. Misra J, Saha I (2010) Artificial neural networks in hardware: a
survey of two decades of progress. Neurocomputing 74:239–255
2. Eberhardt S, Duong T, Thakoor A (1989) Design of parallel hardware neural network systems from custom analog VLSI 'building block' chips. In: International Joint Conference on Neural Networks (IJCNN), 1989
3. Adhikari SP, Yang C, Kim H, Chua LO (2012) Memristor bridge
synapse-based neural network and its learning. IEEE Trans
Neural Netw Learn Syst 23(9):1426–1435
4. Rahimi Azghadi M, Iannella N, Al-Sarawi SF, Indiveri G, Abbott D (2014) Spike-based synaptic plasticity in silicon: design, implementation, application, and challenges. Proc IEEE 102(5):717–737
5. Chua LO (1971) Memristor—the missing circuit element. IEEE
Trans Circuit Theory 18(5):507–519
6. Strukov DB, Snider GS, Stewart DR, Williams RS (2008) The
missing memristor found. Nature 453:80–83
7. Jo SH, Chang T, Ebong I, Bhadviya BB, Mazumder P, Lu W
(2010) Nanoscale memristor device as synapse in neuromorphic
systems. Nano Lett 10:1297–1301
8. Lehtonen E, Laiho M (2010) CNN using memristors for neigh-
borhood connections. In: 12th international workshop on cellular
nanoscale networks and their applications CNNA 2010, pp 1–4
9. Laiho M, Lehtonen E (2010) Cellular nanoscale network cell with
memristors for local implication logic and synapses. In: Pro-
ceedings of 2010 IEEE international symposium on circuits and
systems ISCAS, pp 2051–2054
10. Kim H, Sah MP, Yang C, Roska T, Chua LO (2012) Memristor
bridge synapses. Proc IEEE 100(6):2061–2070
11. Satyanarayana S, Tsividis Y, Graf HP (1989) Analogue neural
networks with distributed neurons. Electron Lett 25(5):302
12. Satyanarayana S, Tsividis YP, Graf HP (1989) A reconfigurable
VLSI neural network. In: 1989 Proceedings of international
conference on wafer scale integration, vol 27, pp 67–81
13. Biolek Z, Biolek D, Biolková V (2009) SPICE model of memristor with nonlinear dopant drift. Radioengineering 18:210–214
14. Joglekar YN, Wolf SJ (2008) The elusive memristor: signatures
in basic electrical circuits. Physics 30(4):1–22
15. Djahanshahi H, Ahmadi M, Jullien GA, Miller WC (1996) A
unified synapse-neuron building block for hybrid VLSI neural
networks. In: 1996 IEEE international symposium on circuits and
systems. Circuits and systems connecting the world. ISCAS 96,
vol 3, pp 483–486
16. Khodabandehloo G, Mirhassani M, Ahmadi M (2012) Analog
implementation of a novel resistive-type sigmoidal neurons.
IEEE Trans Very Large Scale Integr (VLSI) Syst 20(4):750–754
17. Blake CL, Merz CJ (1998) UCI machine learning repository.
University of California, Irvine, School of Information and
Computer Sciences [Online]. http://archive.ics.uci.edu/ml
18. Draghici S (2000) Neural networks in analog hardware—design
and implementation issues. Int J Neural Syst 10:19–42
19. Aggarwal A, Hamilton B (2012) Training artificial neural networks with memristive synapses: HSPICE–MATLAB co-simulation. In: 11th symposium on neural network applications in electrical engineering, pp 101–106
20. Linn E, Siemon A, Waser R, Menzel S (2014) Applicability of well-established memristive models for simulations of resistive switching devices. IEEE Trans Circuits Syst I Regul Pap 61(8):2402–2410
21. Pickett MD, Strukov DB, Borghetti JL, Yang JJ, Snider GS,
Stewart DR, Williams RS (2009) Switching dynamics in titanium
dioxide memristive devices. J Appl Phys 106(7):074508
22. Kvatinsky S, Friedman EG, Kolodny A, Weiser UC (2013)
TEAM: threshold adaptive memristor model. IEEE Trans Cir-
cuits Syst I Regul Papers 60(1):211–221
23. Kvatinsky S, Talisveyberg K, Fliter D, Friedman EG, Kolodny A,
Weiser UC (2012) Models of memristors for SPICE simulations.
In: Proceedings of the IEEE convention of electrical and elec-
tronics engineers in Israel, pp 1–5
Table 4 Simulation results of the trained ANN in HSPICE®

Features                    Value
Number of modular neurons   45
Mean square error           0.0695
Correct classification      86.7 %
Number of training epochs   38