REVIEW
Modular neuron comprised of memristor-based synapse
Jafar Shamsi¹ · Amirali Amirsoleimani² · Sattar Mirzakuchaki¹ · Majid Ahmadi²
Received: 4 September 2014 / Accepted: 19 August 2015
© The Natural Computing Applications Forum 2015
Abstract The design of an analog modular neuron based on memristors is proposed here. Since neural networks are built by repeating basic blocks called neurons, modular neurons are essential for neural network hardware. In this work, modularity of the neuron is achieved through a distributed neuron structure. Major challenges in implementing the synaptic operation are weight programmability, multiplication of the weight by the input signal and nonvolatile weight storage. The memristor bridge synapse addresses all of these challenges. The proposed neuron is a modular neuron based on the distributed neuron structure that exploits the memristor bridge synapse for its synaptic operations. To verify correct operation of the proposed neuron, it is used in a real-world neural network application. An off-chip method is used to train the neural network. The results show 86.7 % correct classification and about 0.0695 mean square error for a 4-5-3 neural network based on the proposed modular neuron.
Keywords Artificial neural network · Synaptic operation · Distributed neuron · Memristor bridge synapse
1 Introduction
Digital, analog and mixed-signal circuits are the possible approaches for implementing artificial neural network (ANN) hardware [1]. Although a digital implementation has higher accuracy, it is slow and has high power and area consumption. On the other hand, an analog implementation is fast and meets the criterion of minimal area. The main drawback of analog circuits is low accuracy due to device mismatch [2]. Analog circuits have advantages such as low area and low power consumption, and some mathematical operations are implemented through inherent physical processes, such as collecting currents at a node to perform summation. In other words, by Kirchhoff's current law (KCL), the output current of a node is the sum of the input currents, which can be used to implement the summation operation in analog circuits.
One of the main challenges in analog ANN circuits is the implementation of programmable synapses, whose weight values are adjusted through programming. Resistors, capacitors and floating gates have been used to implement synaptic weights in analog circuits [3]. Each of these elements has drawbacks. The resistor's static behavior makes it impossible to change or train. A capacitor cannot store weights for long periods of time because of its permanent charge leakage. Floating gate transistors are a successful alternative for the synapses of neurons in analog circuits and have been used to implement Hebbian-based and STDP learning rules; low area consumption in STDP implementations is another of their advantages. Their main drawback is their high nonlinearity, which complicates weight multiplication, and adjusting the synaptic weight value requires extra circuits to drive tunneling and hot-electron injection [4]. The emergence of the memristor as a nonvolatile resistor opens a new vision in neural network hardware implementation. The memristor was postulated by Chua in 1971 [5], and in 2008 this missing element was physically realized at HP laboratories [6].
Corresponding author: Amirali Amirsoleimani, amirali_ama@yahoo.com
1 Iran University of Science and Technology, Tehran, Iran
2 University of Windsor, Windsor, Canada
Neural Comput & Applic. DOI 10.1007/s00521-015-2047-0
The memristor, or memory resistor, is a two-terminal element whose memristance depends on the amount and polarity of the charge that has passed through it. Several memristor-based neural network circuits have been presented in the literature since its discovery, which highlights the impact of the memristor's emergence on neural network hardware implementation. In [7] the similarity between the memristor and a biological synapse was investigated. In [8] a cellular neural network (CNN) based on memristors was implemented, and in [9] a hybrid CMOS-memristor CNN was designed. The memristor bridge circuit presented in [10] consists of four memristors and three transistors; it can produce negative, zero and positive synaptic weights and is used in this work as the synapse of the proposed modular neuron.
Moreover, in the distributed neurons introduced in [11], there is an activation function for each input signal rather than one activation function for all of a neuron's input signals. Each activation function is implemented by a nonlinear load, and their outputs are connected together to form a multi-input neuron. Modularity is one of the significant features of distributed neurons [11, 12]. Since a neural network is built by replicating the basic elements of neuron and synapse, modular neurons are essential for neural network hardware.
In this paper, by combining the benefits of the memristor bridge circuit and the distributed neuron, a modular neuron comprised of a memristor-based synapse is proposed. The memristor bridge circuit is used for the synaptic operation, and the distributed neuron structure makes the circuit modular. The paper is organized as follows. Memristor theory and its basic relations are reviewed in Sect. 2. The proposed circuit and its HSPICE® simulations are presented in Sect. 3. In Sect. 4, an off-chip method is used for training a neural network based on the proposed modular neuron, and its simulation results are presented. Concluding remarks follow in Sect. 5.
2 Memristor
The memristor is a metal-insulator-metal (MIM) structure. The HP memristor consists of two platinum (Pt) electrodes with a titanium dioxide (TiO2) layer between them. One side of the insulator layer is doped and consists of oxygen-deficient titanium dioxide (TiO2−x); this side has higher conductivity than the other side. The linear model of the HP memristor structure is shown in Fig. 1a. The internal state variable x of the memristor marks the position of the boundary between the doped and undoped zones. When x decreases, the conductance of the channel decreases proportionally.
The memristor creates a relationship between flux and charge:

$$M(q) = \frac{d\varphi}{dq} \qquad (1)$$

where M(q) is the memristance of the memristor. When the device is flux-controlled, the relation becomes

$$W(\varphi) = \frac{dq}{d\varphi} \qquad (2)$$

where W(φ) is the memductance of the memristor. Therefore, the current-voltage relationship is determined by

$$V = M(q)\,I \qquad (3)$$

$$I = W(\varphi)\,V \qquad (4)$$
In [13] a linear boundary drift model is introduced by applying a window function. In this model the velocity of the boundary between the doped and undoped zones of the TiO2 thin film is determined through
Fig. 1 a HP's Pt/TiO2/Pt memristor structure for the linear model in [13]. The thin film length is D, and the position of the boundary region along the TiO2 thin film is x. The resistance of the doped region (TiO2−x) is represented by an equivalent resistor R_ON, and the resistance of the undoped region (TiO2) by an equivalent resistor R_OFF. b The Pt/TiO2/Pt memristor structure for the physical model in [21]. The tunneling distance is denoted by w
$$v_D = \eta\,\frac{\mu_D R_{ON}}{D}\, i(t)\, f(x) \qquad (5)$$

where η and μ_D are the polarity of the memristor and the mobility of the dopants, respectively, and D and R_ON are the thickness and lowest resistance of the memristor device. The current i(t) is the current passing through the memristor, and the window function f(x) is used to insert nonlinearity into the memristor's behavior [14]. This window function is

$$f(x) = 1 - \left(\frac{2x}{D} - 1\right)^{2p} \qquad (6)$$

where p is a parameter controlling the nonlinearity of the memristor model. The memristance of the memristor is defined by

$$M(x) = R_{ON}\,\frac{x}{D} + R_{OFF}\left(1 - \frac{x}{D}\right) \qquad (7)$$
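As a numerical illustration, the sketch below integrates Eqs. (5)-(7) with forward Euler. This is a minimal sketch assuming the Fig. 2a parameter values and a sinusoidal drive standing in for the triangular wave; it is not the authors' SPICE netlist.

```python
import numpy as np

# Minimal sketch of the linear boundary drift model, Eqs. (5)-(7),
# with the Joglekar window of Eq. (6). Parameter values follow Fig. 2a.
D = 3e-9                     # thin film thickness (m)
R_ON, R_OFF = 100.0, 200e3   # limiting resistances (ohm)
mu_D = 1e-14                 # dopant mobility (m^2 s^-1 V^-1)
p = 2                        # window nonlinearity parameter
eta = 1                      # memristor polarity

def window(x):
    """Joglekar window f(x) = 1 - (2x/D - 1)^(2p), Eq. (6)."""
    return 1.0 - (2.0 * x / D - 1.0) ** (2 * p)

def memristance(x):
    """M(x) = R_ON * x/D + R_OFF * (1 - x/D), Eq. (7)."""
    return R_ON * x / D + R_OFF * (1.0 - x / D)

# Forward-Euler integration of Eq. (5): dx/dt = eta*mu_D*R_ON/D * i(t) * f(x)
dt, T = 1e-7, 2e-3
x = D / 2                                   # initial boundary position
for t in np.arange(0.0, T, dt):
    v = 1.0 * np.sin(2 * np.pi * 1e3 * t)   # 1 V, 1 kHz drive (sine, for brevity)
    i = v / memristance(x)                  # instantaneous Ohm's law
    x += dt * eta * mu_D * R_ON / D * i * window(x)
    x = min(max(x, 0.0), D)                 # keep the state physical

print(f"final memristance: {memristance(x):.1f} ohm")
```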
Although the above model exhibits the basic characteristics of a memristor device, it cannot reproduce real memristor device behavior [20]. In [21] a model based on the tunneling phenomenon is presented for the structure shown in Fig. 1b. While this model agrees more closely with experimental data extracted from a real memristor device, it is computationally complicated. A more simplified model is presented in [22], based on the same physical assumptions as [21]. This model includes current thresholds for the memristor device. In this model the state variable is defined by
model state variable is defined by,
dwtðÞ
dt¼
kOFF
itðÞ
iOFF
aOFF
fOFF wðÞ 0\iOFF \i
0iON \i\iOFF
kON
itðÞ
iON
aON
fON wðÞ i\iON \0
8
>
>
>
>
<
>
>
>
>
:
ð8Þ
where i_ON and i_OFF are the current thresholds and k_OFF, k_ON, α_OFF and α_ON are constants. The effective electric tunnel width, which serves as the internal state variable, is represented by w. The window functions f_ON and f_OFF limit the internal state variable to the interval [w_ON, w_OFF]. As in [21], to capture the asymmetric dependence on the state variable, f_ON and f_OFF may take different values. The i-v relationship in this model is

$$v(t) = R_{ON}\, e^{\frac{\lambda}{w_{OFF}-w_{ON}}\,(w - w_{ON})}\, i(t) \qquad (9)$$
where λ = ln(R_OFF/R_ON) is a fitting parameter set by the ratio of the highest-resistance state (HRS) to the lowest-resistance state (LRS) of the device. The Pt/TiO2/Pt memristor device simulated in [23] is considered for the simulation of the memristor bridge synapse in the modular neuron. The i-v curve and memristance of the simulated memristor are shown in Fig. 2 for both the linear memristor model [13] with the window function [14] and the TEAM model [22], computed in MATLAB®.
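A compact sketch of the TEAM relations of Eqs. (8)-(9) is given below, using the Fig. 2b constants. The hard clipping of w, the state bounds w_ON/w_OFF and the negative sign of i_ON are assumptions of this sketch rather than details taken from [22].

```python
import numpy as np

# Sketch of the TEAM model, Eqs. (8)-(9). Constants follow Fig. 2b; the
# window functions f_ON/f_OFF are replaced by hard clipping of w, and i_ON
# is taken negative so the Eq. (8) case conditions hold as written.
k_ON, k_OFF = -8e-13, 8e-13      # nm/s
alpha_ON = alpha_OFF = 3
i_ON, i_OFF = -8.9e-6, 115e-6    # current thresholds (A)
w_ON, w_OFF = 0.0, 3.0           # assumed state bounds (nm)
R_ON, R_OFF = 100.0, 200e3
lam = np.log(R_OFF / R_ON)       # fitting parameter, lambda = ln(R_OFF/R_ON)

def dw_dt(i):
    """Piecewise state derivative of Eq. (8); zero between the thresholds."""
    if i > i_OFF:
        return k_OFF * (i / i_OFF) ** alpha_OFF
    if i < i_ON:
        return k_ON * (i / i_ON) ** alpha_ON
    return 0.0

def resistance(w):
    """Eq. (9) mapping: R(w) = R_ON * exp(lambda * (w - w_ON)/(w_OFF - w_ON))."""
    return R_ON * np.exp(lam * (w - w_ON) / (w_OFF - w_ON))

print(dw_dt(3e-3))                            # > 0: drift toward w_OFF above i_OFF
print(resistance(w_ON), resistance(w_OFF))    # ~R_ON at w_ON, ~R_OFF at w_OFF
```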
3 Proposed design
Artificial neural networks contain massive interconnections of neurons. The McCulloch-Pitts model of a neuron is shown in Fig. 3a. The main parts of this model are: (1) synapses that multiply their weights by the input signals x_i, (2) a summation operation that adds the weighted input signals and (3) an activation function applied to the result of the summation. The neuron output relation is

$$O = \sum_{i=1}^{n} w_i x_i + \theta \qquad (10)$$

where θ is the bias, w_i is the synapse weight, x_i is the input signal, and O is the output of the neuron. The equivalent block diagram of the distributed neuron structure is shown in Fig. 3b. An N-input neuron is built from N neurons with one input and one output whose outputs are connected together [11]. The modularity of the distributed neuron is an obvious feature, since it consists of N similar neurons. All of the signals in this block diagram are shown as voltages or currents. The synapse block multiplies the weights by the input signals. The voltage to current convertor changes the weighted input voltage signals to currents; this conversion exploits how simply current summation is implemented. In this stage the current signals are summed and then fed into a nonlinear load that behaves like an activation function.
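Numerically, this distributed arrangement computes the same function as the lumped neuron of Eq. (10). A sketch, with np.tanh standing in for the hardware load (an assumption, since the circuit of this section only approximates a tanh-like S-curve):

```python
import numpy as np

# Each single-input modular neuron contributes one weighted current; the
# currents sum at the common output node (KCL), and the shared nonlinear
# load shapes the result. np.tanh is a stand-in for the hardware load.
def branch_current(w_k, x_k, g=1.0):
    """One single-input, single-output modular neuron branch."""
    return g * w_k * x_k

def distributed_neuron(w, x, theta=0.0):
    i_sum = sum(branch_current(wk, xk) for wk, xk in zip(w, x)) + theta
    return np.tanh(i_sum)

w, x = [0.5, -1.0, 0.25], [0.2, 0.1, 0.8]
print(distributed_neuron(w, x))       # distributed evaluation
print(np.tanh(np.dot(w, x)))          # lumped McCulloch-Pitts check, Eq. (10)
```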
For the synapse of the proposed modular neuron, the memristor bridge circuit [10] is used. This circuit resembles the well-known Wheatstone bridge. As shown in Fig. 4, it is implemented with four memristors of predefined polarities.
Synaptic operation requires the ability to program the synaptic weights, to store them in a nonvolatile fashion and to multiply them by the input signals. All three characteristics are fulfilled by the memristor bridge synapse. To program this synapse, an input signal is applied for a time proportional to the weight intended for the synapse. Since the memristor is a history-dependent device, the allocated weight is retained in the memristor bridge circuit. The relationship between the output and input voltages of the synapse circuit is given by [10]

$$v_{out} = \beta\, V_{in} \qquad (11)$$

$$\beta = \frac{M_2}{M_1 + M_2} - \frac{M_4}{M_3 + M_4} \qquad (12)$$
Fig. 2 a Linear boundary drift model [13] used to simulate the I-V curve (top) and memristance versus voltage (bottom) for the Pt/TiO2/Pt memristor. Model parameters: D = 3 nm, R_init = 100 kΩ, R_ON = 100 Ω, R_OFF = 200 kΩ, μ_D = 10^-14 m² s^-1 V^-1, p = 2. The curves are for a 1 V, 1 kHz triangular input voltage. b TEAM model [22] used to simulate the I-V curve (top) and memristance versus voltage (bottom) for the Pt/TiO2/Pt memristor. Model parameters: α_ON = 3, α_OFF = 3, k_ON = -8e-13 nm/s, k_OFF = 8e-13 nm/s, i_ON = 8.9 μA, i_OFF = 115 μA. The curves are for a 3 mA, 1 Hz sinusoidal input current
Fig. 3 Neuron and synapse. a McCulloch-Pitts model. b Equivalent block diagram of the distributed neuron
where β is the synaptic weight and M_1, M_2, M_3 and M_4 are the memristances of the four memristors. In the training phase, different input signals are used to program the synapse depending on the memristor model. With the linear boundary drift model, the width of the voltage pulse applied to the synapse is proportional to the weight intended for the synapse, whereas with the TEAM model the pulse width is not proportional to the intended weight. The memristance behavior and the weight of the proposed synapse during the programming phase are shown in Fig. 5; Fig. 5a, b shows simulations based on the linear boundary drift model and the TEAM model, respectively. As can be seen in Fig. 5, for the memristor bridge synapse simulated with the linear model of the Pt/TiO2/Pt device used in [13], M_1 and M_4 change their states from HRS to LRS while M_2 and M_3 change from LRS to HRS; in this case a 2 V pulse is applied as the input signal. In the other case, applying a ramp input current with a slope of 0.16e-03 to a memristor bridge circuit simulated with the TEAM model leaves the resistive states of M_2 and M_3 almost unchanged. Although the memristances of M_2 and M_3 do not change considerably, the change in the memristances of M_1 and M_4 causes the weight β of the bridge circuit to sweep from -1 to 1.
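Eq. (12) can be checked directly: with one diagonal pair at R_OFF and the other at R_ON the weight approaches ±1, and a balanced bridge gives zero. A sketch with illustrative memristance values:

```python
# Sketch of the memristor bridge weight of Eq. (12).
def bridge_weight(M1, M2, M3, M4):
    """beta = M2/(M1+M2) - M4/(M3+M4)."""
    return M2 / (M1 + M2) - M4 / (M3 + M4)

R_ON, R_OFF = 100.0, 200e3
print(bridge_weight(R_ON, R_OFF, R_OFF, R_ON))   # ~ +1: maximum positive weight
print(bridge_weight(R_OFF, R_ON, R_ON, R_OFF))   # ~ -1: maximum negative weight
print(bridge_weight(R_ON, R_ON, R_ON, R_ON))     #   0: balanced bridge
```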
As the ramp input current is applied to the memristor bridge circuit, the M_1 and M_4 memristor devices start to change their memristances once the currents through them reach i_ON (8.9 μA). While the current is above i_ON, M_1 and M_4 keep changing, whereas M_2 and M_3 do not change until the current reaches i_OFF (115 μA). For the current to reach i_OFF, M_1 and M_4 must change their resistive states completely. The branch currents then exceed i_OFF, which causes M_2 and M_3 to change their memristances as well. This method has a drawback in the initial moments of programming with the TEAM model. For example, as can be seen in Fig. 5b, after 2 s the voltages across M_1 and M_4 exceed 100 V, which is impractical and would likely cause breakdown and device failure. To solve this problem, in order to program the memristors that satisfy
Fig. 4 Memristor bridge synapse circuit [10]
Fig. 5 a Programming of the memristor bridge synapse based on the linear boundary drift model [13] with the window function [14]; the input signal is a 2 V pulsed voltage. b Programming of the memristor bridge synapse based on the TEAM model [22]; the input signal is a ramp input current with a slope of 0.16e-03. The top plots show the applied input signal, the middle plots show the memristance alteration of memristors M_1 and M_4 versus M_2 and M_3, and the bottom plots show the weight of the memristor bridge synapse
the TEAM model, another approach is utilized. When the TEAM model is used to simulate the memristor bridge synapse, in the initial moments of programming, while the memristances of M_1 and M_4 start to decrease, the weight of the synapse does not change. In addition, a high voltage is needed to obtain the desired change in synapse weight at this stage, as can be seen in the red zone of Fig. 5b. Therefore, omitting this zone from the programming procedure not only reduces the programming time of the synapse but also removes the need for a considerably high programming voltage at this stage.
To omit this zone from the programming procedure, all memristors' initial memristances should first be set to R_ON by a RESET procedure. In the next step, to generate a positive or negative weight in the synapse, a positive or negative ramp current, respectively, is applied to the synapse. The proposed programming technique is shown in Fig. 6a, b for negative and positive weight programming of the TEAM-modeled synapse, respectively.
In the RESET procedure, which initializes all memristor devices to R_ON, the same voltage is applied across every memristor. To this end, a negative voltage (-5.5 V) is applied to node A and a positive voltage (+5.5 V) to node B of the memristor bridge synapse in Fig. 4, while the input node of the synapse is connected to ground. In this way 5.5 V DC is applied to each of the four memristors. The RESET procedure is performed only once, as the first step of each programming procedure. The RESET procedure is shown in Fig. 7 for the memristor device simulated by the TEAM model with different initial memristances. As can be seen in Fig. 7, devices with different initial memristances all reach R_ON by the end of the applied DC voltage.
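The two-step procedure can be summarized in code. The sketch below is a deliberately simplified rendering: after RESET, only one diagonal pair is assumed to drift (as Fig. 6 suggests for currents below i_OFF), and the ramp is abstracted into fixed memristance increments; none of the numeric values are from the paper.

```python
import numpy as np

# Sketch of the proposed two-step programming: (1) RESET all four devices
# to R_ON, (2) ramp the input current, whose sign selects which memristor
# pair drifts toward R_OFF. Single-pair drift and the step size dM are
# simplifying assumptions, not the full Eq. (8) dynamics.
R_ON, R_OFF = 100.0, 200e3

def pair_weight(M_up):
    """|beta| when one pair sits at M_up and the other pair stays at R_ON."""
    return (M_up - R_ON) / (M_up + R_ON)

def program(target_beta, dM=50.0):
    M = R_ON                                   # step 1: RESET leaves all at R_ON
    while pair_weight(M) < abs(target_beta) and M < R_OFF:
        M += dM                                # step 2: ramp raises one pair
    pair = ("M2", "M3") if target_beta >= 0 else ("M1", "M4")
    return pair, M, np.copysign(pair_weight(M), target_beta)

print(program(+0.5))   # e.g. (('M2', 'M3'), 300.0, 0.5)
print(program(-0.8))   # e.g. (('M1', 'M4'), 900.0, -0.8)
```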
In the operation phase a narrow pulse is applied to the memristor bridge synapse, because the memristor's state must not change in this phase. Here the pulse width is taken as 10 ns, which does not change the memristor's state. For such narrow pulses the memristor acts as a resistor, so in the operation phase a resistor is used in place of each memristor.
The operation of the memristor bridge synapse is simulated for five different input signals in Fig. 8. The linear relationship between the output and input voltages of the synapse can be seen for different weights.
Considering the neuron structure in Fig. 3b, after the synaptic weights are multiplied by the input signal, the voltage must be converted to a current by the voltage to current convertor illustrated in Fig. 9. The input is in differential mode because of the differential output of the synapse. The CMOS transistor parameters in 180 nm CMOS technology, the resistor values and the voltage source values are given in Table 1. The output current's relationship with the input voltage is determined by:
Fig. 6 a Programming the memristor bridge synapse to a negative weight with the proposed technique, using the TEAM model [22]. b Programming the memristor bridge synapse to a positive weight with the proposed technique, using the TEAM model [22]. The input signal is a ramp input current with a slope of 0.16e-03. The dashed green curves in the top plots show the applied input current; the middle plots show the memristance alteration of memristors M_1 and M_4 versus M_2 and M_3; the bottom plots show the weight of the memristor bridge synapse (color figure online)
$$I_{out} = \frac{g_m}{1 + g_m R}\, v_{in} \qquad (13)$$

where R is the resistance at the sources of Q_1 and Q_2 and g_m is the transconductance of Q_1 and Q_2. A nonlinear load is used as the output load; the combination of the voltage to current convertor and the nonlinear load serves as an activation function.
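For intuition, Eq. (13) is the classic source-degeneration result: when g_m R ≫ 1, the gain reduces to I_out ≈ v_in/R, so the conversion is set mainly by the source resistors of Table 1 rather than by the transistors, which helps linearize the stage against device variation.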
The activation function is one of the most important ingredients of a neural network: it defines the output of the neuron for a given weighted input. After the signal conversion, the nonlinear load driven by the summed current implements the activation function. Several activation functions are used in neurons, such as the hyperbolic tangent, the step function and the sigmoid function; the sigmoid and hyperbolic tangent are S-shaped nonlinear functions. In [12] a nonlinear load is used to implement the activation function of a distributed neuron; it comprises two PMOS/NMOS transistors and one resistor and requires four voltage biases. In [15] the nonlinear load of [12] was revised, replacing two resistors with two transistors; it consumes less power and needs three voltage biases to implement a sigmoid function. In [16] the number of sources was decreased to one for a sigmoid activation function. Here a novel nonlinear load is used to implement a function similar to the hyperbolic tangent. The circuit is shown in Fig. 10, and the transistor parameters for 180 nm CMOS technology are given in Table 2. The proposed nonlinear load is fed by its input current, so it does not require any bias voltage; its input current is the output of the voltage to current convertor unit. The proposed neuron operates in the four regions summarized in Table 3, and each region's current is determined by the operation of the transistors in that region.

In region III, a low positive input current produces a low positive output voltage; as a result, Q_6 operates in the triode region while Q_7 is off. The output voltage in this region satisfies:
Fig. 7 RESET procedure for the Pt/TiO2/Pt device simulated by the TEAM model [23] with different initial memristances
Fig. 8 V_out versus V_in of the memristor bridge synapse for various values of synaptic weight and input voltage, where for each input voltage a narrow 10 ns pulse is used in the operation phase
Fig. 9 Voltage to current convertor circuit
$$I_{in} = \frac{V_{out}}{R_T} \qquad (14)$$

where R_T is the resistance of Q_6 in the triode region. Region II is similar to region III, with Q_7 in the triode region and Q_6 off.
The output voltage increases with the input current until it reaches V_tn (the threshold voltage of Q_6). A further increase in the input current then moves the operation from region III to region IV, in which Q_6 is saturated and Q_7 is off, and the current is given by:

$$I_{in} = \frac{1}{2} k_n S_n (V_{out} - V_{tn})^2 (1 + \lambda V_{out}) \qquad (15)$$

where S_n is defined as W_6/L_6.
Decreasing the input current to negative values (reversing the current direction) until the output voltage reaches V_tp (the threshold voltage of Q_7) moves the circuit from region II to region I. In this region Q_7 is saturated and Q_6 is off, and the current is given by:

$$I_{in} = \frac{1}{2} k_p S_p (V_{out} - V_{tp})^2 (1 + \lambda V_{out}) \qquad (16)$$

where S_p is defined as W_7/L_7.
To solve (14)-(16) for V_out, the channel length modulation effect is ignored (λ = 0) for ease of calculation. The following equations give the output voltage in each region [Eqs. (17)-(19) correspond to regions I, II (or III) and IV, respectively]:

$$V_{out} = V_{tp} + \sqrt{2 I_{in} / (k_p S_p)} \qquad (17)$$

$$V_{out} = R_T I_{in} \qquad (18)$$

$$V_{out} = V_{tn} + \sqrt{2 I_{in} / (k_n S_n)} \qquad (19)$$
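A sketch of the resulting transfer characteristic follows. The device constants and R_T are illustrative assumptions (not extracted from the authors' 180 nm process), Eq. (17) is evaluated with |I_in| so it stays real for negative currents, and regions are selected by the linear estimate, which leaves small jumps at the region boundaries.

```python
import numpy as np

# Sketch of the nonlinear load transfer function, Eqs. (17)-(19), over the
# four regions of Table 3 (lambda = 0). All device constants are assumed.
k_n, S_n = 300e-6, 4.5 / 0.18    # NMOS constant and W6/L6 from Table 2
k_p, S_p = 100e-6, 10 / 0.18     # PMOS constant and W7/L7 from Table 2
V_tn, V_tp = 0.4, -0.4           # assumed threshold voltages (V)
R_T = 20e3                       # assumed triode resistance (ohm)

def v_out(i_in):
    v_lin = R_T * i_in                                   # Eq. (18): regions II/III
    if V_tp < v_lin < V_tn:
        return v_lin
    if i_in > 0:                                         # region IV: Q6 saturated
        return V_tn + np.sqrt(2 * i_in / (k_n * S_n))    # Eq. (19)
    return V_tp - np.sqrt(2 * abs(i_in) / (k_p * S_p))   # Eq. (17), region I

for i in (-30e-6, -10e-6, 0.0, 10e-6, 30e-6):
    print(f"{i*1e6:+6.1f} uA -> {v_out(i):+.3f} V")      # S-shaped characteristic
```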
Figure 11 shows the simulated output voltage of the activation function versus the input current.
As shown in Fig. 12, connecting the above units creates a modular neuron comprised of a memristor-based synapse. The input voltage is weighted by the memristor bridge synapse and transformed into a current by the voltage to current convertor; the resulting current then flows through the nonlinear load. In Fig. 13 the output of the proposed neuron is depicted for a pulsed 0.5 V input voltage with 10 ns width, for zero, negative and positive weights.
Table 1 Parameters of the transistors, resistors and voltage sources of the voltage to current convertor circuit

Transistor parameters (μm): W_1/L_1 = 1.5/0.2, W_2/L_2 = 1.5/0.2, W_3/L_3 = 1.7/0.3, W_4/L_4 = 1.7/0.3, W_5/L_5 = 9/0.5
Resistors (kΩ): R_S1 = 2, R_S2 = 2
Voltage sources (V): VDD = 0.9, VSS = -0.9
Fig. 10 Proposed passive nonlinear load
Table 2 Transistor parameters of the nonlinear load (μm): W_6/L_6 = 4.5/0.18, W_7/L_7 = 10/0.18
Table 3 Operation regions of the neuron circuit

Region I (V_out < V_tp): Q_6 OFF, Q_7 saturation
Region II (V_tp < V_out < 0): Q_6 OFF, Q_7 triode
Region III (0 < V_out < V_tn): Q_6 triode, Q_7 OFF
Region IV (V_tn < V_out): Q_6 saturation, Q_7 OFF
Fig. 11 Nonlinear load I-V curve as an activation function
This neuron has one input and one output, and modularity is one of the main features of this type of neuron. To create a multiple-input, single-output neuron, the modular neuron is repeated in parallel and the outputs of the modular neurons are connected together; a distributed neuron is thereby produced from the proposed modular neuron. The output current, the summation of all the modular neuron output currents, is applied to each nonlinear load. It is given by:

$$I_{SUM} = \sum_{k=1}^{N} \frac{g_m}{1 + g_m R}\, w_k v_{in_k} \qquad (20)$$

where I_SUM is the collected current at the output node, N is the number of modular neurons connected together, w_k is the synapse weight and v_ink is the input voltage of the kth modular neuron.
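A sketch of Eq. (20): the node current is the Eq. (13) conversion gain times the weighted sum of the inputs. The g_m and R values below are illustrative assumptions.

```python
import numpy as np

# Sketch of Eq. (20): N modular neurons drive one output node, so the node
# current is the conversion gain gm/(1 + gm*R) of Eq. (13) times the
# weighted sum of the input voltages. gm and R values are assumed.
gm, R = 5e-3, 2e3
gain = gm / (1 + gm * R)                      # A/V

def i_sum(weights, v_in):
    return gain * np.dot(weights, v_in)       # Eq. (20)

weights = np.array([0.8, -0.3, 0.5, -1.0])    # one bridge synapse per input
v_in = np.array([0.2, 0.5, -0.1, 0.3])        # input pulse amplitudes (V)
print(i_sum(weights, v_in))                   # current into the shared load
```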
4 Training and simulation results
To test the operation of the proposed neuron, it is used in a real neural network application. A data set that models psychological experimental results is used as a classification problem to test its applicability in real neural networks [17]. The data set has three classes and four attributes: each sample is classified as having the balance scale tip to the right, tip to the left, or be balanced, and the attributes are the left weight, the left distance, the right weight and the right distance. There are 625 samples, divided into 500 training samples and 125 test samples.

The proposed neural network structure has four inputs, five neurons in the hidden layer and three neurons in the output layer, as shown in Fig. 14. Each neuron is a distributed neuron built from modular neurons, so each neuron contains N modular neurons, where N is its number of inputs. This results in 20 modular neurons in the hidden layer and 15 in the output layer.
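The software side of this setup can be sketched as follows; the exact file path under the UCI repository [17] and the random train/test split are assumptions of the sketch.

```python
import numpy as np
import pandas as pd

# Sketch: load the UCI balance-scale data (3 classes, 4 attributes,
# 625 samples) and split it 500/125. The URL path is assumed.
url = ("http://archive.ics.uci.edu/ml/machine-learning-databases/"
       "balance-scale/balance-scale.data")
cols = ["class", "left_weight", "left_distance",
        "right_weight", "right_distance"]
data = pd.read_csv(url, names=cols)

X = data[cols[1:]].to_numpy(dtype=float)                # 625 x 4 attributes
Y = pd.get_dummies(data["class"]).to_numpy(dtype=float) # one-hot: B, L, R

rng = np.random.default_rng(0)
idx = rng.permutation(len(X))
train, test = idx[:500], idx[500:]                      # 500 train / 125 test
```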
There are three methods for training a hardware neural network (HNN): on-chip, off-chip and chip-in-the-loop [18]. In the on-chip training method, extra training circuitry on the chip calculates and updates the synaptic weights; its drawback is the area consumed by circuits that are needed only during the infrequent training phase. In the off-chip method, all training calculations are performed on a computer and the final weights are downloaded to the chip.
Fig. 12 Proposed modular neuron
Fig. 13 Positive, zero and negative weight values are considered for the synapse, and the output of the modular neuron is simulated for a 1 V input pulse with 10 ns duration
Because the computer calculations have higher precision than the analog hardware, the chip then performs worse than expected. In the chip-in-the-loop method, initial weights are downloaded to the synapses on the chip and the outputs of the neural network on the chip are returned to the computer; the synaptic weights are then recalculated and updated on the computer based on the chip outputs. This procedure is repeated until the expected performance of the neural network on the chip is achieved.
For training the neural network here, the off-chip method is applied: the whole training procedure is performed on the host computer using the training data set. As mentioned, chip performance degradation is the drawback of off-chip training. To overcome this problem, a training method based on the modified chip-in-the-loop training of [3] and HSPICE-MATLAB co-simulation [19] is used. Modified chip-in-the-loop training has three main steps: (1) the neural network is trained on the computer and the synaptic weights are downloaded to the chip, (2) the outputs of all neurons are captured from the chip and stored in computer memory, and (3) each layer of the neural network is trained separately in chip-in-the-loop fashion (using the host computer and the chip). The algorithm needs the outputs of all neurons of the chip, which is its major drawback and requires special consideration in chip design. This algorithm is used in this work, but HSPICE® simulation results are used instead of chip outputs; HSPICE-MATLAB co-simulation thus carries out the training of the neural network.
In the first epoch, the neural network is trained in MATLAB® using the back propagation algorithm. The synaptic weight values are then mapped to the [-1, 1] domain so that they can be applied to the memristor bridge synapse circuits. HSPICE® simulation is performed, and the outputs of all neurons of the neural network are returned to MATLAB®. After this step, in chip-in-the-loop fashion, the back propagation algorithm is applied to each layer of the neural network separately; the calculated synaptic weights are mapped to [-1, 1] and applied to the memristor bridge synapses. This procedure is repeated in each epoch until the expected chip performance is achieved. Figure 15 shows the steps of the training method. Since all training procedures are performed on the computer, this can be considered an off-chip training method. Using this method, the proposed neural network is trained in thirty-eight epochs.
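The loop of Fig. 15 can be sketched in code. Everything below is a software stand-in: hspice_forward replaces the HSPICE® circuit simulation with a clipped tanh network, the per-layer retraining step is collapsed into ordinary backprop, and the data are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(1.0, 5.0, (500, 4))              # placeholder training data
Y = np.eye(3)[rng.integers(0, 3, 500)]           # placeholder one-hot targets
W1, W2 = rng.normal(0, 0.5, (4, 5)), rng.normal(0, 0.5, (5, 3))

def backprop_epoch(W1, W2, lr=0.1):
    """One software training epoch on the 4-5-3 network (tanh units)."""
    H = np.tanh(X @ W1)
    O = np.tanh(H @ W2)
    dO = (O - Y) * (1 - O**2)
    dH = (dO @ W2.T) * (1 - H**2)
    W2 -= lr * (H.T @ dO) / len(X)               # in-place weight updates
    W1 -= lr * (X.T @ dH) / len(X)

def to_bridge(W):
    """Map weights into the programmable [-1, 1] range of Eq. (12)."""
    return np.clip(W / np.max(np.abs(W)), -1.0, 1.0)

def hspice_forward(P1, P2):
    """Stand-in for the HSPICE simulation of the bridge-synapse network."""
    return np.tanh(np.tanh(X @ P1) @ P2)

for epoch in range(38):                          # 38 epochs, as in the paper
    backprop_epoch(W1, W2)
    P1, P2 = to_bridge(W1), to_bridge(W2)        # "program" the synapses
    outputs = hspice_forward(P1, P2)             # read circuit outputs back
print(f"stand-in MSE: {np.mean((outputs - Y) ** 2):.4f}")
```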
After training is complete, the produced weights are applied to the proposed neural network modeled in HSPICE®, using the test data set. To analyze the neural network performance, the outputs extracted from the HSPICE® simulation are fed back to MATLAB® and compared with the expected outputs in the data set. The mean square error (MSE) of the simulation is 0.0695, and the recognition rate is 86.7 %. A summary of the proposed neural network and its simulation results is given in Table 4.
Fig. 14 ANN architecture for classification (inputs X1-X4, hidden neurons N1-N5, and output neurons N6-N8 for the right, balanced and left classes)
Fig. 15 Training flowchart of the HSPICE-MATLAB co-simulation: training with the back propagation algorithm in MATLAB; mapping the weights to [-1, 1], applying them to the memristor bridge synapses and simulating in HSPICE; returning the simulation results to MATLAB; running the back propagation algorithm in MATLAB for each layer separately; repeating until the expected performance is reached, at which point training is complete
5 Conclusion
In this paper, a modular neuron is designed using a memristor-based synapse and applied in a real neural network application. The proposed neural cell includes three main units: a memristor bridge synapse, a voltage to current convertor and a nonlinear load. The memristor bridge synapse enables the neuron to produce negative, zero and positive weights; the voltage to current convertor changes the weighted voltage signals to currents, which are then summed and passed through the nonlinear load acting as an activation function. The off-chip learning method is used to test the applicability of the proposed neuron in a real neural network. The result shows an 86.7 % recognition rate and a 0.0695 mean square error.
References
1. Misra J, Saha I (2010) Artificial neural networks in hardware: a survey of two decades of progress. Neurocomputing 74:239–255
2. Eberhardt S, Duong T, Thakoor A (1989) Design of parallel hardware neural network systems from custom analog VLSI 'building block' chips. In: International joint conference on neural networks (IJCNN 1989)
3. Adhikari SP, Yang C, Kim H, Chua LO (2012) Memristor bridge synapse-based neural network and its learning. IEEE Trans Neural Netw Learn Syst 23(9):1426–1435
4. Rahimi Azghadi M, Iannella N, Al-Sarawi SF, Indiveri G, Abbott G (2014) Spike-based synaptic plasticity in silicon: design, implementation, application, and challenges. Proc IEEE 102(5):717–737
5. Chua LO (1971) Memristor - the missing circuit element. IEEE Trans Circuit Theory 18(5):507–519
6. Strukov DB, Snider GS, Stewart DR, Williams RS (2008) The missing memristor found. Nature 453:80–83
7. Jo SH, Chang T, Ebong I, Bhadviya BB, Mazumder P, Lu W (2010) Nanoscale memristor device as synapse in neuromorphic systems. Nano Lett 10:1297–1301
8. Lehtonen E, Laiho M (2010) CNN using memristors for neighborhood connections. In: 12th international workshop on cellular nanoscale networks and their applications (CNNA 2010), pp 1–4
9. Laiho M, Lehtonen E (2010) Cellular nanoscale network cell with memristors for local implication logic and synapses. In: Proceedings of the 2010 IEEE international symposium on circuits and systems (ISCAS), pp 2051–2054
10. Kim H, Sah MP, Yang C, Roska T, Chua LO (2012) Memristor bridge synapses. Proc IEEE 100(6):2061–2070
11. Satyanarayana S, Tsividis Y, Graf HP (1989) Analogue neural networks with distributed neurons. Electron Lett 25(5):302
12. Satyanarayana S, Tsividis YP, Graf HP (1989) A reconfigurable VLSI neural network. In: Proceedings of the international conference on wafer scale integration, vol 27, pp 67–81
13. Biolek Z, Biolek D, Biolková V (2009) SPICE model of memristor with nonlinear dopant drift. Radioengineering 18:210–214
14. Joglekar YN, Wolf SJ (2008) The elusive memristor: signatures in basic electrical circuits. Physics 30(4):1–22
15. Djahanshahi H, Ahmadi M, Jullien GA, Miller WC (1996) A unified synapse-neuron building block for hybrid VLSI neural networks. In: 1996 IEEE international symposium on circuits and systems (ISCAS 96), vol 3, pp 483–486
16. Khodabandehloo G, Mirhassani M, Ahmadi M (2012) Analog implementation of a novel resistive-type sigmoidal neuron. IEEE Trans Very Large Scale Integr (VLSI) Syst 20(4):750–754
17. Blake CL, Merz CJ (1998) UCI machine learning repository. University of California, Irvine, School of Information and Computer Sciences. http://archive.ics.uci.edu/ml
18. Draghici S (2000) Neural networks in analog hardware - design and implementation issues. Int J Neural Syst 10:19–42
19. Aggarwal A, Hamilton B (2012) Training artificial neural networks with memristive synapses: HSPICE-MATLAB co-simulation. In: 11th symposium on neural network applications in electrical engineering, pp 101–106
20. Linn E, Siemon A, Waser R, Menzel S (2014) Applicability of well-established memristive models for simulations of resistive switching devices. IEEE Trans Circuits Syst I 20(4):750–754
21. Pickett MD, Strukov DB, Borghetti JL, Yang JJ, Snider GS, Stewart DR, Williams RS (2009) Switching dynamics in titanium dioxide memristive devices. J Appl Phys 106(7):074508
22. Kvatinsky S, Friedman EG, Kolodny A, Weiser UC (2013) TEAM: threshold adaptive memristor model. IEEE Trans Circuits Syst I Regul Pap 60(1):211–221
23. Kvatinsky S, Talisveyberg K, Fliter D, Friedman EG, Kolodny A, Weiser UC (2012) Models of memristors for SPICE simulations. In: Proceedings of the IEEE convention of electrical and electronics engineers in Israel, pp 1–5
Table 4 Simulation results of the trained ANN in HSPICE®

Number of modular neurons: 45
Mean square error: 0.0695
Correct classification: 86.7 %
Number of training epochs: 38