A fast training method for memristor crossbar based multi-layer
neural networks
Raqibul Hasan¹ · Tarek M. Taha¹ · Chris Yakopcic¹
Received: 26 January 2017 / Accepted: 25 September 2017 / Published online: 5 October 2017
© Springer Science+Business Media, LLC 2017
Abstract Memristor crossbar arrays carry out multiply–add operations, the dominant operations in neural network applications, in parallel in the analog domain. On-chip training of memristor neural network systems has the significant advantage of being able to work around device variability and faults. This paper presents a novel technique for on-chip training of multi-layer neural networks implemented using a single crossbar per layer and two memristors per synapse. Using two memristors per synapse provides double the synaptic weight precision compared to a design that uses only one memristor per synapse. The proposed system utilizes a novel variant of the back-propagation (BP) algorithm to reduce both circuit area and training time. During training, all the memristors in a crossbar are updated in parallel in four steps. We evaluated the training of the proposed system on several nonlinearly separable datasets through detailed SPICE simulations that take crossbar wire resistance and sneak paths into consideration. The proposed training algorithm learned the nonlinearly separable functions with only a slight loss in accuracy compared to training with the traditional BP algorithm.
Keywords Neural networks · Memristor crossbars · Training algorithm · On-chip training
1 Introduction
Reliability and power consumption are among the main obstacles to continued performance improvement in future computing systems. Embedded neural-network-based processing systems have significant advantages to offer, such as the ability to solve complex problems while consuming very little power and area [1, 2]. Memristors [3, 4] have received significant attention as a potential building block for neuromorphic systems [5, 6]. In these systems, memristors are used in a crossbar structure. Memristor devices in a crossbar can evaluate many multiply–add operations, the dominant operations in neural networks, in parallel in the analog domain very efficiently. This enables highly dense neuromorphic systems with great computational efficiency [1].
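To make the crossbar computation concrete, the following minimal sketch models an ideal crossbar in which each column current is the dot product of the row input voltages and that column's conductances. This is an illustration only, with assumed names and values; wire resistance and sneak paths, which this paper's SPICE simulations do model, are deliberately ignored here.

```python
import numpy as np

# Ideal crossbar sketch: by Kirchhoff's current law, each column current is
# the sum of (row voltage x device conductance) down that column, i.e. a
# dot product. All values below are illustrative, not from the paper.

def crossbar_outputs(voltages, conductances):
    """voltages: (rows,) input vector in volts.
    conductances: (rows, cols) memristor conductance matrix in siemens.
    Returns the (cols,) vector of output currents in amperes."""
    return voltages @ conductances

rng = np.random.default_rng(0)
V = rng.uniform(-0.5, 0.5, size=8)        # row input voltages
G = rng.uniform(1e-6, 1e-4, size=(8, 4))  # assumed device conductance range
print(crossbar_outputs(V, G))             # 4 column currents, all "in parallel"
```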
An efficient training system is necessary for memristor-based neural network systems. The two approaches to training are off-chip training and on-chip training. The key benefit of off-chip training is that any training algorithm can be implemented in software and run on powerful computer clusters. However, memristor crossbars are difficult to model accurately in software due to sneak paths and device variations [7, 8]. On-chip training has the advantage that it can account for variations between devices and can use the full analog range of each device (as opposed to the set of discrete resistances that off-chip training typically targets).
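As a rough illustration of this discrete-resistance constraint, the sketch below snaps a software-trained weight matrix to the nearest of a small set of programmable levels; the level count and value ranges are assumptions made purely for illustration.

```python
import numpy as np

# Off-chip training illustration: weights learned in software must be mapped
# to a finite set of programmable conductance levels, introducing
# quantization error. Level count and ranges are assumed, not from the paper.

def quantize_to_levels(weights, levels):
    """Map each weight to the nearest allowed level (discrete resistances)."""
    idx = np.abs(weights[..., None] - levels).argmin(axis=-1)
    return levels[idx]

levels = np.linspace(-1.0, 1.0, 8)  # e.g., 8 programmable weight levels
W = np.random.default_rng(1).normal(0.0, 0.4, size=(4, 3))
W_q = quantize_to_levels(W, levels)
print(np.abs(W - W_q).max())        # worst-case quantization error
```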
This paper presents circuits for on-chip training of memristor crossbars that utilize two memristors per synapse. The use of two memristors per synapse has significant advantages over using a single memristor per synapse, and most recent memristor crossbar fabrications for neuromorphic computing have used two memristors per synapse [9, 10]. Using two memristors per synapse provides double the synaptic weight precision compared to a design that uses a single memristor per synapse.
Correspondence to Raqibul Hasan (hasanm1@udayton.edu). Co-authors: Tarek M. Taha (tarek.taha@udayton.edu), Chris Yakopcic (cyakopcic1@udayton.edu).

¹ Department of Electrical and Computer Engineering, University of Dayton, Dayton, OH, USA
Analog Integr Circ Sig Process (2017) 93:443–454. DOI 10.1007/s10470-017-1051-y