This is the author’s copy. Personal use of this material is permitted. Permission from the publisher must be obtained for all other uses.
This paper was published in 2018 by Springer. doi:
15th International Conference Intelligent Autonomous Systems (IAS-15).
Triggering robot hand reflexes with human
EMG data using spiking neurons
J. Camilo Vasquez Tieck1*, Sandro Weber2, Terrence C. Stewart3, Arne Roennau1, and Rüdiger Dillmann1
1FZI Research Center for Information Technology, Karlsruhe 76131, Germany
2TUM Technical University of Munich, 80333 München, Germany
3Centre for Theoretical Neuroscience, University of Waterloo, Canada N2L 3G1
Abstract. The interaction of humans and robots (HRI) is of great relevance for the field of neurorobotics, as it can provide insights into motor control and sensor processing mechanisms in humans that can be applied to robotics. We propose a spiking neural network (SNN) to trigger motion reflexes on a robotic hand based on human EMG data. The first part of the network takes EMG signals measuring muscle activity and classifies the data to detect which finger is active in the human hand. The second part triggers single finger reflexes using the classification output. The finger reflexes are modeled with motion primitives activated with an oscillator and mapped to the robot kinematics. We evaluated the SNN by having users wear a non-invasive EMG sensor, record a training dataset, and then flex different fingers, one at a time. The muscle activity was recorded using a Myo sensor with eight channels. EMG signals were successfully encoded into spikes as input for the SNN. The classification could detect the active finger to trigger motion generation of finger reflexes. The SNN was able to control a real Schunk SVH robotic hand. Being able to map myo-electric activity to functions of motor control for a task can provide an interesting interface for robotic applications and also a way to study brain functioning. SNNs provide a challenging but interesting framework to interact with human data. In future work the approach will be extended to also control a robot arm at the same time.
Keywords: Human-Robot-Interaction, humanoid robots, neurorobotics, motion representation, EMG classification, spiking neural networks, anthropomorphic robot hand.
1 Introduction
The interaction of humans and robots (HRI) is of great relevance for the field of
neurorobotics as it can provide insights on motor control and sensor processing
mechanisms in humans that can be applied in robots.
Electromyography (EMG) is a common tool in medicine and biomechanics.
It is used to monitor and study the electrical activity of the muscles. There are
* Correspondence: J. C. V. Tieck, tieck@fzi.de
different methods to record EMG signals, and they can be either invasive or non-invasive. Research is being done on processing and classification of EMG signals for clinical diagnoses [6] or prosthetic applications [14].
Nevertheless, there are different hypotheses explaining how the human motor system works. A widely accepted theory states that the central nervous system uses different base motor components in a hierarchy [4] to generate the full repertoire of motions that we can perform [3].
These base components are formed by specific combinations of muscle synergies [7] that are active during a motion, and they are commonly called motor primitives [5]. An approach using these concepts to control a robotic hand was
proposed in [22]. Motor primitives can be activated in different ways, for exam-
ple as a reflex. A reflex is an involuntary response to sensor stimulation and can
be either a complete execution or inhibition of a motion. An overview of differ-
ent spinal reflexes is provided by [15].
In this work, we propose a system to control a robot hand with muscle
signals from a human recorded with a non-invasive EMG sensor. We focus on the
activation of single finger motions in response to sensor stimuli. A spiking neural
network (SNN) [16][12] was implemented to classify EMG data and then trigger
the generation of motion as a reflex. An EMG sensor with eight channels was used
to record human muscle activity while moving different fingers. First, EMG data was encoded into spikes and the signals were classified to identify the active finger. After that, the activation signal was used to trigger an oscillator to generate motion using a motor primitive. Then, the primitive was mapped to the robot kinematics. Finally, the spikes were decoded into motor commands for the robot.
One highly novel aspect of this paper is the fact that the classification and the generation of the motor primitive are implemented using a spiking neural network. There are two reasons for doing this. First, the real biological system must do something similar to this using spiking neurons. Certainly, in real biology, the classification would not be based on an EMG sensor, but would rather be based on neural activity somewhere in the brain; still, the classification and generation of movement over time would need to occur. This means that we can see our
system as an initial model of that biological process. Second, there is a prag-
matic/engineering reason to implement this system using spiking neurons. Pros-
thetic applications benefit from low-power hardware implementations, and there
is a variety of neuron-inspired low-power computing hardware being developed.
For example, the SpiNNaker system [11], Intel’s new Loihi chip [8] and IBM’s
TrueNorth chip [18] all provide extremely energy-efficient spiking neuron hard-
ware. If algorithms can be mapped onto this hardware, then they may be able
to be deployed using significantly less power than traditional implementations.
2 Methods
The main motivation of this work is to control a robotic hand with human muscle
signals using a spiking neural network (SNN). For this purpose we define specific
characteristics for the components. The EMG sensor has to be non-invasive,
the finger motions are represented with motor primitives triggered at once to
resemble reflexes, and the robot hand has to be controllable with a ROS interface.
We divide the problem into two parts that translate to different parts of the same SNN. The first part takes care of the EMG data interface and classification, and the second part takes care of the motion generation and robot control. Human
EMG data is captured in a non-invasive way. The first part of the SNN classifies
the EMG signals to detect which finger was active. An activation signal is then
passed on to the second part of the SNN to trigger single finger reflexes on the
real robot. A finger reflex is modeled with an oscillator that activates a motor
primitive that is mapped to motor commands according to the robot kinematics.
In Fig. 1 we present an overview of the main components, showing how they interact with each other.
Fig. 1: Concept architecture with main components. Human muscle activity is
recorded with a non-invasive EMG sensor. The EMG data is encoded to spikes. The
first part of the SNN performs a classification to detect which finger was active. The
second part of the SNN generates motion and maps it to motor commands considering
the robot kinematics.
To generate the spiking neuron models, we used the Neural Engineering
Framework [10] and the software package Nengo [1]. This software allows for the
creation of large-scale spiking neural networks by breaking the networks down
into smaller parts. The connection weights for each sub-part are optimized sepa-
rately, and then they are combined together into one large neural network. Per-
forming this optimization (i.e. finding connection weights) locally means we can
generate large systems without using the traditional neural network approach
of optimizing over huge amounts of training data. However, the trade-off is that
we must make explicit claims about what each sub-part of the model is doing.
In particular, in order to define a spiking neuron model using Nengo and the
NEF, we must break our algorithm down into vectors, functions, and differential
equations. The activity of each group of spiking neurons is considered to be a
distributed representation of a vector (i.e. we may have 100 spiking neurons rep-
resenting a 2-dimensional vector). Connections between groups of neurons com-
pute functions on those vectors. That is, the connection weights ensure that if the
first group of neurons represents x, then the second group of neurons will repre-
sent y=f(x). By changing the connection weights, we change the function being
computed. Finally, recurrent connections can be used to implement differential
equations. We make use of this here to implement basic movement primitives.
2.1 Human EMG data interface and training data
To record EMG data a single Myo [21] gesture control armband is used. It is made up of eight equally spaced blocks with non-invasive EMG sensors that provide a sampling rate of 200 Hz. The armband is worn around the middle of the forearm as shown in Fig. 2. In order to record consistent data with the sensor, the segment with the LED light has to be placed at approximately the same position. Slight variations after taking the Myo off and putting it back on did not influence the recordings enough to make the trained network unusable. Retrieving the raw EMG signals is done with the help of a Python API provided by [9]. Each channel encodes the individual measurement as an int8 value.
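The paper does not detail the preprocessing of these values; one plausible sketch (our assumption, not the authors' code) simply normalizes each int8 channel to [-1, 1] before it reaches the encoding population:

```python
def normalize_emg(sample):
    """Map one reading of all eight raw int8 channels (-128..127)
    to floats in [-1, 1]. Assumed preprocessing, not taken from
    the paper; the clamp handles the asymmetric int8 range."""
    return [max(-1.0, min(1.0, v / 127.0)) for v in sample]

normalize_emg([0, 127, -128, 64, -64, 10, -10, 1])
```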
Fig. 2: The Myo armband sensor placed on the arm and the different EMG channels.
When a finger is moved the muscle electric activity is recorded with eight different
sensors. The sensor has an indicator so that it can be placed always in a similar way.
For each user a training dataset with multiple samples is required. A sample consists of a continuous sequence of finger activations in one hand. Each finger has to be flexed down for a short period of time and then extended again. This procedure is repeated starting from the thumb to the pinky. The training data has to be recorded as a time-continuous EMG stream of all eight channels with appropriate binary labels for the time windows during which a finger was pressed.
A sample dataset is provided in Fig. 3.
Notice that individual channels of the EMG sensor have similar activation
for different fingers, and thus are not enough to identify the motion of a finger.
Therefore, the classification network uses a combination of all eight channels,
which provides a unique representation for each finger.
2.2 Sub network for EMG classification
After recording training data, the first part of the SNN is trained for classification. The detailed architecture for EMG classification is presented in Fig. 4. The raw EMG data from a user wearing the EMG sensor is fed to the SNN. A population of neurons encodes the signal stream of EMG input to spikes using
Fig. 3: A sample dataset for training with a run of all 5 fingers. From left to right the
peaks show EMG activation of the fingers starting with the thumb to the pinky. Each
finger is flexed and then extended.
stochastic population encoding. Then, a second population of neurons is trained
offline with a whole training dataset for a user as described above.
The learning rule for offline training of the classification population is Prescribed Error Sensitivity (PES), using the labels from the training data as error signals E. PES is implemented in [2], and was first presented in [17]. The weight updates that PES makes to minimize an error signal during learning can be related to skipped back-propagation. For the weights wij from the pre-synaptic population i to the post-synaptic population j, the update rule is defined as
∆wij = κ αj ej · E ai ,   (1)

with κ a scalar learning rate, αj the gain or scaling factor, ej the encoder for the neuron, E the error to minimize, and ai the activity of the pre-synaptic neuron i.
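A scalar sketch of this update rule in plain Python (Nengo's PES implementation operates on full weight matrices every timestep; the sign convention of E below, target minus actual, is our assumption):

```python
def pes_update(w, kappa, alpha_j, e_j, error, a_i):
    """One PES step for a single weight w_ij (Eq. 1):
    delta_w = kappa * alpha_j * (e_j . E) * a_i.
    With one-dimensional represented values the encoder/error
    dot product reduces to a plain product."""
    return w + kappa * alpha_j * e_j * error * a_i

# toy usage: learn w so that w * a_i approximates a target output
w, a_i, target = 0.0, 1.0, 0.5
for _ in range(100):
    error = target - w * a_i  # E as target minus actual (assumed)
    w = pes_update(w, kappa=0.1, alpha_j=1.0, e_j=1.0, error=error, a_i=a_i)
```

With this sign convention the weight converges toward the value that drives the error to zero.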
After classification by the second population, the overall signals are low in amplitude and consequently rather close to each other. Therefore, a population is used to refine the classification by amplifying the signals and generating the hand activation signal. An arbitrarily defined function that scales up activations above a manually set threshold is used to train this population. The resulting activation signal is passed on to the motion generation part. Examples of the classification of all the fingers with the complete procedure are provided in Fig. 11 in the results section. All populations are connected all-to-all.
2.3 Sub network for motion representation for reflexes
A population takes the hand activation signal from the previous classification to trigger the appropriate finger reflex. We model a reflex as the execution of a motor primitive based on a specific stimulus. Accordingly, the motion part of our SNN is divided into reflex activation and motor primitive layers. The whole
architecture for the representation of reflexes is presented in Fig. 5.
The reflex activation for each finger is modeled as an oscillator

h(ω) = a · sin(b ω π / 2) ,   (2)
Fig. 4: Detailed architecture for EMG classification. Each circle represents a popula-
tion of spiking neurons. Raw EMG data is recorded from the user and is encoded into
spikes. The encoded EMG signals are classified to determine the finger that was acti-
vated. The classification signal is refined and amplified to have a clear hand activation
signal that is passed to the motion generation part.
Fig. 5: Detailed architecture of the motion representation for reflexes. Each circle represents a population of spiking neurons. The hand activation signal is processed by the hand abstraction population to extract individual finger activations. Reflexes are modeled as oscillators that oscillate only once. The activation is decoded into two components as u and then mapped to a motor primitive with the robot kinematic parameters with g(f(u)). Finally, the neural activity is decoded from spikes and sent as motor commands to the respective robot finger.
with ω a recurrent connection and a and b the parameters for the amplitude and frequency. The oscillator generates a continuous signal for a finite period that represents the duration of a motion with a start and an end point. By indexing the neurons in the oscillator population, the activity can be mapped to a 2D plane. The total neural activity of the population in the plane can be represented with the components x and y. We calculate a continuous and normalized signal u ∈ [−1, 1] as:
u = sin(arctan(y/x)) ,   (3)

where arctan(y/x) represents the angle of a vector with components x and y. To bound and smooth the signal, sin() is applied.
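This decoding can be sketched in plain Python; we use atan2 instead of arctan(y/x) (an implementation choice of ours) so that x = 0 causes no division by zero and the quadrant is preserved:

```python
import math

def decode_u(x, y):
    """Eq. 3: u = sin(arctan(y/x)), the bounded, normalized signal
    decoded from the oscillator activity in the XY plane.
    With atan2 this is equivalent to y / sqrt(x**2 + y**2)."""
    return math.sin(math.atan2(y, x))
```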
The motor primitive is modeled as a mapping of u to a sequence of joint activations during the period of oscillation, and it can be mapped to one or multiple joints. In Fig. 5 the distal and proximal joints of the index finger are mapped. A mapping

f(u) = sin(u · π − π/2) / 2 + 1/2 ,   (4)

is defined with a sinusoidal function to have smooth initial and final phases.
This characteristic is important when executing the motion in a real robot. The
resulting generic primitive for one joint is depicted in Fig. 6a.
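A plain-Python sketch of this primitive, which maps the phase to a joint activation in [0, 1] with zero slope at both ends:

```python
import math

def primitive(u):
    """Eq. 4: f(u) = sin(u*pi - pi/2) / 2 + 1/2.
    f(0) = 0, f(1) = 1, and the derivative vanishes at both
    endpoints, giving the smooth start and stop phases."""
    return math.sin(u * math.pi - math.pi / 2.0) / 2.0 + 0.5
```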
Fig. 6: (a) Mapping of u to the motor primitive with f(u). (b) Complete mapping of u to the robot kinematics with g(f(u)).
In general terms, there is no difference between voluntary and reflex motions in the muscular activities (synergies); the difference is in the activation. For voluntary motions it is discrete, and for reflexes it is a complete one-time execution.
2.4 Mapping to robot motor commands
Finally, in order to actually control the robot, the primitives have to be mapped to the robot kinematics. This means scaling to the motion interval (θmax − θmin) and the offset θmin of the joint θ. For this purpose we define g : [0, 1] → Rn as a function for each joint:

g(f(u)) = f(u) · (θmax − θmin) + θmin ,   (5)

to generate appropriate motor commands.
A schema for the mapping gfor a robotic hand is illustrated in Fig. 6b and is
defined in the table in Fig. 7. This parametric representation of motions allows
us to further combine and change parameters of the motor primitives.
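Combining Eqs. 4 and 5, the per-joint command is an affine rescaling of the primitive; a plain-Python sketch using the joint1 interval [0, 0.7] from Fig. 7 as an example:

```python
import math

def primitive(u):
    # Eq. 4: smooth 0 -> 1 activation profile over the phase u
    return math.sin(u * math.pi - math.pi / 2.0) / 2.0 + 0.5

def joint_command(u, theta_min, theta_max):
    """Eq. 5: g(f(u)) = f(u) * (theta_max - theta_min) + theta_min."""
    return primitive(u) * (theta_max - theta_min) + theta_min

cmd = joint_command(1.0, 0.0, 0.7)  # end of the motion for joint1
```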
Joint name  Primitive  θmin  θmax
joint1      0          0     0.7
joint2      1          0     0.4
joint3      1          0     0.8

Fig. 7: Table for the joint mapping schema. A joint is defined with a name, the associated primitive, and the interval of the joint [θmin, θmax]. A primitive can be mapped to one or more joints.
2.5 Integration of all components
A detailed architecture of the full SNN is presented in Fig. 8. Notice that the
primitives for the thumb, ring and pinky are mapped to one actuated joint,
whereas the index and middle finger primitives are mapped to two actuated
joints.
Fig. 8: Detailed architecture with EMG and motion sub networks. Each circle repre-
sents a population of spiking neurons. The dotted lines divide the conceptual compo-
nents, which are named on the bottom part according to Fig. 1.
3 Results
The experiment setup is depicted in Fig. 9 and consists of a human user, an EMG sensor, a robot hand, and the simulation of the SNN. A user wears the EMG sensor (Myo armband) on the forearm, and the signals are sent via Bluetooth to the computer. The computer receives the EMG data and inputs it to the SNN simulation running in Nengo [1]. The computer communicates at the same time with the robot hand (Schunk SVH) via ROS [19]. To control the robot hand the official Schunk ROS driver is used [13].
3.1 SNN implementation
The SNN was implemented in Python with the Nengo simulator using leaky integrate-and-fire (LIF) neurons. To give an overview of the implementation, Fig. 10 presents a view of the whole network running. The structure can be easily mapped to that in Fig. 8. The eight channels of human EMG signals are encoded by a population as stochastic spike rates based on their values. After performing offline training with different datasets from the same user, the EMG classification takes place. The classification signal is then passed on to trigger motion generation. The reflexes are implemented as oscillators that activate motor primitives. The motor primitives are mapped to the robot kinematics as
Fig. 9: Experiment setup. (left) The user wears the EMG sensor (Myo armband) on the forearm. (right) The SNN is simulated in Nengo on a computer connected via Bluetooth to the EMG sensor and via ROS to the robot hand. (middle) The robot hand (Schunk SVH) is connected to the computer via ROS.
defined in the methods section. In the following sections the relevant details of each component are described.
Fig. 10: Full SNN pipeline in Nengo. There are four main components: human EMG data capture and manipulation, EMG classification, motion generation, and finally the mapping to the robot. The structure can be easily mapped to that in Fig. 8.
3.2 Training data
For each user a set of training data is required to train the SNN offline as described in section 2.2. Training data for the classification network was recorded in one session lasting 60 seconds. During that time individual fingers were periodically pressed against the palm of the hand and subsequently returned to a
resting pose. The finger presses occurred in sequence from thumb to pinky, with each press lasting between 300 ms and 500 ms. Together with the resting time one cycle took around 7.5 s, and a total of 8 cycles were performed. A sample run of the training data with all 5 fingers is presented in Fig. 3. In order to label the EMG data, with every finger press a keyboard button indicating the respective finger was pressed simultaneously. The data is labeled in time for each finger, and all eight EMG channels are active.
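The labeling step can be sketched as aligning keypress intervals with the 200 Hz EMG timeline (plain Python; the data layout is our assumption, not the authors' recording code):

```python
def make_labels(n_samples, rate_hz, presses):
    """Binary per-finger labels for a recording of n_samples taken
    at rate_hz. `presses` maps a finger name to a list of
    (start_s, end_s) keypress intervals in seconds."""
    labels = {finger: [0] * n_samples for finger in presses}
    for finger, intervals in presses.items():
        for start_s, end_s in intervals:
            lo = int(round(start_s * rate_hz))
            hi = min(n_samples, int(round(end_s * rate_hz)))
            for i in range(lo, hi):
                labels[finger][i] = 1
    return labels

# e.g. a thumb press from 1.0 s to 1.4 s in a 10 s recording
labels = make_labels(2000, 200, {"thumb": [(1.0, 1.4)]})
```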
3.3 Processing of EMG data and classification
The first group of 800 neurons (EMG input in Fig. 4) was activated with the raw EMG data as int8 values to convert it to spikes. The second group of 500 neurons (Classification in Fig. 4) was trained with the prerecorded training data at the start to give responses for the classified fingers. Then, a third group of 500 neurons (Classification refined in Fig. 4) was used to separate and amplify the signals further. A final group of 500 neurons (Hand activation signal in Fig. 4) was trained to give out one single signal for a specific finger and was connected to 5 groups representing the different fingers of the robot hand.
In Fig. 11 we present samples of the SNN classifying the activation of the different fingers. As can be seen, the eight channels of EMG have different data for each finger. The signals are processed with the SNN, and the activation of the different populations can also be observed. The output of the classification is a dominant activation of one of the populations representing each finger.
3.4 Motion generation
The resulting motion of the robot is presented as a frame sequence in Fig. 13. The data presented corresponds to a reflex motion of the index finger. The corresponding activity of the SNN is presented in Fig. 12 with sufficient information to illustrate the functioning of the SNN. The signal that triggers motion generation comes from the classification part. An oscillator is activated for each finger. Observe the circular activation of the oscillator population when decoded in the XY plane in Fig. 12. From this circular activation u is decoded and mapped to one or more joints in the robot hand. Observe that the mapping in Fig. 12 is performed to two joints of the index finger. Finally, the neural activity is decoded and sent over ROS to the robot hand.
In order to evaluate the accuracy of the classification we selected one random user, who was asked to perform a sample of 50 trials with each finger. The EMG data was fed to the trained network and we counted the classification output for each trial. The results are summarized in Fig. 14. For this user, only the pinky had an accuracy of 1.0, which means that all the trials were classified correctly. For all other fingers there were false or missing classifications.
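The accuracy column of Fig. 14 is simply each row's count of correct detections divided by the number of trials; reproducing it as a check:

```python
def per_finger_accuracy(row, finger_index, trials=50):
    """row holds the counts [thumb, index, middle, ring, pinky,
    none] for one moved finger, as in Fig. 14; accuracy is the
    count in the finger's own column over the number of trials."""
    return row[finger_index] / trials

rows = {
    "thumb":  [46, 0, 0, 1, 0, 3],
    "index":  [0, 43, 1, 0, 2, 4],
    "middle": [0, 3, 40, 0, 0, 7],
    "ring":   [0, 1, 2, 44, 0, 3],
    "pinky":  [0, 0, 0, 0, 50, 0],
}
per_finger_accuracy(rows["thumb"], 0)  # 0.92
```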
3.5 Interface to the robot hand
We describe here the mapping schema that was used with the robot hand
(Schunk SVH). The table in Fig. 15 summarizes the data. The ”Joint name”
Fig. 11: EMG activation and classification for different fingers. The first graph from the left shows the EMG signals encoded into spikes. The second plot shows the classification output. The other plots show the refined classification signal and the hand activation signal, respectively, with one finger active. From top to bottom, thumb, index, middle, ring and pinky are shown. Notice the different activations in the eight channels.
Fig. 12: Motion generation from the SNN for a reflex of the index finger. The frames A to E correspond to the fingers from the thumb to the pinky, respectively. From left to right the plots represent: (left) the finger activation signal coming from the EMG classification; (middle left, group of three plots) the activity in the oscillator with a spike train plot, the decoded activity of the population in the XY plane, and a raster plot color coded by the neurons' activity; (middle right, group of three plots) the decoding to u and the mapping g(f(u)) to the robot kinematics; note that the mapping goes to two different joints; (right) a continuous plot of the current motor commands being sent to the robot.
Fig. 13: Frame sequence of the index finger motion generated by the SNN activity.
The motion presented corresponds to the activation shown in Fig. 12. After receiving
the classification signal the reflex is activated and fully executed once.
Finger moved  Trials  Thumb  Index  Middle  Ring  Pinky  None  Accuracy
Thumb         50      46     0      0       1     0      3     0.92
Index         50      0      43     1       0     2      4     0.86
Middle        50      0      3      40      0     0      7     0.80
Ring          50      0      1      2       44    0      3     0.88
Pinky         50      0      0      0       0     50     0     1.00
Fig. 14: Table of classification accuracy. An evaluation summary of the classification consisting of 50 trials for each finger by one random user. We counted which finger class was detected and calculated the accuracy ratio for each finger. Notice that only the pinky was always classified correctly. For the other fingers there are either false detections or no classification at all.
column corresponds to the ROS topics described in [13] for the different actuated joints. A different primitive is used for each finger, and the indexing is in the "Primitive" column. Note that the index and middle finger joints are each mapped to the same primitive, respectively. The θmin and θmax values for each joint complete the table.
4 Discussion
We presented an SNN that activates motion reflexes on a robotic hand based on human EMG data. The network classifies the EMG signals to detect finger activation. Based on this, single finger reflexes are triggered. The finger reflexes are modeled with motion primitives and mapped to the robot kinematics.
As can be seen in Fig. 3 and 11, the index finger showed almost no discernible signal in the raw EMG data. For the index finger, the signal is not clear and the output is sometimes ambiguous. As a consequence, the classification step delivers a weak and low activation signal for the index finger that is propagated through the following populations and leads to false classifications of other fingers.
The focus of this work was on single finger movements, so data with multiple fingers was not considered, only movements of single fingers in quick succession. The EMG signals were used only to trigger the execution of the reflexes. In
Joint name Primitive θmin θmax
Thumb Flexion 0 0 0.3
Thumb Opposition - - -
Index Finger Distal 1 0 0.9
Index Finger Proximal 1 0 0.7
Middle Finger Distal 2 0 0.9
Middle Finger Proximal 2 0 0.7
Ring Finger 3 0 0.7
Pinky 4 0 0.7
Finger Spread - - -
Fig. 15: Joint mapping schema. A summary of the corresponding topics as joint name, the primitive used for each finger, and the joint angle intervals for each actuated joint. The active joints of the robot hand "Thumb Opposition" and "Finger Spread" remained constant the whole time.
future work we want to explore a mechanism to use the EMG signal to perform
discrete control of the finger positions.
Using a second EMG sensor could provide additional input from different muscle areas of the arm and improve the classification results. Ideally, the second EMG sensor would be located close to the wrist [20], closer to the fingers and the hand. This is difficult with the Myo armband because its circumference is too big for that location. A bigger training dataset would also be very likely to improve classification.
Acknowledgments. The research leading to these results has received fund-
ing from the European Union Horizon 2020 Programme under grant agreement
n.720270 (Human Brain Project SGA1).
References
1. Bekolay, T., Bergstra, J., Hunsberger, E., DeWolf, T., Stewart, T.C., Rasmussen,
D., Choo, X., Voelker, A., Eliasmith, C.: Nengo: a python tool for building large-
scale functional brain models. Frontiers in neuroinformatics (2014)
2. Bekolay, T., Kolbeck, C., Eliasmith, C.: Simultaneous unsupervised and supervised
learning of cognitive functions in biologically plausible spiking neural networks. In:
Cogsci (2013)
3. Bernstein, N.: The co-ordination and regulation of movements. Pergamon-Press
(1967)
4. Bizzi, E., Cheung, V., d’Avella, A., Saltiel, P., Tresch, M.: Combining modules for
movement. Brain research reviews (2008)
5. Chinellato, E., Pobil, A.: The Visual Neuroscience of Robotic Grasping: Achieving
Sensorimotor Skills through Dorsal-Ventral Stream Integration. Cognitive Systems
Monographs, Springer International Publishing (2016)
6. Chowdhury, R.H., Reaz, M.B., Ali, M.A.B.M., Bakar, A.A., Chellappan, K.,
Chang, T.G.: Surface electromyography signal processing and classification tech-
niques. Sensors (2013)
7. d’Avella, A., Saltiel, P., Bizzi, E.: Combinations of muscle synergies in the con-
struction of a natural motor behavior. Nature neuroscience (2003)
8. Davies, M., Srinivasa, N., Lin, T.H., Chinya, G., Cao, Y., Choday, S.H., Dimou,
G., Joshi, P., Imam, N., Jain, S.: Loihi: A neuromorphic manycore processor with
on-chip learning. IEEE Micro (2018)
9. dzhu: Myo python api. https://github.com/dzhu/myo-raw/ (2018), [Online; ac-
cessed 2018-02-21]
10. Eliasmith, C., Anderson, C.H.: Neural engineering: Computation, representation,
and dynamics in neurobiological systems. MIT Press, Cambridge, MA (2003)
11. Furber, S., Temple, S., Brown, A.: High-performance computing for systems of
spiking neurons. In: AISB’06 Workshop. GC5: Archit. Brain Mind (2006)
12. Grüning, A., Bohte, S.M.: Spiking neural networks: Principles and challenges. In: ESANN (2014)
13. Heppner, G.: schunk svh driver. http://wiki.ros.org/schunk_svh_driver (2018), [Online; accessed 2018-02-21]
14. Johannes, M.S., Bigelow, J.D., Burck, J.M., Harshbarger, S.D., Kozlowski, M.V.,
Van Doren, T.: An overview of the developmental process for the modular pros-
thetic limb. Johns Hopkins APL Technical Digest (2011)
15. Knierim, J.: Spinal reflexes and descending motor pathways. Neuroscience Online
(2016), accessed: 11.02.2018
16. Maass, W.: Networks of spiking neurons: The third generation of neural network
models. Neural Networks (1997)
17. MacNeil, D., Eliasmith, C.: Fine-tuning and the stability of recurrent neural net-
works. PLoS ONE (2011)
18. Merolla, P.A., Arthur, J.V., Alvarez-Icaza, R., Cassidy, A.S., Sawada, J., Akopyan,
F., Jackson, B.L., Imam, N., Guo, C., Nakamura, Y.: A million spiking-neuron
integrated circuit with a scalable communication network and interface. Science
(2014)
19. Quigley, M., Conley, K., Gerkey, B., Faust, J., Foote, T., Leibs, J., Wheeler, R.,
Ng, A.Y.: Ros: an open-source robot operating system. In: ICRA workshop on
open source software. Kobe, Japan (2009)
20. Tenore, F., Ramos, A., Fahmy, A., Acharya, S., Etienne-Cummings, R., Thakor,
N.V.: Towards the control of individual fingers of a prosthetic hand using surface
emg signals. In: EMBC (2007)
21. ThalmicLabs: Myo diagnostics. http://diagnostics.myo.com/ (2018), [Online;
accessed 2018-02-21]
22. Tieck, J.C.V., Donat, H., Kaiser, J., Peric, I., Ulbrich, S., Roennau, A., Zöllner, M., Dillmann, R.: Towards grasping with spiking neural networks for anthropomorphic robot hands. ICANN (2017)