This is the author’s copy. Personal use of this material is permitted. Permission from the publisher must be obtained for all other uses.
This paper was published in 2018 by Springer. doi:
15th International Conference Intelligent Autonomous Systems (IAS-15).
Triggering robot hand reflexes with human
EMG data using spiking neurons
J. Camilo Vasquez Tieck1*, Sandro Weber2, Terrence C. Stewart3, Arne
Roennau1, and Rüdiger Dillmann1
1 FZI Research Center for Information Technology, Karlsruhe 76131, Germany
2 TUM Technical University of Munich, 80333 München, Germany
3 Centre for Theoretical Neuroscience, University of Waterloo, Canada N2L 3G1
Abstract. The interaction of humans and robots (HRI) is of great rel-
evance for the field of neurorobotics as it can provide insights on motor
control and sensor processing mechanisms in humans that can be ap-
plied to robotics. We propose a spiking neural network (SNN) to trigger
motion reflexes on a robotic hand based on human EMG data. The first
part of the network takes EMG signals to measure muscle activity, and then
classifies the data to detect which finger is active in the human hand. The
second part triggers single finger reflexes using the classification output.
The finger reflexes are modeled with motion primitives activated with an
oscillator and mapped to the robot kinematics. We evaluated the SNN by
having users wear a non-invasive EMG sensor, record a training dataset,
and then flex different fingers, one at a time. The muscle activity was
recorded using a Myo sensor with eight channels. EMG signals were suc-
cessfully encoded into spikes as input for the SNN. The classification
could detect the active finger to trigger motion generation of finger re-
flexes. The SNN was able to control a real Schunk SVH robotic hand.
Being able to map myo-electric activity to functions of motor control for
a task can provide an interesting interface for robotic applications, and
also to study brain functioning. SNNs provide a challenging but interest-
ing framework to interact with human data. In future work the approach
will be extended to control a robot arm at the same time.
Keywords: Human-Robot-Interaction, humanoid robots, neurorobotics,
motion representation, EMG classification, spiking neural networks, an-
thropomorphic robot hand.
1 Introduction
The interaction of humans and robots (HRI) is of great relevance for the field of
neurorobotics as it can provide insights on motor control and sensor processing
mechanisms in humans that can be applied in robots.
Electromyography (EMG) is a common tool in medicine and biomechanics.
It is used to monitor and study the electrical activity of the muscles. There are
* Correspondence: J. C. V. Tieck, tieck@fzi.de
different methods to record EMG signals, and they can be either invasive or
non-invasive. Research is being made on processing and classification of EMG
signals for clinical diagnoses [6] or prosthetic applications [14].
Nevertheless, there are different hypotheses explaining how the human
motor system works. A widely accepted theory states that the central nervous
system uses different base motor components in a hierarchy [4] to generate the
full repertoire of motions that we can perform [3].
These base components are formed by specific combinations of muscle
synergies [7] that are active during a motion, and they are commonly called motor
primitives [5]. An approach using these concepts to control a robotic hand was
proposed in [22]. Motor primitives can be activated in different ways, for exam-
ple as a reflex. A reflex is an involuntary response to sensor stimulation and can
be either a complete execution or inhibition of a motion. An overview of differ-
ent spinal reflexes is provided by [15].
In this work, we propose a system to control a robot hand with muscle
signals from a human recorded with a non-invasive EMG sensor. We focus on the
activation of single finger motions in response to sensor stimuli. A spiking neural
network (SNN) [16][12] was implemented to classify EMG data and then trigger
the generation of motion as a reflex. An EMG sensor with eight channels was used
to record human muscle activity while moving different fingers. First, EMG data
was encoded to spikes and the signals were classified to identify the active finger.
After that, the activation signal was used to trigger an oscillator to generate
motion using a motor primitive. Then, the primitive is mapped to the robot
kinematics. Finally, the spikes are decoded to motor commands for the robot.
One highly novel aspect of this paper is the fact that the classification and
the generation of the motor primitive is implemented using a spiking neural net-
work. There are two reasons for doing this. First, the real biological system must
do something similar to this using spiking neurons. Certainly, in real biology, the
classification would not be based on an EMG sensor, but would rather be based on
neural activity somewhere in the brain, but the classification and generation of
movement over time would still need to occur. This means that we can see our
system as an initial model of that biological process. Second, there is a prag-
matic/engineering reason to implement this system using spiking neurons. Pros-
thetic applications benefit from low-power hardware implementations, and there
is a variety of neuron-inspired low-power computing hardware being developed.
For example, the SpiNNaker system [11], Intel’s new Loihi chip [8] and IBM’s
TrueNorth chip [18] all provide extremely energy-efficient spiking neuron hard-
ware. If algorithms can be mapped onto this hardware, then they may be able
to be deployed using significantly less power than traditional implementations.
2 Methods
The main motivation of this work is to control a robotic hand with human muscle
signals using a spiking neural network (SNN). For this purpose we define specific
characteristics for the components. The EMG sensor has to be non-invasive,
the finger motions are represented with motor primitives triggered at once to
resemble reflexes, and the robot hand has to be controllable with a ROS interface.
We divide the problem into two parts that translate to different parts of the same
SNN. The first part takes care of the EMG data interface and classification and
the second part takes care of the motion generation and robot control. Human
EMG data is captured in a non-invasive way. The first part of the SNN classifies
the EMG signals to detect which finger was active. An activation signal is then
passed on to the second part of the SNN to trigger single finger reflexes on the
real robot. A finger reflex is modeled with an oscillator that activates a motor
primitive that is mapped to motor commands according to the robot kinematics.
In Fig. 1 we present an overview of the main components showing how they
interact with each other.
Fig. 1: Concept architecture with main components. Human muscle activity is
recorded with a non-invasive EMG sensor. The EMG data is encoded to spikes. The
first part of the SNN performs a classification to detect which finger was active. The
second part of the SNN generates motion and maps it to motor commands considering
the robot kinematics.
To generate the spiking neuron models, we used the Neural Engineering
Framework [10] and the software package Nengo [1]. This software allows for the
creation of large-scale spiking neural networks by breaking the networks down
into smaller parts. The connection weights for each sub-part are optimized sepa-
rately, and then they are combined together into one large neural network. Per-
forming this optimization (i.e. finding connection weights) locally means we can
generate large systems without using the traditional neural network approach
of optimizing over huge amounts of training data. However, the trade-off is that
we must make explicit claims about what each sub-part of the model is doing.
In particular, in order to define a spiking neuron model using Nengo and the
NEF, we must break our algorithm down into vectors, functions, and differential
equations. The activity of each group of spiking neurons is considered to be a
distributed representation of a vector (i.e. we may have 100 spiking neurons rep-
resenting a 2-dimensional vector). Connections between groups of neurons com-
pute functions on those vectors. That is, the connection weights ensure that if the
first group of neurons represents x, then the second group of neurons will repre-
sent y=f(x). By changing the connection weights, we change the function being
computed. Finally, recurrent connections can be used to implement differential
equations. We make use of this here to implement basic movement primitives.
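Since the rest of the network is built from these three ingredients, a minimal Nengo sketch may help to make them concrete. The population sizes, the example function, and the oscillator parameters below are illustrative assumptions, not values taken from the paper's implementation.

```python
import nengo
import numpy as np

# Minimal sketch of the three NEF building blocks described above
# (population sizes and the example function are illustrative).
model = nengo.Network()
with model:
    # 1) Representation: 100 spiking LIF neurons represent a 2-D vector.
    a = nengo.Ensemble(n_neurons=100, dimensions=2)

    # 2) Transformation: connection weights are solved so that the post
    #    population represents y = f(x) of the pre population.
    b = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(a, b, function=lambda x: x[0] * x[1])

    # 3) Dynamics: a recurrent connection implements a differential
    #    equation, here a simple harmonic oscillator on the 2-D state.
    freq = 2.0 * np.pi   # oscillation frequency in rad/s (illustrative)
    tau = 0.1            # synaptic time constant of the recurrence
    nengo.Connection(a, a, synapse=tau,
                     function=lambda x: [x[0] - tau * freq * x[1],
                                         x[1] + tau * freq * x[0]])

with nengo.Simulator(model) as sim:
    sim.run(1.0)
```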
2.1 Human EMG data interface and training data
To record EMG data a single Myo [21] gesture control armband is used. It is
made up of eight equally spaced blocks with non-invasive EMG sensors that
provide a sampling rate of 200 Hz. The armband is worn around the middle of the
forearm as shown in Fig. 2. In order to record consistent data with the sensor,
the segment with the LED light has to be placed approximately at the same
position. Slight variations after taking the Myo off and putting it back on did not
influence the recordings enough to make the trained network unusable. Retrieving
the raw EMG signals is done with the help of a Python API provided by [9].
Each channel encodes the individual measurement as int8 values.
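As a concrete illustration of this interface, the following is a hedged sketch of how the raw int8 samples could be buffered and normalized before being fed to the SNN; the class name, window size, and scaling are assumptions, not the paper's implementation.

```python
import collections
import numpy as np

N_CHANNELS = 8  # the Myo armband provides eight EMG channels

class EmgBuffer:
    """Hypothetical helper: keep the latest 8-channel samples and scale
    the int8 readings (-128..127) to [-1, 1] for the SNN input."""

    def __init__(self, window=40):            # ~200 ms of data at 200 Hz
        self.samples = collections.deque(maxlen=window)

    def on_emg(self, emg):                    # called once per sample,
        self.samples.append(np.asarray(emg))  # e.g. from an EMG callback

    def current(self):
        """Return the latest normalized 8-channel sample (zeros if empty)."""
        if not self.samples:
            return np.zeros(N_CHANNELS)
        return self.samples[-1].astype(float) / 128.0
```

The normalized output could then be fed into the network, for example through a `nengo.Node` whose output callable returns `buffer.current()`.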
Fig. 2: The Myo armband sensor placed on the arm and the different EMG channels.
When a finger is moved, the electric muscle activity is recorded with eight different
sensors. The sensor has an indicator so that it can always be placed in a similar way.
For each user a training dataset is required with multiple samples. A sample
consists of a continuous sequence of finger activations in one hand. Each finger has
to be flexed down for a short period of time and then extended again. This
procedure is repeated starting from the thumb to the pinky. The training data
has to be recorded as a time continuous EMG stream of all eight channels with
appropriate binary labels for the time windows during which a finger was pressed.
A sample dataset is provided in Fig. 3.
Notice that individual channels of the EMG sensor have similar activation
for different fingers, and thus are not enough to identify the motion of a finger.
Therefore, the classification network uses a combination of all eight channels,
which provides a unique representation for each finger.
2.2 Sub network for EMG classification
After recording training data, the first part of the SNN is trained for classifi-
cation. The detailed architecture for EMG classification is presented in Fig. 4.
The raw EMG data from a user wearing the EMG sensor is fed to the SNN. A
population of neurons encodes the stream of EMG input into spikes using
stochastic population encoding.
Fig. 3: A sample dataset for training with a run of all 5 fingers. From left to right the
peaks show EMG activation of the fingers, from the thumb to the pinky. Each
finger is flexed and then extended.
Then, a second population of neurons is trained
offline with a whole training dataset for a user as described above.
The learning rule used for offline training of the classification population is
Prescribed Error Sensitivity (PES), using the labels from the training data as
error signals E. PES is implemented in [2], and was first presented in [17]. The
weight updates that PES makes to minimize an error signal during learning can
be related to Skipped Back-propagation. For the weights $w_{ij}$ from pre-synaptic
population $i$ to post-synaptic population $j$, the update rule is defined as

$\Delta w_{ij} = \kappa\, \alpha_j\, e_j \cdot E\, a_i$ ,   (1)

with $\kappa$ a scalar learning rate, $\alpha_j$ the gain or scaling factor, $e_j$ the encoder of
the neuron, $E$ the error to minimize, and $a_i$ the activity of pre-synaptic neuron $i$.
After classification by the second population, the overall signals are low in
amplitude and consequently rather close to each other. Therefore, a population
is used to refine the classification by amplifying the signals and generating the
hand activation signal. An arbitrarily defined function scales up activations above
a manually set threshold and is used to train this population. The resulting
activation signal is passed over to the motion generation part. Examples of the
complete classification procedure for all fingers are provided in Fig. 11
in the results section. All populations are connected all-to-all.
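To make the PES setup concrete, the following hedged Nengo sketch shows how such a learned classification connection could be wired up. The population sizes, the learning rate, and the way the label signal is injected are assumptions chosen for illustration, not the exact configuration used in the paper.

```python
import nengo
import numpy as np

N_FINGERS = 5

model = nengo.Network()
with model:
    # Encoded EMG input (8 channels) and the classification population.
    emg = nengo.Ensemble(n_neurons=800, dimensions=8)
    classification = nengo.Ensemble(n_neurons=500, dimensions=N_FINGERS)

    # Learned connection: starts from a zero function and is adapted by PES.
    conn = nengo.Connection(emg, classification,
                            function=lambda x: np.zeros(N_FINGERS),
                            learning_rule_type=nengo.PES(learning_rate=1e-4))

    # Error = classification output minus training labels; during offline
    # training the label node would be driven by the recorded dataset.
    labels = nengo.Node(size_in=N_FINGERS)
    error = nengo.Ensemble(n_neurons=500, dimensions=N_FINGERS)
    nengo.Connection(classification, error)
    nengo.Connection(labels, error, transform=-1)
    nengo.Connection(error, conn.learning_rule)
```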
2.3 Sub network for motion representation for reflexes
A population takes the hand activation signal from the previous classification
to trigger the appropriate finger reflex. We model a reflex as the execution of
a motor primitive based on a specific stimulus. Accordingly, the motion part of
our SNN is divided into reflex activation and motor primitive layers. The whole
architecture for the representation of reflexes is presented in Fig. 5.
The reflex activation for each finger is modeled as an oscillator

$h(\omega) = a \cdot \sin\!\left(b\,\omega\,\tfrac{\pi}{2}\right)$ ,   (2)

with $\omega$ a recurrent connection and $a$ and $b$ the parameters for the amplitude
and frequency.
Fig. 4: Detailed architecture for EMG classification. Each circle represents a popula-
tion of spiking neurons. Raw EMG data is recorded from the user and is encoded into
spikes. The encoded EMG signals are classified to determine the finger that was acti-
vated. The classification signal is refined and amplified to have a clear hand activation
signal that is passed to the motion generation part.
Fig. 5: Detailed architecture for motion representation for reflexes. Each circle repre-
sents a population of spiking neurons. The hand activation signal is processed by the
hand abstraction population to extract individual finger activations. Reflexes are mod-
eled as oscillators that oscillate only once. The activation is decoded from two com-
ponents as u and then mapped to a motor primitive with the robot kinematic param-
eters using g(f(u)). Finally, the neural activity is decoded from spikes and sent as mo-
tor commands to the respective robot finger.
The oscillator generates a continuous signal for a finite period that
represents the duration of a motion with a start and an end point. By indexing
the neurons in the oscillator population, the activity can be mapped to a 2D
plane. The total neural activity of the population in the plane can be represented
with the components $x$ and $y$. We calculate a continuous and normalized signal
$u \in [-1, 1]$ as
$u = \sin\!\left(\arctan\!\left(\tfrac{y}{x}\right)\right)$ ,   (3)

where $\arctan(\tfrac{y}{x})$ represents the angle of a vector with components $x$ and $y$. To
bound and smooth the signal, $\sin()$ is applied.
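The following hedged Nengo sketch shows how the normalized activation u of Eq. (3) could be decoded from the 2-D oscillator state; the population sizes are illustrative assumptions.

```python
import nengo
import numpy as np

# Decode u from the oscillator state (x, y), following Eq. (3).
model = nengo.Network()
with model:
    oscillator = nengo.Ensemble(n_neurons=300, dimensions=2)
    u = nengo.Ensemble(n_neurons=100, dimensions=1)

    # sin(arctan2(y, x)) = y / sqrt(x^2 + y^2): the bounded, smoothed angle
    # of the oscillator state vector, as described in the text.
    nengo.Connection(oscillator, u,
                     function=lambda x: np.sin(np.arctan2(x[1], x[0])))
```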
The motor primitive is modeled as a mapping of $u$ to a sequence of joint
activations during the period of oscillation, and it can be mapped to one or
multiple joints. In Fig. 5 the distal and proximal joints of the index finger are
mapped. A mapping

$f(u) = \frac{\sin\!\left(u\,\pi - \frac{\pi}{2}\right)}{2} + \frac{1}{2}$ ,   (4)

is defined with a sinusoidal function to have smooth initial and final phases.
This characteristic is important when executing the motion in a real robot. The
resulting generic primitive for one joint is depicted in Fig. 6a.
Fig. 6: (a) Mapping of u to the motor primitive with f(u). (b) Complete mapping of
u to the robot kinematics with g(f(u)).
In general terms, there is no difference between voluntary and reflex motions
in the muscular activities (synergies); the difference lies in the activation. For vol-
untary motions the activation is discrete, and for reflexes it is a complete one-time execution.
2.4 Mapping to robot motor commands
Finally, in order to actually be able to control the robot, the primitives have to
be mapped to the robot kinematics. This means scaling to the motion interval
$(\theta_{max} - \theta_{min})$ and offset $\theta_{min}$ of the joint $\theta$. For this purpose we define
$g: [0,1] \rightarrow \mathbb{R}^n$ as a function for each joint,

$g(f(u)) = f(u) \cdot (\theta_{max} - \theta_{min}) + \theta_{min}$ ,   (5)

to generate appropriate motor commands.
A schema for the mapping g for a robotic hand is illustrated in Fig. 6b and is
defined in the table in Fig. 7; a minimal sketch of this mapping follows the table
below. This parametric representation of motions allows us to further combine
and change parameters of the motor primitives.
Joint name   Primitive   θmin   θmax
joint1       0           0      0.7
joint2       1           0      0.4
joint3       1           0      0.8
Fig. 7: Table for the joint mapping schema. A joint is defined with a name, the
associated primitive, and the interval of the joint θmin and θmax. A primitive can be
mapped to one or more joints.
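As referenced above, this is a minimal Python sketch of the mapping, using Eq. (4) and Eq. (5) as reconstructed here and the joint parameters from the table in Fig. 7; it is illustrative only and assumes those reconstructed forms.

```python
import numpy as np

def f(u):
    """Motor primitive profile of Eq. (4): smooth initial and final phases."""
    return np.sin(u * np.pi - np.pi / 2.0) / 2.0 + 0.5

def g(u, theta_min, theta_max):
    """Map the primitive to a joint command following Eq. (5)."""
    return f(u) * (theta_max - theta_min) + theta_min

# Joint mapping schema from the table in Fig. 7:
# joint name -> (primitive index, theta_min, theta_max).
joints = {
    "joint1": (0, 0.0, 0.7),
    "joint2": (1, 0.0, 0.4),
    "joint3": (1, 0.0, 0.8),
}

# Example: motor commands for the joints driven by primitive 1 at u = 0.5.
u = 0.5
commands = {name: g(u, lo, hi)
            for name, (prim, lo, hi) in joints.items() if prim == 1}
print(commands)
```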
2.5 Integration of all components
A detailed architecture of the full SNN is presented in Fig. 8. Notice that the
primitives for the thumb, ring and pinky are mapped to one actuated joint,
whereas the index and middle finger primitives are mapped to two actuated
joints.
Fig. 8: Detailed architecture with EMG and motion sub networks. Each circle repre-
sents a population of spiking neurons. The dotted lines divide the conceptual compo-
nents, which are named on the bottom part according to Fig. 1.
3 Results
The experiment setup is depicted in Fig. 9, and consists of a human user, an EMG
sensor, a robot hand, and the simulation of the SNN. A user wears the EMG
sensor (Myo armband) on the forearm, and the signals are sent via Bluetooth to
the computer. The computer receives the EMG data and inputs it to the SNN
simulation running in Nengo [1]. The computer communicates at the same time
with the robot hand (Schunk SVH) via ROS [19]. To control the robot hand the
official Schunk ROS driver is used [13].
3.1 SNN implementation
The SNN was implemented in Python with the Nengo simulator using leaky
integrate and fire (LIF) neurons. To get an overview of the implementation
Fig. 10 presents a view of the whole network running. The structure can be
easily mapped to that in Fig. 8. The eight channels of human EMG signals are
encoded by a population as stochastic spike rates based on their values. After
performing offline training with different datasets from the same user, the EMG
classification takes place. The classification signal is then passed over to trigger
motion generation. The reflexes are implemented as oscillators that activate
motor primitives. The motor primitives are mapped to the robot kinematics as
Fig. 9: Experiment setup. (left) The user wears the EMG sensor (Myo armband) on the
forearm. (right) The SNN is simulated in Nengo on a computer connected via Bluetooth
to the EMG sensor and via ROS to the robot hand. (middle) The robot hand (Schunk
SVH) is connected to the computer via ROS.
defined in the methods section. In the following sections the relevant details of
each component are described.
Fig. 10: Full SNN pipeline in Nengo. There are four main components: human EMG
data capture and manipulation, EMG classification, motion generation, and finally the
mapping to the robot. The structure can be easily mapped to that in Fig. 8.
3.2 Training data
For each user a set of training data is required to train the SNN offline as de-
scribed in section 2.2. Training data for the classification network was recorded
in one session lasting 60 seconds. During that time individual fingers were pe-
riodically pressed against the palm of the hand and subsequently returned to a
resting pose. The finger presses occurred in sequence from thumb to pinky,
with each press lasting between 300 ms and 500 ms. Together with the rest-
ing time one cycle took around 7.5 s, and a total of 8 cycles were performed. A
sample run of the training data with all 5 fingers is presented in Fig. 3. To
label the EMG data, a keyboard button indicating the respective finger was
pressed simultaneously with every finger press. The data is labeled in time for
each finger, and all eight EMG channels are active.
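A hedged sketch of how such per-sample labels could be built from the recorded keypress intervals is given below; the function name and data layout are assumptions, not the paper's actual tooling.

```python
import numpy as np

FS = 200          # Myo sampling rate in Hz
N_FINGERS = 5

def make_labels(n_samples, presses):
    """Build per-sample binary labels from recorded keypress intervals.

    presses: list of (finger_index, t_start_s, t_end_s) tuples, one per
    keyboard press held down while the corresponding finger was flexed.
    """
    labels = np.zeros((n_samples, N_FINGERS))
    for finger, t_start, t_end in presses:
        i0, i1 = int(t_start * FS), int(t_end * FS)
        labels[i0:i1, finger] = 1.0
    return labels

# Example: a single thumb press from 1.0 s to 1.4 s in a 60 s recording.
labels = make_labels(n_samples=60 * FS, presses=[(0, 1.0, 1.4)])
```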
3.3 Processing of EMG data and classification
The first group of 800 neurons (EMG input in Fig. 4) was activated with the
raw EMG data (int8 values) and converted it to spikes. The second group of 500
neurons (Classification in Fig. 4) was trained with the pre-recorded training data
at startup to output responses for the classified fingers. Then, a third group of 500
neurons (Classification refined in Fig. 4) was used to separate and amplify the
signals further. A final group of 500 neurons (Hand activation signal in Fig. 4)
was trained to output one single signal for a specific finger and was connected
to 5 groups representing the different fingers of the robot hand.
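For illustration, a hypothetical example of the kind of thresholded amplification function that the refinement population could be trained to compute is shown below; the threshold and gain values are assumptions, not those used in the paper.

```python
import numpy as np

# Hypothetical refinement function: suppress weak classification outputs
# and amplify the ones above a manually set threshold.
THRESHOLD = 0.3   # assumed value
GAIN = 2.5        # assumed value

def refine(x):
    x = np.asarray(x, dtype=float)
    return np.where(x > THRESHOLD, np.clip(GAIN * x, 0.0, 1.0), 0.0)

# In Nengo this could serve as the decoded function of the connection into
# the "Classification refined" population, e.g.
# nengo.Connection(classification, refined, function=refine).
```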
In Fig. 11 we present samples of the SNN classifying the activation of the
different fingers. As can be seen, the eight EMG channels show different data
for each finger. The signals are processed with the SNN, and the activation of
the different populations can also be observed. The output of the classification
is a dominant activation of one of the populations representing each finger.
3.4 Motion generation
The resulting motion of the robot is presented as a frame sequence in Fig. 13.
The data presented corresponds to a reflex motion of the index finger. The cor-
responding activity of the SNN is presented in Fig. 12 with sufficient informa-
tion to illustrate the functioning of the SNN. The signal that triggers motion
generation comes from the classification part. An oscillator is activated for each
finger. Observe the circular activation of the oscillator population when decoded
in a plane XY in Fig. 12. From this circular activation u is decoded and mapped
to one or more joints in the robot hand. Observe that the mapping in Fig. 12
is performed to two joints of the index finger. Finally, the neural activity is de-
coded and sent over ROS to the robot hand.
In order to evaluate the accuracy of the classification we selected one random
user, who was asked to perform 50 trials with each finger. The
EMG data was fed to the trained network and we counted the classification
output for each trial. The results are summarized in Fig. 14. For this user, only
the pinky had an accuracy of 1.0, which means that all the trials were classified
correctly. For all other fingers there were false classifications or no classification at all.
3.5 Interface to the robot hand
We describe here the mapping schema that was used with the robot hand
(Schunk SVH). The table in Fig. 15 summarizes the data.
Fig. 11: EMG activation and classification for different fingers. The first graph from
the left shows the EMG signals encoded into spikes. The second plot shows the classifica-
tion output. The other plots show the refined classification signal and the hand activa-
tion signal respectively, with one finger active. From top to bottom, the thumb, index, mid-
dle, ring and pinky are shown. Notice the different activations in the eight channels.
Fig. 12: Motion generation from the SNN for a reflex in the index finger. The frames
A to E correspond to the fingers from the thumb to the pinky, respectively. From
left to right the plots represent: (left) Finger activation signal coming from the EMG
classification. (middle left, group of three plots) The activity in the oscillator with
a spike train plot, the decoded activity of the population in a plane XY, and a raster
plot color coded by the neuron's activity. (middle right, group of three plots) The
decoding to u, and the mapping g(f(u)) to the robot kinematics. Note that the
mapping goes to two different joints. (right) A continuous plot of the current motor
commands being sent to the robot.
Fig. 13: Frame sequence of the index finger motion generated by the SNN activity.
The motion presented corresponds to the activation shown in Fig. 12. After receiving
the classification signal the reflex is activated and fully executed once.
Finger moved   Number of trials   Thumb class   Index class   Middle class   Ring class   Pinky class   None class   Accuracy
Thumb          50                 46            0             0              1            0             3            0.92
Index          50                 0             43            1              0            2             4            0.86
Middle         50                 0             3             40             0            0             7            0.80
Ring           50                 0             1             2              44           0             3            0.88
Pinky          50                 0             0             0              0            50            0            1.00
Fig. 14: Table of classification accuracy. An evaluation summary of the classification
consisting of 50 trials of each finger for one random user. We counted which finger
class was detected and calculated the accuracy ratio for each finger. Notice that only
the pinky was always classified correctly. For the other fingers there are either false
detections or no classification at all.
The "Joint name" column corresponds to the ROS topics described in [13] for the
different actuated joints. A different primitive is used for each finger, and the
indexing is given in the column "Primitive". Note that the index and middle finger
joints are mapped to the same primitives, respectively. The "min" and "max" values
for each joint complete the table.
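For illustration, a hedged rospy sketch of how decoded joint values could be published as a JointState message is given below. The topic name and joint names are placeholders and assumptions; the actual names are defined by the schunk_svh_driver ROS interface [13].

```python
import rospy
from sensor_msgs.msg import JointState

# Sketch: publish decoded joint targets to the Schunk SVH driver.
rospy.init_node("snn_hand_commands")
pub = rospy.Publisher("/svh_controller/channel_targets",  # assumed topic name
                      JointState, queue_size=10)

def send_command(joint_names, positions):
    """Send one set of joint position targets to the hand."""
    msg = JointState()
    msg.header.stamp = rospy.Time.now()
    msg.name = joint_names
    msg.position = positions
    pub.publish(msg)

# Example: command the two index finger joints decoded from the SNN
# (joint names here are placeholders).
send_command(["Index_Finger_Distal", "Index_Finger_Proximal"], [0.45, 0.35])
```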
4 Discussion
We presented a SNN that activates motion reflexes on a robotic hand based
on human EMG data. The network classifies the EMG signals to detect finger
activation. Based on it, single finger reflexes are triggered. The finger reflexes
are modeled with motion primitives and mapped to the robot kinematics.
As can be seen in Fig. 3 and 11, the index finger showed almost no discernible
signal in the raw EMG data. For the index finger, the signal is not clear and
the output is sometimes ambiguous. As a consequence, the classification step
delivers a weak and low activation signal for the index finger that is propagated
throughout the following populations and leads to false classifications of other
fingers.
The focus of this work was on single finger movements, so data with multiple
fingers was not considered, only single finger movements in quick succession.
The EMG signals were used only to trigger the execution of the reflexes.
Joint name Primitive θmin θmax
Thumb Flexion 0 0 0.3
Thumb Opposition - - -
Index Finger Distal 1 0 0.9
Index Finger Proximal 1 0 0.7
Middle Finger Distal 2 0 0.9
Middle Finger Proximal 2 0 0.7
Ring Finger 3 0 0.7
Pinky 4 0 0.7
Finger Spread - - -
Fig. 15: Table. Joint mapping schema. A summary of the corresponding topics as joint
name, the primitive used for each finger, and the joint angle intervals for each actuated
joint. The active joints in the robot hand "Thumb Opposition" and "Finger Spread"
remained constant all the time.
In future work we want to explore a mechanism to use the EMG signal to perform
discrete control of the finger positions.
Using a second EMG sensor could provide additional input from different
muscle areas of the arm and improve classification results. Ideally the second
EMG sensor would be located close to the wrist [20], closer to the fingers and
the hand. This is difficult with the Myo armband because its circumference is too
big for that location. A bigger set of training data would also be very likely to
improve classification.
Acknowledgments. The research leading to these results has received fund-
ing from the European Union Horizon 2020 Programme under grant agreement
n.720270 (Human Brain Project SGA1).
References
1. Bekolay, T., Bergstra, J., Hunsberger, E., DeWolf, T., Stewart, T.C., Rasmussen,
D., Choo, X., Voelker, A., Eliasmith, C.: Nengo: a python tool for building large-
scale functional brain models. Frontiers in neuroinformatics (2014)
2. Bekolay, T., Kolbeck, C., Eliasmith, C.: Simultaneous unsupervised and supervised
learning of cognitive functions in biologically plausible spiking neural networks. In:
Cogsci (2013)
3. Bernstein, N.: The co-ordination and regulation of movements. Pergamon-Press
(1967)
4. Bizzi, E., Cheung, V., d’Avella, A., Saltiel, P., Tresch, M.: Combining modules for
movement. Brain research reviews (2008)
5. Chinellato, E., Pobil, A.: The Visual Neuroscience of Robotic Grasping: Achieving
Sensorimotor Skills through Dorsal-Ventral Stream Integration. Cognitive Systems
Monographs, Springer International Publishing (2016)
6. Chowdhury, R.H., Reaz, M.B., Ali, M.A.B.M., Bakar, A.A., Chellappan, K.,
Chang, T.G.: Surface electromyography signal processing and classification tech-
niques. Sensors (2013)
7. d’Avella, A., Saltiel, P., Bizzi, E.: Combinations of muscle synergies in the con-
struction of a natural motor behavior. Nature neuroscience (2003)
8. Davies, M., Srinivasa, N., Lin, T.H., Chinya, G., Cao, Y., Choday, S.H., Dimou,
G., Joshi, P., Imam, N., Jain, S.: Loihi: A neuromorphic manycore processor with
on-chip learning. IEEE Micro (2018)
9. dzhu: Myo python API. https://github.com/dzhu/myo-raw/ (2018), [Online; accessed 2018-02-21]
10. Eliasmith, C., Anderson, C.H.: Neural engineering: Computation, representation,
and dynamics in neurobiological systems. MIT Press, Cambridge, MA (2003)
11. Furber, S., Temple, S., Brown, A.: High-performance computing for systems of
spiking neurons. In: AISB’06 Workshop. GC5: Archit. Brain Mind (2006)
12. Grüning, A., Bohte, S.M.: Spiking neural networks: Principles and challenges. In:
ESANN (2014)
13. Heppner, G.: schunk_svh_driver. http://wiki.ros.org/schunk_svh_driver
(2018), [Online; accessed 2018-02-21]
14. Johannes, M.S., Bigelow, J.D., Burck, J.M., Harshbarger, S.D., Kozlowski, M.V.,
Van Doren, T.: An overview of the developmental process for the modular pros-
thetic limb. Johns Hopkins APL Technical Digest (2011)
15. Knierim, J.: Spinal reflexes and descending motor pathways. Neuroscience Online
(2016), accessed: 11.02.2018
16. Maass, W.: Networks of spiking neurons: The third generation of neural network
models. Neural Networks (1997)
17. MacNeil, D., Eliasmith, C.: Fine-tuning and the stability of recurrent neural net-
works. PLoS ONE (2011)
18. Merolla, P.A., Arthur, J.V., Alvarez-Icaza, R., Cassidy, A.S., Sawada, J., Akopyan,
F., Jackson, B.L., Imam, N., Guo, C., Nakamura, Y.: A million spiking-neuron
integrated circuit with a scalable communication network and interface. Science
(2014)
19. Quigley, M., Conley, K., Gerkey, B., Faust, J., Foote, T., Leibs, J., Wheeler, R.,
Ng, A.Y.: Ros: an open-source robot operating system. In: ICRA workshop on
open source software. Kobe, Japan (2009)
20. Tenore, F., Ramos, A., Fahmy, A., Acharya, S., Etienne-Cummings, R., Thakor,
N.V.: Towards the control of individual fingers of a prosthetic hand using surface
emg signals. In: EMBC (2007)
21. ThalmicLabs: Myo diagnostics. http://diagnostics.myo.com/ (2018), [Online;
accessed 2018-02-21]
22. Tieck, J.C.V., Donat, H., Kaiser, J., Peric, I., Ulbrich, S., Roennau, A., Zöllner, M.,
Dillmann, R.: Towards grasping with spiking neural networks for anthropomorphic
robot hands. ICANN (2017)