Impact of Diverse Aspects in User-Prosthesis
Interfaces for Myoelectric Upper-limb Prostheses
Diego Cardona, Guillermo Maldonado, Victor Ferman, Ali Lemus and Julio Fajardo
Abstract—Numerous assistive devices have complex ways of operating and interacting with their users, leading patients to abandon them in their activities of daily living. To present a better solution and mitigate the issues caused by complex or expensive alternatives, a test comparing different user-prosthesis interfaces, including one created for this work, was carried out to determine how diverse aspects affect their user-friendliness. A simple, anthropomorphic, 3D-printed upper-limb prosthesis was adapted to evaluate all the interfaces considered. The chosen design makes it easy to modify its operational mode, which simplified running the tests. Additionally, experimental results show that the selected prosthetic device can be successfully adapted to an amputee's lifestyle, lending validity to the study. For the interaction process, a wireless third-party device was selected to gather the user intent and, in some versions, to work in tandem with visual feedback or with a multimodal alternative to verify their impact on the user.
Index Terms—Upper-limb prosthesis, three-dimensional printing, electromyography, user-prosthesis interface
I. INTRODUCTION
In recent years, there has been substantial progress in high-technology prosthetic devices, offering patients numerous alternatives, perks and features to improve their condition. Most of the focus has been placed on diverse methods to interpret the user intent in order to actuate bionic prostheses, or on investigating ways to make them more efficient. Nevertheless, most of these advances have not completely succeeded in providing the user with a simple and easy-to-use user-prosthesis interface (UPI), because the studies have been directed elsewhere.
Traditionally, research on upper-limb prosthesis control has focused on different techniques based on the processing of electromyography (EMG) signals to analyze the user intent and operate the prosthesis with a specific activation profile. Some solutions to this problem involve implants [1]–[3], which employ Bluetooth or radio channels. These assistive devices use wireless charging to function and must regulate their power dissipation to a value that is safe for human tissue in order to avoid damage.
Diego Cardona, Guillermo Maldonado, Ali Lemus and Julio Fajardo are with the Turing Research Laboratory, FISICC, Galileo University, Guatemala City, Guatemala. {juandiego.cardona, guiller, julio.fajardo}@galileo.edu
Victor Ferman and Julio Fajardo are with the Department of Computer Engineering and Industrial Automation, FEEC, UNICAMP, 13083-852 Campinas, SP, Brazil. {julioef, vferman}@dca.fee.unicamp.br
In a similar manner, several approaches opt for using Brain-Machine Interfaces (BMIs) as a means to control these devices. One of the most recent iterations is based on high-density electrocorticography (ECoG), which allows the user to control each finger individually [4]. These works show that, although implants may have promising results and help alleviate the discomfort of wired prostheses, they require challenging, intrusive procedures that result in an expensive and complex product. Other projects show more creative approaches to analyzing the EMG signals, utilizing other body parts to drive the movements of the prosthetic limb, as shown in [5] and [6], which use the toes and the tongue, respectively. Such methods provide alternatives for other disarticulation types, such as bilateral amputations, but are not as efficient for transradial amputees, because they are not intuitive to the human body and they also affect the way some common activities of daily living (ADLs), like walking and eating, must be carried out.
Several commercial prosthetic hands use state machines actuated by a single feature of a predefined subset of muscle activity, while the majority of sophisticated research assistive devices are based on pattern recognition algorithms with a multimodal approach. This consists of taking a set of EMG features and complementing them with information from other types of sensors, such as inertial measurement units (IMUs), mechanomyography (MMG), accelerometers or even features detected by a microphone, showing a substantial improvement in classification rates [7]. This method has been used successfully to improve user control of prosthetic devices [8], for example by using a hybrid system with Radio Frequency Identification (RFID) tags on specific objects to reduce the cognitive effort needed to operate a prosthesis and to address some of the well-known issues of EMG techniques, such as the limb position effect [9,10]. Similarly, other approaches have been explored with these types of systems, such as using voice control together with visual feedback through a touchscreen LCD, providing users with an alternative way to control their prosthetic limb [11]. Other studies have been carried out to increase the functionality of multi-grasping upper-limb prostheses by utilizing a hybrid system of EMG and deep-learning-based artificial vision. This works by associating a subset of objects with a specific kind of grasp based on the geometric properties of the target. The classification process is fulfilled via a convolutional neural network (CNN) employed as an object classifier [12]–[14].
This paper focuses on the evaluation of user-prosthesis interfaces employing a wireless module and compares the impact of certain aspects of the interaction process. Since Thalmic Labs' Myo armband has been shown to be an affordable and viable replacement for medical-grade sensors, even with subjects with a certain level of transradial amputation [4,15]–[17], it was selected to operate the versions in this project. This gesture-based system was chosen because it processes and classifies the surface EMG (sEMG) signals, and its small subset of contractions can be adapted to enact numerous actions. In addition, its use in all the interfaces facilitates the replication of the different iterations and removes any possible bias regarding the transducer used to gather the user intent, so that only the UPI is evaluated. Although the cost increases by using such a system, it results in a more comfortable device for the subject in comparison with implants or wired prostheses, and it helps to keep the design modular. On top of that, the contractions detected by the Myo permit creating an interaction process with a significantly greater number of expressions than similar alternatives.
The rest of this paper is structured as follows: Section II elaborates on the related projects replicated for the comparison of the interfaces. Section III presents the UPI proposed for this work. Section IV describes the hardware used for all the versions and how the whole system is integrated. Section V describes the evaluation processes and their interpretation. Finally, the last section, Section VI, deals with the impact of the results.
II. RELATED WORK
To identify the aspects relevant to a user-friendly interaction, several interfaces were evaluated. Their selection was based on their different interaction processes, similar price ranges and physical characteristics. That is why the same hardware, the Galileo Hand [18] (shown in Fig. 1), was adapted to fit each rendition, and the same array of sensors, Thalmic Labs' Myo armband, was worn on the patient's forearm, where the stump of transradial amputees is located, to create a natural operational mode.
Fig. 1: Galileo Bionic Hand: anthropomorphic, 3D-printed upper-limb prosthesis.
A. Multimodal approach using buttons and Myo interface
The functionality of this version, similar to the work presented in [19], is illustrated by the Finite State Machine (FSM) in Fig. 2. Both the subset of muscle contractions, Q = {q0, q1, q2, q3}, corresponding to Thalmic Labs' "Myo poses", and the buttons, B = {b0, b1} (installed on top of the hand's shell), are used to operate the prosthesis. Using "wave out" (hand extension), q0, or "wave in" (hand flexion), q1, as well as b0 and b1, moves the menu displayed on a µLCD screen (shown in Fig. 3) forwards or backwards, respectively; these changes take place in state S1. Moreover, S0 indicates that the prosthesis is resting in its default state, with the fingers fully extended, while S3 indicates that they are completely flexed. It is relevant to note that, while in this last state, the user is prevented from changing actions in the menu, because the timing of the coiling and uncoiling processes differs between actions, and a mismatch in finger timing could be detrimental to future behavior.
On the other hand, S2 and S4 indicate that the prosthetic hand is currently closing or opening, respectively; these processes can interrupt each other if the corresponding command is received. Furthermore, to activate an action, q2, "fist", needs to be received, whilst "double tap" (two swift, consecutive contractions) and "fingers spread" together form the contraction q3, which deactivates it. The decision to use both gestures to deactivate the actions was taken according to the results shown in Section V. Finally, the other relevant elements in the FSM representing the interface's behavior are the flags fl, which informs that all fingers have reached the desired position, and tr, which indicates that the time required to fully open the hand has passed.
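The interface-level logic can be summarized by a short simulation of the FSM in Fig. 2. The following Python sketch is illustrative only: the event names are placeholders, and the actual implementation runs as firmware on the prosthesis' microcontroller.

```python
# Minimal simulation of the FSM in Fig. 2 (illustrative sketch, not the
# actual firmware). Event names for poses/buttons are assumptions.
from enum import Enum, auto

class State(Enum):
    OPEN = auto()       # S0: fingers fully extended
    MENU = auto()       # S1: menu position changing
    CLOSING = auto()    # S2: fingers closing
    CLOSED = auto()     # S3: selected action completed
    OPENING = auto()    # S4: fingers re-opening

def step(state, event):
    """Advance the interface FSM by one event (q0/q1: wave out/in,
    b0/b1: shell buttons, q2: fist, q3: double tap or fingers spread,
    fl: all fingers reached target, tr: opening time elapsed)."""
    if state in (State.OPEN, State.MENU) and event in ("q0", "q1", "b0", "b1"):
        return State.MENU          # scroll the on-screen action menu
    if state in (State.OPEN, State.MENU, State.OPENING) and event == "q2":
        return State.CLOSING       # activate the selected action
    if state == State.CLOSING and event == "fl":
        return State.CLOSED        # all fingers reached their targets
    if state in (State.CLOSING, State.CLOSED) and event == "q3":
        return State.OPENING       # deactivate and re-open the hand
    if state == State.OPENING and event == "tr":
        return State.OPEN          # opening time elapsed
    return state                   # menu changes are blocked while closed

# Example: select an action, execute it, then release it.
s = State.OPEN
for e in ("q0", "q2", "fl", "q3", "tr"):
    s = step(s, e)
    print(e, "->", s.name)
```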
B. Multimodal approach based on object classification and
detection
This version is a replica of the one used in [14], which uses a mobile application to interface the prosthesis and the patient. This rendition has a camera mounted on the top side of the shell, which takes pictures of the objects that will interact with the assistive device, and suggests a grasp to the user via an app. The photographic module can also be replaced with the mobile device's own camera. By employing the Myo's default poses, the user can either accept, reject or cancel the recommendations provided by the computer vision detection and localization algorithm. This process uses a bag-of-words method to assign labels to the detected objects, where each label is associated with a specific, customized grasp.
Fig. 2: Finite State Machine showing the behavior of the interface operated with the buttons and the Myo. S0 indicates that the hand is completely open; S1 that a change in the screen is occurring; S2 that the prosthetic hand is closing (as it completes this process, the flag fl is raised); S3 that the fingers have reached the desired position; S4 that the hand is opening (when it finishes, the flag tr is raised).
Fig. 3: Galileo Hand's graphical menu (left) and the prosthesis performing the action "Close" (right).
The interface's operational mode is described in Fig. 4, where the contractions Q = {q0, q1, q2, q3} represent the following Myo poses, respectively: "fist", "fingers spread", "wave in" and "wave out", which are used to navigate the states of the FSM. S0 indicates that the prosthetic hand is completely open; S1, that a picture is being taken; S2, that a label for the detected object is being determined; and S3, that the selected action is being executed. The remaining relevant elements in the FSM are the flags t and l: the first indicates a timeout in assigning a label, while the second informs that one was successfully assigned. The transition that occurs when q1 is active indicates that another image needs to be taken and cancels the action selection process. On the other hand, q2 accepts the suggestion provided by the algorithm, while q3 rejects it so that another grasp is proposed.
Fig. 4: Behaviour of the UPI of the version based on object recognition. S0 indicates that the prosthetic hand has all of its fingers completely open; S1 that a picture is being taken; S2 that a label is being determined (when this process finishes successfully, the flag l is raised; if not, t is raised); S3 that the action is being executed.
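The accept/reject flow of this vision-based interface can be sketched as follows. This is an illustrative Python sketch: the object labels, grasp names and the classify() placeholder are assumptions, not the actual bag-of-words classifier or mobile app from [14].

```python
# Illustrative sketch of the suggestion/accept/reject loop of the
# vision-based interface. Labels, grasps and the classifier are
# placeholders; the real system uses a bag-of-words object classifier.
GRASP_FOR_LABEL = {          # assumed label -> customized grasp mapping
    "bottle": "power",
    "credit_card": "lateral",
    "key": "pinch",
}

def classify(image):
    # Placeholder for the detection/localization and labeling step (S1 -> S2).
    return "bottle"

def run_interface(image, pose_stream):
    label = classify(image)
    grasp = GRASP_FOR_LABEL.get(label, "power")
    for pose in pose_stream:                # S2: wait for the user's answer
        if pose == "q2":                    # "wave in": accept the suggestion
            return f"executing {grasp} grasp for {label}"   # S3
        if pose == "q3":                    # "wave out": reject, propose another
            grasp = "tripod" if grasp != "tripod" else "pinch"
        if pose == "q1":                    # "fingers spread": retake the picture
            return "retaking picture"       # back to S1
    return "timeout, back to S0"

print(run_interface(None, ["q3", "q2"]))
```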
C. sEMG pattern recognition
The following interface, based on [11] but utilizing the Myo's pattern recognition methods, consists of a simple system that maps each of the predefined "Myo poses" to a gesture to be executed. The mapping was carried out as follows: "wave in" to a pointing position; "wave out" to a lateral grasp; "double tap" to a hooking stance; and "fist" and "fingers spread" to closing and opening all fingers, respectively. The gestures selected were the ones considered to be the most useful in the ADLs.
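Since there is no menu, the whole interface reduces to a static lookup from detected pose to predefined gesture, as in the sketch below; the gesture identifiers are hypothetical names for the firmware's predefined finger set-points.

```python
# Direct mapping of the Myo's default poses to prosthesis gestures in the
# pattern-recognition interface (illustrative sketch; gesture names are
# placeholders for the predefined finger set-points in the firmware).
POSE_TO_GESTURE = {
    "wave_in":        "point",     # index extended, other fingers flexed
    "wave_out":       "lateral",   # lateral (key) grasp
    "double_tap":     "hook",      # hooking stance
    "fist":           "close_all", # power grip / close all fingers
    "fingers_spread": "open_all",  # return to the fully open hand
}

def on_pose(pose):
    gesture = POSE_TO_GESTURE.get(pose)
    if gesture is not None:
        print(f"pose '{pose}' -> executing '{gesture}'")

on_pose("wave_in")
```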
Fig. 5: Finite State Machine representing the UPI interaction process of the version using the Myo armband with the reduced contraction subset. S0 means that the fingers are open; S1 that a change in the menu is occurring; S2 that the hand is closing (as this process ends, fl is raised); S3 that the fingers have closed as per the selected action; S4 that the hand is reopening (when it finishes, tr is raised).
III. MYO-POWERED INTERFACE WITH A REDUCED CONTRACTIONS SUBSET
The following iteration was created for this work and also uses the Myo armband to recover the user intent. This interface behaves in a similar manner to the one explained in Section II-A, but without the incorporation of the buttons. Additionally, the contraction subset Q is reduced to three poses, utilizing "wave out" to deactivate the action. This was decided to provide an alternative if one of the poses is inaccessible to the patient. Moreover, the possibility to alter the number of supported hand actions was incorporated in order to fulfill each patient's unique necessities. This behaviour is illustrated in Fig. 5.
IV. SYSTEM ARCHITECTURE
A. Galileo Hand
The hardware selected, the Galileo Hand [11,18], consists of a lightweight (under 350 g), affordable (under $350), anthropomorphic, modular and intrinsically actuated hand with a 3D-printed ABS shell. It encases five metal-geared micro DC motors, one for each finger, plus an additional one with an encoder for the thumb. It also has a main control PCB with an ARM Cortex-M4 microcontroller unit (Teensy 3.2), three TI DRV8833 dual motor drivers and a 4D Systems 1.44" µLCD-144-G2 screen.
The five fingers are assembled via waxed strings which, when coiled, close the fingers; surgical-grade elastics allow the articulations to spring back open. The configuration of these artificial digits provides 15 degrees of freedom (DOF) in total, 14 of which correspond to the finger joints to simulate flexion and extension, whilst the remaining DOF rotates the thumb, which is placed at a 15° angle from the palmar plane to emulate both adduction-abduction and opposition-reposition movements. Besides, each finger is operated by a single motor, resulting in 6 degrees of actuation (DOA) in total.
Because of the modularity of its design, it was possible to adapt an external unit to the artificial hand. It consists of a Bluetooth Low Energy (BLE) module, an HM-10, and a secondary MCU, an ATmega328P, which together interface the Myo armband with the prosthetic hand, following a process similar to the one proposed in [20].
B. Feedback current on/off controller
Each finger has an individual on/off controller to perform the flexion/extension movements; the thumb additionally possesses a quadrature encoder and uses a PI position controller for its rotation. This way, the prosthesis is able to perform different predefined gestures, e.g. pointing, power grip, etc. The functionality of each digit is illustrated by the Finite State Machine in Fig. 6.
The system starts with the finger fully extended (in an "open" position), S0. The transition to S1 happens when the command to move the finger, c, is received, activating the motor and causing the finger to start closing. While in this state, the RMS value of the motor current is monitored by the main MCU and, when a predefined threshold, th, is exceeded, the switch to S2 happens. This parameter may differ for each individual finger, as each one has a different size and, therefore, different mechanical factors, so the calibration was carried out experimentally. At this point, the finger is considered to be fully closed and will start to reopen if the o command is issued by the user. The change of state from S3 to S0 happens after a time, te, has passed, which was also determined experimentally, as it differs from the time spent in S1. This disparity occurs because the elastic installed on each finger opposes the coiling process but favors its opposite. It is relevant to note that the closing/opening processes may be interrupted and reversed if the appropriate commands are received.
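A simplified, illustrative simulation of this per-finger controller is given below. The current threshold and opening time are placeholder values; on the real hand they are calibrated experimentally for each finger, and the logic runs on the main MCU rather than in Python.

```python
# Illustrative simulation of the per-finger on/off controller in Fig. 6.
# The RMS-current threshold and opening time are placeholders.
import time

class Finger:
    def __init__(self, current_threshold_A=0.45, open_time_s=0.8):
        self.th = current_threshold_A   # per-finger stall-current threshold (assumed value)
        self.te = open_time_s           # time to fully re-extend the finger (assumed value)
        self.state = "OPEN"             # S0
        self.t_open_start = None

    def update(self, command=None, rms_current=0.0, now=None):
        now = time.monotonic() if now is None else now
        if command == "c" and self.state in ("OPEN", "OPENING"):
            self.state = "CLOSING"                           # S1: motor coils the tendon
        elif command == "o" and self.state in ("CLOSING", "CLOSED"):
            self.state, self.t_open_start = "OPENING", now   # S3: motor releases, elastic re-extends
        elif self.state == "CLOSING" and rms_current > self.th:
            self.state = "CLOSED"                            # S2: current spike -> finger fully closed
        elif self.state == "OPENING" and now - self.t_open_start >= self.te:
            self.state = "OPEN"                              # S0: opening time te elapsed
        return self.state

f = Finger()
print(f.update(command="c"))            # start closing
print(f.update(rms_current=0.6))        # threshold exceeded -> closed
print(f.update(command="o"))            # start opening
```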
V. EVALUATION AND RESULTS
A. Myo Armband Efficiency
Since the array of myoelectric sensors is not fault-free and some actions are misclassified, a confusion matrix was elaborated to corroborate the results shown in [15] and to verify its feasibility for the project. It also served as a means to select which of the Myo armband-supported poses are the most adequate to implement as default actions to operate each interface. The data were obtained with the help of 10 volunteers who had never used the armband before, to avoid biased results. The matrix is shown in Fig. 7, where the actions are numbered as follows: "wave out" (1), "wave in" (2), "fist" (3), "double tap" (4) and "fingers spread" (6), while (5) indicates no operation (NOP), meaning the armband did not detect any pose.
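For reference, a confusion matrix of this kind can be assembled from (intended, detected) trial pairs as in the following sketch; the example trials are placeholders and not the data collected in this study.

```python
# Building a pose confusion matrix from (intended, detected) trial pairs
# (sketch only; the trial data below are placeholders, not the study's results).
import numpy as np

POSES = ["wave_out", "wave_in", "fist", "double_tap", "nop", "fingers_spread"]
IDX = {p: i for i, p in enumerate(POSES)}

def confusion_matrix(trials):
    m = np.zeros((len(POSES), len(POSES)), dtype=int)
    for intended, detected in trials:
        m[IDX[intended], IDX[detected]] += 1   # rows: intended, columns: detected
    return m

def per_pose_accuracy(m):
    # Fraction of attempts of each pose that were classified correctly.
    totals = m.sum(axis=1)
    return {p: m[i, i] / totals[i] for i, p in enumerate(POSES) if totals[i]}

trials = [("fist", "fist"), ("fist", "double_tap"), ("wave_in", "wave_in")]
print(per_pose_accuracy(confusion_matrix(trials)))
```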
The first interface, the one using the buttons, uses the gestures with the lowest success rates to return the hand to its open position, since changing poses was designed to be blocked while performing an action. Additionally, considering that the most frequent false positives for both "double tap" and "fingers spread" were each other, both poses were chosen to fulfill this purpose. Moreover, the remaining actions were assigned in the most natural way; hence, "wave in/out" were chosen to change actions and "fist" remained to activate a gesture.
Fig. 6: Finite State Machine demonstrating the opening/closing behavior of each finger on the prosthesis. S0 shows when the finger is open; S1 when it is closing; S2 when it is closed; S3 when it is opening.
On the other hand, the version using the artificial vision algorithms was not modified from its original design proposed in [14]. The same gestures were kept to interact with the mobile application, since the actions chosen had adequate success rates.
For the version described in Section II-C, the poses with the greatest success rates were mapped to the most useful ADLs, while also considering the naturalness of the mapping (e.g. "fist" to closing the prosthesis and "fingers spread" to opening it).
Regarding the version created for this work, the array of gestures was selected in a similar manner as in the version in Section II-A, but "wave out" was chosen for deactivating the hand gestures. The reason is to avoid the actions with poor success rates and replace them with the most accurate one; this alternative is possible since the menu is blocked during the performance of a gesture.
B. NASA Task Load Index Evaluation
Additionally, to effectively evaluate how user-friendly the interfaces are, a NASA Task Load Index (TLX) test was carried out, not unlike the ones mentioned in [21] and [22]. This scale was selected because a user-testing, post-task evaluation method was required, since post-test evaluation techniques (like SUS) do not permit evaluating different parts of the interface separately. Moreover, methods like SEQ are not as thorough as the one implemented, since fewer categories are considered during testing, providing a more binary result. Additionally, the test chosen has numerous research and industry benchmarks to interpret the scores in context, which can be helpful for future work.
Fig. 7: Confusion matrix evaluating the default classifier of
the Myo armband.
Fig. 8: NASA Task Load Index results for the four interfaces.
This index quantifies the workload involved in operating the device. The following categories are taken into account: mental, physical and temporal demand, performance, the effort needed and the frustration evoked.
The test was administered to 10 volunteers, who were asked to rate each category on a scale from 1 to 20, with a lower score indicating a better result. The subjects consisted of 8 males and 2 females between the ages of 22 and 35. The evaluation process was carried out for all the UPIs previously mentioned, and they were compared to each other to note their strengths and weaknesses and to find out which one is rated best. It consisted of performing four different gestures and utilizing some grasps to interact with the environment, i.e. the subjects were asked to hold a wallet, to hold a bottle and to press a certain key on a computer keyboard. The tasks were repeated three times so that the subjects could properly adapt to the operational mode.
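Each subject thus produces six ratings per interface. One way to aggregate them into a single workload score, assuming the common unweighted ("raw") TLX average, is sketched below with placeholder ratings rather than the study's data.

```python
# Computing a raw (unweighted) NASA-TLX score per subject and the mean per
# interface (sketch; the ratings below are placeholders, not the study's data).
CATEGORIES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def raw_tlx(ratings):
    """Unweighted TLX: average of the six 1-20 ratings (lower is better here)."""
    return sum(ratings[c] for c in CATEGORIES) / len(CATEGORIES)

def interface_mean(subject_ratings):
    scores = [raw_tlx(r) for r in subject_ratings]
    return sum(scores) / len(scores)

example = [
    {"mental": 4, "physical": 3, "temporal": 5, "performance": 6, "effort": 4, "frustration": 2},
    {"mental": 7, "physical": 5, "temporal": 6, "performance": 8, "effort": 6, "frustration": 5},
]
print(round(interface_mean(example), 2))
```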
The results are shown in Fig. 8, where each bar represents the rating of one individual subject. Their means are visualized in Fig. 9, along with their standard deviations. The figures reflect a large discrepancy in all categories between the version using the computer vision algorithms and the others, indicating a poorer interface. Running a factorial Analysis of Variance (ANOVA) test on the results demonstrates a significant difference in comparison with the interface elaborated for this project: the F statistic obtained was 132.4, whereas its critical value is 3.84 for an alpha of 0.05. This rejects the null hypothesis for this main effect, showing a significant inequality between interfaces.
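As an illustration of this kind of statistical comparison, the following sketch runs a one-way ANOVA between two interfaces using placeholder scores; it is a simplified stand-in for the factorial ANOVA reported above, not a reproduction of the measured data.

```python
# One-way ANOVA on per-subject workload scores of two interfaces
# (sketch; the scores are placeholders, not the values measured in Fig. 8).
from scipy.stats import f_oneway

interface_c = [14.2, 13.5, 15.0, 12.8, 14.9, 13.1, 15.4, 14.0, 13.7, 14.6]  # e.g. vision-based
interface_d = [5.9, 6.2, 5.1, 6.8, 5.5, 5.0, 6.3, 5.7, 6.1, 5.4]            # e.g. reduced subset

F, p = f_oneway(interface_c, interface_d)
print(f"F = {F:.1f}, p = {p:.4f}")   # compare F against the critical value for alpha = 0.05
```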
The sEMG pattern recognition version shows the best results in physical and temporal demand, as well as in the effort required to complete a task. Furthermore, the one using the buttons and the Myo together resulted in the least frustrating interface, while the one with the reduced contraction subset trumps the others in performance and mental demand. These three versions excel in different categories, but a clearly superior one is not easily recognized from the previous graphs. Thus, an overall performance statistic was computed (Fig. 10) as the average of all categories for each interface. This showed that they are user-friendly iterations, with results around the upper 70% of the NASA TLX scale.
Fig. 9: Mean of the results gathered from the volunteers, where (a) is the sEMG pattern recognition version; (b) the one using the buttons and the Myo; (c) the version using the camera; and (d) the iteration with the reduced contractions subset.
Fig. 10: Overall performance of the different versions: (a) is the sEMG pattern recognition iteration; (b) the one with the buttons; (c) the one using the computer vision algorithms; and (d) the interface utilizing a reduced contractions subset.
Since the means for the remaining interfaces are still very similar ((a) has a mean of 6.08; (b), 6.1; and (d), 5.75), additional factorial ANOVA tests were run on these iterations with the same alpha value. These evaluations compared the version proposed in this work with the replicated interfaces to verify whether the improvement is relevant. The results do not show a significant difference between them, indicating that the different aspects of the interaction process do not have a relevant effect on user-friendliness.
VI. CONCLUSIONS
The UPI is an important aspect when selecting an assistive device, since it directly affects the interaction process with the prosthesis. For this reason, it is relevant to note whether certain aspects tend to be favored or rejected when creating an interface. This study showed a strong link between the execution time of the actions and their subjective evaluation, as shown by the poor reception of the version in Section II-B and the extensive operation time required to use that prosthetic device. This may stem from the process of taking a picture to select a grasp taking too much time, which became tedious to the users, evoking frustration and demanding more effort to achieve their goals. In addition, the users require a healthy hand to operate the external device with the app, needing a certain physical prowess not possessed by all patients, especially bilateral amputees.
Furthermore, regarding the results in Fig. 9, the superiority of the interface in Section II-C lies in the swift selection of actions. Because there is no menu to interact with, the physical demand is reduced and, hence, the required effort is lower. In contrast, the elevated mental demand and frustration for this rendition are caused by the need to memorize the actions mapped to the Myo poses, which does not come easily to the patients. However, this shows that a visual menu is not strictly necessary for the interface to be user-friendly, which may lead to a simpler, more affordable alternative. Moreover, the low frustration for the iteration shown in Section II-A, despite the sporadic inexactitude of the Myo classification process, may be a result of this interface providing the buttons as an alternative way to navigate the menu.
Furthermore, the mental exertion needed to operate the interface proposed in this work, in Section III, is the lowest, as the user does not have to memorize the mapping of the actions, nor ponder over the use of the buttons. Besides, the contraction subset is limited, so, by reducing the choices, this demand is also reduced. Additionally, the performance of this version proved to be the best among the interfaces. This may be caused by the larger gamut of actions at the patient's disposal and the accuracy of the poses used to return the prosthetic hand to its initial state.
On the other hand, an aspect noted after performing these trials was that a multimodal approach combining mechanical input with the wireless one did not result in a relevant improvement. The same conclusion applies to implementing a system with an extended contraction subset. This demonstrates that a more affordable and simple UPI gives the user a similar experience, but, by reducing the contraction subset, one can restrict the operation mode to fit each individual amputee's unique necessities. This permits the user to employ the prosthesis even if they are unable (or unwilling) to perform certain Myo poses. Furthermore, since the version elaborated for this project showed results similar to the one using sEMG pattern recognition, it is convenient to provide the patient with a larger gamut of actions, yielding a more customized and practical prosthetic device.
The results gathered during this investigation shed light on how some common approaches to interacting with upper-limb prostheses impact the user-friendliness of the interface. This helps to find alternatives to improve the price and the performance of these assistive devices, either by reducing the physical effort required to operate them, by providing alternatives to do so, or by reducing the complexity of the interaction process altogether.
REFERENCES
[1] E. Moutopoulou, G. A. Bertos, A. Mablekos-Alexiou, and E. G.
Papadopoulos, “Feasibility of a biomechatronic EPP Upper Limb
Prosthesis Controller,” in 2015 37th Annual International Conference
of the IEEE Engineering in Medicine and Biology Society (EMBC).
IEEE, 2015, pp. 2454–2457.
[2] C. Miozzi, S. Guido, G. Saggio, E. Gruppioni, and G. Marrocco, "Feasibility of an RFID-based transcutaneous wireless communication for the control of upper-limb myoelectric prosthesis," 2018.
[3] A. Stango, K. Y. Yazdandoost, and D. Farina, “Wireless radio channel
for intramuscular electrode implants in the control of upper limb
prostheses,” in 2015 37th Annual International Conference of the IEEE
Engineering in Medicine and Biology Society (EMBC). IEEE, 2015,
pp. 4085–4088.
[4] G. Hotson, D. P. McMullen, M. S. Fifer, M. S. Johannes, K. D.
Katyal, M. P. Para, R. Armiger, W. S. Anderson, N. V. Thakor, B. A.
Wester et al., “Individual finger control of a modular prosthetic limb
using high-density electrocorticography in a human subject,” Journal
of neural engineering, vol. 13, no. 2, p. 026017, 2016.
[5] W. T. Navaraj, H. Heidari, A. Polishchuk, D. Shakthivel, D. Bhatia, and
R. Dahiya, “Upper limb prosthetic control using toe gesture sensors,”
in 2015 IEEE SENSORS. IEEE, 2015, pp. 1–4.
[6] D. Johansen, C. Cipriani, D. B. Popović, and L. N. Struijk, "Control of a robotic hand using a tongue control system: a prosthesis application," IEEE Transactions on Biomedical Engineering, vol. 63, no. 7, pp. 1368–1376, 2016.
[7] W. Guo, X. Sheng, H. Liu, and X. Zhu, "Mechanomyography assisted myoelectric sensing for upper-extremity prostheses: a hybrid approach," IEEE Sensors Journal, vol. 17, no. 10, pp. 3100–3108, 2017.
[8] M. S. Trachtenberg, G. Singhal, R. Kaliki, R. J. Smith, and N. V. Thakor, "Radio frequency identification: an innovative solution to guide dexterous prosthetic hands," in Engineering in Medicine and Biology Society, EMBC, 2011 Annual International Conference of the IEEE. IEEE, 2011, pp. 3511–3514.
[9] A. Fougner, Ø. Stavdahl, P. J. Kyberd, Y. G. Losier, P. Parker et al., "Control of upper limb prostheses: terminology and proportional myoelectric control, a review," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 20, no. 5, pp. 663–677, 2012.
[10] A. Fougner, E. Scheme, A. D. Chan, K. Englehart, and Ø. Stavdahl,
“Resolving the limb position effect in myoelectric pattern recognition,”
IEEE Transactions on Neural Systems and Rehabilitation Engineering,
vol. 19, no. 6, pp. 644–651, 2011.
[11] J. Fajardo, A. Lemus, and E. Rohmer, “Galileo bionic hand: sEMG
activated approaches for a multifunction upper-limb prosthetic,” in
2015 IEEE Thirty Fifth Central American and Panama Convention
(CONCAPAN XXXV). IEEE, 2015, pp. 1–6.
[12] G. Ghazaei, A. Alameer, P. Degenaar, G. Morgan, and K. Nazarpour,
“Deep learning-based artificial vision for grasp classification in my-
oelectric hands,” Journal of neural engineering, vol. 14, no. 3, p.
036025, 2017.
[13] N. Bu, Y. Bandou, O. Fukuda, H. Okumura, and K. Arai, “A semi-
automatic control method for myoelectric prosthetic hand based on
image information of objects,” in Intelligent Informatics and Biomed-
ical Sciences (ICIIBMS), 2017 International Conference on. IEEE,
2017, pp. 23–28.
[14] J. Fajardo, V. Ferman, A. Muñoz, D. Andrade, A. R. Neto, and E. Rohmer, "User-Prosthesis Interface for Upper Limb Prosthesis Based on Object Classification," in 2018 Latin American Robotic Symposium, 2018 Brazilian Symposium on Robotics (SBR) and 2018 Workshop on Robotics in Education (WRE). IEEE, 2018, pp. 390–395.
[15] M. Cognolato, M. Atzori, D. Faccio, C. Tiengo, F. Bassette, R. Gassert, and H. Müller, "Hand gesture classification in transradial amputees using the Myo armband classifier," in 2018 7th IEEE International Conference on Biomedical Robotics and Biomechatronics (Biorob). IEEE, 2018, pp. 156–161.
[16] A. Phinyomark, R. N. Khushaba, and E. Scheme, "Feature extraction and selection for myoelectric control based on wearable EMG sensors," Sensors, vol. 18, no. 5, p. 1615, 2018.
[17] P. Visconti, F. Gaetani, G. Zappatore, and P. Primiceri, “Technical
features and functionalities of myo armband: an overview on related
literature and advanced applications of myoelectric armbands mainly
focused on arm prostheses,” Int. J. Smart Sens. Intell. Syst, vol. 11,
no. 1, pp. 1–25, 2018.
[18] J. Fajardo, V. Ferman, D. Cardona, G. Maldonado, A. Lemus, and
E. Rohmer, “Galileo hand: An anthropomorphic and affordable upper-
limb prosthesis,” IEEE Access, vol. 8, pp. 1–1, 2020.
[19] J. Fajardo, V. Ferman, A. Lemus, and E. Rohmer, “An affordable open-
source multifunctional upper-limb prosthesis with intrinsic actuation,”
in 2017 IEEE Workshop on Advanced Robotics and its Social Impacts
(ARSO). IEEE, 2017, pp. 1–6.
[20] F. Ryser, T. Bützer, J. P. Held, O. Lambercy, and R. Gassert, "Fully embedded myoelectric control for a wearable robotic hand orthosis," in 2017 International Conference on Rehabilitation Robotics (ICORR). IEEE, 2017, pp. 615–621.
[21] S. G. Hart, "NASA-task load index (NASA-TLX); 20 years later," in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 50, no. 9. Sage Publications, Los Angeles, CA, 2006, pp. 904–908.
[22] D. Andrade, A. R. Neto, and E. Rohmer, "Human prosthetic interaction: integration of several techniques," Simpósio Brasileiro de Automação Inteligente, 2017.