Evaluation of User-Prosthesis-Interfaces for sEMG-Based
Multifunctional Prosthetic Hands
Julio Fajardo 1,2,*, Guillermo Maldonado 1, Diego Cardona 1, Victor Ferman 2 and Eric Rohmer 2


Citation: Fajardo, J.; Maldonado, G.; Cardona, D.; Ferman, V.; Rohmer, E. Evaluation of User-Prosthesis-Interfaces for sEMG-Based Multifunctional Prosthetic Hands. Sensors 2021, 21, 7088. https://doi.org/10.3390/s21217088
Academic Editors: Biswanath Samanta and Ennio Gambi
Received: 24 August 2021; Accepted: 21 October 2021; Published: 26 October 2021
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
1 Turing Research Laboratory, FISICC, Galileo University, Guatemala City 01010, Guatemala; guiller@galileo.edu (G.M.); juandiego.cardona@galileo.edu (D.C.)
2 Department of Computer Engineering and Industrial Automation, FEEC, UNICAMP, Campinas 13083-852, Brazil; vferman@dca.fee.unicamp.br (V.F.); eric@dca.fee.unicamp.br (E.R.)
* Correspondence: julio.fajardo@galileo.edu or julioef@dca.fee.unicamp.br
Abstract: The complexity of the user interfaces and the operating modes present in numerous assistive devices, such as intelligent prostheses, often leads patients to abandon them in their daily living activities. A methodology is proposed to evaluate how diverse aspects impact the workload evoked when unilateral transradial amputees use an upper-limb bionic prosthesis, and thus to determine how user-friendly an interface is. The evaluation process consists of adapting the same 3D-printed terminal device to the different user-prosthesis-interface schemes to facilitate running the tests and avoid any possible bias. Moreover, a study comparing the results gathered from both limb-impaired and able-bodied subjects was carried out to contrast the subjective opinions of both types of volunteers and determine whether their reactions show a significant discrepancy, an assumption made in several other studies.
Keywords: assistive robotics; upper-limb prosthesis; electromyography; user-prosthesis interface
1. Introduction
Several works in the literature present substantial progress in advanced bionic prosthetic devices in recent years, offering people with disabilities many different alternatives and characteristics to improve their condition. This progress includes promising works in haptics [1,2] and diverse methods to recover and interpret the user intent [3–6]. However, little to no effort has been directed into research for providing a simple and easy-to-use user-prosthesis interface (UPI). This aspect is directly related to the patient’s subjective perception of the prosthetic device itself, greatly influencing whether it is used or not. Indeed, it has already been shown that the acceptance of such devices depends more on the low effort required to operate them than on consistently achieving successful grasps [7].
Some methods to operate upper-limb prostheses do not implement a graphical UPI, controlling the device exclusively by analyzing a specific activation profile based on processing electromyography (EMG) signals. Some of these iterations substitute the visual stimuli by utilizing other types of feedback, such as vibrotactile ones [7]. Moreover, others include implants that utilize Bluetooth or radio channel waves to communicate with them [3,8,9]. These versions use wireless charging to function and regulate the power
dissipation inside a safe range to avoid damage to the user’s skin tissue.
On the other hand, some approaches use brain-machine interfaces (BMI) to control
these devices, eliminating any visual stimulus to interact with the artificial limb and resem-
bling the way limbs are usually operated. Newer methodologies are based on high-density
electrocorticography (ECoG), which allows the patient to control each finger individually
through an adequate re-innervation process [4]. However, these interfaces require very
intrusive and expensive procedures. Other projects utilize interaction processes that do
not seem intuitive to the users, employing more creative approaches to analyze the EMG
signals by using other members to drive the movements of the prosthetic limb, as shown
in [5,6], which use the toes and the tongue, respectively. Such techniques result in viable
alternatives, especially for bilateral amputees. However, such methodologies may not
be the best option for unilateral transradial amputees since they affect how some typical
activities of daily living (ADLs) must be carried out.
Alternatively, the majority of sophisticated research assistive devices are based on
multimodal approaches. These methodologies usually consist of taking a set of predefined
and well-known EMG features and complementing them with information from other kinds
of sensors, such as inertial measurement units (IMUs), micro-electromechanical systems (MEMS) microphones, mechanomyography (MMG), or force myography (FMG), showing a substantial improvement in classification rates and bi-manual performance [10–13]. This
approach has been used successfully to improve the user control of prosthetic devices in
different manners, such as using a multimodal system with Radio Frequency Identification
(RFID) tags on specific objects. In this stance, the cognitive effort required to operate an upper-limb prosthetic device is reduced, and some of the well-known issues of EMG techniques, such as the limb position effect, are addressed [14–16]. Other stances have been taken into
account using the multimodal approach, such as utilizing voice-control, in tandem with
visual feedback through a small embedded touchscreen LCD, providing the users with
other alternatives to control their prosthetic device in different manners [17,18].
Finally, other studies have been carried out to increase upper-limb prostheses’ func-
tionality, combining surface EMG (sEMG) and deep-learning-based artificial vision systems.
This approach works by associating a subset of predefined objects to a list of specific grasps
based on the target’s geometric properties, which are gathered by different types of cam-
eras. Such classification processes are fulfilled via convolutional neural networks (CNN)
employing customized image object classifiers.
This work focuses on a methodology to evaluate how different UPIs for transradial upper-limb prostheses influence the user’s workload and how user-friendly they are. Several studies have evaluated specific prosthetic devices with unimpaired subjects only [19–22]. Such evaluations are subjective, and some assumptions are made regarding limb-impaired users that may not always be accurate. These evaluation processes may therefore pose a practical and moral dilemma, especially when considering the interaction process with assistive devices. Hence, an extension of previous works [22,23], in which the results of the evaluation process were collected only with unimpaired subjects, was carried out. This work adds results from an evaluation process with limb-impaired volunteers and compares the results of both groups. In this way, we verify that the results obtained from both are strongly related, supporting the viability and validity of such an assumption.
The evaluation process was achieved by employing a customized EMG wireless module (Thalmic Labs’ Myo armband) to gather user intent, facilitating the device’s installation independently of the user, and then comparing the retrieved results on the impact that certain aspects may have on the interaction process. The module was selected for operating the different UPIs throughout this work since it is an affordable and viable replacement for medical-grade sensors (it processes and classifies sEMG signals by itself), even with subjects with different levels of transradial amputation [4,24–26]. Its small subset of self-classified contractions can be adapted to perform a greater number of gestures and grips. These features facilitate its utilization and the replication of all the interfaces, since its installation process is more comfortable than wired alternatives or implants, removing any possible bias regarding the sensors used to gather the users’ intent so that only the UPIs are evaluated. The NASA Task Load Index (TLX) scale was employed to estimate the workload evoked by the considered UPIs. Besides, a survey capturing whether each UPI was perceived as user-friendly was applied, and the results were compared using a multifactorial ANOVA analysis in order to determine how user-friendly an interface is.
The rest of this work is structured as follows: Section 2 elaborates on the state of the art of the existing methods to evaluate UPIs. Section 3 describes how the whole system is integrated and elaborates on the details of the replicated UPIs for its evaluation. Section 4 describes the evaluation processes and their interpretations. Finally, the last section, Section 5, deals with the impact of the results.
2. State of the Art
Since the development of UPIs has not been a focus in commercial or academic
works, the ones that center themselves in analyzing the interaction between the user and
the artificial limb are also scarce and usually focus on gathering the user intent, such as
comparing the efficiency of EMG methods with force, position, tactile, or even joystick
controls [27,28]. Nevertheless, most of these results conclude a non-significant difference
between them or the EMG one’s superiority. Other methodologies achieve enhancements to
collect that information by using hybrid systems, such as using near-infrared spectroscopy
(NIRS) [29], or like the ones juxtaposed in [30]. On the other hand, works like [31] delve
into the impact of short-term adaptation with independent finger position control and the
relevance of the real-time performance of the prosthetic control and its “offline analyses”.
Nonetheless, none of the previously mentioned studies provide details on assessing interfaces in terms of how the user interacts with the artificial limb through the selected control. However, some works have centered on comparing two primary interfaces, pattern recognition (PR) and direct control [32–34]. Some of them even considered active users’ subjective opinions and the objective ones from therapists for a perception analysis of multi-functional upper-limb prostheses [35]. This resulted in general disapproval of the conventional control for switching between actions and of the unreliability of the pattern recognition algorithm altogether (even though its speed was praised). Nonetheless, a similar approach has not been taken for a more extensive array of interfaces (to the best of the authors’ knowledge).
Furthermore, regarding the tools that can be used to evaluate assistive robotics, one
can find the Psychosocial Impact of Assistive Devices Scale (PIADS), whose purpose is “to
assess the effects of an assistive device on functional independence, well-being, and quality
of life”. This reflects the self-described experience of the users and may provide insight
on the long-term use or disuse [36]. Another method that has been utilized to evaluate
assistive robotics is the use of the Human Activity Assistive Technology (HAAT) model,
an outline of clinically relevant aspects that need to be considered in the practice. This
method provides “enhanced access and application for occupational therapists, but poses
challenges to clarity among concepts” [37]. In addition to those, the Southampton Hand
Assessment Procedure (SHAP) also helps to identify which grips are better suited for
specific prosthetic designs, as it was created to measure the operating range of a hand.
However, it has been criticized for some inconsistencies during the assessment of artificial
hands and the lack of a measure for their efficiency [38]. Another tool commonly employed
is the NASA Task Load Index scale, used to derive an estimate of the workload of different
types of tasks and simulations [39]. Its implementation has mostly been centered on
quantifying the subjective perception of interface designs [40], some of them involving
assistive robotics [11,19].
3. Materials and Methods
3.1. Galileo Hand
The Galileo Hand (shown in Figure 1) was the prosthetic device selected to validate this
work. This prosthesis is an open-source and intrinsic device that encases five metal-geared
micro DC motors to drive the under-tendon-driven (UTD) [41,42] mechanism of each finger,
plus an additional DC motor with a quadrature encoder attached to perform the thumb
rotation. This device consists of an anthropomorphic, modular, and intrinsic 3D-printed
ABS shell; its weight and fabrication cost are under 350 g and USD 350, respectively.
Its main controller PCB is based on the ARM Cortex-M4 microcontroller unit (MCU),
consisting of the PRJC Teensy 3.2 development board in tandem with three TI DRV8833
dual motor drivers and one 4D-Systems’ 1.44-inch µLCD-144-G2 screen used to present visual
feedback from the UPIs to the users [18,23].
Figure 1. Galileo Hand: anthropomorphic, 3D-printed upper-limb prosthesis.
Each finger is assembled using waxed strings which, when coiled, close the finger individually. This process is achieved by motors installed on each finger, providing 5 degrees of actuation (DOA), plus an additional one for the thumb’s rotation. These mechanisms are also made up of surgical-grade elastics that allow the fingers’ articulations to spring back open in a UTD machine model. This configuration provides a total of 15 degrees of freedom (DOF): 1 for the rotation of the thumb and 14 comprised by each joint in the fingers to simulate flexion and extension (three for each digit, except for the thumb, which only has two links and two joints). In addition, the thumb is set at a 15° angle from the palmar plane to emulate both adduction-abduction and opposition-reposition movements.
3.2. Software
3.2.1. Adapting the Myo Armband
Since the proposed solution incorporates the Myo armband to capture the muscles’ processed electric signals, a Bluetooth Low Energy (BLE) module, the HM-10, was required to transmit them to the Galileo Hand as interpreted poses. Utilizing the MyoBridge library and adapting the hardware according to what was proposed in [43] allows for a successful exchange between the components. The gathered information is later transferred to an ATmega328P (secondary microcontroller unit) and, subsequently, to the main MCU to drive each DC motor; this is illustrated in Figure 2.
Figure 2. System block diagram showing the embedded controller architecture and the integration with external modules.
The complementary MCU is in charge of acquiring the user intent, either as raw EMG
signals or as Myo-specific poses. Consequently, it converts them into packages transmitted
via UART to the Galileo Hand’s central controller. The HM-10’s firmware was flashed with
the MyoBridge program, using RedBearLab’s CCLoader as an aide for this procedure to
function aptly. This way, the armband will be able to connect with the BLE module and
transmit the EMG signals correctly. This process was carried out for most of the interfaces,
except for the one using an Android app, since the Myo can be connected, by default,
directly to the mobile device.
3.2.2. System Integration
Packet reception is handled using UART interrupts. Once the packet is received,
it is evaluated, and action is taken based on the content of the transmission. If the message
contains a Myo-specific pose, it triggers transitions between Finite State Machine (FSM)
states, described in detail in Section 3.6, used to implement the different UPIs to control the
prosthetic device. Suppose the desired action is to alter the current selection on the screen.
In that case, a notification via another UART channel is sent to the independent µLCD’s microcontroller to perform the change it was ordered to and, thus, present visual feedback
to the user. On the other hand, if the message contains raw EMG signals, the device fills up
two circular buffers of signals collected by the electrodes placed near the palmaris longus
and the extensor digitorum muscles (for unilateral below-elbow disarticulations). This way,
customized methods to interpret the user intention can be used to adapt the bracelet to the
prosthesis, such as works presented in [26,44].
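As an illustration only (the paper does not publish its firmware; the packet tags, buffer sizes, and function names below are assumptions), a packet-dispatch routine of this kind could look as follows in C++, routing a classified Myo pose to the UPI state machine or pushing raw samples into the two per-channel circular buffers:

```cpp
#include <array>
#include <cstdint>
#include <cstddef>

// Hypothetical packet tags; the actual framing protocol is not given in the paper.
enum class PacketType : uint8_t { MyoPose = 0x01, RawEmg = 0x02 };

// Fixed-size circular buffer for one sEMG channel.
template <std::size_t N>
class CircularBuffer {
 public:
  void push(int16_t sample) {
    data_[head_] = sample;
    head_ = (head_ + 1) % N;            // overwrite the oldest sample when full
    if (count_ < N) ++count_;
  }
  std::size_t size() const { return count_; }
 private:
  std::array<int16_t, N> data_{};
  std::size_t head_ = 0, count_ = 0;
};

// Two channels: flexors (palmaris longus) and extensors (extensor digitorum).
CircularBuffer<256> flexorBuf, extensorBuf;

// Placeholder: forwards a classified Myo pose to the UPI's FSM (Section 3.6).
void onPoseReceived(uint8_t /*pose*/) {}

// Called once a complete packet has been framed by the UART receive interrupt.
void dispatchPacket(PacketType type, const uint8_t* payload, std::size_t len) {
  switch (type) {
    case PacketType::MyoPose:
      if (len >= 1) onPoseReceived(payload[0]);
      break;
    case PacketType::RawEmg:
      // Payload assumed as interleaved little-endian 16-bit samples: flexor, extensor, ...
      for (std::size_t i = 0; i + 3 < len; i += 4) {
        flexorBuf.push(static_cast<int16_t>(payload[i] | (payload[i + 1] << 8)));
        extensorBuf.push(static_cast<int16_t>(payload[i + 2] | (payload[i + 3] << 8)));
      }
      break;
  }
}
```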
3.3. Control Strategy
Once the user’s intent has been received, the high-level controller (HLC) uses this
information to perform the necessary action that each finger must take to achieve predefined
gestures and grips available to the user. At a low level, each finger functions with
an individual hybrid control strategy for the flexion and extension processes, except for
the thumb, which also has a quadrature encoder to implement a PI position controller to
perform its rotation. Since the armature current
ia
of each DC motor is the only feedback
signal measured from the system, a simple current on-off controller is implemented to
perform the flexion process. In addition, a robust full-state observer is utilized to estimate
the angular velocity and displacement,
θ
, of the gearhead shaft of each motor [
42
]. Thus,
a robust state feedback controller is used to perform the extension process. This way,
the prosthesis can perform the different predefined grasps, i.e., power and lateral grips,
hook, etc. The functionality for each digit is illustrated in the Finite State Machine in
Figure 3.
Figure 3. Finite State Machine demonstrating the opening/closing behavior of each finger on the prosthesis. S0 indicates that the finger is entirely open; S1 represents the flexion process triggered by the command c; S2 indicates the finger is completely closed (since i_a > th). Additionally, S3 represents the extension process triggered by command o until θ ≈ θ0.
The prosthesis starts with all its fingers fully extended (in an “open” or “rest” position, at θ ≈ θ0), represented by the state S0. Thus, when the command to move a particular finger, c, is received from the high-level controller, the transition to the state S1 happens, activating the motor and causing the finger’s flexion. In this state, the RMS value of the
armature current, i_a, is monitored continuously and, when a predefined threshold related experimentally to the fingertip wrench, th, is exceeded, the transition to S2 happens. This
parameter differs for each finger since each has discrepant mechanical factors due to their
different size and length of the strings and elastics. Therefore, a proper calibration was
made experimentally.
The finger is considered fully closed at this state and will start the extension process, opening the finger, if the o command is issued by the HLC, as shown by the transition from states S2 to S3. Finally, the transition from states S3 to S0 happens after the angular displacement, θ, is approximated to its initial value θ0 = 0. This strategy was adopted
since the elastic installed on each finger opposes itself to the coiling process but favors the
unfurling one; therefore, ensuring that the motor shaft’s angular displacement is equal
during both processes is essential. Finally, it is relevant to note that the closing/opening
procedures may be interrupted and reversed if the appropriate commands are received.
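A minimal sketch of this per-finger state machine is given below (names and thresholds are assumptions; this is not the authors’ code). It mirrors the S0 to S3 cycle in Figure 3: a close command c starts flexion, exceeding the current threshold th marks the grasp as complete, an open command o starts extension, and the estimated shaft angle returning to θ0 = 0 marks the finger as open again. Both ongoing processes can be reversed by the opposite command, as stated above.

```cpp
enum class FingerState { Open /*S0*/, Flexing /*S1*/, Closed /*S2*/, Extending /*S3*/ };

struct Finger {
  FingerState state = FingerState::Open;
  float currentThreshold = 0.4f;  // th, calibrated per finger (RMS armature current, A)
  float theta = 0.0f;             // estimated gearhead angle from the observer (rad)

  // Called periodically with the latest RMS armature current and angle estimate.
  void update(bool closeCmd, bool openCmd, float iaRms, float thetaEst) {
    theta = thetaEst;
    switch (state) {
      case FingerState::Open:        // S0: wait for the close command c
        if (closeCmd) state = FingerState::Flexing;
        break;
      case FingerState::Flexing:     // S1: motor coils the tendon
        if (openCmd) state = FingerState::Extending;           // reversible
        else if (iaRms > currentThreshold) state = FingerState::Closed;
        break;
      case FingerState::Closed:      // S2: grasp complete, wait for o
        if (openCmd) state = FingerState::Extending;
        break;
      case FingerState::Extending:   // S3: elastics unfurl the finger
        if (closeCmd) state = FingerState::Flexing;             // reversible
        else if (theta <= 0.0f) state = FingerState::Open;      // theta ~ theta0 = 0
        break;
    }
  }
};
```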
3.4. Gestures Adapted to the Prosthesis
The purpose of this subsection is to detail and clarify the actions at the patients’
disposal. The selected grasps are the following: “Close” (flexion of all the fingers and
rotation of the thumb, power grasp), “Hook” (the thumb is the only finger extended, it is
also adducted), “Lateral” (coiling of the strings of all fingers and the thumb is abducted),
“Pinch” (flexion of the index and thumb, plus abduction of the thumb, precision grasp),
“Point” (all motors are actuated, except for the index), “Peace” (all fingers are closed,
except for the index and the middle finger), “Rock” (flexion of all fingers, but the index
and the little finger; thumb adducted), “Aloha” (the index, middle and annular fingers are
flexed), “Three” (all motors are actuated except for the index, middle and annular fingers),
“Four” (similar to the previous gesture, but with the little finger extended), “Fancy” (the
only extended finger is the little finger, with an adducted thumb) and “Index” (where
the only flexed finger is the one giving the name to the action). Some of these gestures
are illustrated in Figure 4. An important note is that some of the actions installed are
for demonstrative purposes only. Other grasps may substitute some of the gestures for a
more personalized approach or even reduce the number of actions available if they are
not needed.
Figure 4. The image shows the Galileo Hand grabbing the objects used in the trials. On the upper left, the hand is holding a “water bottle”; on its right, a small plastic “ball”; underneath, from left to right, holding a “wallet” and “pointing”, respectively.
Now, the supported gestures for each evaluated interface will be enumerated. The tra-
ditional pattern recognition interface can complete the first four actions from the previous
list. On the other hand, the version in Section 3.6.3, the one using the app, can fulfill the
same as the previous iteration, plus “Pinch” and “Peace”. Finally, the rest of the interfaces
allow the user to select any hand actions available on the menu.
3.5. NASA Task Load Index
The NASA-TLX test was used to measure and analyze the workload evoked by each
interface under evaluation, as done in [11,19,22,40]. This test was selected to evaluate
the impact that each UPI has on the users’ workload. Post-test evaluation techniques such as SUS do not permit evaluating different parts of the interface separately, and methods such as SEQ do not consider many different categories during testing, providing more binary results. The NASA-TLX scale was therefore selected because it applies a post-task evaluation for each interface that takes into account six different workload categories: mental, physical, and temporal demand, performance, the effort needed to operate the interface, and the frustration evoked. In this work, the index quantifies the workload of operating a prosthetic device using a given UPI; besides, it is also considered a more comprehensive test to evaluate user interaction, with well-known research and industry benchmarks to interpret scores in context, which can be helpful for future works.
In addition, a binary response survey was used to determine if a user perceived an
interface as user-friendly or not, intending to compare its results with the workload evoked
by each UPI. Finally, a multifactorial ANOVA analysis is performed to determine how
user-friendly an interface is according to the results obtained from the tests.
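For reference, the overall score and the test statistic used later in Section 4 follow the standard definitions (this is a generic restatement, assuming the category weights w_i come from the ranking described in Section 4.2, not an additional result):

\[ \mathrm{TLX} = \frac{\sum_{i=1}^{6} w_i\, r_i}{\sum_{i=1}^{6} w_i}, \qquad F = \frac{\sum_{j=1}^{k} n_j\,(\bar{x}_j - \bar{x})^2 / (k - 1)}{\sum_{j=1}^{k} \sum_{i=1}^{n_j} (x_{ij} - \bar{x}_j)^2 / (N - k)}, \]

where r_i is the rating given in workload category i, x_{ij} is the i-th score in group (interface) j, k is the number of groups, n_j the group sizes, and N the total number of observations; the null hypothesis of equal group means is rejected when F exceeds the critical value at the chosen significance level α.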
3.6. Experiment Design
Several interfaces were chosen for evaluation to determine the most relevant aspects
for user-friendly interaction, affecting the workload of UPIs. The selection process was
carried out by analyzing different interaction processes and considering the physical
characteristics that correspond to traditional UPI solutions; similar price ranges were also considered. Thus, the same terminal device was adapted to work with each UPI to avoid hardware-selection bias when conducting the experiments. The different UPIs evaluated for this work are
described hereunder.
3.6.1. Multimodal Approach Using Buttons and Myo Interface
Based on the work presented in [18], this interface operates either by receiving gestures from the Myo armband or push buttons installed on the hand’s dorsal side to select a grip from the graphical menu or to perform an action. The functionality of this UPI is shown in the FSM in Figure 5. Both the buttons, B = {b0, b1}, and the muscle contractions subset, Q = {q0, q1, q2, q3}, corresponding to Thalmic Labs’ “Myo poses”, are used to operate the prosthesis. Performing “wave out”, q0, and “wave in”, q1 (hand extension and flexion, respectively), as well as pressing b0 and b1, causes a forward or backward switch of the selected element in the menu displayed on the screen (shown in Figure 6); this process is represented by the state S1. Besides, S0 indicates that the fingers on the prosthesis are fully extended, in their default initial state, while in S3, the hand is currently performing the chosen grip. An important aspect to note is that, whilst in this state, changing the menu’s selection is disabled for the user, as the motor activation processes’ timing differs between actions and could lead to wrong finger positioning.
Figure 5. Finite State Machine showing the behavior of the interface using buttons and the Myo to operate. S0 indicates that the hand is completely open; S1, that there was a change in the selected grip; S2, that the selected grip is being performed (when it is completed, the flag f1 is lifted). In addition, S3 represents that the hand is currently enacting the chosen gesture, while S4, that the fingers are opening (a process that informs it is finished by lifting the flag f2).
Figure 6. Galileo Hand’s graphical menu (left) and the prosthesis performing the action “Close” (right).
Furthermore, S2 and S4 indicate that the prosthetic device is currently closing or opening its fingers, respectively. These procedures can be interrupted by each other if a correct command is received. In addition to that, to execute an action, q2, “fist”, needs to be performed by the user. At the same time, both “double tap” (two swift, consecutive contractions) and “fingers spread” are the contractions q3 that deactivate the action. It was decided to use both gestures to deactivate the user’s selected actions according to the results shown in Section 4. Finally, the last elements in the FSM representing the interface’s behavior are the flags f1 and f2. The first one is triggered when all the fingers have reached their desired position when performing an action, while the second triggers when all the fingers have returned to their initial position, θ0.
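The following compact sketch (assumed type and member names; twelve menu entries as in Section 3.4; not the authors’ code) shows how this menu-driven UPI can be expressed as a state machine over the inputs B = {b0, b1} and Q = {q0, ..., q3}, following Figure 5:

```cpp
enum class UpiState { Idle /*S0*/, Selecting /*S1*/, Closing /*S2*/, Acting /*S3*/, Opening /*S4*/ };

struct MenuUpi {
  UpiState state = UpiState::Idle;
  int selected = 0;                        // index into the grip menu
  static constexpr int kNumGrips = 12;

  // next/prev: b0/b1 or wave-out/wave-in; act: "fist" (q2); release: q3; f1/f2: completion flags.
  void step(bool next, bool prev, bool act, bool release, bool f1, bool f2) {
    switch (state) {
      case UpiState::Idle:
      case UpiState::Selecting:
        if (next)      { selected = (selected + 1) % kNumGrips; state = UpiState::Selecting; }
        else if (prev) { selected = (selected + kNumGrips - 1) % kNumGrips; state = UpiState::Selecting; }
        else if (act)  { state = UpiState::Closing; }           // start performing the chosen grip
        break;
      case UpiState::Closing:
        if (release)   state = UpiState::Opening;               // interruptible
        else if (f1)   state = UpiState::Acting;                // all fingers reached their targets
        break;
      case UpiState::Acting:                                    // menu changes are disabled here
        if (release)   state = UpiState::Opening;
        break;
      case UpiState::Opening:
        if (act)       state = UpiState::Closing;               // interruptible
        else if (f2)   state = UpiState::Idle;                  // fingers back at theta0
        break;
    }
  }
};
```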
3.6.2. Myo-Powered Interface with a Reduced Contractions Subset
This interface works similarly to the multimodal one explained in Section 3.6.1, i.e., se-
lecting the desired action in a menu and performing it with an “activation pose”. The main
difference is that the subset, Q = {q0, q1}, is reduced to only two contractions. In this way, it imitates the iteration proposed in [22,42], by utilizing “wave in” to act and “wave out” to select and deactivate a grip, as illustrated in Figure 7. This simplified subset
provides a viable alternative if some of the Myo poses are unperformable by the patient.
Additionally, the buttons are absent for this UPI to help accommodate a reliable solution to
bilateral amputees.
Figure 7. Finite State Machine representing the UPI interaction process from the version with the reduced contraction subset. S0 indicates that the hand is completely open; S1, that there was a change in the selected grip; S2, that the selected grip is being performed (when it is completed, the flag f1 is lifted). In addition, S3 represents that the hand is currently enacting the chosen gesture, while S4, that the fingers are opening.
3.6.3. Multimodal Approach Based on Object Classification and Detection
This version uses a mobile application to control the prosthesis. The device possesses
a camera facing the palm, which takes pictures of the objects to be interacted with and
suggests a grasp. Alternatively, the photos can be taken with the mobile device’s photo-
graphic equipment. By performing Myo’s poses, the user can either accept, reject or cancel
the recommended grips provided by the app’s detection algorithm. This process uses a
bag-of-words computer vision algorithm to assign a label to the detected object, which is then associated with a grip.
This is a replica of the one used in [45].
The interface’s behavior is described as shown in Figure 8, where the set of contractions, Q = {q0, q1, q2, q3}, represents the Myo poses used, together with the states of the FSM: “fist”, “fingers spread”, “wave in” and “wave out”, respectively. The state S0 denotes that the prosthetic device is in its rest position with all its fingers entirely open. Simultaneously, the UPI stays idle until the user performs the contraction q0 to trigger a transition to the state S1, where the system takes a picture of the object with which the user wants to interact; the picture is then classified by the CNN algorithm running on a smartphone until a valid label l is defined. Thus, the label is validated when the classification certainty reaches a heuristic threshold that triggers the transition to the state S2. If the CNN classification does not return a valid label, the system returns to the initial state S0 upon a predefined timeout t. In the same state, S2, when q1 is performed, the transition indicates that another photo needs to be taken, canceling the action selection process. The contraction q2 accepts the algorithm’s suggestion, while q3 rejects it, so the system proposes another grasp or gesture. The text and animations of the suggested grip are provided as visual feedback via the LCD screen, as shown in Figure 9.
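A hedged sketch of the label-validation step is shown below (the class list, threshold value, and grip names are assumptions for illustration; the actual app from [45] is not reproduced here): a grip is only suggested when the classifier’s confidence exceeds a heuristic threshold, otherwise the UPI falls back to S0 on timeout.

```cpp
#include <optional>
#include <string>

struct Detection {
  std::string label;   // top-1 class returned by the CNN
  float confidence;    // classification certainty in [0, 1]
};

// Returns the suggested grip for a detection, or no suggestion if the
// certainty is below the heuristic threshold (the UPI then times out to S0).
std::optional<std::string> suggestGrip(const Detection& d, float threshold = 0.75f) {
  if (d.confidence < threshold) return std::nullopt;
  if (d.label == "bottle" || d.label == "cup")  return "Close";    // power grasp
  if (d.label == "key"    || d.label == "card") return "Lateral";
  if (d.label == "coin"   || d.label == "pen")  return "Pinch";    // precision grasp
  return "Close";                                                   // default fallback
}
```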
Figure 8. Behavior of the UPI from the version based on object recognition. S0 indicates that the prosthesis is completely open; S1, that a picture is being taken; S2, that a label is being determined (when this process is finished, the flag l is lifted; if not, timeout t is raised); and S3, that the action is being executed.
Figure 9. On the left, the visual feedback presented to the user on the Android app. Beside it is the animation of the grip, which is shown to the user via the Galileo Hand’s LCD screen.
3.6.4. sEMG Pattern Recognition
Based on [17], this interface consists of a system that, utilizing the Myo’s pattern recog-
nition methods, maps each of the predefined “Myo poses” to a grip to be performed.
So, the prosthesis executes an action after receiving the interpreted contraction from
the armband.
The layout is defined as follows: “fist” and “fingers spread” to close and open all the
fingers, respectively; “wave in” to a pointing position; “wave out” to carry out a lateral
grasp; and “double tap” to a hooking stance. The gestures were selected according to their
usability in ADLs, an aspect that was weighed against the Myo’s success rate for each pose when assigning the actions.
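For illustration, this direct-control layout reduces to a simple lookup from the classified pose to a hand action; a minimal sketch follows (the enumeration names are assumed, the mapping itself is the one described above):

```cpp
enum class MyoPose { WaveIn, WaveOut, Fist, FingersSpread, DoubleTap };
enum class HandAction { Point, Lateral, Close, OpenAll, Hook };

// Direct-control layout described in Section 3.6.4.
HandAction mapPoseToAction(MyoPose p) {
  switch (p) {
    case MyoPose::Fist:          return HandAction::Close;    // power grasp
    case MyoPose::FingersSpread: return HandAction::OpenAll;  // open all fingers
    case MyoPose::WaveIn:        return HandAction::Point;    // pointing position
    case MyoPose::WaveOut:       return HandAction::Lateral;  // lateral grasp
    case MyoPose::DoubleTap:     return HandAction::Hook;     // hooking stance
  }
  return HandAction::OpenAll;
}
```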
4. Results and Discussion
4.1. Myo Armband Efficiency
The myoelectric classifier embedded in the Myo armband is not fault-free; some
contractions are misclassified at times, even for people without muscle damage. Therefore,
a confusion matrix was elaborated to corroborate the results shown in works such as [24]
and to verify its reliability in gathering the user intention. This analysis also served to select
which of the Myo armband-supported poses are the most adequate to be implemented as
default contractions to operate each interface. Therefore, depending on the amputation
level, the Myo will not correctly classify all contractions for limb-impaired subjects.
The data was obtained in two stages, one for the able-bodied subjects, done in [22],
and another for the unilateral limb-impaired ones (as depicted in Figure 10). The first was
composed of 8 males and 2 females between the ages of 22 and 35, while the latter, by
2 male volunteers of 30 and 55 years old, as shown in Table 1.
Figure 10. A limb-impaired volunteer testing the UPI with the reduced contraction set of muscles.
Table 1. List of volunteers used in the experiment.
No. Limb-Impaired Prosthesis User? Age Gender
1 No No 25 M
2 No No 27 M
3 No No 24 M
4 No No 24 F
5 No No 23 F
6 No No 23 M
7 No No 23 M
8 No No 26 M
9 No No 22 M
10 No No 35 M
11 Yes Yes 55 M
12 Yes No 30 M
To avoid biased results, these volunteers had no experience whatsoever with the
Myo armband. Even though a more comprehensive range of ages may provide more
accuracy to a generalized population, the musculature differences tend to be minimal,
as the amputation damages it in a similar manner [27]. So, able-bodied subjects were asked
to perform every Myo pose in its default roster 50 times; while noting what the classifier
detected each time. The resulting matrix is shown in Figure 11, where the default MYO
poses are numbered as follows: (1) “wave out”, (2) “wave in”, (3) “fist”, (4) “double-tap”,
(5) indicates a no-operation (NOP) meaning the armband did not detect any pose and (6)
“fingers spread”. According to the results gathered by this experiment, the Myo poses were mapped to the operation actions in different manners for the different interfaces. These
results do not include the tests from the two impaired volunteers due to the poor accuracy
obtained with some contractions performed by non-disabled people (specifically, “fingers
spread” and “double-tap”). This was also reflected in the constant misclassification of
these contractions from the limb-impaired ones. Therefore, the data gathered from this type of volunteer only concerns the interface that obtained the closest overall performance to the UPI described in Section 3.6.4, according to the able-bodied subjects’ results.
The total accuracy achieved by the default classifier of the bracelet was about 87.7%.
As was expected, NOP was always classified correctly. On the other hand, as shown in
Figure 11, three gestures (“wave in”, “wave out” and “fist”) reached acceptable perfor-
mance metrics in terms of accuracy (diagonal cells), precision (the column on the far right),
and recall (the row at the bottom). In this way, for the interface employing a multimodal approach using the Myo bracelet in tandem with buttons, “wave in/out” were selected to naturally choose between a set of predefined gestures, while “fist” activates the selected gesture on the prosthetic device. The remaining gestures, the ones with the least
successful rates (“finger spread” and “double-tap”), were selected to return the prosthesis
to the rest position. Thus, the high error rate of these gestures cannot influence the UPI’s
performance since the user cannot select or change a gesture while the prosthesis is acting.
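For clarity, the per-pose metrics read off the confusion matrix follow the usual definitions,

\[ \text{accuracy} = \frac{\sum_i C_{ii}}{\sum_{i,j} C_{ij}}, \qquad \text{precision}_i = \frac{C_{ii}}{\sum_j C_{ji}}, \qquad \text{recall}_i = \frac{C_{ii}}{\sum_j C_{ij}}, \]

where C_{ij} counts the trials in which pose i was performed and pose j was detected.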
Figure 11. Confusion matrix evaluating the default classifier of the Myo.
Moreover, the UPI that employs deep learning-based artificial vision algorithms was
replicated precisely from work proposed in [45]. Since this approach utilizes four muscle contractions to operate the interface, the same (“wave in”, “wave out”, “fist” and “fingers spread”) were selected to interact with the prosthesis and the Android mobile application. Thus, the gesture with the lowest performance metrics was selected to deactivate the prosthesis, returning the fingers to the rest position. Finally, for the UPI based on sEMG pattern recognition (Section 3.6.4), the contractions with the highest performance rates were mapped to the most useful grips according to the user’s preferred ADLs. This is considered a natural mapping that facilitates the operation of the prosthetic device, where “fist” activates the power grip action, “fingers spread” opens the prosthesis, “wave in” could be used for customized grips, and “double tap” for a rarely used gesture.
Regarding the UPI based on a reduced set of contractions (Section 3.6.2), the subset was selected by taking into account the contractions with the best performance (accuracy, precision, and recall). In this way, “wave in” was selected to
activate predefined grips and gestures, while “wave out” was chosen to select between
the different predefined grips and gestures and also to return to the open position. Thus,
in this iteration, the system avoids using the actions with low success rates, replacing them with the most accurate ones, ensuring better performance for the limb-impaired users, as well as increasing the functionality of the prosthetic device while performing only
these contractions. This alternative is possible since the menu is blocked during a gesture’s
performance, so both hand extension and flexion are available to return the hand to its
default state.
4.2. NASA Task Load Index Evaluation
The first evaluation process consisted of asking the non-disabled volunteers to rate
each of the UPIs mentioned above on each category on a scale divided into 20 intervals,
with a lower score indicating a better result. The test consisted of performing different
gestures and utilizing different grasps to interact with commonly encountered everyday
objects. The trials were held after providing the subjects with a training period (a couple of minutes, as indicated by each participant) for them to become accustomed to the interfaces; this was to avoid any bias regarding the order in which the interfaces were tested, which was done randomly, not unlike [27]. The actions the volunteers were asked to perform were:
to hold a small plastic ball, a water bottle, and a wallet, as well as to press a specific key
on a laptop’s keyboard. These were selected among the most common grasps in ADLs that correspond to the Cutkosky grasp taxonomy [46]. The tasks, as mentioned earlier,
were repeated thrice so that the subjects could adequately adapt to each operational mode.
Additionally, the performances of these actions can be easily evaluated, as one can visualize
the output of the keyboard on a computer screen, and the grips should hold the objects
firmly. In addition, since the purpose of this study is to evaluate the workload of the user
interface only, the terminal device was not attached to the volunteers’ limb. In this way,
both the weight of the prosthesis and the objects do not directly influence the physical
demand evoked by each user-prosthesis interface.
This assessment was carried out with the same volunteers as the previous experiment.
Considering that not every workload class carries the same level of relevancy in the
prosthetic field, these preliminary results may show bias or skewness if not appropriately
weighted. Thus, an overall performance statistic was determined (Figure 13), which calculates a weighted average of all categories for each interface. The weights were ranked based on feedback from the volunteers, opinions of expert engineers, and remarks from several patients, in this order (from most important to least): Temporal Demand, Mental Demand, Physical Demand, Performance, Effort, and Frustration. Figure 12 shows the means and the standard
deviations for each of the considered categories. The results reflect a significant discrepancy
between the UPI that uses deep learning-based computer vision algorithms and all other
UPIs, showing an inferior interface that presents a significant workload in several categories.
Therefore, performing a Factorial Analysis of Variance (ANOVA) test on the results obtained
demonstrates a significant difference in contrast to the UPI described in Section 3.6.2. In
addition, with a critical value of 3.84 and an alpha of 0.05, the F statistic obtained for this
test was about 132.4. This value rejects the null hypothesis for the main effect, showing a significant inequality between the evaluated interfaces.
Figure 12. Mean of the results gathered from the volunteers, where (a) is the sEMG PR UPI; (b), the one using the buttons and the Myo; (c) is the version using the camera; and (d) is the iteration with reduced contractions subset.
The interface based on sEMG pattern recognition presents the best results in physical
and temporal demand categories and on the category that evaluates the user’s effort to
complete a task. Furthermore, the multimodal UPI that employs buttons in tandem with the
Myo bracelet resulted in the least frustrating interface for users. In contrast, the UPI based
on reduced contractions subset obtained better results than the others in the performance
and mental demand categories. All three interfaces proved to be proficient in the different
categories; however, the results (as shown in Figure 12) do not show a significant difference
to determine which of them has a better overall performance. These results showed that
all interfaces are straightforward iterations with an overall performance around the upper
70% according to the NASA TLX’s scale. The obtained means for the remaining UPIs are still very similar. As shown in Figure 13, the UPI (a) has a mean of 5.75; (b), one of 6.2; and (d), 5.86. Therefore, more Factorial ANOVA analyses were performed on these
interfaces with the same alpha value. All previous tests were performed comparing the
reduced contractions subset version to the other interfaces to corroborate improvements or
significant differences due to several participants’ interest in an alternative to a PR-based
UPI. Thus, these results show that the different aspects involved in the interaction process
do not affect the workload in a relevant manner.
The second evaluation process consisted of requesting the limb-impaired subjects to
perform the same ADLs from the preliminary testing by using the reduced contractions
subset UPI. This way, one can compare the performance concerning the other volunteers’
quantified results. This new score averaged 7.2 in the TLX scale (with 2.39 standard
deviation), as shown in Figure 14. Compared to the average value of able-bodied subjects
for the same interface (5.86), an ANOVA test was performed, and the results show no
significant difference between groups. The F-statistic obtained was 0.78, below the critical value of 3.98, with an alpha value of 0.05. Additionally, every volunteer answered a survey to determine
if the interfaces are considered user-friendly. The PR, the multimodal approach, and the
reduced contractions subset interfaces show an acceptable result, as around 70%, 80%, and
90% of participants perceived them to be user-friendly UPIs, respectively. On the other
hand, the only UPI that shows poor results was the one based on object classification and detection, since only 30% of participants perceived it as user-friendly.
Figure 13. Overall performance of the different versions. (a) is the sEMG PR iteration; (b) is the one with the buttons; (c) uses the computer vision algorithms; and (d) is the interface utilizing a reduced contractions subset.
Figure 14. Overall performance of the reduced contractions subset version. (a) is the score from the able-bodied subjects; and (b) is the one from volunteers with upper-limb difference.
5. Conclusions
An effective interaction process between the user and the prosthesis is a very relevant aspect that users consider when selecting an assistive device and thus continuing to use it in their ADLs. Therefore, it is essential to identify the aspects favoring or opposing the
target users when designing a more efficient and user-friendly interface. The results for
the interface described in Section 3.6.3 showed a strong tie between the execution
time of the actions and their subjective evaluation, as evidenced by the poor reception
and the long operation time required to select and execute an action on the prosthetic
device. These strongly impact the process of interaction with the most common objects
that are part of the user’s environment. This perception among users can be caused by the amount
of time it takes to select the object with which the user wants to interact and then take a
photo of it that must be processed to suggest the proper grip or gesture. Thus, this process
becomes complex and tedious for users, evoking frustration and demanding more effort to
achieve a particular goal. In addition to that, if the system employs the camera mounted
on a mobile device (such as a smartphone or tablet), the user requires an able-bodied hand to
operate it with the app, needing particular physical prowess not possessed by certain kinds
of patients, specifically by bilateral amputees. If the system uses a camera mounted on the
prosthetic device, the weight and position of the camera can influence the effectiveness of
the UPI since it is crucial for the system to frame the object with which the user wants to
interact appropriately. Moreover, the object classification and detection algorithms impose
another requirement on the system in terms of the performance of the processing device running
the interface’s software. This increases the price, either by the need for a smartphone
or an embedded system that is powerful enough to run the necessary machine learning
methods. Since these accommodations are not easily attainable in developing countries
due to the general shortage of high-speed internet, cellphone service, or even electricity,
these restrictions mainly affect amputees from regions suffering from poverty. In this
way, this iteration was the worst evaluated both in the survey and in the NASA-TLX test,
demonstrating that multimodal alternatives do not always improve the interaction between
the user and the assistive device, especially when the interaction process becomes very
complicated for the user.
Regarding the results shown in Figure 12, the superiority of the interface presented
in Section 3.6.4 lies in the swift selection of grips and gestures. This perception is due
to the lack of a menu with which it is necessary to select the desired action. Therefore,
the results obtained on the physical demand and the required effort categories are low.
In contrast, the results for frustration and mental demand for this iteration are caused by
the need to memorize which Myo contractions activate a predefined action, resulting in a
slightly more complex process for patients. This is also frustrating for the limb-impaired
subjects since customized pattern recognition systems (requiring extended periods of
training) are needed to achieve low misclassification rates, and still, only a limited number
of actions can be selected. However, these impressions show that no visual feedback is
necessary for a UPI to be user-friendly, leading to a simpler and more affordable alternative
as long as the user can still operate the prosthesis. For these reasons, this interface was
the third-best evaluated by the volunteers, despite the good results obtained from the
NASA-TLX test, which show that the workload is relatively low for this iteration. On the
other hand, this interface is the one that seems to interact with the device more naturally.
However, technological advances are still needed in biomedical signal processing and
pattern recognition to naturally interpret the user’s intention, especially using affordable
approaches available to amputees.
Furthermore, the results also show a lack of frustration for the UPI presented in
Section 3.6.1, which was the second-best evaluated by volunteers. This perception may result
from the sporadic inexactitude of the default Myo classification process. This UPI provides
an alternative to navigate along with the menu by using buttons; therefore, an EMG
classifier is not strictly necessary to select an action but to confirm it, which provides a
satisfactory alternative in a multi-modal approach. This leads to the fact that a pattern
recognition system may not be necessary, which vastly reduces training time and the
complexity of the EMG sensor and the device gathering the user intent. This allows
for a simpler and less expensive solution for amputees, as only two sEMG channels in
combination with traditional digital signal processing techniques are required to detect
muscle activity from both flexor and extensor sets of muscles [23]. This is especially valid
considering that volunteers stated that they only need different grips to hold various types
of objects, not an extensive array of hand actions, meaning that the contractions to be
assessed do not need to be vast, allowing for a more straightforward and intuitive interface.
However, a UPI involving mechanical interaction (i.e., pressing buttons) is not a feasible
solution for bilateral amputees, as the interaction process does not favor them.
Furthermore, the results also show that the mental exertion needed to operate the best-evaluated UPI described in Section 3.6.2 achieves the lowest score on the scale. This
perception from the volunteers may occur since the user does not need to memorize the
particular mapping that relates a contraction with a grip or gesture, nor do they need to
consider using the buttons installed on the top of the artificial limb. Since the subset of
contractions for this UPI is limited (only two contractions), the mental demand is also
reduced because the contractions were carefully selected to operate the device naturally.
Besides, the performance of this interface turns out to be the best among all the interfaces. This advantage may be due to the accuracy with which the Myo interprets
the pose used to return the prosthesis to its rest position compared to its multimodal
counterpart. The frustration level also scores low, particularly for unilateral amputees,
which may be due to their ability and experience to adapt their ADLs to employ one
healthy hand with the help of an assistive device. Thus, such patients do not need many
complex grasps, as they prefer to carry out the mechanically dexterous tasks with their
undamaged limbs. A typical example is opening a bottle, which may be easily done by
holding it firmly with the prosthesis and turning the cap with the other hand. Nevertheless,
bilateral amputees do not benefit from such a reduced pool of alternatives. However,
another advantage of this version over the PR one, though not explicitly shown on the
overall scores, is that a broader range of actions might be provided without the need to
increase the number of contractions detected.
On the other hand, after conducting these trials, the multimodal approach using a
mechanical input (buttons) and the one based on reduced contractions set did not result in
a relevant improvement. The same conclusion can be drawn for the UPI that employs an
extended subset of contractions and a range of actions. These experiments demonstrate
that a simpler and more affordable UPI results in a similar experience for the user. However,
reducing the contractions subset to operate the device can restrict the operation mode to fit
each amputee’s unique necessities, prompting the user to employ the prosthesis even if
they are unable or unwilling to complete certain Myo’s poses. In addition, these results
could vary due to the lack of evaluation by bilateral amputees in this study.
The results collected during this research give us a better idea of how different ap-
proaches used to interact with upper limb prostheses affect the user’s workload and
interface amiability. This can be used to find alternatives to improve the price, performance,
reception, and adaptation of such assistive devices by reducing the workload required to
operate them and the interaction process’s complexity altogether. This leads us to believe
that the UPI does not need to be a complex one, as shown by the results for the one using
the camera, but a simple, functional one, preferably using the smallest contraction subset
possible (to increase the range of users able to operate it). The time required to complete a
grasp was also shown to be an essential aspect when evaluating the interfaces, which is
unsurprising considering it may be compared to the response time of the healthy limb. Fi-
nally, even though there is a substantial difference between able-bodied and limb-impaired
subjects, this research work’s results do not show a significant deviation, as the tests av-
eraged a similar score, and most discrepancy comes from variance within groups instead
of between groups. Therefore, the evaluation process using only healthy subjects benefits
the user-friendly UPI design process. Thus, it can help the UPI designer discard or favor possible solutions, based on the analysis of the evaluation results, before they are tested by people suffering from upper-limb amputation, so that only the best iterations are tested with this kind of volunteer for a deeper analysis of an interface’s evoked workload and amiability. This way, we can provide better UPIs that will
improve the quality of life of those who need it.
Author Contributions:
Conceptualization, J.F., V.F., and E.R.; methodology, J.F., D.C., and G.M.;
software, J.F., D.C., G.M., and V.F.; validation, J.F., D.C., and G.M.; formal analysis, J.F., D.C., and G.M.;
investigation, J.F., D.C., and G.M.; resources, J.F., D.C., and G.M.; data curation, J.F., D.C., and G.M.;
writing—original draft preparation, J.F., D.C., and G.M.; writing—review and editing, J.F., D.C., G.M.,
V.F., and E.R.; visualization, D.C., and G.M.; supervision, J.F., V.F., and E.R.; project administration,
J.F. and E.R.; funding acquisition, E.R. All authors have read and agreed to the published version of
the manuscript.
Funding:
This work was supported in part by São Paulo Research Foundation (FAPESP) under
Grant 2013/07559-3, in part by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
(CAPES) under Finance Code 001, and in part by Conselho Nacional de Desenvolvimento Científico
e Tecnológico (CNPq).
Institutional Review Board Statement:
The study and experiments were conducted following the
recommendations of the Brazilian Resolution 466/12 and its complementaries, and approved by the
National Research Ethics Commission (CONEP) under authorizations CAAE 37515520.1.0000.5404
and CAAE 17283319.7.0000.5404.
Informed Consent Statement:
Informed consent was obtained from all subjects involved in the
study, and written informed consent has been obtained from the patient(s) to publish this paper.
Data Availability Statement:
Datasets and original images are available from the corresponding
author on request.
Conflicts of Interest: The authors declare no conflict of interest.
Article
Full-text available
Human-machine interfaces to control prosthetic devices still suffer from scarce dexterity and low reliability; for this reason, the community of assistive robotics is exploring novel solutions to the problem of myocontrol. In this work, we present experimental results pointing in the direction that one such method, namely Tactile Myography (TMG), can improve the situation. In particular, we use a shape-conformable high-resolution tactile bracelet wrapped around the forearm/residual limb to discriminate several wrist and finger activations performed by able-bodied subjects and a trans-radial amputee. Several combinations of features/classifiers were tested to discriminate among the activations. The balanced accuracy obtained by the best classifier/feature combination was on average 89.15% (able-bodied subjects) and 88.72% (amputated subject); when considering wrist activations only, the results were on average 98.44% for the able-bodied subjects and 98.72% for the amputee. The results obtained from the amputee were comparable to those obtained by the able-bodied subjects. This suggests that TMG is a viable technique for myoprosthetic control, either as a replacement of or as a companion to traditional surface electromyography.