Date of publication xxxx 00, 0000, date of current version xxxx 00, 0000.
Digital Object Identifier 10.1109/ACCESS.2020.DOI
Galileo Hand: An Anthropomorphic and
Affordable Upper-Limb Prosthesis
JULIO FAJARDO 1,2, VICTOR FERMAN 2, DIEGO CARDONA 1, GUILLERMO
MALDONADO 1, ALI LEMUS 1AND ERIC ROHMER 2
1Turing Research Laboratory, FISICC, Galileo University, Guatemala City, Guatemala (e-mail: {julio.fajardo,juandiego.cardona,guiller,alilemus}@galileo.edu)
2Department of Computer Engineering and Industrial Automation, FEEC, UNICAMP, 13083-852 Campinas, SP, Brazil (e-mail:
{julioef,vferman,eric}@dca.fee.unicamp.br)
Corresponding author: Julio Fajardo (e-mail: julio.fajardo@galileo.edu, julioef@dca.fee.unicamp.br).
This work was supported by FAPESP-CEPID/BRAINN under Grant 2013/07559-3 and MCTI/SECIS/FINEP/FNDCT under Grant 0266/15.
ABSTRACT The strict development processes of commercial upper-limb prostheses and the complexity of the research projects required for their development make them expensive for end users, both in terms of acquisition and maintenance. Moreover, many of them rely on complex ways to operate and interact with the subject, leading patients to reject these devices and exclude them from their activities of daily living. The advent of 3D printers allows for distributed open-source research projects that follow new design principles; these prioritize simplicity without neglecting performance in terms of grasping capabilities, power consumption and controllability. In this work, a simple yet functional design based on 3D printing is proposed, with the aim of reducing cost and manufacturing time. The operation process consists of interpreting the user intent with electromyography electrodes while providing visual feedback through a µLCD screen. Its modular, parametric and self-contained design is intended to aid people with different transradial amputation levels, regardless of the socket's constitution. This approach allows for easy updates of the system and demands a low cognitive effort from the user, satisfying a trade-off between functionality and low cost. It also grants easy customization of the number and selection of available actions, as well as of the sensors used for gathering the user intent, permitting alterations to fit each patient's unique needs. Furthermore, experimental results showed apt mechanical performance when interacting with everyday objects, in addition to a highly accurate and responsive controller; the same applies to the user-prosthesis interface.
INDEX TERMS Prosthetic hand, three-dimensional printing, electromyography, user-prosthesis interface.
I. INTRODUCTION
THE last World Report on Disability shows that there are at least 30 million people with amputations residing in developing countries, most of whom have no access to prosthetic care, nor can they afford leading commercial assistive technology, such as upper-limb prosthetic devices priced around $10,000 [1]–[4]. Additionally, the acquisition of these assistive devices is problematic in these countries, since availability is not guaranteed [2], [3]. Meanwhile, several research laboratories focus on improving the dexterity and biomimetics of prosthetic hands, as well as on implementing expensive and intrusive ways to gather the user intent [5]–[9], while sometimes neglecting other vital aspects of the prosthetic device, like aesthetics, controllability and the user interface, the lack of which can influence patients to stop using them [10]. This phenomenon also occurs with commercial prostheses, because they require long periods of training and adaptation to aptly interact with the user-prosthesis interface (UPI), which is commonly powered by myoelectric controllers [11]; this can be corrected by implementing a friendly and intuitive alternative.
Because of the limitations of conventional body-powered prostheses, like steel hooks, and the elevated cost, weight and repair difficulties of commercial myoelectric prosthetic devices [12]–[14], many open-source projects based on 3D printing technologies have been released [14]–[17], whose target is a lightweight and affordable upper-limb prosthetic device. By reducing manufacturing costs, this encourages widespread distribution through global networks. That is why the implementation of said technology in assistive devices has been increasing, improving availability and pricing; it can also offer an extended set of grasps [14]–[16], [18].
VOLUME 1, 2020 1
Fajardo et al.: Galileo Hand: An Anthropomorphic and Affordable Upper-Limb Prosthesis
The Galileo Hand, shown in Fig. 1, is an affordable, 3D-
printed, open-source, anthropomorphic and underactuated
myoelectric upper-limb prosthesis for transradial amputees,
designed to be easily built and repaired. Its UPI offers a user-friendly alternative to traditional methods. This is achieved by utilizing a reduced subset of muscle contractions to gather the user intent, since the bio-potentials of the limb-impaired population differ both among individuals and from those of healthy subjects, because some of their musculature is uniquely atrophied. In addition, its construction only requires materials that are readily available in developing countries [17]. Furthermore, the design is intended to be easily integrated on sockets provided by social security entities in underdeveloped countries. This way, both the cost and the manufacturing time are lessened. Moreover, its parametric and modular design allows for easy modification of the size of the fingers and the palm, with the aim of increasing the range of target users. Furthermore, its six intrinsic actuators and the self-contained embedded controller inside the palm provide fitting versatility to subjects with different necessities [9], [19].
In order to replicate the six movements of the human thumb (abduction-adduction, flexion-extension and opposition-reposition) [20], [21], a design implementing two actuators has been elaborated. This permits more customized actions, such as individual finger motions, time-based sequential actions and the most common types of grasping based on the Cutkosky grasp taxonomy [22]. Another relevant aspect is that the proposed prosthetic hand weighs under 350 g and requires less than $350 to be built.
The rest of this work is structured as follows: Section II
elaborates on the state of the art of open-source upper-limb
prostheses, as well as their UPIs. Methods involved in the
design of the anthropomorphic and under-actuated prosthetic
hand are described in Section III. Details about the electrical design and the digital signal processing methods employed, as well as the user-prosthesis interface elaborated, are described in Sections IV and V, respectively. Additionally, experimental results on the functionality of the system, the control strategy and the implemented UPI are presented in Section VI. Finally, the conclusions are presented in Section VII.

FIGURE 1: The Galileo Hand installed on a probing handle.
II. STATE OF THE ART
Traditionally, different techniques based on the pre-processing of electromyography (EMG) signals have been the focus of upper-limb prosthesis control research, with the aim of analyzing the user intent and triggering a specific activation profile. Nowadays, typical commercial hands are operated by features of a predefined subset of muscle contractions according to a state-machine model. Meanwhile, most research prosthetic hands are based on pattern recognition with a multimodal (or hybrid) approach. This method consists of combining the EMG features with information gathered from other sensor types to address some issues that arise while utilizing EMG, such as electrode positioning, fatigue, inherent crosstalk in the surface signal, displacement of the muscles and the limb position effect [10], [11], [23]. Some multimodal works have
shown an increase in classification accuracy by employing EMG in tandem with an Inertial Measurement Unit (IMU) or by combining it with mechanomyography (MMG) techniques, with successful results gathering the features with microphones (mMMG) or accelerometers (aMMG) [24], [25]. Besides, other projects have implemented Optical Fiber Force Myography (FMG) as an affordable and more accurate alternative to the usual non-hybrid versions [26], [27]. In addition to that, ultrasound imaging has also been used to interpret the user's intention; this is called sonomyography (SMG). This method detects the morphological changes of the muscles in the forearm during the performance of different actions and relates them to the wrist's generalized coordinates [28], [29].
On the other hand, these multimodal systems were also
introduced to improve the user control of prosthetic de-
vices. Implementing an EMG-Radio Frequency Identifica-
tion (RFID) hybrid and using RFID tags on specific objects
to reduce the cognitive effort to operate a prosthesis has been
proposed in [30]. Similarly, other works have experimented
with combining EMG systems with voice-control and visual
feedback, allowing the users to decide between different
modalities to control their prosthetic device [17].
Other approaches utilizing Brain-Machine Interfaces
(BMIs) as a means to control upper-limb prostheses have
also been proposed. They are based on high-density electro-
corticography (ECoG), permitting the user to control each
individual finger in a natural way. The main problems with
this methodology are its invasiveness and price, because it
requires an implant consisting of an ECoG array in the brain
and a targeted muscle re-innervation (TMR) on a specific
set of muscles, which results in challenging procedures for
the amputees [7]. Other studies implemented a combination
of BMI with other technologies, like voice recognition, eye
tracking and computer vision techniques. Nevertheless, they
required high levels of concentration and training, entailing a
massive cognitive effort from the user [31], [32].
Computer vision approaches, like the one-shot learning
method, implemented to generate specific grasps for un-
known objects, have also been employed to control prosthetic
devices. This methodology “generalizes a single kinesthetically demonstrated grasp to generate many grasps of other objects of different and unfamiliar shapes” [33]. Additionally, another hybrid control using an augmented reality (AR)
headset with an integrated stereo-camera pair was used to ac-
tivate a prosthetic device via the detection of specific muscle
activity. This system is able to provide a suggestion regarding
the grasp to be actioned via stereo-vision methods, while the
users adjust the gesture selection using the AR feedback. This
results in a low effort control and better accuracy [34].
Finally, to increase the functionality of multi-grasping
upper-limb prostheses, some studies developed hybrid deep-
learning artificial vision systems combined with EMG. Aim-
ing to improve the way that the system interprets the user
intent, it associates a subset of objects to a specific kind of
grasp based on their geometric properties. The classification
task is completed through an object classifier implemented
with a convolutional neural network (CNN) [35]–[37].
Meanwhile, regarding the hardware implemented by the
diverse commercial and research prostheses, their design may
differ in terms of the fingers’ structure, actuation method,
weight, price, compliance and materials used. Taking into
account the last mentioned aspect, one can classify them
into 3D printed prostheses and the ones that are constructed
utilizing a different material. The main advantage of non-
3D-printed designs is the robustness its composition may
provide, even if it results in increasing their cost and weight.
Delving into some of these versions, there are several different approaches to reduce the increased price and mass; some of them sacrifice functionality and mobility, such as the SensorHand and the Michelangelo prosthetics. In contrast, other iterations favor functionality over the aforesaid aspects, like the BeBionic, which achieves a much lower cost than some of its counterparts (approximately $11,000), but has a weight above the mean of the human hand's, increasing the fatigue factor [38], [39]. While robustness is a relevant aspect, several 3D-printed prostheses trade some of it off for a more affordable alternative, some ranging in price around the order of hundreds of dollars. Three-dimensional printing balances affordability and robustness in comparison to cheaper methods like injection moulding. Additionally, this methodology permits easy customization of the design without altering the manufacturing line process [4].
Furthermore, a classification according to the degrees of actuation (DOA) and freedom (DOF) has been proposed. Prostheses can be differentiated into non-tendon-driven and tendon-driven mechanisms. Moreover, the latter can be categorized based on the actuation method (active or passive) and on the DOA-to-DOF ratio. These characteristics directly influence the price range and functionality of the devices, permitting or precluding certain biomimetic motions [40].
According to what was enunciated previously, a design that can be replicated using rapid prototyping tools was selected. Thus, the prosthetic hand can be built with different materials, such as Nylon, ABS and PLA polymers, providing a robust and affordable option. Its fingers are assembled via surgical-grade elastics and motor-powered waxed strings, providing an under-tendon-actuated system. To operate the device, multimodal approaches can be used due to the flexibility of the controller; however, to gather the user intent, medical-grade sEMG electrodes are employed.
III. MECHANICAL DESIGN
The merit of intrinsic actuation pattern (IAP) prosthetic hands is to provide more flexibility for people with different levels of amputation [19], so the project can benefit more users. Since it is essential that the patient's stump plus the prosthesis span the length of the preserved limb so that amputees feel comfortable using it, the prosthesis's size is a relevant aspect. In this way, the placement of the actuators and electronics inside the palm helps achieve symmetry between both arms, regardless of the amputation level, because the prosthesis does not take up space within the socket, which allows for a reduction in length when necessary [19]. Nevertheless, an intrinsic design, illustrated in Fig. 2, increases the mass of the prosthetic hand, which needs to be less than that of a biological hand (around 500 g), since it will be attached to the softer tissue of the amputated limb instead of directly to the human skeleton, which means that it is perceived as heavier by the end user [15].
The proposed design is underactuated, with the aim of simplifying the manufacturing and assembly processes, approximating the human hand's movements and reducing costs. In addition, adaptive grasping can be achieved with such an actuation system, as explained in [41], [42], which consists in conforming to objects during activities of daily living (ADLs). The main modules in the prosthesis are the palm, the thumb rotation mechanism, and the fingers, which vary only in the length of each phalanx.
FIGURE 2: Mechanical design of the Galileo Hand.
FIGURE 3: Top view of the modular palm sections. (1) The main PCB controller. (2) Motors driving the index, middle, ring and little fingers. (3) Actuator in charge of the rotation of the thumb.
A. PALM DESIGN AND MECHANISMS
The design requirements were set up with the help of two male volunteers suffering from unilateral transradial amputation, and taking into account the reported users' needs in [10]. The mechanism consists of micro-metal brushed DC gearmotors (250:1) with an output torque of around 0.42 Nm, which perform the flexion/extension movements of the five fingers through an under-tendon-actuated system. The palm has three different sections with individual covers: one for the motors powering all digits but the thumb; another for the actuator that enables the thumb rotation; and the last one for the rest of the components, as shown in Fig. 3. Such a design allows for easy maintenance without disassembling the whole artificial hand.
B. THUMB MOVEMENT CHARACTERISTICS
The thumb has been designed with two DOAs in order to recreate the six movements that humans can perform, as described in [20]. One actuator is located inside the thumb's metacarpal phalanx and is responsible for the flexion and extension of the proximal and distal phalanges. The second one, located in the metacarpophalangeal joint of the thumb, is responsible for its abduction and adduction, which is monitored by the reading of a quadrature encoder. This joint is built by a bevel and a helical gear working together to transmit the torque from the actuator with a ratio of 8:11, creating a beveloid gear pair [43], as shown in Fig. 4.

FIGURE 4: Thumb mechanism side view, showing the beveloid gear pair.

Rotating the thumb around an axis shifted 15° from the palm plane increases the abducted position of the thumb. This way, the rotation axis is shifted without the need to incline the motor, allowing it to perform a larger prismatic grasp [22], while, at the same time, saving space inside the palm and making it easier to print.
C. FINGER DESIGN
The remaining fingers consist of three phalanges and three joints: the distal and proximal interphalangeal (DIP, PIP) and the metacarpophalangeal (MCP), as shown in Fig. 5. This configuration is meant to mimic the biomechanics of the human hand, resulting in a 15-DOF prosthetic device. Additionally, each of their components can be easily reprinted and reassembled using common 3D printing polymers. Moreover, the phalanges were designed to withstand the stress created by the actuators during ADLs. Plus, each finger's outer shell is coupled to its respective phalanx to not only provide a more aesthetic design, but also offer better grip capabilities if implemented with thermo-flexible materials.

Furthermore, the parametric design of the phalanges allows their length to be modified1, enabling a wider range of patients to utilize the prosthetic device. With the purpose of creating a more versatile design, fingers can be implemented in either right- or left-hand prostheses. Moreover, each phalanx and its respective shell are numbered to simplify the assembly and repair processes.

1The minimal length of the proximal and middle phalanges is 22 mm, and 20 mm for the distal phalanx, because of the finger-palm ratio restrictions and the PCB size limit.
FIGURE 5: Mechanical design for the fingers, where r is the pulley's radius and θ represents the position of the motor.
FIGURE 6: System architecture’s block diagram illustrating how the components for each module interact with each other.
The prosthetic device consists of six DOAs: one permits the thumb's rotation and the other five allow the flexion and extension movements of each finger. These last actions are completed by actuating waxed nylon cords, working as the active tendons, and round surgical elastic cords as the passive ones. The first goes through the duct inside the volar face of the finger, and the second through the one on the dorsal side, as shown in the blue areas in Fig. 5.
1) Under-tendon-driven Machine
Since each active tendon is driven by a geared DC motor, a positive, active tensile force, f_ta, is generated. In contrast, the passive tendon's force, f_te, depends uniquely on the deflection of the joints and, to prevent them from loosening, an initial expansion must be considered [40].

Letting L be the number of tendons, N the number of joints and f_t = [ f_ta  f_te ]^T the resulting tensile force vector, the relationship between f_t ∈ R^L and the joint torque vector τ ∈ R^N is given by

    τ = J_j^T f_t    (1)

where J_j = [ J_ja  J_je ]^T is the Jacobian matrix for the active and passive tendons and, with r being the radius of the pulleys on each joint, the matrix is given by

    J_j = [  r   r   r ]
          [ −r  −r  −r ]    (2)

Alternatively, the tensile force vector for the system can also be defined as

    f_t = f_b + (J_j^T)^+ τ    (3)

where f_b ∈ R^L is a bias tension force vector and (J_j^T)^+ is the Moore-Penrose pseudoinverse of the transposed matrix J_j.

Since f_b does not directly affect the joint torque vector, τ, one can define it as follows [40]:

    f_b = A ξ,    A = I_L − (J_j^T)^+ J_j^T    (4)

where ξ is a vector dimensionally compatible with the matrix A and I_L is the identity matrix of size L.

Therefore, since an initial expansion of the passive tendon is considered for each finger, it is evident that f_b > 0, resulting in a tendon-driven machine; in addition, since rank(J_j) = 1 < N, the system is defined as an under-tendon-driven mechanism. With this information, one can describe the system's dynamics with the following equations:

    M q̈ + (½ Ṁ + S + B_0) q̇ + G_g q = τ    (5)

    τ_m = J_m θ̈ + b θ̇ + f_ta r_p    (6)

where M and B_0 are the inertia and damping matrices of the finger, respectively; S is a skew-symmetric matrix and G_g is the gravity load matrix. Additionally, J_m and b are the gearhead's moment of inertia and friction coefficient, correspondingly; τ_m is the torque exerted by the motor gearhead's shaft; and r_p is the radius of the pulley mounted on it [40].
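The relations in Eqs. (1)–(4) can be sketched numerically. The following is a minimal example, assuming a three-joint finger with a uniform pulley radius r; all numeric values are hypothetical and only illustrate the structure of the equations:

```python
import numpy as np

r = 0.005  # pulley radius on each joint [m] (hypothetical)

# Jacobian for one active (volar) and one passive (dorsal) tendon, Eq. (2):
# rows = tendons (L = 2), columns = joints (N = 3)
Jj = np.array([[ r,  r,  r],
               [-r, -r, -r]])

ft = np.array([2.0, 0.5])      # tensile forces [f_ta, f_te] in N (hypothetical)
tau = Jj.T @ ft                # joint torques, Eq. (1)

# Bias-force decomposition, Eq. (4): A projects onto the null space of Jj^T
pinv = np.linalg.pinv(Jj.T)
A = np.eye(2) - pinv @ Jj.T
fb = A @ np.array([1.0, 1.0])  # bias tension for an arbitrary xi

# fb produces no joint torque, and rank(Jj) = 1 < N marks the finger
# as an under-tendon-driven mechanism
print(np.allclose(Jj.T @ fb, 0))   # True
print(np.linalg.matrix_rank(Jj))   # 1
```

This reproduces the two properties used in the text: the bias tension lies in the null space of J_j^T (it pretensions the tendons without loading the joints), and rank(J_j) = 1 < N.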
IV. ELECTRICAL DESIGN
A versatile myoelectric controller is implemented with a low-cost, high-performance microcontroller unit (MCU) based on the ARM Cortex-M4 architecture on a custom control board (shown in Fig. 7). The SIMD extensions of its instruction set provide signal processing capabilities, and its separate stack pointers make it ideal for real-time applications through the use of a Real-Time Operating System (RTOS) [44]. In this manner, the MCU can run multiple processes concurrently, bringing scalability, modularity and reliability to the system. Thus, it can easily adapt to different control strategies, as well as different UPIs, providing multiple ways to interpret the user intent through the use of one or more types of transducers, such as in [17], [18], [26], [27], [37].

FIGURE 7: Control board PCB based on the ARM Cortex-M4.
On the other hand, sEMG is still one of the most reliable methods to activate the functionalities of prosthetic devices (despite its well-known issues) and, taking into account that it is relatively easy to build or acquire an affordable version of this kind of sensor, three custom PCBs were designed (the control board and two signal conditioning ones) in order to achieve a self-contained embedded controller that provides fitting versatility to subjects with different amputation degrees. The proposed block diagram, illustrated in Fig. 6, shows the system architecture of the self-contained embedded controller that fits inside the palm of the prosthesis.

However, affordable commercial sensors such as the MyoWare Muscle Sensor (analog interface) or Thalmic Labs' Myo armband (Bluetooth Low Energy) can be easily adapted to the system.
A. sEMG CONTROL DESIGN
A simple on-off sEMG controller based on time-domain features, which triggers transitions of a Finite-State Machine (FSM) (Section V), was designed in order to implement an intuitive, user-friendly interface, allowing more customized hand actions to be achieved without requiring long periods of training from the user [18].
1) sEMG Signal Acquisition and Conditioning
In order to save costs, two affordable bipolar channels, implemented with nickel-plated copper rivets as surface-mounted electrodes, are placed on the palmaris longus and the extensor digitorum muscles (for unilateral below-elbow disarticulations) [16], [17]. Since the acquired biopotentials are about ±25 µV to ±10 mV, ranging in bandwidth from 30 Hz to 2 kHz, a signal conditioning stage was implemented. It consists of a single-supply circuit, based on the TI INA326 high-performance rail-to-rail precision instrumentation amplifier and a TI OPA335 working together under a first-order, low-pass, active filter configuration, as shown in Fig. 8. In order to collect useful sEMG data from the patient's stump and sense the biopotentials of the muscular fibers during different actions, an output signal span in the range of 0 V to 3.3 V and a bandwidth from 0 Hz to 500 Hz were considered [18], [45], [46].

FIGURE 8: Simplified circuit of the sEMG signal conditioning stage, a low-pass active filter.
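As a rough numerical check of this stage, one can sketch the gain needed to map the largest quoted biopotential (±10 mV) onto the 0–3.3 V single-supply span, and the RC product of a first-order low-pass for a 500 Hz cutoff. The signal spans come from the text; the mid-rail offset and the capacitor value are assumptions for illustration:

```python
import math

# sEMG spans quoted in the text
v_in_peak = 10e-3   # largest expected biopotential amplitude [V]
v_supply = 3.3      # single-supply output span [V]

# Gain so that +/-10 mV maps into 0..3.3 V around an assumed 1.65 V mid-rail offset
gain = (v_supply / 2) / v_in_peak
print(round(gain))   # 165

# First-order low-pass: fc = 1 / (2*pi*R*C); pick C, solve R (C is hypothetical)
fc = 500.0           # desired cutoff [Hz]
C = 100e-9           # 100 nF (assumed)
R = 1 / (2 * math.pi * fc * C)
print(round(R))      # 3183 ohms
```

The actual component values in the design may of course differ; the sketch only shows the order of magnitude implied by the quoted spans.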
2) sEMG Signal Processing
The digital signal processing (DSP) involved in the sEMG controller is implemented on a custom main PCB based on the Teensy 3.2 development board (PJRC), using NXP's ARM Cortex-M4 Kinetis K20 microcontroller. There, two channels of sEMG signals are collected using the on-chip ADC at a 1 kHz sample rate. Then, they are filtered in order to eliminate the interference caused by the mains power line's AC frequency, using an Infinite Impulse Response (IIR) elliptic band-pass filter of order 20, with a pass-band from 100 Hz to 480 Hz and quantized for single precision. The filter was implemented using the transposed direct form II biquadratic IIR filter structure from the CMSIS-DSP API for ARM Cortex-M MCUs [47], [48]. The magnitude and phase responses, as well as the pole-zero plot of the IIR filter, are shown in Figs. 9 and 10, respectively.

FIGURE 9: Magnitude and phase responses of the IIR band-pass filter, in blue and orange, respectively.

FIGURE 10: Pole-zero plot showing the stability of the IIR elliptic band-pass filter.
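The transposed direct form II biquad cascade used here (the structure behind CMSIS-DSP's `arm_biquad_cascade_df2T_f32`) can be sketched in plain Python. The coefficients below are a placeholder single section to exercise the structure, not the actual 20th-order elliptic design:

```python
def biquad_df2t(sos, x):
    """Run a cascade of biquads in transposed direct form II.

    sos: list of (b0, b1, b2, a1, a2) sections, with a0 normalized to 1.
    x:   input sample sequence.
    """
    y = list(x)
    for b0, b1, b2, a1, a2 in sos:
        d1 = d2 = 0.0  # the two delay-state variables per section
        out = []
        for s in y:
            yn = b0 * s + d1
            d1 = b1 * s - a1 * yn + d2
            d2 = b2 * s - a2 * yn
            out.append(yn)
        y = out
    return y

# Placeholder section: a simple two-tap average, b = [0.5, 0.5, 0], a = [1, 0, 0]
sos = [(0.5, 0.5, 0.0, 0.0, 0.0)]
print(biquad_df2t(sos, [1.0, 1.0, 1.0, 1.0]))  # [0.5, 1.0, 1.0, 1.0]
```

A 20th-order filter would simply pass ten such (b, a) sections in `sos`; the transposed form keeps only two state variables per section, which is why it is favored for single-precision embedded targets.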
3) sEMG On-Off Timing Detection
A single-threshold method is used to detect the “on” and “off” timing of each muscle. The sEMG data collected from each channel, k, in a time window of 50 ms, is operated on according to Eq. (7), so the mean absolute value µ_k is determined. It is then compared to a predefined threshold, α_k, which varies between users and depends on the mean power of the background noise of each channel [49], [50].

As shown in Eq. (8), if this threshold is exceeded, a contraction q_k is detected and a transition in the UPI's FSM (Fig. 12) is triggered [17], [51].

    µ_k = (1/N) Σ_{n=1}^{N} |x_{k,n}|    (7)

where x_{k,n} is the sample n from the channel k, and N is the size of the collected window.

    q_k = { 1,  µ_k ≥ α_k
          { 0,  µ_k < α_k    (8)
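The detection in Eqs. (7)–(8) amounts to a mean-absolute-value threshold per 50 ms window. A minimal sketch, with the threshold and sample values hypothetical:

```python
def muscle_on(window, alpha):
    """Eqs. (7)-(8): return 1 if the mean absolute value of the
    sEMG window meets or exceeds the per-channel threshold alpha."""
    mav = sum(abs(x) for x in window) / len(window)
    return 1 if mav >= alpha else 0

# ~50 samples at 1 kHz is a 50 ms window; alpha is tuned per user and channel
rest  = [0.01, -0.02, 0.015, -0.01] * 12   # background-noise level (hypothetical)
burst = [0.4, -0.5, 0.45, -0.35] * 12      # contraction burst (hypothetical)
alpha = 0.1                                # hypothetical threshold

print(muscle_on(rest, alpha), muscle_on(burst, alpha))  # 0 1
```

In the actual controller, each `1` output would trigger a transition of the UPI's FSM for the corresponding channel.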
B. MOTOR CONTROL DESIGN
Three H-bridge drivers, TI DRV8833, were selected to drive the six brushed DC motors actuating the fingers. This selection was based on their pin requirements; through complementary PWM, they provide active braking, as well as speed and direction control. This way, the prosthesis is able to perform predefined gestures through a PI controller to rotate the thumb and a hybrid on-off control system (shown in Fig. 11) to limit each finger's tensile force.

The latter was designed taking into account that the gearhead on the motor introduces backlash and friction to the system, but, because of the power transmitted by the gearbox, each finger acts similarly to a non-backdrivable system. That is why the on-off controller was implemented to achieve the flexion/extension movements with the force necessary to hold different objects, which is the same as f_ta. Considering equation (1) and the system (5)-(6), one can limit this force according to the following simplified model, i.e. by considering the current demanded by the motor, i_a, and the nominal torque at the gearhead, τ_m:

    f_ta = (G k_t i_a − J_m θ̈) / r_p    (9)

where G is the gearbox's ratio and k_t is the motor's torque constant.
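Eq. (9) can be evaluated directly. In the sketch below, the gearbox ratio G matches the 250:1 gearmotors described in Section III-A, while the torque constant, gearhead inertia and pulley radius are hypothetical placeholders:

```python
def tendon_force(i_a, theta_dd, G=250.0, kt=0.002, Jm=1e-7, rp=0.004):
    """Eq. (9): estimate the active tendon force from the motor current.

    i_a      -- current demanded by the motor [A]
    theta_dd -- motor shaft acceleration [rad/s^2]
    G        -- gearbox ratio (250:1, from the text)
    kt, Jm, rp -- torque constant, gearhead inertia and pulley
                  radius (all hypothetical values)
    """
    return (G * kt * i_a - Jm * theta_dd) / rp

# In a steady grasp (theta_dd ~ 0) the force tracks the current directly,
# which is why thresholding the RMS current limits the tensile force
print(tendon_force(0.1, 0.0))  # 12.5
```

This is the relation the on-off controller exploits: monitoring the RMS motor current gives an estimate of f_ta without any force sensor on the finger.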
With this information, the threshold th for each finger was determined experimentally.

FIGURE 11: FSM demonstrating the opening/closing behaviour of each finger on the prosthesis.

Thus, the high-level controller (in Fig. 11) behaves as follows: the system starts with the finger fully extended (in an “open” position), modelled by the state S0. The transition to S1 happens when the command to move the finger, c, is received, activating the motor and causing the finger to flex. During this process, the RMS value of the current is monitored by the MCU and used to obtain the equivalent force, limiting it to th; if the threshold is exceeded, the switch to S2 happens. The exact value of the threshold may be different for each individual finger, as each one has a different size and, therefore, different mechanical characteristics, so this procedure was carried out experimentally. At this point, the finger is considered to have reached its final position and will start to reopen if another command, o, is issued by the user, as shown by the transition from S2 to S3. The transition to S0 is time-based, and te is about 1.5 times shorter than the time taken to finish the closing action (the time spent in S1). This discrepancy arises because of the elastic installed on each member of the prosthetic limb, since the material opposes the coiling process but favors the uncoiling one. Furthermore, te was measured experimentally (as shown in Fig. 16) and represents the time it takes to fully extend each finger. It is relevant to note that the closing/opening processes may be interrupted and reversed if the appropriate commands are received.
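The per-finger FSM of Fig. 11 described above can be sketched as follows. State names follow the figure; the current value and threshold are hypothetical:

```python
S0, S1, S2, S3 = "open", "closing", "closed", "opening"

def step(state, cmd=None, current_rms=0.0, th=0.3, timer_expired=False):
    """One transition of the Fig. 11 finger FSM.

    cmd: 'c' (close) or 'o' (open); th: per-finger current threshold
    (hypothetical value); timer_expired: whether te has elapsed in S3.
    """
    if state == S0 and cmd == "c":
        return S1                 # start flexing
    if state == S1:
        if current_rms >= th:
            return S2             # force limit reached: finger closed
        if cmd == "o":
            return S3             # closing interrupted and reversed
    if state == S2 and cmd == "o":
        return S3                 # start re-extending
    if state == S3:
        if timer_expired:
            return S0             # te elapsed: fully extended
        if cmd == "c":
            return S1             # opening interrupted and reversed
    return state

s = step(S0, cmd="c")             # -> "closing"
s = step(s, current_rms=0.35)     # threshold exceeded -> "closed"
s = step(s, cmd="o")              # -> "opening"
s = step(s, timer_expired=True)   # -> "open"
print(s)  # open
```

The two "interrupted and reversed" branches correspond to the final remark above: a closing or opening motion can be aborted mid-way by the opposite command.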
V. USER-PROSTHESIS INTERFACE
To select a UPI for the prosthesis, both interfaces used for previous versions of the Galileo Hand were evaluated to determine which one provided a better user experience. Since both of them were implemented on the same hardware (shown in Fig. 1), the tests run did not possess any bias involving price ranges or physical characteristics, like general aesthetics, weight, number of DOF and DOA, or the sensor used to detect the user intent. For the sake of these trials, the signal capture system selected was the Myo armband (because of its comfort and easy installation on the volunteers), which was placed on each of the subjects'
forearms (on the same arm as the Galileo Hand, to create a natural operation mode), where the stump of transradial amputees is located. Also, a limited subset of contractions was chosen to interact with the prosthesis, since some of the gestures cannot be performed by the limb-impaired, while still providing a decent number of actions at their disposal.

VOLUME 1, 2020 7

Fajardo et al.: Galileo Hand: An Anthropomorphic and Affordable Upper-Limb Prosthesis

FIGURE 12: FSM illustrating the behavior of the interface using a multimodal approach with buttons and sEMG sensors.
A. INTERFACES CONSIDERED
The different UPIs taken into account are described below.
1) sEMG pattern recognition
The first interface, based on [17] but employing the Myo's built-in pattern recognition (one of the more traditional research alternatives [11]), consists of a simple system that maps each of the predefined "Myo poses" to a gesture to be executed. The mapping was carried out as follows: "wave in" (hand flexion) to a pointing position; "wave out" (hand extension) to a lateral grasp; "double tap" (two swift, consecutive contractions) to a hook stance; and "fist" and "fingers spread" to closing and opening all fingers, respectively. The gestures selected were those considered most useful in ADLs.
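The direct mapping described above amounts to a small lookup table; a minimal sketch follows, where the pose and gesture labels are illustrative stand-ins for the identifiers actually used by the Myo SDK and the firmware.

```python
# Direct mapping from the Myo's predefined poses to prosthesis gestures,
# as listed in the text. The label strings are illustrative; the real
# system uses the classifier's own pose identifiers.
POSE_TO_GESTURE = {
    "wave_in":        "point",      # hand flexion -> pointing position
    "wave_out":       "lateral",    # hand extension -> lateral grasp
    "double_tap":     "hook",       # two swift contractions -> hook stance
    "fist":           "close_all",  # flex all fingers
    "fingers_spread": "open_all",   # extend all fingers
}

def gesture_for(pose):
    """Return the mapped gesture, or None for unrecognized poses."""
    return POSE_TO_GESTURE.get(pose)
```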
2) Multimodal approach using buttons and Myo interface
The functionality of this version, similar to the work presented in [18], is illustrated by the FSM in Fig. 12. The muscle contraction subset, Q = {q0, q1}, corresponding to the hand extension and flexion movements, respectively, and the button set, B = {b0, b1}, are used to operate the prosthesis. Using b0 and b1 moves the selection forwards or backwards, accordingly, in the menu displayed on a µLCD screen (as shown in Fig. 13). These changes take place in the state S1, which indicates that an alteration of the screen's state is occurring. Such changes are blocked the moment an action is active, because the timing for operating the motors differs between actions, so changing the selected action while another is active could lead to wrong finger positioning.

FIGURE 13: Graphical menu on the screen mounted on the prosthetic device (left) and the Galileo Hand performing a power grip on a ball (right).
On the other hand, S0 indicates that the prosthesis is resting in its default state, with all fingers fully extended, while S3 indicates that the fingers are flexed according to the selected action. Moreover, S2 and S4 indicate that the prosthetic hand is currently closing or opening, respectively; these processes can interrupt each other if the corresponding command is received. Additionally, to activate an action, q0 needs to be received, whereas detecting q1 begins the gesture deactivation process.
Other relevant elements in the FSM representing the interface's behavior are the flags fl and tr. The first indicates that all fingers have reached their desired positions when performing an action, while the second indicates that the time required to fully open the hand has elapsed.
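The multimodal behavior described above can be sketched as follows; the states S0–S4, events q0/q1 and b0/b1, and flags fl/tr follow the text, while the class structure, the menu-wrapping policy and the button directions are illustrative assumptions.

```python
class MultimodalUPI:
    """Sketch of the FSM in Fig. 12 (states S0-S4). Event names follow
    the text; the class layout is an illustrative assumption."""
    def __init__(self, n_actions):
        self.n_actions = n_actions
        self.menu = 0               # currently selected action on the µLCD
        self.state = "S0"           # resting, fingers fully extended

    def on_button(self, b):
        # Menu navigation is only allowed while no action is active (S0)
        if self.state == "S0":
            self.state = "S1"       # S1: the screen's state is changing
            step = 1 if b == "b1" else -1
            self.menu = (self.menu + step) % self.n_actions
            self.state = "S0"       # back to rest once the screen updates
        return self.menu

    def on_contraction(self, q):
        if q == "q0" and self.state in ("S0", "S4"):
            self.state = "S2"       # start closing (activate the action)
        elif q == "q1" and self.state in ("S2", "S3"):
            self.state = "S4"       # start reopening (deactivate it)

    def on_flag(self, flag):
        if flag == "fl" and self.state == "S2":
            self.state = "S3"       # all fingers reached their positions
        elif flag == "tr" and self.state == "S4":
            self.state = "S0"       # hand fully open again
```

Note how `on_button` returns the unchanged selection whenever an action is active, mirroring the blocking behavior described in the text.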
B. NASA TASK LOAD INDEX EVALUATION
To effectively determine how user-friendly the interfaces are, a NASA Task Load Index (TLX) test was carried out, similar to [52], and the results were compared to each other. This scale quantifies the perceived workload of operating the device, taking the following categories into account: mental, physical and temporal demand, performance, the effort needed to interact with the prosthesis and the frustration its utilization evokes.

This scale was selected to evaluate the interfaces because a post-task, user-testing evaluation method was required: post-test assessment techniques (like SUS) do not allow rating each part of the interfaces separately. Moreover, methods like SEQ are not as thorough as the one implemented, since they consider fewer categories during experimentation and provide a more binary result. Additionally, the chosen test has numerous research and industry benchmarks to interpret the scores in context, which can be helpful for future work.
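Under the raw (unweighted) variant of the NASA TLX, the overall workload is simply the mean of the six subscale ratings; a minimal sketch, assuming the 1–20 rating scale used in these experiments:

```python
def raw_tlx(ratings):
    """Unweighted ('raw') NASA TLX workload score: the mean of the six
    subscale ratings (here on the 1-20 scale used in the experiments).
    Subscales: mental, physical and temporal demand, performance,
    effort and frustration."""
    if len(ratings) != 6:
        raise ValueError("expected one rating per TLX subscale")
    return sum(ratings) / 6.0
```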
The trials were carried out according to the Ethical Committee recommendations (CAAE 58592916.9.1001.5404) and were administered to 10 volunteers, who were asked to rate each category on a scale from 1 to 20. The volunteers consisted of 8 male and 2 female subjects between the ages of 22 and 35, without any physical impairment. The evaluation and comparison processes were carried out for the two UPIs previously mentioned, to identify the strengths and weaknesses of each iteration and find the superior one.
The test consisted of performing different actions with the Galileo Hand to interact with the environment and trying some of the expressions at the volunteers' disposal. The evaluation involved executing the following gestures: "Close" (flexing all fingers), "Peace" (only the index and middle fingers remain extended), "Rock" (all fingers are closed except the index and little fingers) and "Three" (the index, middle and ring fingers are the only ones in an open position). Later, the volunteers were asked to hold a wallet and a bottle and to press a specific key on a computer keyboard (similar to the actions illustrated in Fig. 14). The tasks were repeated three times for the subjects to properly adapt to the operation mode.
VI. RESULTS
Since the prosthesis has to enclose the control board, together with the µLCD and the DC motors, the minimization of the palm is restricted. Considering this, its minimal size is 98 mm × 69.6 mm × 25 mm. In a similar sense, to avoid a disproportionate hand, the fingers' lengths were also limited: 22 mm for the middle and proximal phalanges, and 20 mm for the distal ones. Moreover, the total weight of the prosthetic hand remains under 350 g, excluding the socket and the chosen power source, which does not have to be placed on the patient's stump. This fulfills the requirement of not being so heavy that the user feels uncomfortable when the assistive device is installed on soft tissue.

FIGURE 14: Galileo Bionic Hand performing different grasps helpful for ADLs. (1) Precision (2) Hook (3) Lateral (4) Power
Furthermore, other relevant aspects to mention are the reaction times and general capacities of the system. The MCP joint's minimum flexion and extension times are around 800 ms and 600 ms, respectively. Similarly, the thumb MCP joint's minimum abduction and adduction times remain near 150 ms. Additionally, each finger can hold up to a maximum of 2.5 kg with a driven motor, as shown in Fig. 15, where it was taken to its breaking point; and 5 kg when the actuator is inoperative. Besides, the resulting force exerted with the power grasp has a magnitude of 50 N.
Moreover, the experiment to determine the threshold that restrains the strength of each finger, th, consisted of recording the current demanded by the motor, ia, as well as the gearhead's angular position, θ, during the closing and opening processes. This resulted in the graphs shown in Fig. 16, where the peaks in the second plot reflect a change in the actuator's behavior, i.e., when the motor starts moving or is shut down. Here, one can notice that a latency of less than 10 ms exists between both graphs.
Furthermore, the peaks overcoming the set current threshold (marked in red in the second graph of Fig. 16) indicate the moments the digit starts and finishes flexing, respectively. At the latter point, the motor is shut down to save energy, but the finger remains closed, because the passive tendon's force does not overcome the nominal torque at the gearhead output. The aforementioned limit was selected so that the actuators do not turn off when other peaks occur, i.e., when turning the motor on for the uncoiling process. This led to setting this restraint to different values for each finger, according to their mechanical properties and the desired force the user wants to exert with them, fta. It is relevant to note that the first peak is ignored when evaluating this condition, since it corresponds to the motor starting. However, this is not a problem when the extension process begins, as the elastic favors this movement, so less power is demanded.
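The shutoff rule described above can be sketched as a scan over RMS current samples that skips the initial startup peak; the window length and sample values below are illustrative assumptions, not the firmware's actual parameters.

```python
def detect_stall(i_samples, th, startup_samples=20):
    """Return the index at which the motor should be shut off: the first
    sample after the startup window whose RMS value exceeds the per-finger
    threshold `th`. The startup window length is an illustrative assumption
    used to skip the inrush-current peak mentioned in the text."""
    for k, i_rms in enumerate(i_samples):
        if k < startup_samples:
            continue                # ignore the motor's starting peak
        if i_rms > th:
            return k                # finger stalled against the object
    return None                     # threshold never exceeded
```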
On the other hand, it is also relevant to note the overall lack of noise in the system in terms of the current; however, an IIR low-pass filter could be implemented to mitigate its residual effects.

FIGURE 15: Load testing results for a driven middle finger, where it was brought to its breaking point to determine the maximum weight it can hold.

FIGURE 16: The angular position's behavior during the opening and closing processes of the middle finger is shown in the uppermost graph, together with the current (and active tension force) and its corresponding threshold in the lowermost one.

FIGURE 17: NASA TLX results for both UPIs evaluated. Results for each of the ten volunteers are represented by same-colored bars in each category.
In addition, the resulting behavior of the gearhead's angular position made it possible to determine the finger extension time, te, as used in the FSM in Fig. 11, by measuring the ratio between the flexion and extension times of the digits, which resulted in a factor of around 1.5.
On the other hand, the results of the NASA TLX experiment are shown in Fig. 17, where each bar represents how each individual subject rated the interfaces in each category. Their means are visualized in Fig. 18, along with their standard deviations. The figures reflect a great discrepancy in most categories, except for performance. The version consisting of the direct mapping of actions shows the best results in physical and temporal demand, as well as in the effort required to complete a task, while the multimodal one resulted in the least frustrating interface and the one requiring less mental demand. Both iterations excel in different aspects, but a clear improvement of one over the other is not apparent from this graph. Therefore, an overall performance statistic was elaborated (Fig. 19), which shows the average of all six categories for both versions. They display similarly user-friendly behaviors, around the upper 70% of the NASA TLX scale.
Since the means for both versions are very similar ((a) has a mean of 6.08 and (b) one of 6.1), a factorial Analysis of Variance (ANOVA) test was run on the results to verify whether the discrepancy between them is relevant. The F statistic obtained was 3.84, against a critical value of 0.0005, considering an alpha of 0.05. This supports the main-effect hypothesis, showing an insignificant difference between both interaction processes and resulting in equally user-friendly interfaces.
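For two groups, the one-way ANOVA F statistic can be computed directly from the between- and within-group sums of squares; a pure-Python sketch follows, using synthetic ratings rather than the study's actual data.

```python
def one_way_anova_f(group_a, group_b):
    """F statistic of a one-way ANOVA with two groups (a sketch of the
    kind of test applied to the TLX scores; inputs are synthetic)."""
    all_x = group_a + group_b
    grand = sum(all_x) / len(all_x)
    groups = [group_a, group_b]
    # Between-group sum of squares (df = k - 1)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares (df = N - k)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_x) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)
```

The resulting F value is then compared against the critical value for the chosen alpha to decide whether the difference between the interfaces is significant.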
So, in conclusion, the interface that does not use the buttons proved to be the less physically demanding version, which may lie in the swift selection of actions allowed by the absence of a menu to interact with, also resulting in a lower level of effort to operate. However, although insignificant, a slight inferiority was noticeable in comparison to the alternative. This can be a consequence of the need to memorize the actions mapped to the Myo default poses, which does not come naturally to the subjects, as they need to stay focused on the tasks at hand; this may explain the elevated mental demand and frustration observed in the rating process.

FIGURE 18: Mean and standard deviation of the results gathered from the volunteers' evaluation, where (a) is the sEMG pattern recognition version and (b) the one using the multimodal interface.

FIGURE 19: Overall performance of both interfaces. (a) is the version using the sEMG pattern recognition interface and (b) is the multimodal one.
In contrast, the lack of frustration with the multimodal iteration may result from the option of navigating the menu with the buttons, since the Myo classification process has been known to misinterpret certain actions at times. In addition, an aspect noted after performing these trials was that a multimodal approach using an extended contraction subset did not yield a relevant improvement. However, since both versions showed similar results, it is convenient to provide the patient with a larger gamut of actions for a more customized and practical prosthetic device. Although this increases the price slightly, it still remains under the $350 mark, proving to be a much more affordable alternative than typical commercial products. It is relevant to note that this price includes the PCB, 3D-printing materials, electronic components and a power source.
VII. CONCLUSIONS
An affordable and functional upper-limb prosthesis for tran-
sradial amputees was successfully tested and validated. In
addition to that, since its weight remains under the one of
the average human hand’s, its usability over long periods
of time is favored. Moreover, regarding the operational as-
pect of the Galileo Hand, it is relevant to note the swift
responsiveness of the on-off sEMG controller, which can be
observed in Fig. 16, as it possesses a latency, which is barely,
if at all, noticed by the user. Also, its modular, intrinsic
and versatile design allows for its adaptation to the user’s
needs, such as providing alternate ways of gathering the user
intent. Furthermore, since the system is an under-tendon-
driven machine, the mechanism is underactuated while still
allowing for an efficient gripper and maintaining a low cost,
because it requires less actuators than alternate systems. Its
grip feasibility was validated through the tests performed by
the volunteers when interacting with arbitrary objects in a
successful manner, as shown in Fig. 14. Additionally, it was
also proven that the maximum force exerted by each finger is
enough to accomplish common ADLs.
Finally, based on the NASA TLX scale (as shown in Fig. 19), the proposed UPI has been shown to be user-friendly and also allows increasing the number of customized hand postures that can be performed; an aspect commonly lacking in many other prosthetic devices, even though it is an important one, as it permits operation by people with diverse levels of amputation. Additionally, this UPI only requires the sEMG system to detect two contractions, which were selected so that they can be easily performed by users with transradial amputations. However, these results still have to be compared against those gathered from tests run on a relevant sample of physically impaired subjects.
REFERENCES
[1] D. Pilling, P. Barrett, and M. Floyd, “Disabled people and the internet:
Experiences, barriers and opportunities,” University of London, London,
UK, 2004.
[2] World Health Organization, “World report on disability,” World Health Organization, Geneva, CH, 2011.
[3] D. Cummings, “Prosthetics in the developing world: a review of the
literature,” Prosthetics and orthotics international, vol. 20, no. 1, pp. 51–
60, 1996.
[4] J. Ten Kate, G. Smit, and P. Breedveld, “3D-printed upper limb prostheses:
a review,” Disability and Rehabilitation: Assistive Technology, vol. 12,
no. 3, pp. 300–314, 2017.
[5] M. M. Bridges, M. P. Para, and M. J. Mashner, “Control system archi-
tecture for the modular prosthetic limb,” Johns Hopkins APL Technical
Digest, vol. 30, no. 3, 2011.
[6] T. J. Levy and J. D. Beaty, “Revolutionizing prosthetics: neuroscience
framework,” Johns Hopkins APL Technical Digest, vol. 30, no. 3, pp. 223–
229, 2011.
[7] G. Hotson, D. P. McMullen, M. S. Fifer, M. S. Johannes, K. D. Katyal,
M. P. Para, R. Armiger, W. S. Anderson, N. V. Thakor, B. A. Wester et al.,
“Individual finger control of a modular prosthetic limb using high-density
electrocorticography in a human subject,” Journal of neural engineering,
vol. 13, no. 2, p. 026017, 2016.
[8] C. Cipriani, M. Controzzi, and M. C. Carrozza, “Objectives, criteria and
methods for the design of the SmartHand transradial prosthesis,” Robotica,
vol. 28, no. 06, pp. 919–927, 2010.
[9] C. Cipriani, Controzzi, and M. C. Carrozza, “The SmartHand transradial
prosthesis,” Journal of neuroengineering and rehabilitation, vol. 8, no. 1,
p. 1, 2011.
[10] F. Cordella, A. L. Ciancio, R. Sacchetti, A. Davalli, A. G. Cutti,
E. Guglielmelli, and L. Zollo, “Literature review on needs of upper limb
prosthesis users,” Frontiers in neuroscience, vol. 10, p. 209, 2016.
[11] A. Fougner, Ø. Stavdahl, P. J. Kyberd, Y. G. Losier, P. Parker et al., “Con-
trol of upper limb prostheses: Terminology and proportional myoelectric
control-A review,” Transactions on Neural Systems and Rehabilitation
Engineering, vol. 20, no. 5, pp. 663–677, 2012.
[12] C. Medynski and B. Rattray, “Bebionic prosthetic design,” in Proceedings of Myoelectric Symposium, New Brunswick, CA, 2011, pp. 279–282.
[13] J. Troccaz and C. Connolly, “Prosthetic hands from touch bionics,” Indus-
trial Robot: An International Journal, vol. 35, no. 4, pp. 290–293, 2008.
[14] G. P. Kontoudis, M. V. Liarokapis, A. G. Zisimatos, C. I. Mavrogiannis,
and K. J. Kyriakopoulos, “Open-source, anthropomorphic, underactuated
robot hands with a selectively lockable differential mechanism: Towards
affordable prostheses,” in Intelligent Robots and Systems (IROS), 2015
IEEE/RSJ International Conference on. IEEE, 2015, pp. 5857–5862.
[15] P. Slade, A. Akhtar, M. Nguyen, and T. Bretl, “Tact: Design and per-
formance of an open-source, affordable, myoelectric prosthetic hand,” in
2015 IEEE International Conference on Robotics and Automation (ICRA).
IEEE, 2015, pp. 6451–6456.
[16] A. Akhtar, K. Y. Choi, M. Fatina, J. Cornman, E. Wu, J. Sombeck, C. Yim,
P. Slade, J. Lee, J. Moore et al., “A low-cost, open-source, compliant hand
for enabling sensorimotor control for people with transradial amputations,”
in 2016 38th Annual International Conference of the IEEE Engineering in
Medicine and Biology Society (EMBC). IEEE, 2016, pp. 4642–4645.
[17] J. Fajardo, A. Lemus, and E. Rohmer, “Galileo bionic hand: sEMG acti-
vated approaches for a multifunction upper-limb prosthetic,” in 2015 IEEE
Thirty Fifth Central American and Panama Convention (CONCAPAN
XXXV). IEEE, 2015, pp. 1–6.
[18] J. Fajardo, V. Ferman, A. Lemus, and E. Rohmer, “An affordable open-
source multifunctional upper-limb prosthesis with intrinsic actuation,” in
Advanced Robotics and its Social Impacts (ARSO), 2017 IEEE Workshop
on. IEEE, 2017, pp. 1–6.
[19] H. Liu, D. Yang, S. Fan, and H. Cai, “On the development of intrinsically-
actuated, multisensory dexterous robotic hands,” ROBOMECH Journal,
vol. 3, no. 1, p. 1, 2016.
[20] I. Kapandji, The Physiology of the Joints, Volume I: Upper Limb.
Churchill Livingstone, 2007, pp. 1–372.
[21] M. M. Rahman, T. T. Choudhury, S. N. Sidek, A. Awang et al., “Math-
ematical modeling and trajectory planning of hand finger movements,” in
2014 First Conference on Systems Informatics, Modelling and Simulation,
2014, pp. 43–47.
[22] M. R. Cutkosky, “On grasp choice, grasp models, and the design of hands
for manufacturing tasks,” IEEE Transactions on robotics and automation,
vol. 5, no. 3, pp. 269–279, 1989.
[23] A. Fougner, E. Scheme, A. D. Chan, K. Englehart, and Ø. Stavdahl,
“Resolving the limb position effect in myoelectric pattern recognition,”
IEEE Transactions on Neural Systems and Rehabilitation Engineering,
vol. 19, no. 6, pp. 644–651, 2011.
[24] W. Guo, X. Sheng, H. Liu, and X. Zhu, “Mechanomyography assisted myoelectric sensing for upper-extremity prostheses: a hybrid approach,” IEEE Sensors Journal, vol. 17, no. 10, pp. 3100–3108, 2017.
[25] S. Wilson and R. Vaidyanathan, “Upper-limb prosthetic control using
wearable multichannel mechanomyography,” in 2017 International Con-
ference on Rehabilitation Robotics (ICORR). IEEE, 2017, pp. 1293–
1298.
[26] E. Fujiwara, Y. T. Wu, C. Suzuki, D. Andrade, A. Ribas, and E. Rohmer,
“Optical fiber force myography sensor for applications in prosthetic hand
control,” in 2018 IEEE Fifteenth International Workshop on Advanced
Motion Control. IEEE, 2018, pp. 1–6.
[27] J. Fajardo, A. R. Neto, W. Silva, M. Gomes, E. Fujiwara, and E. Rohmer,
“A wearable robotic glove based on optical fmg driven controller,” in 2019
IEEE 4th International Conference on Advanced Robotics and Mechatron-
ics (ICARM). IEEE, 2019, pp. 81–86.
[28] Y.-P. Zheng, M. Chan, J. Shi, X. Chen, and Q.-H. Huang, “Sonomyog-
raphy: Monitoring morphological changes of forearm muscles in actions
with the feasibility for the control of powered prosthesis,” Medical engi-
neering & physics, vol. 28, no. 5, pp. 405–415, 2006.
[29] A. S. Dhawan, B. Mukherjee, S. Patwardhan, N. Akhlaghi, G. Diao,
G. Levay, R. Holley, W. M. Joiner, M. Harris-Love, and S. Sikdar,
“Proprioceptive sonomyographic control: A novel method for intuitive and
proportional control of multiple degrees-of-freedom for individuals with
upper extremity limb loss,” Scientific reports, vol. 9, no. 1, pp. 1–15, 2019.
[30] M. S. Trachtenberg, G. Singhal, R. Kaliki, R. J. Smith, and N. V. Thakor,
“Radio frequency identification-an innovative solution to guide dexter-
ous prosthetic hands,” in Engineering in Medicine and Biology Society,
EMBC, 2011 annual international conference of the IEEE. IEEE, 2011,
pp. 3511–3514.
[31] C. M. Oppus, J. R. R. Prado, J. C. Escobar, J. A. G. Mariñas, and R. S.
Reyes, “Brain-computer interface and voice-controlled 3d printed pros-
thetic hand,” in Region 10 Conference (TENCON), 2016 IEEE. IEEE,
2016, pp. 2689–2693.
[32] D. P. McMullen, G. Hotson, K. D. Katyal, B. A. Wester, M. S. Fifer, T. G.
McGee, A. Harris, M. S. Johannes, R. J. Vogelstein, A. D. Ravitz et al.,
“Demonstration of a semi-autonomous hybrid brain–machine interface
using human intracranial EEG, eye tracking, and computer vision to
control a robotic upper limb prosthetic,” IEEE Transactions on Neural
Systems and Rehabilitation Engineering, vol. 22, no. 4, pp. 784–796, 2014.
[33] M. Kopicki, R. Detry, M. Adjigble, R. Stolkin, A. Leonardis, and J. L.
Wyatt, “One-shot learning and generation of dexterous grasps for novel
objects,” The International Journal of Robotics Research, vol. 35, no. 8,
pp. 959–976, 2016.
[34] M. Markovic, S. Dosen, C. Cipriani, D. Popovic, and D. Farina, “Stereo-
vision and augmented reality for closed-loop control of grasping in hand
prostheses,” Journal of neural engineering, vol. 11, no. 4, p. 046001, 2014.
[35] G. Ghazaei, A. Alameer, P. Degenaar, G. Morgan, and K. Nazarpour,
“Deep learning-based artificial vision for grasp classification in myoelec-
tric hands,” Journal of neural engineering, vol. 14, no. 3, p. 036025, 2017.
[36] N. Bu, Y. Bandou, O. Fukuda, H. Okumura, and K. Arai, “A semi-
automatic control method for myoelectric prosthetic hand based on image
information of objects,” in Intelligent Informatics and Biomedical Sci-
ences (ICIIBMS), 2017 International Conference on. IEEE, 2017, pp.
23–28.
[37] J. Fajardo, V. Ferman, A. Muñoz, D. Andrade, A. R. Neto, and E. Rohmer,
“User-prosthesis interface for upper limb prosthesis based on object clas-
sification,” in 2018 Latin American Robotic Symposium, 2018 Brazilian
Symposium on Robotics (SBR) and 2018 Workshop on Robotics in
Education (WRE). IEEE, 2018, pp. 390–395.
[38] A. Saenz. How much is the newest advanced artificial hand? $11,000
usd (video). [Online]. Available: https://singularityhub.com/2010/06/30/
how-much-is-the-newest- advanced-artificial-hand- 11000-usd- video/
[39] J. T. Belter, J. L. Segil, A. M. Dollar, and R. F. Weir, “Mechanical design
and performance specifications of anthropomorphic prosthetic hands: A
review.” Journal of Rehabilitation Research & Development, vol. 50, no. 5,
2013.
[40] R. Ozawa, K. Hashirii, and H. Kobayashi, “Design and control of under-
actuated tendon-driven mechanisms,” in 2009 IEEE International Confer-
ence on Robotics and Automation. IEEE, 2009, pp. 1522–1527.
[41] T. Takaki and T. Omata, “High-performance anthropomorphic robot hand
with grasping-force-magnification mechanism,” IEEE/ASME Transac-
tions on Mechatronics, vol. 16, no. 3, pp. 583–591, 2011.
[42] P. Dario, C. Laschi, M. C. Carrozza, E. Guglielmelli, G. I. Teti, B. Massa,
M. Zecca, D. Taddeucci, and F. Leoni, “An integrated approach for the de-
sign and development of a grasping and manipulation system in humanoid
robotics,” in (IROS 2000). Proceedings. 2000 IEEE/RSJ International
Conference on Intelligent Robots and Systems, vol. 1. IEEE, 2000, pp.
1–7.
[43] C. Zhu, C. Song, T. C. Lim, and S. Vijayakar, “Geometry design and tooth contact analysis of crossed beveloid gears for marine transmissions,” Chinese Journal of Mechanical Engineering, vol. 25, no. 2, pp. 328–337, 2012.
[44] M. Gouda, “CMSIS-RTOS an API interface standard for real-time operat-
ing systems,” in ARM Technology Symposia, 2012.
[45] M. Rossi, S. Benatti, E. Farella, and L. Benini, “Hybrid EMG classifier
based on HMM and SVM for hand gesture recognition in prosthetics,”
in Industrial Technology (ICIT), 2015 IEEE International Conference on.
IEEE, 2015, pp. 1700–1705.
[46] S. Benatti, B. Milosevic, F. Casamassima, P. Schonle, P. Bunjaku, S. Fateh,
Q. Huang, and L. Benini, “EMG-based hand gesture recognition with
flexible analog front end,” in Biomedical Circuits and Systems Conference
(BioCAS). IEEE, 2014, pp. 57–60.
[47] H.-P. Huang and C.-Y. Chiang, “DSP-based controller for a multi-degree
prosthetic hand,” in International Conference on Robotics and Automation,
2000. Proceedings (ICRA)., vol. 2. IEEE, 2000, pp. 1378–1383.
[48] J. Yiu, “The definitive guide to ARM CORTEX-M3 and CORTEX-M4
processors,” Elsevier, 3rd Edition, Chapter 22, Using the ARM CMSIS-
DSP Library, 2014.
[49] M. Reaz, M. Hussain, and F. Mohd-Yasin, “Techniques of EMG signal
analysis: detection, processing, classification and applications,” Biological
procedures online, vol. 8, no. 1, pp. 11–35, 2006.
[50] C. J. De Luca, “The use of surface electromyography in biomechanics,”
Journal of applied biomechanics, vol. 13, pp. 135–163, 1997.
[51] E. N. Kamavuako, E. J. Scheme, and K. B. Englehart, “Determination of
optimum threshold values for EMG time domain features; a multi-dataset
investigation,” Journal of neural engineering, vol. 13, no. 4, p. 046011,
2016.
[52] D. Andrade, A. R. Neto, and E. Rohmer, “Human prosthetic interaction: Integration of several techniques,” Simpósio Brasileiro de Automação Inteligente, 2017.
JULIO FAJARDO received the B.S. degree in
electrical and computer engineering from Galileo
University, Guatemala City, Guatemala, in 2012
and the M.S. degree in industrial electronics also
from Galileo University in 2015. He is currently
pursuing the Ph.D. degree in electrical engineering at State University of Campinas (UNICAMP),
Campinas, São Paulo, Brazil.
From 2013 to 2017, he was a Research Assistant
with the Turing Research Lab, FISICC, Galileo
University. His research interests include assistive robotics, especially focused
on upper-limb prostheses and orthoses; robust control applied to robotics by
the use of linear matrix inequalities; digital signal processing and machine
learning techniques to interpret the user intent through electromyography,
force myography and near-infrared spectroscopy signals.
VICTOR FERMAN received the B.S. degree in mechatronics engineering from Galileo Univer-
sity, Guatemala City, Guatemala, and the M.S.
degree in industrial electronics also from Galileo
University in 2015. He is currently pursuing the Ph.D. degree in electrical engineering at State
University of Campinas (UNICAMP), Campinas,
São Paulo, Brazil.
From 2015 to 2017, he worked as Research
Assistant with the Turing Research Lab, FISICC,
Galileo University. His research interests include assistive robotics focused
on upper-limb prostheses, human-machine interface and gait control for
lower-limb exoskeletons.
DIEGO CARDONA received his B.S. degree in electrical and computer engineering from Galileo University in Guatemala City, Guatemala, in 2020.
There, he is a Research and Development Assis-
tant at the Turing Laboratory.
His work has been focused on PCB design,
embedded systems in upper-limb prostheses and
the way these devices are meant to interact with
the users.
GUILLERMO MALDONADO is an undergrad-
uate student pursuing a B.S. degree in mecha-
tronics engineering, from Galileo University in
Guatemala. He also works for the Turing Research
Laboratory as a Research Assistant, mainly on the research and development of prosthetic hands, their mechanical designs and intricacies, and 3D
printing.
ALI LEMUS is the Director of Research and De-
velopment in the Computer Science Department
in Galileo University (2011) Turing Lab (2013).
Co-Founder Elemental Geeks (2011). Master’s in
Applied Information Sciences (University of To-
hoku, Japan 2009). Researcher in Artificial Intelli-
gence and Neural Networks in the Intelligent Nano
Integration Systems (Japan, 2007). Computer Sci-
ence Engineer (Universidad Francisco Marroquin,
2002). His special fields of interest include Machine Learning, E-Learning, Education, MOOCs, Gamification and Games.
ERIC ROHMER is a professor and a robotic
researcher at the Faculty of Electrical and Computer Engineering of the State University of Campinas in Brazil. He concluded his Ph.D. in 2005 at Tohoku University in Japan, where he worked as a robotics researcher until 2011. He is a co-founder
of the Brazilian Institute of Neurosciences and
Neurotechnologies (BRAINN). His field of inter-
est concerns dynamic simulation based telerobotic
platforms, mobile robots’ locomotion for space
exploration and search and rescue operations, and assistive and rehabilitation
robotics focusing on mobility and upper limbs assistance. He is one of the
researchers who designed and developed Quince robot, the first Japanese
robot in use inside the Fukushima crippled nuclear reactor.
VOLUME 1, 2020 13
... Alternatively, one may explore the FMG transient characteristics to optimize the number of transducers necessary for recognizing a gesture. Albeit FMG patterns depend on the initial and final hand posetraining the classifier may be laborious for an inclusive set of postures,-assuming an event-driven finite state (EDFS) control alleviates the calibration task by mapping a few input gestures to a compendium of actions, which is applicable for commanding practical prostheses [24,25]. Figure 1 shows the system setup [17]. ...
... However, applications such as prostheses control are feasible through a limited set of gestures by implementing an event-driven finite state machine approach [24]. Instead of performing laborious training for a collection of classes, one may map a limited set of gestures into a comprehensive dictionary of actions to drive the manipulator, making the bionic prosthesis more intuitive and less exhaustive to the operator [25]. Yet, integrating the FMG sensor with other technologies (sEMG, mechanomyography, optical tracking, etc.) through collaborative or competitive data fusion is another possibility to enhance the system reliability regarding a modular approach [25,39]. ...
... Instead of performing laborious training for a collection of classes, one may map a limited set of gestures into a comprehensive dictionary of actions to drive the manipulator, making the bionic prosthesis more intuitive and less exhaustive to the operator [25]. Yet, integrating the FMG sensor with other technologies (sEMG, mechanomyography, optical tracking, etc.) through collaborative or competitive data fusion is another possibility to enhance the system reliability regarding a modular approach [25,39]. ...
Article
Full-text available
Force myography (FMG) detects hand gestures based on muscular contractions, emerging as an alternative to surface electromyography. However, typical FMG systems rely on spatially-distributed arrays of force-sensing resistors to resolve ambiguities. The aim of this proof-of-concept study is to develop a method for identifying hand poses from the static and dynamic components of FMG waveforms based on a compact, single-channel optical fiber sensor. As the user performs a gesture, a micro-bending transducer positioned on the belly of the forearm muscles registers the dynamic optical signals resulting from the exerted forces. A Raspberry Pi 3 minicomputer performs data acquisition and processing. Then, convolutional neural networks correlate the FMG waveforms with the target postures, yielding a classification accuracy of (93.98 ± 1.54)% for eight postures, based on the interrogation of a single fiber transducer.
... This is further backed up by a research survey conducted to understand the expectations, needs, and difficulties of those who use prosthetics for their aid. Article [25] created a myoelectric, anthropomorphic, open-source prosthetic for those in developing nations who have transradial amputations and also described the design approach and demonstrated that the Tact performs as well as or better than current commercial myoelectric prostheses while being simple to manufacture. In article [26] the goal of the research is to contribute to the development of a deep learning-based approach for identifying hand motions using surface electromyography data. ...
Preprint
Full-text available
For a person who lost their arm or an upper limb, even a simple task becomes cumbersome because of their disability. Prosthetics play an important role in helping these people cope with the challenges they face. Swift developments in technology have resulted in powered myoelectric hand prosthetics entering the market but are avoided by many for being expensive to purchase and maintain. This paper outlines the development of an economical prosthetic claw that can be controlled by muscle signals. The project primarily aims to bridge the gap between cheap non-functional prosthetics and expensive fully controllable prosthetics by being affordable, durable, and easy to manufacture without sacrificing functionality. The claw and its components have been designed to be easy to modify, repair, and replace, making it a flexible platform for customization as per the user’s need. This translates to an efficient and feasible solution to the ever-growing need for affordable functional upper limb prosthetics for the physically challenged.
... This manuscript reports a control methodology for multiple grasps by a prosthetic hand without using pattern recognition algorithms, at lower computational cost and with much higher accuracy. Using single-channel EMG and an Android application, it reports multiple grasps and individual finger movements by a prosthetic hand with 100% accuracy. Frequently reported grasp patterns performed by prosthetic hands available in the literature include thumb-index finger, tripod, sphere (power), sphere (precision), large diameter and lateral pinch (Abarca et al. 2019; Fajardo et al. 2020; Zhou et al. 2019). In the reported work, the prosthetic hand can execute disk (precision), disk (power), thumb-2 finger, thumb-3 finger, light tool, thumb-4 finger, platform push, adducted thumb, medium wrap, small diameter in addition to the previously reported grasp patterns. ...
Article
Full-text available
The human hand performs multiple grasp types during daily living activities. Adaptation of grasping force to avoid object slippage, as employed by the human brain, has been postulated as an intelligent approach. Recently, research for prosthetic hands with human-like capabilities has been followed by many researchers, but with limited success. Advanced prosthetic hands that can perform different grasp types use multiple electromyogram (EMG) channels. This causes the user to wear more electrodes, leading to inconveniences with inadequate grasping accuracy. This manuscript reports a prosthetic hand performing 16 grasp types in real-time using a single-channel EMG customized with an Android application called Graspy. An embedded EMG-based grasping controller with a network of force sensing resistors and kinematic sensors prevents slipping and breaking of grasped objects. Experiments were conducted with four able-bodied subjects for performing grasp types and individual finger movements. A proportional-integral-derivative algorithm was implemented to regulate finger joint kinematics of the prosthetic hand in relation to the force sensing resistors. Following the grasping intention based on EMG, the control algorithm can prevent slipping and breaking of grasped objects. The hand could perform grasping of objects like tennis ball, cookie, knife, screwdriver, water bottle, egg, pen, plastic container, circular disk etc. while emulating the 16 grasp types and individual finger movements with 100% accuracy.
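The proportional-integral-derivative regulation described in this abstract can be illustrated with a small discrete PID loop. This is an illustrative toy, not the reported Graspy controller; the gains, the first-order plant model, and the 2.0 N setpoint are assumptions:

```python
class PID:
    """Discrete PID regulating grip force toward a setpoint read from a
    force-sensing resistor (FSR); gains are purely illustrative."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy closed loop: a first-order "finger contact" plant tracking a 2.0 N target.
pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.01)
force = 0.0
for _ in range(5000):                 # 50 s of simulated time at dt = 0.01 s
    u = pid.step(2.0, force)
    force += (u - force) * 0.01       # simplistic actuator/contact dynamics
print(round(force, 2))                # settles near the 2.0 N setpoint
```

In a real hand the measured force would come from the FSR network and the PID output would command finger joint velocity or motor current; the structure of the loop is unchanged.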
... As a precise and noninvasive way of decoding user's intention of hand movements, the surface electromyography (sEMG)-based hand movement recognition has been extensively investigated in the area of rehabilitation engineering [1,2] and human-computer interaction [3,4]. Having realized that one of the key issues of sEMG-based hand movement recognition is a machine-learning-driven decision-making problem of classifying sequences of sEMG signals, many efforts have been made in improving sEMG-based hand movement recognition by designing more representative features [5], developing more sophisticated machine-learning models [6], and increasing the number of sensors [7]. ...
Article
Full-text available
As a machine-learning-driven decision-making problem, the surface electromyography (sEMG)-based hand movement recognition is one of the key issues in robust control of noninvasive neural interfaces such as myoelectric prosthesis and rehabilitation robot. Despite the recent success in sEMG-based hand movement recognition using end-to-end deep feature learning technologies based on deep learning models, the performance of today’s sEMG-based hand movement recognition system is still limited by the noisy, random, and nonstationary nature of sEMG signals and researchers have come up with a number of methods that improve sEMG-based hand movement recognition via feature engineering. Aiming at achieving higher sEMG-based hand movement recognition accuracies while enabling a trade-off between performance and computational complexity, this study proposed a progressive fusion network (PFNet) framework, which improves sEMG-based hand movement recognition via integration of domain knowledge-guided feature engineering and deep feature learning. In particular, it learns high-level feature representations from raw sEMG signals and engineered time-frequency domain features via a feature learning network and a domain knowledge network, respectively, and then employs a 3-stage progressive fusion strategy to progressively fuse the two networks together and obtain the final decisions. Extensive experiments were conducted on five sEMG datasets to evaluate our proposed PFNet, and the experimental results showed that the proposed PFNet could achieve the average hand movement recognition accuracies of 87.8%, 85.4%, 68.3%, 71.7%, and 90.3% on the five datasets, respectively, which outperformed those achieved by the state of the art.
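The "engineered" sEMG features this abstract refers to are classically time-domain quantities such as mean absolute value, RMS, waveform length, and zero crossings. A generic sketch of their computation, not the PFNet implementation; the synthetic test signal and the zero-crossing threshold are assumptions:

```python
import math

def emg_features(window, zc_threshold=0.01):
    """Classic time-domain sEMG features for one analysis window:
    mean absolute value (MAV), root mean square (RMS),
    waveform length (WL), and zero-crossing count (ZC)."""
    n = len(window)
    mav = sum(abs(x) for x in window) / n
    rms = math.sqrt(sum(x * x for x in window) / n)
    wl = sum(abs(window[i] - window[i - 1]) for i in range(1, n))
    zc = sum(
        1
        for i in range(1, n)
        if window[i] * window[i - 1] < 0
        and abs(window[i] - window[i - 1]) > zc_threshold
    )
    return {"MAV": mav, "RMS": rms, "WL": wl, "ZC": zc}

# Example: a 200-sample synthetic signal whose amplitude rises mid-window.
sig = [math.sin(0.3 * i) * (0.2 + 0.8 * (i > 100)) for i in range(200)]
feats = emg_features(sig)
print({k: round(v, 3) for k, v in feats.items()})
```

A feature vector like this, computed per sliding window and per channel, is what a "domain knowledge network" would consume alongside the raw signal.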
... These compensatory motions have been shown to cause frequent shoulder pain and carpal tunnel syndrome along with other secondary impairments [9], eventually leading to rejection of the bionic arm [7]. In order to address these issues, numerous lightweight transradial prostheses with multi-DoF fingers have been developed [10]-[12]. However, very few studies focused on the multi-DoF functional wrist [13]-[15], despite the important role of the active wrist during daily living tasks. ...
Conference Paper
Full-text available
Upper-limb prostheses have a high abandonment rate due to low function and heavy weight. These two factors are coupled because higher function leads to additional motors, batteries, and other electronics which make the device heavier. Robotic emulators have been used for lower limb studies to decouple the device weight and high functionality in order to explore human-centered designs and controllers featuring off-board motors. In this study, we designed a prosthetic emulator for transradial (below elbow) prosthesis to identify the optimal design and control of the user. The device weighs only half as much as the physiological arm and features two active wrist movements with active power grasping. The detailed design of the prosthetic arm and the performance of the system is presented in this study. We envision this emulator can be used as a test-bed to identify the desired specification of transradial prosthesis, human-robot interaction, and human-in-the-loop control.
... Our future work will focus on online evaluation of the proposed multiview deep learning framework. Moreover, in the future, we will investigate the integration of our proposed framework with hardware systems, such as upperlimb prostheses [51,52] and space robots [53,54] that are driven by multichannel sEMG signals. ...
Article
Full-text available
Hand gesture recognition based on surface electromyography (sEMG) plays an important role in the field of biomedical and rehabilitation engineering. Recently, there is a remarkable progress in gesture recognition using high-density surface electromyography (HD-sEMG) recorded by sensor arrays. On the other hand, robust gesture recognition using multichannel sEMG recorded by sparsely placed sensors remains a major challenge. In the context of multiview deep learning, this paper presents a hierarchical view pooling network (HVPN) framework, which improves multichannel sEMG-based gesture recognition by learning not only view-specific deep features but also view-shared deep features from hierarchically pooled multiview feature spaces. Extensive intrasubject and intersubject evaluations were conducted on the large-scale noninvasive adaptive prosthetics (NinaPro) database to comprehensively evaluate our proposed HVPN framework. Results showed that when using 200 ms sliding windows to segment data, the proposed HVPN framework could achieve the intrasubject gesture recognition accuracy of 88.4%, 85.8%, 68.2%, 72.9%, and 90.3% and the intersubject gesture recognition accuracy of 84.9%, 82.0%, 65.6%, 70.2%, and 88.9% on the first five subdatabases of NinaPro, respectively, which outperformed the state-of-the-art methods.
Article
Full-text available
Hand prostheses should provide functional replacements of lost hands. Yet current prosthetic hands are often neither intuitive to control nor easy for amputees to use. Commercially available prostheses are usually controlled based on EMG signals triggered by the user to perform grasping tasks. Such EMG-based control requires long training and depends heavily on the robustness of the EMG signals. Our goal is to develop prosthetic hands with semi-autonomous grasping abilities that lead to more intuitive control by the user. In this paper, we present the development of prosthetic hands that enable such abilities as first results toward this goal. The developed prostheses provide intelligent mechatronics including adaptive actuation, multi-modal sensing and on-board computing resources to enable autonomous and intuitive control. The hands are scalable in size and based on an underactuated mechanism which allows the adaptation of grasps to the shape of arbitrary objects. They integrate a multi-modal sensor system including a camera and in the newest version a distance sensor and IMU. A resource-aware embedded system for in-hand processing of sensory data and control is included in the palm of each hand. We describe the design of the new version of the hands, the female hand prosthesis with a weight of 377 g, a grasping force of 40.5 N and closing time of 0.73 s. We evaluate the mechatronics of the hand, its grasping abilities based on the YCB Gripper Assessment Protocol as well as a task-oriented protocol for assessing the hand performance in activities of daily living. Further, we exemplarily show the suitability of the multi-modal sensor system for sensory-based, semi-autonomous grasping in daily life activities. The evaluation demonstrates the merit of the hand concept, its sensor and in-hand computing systems.
Article
Full-text available
Limb amputation can cause severe functional disability for the performance of activities of daily living. Previous studies have found differences in cognitive demands imposed by prosthetic devices due to variations in their design. The objectives of this article were to 1) identify the range of cognitive workload (CW) assessment techniques used in prior studies comparing different prosthetic devices, 2) identify the device configurations or features that reduced CW of users, and 3) provide guidelines for designing future prosthetic devices to reduce CW. A literature search was conducted using Compendex, Inspec, Web of Science, Proquest, IEEE, Engineering Research Database, PubMed, Cochrane, and Google Scholar. Forty-three studies met the inclusion criteria. Findings suggested that CW of prosthetic devices was assessed using physiological, task performance, and subjective measures. However, due to the limitations of these methods, there is a need for more theoretical and model-based approaches to quantify CW. Device configurations such as hybrid input signals and use of multimodal feedback can reduce CW of prosthetic devices. Furthermore, to evaluate the effectiveness of a training strategy for reducing CW and improving device usability, both task performance and subjective measures should be considered. Based on the literature review, a set of guidelines was provided to improve the usability of future prosthetic devices and reduce CW.
Article
Full-text available
The complexity of the user interfaces and the operating modes present in numerous assistive devices, such as intelligent prostheses, influences patients to shed them from their daily living activities. A methodology is proposed to evaluate how diverse aspects impact the workload evoked when using an upper-limb bionic prosthesis for unilateral transradial amputees, and thus to determine how user-friendly an interface is. The evaluation process consists of adapting the same 3D-printed terminal device to the different user-prosthesis-interface schemes to facilitate running the tests and avoid any possible bias. Moreover, a study comparing the results gathered by both limb-impaired and healthy subjects was carried out to contrast the subjective opinions of both types of volunteers and determine whether their reactions have a significant discrepancy, as done in several other studies.
Article
Full-text available
Technological advances in multi-articulated prosthetic hands have outpaced the development of methods to intuitively control these devices. In fact, prosthetic users often cite "difficulty of use" as a key contributing factor for abandoning their prostheses. To overcome the limitations of the currently pervasive myoelectric control strategies, namely unintuitive proportional control of multiple degrees-of-freedom, we propose a novel approach: proprioceptive sonomyographic control. Unlike myoelectric control strategies which measure electrical activation of muscles and use the extracted signals to determine the velocity of an end-effector; our sonomyography-based strategy measures mechanical muscle deformation directly with ultrasound and uses the extracted signals to proportionally control the position of an end-effector. Therefore, our sonomyography-based control is congruent with a prosthetic user’s innate proprioception of muscle deformation in the residual limb. In this work, we evaluated proprioceptive sonomyographic control with 5 prosthetic users and 5 able-bodied participants in a virtual target achievement and holding task for 5 different hand motions. We observed that with limited training, the performance of prosthetic users was comparable to that of able-bodied participants and thus conclude that proprioceptive sonomyographic control is a robust and intuitive prosthetic control strategy.
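The proportional position control described in this abstract (muscle deformation mapped directly onto end-effector position rather than velocity) can be sketched generically. This is an illustrative function, not the authors' implementation; the calibration bounds and signal values are assumptions:

```python
def position_command(deformation, d_rest, d_max, pos_min=0.0, pos_max=1.0):
    """Proportional *position* control: normalized muscle deformation
    maps directly onto end-effector position.

    d_rest/d_max are per-user calibration values (deformation at rest
    and at full contraction); the mapping clamps outside that range."""
    alpha = (deformation - d_rest) / (d_max - d_rest)
    alpha = min(max(alpha, 0.0), 1.0)     # clamp to the calibrated range
    return pos_min + alpha * (pos_max - pos_min)

# Half-way deformation between rest (0.1) and full contraction (1.0):
print(round(position_command(0.55, d_rest=0.1, d_max=1.0), 3))  # → 0.5
```

Unlike velocity-mode myoelectric control, holding a fixed deformation holds a fixed position, which is what makes the scheme congruent with the user's proprioception.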
Conference Paper
Full-text available
The complexity of User-Prosthesis Interfaces (UPIs) to control and select different grip modes and gestures of active upper-limb prostheses, as well as the issues presented by the use of electromyography (EMG), along with the long periods of training and adaptation, influence amputees to stop using the device. Moreover, development cost and challenging research make the final product too expensive for the vast majority of transradial amputees and often leave the amputee with an interface that does not satisfy their needs. Usually, EMG-controlled multi-grasp prostheses map the challenging detection of a specific contraction of a group of muscles to one type of grasp, limiting the number of possible grasps to the number of distinguishable muscular contractions. To reduce costs and to facilitate the interaction between the user and the system in a customized way, we propose a hybrid UPI based on object classification from images and EMG, integrated with a 3D-printed upper-limb prosthesis, controlled by a smartphone application developed in Android. This approach allows easy updates of the system and lowers the cognitive effort required from the user, satisfying a trade-off between functionality and low cost. Therefore, the user can achieve endless predefined types of grips, gestures, and sequences of actions by taking pictures of the object to interact with, using only four muscle contractions to validate and actuate a suggested type of interaction. Experimental results showed great mechanical performance of the prosthesis when interacting with everyday life objects, and high accuracy and responsiveness of the controller and classifier.
Article
Full-text available
Objective: Computer vision-based assistive technology solutions can revolutionise the quality of care for people with sensorimotor disorders. The goal of this work was to enable trans-radial amputees to use a simple, yet efficient, computer vision system to grasp and move common household objects with a two-channel myoelectric prosthetic hand. Approach: We developed a deep learning-based artificial vision system to augment the grasp functionality of a commercial prosthesis. Our main conceptual novelty is that we classify objects with regards to the grasp pattern without explicitly identifying them or measuring their dimensions. A convolutional neural network (CNN) structure was trained with images of over 500 graspable objects. For each object, 72 images, at [Formula: see text] intervals, were available. Objects were categorised into four grasp classes, namely: pinch, tripod, palmar wrist neutral and palmar wrist pronated. The CNN setting was first tuned and tested offline and then in realtime with objects or object views that were not included in the training set. Main results: The classification accuracy in the offline tests reached [Formula: see text] for the seen and [Formula: see text] for the novel objects; reflecting the generalisability of grasp classification. We then implemented the proposed framework in realtime on a standard laptop computer and achieved an overall score of [Formula: see text] in classifying a set of novel as well as seen but randomly-rotated objects. Finally, the system was tested with two trans-radial amputee volunteers controlling an i-limb Ultra(TM) prosthetic hand and a motion control(TM) prosthetic wrist; augmented with a webcam. After training, subjects successfully picked up and moved the target objects with an overall success of up to [Formula: see text]. 
In addition, we show that with training, subjects' performance improved in terms of time required to accomplish a block of 24 trials despite a decreasing level of visual feedback. Significance: The proposed design constitutes a substantial conceptual improvement for the control of multi-functional prosthetic hands. We show for the first time that deep-learning based computer vision systems can enhance the grip functionality of myoelectric hands considerably.
Conference Paper
In this paper we introduce a robust multi-channel wearable sensor system for capturing user intent to control robotic hands. The interface is based on a fusion of inertial measurement and mechanomyography (MMG), which measures the vibrations of muscle fibres during motion. MMG is immune to issues such as sweat, skin impedance, and the need for a reference signal that is common to electromyography (EMG). The main contributions of this work are: 1) the hardware design of a fused inertial and MMG measurement system that can be worn on the arm, 2) a unified algorithm for detection, segmentation, and classification of muscle movement corresponding to hand gestures, and 3) experiments demonstrating the real-time control of a commercial prosthetic hand (Bebionic Version 2). Results show recognition of seven gestures, achieving an offline classification accuracy of 83.5% performed on five healthy subjects and one transradial amputee. The gesture recognition was then tested in real time on subsets of two and five gestures, with an average accuracy of 93.3% and 62.2% respectively. To our knowledge this is the first applied MMG based control system for practical prosthetic control.
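The detection and segmentation stage of a muscle-movement pipeline like the one above is commonly a short-time energy threshold on the raw signal. A minimal sketch of that idea only, not the paper's unified algorithm; the window length, threshold, and synthetic signal are assumptions:

```python
import math

def detect_onsets(signal, win=10, threshold=0.5):
    """Segment muscle activity by thresholding short-time RMS energy:
    returns the window-start indices where the RMS first exceeds the
    threshold after a quiet period (a stand-in for the detection stage
    that precedes segmentation and classification)."""
    onsets, active = [], False
    for start in range(0, len(signal) - win):
        w = signal[start:start + win]
        rms = math.sqrt(sum(x * x for x in w) / win)
        if rms > threshold and not active:
            onsets.append(start)        # rising edge: activity begins
            active = True
        elif rms <= threshold:
            active = False              # quiet again: re-arm the detector
    return onsets

# Quiet baseline, one burst, quiet, a second burst.
sig = [0.01] * 100 + [0.9] * 50 + [0.01] * 100 + [0.8] * 50
print(detect_onsets(sig))               # two onsets, near samples 100 and 250
```

Each detected onset would then open a segment that is handed to the gesture classifier; MMG's immunity to sweat and skin impedance does not change this stage of the pipeline.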
Article
Myoelectric upper-limb prosthetic manipulation is inherently limited by the unreliable sensor-skin interface. This paper presents a hybrid approach to overcome the limitation of electromyography (EMG) through mechanomyography (MMG) assisted myoelectric sensing. An integrated hybrid sensor system was developed for simultaneous EMG and MMG measurement. The hybrid system formed a platform to capture muscular activations in different frequencies. To evaluate the effectiveness of hybrid EMG-MMG sensing, hand motion experiments have been carried out on seven able-bodied and two transradial amputee subjects. It convincingly demonstrated a significantly (p < 0.01) improved classification accuracy (CA). Furthermore, the CA was compensated by 8.7%-33.7% in the presence of 2-3 faulty EMG channels. These results suggest that MMG assisted myoelectric sensing can improve the control performance and robustness. It has great potential to promote the clinical application of multi-functional prosthetic hand with hybrid EMG-MMG sensor system.
Article
Goal: This paper aims to provide an overview with quantitative information of existing 3D-printed upper limb prostheses. We will identify the benefits and drawbacks of 3D-printed devices to enable improvement of current devices based on the demands of prostheses users. Methods: A review was performed using Scopus, Web of Science and websites related to 3D-printing. Quantitative information on the mechanical and kinematic specifications and 3D-printing technology used was extracted from the papers and websites. Results: The overview (58 devices) provides the general specifications, the mechanical and kinematic specifications of the devices and information regarding the 3D-printing technology used for hands. The overview shows prostheses for all different upper limb amputation levels with different types of control and a maximum material cost of $500. Conclusion: A large range of various prostheses have been 3D-printed, of which the majority are used by children. Evidence with respect to the user acceptance, functionality and durability of the 3D-printed hands is lacking. Contrary to what is often claimed, 3D-printing is not necessarily cheap, e.g., injection moulding can be cheaper. Conversely, 3D-printing provides a promising possibility for individualization, e.g., personalized socket, colour, shape and size, without the need for adjusting the production machine. Implications for rehabilitation: Upper limb deficiency is a condition in which a part of the upper limb is missing as a result of a congenital limb deficiency or as a result of an amputation. A prosthetic hand can restore some of the functions of a missing limb and help the user in performing activities of daily living. Using 3D-printing technology is one of the solutions to manufacture hand prostheses. This overview provides information about the general, mechanical and kinematic specifications of all the devices and it provides the information about the 3D-printing technology used to print the hands.