ORIGINAL RESEARCH
published: 27 April 2020
doi: 10.3389/fnins.2020.00348
Edited by:
Loredana Zollo,
Campus Bio-Medico University, Italy
Reviewed by:
Marco D’Alonzo,
Campus Bio-Medico University, Italy
Marko Markovic,
Otto Bock HealthCare GmbH,
Germany
Maurizio Valle,
University of Genoa, Italy
Giorgio Grioli,
Italian Institute of Technology (IIT), Italy
*Correspondence:
Raphael M. Mayer
r.mayer@student.unimelb.edu.au
Specialty section:
This article was submitted to
Neural Technology,
a section of the journal
Frontiers in Neuroscience
Received: 10 September 2019
Accepted: 23 March 2020
Published: 27 April 2020
Citation:
Mayer RM, Garcia-Rosas R,
Mohammadi A, Tan Y, Alici G,
Choong P and Oetomo D (2020)
Tactile Feedback in Closed-Loop
Control of Myoelectric Hand Grasping:
Conveying Information of Multiple
Sensors Simultaneously via a Single
Feedback Channel.
Front. Neurosci. 14:348.
doi: 10.3389/fnins.2020.00348
Tactile Feedback in Closed-Loop
Control of Myoelectric Hand
Grasping: Conveying Information of
Multiple Sensors Simultaneously via
a Single Feedback Channel
Raphael M. Mayer 1*, Ricardo Garcia-Rosas 1, Alireza Mohammadi 1, Ying Tan 1, Gursel Alici 2,3, Peter Choong 3,4 and Denny Oetomo 1,3

1 Human Robotics Laboratory, Department of Mechanical Engineering, The University of Melbourne, Parkville, VIC, Australia, 2 School of Mechanical, Materials, Mechatronic and Biomedical Engineering, University of Wollongong, Wollongong, NSW, Australia, 3 ARC Centre of Excellence for Electromaterials Science, Wollongong, NSW, Australia, 4 Department of Surgery, St. Vincent's Hospital, The University of Melbourne, Parkville, VIC, Australia
The appropriate sensory information feedback is important for the success of an object
grasping and manipulation task. In many scenarios, the need arises for multiple feedback
information to be conveyed to a prosthetic hand user simultaneously. The multiple sets
of information may either (1) directly contribute to the performance of the grasping or
object manipulation task, such as the feedback of the grasping force, or (2) simply form
additional independent set(s) of information. In this paper, the efficacy of simultaneously
conveying two independent sets of sensor information (the grasp force and a secondary
set of information) through a single channel of feedback stimulation (vibrotactile via
bone conduction) to the human user in a prosthetic application is investigated. The performance of the grasping task does not depend on the second set of information in this study. Subject performance in two tasks, regulating the grasp force and identifying the secondary information, was evaluated when subjects were provided with either the one corresponding set of information or both sets of feedback information. Visual feedback was involved in the training stage. The proposed approach is validated in human-subject experiments using a vibrotactile transducer worn on the elbow bony landmark (to realize a non-invasive bone conduction interface), carried out in a virtual reality environment to perform a closed-loop object grasping task. The experimental results show that the performance of the human subjects on either task, whilst perceiving two sets of sensory information, is not inferior to that when receiving only one set of corresponding sensory information, demonstrating
the potential of conveying a second set of information through a bone conduction
interface in an upper limb prosthetic task.
Keywords: neuroprostheses, sensory feedback restoration, human-robot interaction, tactile feedback, bone
conduction
1. INTRODUCTION
It is well-established that the performance of grasping and object
manipulation tasks relies heavily on appropriate feedback.
This is established in human grasping with or without using
prostheses (Childress, 1980; Augurelle et al., 2003) and in robotic
grasping algorithms (Dahiya et al., 2009; Shaw-Cortez et al.,
2018, 2019). Within prosthetic applications, such feedback allows
effective closed-loop control of the prostheses by the human user
(Saunders and Vijayakumar, 2011; Antfolk et al., 2013; Markovic
et al., 2018; Stephens-Fripp et al., 2018). To date, prosthetic hand
users rely on visual and incidental feedback for the closed-loop
control of hand prostheses (Markovic et al., 2018), as explicit
feedback mechanisms are not prevalent in commercial prostheses
(Cordella et al., 2016). Incidental feedback can be obtained
from vibrations transmitted through the socket (Svensson et al.,
2017), proprioceptive information from the muscles (Antfolk
et al., 2013), sound from the motor (Markovic et al., 2018), or
the reaction forces transmitted by the actuating cable in body-
powered prostheses (Shehata et al., 2018). Visual feedback has
been the baseline feedback mechanism in prosthetic grasping
exercises as it is the only feedback available naturally to all
commercial hand prostheses (Saunders and Vijayakumar, 2011;
Ninu et al., 2014).
It is also established that a combination of feedback
information is required—and required simultaneously—for
effective grasping and manipulation to be realized. In Westling
and Johansson (1984) and Augurelle et al. (2003), it was demonstrated that the maintenance of grip force as a function of the measured load in a vertical lifting scenario is accompanied by a slip detection function. It was argued that in the scenario of
moving a hand-held object, accidental slips rarely occur because
“the grip force exceeds the minimal force required” by a safety
margin factor. No exceedingly high values of grip force are
obtained due to a mechanism measuring the frictional condition
using skin mechanoreceptors (Westling and Johansson, 1984).
This argues for the use of two sets of information during
the operation, namely the feedback of the grip force as well
as the information of the object slippage and friction, even
if it is to update an internal feed-forward model (Johansson
and Westling, 1987). Other examples include an exercise in
“sense and explore” where the proprioception information is
required along with the tactile information relevant to the
object/environment being explored. Information on temperature, in addition to proprioceptive and tactile information, could also be needed in specific applications to indicate dangerous temperatures, for example when drinking a hot beverage using a prosthetic hand: the user may not feel the temperature of the cup until it reaches the lips and causes a burn (Lederman and Klatzky, 1987).
Investigations in the prosthetic literature have so far focused
on conveying each independent set of sensor information to the human user through a dedicated transducer. The feedback is either
continuous (Chaubey et al., 2014) or event driven (Clemente
et al., 2017) and multiple transducers have been deployed via
high density electrotactile arrays (Franceschi et al., 2017). The
number of feedback transducers that can be deployed on the
human is limited due to the physiology and the available space.
Physiologically, the minimum spatial resolution is determined by
the two point discrimination that can be discerned on the skin.
The minimum spatial resolution is 40 mm for mechanotactile
and vibrotactile feedback (on the forearm) and 9 mm for
electrotactile feedback (Svensson et al., 2017). An improved
result was shown in D’Alonzo et al. (2014), colocating the
vibrotactile and electrotactile transducers on the surface of the
skin. Spatially, the number of transducers that can be fitted in
a transhumeral or transradial socket is limited by the available
space within the socket and the contact surface with the residual
limb. The limitation of the available stimulation points is even
more compelling when using bone conduction for vibrotactile
sensation. For osseointegrated implants there is only one rigid
abutment point (Clemente et al., 2017; Li and Brånemark, 2017)
and for non-invasive bone conduction there are 2–3 usable bony
landmarks near the elbow (Mayer et al., 2019). In all these
experiments, each sensory information is still conveyed by one
dedicated feedback channel.
A few studies have recognized the need for the more efficient
use of the feedback channels and proposed the use of multiple
sensor information via a single feedback channel. Multiple sets
of information have been transmitted in a sequential manner
(Ninu et al., 2014), event triggered (Clemente et al., 2016), or
representing only a discrete combination of the information
from two sensors (Choi et al., 2016, 2017). Time sequential
(Ninu et al., 2014) or event triggered feedback (Clemente et al.,
2016) can be used for tasks or events where the need for each set of sensing information can be decoupled over subsequent events, and therefore does not address the need described above for simultaneous feedback information.
Of the many facets of the challenges in closing the prosthetic
control loop through the provision of effective feedback, we
seek in this paper to improve the information density that
can be conveyed through a single stimulation transducer to
deliver multiple sets of feedback information simultaneously
to the prosthetic user. Specifically, the amplitude and the
frequency of the stimulus signal are used to convey different
information. This concept was observed in Dosen et al.
(2016), where a vibrotactile transducer was designed to produce
independent control of the amplitude and the frequency of
the stimulation signal. It was reported that a psychophysical
experiment on four healthy subjects found 400 stimulation
settings (a combination of amplitude and frequency of the
stimulus signal—each termed a “vixel”) distinguishable by
the subjects.
In this paper, the efficacy of this concept is further investigated
on a closed-loop operation of a hand prosthesis in virtual reality.
One set of information, the grasp force, is used in the closed-
loop application, providing sensory feedback on the grasp force
regulated by the motor input via surface electromyography
(sEMG). The second set provides additional secondary information. Note that a closed-loop operation differs from psychometric evaluation, as the sensory excitation is a function of the voluntary user effort in the given task. This study is conducted within the context of a non-invasive bone conduction interface, where the need for higher information bandwidth is
compelling due to the spatial limitations in the placement of feedback transducers on the human user. It should be noted
that the purpose of the study in this paper is to establish
the ability for a second piece of information to be perceived.
Once this is established, the second set of information may be
used to (1) perceive an independent set of information, such
as the temperature of the object grasped, or (2) improve the
performance of the primary task with additional information.
In this paper, the second set of information is not expected to
improve the performance of the primary task, which is the closed
loop object grasping task.
It was found that the human subjects were able to discern
the two sets of information even when applied simultaneously.
The baseline for comparison is the case where only one set
of sensor information was directly conveyed as feedback to
the human user. Comparing the proposed technique to the
baseline, a comparable performance in regulating the grasp force
of the prosthesis (accuracy and repeatability) and in correctly
identifying the secondary information (low, medium, or high)
was achieved.
2. CONVEYING MULTI-SENSOR
INFORMATION VIA FEWER FEEDBACK
CHANNELS
We define the sensor information as y ∈ ℝ^N, where N is the number of independent sets of sensor information; the measurements can be continuous-time signals or discrete events. The feedback stimulation to the prosthetic user is defined as x ∈ ℝ^M, where M is the number of channels (transducers) employed to provide the feedback stimulation. The scenario being addressed in this paper is that where N > M. The relationship between the measurement y and the feedback stimulation x can be written as

x = φ(y),    (1)

where φ : ℝ^N → ℝ^M.
2.1. Sensor Information y
Four major sensing modalities are generally present in the
upper limbs: touch, proprioception, pain, and temperature.
The touch modality is further made up of a combination of
information: contact, normal and shear force/pressure, vibration,
and texture (Antfolk et al., 2013). To achieve a robust execution
of grasping and object manipulation task, only a subset of these
sensing modalities are used as feedback. Recent studies have
further isolated the types of feedback modalities and information
that would be pertinent to an effective object grasping and
manipulation, such as grip force and skin-object friction force (de Freitas et al., 2009; Ninu et al., 2014). Furthermore, the literature has explicitly determined that such a combination of feedback information is required simultaneously for effective grasping and manipulation (Westling and Johansson, 1984; Augurelle et al., 2003). In the context of an upper limb prosthesis, it is possible to equip a prosthetic hand with a large number of sensors (Kim et al., 2014; Mohammadi et al., 2019) so that N > M. It should be noted that the N sets of independent information can be constructed out of any number of sensing modalities, such as force sensing, grasp velocity sensing, or tactile information, e.g., for object roughness. They may even contain estimated quantities that cannot be directly measured by sensors; for example, object stiffness may require measurements of both contact force and displacement.
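As an aside illustrating such an estimated quantity, a minimal sketch of a least-squares stiffness estimate from paired contact force and displacement samples is given below; the function name and the sample values are hypothetical and are not part of the study.

```python
import numpy as np

def estimate_stiffness(force_n, displacement_m):
    """Least-squares estimate of object stiffness k (N/m) from paired
    contact force and displacement samples, assuming f ≈ k * x."""
    f = np.asarray(force_n, dtype=float)
    x = np.asarray(displacement_m, dtype=float)
    # Fit f = k * x through the origin: k = (x·f) / (x·x)
    return float(np.dot(x, f) / np.dot(x, x))

# Example (invented values): an object compressed by up to 4 mm
displacement = [0.001, 0.002, 0.003, 0.004]   # m
force = [0.9, 2.1, 2.9, 4.2]                  # N
print(f"estimated stiffness: {estimate_stiffness(force, displacement):.0f} N/m")
```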
2.2. Feedback Stimulation x
The state of the art of non-invasive feedback in prosthetic
technology generally utilizes electrotactile (ET), vibrotactile
(VT), and mechanotactile (MT) modalities, placed in contact
with the skin as a way to deliver the sensation (Stephens-Fripp
et al., 2019). More novel feedback mechanisms have also been
explored, such as using augmented reality (Markovic et al., 2014).
Of these modalities, ET and VT present the challenge that the perceived stimulation varies with the location of application; VT additionally presents the challenge that its perception is static-force dependent (i.e., it depends on how hard the VT transducer is pressed against the skin), while MT is often bulky and has high power consumption (Svensson et al., 2017; Stephens-Fripp et al., 2018).
It was shown, however, that VT applied over bony landmarks
does not suffer from the static force dependency (Mayer
et al., 2018), is compact and does not suffer from high power
consumption (Mayer et al., 2019). It does, however, restrict
the locations that this technique can be applied to on the
upper limb, as there are relatively fewer bony landmarks on
the upper limb than skin surface. A psychophysical evaluation
in Mayer et al. (2019) demonstrates comparable results in
non-invasive vibrotactile feedback on the bone to the invasive
(osseointegrated) study in Clemente et al. (2017). It is highlighted that the perception threshold must be personalized for the technique to be used as an interface. A higher sensitivity has been reported for frequencies in the range of 100–200 Hz, where lower stimulation forces are required. This allows the use of more compact transducers with lower power consumption (Mayer et al., 2019).
2.3. Specific Sensor Information and
Feedback Stimulation Utilized
In order to demonstrate the concept of conveying multi-sensor information via fewer feedback channels, this paper uses one feedback channel to convey two sets of independent sensor information, namely the grasp force f_g and a secondary information s, which could be, e.g., skin-object friction or temperature. That is,

y = [f_g, s]^T ∈ ℝ²,    (2)

where f_g represents a continuous-time signal of the grasp force and s is a discrete class of the secondary information. The primary information, the grasp force, is used as feedback for the task of regulating the object grasp force. The secondary information does not directly contribute to the task of regulating the grasp force.
VT via bone conduction is selected as the feedback stimulation, applied on the elbow bony landmark. The sinusoidal waveform applied as the vibrotactile stimulus is:

x(t) = a(t) sin(2π f(t) t),    (3)

where the amplitude a(t) is modulated as a linear function of the continuous-time grasp force signal f_g(t):

a(t) = a_0 + k_a f_g(t),    (4)

while the frequency f(t) is modulated as a linear function of the secondary information s(t):

f(t) = f_0 + k_s s(t),    (5)

where s(t) ∈ {S_1, S_2, S_3} is a discrete set describing the secondary information at time t. The offsets a_0 and f_0 denote the minimum amplitude and frequency detectable by human bone conduction perception. The constants k_a and k_s are positive.
3. METHODOLOGY
The proposed approach is validated in a human-subject
experiment using a VT transducer worn on the elbow bony
landmark to provide the feedback and a virtual reality
based environment to simulate the grasping task, as shown
in Figure 1A. This experiment seeks to verify that subjects can differentiate two sets of encoded sensory information conveyed via one bone conduction channel. This is done by, firstly, comparing the performance of the proposed approach against the baseline of carrying out the same task with only one set of information conveyed through the feedback channel and, secondly, comparing the performance with and without the addition of visual feedback.
The experiment consists of three parts:
(1) A pre-evaluation of the psychophysics of the interface;
(2) Obtaining the bone conduction perception threshold at the
ulnar olecranon for individual subjects;
(3) Evaluating the performance of the human subject in the task
of grasping within a virtual reality environment.
The experiment was conducted on 10 able-bodied subjects
(2 female, 8 male; age 28.7 ±4 years). Informed consent
was received from all subjects in the study. The experimental
procedure was approved by the University of Melbourne
Human Research Ethics Committee, project numbers 1852875.2
and 1750711.1.
3.1. Psychophysics
This subsection describes the psychophysical evaluation of the bone conduction interface as sensory feedback. This is done to ensure that subjects can discriminate between the stimulation frequencies and amplitudes chosen later for the Grasp Force Regulation and Secondary Information Classification Tasks (see section 3.3). Therefore, the minimum noticeable difference for subjects, later referred to as the “just noticeable difference” (JND), is obtained to quantify the capabilities of the bone conduction interface in the frequency and amplitude domains.
3.1.1. Setup
3.1.1.1. Orthosis
A custom elbow orthosis with adjustable bone conduction transducers was fitted to the subject's dominant arm for the experiment, as shown in Figure 1A. The orthosis (O) was fixed to the upper and lower arm of the subject through adjustable velcro straps. The vibrotactile transducer (VT) position was adjusted by a breadboard-style variable mounting in order to align with and be in contact with the ulnar olecranon, which is the proximal end of the ulna located at the elbow. The VT is adjusted using two screws to ensure good contact with the bony landmark. The orthosis is placed on the desk (see Figure 1A) and kept static during the experiments.

FIGURE 1 | The setup of the grasp task in virtual reality is shown in (A), where the subject is seated with the arm placed in the orthosis (O) on the table. The vibrotactile transducer (VT) is mounted onto the ulnar olecranon; the EMG electrodes (sEMG) are placed on the forearm and the virtual reality headset (VR) on the subject's head. The subject's first person view in virtual reality in (B) shows the prosthesis (P); the grasped object (D); the non-dominant hand (H), used for commands such as activating/deactivating the EMG or touching a sphere (S) to report the secondary information class and to advance to the next task; (C) shows the top view of the virtual reality setup.
3.1.1.2. Bone conduction
The setup consists of a B81 transducer (RadioEar Corporation, USA), calibrated using an Artificial Mastoid Type 4930 (Brüel & Kjær, Denmark) at a static force of 5.4 N. The stimulation signals were updated at 90 Hz and amplified using a 15 W Public Address amplifier Type A4017 (Redback Inc., Australia), which has a suitable 4–16 Ω output to drive the 8 Ω B81 transducers and a suitably low harmonic distortion of <3% at 1 kHz. A calibrated force sensitive resistor (FSR) (Interlink Electronics 402 Round Short Tail), placed between the transducer and the mounting plate, was used to measure the applied force over a force sensitive area of A = 1.33 cm². The calibration was done using three different weights [0.2, 0.5, 0.7] kg, measuring five repetitions and applying a linear interpolation to obtain the force/voltage relationship. The achieved force/voltage relationship has a variance of 5.4 ± 0.37 N. The stimulation signal was generated using a National Instruments NI USB-6343 connected to a Windows Surface Book 2 (Intel Core i7-8, 16 GB RAM, Windows 10) as the control unit. A MATLAB GUI was used to guide the user through the psychophysics and perception threshold experiments. The computer was connected via a Wi-Fi hotspot through a UDP connection to the head mounted virtual reality system for the experiment tasks.
3.1.2. Protocol
It is noted that the JND of frequency (JND_f) as well as the JND of amplitude (JND_a) are different for each person (Dosen et al., 2016). Therefore, a sample of five subjects is employed to evaluate JND_f and JND_a and to show that the subjects can discriminate between the given stimulation frequencies and amplitudes. The JND_f is measured for three frequencies f_ref ∈ [100, 400, 750] Hz and three amplitudes a_ref ∈ [0.1, 0.3, 0.5] V, giving nine different combinations. For each combination, a standard two-interval forced-choice (2IFC) threshold procedure is used. For the 2IFC, the reference stimulus f_ref is selected out of the three predetermined frequencies and the target stimulus f_t is varied in a stochastic approximation staircase (SAS) manner, where the variation is based on the subjects' reports of the perceived stimulus (Clemente et al., 2017). Therefore,

f_t,n+1 = f_t,n − (1.5 f_ref / (2 + m)) (Z_n − 0.85),    (6)

where f_t,n is the target stimulus during the previous trial, f_t,n+1 is that of the upcoming trial, m is the number of reversals, i.e., how many times the answers change from wrong to right, Z_n is set to 1 for a correct answer and 0 for an incorrect answer, and the target stimulus is initialized at 1.5 times the reference stimulus. The trials are stopped after 50 iterations and the value f_t,51 for the 51st trial is taken as the perception threshold (Clemente et al., 2017; Mayer et al., 2019).
The JND_a is obtained similarly to the JND_f, where the target amplitude a_t,n+1 is now varied in a SAS manner and the reference stimulus a_ref is chosen out of the given amplitudes.
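A minimal sketch of the staircase defined by Equation (6) is given below; the trial loop and the simulated observer response are assumptions added for illustration and do not reproduce the experimental implementation.

```python
def sas_update(target, reference, reversals, correct):
    """One step of the stochastic approximation staircase (Equation 6)."""
    z = 1.0 if correct else 0.0
    return target - (1.5 * reference / (2 + reversals)) * (z - 0.85)

def run_staircase(reference, respond, n_trials=50):
    """Run the 2IFC staircase; `respond(target, reference)` returns True
    when the (simulated or real) observer answers correctly."""
    target = 1.5 * reference            # initialization at 1.5x the reference
    reversals = 0
    prev_correct = None
    for _ in range(n_trials):
        correct = respond(target, reference)
        if prev_correct is False and correct:   # wrong -> right counts as a reversal
            reversals += 1
        target = sas_update(target, reference, reversals, correct)
        prev_correct = correct
    return target   # value for trial n_trials + 1, taken as the threshold

# Illustrative observer: answers correctly while the difference is still perceivable
threshold = run_staircase(100.0, lambda t, ref: abs(t - ref) > 8.0)
```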
3.2. Perception Threshold
The objective of this subsection is to determine the minimum stimulation amplitude a_0 at which subjects could perceive a given stimulation frequency f. This will be referred to as the “perception threshold” henceforth. The amplitude threshold changes with frequency and differs between persons, and thus needs to be identified individually (Mayer et al., 2019).
3.2.1. Setup
The same setup as for the psychophysical evaluation was used
which is explained in section 3.1.1.
3.2.2. Protocol
The perception threshold is obtained using a method of adjustment test (Kingdom and Prins, 2016, Chapter 3). The subjects are presented n = 10 times with each frequency f ∈ [100, 200, 400, 750, 1500, 3000, 6000] Hz. At each iteration, the amplitude is adjusted by the subject to the lowest perceived stimulation. The subject can adjust the amplitude in small (ΔU_small = 0.005 V) and large (ΔU_large = 0.05 V) increments. The frequencies were presented in a randomized order.
The obtained perception threshold value a_0 for each subject is set in the bone conduction stimulation signal (Equation 4). The experiment then proceeded to the virtual reality based grasping tasks.
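The following sketch outlines the method-of-adjustment procedure described above, assuming the per-frequency threshold is summarized by the mean of the n = 10 adjusted amplitudes (an assumption for illustration); the interactive adjustment is abstracted into a callback.

```python
import random
import statistics

FREQUENCIES_HZ = [100, 200, 400, 750, 1500, 3000, 6000]
N_REPETITIONS = 10

def perception_thresholds(adjust_amplitude):
    """Method of adjustment: `adjust_amplitude(freq)` returns the lowest
    amplitude (in V) the subject reports perceiving at that frequency."""
    order = FREQUENCIES_HZ * N_REPETITIONS
    random.shuffle(order)                        # randomized presentation order
    samples = {f: [] for f in FREQUENCIES_HZ}
    for f in order:
        samples[f].append(adjust_amplitude(f))
    # Summarize each frequency by the mean adjusted amplitude (assumed here)
    return {f: statistics.mean(values) for f, values in samples.items()}
```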
3.3. Grasp Force Regulation and Object
Classification
Subjects were asked to perform a set of grasp force regulation
and secondary information classification tasks with a virtual
prosthetic hand. The tasks involved regulating the grasp force
of the virtual reality prosthetic hand through the use of a
sEMG-based control interface and classifying the secondary
information. Different combinations of feedback modalities
[visual feedback (V), grasp force (F), and the secondary
information (S)] were presented, as shown in Table 1.
Three grasping tasks were tested in each group (Table 1). The “Grasp Force Regulation Task” consisted purely of applying a grasp force to an object in-hand; this task is detailed in section 3.3.2.1. The “Secondary Information Classification Task” consisted of classifying the secondary information, with no grasp force involved; this task is detailed in section 3.3.2.2. The “Mixed Task” was a combination of the “Grasp Force Regulation Task” and the “Secondary Information Classification Task,” where subjects were required to apply a given grasp force and classify the secondary information; this task is detailed in section 3.3.2.3. Tasks VF and VS were considered as training for the users to familiarize themselves with the sEMG control interface and the feedback. Tasks VFS, F, S, and FS were used to show whether subjects can differentiate multiple sets of sensory feedback encoded in one channel, with and without visual feedback. The tasks are detailed in the following subsections.
TABLE 1 | Experimental cases tested: encoding two sets of information onto the amplitude and frequency of the vibrotactile stimulation as the feedback to the subject through bone conduction.

Task                           V   F   S
VF    Grasp force              x   x
VS    Secondary information    x       x
VFS   Mixed                    x   x   x
F     Grasp force                  x
S     Secondary information            x
FS    Mixed                        x   x

The role of visual feedback is also compared as a baseline. The efficacy of this concept is further investigated in a closed-loop operation of a hand prosthesis.
3.3.1. Setup
3.3.1.1. Orthoses and bone conduction
The same setup as for the psychophysical evaluation was used, as
explained in section 3.1.1.
3.3.1.2. sEMG
MyoWare sensors with Ag-AgCl electrodes were used for sEMG
data gathering. Data gathering and virtual reality update were
performed at 90 Hz.
3.3.1.3. Virtual reality
The virtual reality component of the experiment was performed
on an HTC Vive Pro HMD with the application developed in
Unity3D. The experimental platform runs on an Intel Core i7-
8700K processor at 3.7 GHz, with 32 GB RAM, and GeForce GTX
1080Ti video card with 11 GB GDDR5. An HTC Vive Controller
was used for tracking the non-dominant hand of the subject
and to interact with the virtual reality application. The subjects
report on the secondary information and navigate through the
experiment with the non-dominant hand. An HTC Vive Tracker
was used to determine the location of the dominant hand of
the subject to determine the location of the virtual prosthesis.
The application used for the experiment can be downloaded
from https://github.com/Rigaro/VRProEP.
An average time latency between a touch event generated in virtual reality and the activation of the feedback stimulus of t_latency = 66 ms was estimated by measuring the individual time latencies involved. The total delay results from the time delay of sending a command from the virtual reality setup via a UDP connection to the stimulation control unit, t_UDP = 65 ms (measured), and the delay of sending the stimulation command to the NI USB-6343, t_NI = 1 ms (datasheet).
3.3.2. Protocol
Subjects performed a set of grasping and secondary information
classification tasks in the virtual reality environment. The tasks
were separated into two blocks (see Table 1) with a 2 min break
between them. An HTC Vive Pro Head Mounted Display (HMD)
was used to display the virtual reality environment to subjects.
The virtual reality set-up is shown in Figure 1A while the subject’s
first person view in virtual reality is shown in Figure 1B and a
top person view in Figure 1C. A Vive Controller was held by
the subject on their non-dominant hand and was used to enable
the EMG interface by a button press and to select the secondary
information class in the classification task. A standard dual-
site differential surface EMG proportional prosthetic interface
was used to command the prosthetic hand closing velocity
(Fougner et al., 2012). Muscle activation was gathered using
sEMG electrodes placed on the forearm targeting wrist flexor and
extensor muscles for hand closing and opening, respectively.
3.3.2.1. Grasp force regulation task
In the grasp force task, subjects were asked to use the sEMG
control interface to regulate the grasp force to grip objects with a
certain grasp force level. A fixed stimulation frequency was used,
in line with the result of the psychophysical evaluation, while the
amplitude a(t) is used to provide feedback on the grasp force
produced by the human subject as determined by Equation (4).
The grasp force f_g was calculated from the sEMG signal magnitude. Therefore, the sEMG signal magnitude u_EMG is integrated in a recursive discrete manner, as given by

−100 ≤ u_EMG(k) ≤ 100,
f_g(k+1) = f_g(k) + Δf_g · u_EMG(k),    (7)
0 ≤ f_g(k+1) ≤ 1,

where u_EMG(k) is the sEMG input, with its amplification adjusted per subject to range over [−100, 100]; Δf_g = 0.005 is the scaling factor converting the sEMG signal magnitude to a force rate of change. The grasp force f_g is bounded to [0, 1]. The recursion is updated at 90 Hz.
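For illustration, a minimal Python sketch of the recursion in Equation (7) is given below; the example activation value is an assumption and the per-subject amplification stage is omitted.

```python
UPDATE_RATE_HZ = 90
DELTA_FG = 0.005          # scaling factor Δf_g

def update_grasp_force(fg, u_emg):
    """One 90 Hz update of Equation (7): integrate the sEMG magnitude
    u_emg (clamped to [-100, 100]) into the normalized grasp force fg."""
    u = max(-100.0, min(100.0, u_emg))
    fg_next = fg + DELTA_FG * u
    return max(0.0, min(1.0, fg_next))   # grasp force bounded to [0, 1]

# Example: ramp the grasp force with a constant activation of 10 for one second
fg = 0.0
for _ in range(UPDATE_RATE_HZ):
    fg = update_grasp_force(fg, 10.0)
```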
The grasp tasks were grouped in two parts (see Table 1). In
the first part (VF), visual feedback related to the grasp force was
given to the subjects and is considered as training. The visual
feedback consisted of the grasped object changing color in a
gradient depending on the applied grasp force fg(k). In the second
part (F), no visual feedback was provided. Three different target
grasp force levels were used for the task and each was repeated
five times in a randomized manner. The target grasp force levels
were [0.3, 0.5, 0.8]. The object's starting color represented the target grasp force level; however, subjects did not explicitly know the exact target force.
3.3.2.2. Secondary information classification task
In the secondary information classification task, subjects were
asked to report on which of the three different classes they
perceived by touching one of three spheres in front of them
representing each of the classes. The classes (s) were low, mid,
and high, which translated to the following frequencies [100,
400, 750] Hz. The grasp force was set constant at f_g = 0.8, resulting in a constant amplitude a(t) in the feedback stimulus to the subject for this task. In other words, it was not regulated based on the subject's sEMG involvement. Each class was presented five times in a randomized manner. The secondary
information classification tasks were grouped in two parts (see
Table 1). In the first part (VS), visual feedback related to the
correct class was shown to the subject through the color of
the classification spheres whilst presenting the stimuli and is
therefore considered as training. In the second part (S), no visual
feedback was provided.
3.3.2.3. Mixed task
The mixed task combines both the grasp force regulation
and secondary information classification tasks simultaneously,
such that the grasp force regulation had to be executed and
the subjects were then asked to report on which secondary
information class they perceived. This means that the stimulation
provided to the subjects had the grasp force fg(k) encoded
in its amplitude a(t) and the secondary information class s
encoded in its frequency f, simultaneously. A permutation
of all force levels and secondary information classes was
presented and each combination was repeated five times in a
randomized manner. Force levels are [0.3, 0.5, 0.8] and secondary
information classes are [100, 400, 750] Hz. The mixed tasks
were grouped in two parts (see Table 1). In the first part
(VFS), visual feedback related to the grasp force was given to
the subjects. No visual feedback was given for the secondary
information feedback. In the second part (FS), no visual feedback
was provided.
3.3.3. Data Gathering and Performance Measure
The grasp force f_g (as calculated in Equation 7) and the actual sEMG activation levels were continuously recorded for all trials for the duration of each task, along with the desired force target. The subject's answer was recorded for the “Secondary Information Classification Task” and the “Mixed Task,” along with the correct class.
The following performance measures were used (a computational sketch is given after this list):
Normalized Grasp Force: the normalized grasp force f_g(k_f) at the time k_f, where k_f is the time at which the subject finalizes the force adjustment by disabling the EMG interface with a button press. The mean and standard deviation are calculated over the repetitions of each task regulating the grasp force and represent the accuracy and repeatability of the grasp force regulation exercise.
Secondary Information Classification Rate: the rate at which the subject identifies the correct secondary information class, used as the performance measure in the secondary information classification task.
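The following sketch computes both performance measures from logged trial data; the data structures and example values are assumptions about how the logs could be organized, not the study's analysis code.

```python
import statistics

def grasp_force_stats(final_forces):
    """Accuracy and repeatability of grasp force regulation: mean and
    standard deviation of f_g(k_f) over the repetitions of one condition."""
    return statistics.mean(final_forces), statistics.stdev(final_forces)

def classification_rate(predicted, true):
    """Secondary information classification rate in percent."""
    correct = sum(p == t for p, t in zip(predicted, true))
    return 100.0 * correct / len(true)

# Example (invented values): five repetitions of one target level and five class reports
print(grasp_force_stats([0.43, 0.55, 0.48, 0.60, 0.51]))
print(classification_rate(["low", "mid", "high", "mid", "low"],
                          ["low", "mid", "high", "high", "low"]))
```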
The achieved results of perception threshold, secondary
information classification rate and normalized grasp force are
visually presented using boxplots, showing the median, 25th and
75th percentiles and the whiskers indicating the most extreme
points not considered outliers. Any outliers are plotted using the
“+” symbol.
For statistical analysis, a non-parametric ANOVA-like analysis, specifically a Friedman test, was applied (Daniel, 1990), since an ANOVA was not suitable due to non-normally distributed data (Shapiro-Wilk test). This was followed up by a post-hoc analysis via the Wilcoxon signed rank test (Wilcoxon, 1945). The obtained p-values are given, and statistical significance is indicated in the plots.
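As an illustration of this analysis pipeline, the sketch below applies the Shapiro-Wilk check, the Friedman test, and pairwise Wilcoxon signed rank post-hoc tests using SciPy on a placeholder subjects-by-conditions array; the numbers are invented for illustration and this is not the authors' analysis script.

```python
import numpy as np
from scipy import stats

# Placeholder data: rows are subjects, columns are the conditions VFS, S, FS
rates = np.array([
    [ 80., 100.,  90.],
    [ 60.,  80.,  80.],
    [100., 100., 100.],
    [ 80.,  90.,  80.],
    [ 70., 100.,  90.],
])

# Shapiro-Wilk normality check per condition, motivating the non-parametric tests
for name, column in zip(["VFS", "S", "FS"], rates.T):
    print(name, "Shapiro-Wilk p =", round(stats.shapiro(column).pvalue, 3))

# Friedman test across the three related conditions
_, p_friedman = stats.friedmanchisquare(rates[:, 0], rates[:, 1], rates[:, 2])
print("Friedman p =", round(p_friedman, 3))

# Post-hoc pairwise Wilcoxon signed rank tests (applied in the paper when the
# Friedman test indicated significance)
pairs = [("VFS", 0, "S", 1), ("VFS", 0, "FS", 2), ("S", 1, "FS", 2)]
for name_a, i, name_b, j in pairs:
    p = stats.wilcoxon(rates[:, i], rates[:, j]).pvalue
    print(f"{name_a} vs. {name_b}: p = {p:.3f}")
```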
4. RESULTS
Before using the VT bone conduction feedback interface, a pre-
evaluation of the psychophysics of this interface is conducted and
the perception threshold of each subject determined.
4.1. Psychophysics
Figures 2A,B show the obtained mean and standard deviation for the JND_a and JND_f. In Figure 2C, the means of both JND_a and JND_f are plotted together to show the resolution of the proposed interface. The black dots in Figure 2C denote the reference stimuli of the SAS approach and the red and blue dots show the obtained mean values of the JND. Therefore, this plot shows the next closest noticeable stimulation point (frequency or amplitude).
The results in Figure 2C show that the JND_a is smallest at lower frequencies, except at the 100 Hz and 0.3 V reference stimulus. Hence, the fixed frequency of the grasp force regulation task, as discussed in section 3.3.2, was set to 100 Hz, since subjects had the best amplitude discrimination there. Compared with the results obtained for VT on skin in Dosen et al. (2016), Figures 2A,B show similar behavior, where the JND increases linearly with increasing amplitude and frequency. The lower value of JND_a at 100 Hz indicates better sensitivity at lower frequencies for higher stimulation amplitudes in the case of bone conduction.
4.2. Perception Threshold
Before applying VT bone conduction feedback, the lowest perceived stimulation at the given frequencies was found using the method of adjustment. This threshold a_0 was used in Equation (4) to fit the linear relation. The maximum was set to half of the maximum transducer voltage, i.e., 0.5 V. Figure 2D shows the obtained perception thresholds for all subjects.
4.3. Grasp and Object Classification
In the following subsections, the performance of the Mixed
Task, representing the proposed concept of conveying two sets
of information simultaneously via one feedback channel to the
human subject, is compared to the baseline performance of the
Grasp Force Regulation Task and the Secondary Information
Classification Task, using the defined performance measures.
4.3.1. Secondary Information Classification Rate
The obtained secondary information classification rates are
shown in the boxplot of Figure 3A for the VFS, S, and FS tasks.
VS and VF are the training tasks and therefore the obtained data
is not considered in the plots. In VS, the subjects received visual
feedback for the correct answer in order to learn how to interpret
the secondary information feedback and therefore reached 100%
secondary information classification rate. In S, only secondary
information feedback via bone conduction is provided without
visual feedback. In FS, the grasp force level has to be adjusted and
the correct secondary information class chosen afterwards, with
both grasp force and secondary information feedback provided
simultaneously via the bone conduction mechanism.
FIGURE 2 | Results of the psychophysical evaluation of five subjects show the mean and standard deviation of (A) JND_a, giving the amplitude resolution for three different frequencies at three different amplitudes, and (B) JND_f, giving the frequency resolution for three different amplitudes at three different frequencies, while (C) shows a summary plot of the obtained mean values of JND_a (blue) and JND_f (red) at each reference stimulus (black). In (D), the identified perception threshold value a_0 at the frequencies [100, 200, 400, 750, 1500, 3000, 6000] Hz for 10 subjects is shown.

A mean secondary information classification rate of 86.22 ± 18.17% for VFS (visual, force, and secondary information feedback), 92.00 ± 16.57% for S (secondary information feedback), and 89.11 ± 16.16% for FS (force and secondary information feedback) was observed. The mean secondary information classification rate and standard deviation for each class (low, medium, high) for the three different tasks are given in Table 2 and the boxplots are shown in Figure 3A. A Friedman test (VFS, S, FS) for the secondary information classification rate resulted in statistical significance for the medium secondary information class (see Table 3).
For the low and high secondary information classes, no statistical significance could be found, suggesting the data are compatible with all groups having the same distribution. For the medium secondary information class, a Wilcoxon signed rank test was applied as a post-hoc test; the results are shown in Table 4. Statistical significance could be found for VFS vs. S, but not for VFS vs. FS or S vs. FS, suggesting those data are compatible with the groups having the same distribution.
4.3.2. Normalized Grasp Force
Figure 3B shows the boxplots of the grasp force achieved by the subjects during VFS, F, and FS. In VF, the subjects received visual feedback for the applied grasp force to learn how to associate grasp force with visual feedback as well as with tactile feedback. In all cases, grasp force feedback is present, while visual feedback is present only in VFS (see Table 1). The result for each force level and each trial is given in Table 2.
The obtained results of the Friedman test (VFS, F, FS) for all force levels are shown in Table 5; no statistical significance could be found, suggesting the data are compatible with all groups having the same distribution.
FIGURE 3 | Boxplots of the obtained results of (A) the secondary information classification task, subdivided into the three secondary information classes, and (B) the achieved grasp force, subdivided into the three target grasp force levels, shown for 10 subjects. *Asterisk indicates statistical difference by post-hoc analysis, p < 0.05.
TABLE 2 | Mean and standard deviation of the obtained results for the secondary information classification rate and the normalized grasp force for the four tasks for 10 subjects.

Secondary information classification rate (%)
  Level     VFS             S               FS
  Low       97.33 ± 4.66    100.00 ± 0.00   98.00 ± 4.50
  Medium    76.67 ± 25.19   90.00 ± 25.38   84.67 ± 19.89
  High      84.67 ± 30.96   86.00 ± 25.03   84.67 ± 28.12

Normalized grasp force
  Target    VFS             F               FS
  0.2       0.29 ± 0.08     0.45 ± 0.33     0.50 ± 0.33
  0.5       0.43 ± 0.11     0.60 ± 0.35     0.56 ± 0.29
  0.8       0.68 ± 0.09     0.80 ± 0.24     0.80 ± 0.24
TABLE 3 | The p-values of the Friedman test for the secondary information classification rate for the three different classes.

Secondary information class:   Low     Medium   High
p-value:                       0.174   0.031    0.717

A significance level of p < 0.05 was used.
5. DISCUSSION
5.1. Conveying Multi-Sensor Information
5.1.1. Secondary Information Classification
In this subsection, we discuss the performance of the subjects
in Tasks FS compared to S. The role of visual feedback (Task
VFS) is discussed separately in the following subsection 5.2.
Table 3 indicates a statistical difference for the performance in recognizing the medium secondary information class but not for the low and high classes. However, the post-hoc test, Table 4, provides more detail by showing no significant difference between the performance in Tasks S and FS for detecting the medium secondary information class. Therefore, no statistically significant difference is found between conveying two sets of information simultaneously through the single bone conduction channel and conveying one set of information, in the context of recognizing the secondary information.

TABLE 4 | The p-values of the post-hoc Wilcoxon signed rank test for the medium class of the mean secondary information classification rate.

Task:      VFS vs. S   VFS vs. FS   S vs. FS
p-value:   0.024       0.062        0.255

A significance level of p < 0.05 was used.
5.1.2. Normalized Grasp Force
The Friedman test for the performance of the subjects in the
grasp force regulation task as shown in Table 5 does not show
any statistically significant difference across the cases of F, FS,
and VFS. This is found consistently across the three levels of
grasp force. Therefore, no statistically significant reduction in performance is found for the proposed approach against the baseline in the context of grasp force regulation, leading us to conclude that adding a second set of sensor information does not influence the ability to use the first set of sensor information in a closed-loop manner. The standard deviation decreases qualitatively with increasing force levels, indicating better repeatability for higher force levels in the case of no visual feedback (F and FS).

TABLE 5 | The p-values of the Friedman test for the normalized grasp force for the different target levels.

Target grasp force:   0.2     0.5     0.8
p-value:              0.150   0.407   0.150

A significance level of p < 0.05 was used.

It should be noted that the grasping task for F is carried out at the stimulation frequency of 100 Hz, as justified by the psychophysical evaluation. In the case of VFS and FS, the subjects also carried out the task of regulating the grasp force alongside the secondary information classification exercises, which were conducted at [100, 400, 750] Hz. This difference did not significantly influence the ability to control the grasp force.
5.2. Role of Visual Feedback
As visual feedback is present in a prosthetic system alongside incidental feedback, the influence of visual feedback is investigated while incidental feedback is avoided by using a virtual reality setup. To investigate the influence of visual feedback whilst feeding back two sets of information, the grasp force has been fed back as a color gradient of the grasped object. Though this is not a real-world scenario, it contains the same underlying set of information.
5.2.1. Secondary Information Classification
Comparing VFS to S showed a statistically significant increase
in the secondary information classification rate in the absence
of visual feedback (see Table 4), for the medium secondary
information class, but not for low and high. It should be noted
that the visual feedback was representing grasp force information
and not the secondary information. Comparing VFS to FS does
not yield any statistically significant difference in performance
(see Tables 3,4). Several explanations are possible. It could
suggest that the subjects were able to learn the meaning of the
feedback and perform better or that the reduced cognitive effort
increased performance. However, the data collected in this study
did not permit the authors to draw further conclusions.
5.2.2. Normalized Grasp Force
The obtained normalized grasp force performance shows no
statistically significant difference between the tasks involving
visual feedback VFS compared to those with no visual feedback
(F and FS). A smaller variance of the normalized grasp force
is obtained for VFS compared to F and FS. It should be noted
that VFS adds visual feedback for the same sensory information,
namely the grasp force. A similar observation was reported in
Patterson and Katz (1992) stating that the primary advantage of
supplemental feedback is to reduce the variability of responses.
This decrease cannot be observed for F compared to FS, as FS does not add more feedback of the same sensory information but rather superimposes another type of sensory information.
It should be noted that the results are obtained using a
virtual reality setup. This allows control over the provision of visual feedback while guiding the subjects through the grasp task experiment. Admittedly, it abstracts the experiment from a
practical grasping task. However, it does not take away the main
premise from the study, which is to understand how well two sets
of information can be conveyed in this novel manner.
6. CONCLUSION
This study investigated the efficacy of conveying multi-sensor
information via fewer feedback channels in a prosthetics context.
Two sets of sensor information, grasp force and a secondary information, are conveyed simultaneously to human users through one feedback channel (a vibrotactile transducer via bone conduction). A human subject experiment was conducted using a physical vibrotactile transducer on the elbow bony landmark and a virtual reality environment to simulate the prosthetic grasping force regulation and secondary information classification tasks. It was found that the subjects were able to discern the two sets of feedback information sufficiently well to perform the grasping and secondary information classification tasks with a performance not inferior to that achieved with only one set of feedback information. The addition of visual feedback, a common feedback mechanism present in prosthesis use, was found to improve the repeatability of grasp force regulation, as reported in the literature.
It is expected that the result is generalizable to other types of information and modalities (not limited to grasp force and bone conduction stimulation) and to more freedom in the selection of the number of independent sets of sensor information N and feedback stimulation channels M, as long as N > M. The second set of information was generalized and labeled secondary information, but it could comprise multiple quantities in a real world application, e.g., temperature and friction.
It should be noted that in this experiment, one set of sensor
information was used explicitly in the closed-loop performance
of grasp force regulation, while the other set constitutes
additional information. Future work will investigate other modulation techniques to encode multiple sets of information into one feedback stimulation channel, as well as algorithms to find an optimal matching between sensory information and the provided feedback.
DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to
the corresponding author.
ETHICS STATEMENT
The studies involving human participants were reviewed
and approved by Ethics Committee of the University of
Melbourne. Project numbers are 1852875.2 and 1750711.1. The
patients/participants provided their written informed consent to
participate in this study.
AUTHOR CONTRIBUTIONS
RM, RG-R, AM, YT, and DO: literature, experiment, data
analysis, and paper. GA and PC: paper design, experiment design,
and paper review.
FUNDING
This project was funded by the Valma Angliss Trust and the
University of Melbourne.
ACKNOWLEDGMENTS
The authors acknowledge the assistance in the statistical analysis
of the data in this paper by Cameron Patrick from the Statistical
Consulting Centre at The University of Melbourne.
REFERENCES
Antfolk, C., D’Alonzo, M., Rosén, B., Lundborg, G., Sebelius, F., and Cipriani, C.
(2013). Sensory feedback in upper limb prosthetics. Expert Rev. Med. Dev. 10,
45–54. doi: 10.1586/erd.12.68
Augurelle, A.-S., Smith, A. M., Lejeune, T., and Thonnard, J.-L. (2003). Importance
of cutaneous feedback in maintaining a secure grip during manipulation of
hand-held objects. J. Neurophysiol. 89, 665–671. doi: 10.1152/jn.00249.2002
Chaubey, P., Rosenbaum-Chou, T., Daly, W., and Boone, D. (2014). Closed-loop
vibratory haptic feedback in upper-limb prosthetic users. J. Prosthet. Orthot. 26,
120–127. doi: 10.1097/JPO.0000000000000030
Childress, D. S. (1980). Closed-loop control in prosthetic systems: historical
perspective. Ann. Biomed. Eng. 8, 293–303.
Choi, K., Kim, P., Kim, K. S., and Kim, S. (2016). “Two-channel electrotactile
stimulation for sensory feedback of fingers of prosthesis,” in IEEE International
Conference on Intelligent Robots and Systems (Daejeon), 1133–1138.
Choi, K., Kim, P., Kim, K. S., and Kim, S. (2017). Mixed-modality stimulation
to evoke two modalities simultaneously in one channel for electrocutaneous
sensory feedback. IEEE Trans. Neural Syst. Rehabil. Eng. 25, 2258–2269.
doi: 10.1109/TNSRE.2017.2730856
Clemente, F., D’Alonzo, M., Controzzi, M., Edin, B. B., and Cipriani, C.
(2016). Non-invasive, temporally discrete feedback of object contact and
release improves grasp control of closed-loop myoelectric transradial
prostheses. IEEE Trans. Neural Syst. Rehabil. Eng. 24, 1314–1322.
doi: 10.1109/TNSRE.2015.2500586
Clemente, F., Håkansson, B., Cipriani, C., Wessberg, J., Kulbacka-Ortiz, K.,
Brånemark, R., et al. (2017). Touch and hearing mediate osseoperception. Sci.
Rep. 7:45363. doi: 10.1038/srep45363
Cordella, F., Ciancio, A. L., Sacchetti, R., Davalli, A., Cutti, A. G., Guglielmelli, E.,
et al. (2016). Literature review on needs of upper limb prosthesis users. Front.
Neurosci. 10:209. doi: 10.3389/fnins.2016.00209
Dahiya, R. S., Metta, G., Valle, M., and Sandini, G. (2009). Tactile
sensing - from humans to humanoids. IEEE Trans. Robot. 26, 1–20.
doi: 10.1109/TRO.2009.2033627
D’Alonzo, M., Dosen, S., Cipriani, C., and Farina, D. (2014). HyVE:
Hybrid vibro-electrotactile stimulation for sensory feedback and substitution
in rehabilitation. IEEE Trans. Neural Syst. Rehabil. Eng. 22, 290–301.
doi: 10.1109/TNSRE.2013.2266482
Daniel, W. W. (1990). Applied Nonparametric Statistics, 2nd Edn. Boston, MA:
PWS-KENT Pub.
de Freitas, P. B., Uygur, M., and Jaric, S. (2009). Grip force adaptation
in manipulation activities performed under different coating and grasping
conditions. Neurosci. Lett. 457, 16–20. doi: 10.1016/j.neulet.2009.03.108
Dosen, S., Ninu, A., Yakimovich, T., Dietl, H., and Farina, D. (2016). A novel
method to generate amplitude-frequency modulated vibrotactile stimulation.
IEEE Trans. Haptics 9, 3–12. doi: 10.1109/TOH.2015.2497229
Fougner, A., Stavdahl, O., Kyberd, P. J., Losier, Y. G., and Parker, P. A. (2012). Control of upper limb prostheses: terminology and proportional myoelectric control - a review. IEEE Trans. Neural Syst. Rehabil. Eng. 20, 663–677. doi: 10.1109/TNSRE.2012.2196711
Franceschi, M., Seminara, L., Dosen, S., Strbac, M., Valle, M., and Farina, D.
(2017). A system for electrotactile feedback using electronic skin and flexible
matrix electrodes: experimental evaluation. IEEE Trans. Haptics 10, 162–172.
doi: 10.1109/TOH.2016.2618377
Johansson, R., and Westling, G. (1987). Signals in tactile afferents from the fingers
eliciting adaptive motor responses during precision grip. Exp. Brain Res. 66,
141–154. doi: 10.1007/BF00236210
Kim, J., Lee, M., Shim, H. J., Ghaffari, R., Cho, H. R., Son, D., et al. (2014).
Stretchable silicon nanoribbon electronics for skin prosthesis. Nat. Commun.
5:5747. doi: 10.1038/ncomms6747
Kingdom, F. A. A., and Prins, N. (2016). Psychophysics, 2nd Edn. Academic Press. doi: 10.1016/C2012-0-01278-1
Lederman, S. J., and Klatzky, R. L. (1987). Hand movements: a window into haptic object recognition. Cogn. Psychol. 19, 342–368. doi: 10.1016/0010-0285(87)90008-9
Li, Y., and Brånemark, R. (2017). Osseointegrated prostheses for
rehabilitation following amputation. Der Unfallchirurg 120, 285–292.
doi: 10.1007/s00113-017-0331-4
Markovic, M., Dosen, S., Cipriani, C., Popovic, D., and Farina, D. (2014).
Stereovision and augmented reality for closed-loop control of grasping in hand
prostheses. J. Neural Eng. 11:046001. doi: 10.1088/1741-2560/11/4/046001
Markovic, M., Schweisfurth, M. A., Engels, L. F., Farina, D., and Dosen, S. (2018). Myocontrol is closed-loop control: incidental feedback is sufficient for scaling the prosthesis force in routine grasping. J. Neuroeng. Rehabil. 15:81. doi: 10.1186/s12984-018-0422-7
Mayer, R. M., Mohammadi, A., Alici, G., Choong, P., and Oetomo, D. (2018).
“Static force dependency of bone conduction transducer as sensory feedback
for stump-socket based prosthesis,” in ACRA 2018 Proceedings (Lincoln).
Mayer, R. M., Mohammadi, A., Alici, G., Choong, P., and Oetomo, D. (2019). “Bone conduction as sensory feedback interface: a preliminary study,” in Engineering in Medicine and Biology (EMBC) (Berlin).
Mohammadi, A., Xu, Y., Tan, Y., Choong, P., and Oetomo, D. (2019). Magnetic-
based soft tactile sensors with deformable continuous force transfer medium
for resolving contact locations in robotic grasping and manipulation. Sensors
19:4925. doi: 10.3390/s19224925
Ninu, A., Dosen, S., Muceli, S., Rattay, F., Dietl, H., and Farina, D. (2014). Closed-
loop control of grasping with a myoelectric hand prosthesis: which are the
relevant feedback variables for force control? IEEE Trans. Neural Syst. Rehabil.
Eng. 22, 1041–1052. doi: 10.1109/TNSRE.2014.2318431
Patterson, P. E., and Katz, J. A. (1992). Design and evaluation of a sensory feedback
system that provides grasping pressure in a myoelectric hand. J. Rehabil. Res.
Dev. 29, 1–8. doi: 10.1682/JRRD.1992.01.0001
Saunders, I., and Vijayakumar, S. (2011). The role of feed-forward and feedback
processes for closed-loop prosthesis control. J. NeuroEng. Rehabil. 8:60.
doi: 10.1186/1743-0003-8-60
Shaw-Cortez, W., Oetomo, D., Manzie, C., and Choong, P. (2018). Tactile-based
blind grasping: a discrete-time object manipulation controller for robotic
hands. IEEE Robot. Autom. Lett. 3, 1064–1071. doi: 10.1109/LRA.2020.2977585
Shaw-Cortez, W., Oetomo, D., Manzie, C., and Choong, P. (2019). Robust object
manipulation for tactile-based blind grasping. Control Eng. Pract. 92:104136.
doi: 10.1016/j.conengprac.2019.104136
Shehata, A. W., Scheme, E. J., and Sensinger, J. W. (2018). Audible feedback
improves internal model strength and performance of myoelectric prosthesis
control. Sci. Rep. 8:8541. doi: 10.1038/s41598-018-26810-w
Stephens-Fripp, B., Alici, G., and Mutlu, R. (2018). A review of non-invasive
sensory feedback methods for transradial prosthetic hands. IEEE Access 6,
6878–6899. doi: 10.1109/ACCESS.2018.2791583
Stephens-Fripp, B., Mutlu, R., and Alici, G. (2019). A comparison of recognition
and sensitivity in the upper arm and lower arm to mechanotactile stimulation.
IEEE Trans. Med. Robot. Bion. 2, 76–85. doi: 10.1109/TMRB.2019.2956231
Svensson, P., Wijk, U., Björkman, A., and Antfolk, C. (2017). A review of invasive
and non-invasive sensory feedback in upper limb prostheses. Expert Rev. Med.
Dev. 14, 439–447. doi: 10.1080/17434440.2017.1332989
Westling, G., and Johansson, R. (1984). Factors influencing the force control
during precision grip. Exp. Brain Res. 53, 277–284.
Wilcoxon, F. (1945). Individual comparisons by ranking methods. Biometr. Bull. 1,
80–83. doi: 10.2307/3001968
Conflict of Interest: The authors declare that the research
was conducted in the absence of any commercial or financial
relationships that could be construed as a potential conflict
of interest.
Copyright © 2020 Mayer, Garcia-Rosas, Mohammadi, Tan, Alici, Choong and
Oetomo. This is an open-access article distributed under the terms of the
Creative Commons Attribution License (CC BY). The use, distribution or
reproduction in other forums is permitted, provided the original author(s)
and the copyright owner(s) are credited and that the original publication in
this journal is cited, in accordance with accepted academic practice. No use,
distribution or reproduction is permitted which does not comply with these
terms.