Eurographics Workshop on Visual Computing for Biology and Medicine (2015)
K. Bühler, L. Linsen, and N. W. John (Editors)
Exploration of 3D Medical Image Data for Interventional
Radiology using Myoelectric Gesture Control
J. Hettig¹, A. Mewes¹, O. Riabikin³, M. Skalej³, B. Preim², C. Hansen¹
1Computer-Assisted Surgery Group, Faculty of Computer Science, University of Magdeburg, Germany
2Visualization Group, Faculty of Computer Science, University of Magdeburg, Germany
3Clinic of Neuroradiology, University Hospital Magdeburg, Germany
Human-computer interaction with medical images in a sterile environment is a challenging task. It is often dele-
gated to an assistant or performed directly by the physician with an interaction device wrapped in a sterile plastic
sheath. This process is time-consuming and inefficient. To address this challenge, we introduce a gesture-based in-
terface for a medical image viewer that is completely touchlessly controlled by the Myo Gesture Control Armband
(Thalmic Labs). Based on a clinical requirement analysis, we propose a minimal gesture set to support basic inter-
action tasks with radiological images and 3D models. We conducted two user studies and a clinical test to evaluate
the interaction device and our new gesture control interface. The evaluation results prove the applicability of our
approach and provide an important foundation for future research in physician-machine interaction.
1. Introduction
Interventional radiology is based on the review and as-
sessment of pre- and intraoperative images to guide instru-
ments, identify and document findings, and provide treat-
ment [TCZ13]. However, interaction with 3D medical im-
ages in a sterile environment such as an operating room (OR)
challenges physicians. During interventions, available inter-
action devices for medical image exploration, i.e., joysticks,
buttons, and touch screens, are wrapped in a transparent
plastic sheath which makes the interaction cumbersome.
Direct control with a keyboard or mouse is not an op-
tion due to contamination with bacteria [RWGW06]. There-
fore, many functions are usually triggered and controlled
indirectly by radiology technicians in a (non-sterile) con-
trol room. The technicians interpret voice commands and
hand gestures of the radiologists and operate the inter-
ventional software using conventional interaction devices.
However, indirect interaction is time-consuming and error-
prone [OGS14] and requires additional specialized person-
nel which can result in higher treatment costs.
With the introduction of new input devices and interaction
paradigms, modern human-computer interaction offers many opportunities, e.g., natural 3D user interfaces and gesture
interaction [WW11,BKLP04,PD15]. Touchless gesture in-
terfaces have the potential to improve interaction with medi-
cal images and devices in sterile environments. Accordingly,
underlying interaction concepts need to be carefully adapted
to interventional scenarios and workflows.
In this work, we present a new method to control a
medical image viewer completely touchless using the Myo
Gesture Control Armband (Thalmic Labs Inc., Kitchener,
Canada) as an input device. In contrast to camera-based sys-
tems, this device does not introduce line-of-sight or posi-
tioning problems in the OR. Furthermore, the sterility is pre-
served, because the device is worn under the physician’s
clothes and does not provide an additional hazard. We in-
troduce a gesture-controlled interface using a minimal ges-
ture set to interact with radiologic images and 3D planning models.
To evaluate the Myo Gesture Control Armband, its clinical applicability, and the proposed gesture set, we conducted
two quantitative user studies and a clinical test during neu-
roradiological interventions. The first quantitative user study
focuses on the functionality, including device wearability
and assessing the gesture recognition rate of all hand ges-
tures supported by the software development kit (SDK). The
second quantitative user study investigates interaction with
a medical image viewer using the minimal gesture set pro-
posed in this work.
The Eurographics Association 2015.
J. Hettig et al. / Exploration of 3D Medical Image Data for Interventional Radiology using Myoelectric Gesture Control
2. Related Work
Commercial interaction devices have been used in the ster-
ile area of operating rooms for years. In many cases, touch
screens are used. A disadvantage of touch screens is that they
need to be wrapped in a sterile plastic sheath. According to
observations by the authors, the plastic sheath considerably
reduces the image quality and could cause interaction errors.
In addition, touch screen interaction is only possible if the
physician’s hand can reach the display. During an interven-
tion, this is often hard to achieve because of limited space
around the examination table.
Nowatschin et al. [NMWL07] proposed to install a 3D
mouse close to the surgeon to allow interaction with medical
image data and 3D planning models visualized by a surgi-
cal navigation system. 3D mice are appropriate to rotate 3D
models precisely. However, they are inappropriate for simple
(but essential) interaction tasks such as object selection. Sev-
eral groups [HKS08,GDPM08] propose using a 3D point-
ing device based on optical tracking and inertial sensing, i.e.,
the Nintendo Wiimote, to interact intraoperatively with med-
ical images and 3D models. Interaction with medical image
data using inertial sensors was also proposed by Schwarz et
al. [SBN11]. They introduced a system that learns defined
user gestures that are most suitable for a given task. Hence,
the user can integrate their preferences and does not depend
on a predefined gesture set. Another system using inertial
sensors for snapshot-guided nephrostomy was proposed by
Kotwicz et al. [HLUF14]. A three-axis compass, a three-axis
gyroscope, and a three-axis accelerometer are affixed on the
user’s hand under a sterile glove to execute, via small hand
gestures, interaction functions like scroll, select, and reset.
Many systems attempt to detect finger positions using
stereo cameras [CL09] or TOF cameras [PSS09] to con-
trol a mouse cursor. Ritter et al. [RHW09] track the move-
ments of hands to enable simple interaction tasks such as
rotating geometric planning models or triggering of events
via buttons. Gallo et al. [GPC11] present an interactive sys-
tem for medical image exploration using the Kinect depth
camera (Microsoft, Redmond, WA, USA) as a proof of con-
cept. The user interacts with static or dynamic hand and arm
gestures in front of the camera to execute exploration func-
tions like pointing, zooming, translating or windowing on
radiological images. Ebert et al. [EHA12] translate the data
delivered by the Kinect camera and a voice recognition soft-
ware into keyboard and mouse commands, and evaluate re-
sponse times and usability when navigating through radio-
logical images. Hoker et al. [HPMD13] propose a basic set
of six voice and six gesture commands for direct touchless
interaction in a real OR environment using the Kinect. Al-
though gesture recognition rates were high and remained sta-
ble under different lighting conditions, their study showed
that the rate of accidental triggering due to unintended com-
mands is too high for clinical use and should be reduced. Tan
et al. [TCZ13] evaluated a Kinect-controlled image viewer
system with 29 radiologists with different levels of expe-
rience during a routine abdominal computed tomographic
study. 69% of their subjects found the system useful and 7%
did not. Cited issues included hand tracking, inconsistent re-
sponsiveness, the required use of two hands, and the need for
ample space to operate. Mewes et al. [MSR15] presented a
natural gesture set to explore radiological images (projected
onto a radiation shield) using the Leap Motion Controller
(Leap Motion, Inc, San Francisco, USA). The results of their
user study show that sterile and direct interaction with the
Leap Motion Controller has the potential to replace conven-
tional interaction devices in the OR. However, the optimal
placement of the depth sensor close to the operator, the lim-
ited robustness of gesture recognition, and missing feedback
are reported as problems. In summary, optical-based gesture
recognition systems are widely used in experimental clinical
settings. However, they show considerable drawbacks when
applied in the OR, e.g., responsiveness, robustness, limited
interaction volume, and line of sight.
Human-computer interaction based on myoelectric sig-
nals (MES) is investigated only by a few groups worldwide.
The majority of applications in the field of myoelectric con-
trol focuses on prosthetics, signal analysis, robot control and
rehabilitation. A substantial survey about the use of myo-
electric signals was introduced by Oskoei and Hu [OH07].
They reviewed various research in pattern recognition- and
non-pattern recognition-based myoelectric control, state-of-
the-art achievements, and potential applications. Based on the discussed achievements, their survey has led to the development of new approaches for the improvement of myoelectric
control. In another work, Oskoei and Hu [OH09] examined
time-related variabilities in myoelectric signals that occur
through fatigue while playing video games. They proposed
an adaptive scheme that models fatigue-based changes and
modifies the classification criteria to provide a stable perfor-
mance in long-term operations.
With respect to the analysis of myoelectric signals, sev-
eral different methods are used to detect hand and finger ges-
tures, improve diagnostic applications and build the founda-
tion for myoelectric gesture control. Chen et al. [CZZ07]
used a linear Bayesian classifier; Naik et al. [NKA10] presented a method using Independent Component Analysis
in combination with blind source separation. Samadani and
Kulic [SK14] used Hidden Markov Models to analyze the
myoelectric signals.
An early work concerning myoelectric gesture control
was presented by Wheeler [Whe03]. He used two neuro-
electronic interfaces for virtual device control. Both inter-
face configurations are based on sampled data which were
collected from the user’s forearm with an electromyogram.
In the first study, a sleeve with dry electrodes (fixed arrange-
ment of the electrodes) is utilized to emulate a virtual joy-
stick of a flight simulator with the directions up, down, left
and right. In the second study, wet electrodes are placed on
the participant’s forearm (free and variable arrangement of
the electrodes) to simulate a virtual keyboard with the keys
0 to 9 and Enter. The results illustrate the potential of myo-
electric gesture control using a non-invasive setup. However,
to the knowledge of the authors, myoelectric gesture control
to support human-computer interaction during surgical pro-
cedures or radiological interventions has not been described
so far.
3. Material and Methods
The focus of this work is the evaluation of touchless interac-
tion with radiological images and 3D planning models using
the Myo Gesture Control Armband as input device. There-
fore, we introduce a minimal gesture set for a medical image
viewer. Technical and clinical requirements for our approach
were determined by analyzing the workflow of neuroradio-
logical interventions.
3.1. Requirement Analysis
First, in previous work [HHB14], we analyzed video data from more than 25 different neuroradiological procedures. We
classified single interaction steps during each procedure,
such as scrolling through acquired digital subtraction an-
giography (DSA) images, rotation of 3D vascular models, or
zooming to analyze details in the images. Second, we partic-
ipated in various radiological interventions where a modern
angiography CT imaging system (Artis zeego, Siemens) was
utilized to support instrument guidance. As a result, we can
confirm the following disturbances in the clinical workflow:
Delegation of tasks: Verbal comments or hand gestures
are used to delegate human-computer interaction tasks to
an assistant in the OR or in a non-sterile control room
(indirect interaction).
Leaving the OR or operating table: Physicians have to
change their position to use the provided interaction de-
vices (joystick, buttons, and touch screens). In complex
cases, they have to leave the sterile OR to use a worksta-
tion in the control room to interact with the patient data.
Leaning over the operating table: To interact with touch
screens, physicians have to lean over the operating table
and the patient.
Third, our requirement analysis covered a review of the literature related to gesture-based and touchless interaction. With this information, we specified the seven functions listed in Table 1.
Based on discussions with our clinical partner, we decided
to provide only two degrees of freedom for the rotation of
3D models in order to reduce the complexity. In this work,
we decided to focus on the interaction tasks that we observed
most frequently during interventions. Further observed inter-
actions, such as changing window-level settings or distance
measuring, are also important but not considered here.
Table 1: Specified explicit 2D and 3D interaction functions based on our requirement analysis.

2D                         3D
Scrolling in z-direction   —
Panning in x-direction     Rotation around the x-axis
Panning in y-direction     Rotation around the y-axis
Zooming                    Zooming
3.2. Myo Armband and Gesture Set
The Myo Gesture Control Armband is worn on the user's forearm and measures the electrical signals that arise from biochemical processes during muscle contractions. These contractions are caused by movements of the hand. The armband holds eight surface electromyographic sensors (medical-grade stainless steel EMG sensors) that measure those signals. The hand movements include the following five gestures (see Fig. 1), which are supported by the armband:

Double Tap: Tapping the thumb and middle finger twice.
Fist: Forming a fist with the hand.
Spread Fingers: Open hand with spread fingers.
Wave In: Wave motion with the hand toward the body (palmar flexion).
Wave Out: Wave motion with the hand away from the body (dorsal extension).

For haptic feedback, the armband can produce vibrations of various lengths. Connection and data transmission are based on Bluetooth technology, which is certified for use in the OR and does not interfere with other devices [WW04].
Due to the small number of gestures recognized by the device, we propose a minimal gesture set. We assign a gesture to more than one function rather than assigning a specific gesture to each tool or function. This results in a concept that can be expanded with new functions without the need to learn new gestures. Furthermore, the cognitive effort of memorizing each gesture and its corresponding function is minimal. To realize a minimal gesture set, we first reduced the seven specified explicit functions (see Table 1) to four basic functions. For that, we mapped the available gestures onto each function individually. Subsequently, we merged functions into simple and general interaction tasks where this seemed consistent. The results of this merger are four basic functions, consisting of a lock, a select, a parameterize, and an interaction function, which are in turn mapped onto the five available gestures and then used to control the software and to interact with the visualization.
Figure 1: Hand postures of the five gestures: (a) Double Tap, (b) Fist, (c) Spread Fingers, (d) Wave In, (e) Wave Out.

The locking status of the medical image viewer is switched using the Double Tap (Fig. 1a) gesture. If the system is locked, no interaction is possible and the physician can work without any disturbances. To switch between func-
tions or change a function parameter (e.g., slicing speed) the
gestures Fist (Fig. 1b) and Spread Fingers (Fig. 1c) are used
to activate the selection. Finally, the two opposing gestures
Wave In (Fig. 1d) and Wave Out (Fig. 1e) are used to select
and parameterize a function. In addition, these gestures are
used to control functions, e.g., incrementing or decrement-
ing the current slice position in the 2D image viewer.
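To make the mapping concrete, the lock/select/parameterize logic described above can be sketched as a small state machine. This is an illustrative sketch, not the authors' implementation; the gesture strings and the function list are assumed names:

```python
# Illustrative sketch of the minimal gesture set's control logic.
# Not the authors' implementation; gesture names and FUNCTIONS are assumptions.
FUNCTIONS = ["scroll", "pan", "zoom"]  # hypothetical selectable functions

class GestureInterface:
    def __init__(self):
        self.locked = True       # Double Tap toggles this lock
        self.selecting = False   # Fist / Spread Fingers toggle selection mode
        self.function = 0        # index of the active function in FUNCTIONS
        self.value = 0           # parameter of the active function (e.g., slice)

    def on_gesture(self, gesture):
        if gesture == "double_tap":
            self.locked = not self.locked        # (un)lock the viewer
            return
        if self.locked:
            return                               # locked: ignore everything else
        if gesture in ("fist", "spread_fingers"):
            self.selecting = not self.selecting  # enter/leave function selection
        elif gesture in ("wave_in", "wave_out"):
            step = 1 if gesture == "wave_out" else -1
            if self.selecting:
                # Wave gestures cycle through the selectable functions ...
                self.function = (self.function + step) % len(FUNCTIONS)
            else:
                # ... or increment/decrement the active function's parameter
                self.value += step
```

In this sketch, a Wave gesture has no effect until the viewer has been unlocked with Double Tap, mirroring the locking behavior described above.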
3.3. Medical Image Viewer
We implemented a medical image viewer that serves primar-
ily as a tool to evaluate the interaction with the Myo arm-
band. The Qt application framework was used in version 5.4
to build the Graphical User Interface (GUI) and the Visualization Toolkit (VTK) in version 6.1 to visualize the medical dataset. For the Myo armband, we utilized the manufacturer's C++ SDK in version 0.81 together with the matching firmware.
This viewer also offers the possibility to integrate differ-
ent devices for comparison studies between device-specific
interaction styles. To acquire quantitative measurements, a
data logger is implemented as well. The complete control
of the viewer is performed using the Myo armband. The
viewer has two viewports to display 2D and 3D images, as
shown in Figure 2. Furthermore, a visual as well as a hap-
tic feedback was implemented to provide additional infor-
mation about the selected function, its parameterization and
occurring events.
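The data logger mentioned above could be as simple as a timestamped list of recognized gestures. The following is a minimal sketch; the CSV layout and field names are assumptions, not the authors' format:

```python
# Minimal sketch of a gesture data logger (file format is an assumption).
import csv
import time

class GestureLogger:
    def __init__(self):
        self.rows = []  # list of (timestamp, gesture) pairs

    def log(self, gesture, timestamp=None):
        # Record the recognized gesture together with a wall-clock timestamp.
        if timestamp is None:
            timestamp = time.time()
        self.rows.append((timestamp, gesture))

    def write(self, path):
        # Persist the log as CSV for later analysis of the interventions.
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["timestamp", "gesture"])
            writer.writerows(self.rows)
```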
3.4. Evaluation
We conducted two quantitative user studies and a clinical
test in the OR to evaluate the Myo armband, the proposed
minimal gesture set, and its clinical applicability.
Experimental setup: The two quantitative user studies
were performed under controlled lab conditions in an OR-
like setup that aims to simulate the conditions in an inter-
vention room (see Fig. 3). We displayed our medical im-
age viewer on a 24” touch screen monitor belonging to the
CAS-ONE IR navigation system (CAScination AG, Bern,
Switzerland). Furthermore, we placed an operating table
with a medical phantom on it in front of the user to simulate the real distance between the monitor and the physician's position in the OR. For our user studies, we used
a liver CT data set (84 slices) with a primary liver tu-
mor. The corresponding 3D planning models including seg-
mented liver vessels (portal vein and hepatic vein) and the
tumor were generated using the medical image processing
platform MeVisLab [RBH11].
Evaluation Criteria: Based on the requirements, we de-
fined criteria which we evaluated in our studies. The most
important clinical requirement is preserving the sterility
of the device and inherent hardware. Another aspect is
the training time and the time needed to interact with the
gesture-based interface to fulfill a given task. Furthermore,
the acceptance of the proposed concept by the physician as
the end user is important. Finally, the conducted user studies
investigate the robustness of the gesture recognition and the
associated impact on usability and applicability in the OR.
Figure 2: Graphical user interface of our medical image viewer, with a viewport to explore radiological images (left) and a second viewport to visualize 3D models (right). Left of and above the two viewports, interactive icons provide visual feedback about the selected function, its parameterization, the locking status, and the currently recognized gesture.

Figure 3: Experimental setup showing the operating table, a medical phantom, and the CAS-ONE IR navigation system (CAScination AG, Bern, Switzerland). The subject wears the armband on the right arm and interacts with the visualization.

A functionality study was performed to evaluate the Myo armband as an interaction device with regard to accuracy and robustness. During the study, we ensured that the position of the armband was identical for all subjects by placing a marked sensor module of the armband on the musculus palmaris longus (on the lower surface of the forearm). The experiment consisted of the following two parts, which were performed for each arm individually (dominant arm first) to reveal any differences related to handedness:
1. Training: Each subject was introduced to the functionality of the armband by explaining each gesture. Afterwards, the subjects had unlimited time to familiarize themselves with the device, so that each subject knew how to move the wrist and hand such that the system recognizes the gestures.
2. Verification: Each subject had to explicitly perform a
given gesture ten times to verify accuracy and robustness.
This was repeated for all five gestures.
The quantitative measurements comprised the training time and the correctly and incorrectly recognized gestures. Overall, 2150 gestures were recorded and analyzed. Furthermore, we acquired physical data about the subjects' arms to gather information about possible causes of unpleasant sensations or changes in the recognition rates due to a too tight or too loose fit
of the armband. At the same time, we instructed the subjects
to use the think-aloud protocol [FKG93] to gather individual
and qualitative information about the Myo armband as input
device. After the test, we asked in a questionnaire about the
wearing comfort of the armband, the motion of each gesture
to determine problems in the early stages of our develop-
ment, and if there were any differences between the domi-
nant versus non-dominant arm.
The interaction study focused on the interaction with the
medical image viewer using the proposed concept of a min-
imal gesture set. Analogous to the first study, the second
study started with an introduction and a training. The han-
dling of the medical image viewer with the minimal gesture
set was part of this training phase. Therefore, we explained
the user interface including the visual and haptic feedback
system and the gesture control using the armband. Each sub-
ject received an unlimited amount of time to understand the
handling of the medical image viewer. The test supervisor
answered no questions after the training phase in order to
evaluate the developed feedback systems regarding problem
handling and interaction flow. In the test phase, each subject
had to perform the following four tasks:
1. Localizing the liver tumor in the 2D data set and deter-
mining start and end slice (9 to 38).
2. Selecting a specific slice, zooming the image to a prede-
fined value and positioning it in the viewer’s center (com-
plex task).
3. Rotating the 3D planning model to a given orientation.
4. Zooming in the 3D view to a predefined zoom value.
All experiments were recorded using a video camera in or-
der to log verbal comments of the participants. Quantitative
measurements included the time a subject needed to perform
each task. In addition, we asked the participants to fill in an
adapted ISONORM 9421/110 questionnaire [Prü97] in or-
der to evaluate our interaction approach regarding usability,
naturalness of the execution, weariness, memorability and
understanding of each gesture.
The clinical test focused on the evaluation of the armband
during two neuroradiological interventions. This pilot study
helped us to identify problems with the gesture recognition
in a real clinical setting and moreover to get feedback from
the physicians after using the Myo armband. Therefore, we
used the data logger to record the recognized gestures and
the time steps at which the gesture was recognized. During
each intervention we also recorded the single workflow steps
(including time stamps) to evaluate the recognized gesture
and the individual hand movement. This way, we could iden-
tify, if and under which conditions any of the gestures of the
set were accidentally performed or recognized.
The first intervention was a periradicular therapy and was
performed by a resident physician who wore the armband for
about 45 minutes during the preparation and intervention. In
the second intervention, an assistant medical director wore
the armband during an embolization of a cerebral arteriove-
nous malformation for about two hours.
4. Results
The results of the functionality study are shown in Figure 4.
20 subjects (average age = 27.2 years, 14 female and 6 male)
with different levels of experience in gesture control and
varying constitutions of their forearm (circumference and
hairiness) participated in this study. Two participants were
left-handed and 18 right-handed.
Figure 4: Recognition rates for each hand gesture within our functionality study. The pie charts visualize correctly and incorrectly detected gestures (Double Tap = dark blue, Fist = light blue, Spread Fingers = orange, Wave In = grey, Wave Out = …).

Differences in handedness were noticed by nine of the subjects after the second run of this study, regarding the easier understanding of the hand movement (hand gesture), and some users had difficulties using the armband on the non-dominant arm. The Double Tap gesture had the lowest
correct recognition rate (56.04%); therefore, a double lock system was applied to prevent unintentional interaction. This
means that an interaction is only possible if the viewer is un-
locked and a function selected and parametrized. It should
be noted that this gesture took the most time to understand in the training phase. Both Wave gestures
achieved a good recognition rate (71.23% and 86.40%).
Also, the Fist (78.84%) and Spread Fingers (71.76%) had
similarly good recognition rates. It should be mentioned that
both gestures (Fist and Spread Fingers) have a mutual recog-
nition rate of about 11% due to the contractions of neighbor-
ing muscles. According to our data, the recognition rate de-
pends on the training time and can be improved by a longer
practicing period for the users to familiarize with the device.
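Recognition rates of this kind can be computed directly from (intended, recognized) pairs logged during the verification phase. The sketch below uses made-up counts, not the study's data:

```python
# Sketch: per-gesture recognition rates from verification trials.
# The trial data below are illustrative placeholders, not the study's data.
from collections import Counter

def recognition_rates(trials):
    """trials: iterable of (intended, recognized) gesture pairs."""
    total = Counter(intended for intended, _ in trials)
    correct = Counter(intended for intended, rec in trials if intended == rec)
    return {g: 100.0 * correct[g] / total[g] for g in total}

trials = ([("fist", "fist")] * 8 + [("fist", "spread_fingers")] * 2
          + [("wave_in", "wave_in")] * 9 + [("wave_in", "unknown")])
rates = recognition_rates(trials)  # {'fist': 80.0, 'wave_in': 90.0}
```

The misclassified `("fist", "spread_fingers")` trials illustrate the kind of mutual recognition between neighboring-muscle gestures noted above.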
The average training time to familiarize with each gesture was 111 s with a standard deviation of σ = 60 s for the dominant arm, and an average of 98 s with σ = 58 s for the non-dominant arm. We assume that the differences occurred
because the hand movements were known after the first test.
The collected data about the subjects' arms, including circumference (with a mean value of 25.75 cm and σ = 1.72 cm) and hairiness, did not influence the results in our experiments and therefore provide no additional value. For thinner arms,
we provided a better fit of the armband through applying
clips to it to make it tighter and thus establish a better skin
contact. Comments collected from the questionnaires and
the think-aloud protocol included issues about the wearing
comfort of the Myo armband depending on the period of
time the armband is worn and related pain or unpleasant sen-
sations in the arm. Minor problems were reported regarding
the form of individual gestures and a resulting unpleasant
hand posture. Six subjects experienced a constricting sensation, and two mentioned that the Fist and Spread Fingers gestures are exhausting due to the strong exertion required to execute them. The Wave Out gesture was easier to execute than the Wave In gesture for most subjects in this study. Moreover, tenosynovitis can make the hand movement painful through hyperextension of the wrist.
Our subject pool for the interaction study consisted of 10
medical domain experts, i.e., medical students and assistant
physicians (average age = 23.8 years, 6 female and 4 male).
None of these subjects participated in our first study. The
training time for understanding each gesture was similar to that in our functionality study. However, the training time for
the interaction differed from subject to subject. The mean
time for the training was 4:51 minutes with σ = 1:59 min.
This is sufficient for our non-safety-critical purposes.
The times for each interaction task of the study are shown
in Table 2. Subjects needed the most time (2:14 min) to ro-
tate the 3D model to the given orientation. This might be
explained by the fact that the rotation had to be performed
around two separate axes and not via the usual trackball metaphor. The
interaction with the 2D slices, however, succeeded in most
cases without any problems.
Table 2: Measured times for each interaction task (time in
minutes) during the interaction study.
Task Mean Time Standard Deviation σ
Training 4:51 1:59
1 1:06 0:29
2 2:03 1:20
3 2:14 1:10
4 0:53 0:36
We used a Likert scale from 1 to 5 for simplicity, naturalness, memorability, and understanding of each gesture, as well as the weariness of using a gesture. The results are presented in Table 3. The findings of the interaction study are
in line with the results of the functionality study. This can
Table 3: Questionnaire results for each gesture (interaction study). Rating is based on a 5-point Likert scale from 1 = strongly
disagree to 5 = strongly agree.
                         Double Tap   Fist   Spread Fingers   Wave In   Wave Out   Mean ± SD
Simplicity                  3.9       4.9        4.1            4.4       4.5      4.36 ± 0.385
Naturalness                 3.9       4.8        4.2            4.2       4.4      4.30 ± 0.332
Memorability                4.7       4.9        4.7            4.6       4.6      4.70 ± 0.122
Understanding               3.8       4.4        4.0            4.2       4.2      4.12 ± 0.228
Weariness (not tiring)      3.8       4.6        4.1            4.0       4.1      4.12 ± 0.295
be seen, e.g., in the values for the Double Tap gesture, which
had the worst recognition rate of the five gestures. This led to obstructions in the workflow while solving the four
given tasks due to unintentionally executed gestures, which
triggered unwanted behavior.
The results from the clinical test, particularly the analy-
sis of the logging data, shed light on the relation between
intra-operative workflow steps and recognized gestures (see
Table 4). A major problem is the Unknown gesture, which indicates a connection loss between the armband and the host computer. During a radiological intervention, several physicians and assistants wearing radiation protection vests can block the Bluetooth signal. Also, a too large distance between the receiving host PC and the physician wearing the armband can lead to a connection loss (limited Bluetooth range).
The Double Tap gesture was recognized most often (first in-
tervention), because movements such as knocking a syringe
or tapping devices like a touch screen are similar to the ges-
ture's muscle contractions and are performed often during this
kind of intervention. The two gestures Fist and Spread Fin-
gers do have a chance of mutual recognition. Both gestures
are recognized in similar procedure steps consistently, e.g.,
when inserting a catheter or using a syringe to administer a
contrast agent for vessel imaging (full tension of the fore-
arm). It can be assumed that those two gestures are recog-
nized most frequently during minimal invasive interventions
if no additional intervention system is used. The Wave ges-
tures are recognized when using the angiography system,
e.g., when positioning the table with a joystick or interacting
with the image data. In some cases, those gestures are also
recognized by pointing on the monitors or gesticulating.
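Since the Unknown gesture stands for a lost Bluetooth connection, a host application could treat a stale event stream as a disconnect and lock the interface. The following is a minimal host-side sketch under that assumption; the actual Myo SDK callbacks and the timeout value are not modeled from the evaluated prototype:

```python
import time

class ArmbandWatchdog:
    """Treats a stale event stream as a connection loss (hypothetical
    host-side logic; not taken from the evaluated prototype)."""

    def __init__(self, timeout_s=2.0):
        self.timeout_s = timeout_s
        self.last_event = time.monotonic()

    def on_event(self):
        # Called for every EMG/gesture packet received over Bluetooth
        self.last_event = time.monotonic()

    def connected(self, now=None):
        # Report a connection loss once no packet arrived within timeout_s
        now = time.monotonic() if now is None else now
        return (now - self.last_event) <= self.timeout_s
```

If `connected()` returns False, the viewer could lock itself and display a "connection lost" indicator instead of acting on possibly stale gestures.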
For a qualitative analysis, the operating physicians answered questions about wearing comfort and a possible future use of the Myo armband as an interaction device. Depending on the circumference of the forearm (tight fit), wearing the Myo armband during an entire intervention could be constricting, but it did not affect any procedure step.
5. Discussion
The results of the functionality study showed that there are
only minor problems concerning the wearing comfort of the
armband. However, this was not confirmed by the feedback
Table 4: Log analysis of two neuroradiological interventions. The table shows the number of recognized gestures during each procedure.

Gesture          Intervention 1   Intervention 2
Unknown                       8                2
Double Tap                  132              203
Fist                         62              131
Spread Fingers              108              440
Wave In                      28              152
Wave Out                     26               89
Overall                     364             1017
we received from the physicians after the interventions during the clinical test. The physicians reported no problems with the Myo armband as a device, and no interference with the clinical workflow was observed. The haptic feedback was not actively noticed by the physicians during the operation; accordingly, an adaptation of the vibration feedback is necessary.
The interaction study showed that the proposed concept of a minimal gesture set is a notable option compared to individual gestures for each task. One benefit of this concept is its expandability regarding new functionalities, as far as logically practicable, e.g., for modifying the window level. The individual gestures of the set were consistently rated as a good match for their functions, easy to execute and remember, and overall a good option for interacting with the visualization through simple hand gestures. Only the Double Tap gesture was rated inferior because of its insufficient recognition rate and the resulting disturbances in the workflow. Although the Double Tap gesture performed badly in the functionality study, the authors decided to use it as the unlock gesture, because the other available gestures were already used as logically connected controls for the software functions, and the delineation and unambiguity of the gestures should be preserved. Minor drawbacks were an occasionally unpleasant hand posture and problems with the precise execution of a function. Our defined requirements were fulfilled, except for the robustness of the system, which is one of the most crucial aspects. Formal feedback from the physicians after the clinical tests indicates that the proposed concept has the potential to improve the workflow in an OR. If physicians
could navigate directly without delegating interaction tasks, assistants could prepare upcoming procedure steps instead. This might shorten intervention times and reduce intervention costs. Compared to interaction devices with a fixed position and varying distance to the user (such as a control panel placed on the operating table), or to camera-based systems with a limited field of view, the proposed system enables very flexible and mobile interaction in the OR.
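The lock mechanism discussed above (Double Tap as the unlock gesture, the remaining gestures bound to viewer functions) can be sketched as a small state machine. The action bindings below are illustrative assumptions, not the exact mapping of the evaluated prototype:

```python
# Hypothetical lock/unlock state machine: Double Tap toggles an
# "unlocked" state; all other gestures are ignored while locked, so
# everyday hand movements cannot trigger viewer functions.
ACTIONS = {
    "fist": "grab/rotate 3D model",        # illustrative binding
    "spread_fingers": "release/reset",     # illustrative binding
    "wave_in": "previous image slice",     # illustrative binding
    "wave_out": "next image slice",        # illustrative binding
}

class GestureInterface:
    def __init__(self):
        self.unlocked = False

    def handle(self, gesture):
        if gesture == "double_tap":
            self.unlocked = not self.unlocked  # toggle lock state
            return "unlocked" if self.unlocked else "locked"
        if not self.unlocked:
            return None                        # ignored while locked
        return ACTIONS.get(gesture)
```

The design choice is that a single, rarely-occurring gesture gates all others, trading one extra interaction step for robustness against accidental activation.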
6. Conclusion and Future Work
Direct interaction with medical images in a sterile environment is a challenging task. We presented and evaluated a concept for myoelectrically controlled touchless interaction with medical image data. Our results prove its applicability and may inspire future research.
Future improvements concerning the robustness of the Myo armband are necessary to ensure a trouble-free workflow without misinterpreted gestures or accidentally executed functions. For example, a connection loss is not acceptable for safety-critical purposes. However, robustness and recognition rates may increase with future versions of the device and SDK.
Concepts for multimodal user interfaces (or the use of the remaining inertial measurement unit sensors in the armband) should be considered to further improve this system. Furthermore, transferring the proposed gesture set to other input devices would enable a systematic comparison of different interaction devices.
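One concrete form such a multimodal extension could take, sketched here purely as an illustrative assumption (the function, gesture choice, and gain are hypothetical, not part of the evaluated prototype): holding an EMG gesture such as Fist as a clutch while the armband's IMU roll angle continuously adjusts the window level.

```python
import math

def windowing_delta(roll_rad, clutch_active, gain=200.0):
    """Hypothetical multimodal mapping: while an EMG 'clutch' gesture
    (e.g., Fist) is held, the IMU roll angle adjusts the window level
    continuously. Not part of the evaluated prototype."""
    if not clutch_active:
        return 0.0  # IMU input is ignored unless the clutch is held
    return gain * roll_rad / math.pi  # scale roll to a window-level offset
```

A clutch of this kind keeps the discrete gesture set intact while adding one continuous degree of freedom that discrete gestures handle poorly.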
The willingness of the physicians to use the armband during radiological interventions showed its potential for a real clinical trial. This would allow us to acquire more quantitative data and to evaluate the benefit of using a myoelectric device for direct interaction compared to task delegation.
Acknowledgments
We would like to thank the participants of the user studies and all involved clinicians for their assistance. This work is funded by the German Federal Ministry of Education and Research (BMBF) within the STIMULATE research campus (grant number …).
Review of prior and real-time patient images is critical during an interventional radiology procedure; however, it often poses the challenge of efficiently reviewing images while maintaining a sterile field. Although interventional radiologists can "scrub out" of the procedure, use sterile console covers, or verbally relay directions to an assistant, the ability of the interventionalist to directly control the images without having to touch the console could offer potential gains in terms of sterility, procedure efficiency, and radiation reduction. The authors investigated a potential solution with a low-cost, touch-free motion-tracking device that was originally designed as a video game controller. The device tracks a person's skeletal frame and its motions, a capacity that was adapted to allow manipulation of medical images by means of hand gestures. A custom software program called the Touchless Radiology Imaging Control System translates motion information obtained with the motion-tracking device into commands to review images on a workstation. To evaluate this system, 29 radiologists at the authors' institution were asked to perform a set of standardized tasks during a routine abdominal computed tomographic study. Participants evaluated the device for its efficacy as well as its possible advantages and disadvantages. The majority (69%) of those surveyed believed that the device could be useful in an interventional radiology practice and did not foresee problems with maintaining a sterile field. This proof-of-concept prototype and study demonstrate the potential utility of the motion-tracking device for enhancing imaging-guided treatment in the interventional radiology suite while maintaining a sterile field. Supplemental material available at© RSNA, 2013.