Eurographics Workshop on Visual Computing for Biology and Medicine (2015)
K. Bühler, L. Linsen, and N. W. John (Editors)
Exploration of 3D Medical Image Data for Interventional
Radiology using Myoelectric Gesture Control
J. Hettig1, A. Mewes1, O. Riabikin3, M. Skalej3, B. Preim2, C. Hansen1
1Computer-Assisted Surgery Group, Faculty of Computer Science, University of Magdeburg, Germany
2Visualization Group, Faculty of Computer Science, University of Magdeburg, Germany
3Clinic of Neuroradiology, University Hospital Magdeburg, Germany
Abstract
Human-computer interaction with medical images in a sterile environment is a challenging task. It is often dele-
gated to an assistant or performed directly by the physician with an interaction device wrapped in a sterile plastic
sheath. This process is time-consuming and inefficient. To address this challenge, we introduce a gesture-based in-
terface for a medical image viewer that is completely touchlessly controlled by the Myo Gesture Control Armband
(Thalmic Labs). Based on a clinical requirement analysis, we propose a minimal gesture set to support basic inter-
action tasks with radiological images and 3D models. We conducted two user studies and a clinical test to evaluate
the interaction device and our new gesture control interface. The evaluation results prove the applicability of our
approach and provide an important foundation for future research in physician-machine interaction.
1. Introduction
Interventional radiology is based on the review and as-
sessment of pre- and intraoperative images to guide instru-
ments, identify and document findings, and provide treat-
ment [TCZ∗13]. However, interaction with 3D medical im-
ages in a sterile environment such as an operating room (OR)
challenges physicians. During interventions, available inter-
action devices for medical image exploration, i.e., joysticks,
buttons, and touch screens, are wrapped in a transparent
plastic sheath which makes the interaction cumbersome.
Direct control with a keyboard or mouse is not an op-
tion due to contamination with bacteria [RWGW06]. There-
fore, many functions are usually triggered and controlled
indirectly by radiology technicians in a (non-sterile) con-
trol room. The technicians interpret voice commands and
hand gestures of the radiologists and operate the inter-
ventional software using conventional interaction devices.
However, indirect interaction is time-consuming and error-
prone [OGS∗14] and requires additional specialized person-
nel which can result in higher treatment costs.
With the introduction of new input devices and interaction
paradigms, modern human-computer interaction offers a lot
of opportunities, e.g., natural 3D user interfaces and gesture
interaction [WW11,BKLP04,PD15]. Touchless gesture in-
terfaces have the potential to improve interaction with medi-
cal images and devices in sterile environments. Accordingly,
underlying interaction concepts need to be carefully adapted
to interventional scenarios and workflows.
In this work, we present a new method to control a
medical image viewer completely touchless using the Myo
Gesture Control Armband (Thalmic Labs Inc., Kitchener,
Canada) as an input device. In contrast to camera-based sys-
tems, this device does not introduce line-of-sight or posi-
tioning problems in the OR. Furthermore, sterility is preserved because the device is worn under the physician’s clothes and does not pose an additional hazard. We in-
troduce a gesture-controlled interface using a minimal ges-
ture set to interact with radiologic images and 3D planning
models.
To evaluate the Myo Gesture Control Armband, its clini-
cal applicability, and the proposed gesture set, we conducted
two quantitative user studies and a clinical test during neu-
roradiological interventions. The first quantitative user study
focuses on the functionality, including device wearability
and assessing the gesture recognition rate of all hand ges-
tures supported by the software development kit (SDK). The
second quantitative user study investigates interaction with
a medical image viewer using the minimal gesture set pro-
posed in this work.
2. Related Work
Commercial interaction devices have been used in the ster-
ile area of operating rooms for years. In many cases, touch
screens are used. A disadvantage of touch screens is that they
need to be wrapped in a sterile plastic sheath. According to
observations by the authors, the plastic sheath considerably
reduces the image quality and could cause interaction errors.
In addition, touch screen interaction is only possible if the
physician’s hand can reach the display. During an interven-
tion, this is often hard to achieve because of limited space
around the examination table.
Nowatschin et al. [NMWL07] proposed to install a 3D
mouse close to the surgeon to allow interaction with medical
image data and 3D planning models visualized by a surgi-
cal navigation system. 3D mice are appropriate to rotate 3D
models precisely. However, they are inappropriate for simple
(but essential) interaction tasks such as object selection. Sev-
eral groups [HKS∗08,GDPM08] propose using a 3D point-
ing device based on optical tracking and inertial sensing, i.e.,
the Nintendo Wiimote, to interact intraoperatively with med-
ical images and 3D models. Interaction with medical image
data using inertial sensors was also proposed by Schwarz et
al. [SBN11]. They introduced a system that learns user-defined gestures that are most suitable for a given task. Hence,
the user can integrate their preferences and does not depend
on a predefined gesture set. Another system using inertial
sensors for snapshot-guided nephrostomy was proposed by
Kotwicz et al. [HLUF14]. A three-axis compass, a three-axis
gyroscope, and a three-axis accelerometer are affixed to the
user’s hand under a sterile glove to execute, via small hand
gestures, interaction functions like scroll, select, and reset.
Many systems attempt to detect finger positions using
stereo cameras [CL09] or TOF cameras [PSS∗09] to con-
trol a mouse cursor. Ritter et al. [RHW∗09] track the move-
ments of hands to enable simple interaction tasks such as
rotating geometric planning models or triggering of events
via buttons. Gallo et al. [GPC11] present an interactive sys-
tem for medical image exploration using the Kinect depth
camera (Microsoft, Redmond, WA, USA) as a proof of con-
cept. The user interacts with static or dynamic hand and arm
gestures in front of the camera to execute exploration func-
tions like pointing, zooming, translating or windowing on
radiological images. Ebert et al. [EHA∗12] translate the data
delivered by the Kinect camera and a voice recognition soft-
ware into keyboard and mouse commands, and evaluate re-
sponse times and usability when navigating through radio-
logical images. Hötker et al. [HPMD13] propose a basic set
of six voice and six gesture commands for direct touchless
interaction in a real OR environment using the Kinect. Al-
though gesture recognition rates were high and remained sta-
ble under different lighting conditions, their study showed
that the rate of accidental triggering due to unintended com-
mands is too high for clinical use and should be reduced. Tan
et al. [TCZ∗13] evaluated a Kinect-controlled image viewer
system with 29 radiologists with different levels of expe-
rience during a routine abdominal computed tomographic
study. 69% of their subjects found the system useful and 7%
did not. Cited issues included hand tracking, inconsistent re-
sponsiveness, the required use of two hands, and the need for
ample space to operate. Mewes et al. [MSR∗15] presented a
natural gesture set to explore radiological images (projected
onto a radiation shield) using the Leap Motion Controller
(Leap Motion, Inc., San Francisco, USA). The results of their
user study show that sterile and direct interaction with the
Leap Motion Controller has the potential to replace conven-
tional interaction devices in the OR. However, the optimal
placement of the depth sensor close to the operator, the lim-
ited robustness of gesture recognition, and missing feedback
are reported as problems. In summary, optical-based gesture
recognition systems are widely used in experimental clinical
settings. However, they show considerable drawbacks when
applied in the OR, e.g., responsiveness, robustness, limited
interaction volume, and line of sight.
Human-computer interaction based on myoelectric sig-
nals (MES) is investigated only by a few groups worldwide.
The majority of applications in the field of myoelectric con-
trol focuses on prosthetics, signal analysis, robot control and
rehabilitation. A substantial survey on the use of myoelectric signals was presented by Oskoei and Hu [OH07]. They reviewed research on pattern recognition-based and non-pattern recognition-based myoelectric control, state-of-the-art achievements, and potential applications. Based on the discussed achievements, their survey has led to the development of new approaches for improving myoelectric control. In another work, Oskoei and Hu [OH09] examined
time-related variabilities in myoelectric signals that occur
through fatigue while playing video games. They proposed
an adaptive scheme that models fatigue-based changes and
modifies the classification criteria to provide a stable perfor-
mance in long-term operations.
With respect to the analysis of myoelectric signals, sev-
eral different methods are used to detect hand and finger ges-
tures, improve diagnostic applications and build the founda-
tion for myoelectric gesture control. Chen et al. [CZZ∗07] used a linear Bayesian classifier; Naik et al. [NKA10] presented a method using Independent Component Analysis in combination with blind source separation; and Samadani and Kulic [SK14] used Hidden Markov Models to analyze myoelectric signals.
An early work concerning myoelectric gesture control
was presented by Wheeler [Whe03]. He used two neuro-
electronic interfaces for virtual device control. Both inter-
face configurations are based on sampled data which were
collected from the user’s forearm with an electromyogram.
In the first study, a sleeve with dry electrodes (fixed arrange-
ment of the electrodes) is utilized to emulate a virtual joy-
stick of a flight simulator with the directions up, down, left
and right. In the second study, wet electrodes are placed on
the participant’s forearm (free and variable arrangement of
the electrodes) to simulate a virtual keyboard with the keys
0 to 9 and Enter. The results illustrate the potential of myo-
electric gesture control using a non-invasive setup. However,
to the knowledge of the authors, myoelectric gesture control
to support human-computer interaction during surgical pro-
cedures or radiological interventions has not been described
so far.
3. Material and Methods
The focus of this work is the evaluation of touchless interac-
tion with radiological images and 3D planning models using
the Myo Gesture Control Armband as input device. There-
fore, we introduce a minimal gesture set for a medical image
viewer. Technical and clinical requirements for our approach
were determined by analyzing the workflow of neuroradio-
logical interventions.
3.1. Requirement Analysis
In previous work [HHB∗14], we analyzed video data from
more than 25 different neuroradiological procedures. First, we classified single interaction steps during each procedure,
such as scrolling through acquired digital subtraction an-
giography (DSA) images, rotation of 3D vascular models, or
zooming to analyze details in the images. Second, we partic-
ipated in various radiological interventions where a modern
angiography CT imaging system (Artis zeego, Siemens) was
utilized to support instrument guidance. As a result, we can
confirm the following disturbances in the clinical workflow:
•Delegation of tasks: Verbal comments or hand gestures
are used to delegate human-computer interaction tasks to
an assistant in the OR or in a non-sterile control room
(indirect interaction).
•Leaving the OR or operating table: Physicians have to
change their position to use the provided interaction de-
vices (joystick, buttons, and touch screens). In complex
cases, they have to leave the sterile OR to use a worksta-
tion in the control room to interact with the patient data.
•Leaning over the operating table: To interact with touch
screens, physicians have to lean over the operating table
and the patient.
Third, our requirement analysis covered the research of
literature related to gesture-based and touchless interaction.
Based on this information, we specified the seven functions listed in Table 1.
Based on discussions with our clinical partner, we decided
to provide only two degrees of freedom for the rotation of
3D models in order to reduce the complexity. In this work,
we decided to focus on the interaction tasks that we observed
most frequently during interventions. Further observed inter-
actions, such as changing window-level settings or distance
measuring, are also important but not considered here.
Table 1: Specified explicit 2D and 3D interaction functions based on our requirement analysis.

2D                          3D
Scrolling in z-direction    —
Panning in x-direction      Rotation around the x-axis
Panning in y-direction      Rotation around the y-axis
Zooming                     Zooming
3.2. Myo Armband and Gesture Set
The Myo Gesture Control Armband is worn on the user’s forearm and measures the electrical signals that arise from biochemical processes during muscle contractions. These contractions are caused by movements of the hand. The armband holds eight surface electromyographic sensors (Medical Grade Stainless Steel EMG sensors) that measure those signals. The armband supports the following five hand gestures (see Fig. 1):
•Double Tap: Tapping the thumb and middle finger together twice.
•Fist: Forming a fist with the hand.
•Spread Fingers: Opening the hand with splayed fingers.
•Wave In: Waving the hand toward the body (palmar flexion).
•Wave Out: Waving the hand away from the body (dorsiflexion).
For haptic feedback, the armband can emit vibrations of various lengths. Connection and data transmission are based on Bluetooth technology, which is certified for use in the OR and does not interfere with any other devices [WW04].
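As an illustration, the following minimal C++ sketch shows how pose events and vibration feedback can be accessed through the manufacturer's SDK; the application identifier is a placeholder, and the call names are recalled from SDK version 0.8.x and may differ in other versions:

#include <cstdint>
#include <exception>
#include <iostream>
#include <myo/myo.hpp>

// Receives pose (gesture) events from the armband and acknowledges them haptically.
class GestureListener : public myo::DeviceListener {
public:
    void onPose(myo::Myo* armband, uint64_t timestamp, myo::Pose pose) override {
        if (pose == myo::Pose::rest || pose == myo::Pose::unknown)
            return;                                    // ignore the idle and unknown states
        armband->vibrate(myo::Myo::vibrationShort);    // short vibration as haptic feedback
        std::cout << "Recognized gesture: " << pose.toString() << std::endl;
    }
};

int main() {
    try {
        myo::Hub hub("com.example.gesture-viewer");    // placeholder application identifier
        myo::Myo* armband = hub.waitForMyo(10000);     // wait up to 10 s for an armband
        if (!armband)
            return 1;
        GestureListener listener;
        hub.addListener(&listener);
        while (true)
            hub.run(1000 / 20);                        // process events at roughly 20 Hz
    } catch (const std::exception& e) {
        std::cerr << e.what() << std::endl;            // e.g., no Bluetooth adapter found
        return 1;
    }
}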
Due to the small number of gestures recognized by the device, we propose a minimal gesture set. We assign a gesture to more than one function rather than assigning a specific gesture to each tool or function. This results in a concept that allows the system to be expanded with new functions without the need to learn new gestures. Furthermore, the cognitive effort of memorizing each gesture and its corresponding function is minimal. To realize a minimal gesture set, we first reduced the seven specified explicit functions (see Table 1) to four basic functions. For that, we mapped the available gestures onto each function individually. Subsequently, we merged functions into simple, general interaction tasks wherever this seemed consistent. The result of this merging is four basic functions, a lock, a select, a parameterize, and an interaction function, which are in turn mapped onto the five available gestures and then used to control the software and to interact with the visualization.
The locking status of the medical image viewer is
switched using a Double Tap (Fig. 1a) gesture. If the sys-
tem is locked, no interaction is possible and the physician
Figure 1: Hand postures of the five gestures: (a) Double Tap (b) Fist (c) Spread Fingers (d) Wave In (e) Wave Out
can work without any disturbances. To switch between func-
tions or change a function parameter (e.g., slicing speed) the
gestures Fist (Fig. 1b) and Spread Fingers (Fig. 1c) are used
to activate the selection. Finally, the two opposing gestures
Wave In (Fig. 1d) and Wave Out (Fig. 1e) are used to select
and parameterize a function. In addition, these gestures are
used to control functions, e.g., incrementing or decrement-
ing the current slice position in the 2D image viewer.
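One possible realization of this lock, select, parameterize, and interaction logic is sketched below as a small state machine; the names (Gesture, ViewerState, applyGesture) and the exact mapping are illustrative assumptions and do not reflect the actual implementation of our viewer:

#include <cstdio>

// Illustrative state machine for the minimal gesture set.
enum class Gesture { DoubleTap, Fist, SpreadFingers, WaveIn, WaveOut };

struct ViewerState {
    bool locked    = true;   // Double Tap toggles the lock; nothing happens while locked
    bool selecting = false;  // Fist enters function selection, Spread Fingers confirms it
    int  function  = 0;      // index of the active function (e.g., scroll, pan, zoom, rotate)
    int  value     = 0;      // parameter of the active function (e.g., current slice)
};

void applyGesture(ViewerState& s, Gesture g) {
    if (g == Gesture::DoubleTap) { s.locked = !s.locked; return; }
    if (s.locked) return;                                       // ignore gestures while locked

    switch (g) {
    case Gesture::Fist:          s.selecting = true;  break;    // open function selection
    case Gesture::SpreadFingers: s.selecting = false; break;    // confirm, back to interaction
    case Gesture::WaveIn:
        if (s.selecting) s.function = (s.function + 3) % 4;     // previous function
        else             --s.value;                             // e.g., previous slice
        break;
    case Gesture::WaveOut:
        if (s.selecting) s.function = (s.function + 1) % 4;     // next function
        else             ++s.value;                             // e.g., next slice
        break;
    default: break;
    }
}

int main() {
    ViewerState state;
    // Unlock, select the next function, confirm, then step the parameter twice.
    for (Gesture g : {Gesture::DoubleTap, Gesture::Fist, Gesture::WaveOut,
                      Gesture::SpreadFingers, Gesture::WaveOut, Gesture::WaveOut}) {
        applyGesture(state, g);
    }
    std::printf("function=%d value=%d\n", state.function, state.value);
    return 0;
}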
3.3. Medical Image Viewer
We implemented a medical image viewer that serves primar-
ily as a tool to evaluate the interaction with the Myo arm-
band. The Qt application framework was used in version 5.4
to build the Graphical User Interface (GUI) and the Visual-
ization Toolkit (VTK) in version 6.1 to visualize the medi-
cal dataset. For the Myo armband, we utilized the manufac-
turer’s C++ SDK in version 0.81 and the firmware in version
1.1.755.
This viewer also offers the possibility to integrate differ-
ent devices for comparison studies between device-specific
interaction styles. To acquire quantitative measurements, a
data logger is implemented as well. The complete control
of the viewer is performed using the Myo armband. The
viewer has two viewports to display 2D and 3D images, as
shown in Figure 2. Furthermore, visual as well as haptic feedback was implemented to provide additional information about the selected function, its parameterization, and occurring events.
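As an illustration of the data logging used for the quantitative measurements, a simple logger can append timestamped gesture events to a delimited text file; the class name and record layout shown here are assumptions, not the format used in our viewer:

#include <chrono>
#include <fstream>
#include <string>

// Appends timestamped gesture events to a semicolon-separated log file.
class GestureLogger {
public:
    explicit GestureLogger(const std::string& path) : out_(path, std::ios::app) {
        out_ << "timestamp_ms;gesture;event\n";            // assumed column layout
    }
    void log(const std::string& gesture, const std::string& event) {
        using namespace std::chrono;
        const auto ms = duration_cast<milliseconds>(
            system_clock::now().time_since_epoch()).count();
        out_ << ms << ';' << gesture << ';' << event << '\n';
    }
private:
    std::ofstream out_;
};

// Possible usage:
//   GestureLogger logger("session.csv");
//   logger.log("WaveOut", "recognized");
//   logger.log("Fist", "function selection activated");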
3.4. Evaluation
We conducted two quantitative user studies and a clinical
test in the OR to evaluate the Myo armband, the proposed
minimal gesture set, and its clinical applicability.
Experimental setup: The two quantitative user studies
were performed under controlled lab conditions in an OR-
like setup that aims to simulate the conditions in an inter-
vention room (see Fig. 3). We displayed our medical im-
age viewer on a 24” touch screen monitor belonging to the
CAS-ONE IR navigation system (CAScination AG, Bern,
Switzerland). Furthermore, we placed an operating table with a medical phantom in front of the user to simulate the real distance between the monitor and the physician’s position in the OR. For our user studies, we used
a liver CT data set (84 slices) with a primary liver tu-
mor. The corresponding 3D planning models including seg-
mented liver vessels (portal vein and hepatic vein) and the
tumor were generated using the medical image processing
platform MeVisLab [RBH∗11].
Evaluation Criteria: Based on the requirements, we de-
fined criteria which we evaluated in our studies. The most
important clinical requirement is preserving the sterility
of the device and inherent hardware. Another aspect is
the training time and the time needed to interact with the
gesture-based interface to fulfill a given task. Furthermore,
the acceptance of the proposed concept by the physician as
the end user is important. Finally, the conducted user studies
investigate the robustness of the gesture recognition and the
associated impact on usability and applicability in the OR.
A functionality study was performed to evaluate the Myo armband as an interaction device with regard to accuracy and robustness. During the study, we ensured that the position of
Figure 2: Graphical user interface of our medical image
viewer with a viewport to explore radiological images (left),
and a second viewport to visualize 3D models (right). Left
and above the two viewports, interactive icons provide vi-
sual feedback about function, parameterization, locking sta-
tus and the currently recognized gesture.
Figure 3: Experimental setup showing the operating table,
a medical phantom, and the CAS-ONE IR navigation system
(CAScination AG, Bern, Switzerland). The subject wears the
armband on the right arm and interacts with the visualiza-
tion.
the armband was equal for all subjects by placing a marked
sensor module of the armband on the musculus palmaris
longus (lower surface of the forearm). The experiment consisted of the following two parts, which were performed for each arm individually (dominant arm first) to see whether there were any differences related to handedness:
1. Training: Each subject was introduced to the functionality of the armband by explaining each gesture. Afterwards, the subjects had unlimited time to familiarize themselves with the device, i.e., until each subject knew how to move the wrist and hand so that the system recognized the gesture.
2. Verification: Each subject had to explicitly perform a
given gesture ten times to verify accuracy and robustness.
This was repeated for all five gestures.
The quantitative measurements comprised the training time and the correctly and incorrectly recognized gestures. Overall, 2150 gestures were recorded and analyzed. Furthermore, we acquired physical data about the subjects’ arms to gather information about possible causes of unpleasant sensations or changes in the recognition rates due to a too tight or too loose fit of the armband. At the same time, we instructed the subjects
to use the think-aloud protocol [FKG93] to gather individual
and qualitative information about the Myo armband as input
device. After the test, a questionnaire asked about the wearing comfort of the armband, about the motion of each gesture (to identify problems in the early stages of our development), and whether there were any differences between the dominant and non-dominant arm.
The interaction study focused on the interaction with the
medical image viewer using the proposed concept of a min-
imal gesture set. Analogous to the first study, the second
study started with an introduction and a training. The han-
dling of the medical image viewer with the minimal gesture
set was part of this training phase. Therefore, we explained
the user interface including the visual and haptic feedback
system and the gesture control using the armband. Each sub-
ject received an unlimited amount of time to understand the
handling of the medical image viewer. The test supervisor
answered no questions after the training phase in order to
evaluate the developed feedback systems regarding problem
handling and interaction flow. In the test phase, each subject
had to perform the following four tasks:
1. Localizing the liver tumor in the 2D data set and deter-
mining start and end slice (9 to 38).
2. Selecting a specific slice, zooming the image to a prede-
fined value and positioning it in the viewer’s center (com-
plex task).
3. Rotating the 3D planning model to a given orientation.
4. Zooming in the 3D view to a predefined zoom value.
All experiments were recorded using a video camera in or-
der to log verbal comments of the participants. Quantitative
measurements included the time a subject needed to perform
each task. In addition, we asked the participants to fill in an
adapted ISONORM 9241/110 questionnaire [Prü97] in or-
der to evaluate our interaction approach regarding usability,
naturalness of the execution, weariness, memorability and
understanding of each gesture.
The clinical test focused on the evaluation of the armband
during two neuroradiological interventions. This pilot study
helped us to identify problems with the gesture recognition
in a real clinical setting and moreover to get feedback from
the physicians after using the Myo armband. Therefore, we
used the data logger to record the recognized gestures and
the time steps at which the gesture was recognized. During
each intervention we also recorded the single workflow steps
(including time stamps) to evaluate the recognized gesture
and the individual hand movement. This way, we could iden-
tify, if and under which conditions any of the gestures of the
set were accidentally performed or recognized.
The first intervention was a periradicular therapy and was
performed by a resident physician who wore the armband for
about 45 minutes during the preparation and intervention. In
the second intervention, an assistant medical director wore
the armband during an embolization of a cerebral arteriove-
nous malformation for about two hours.
4. Results
The results of the functionality study are shown in Figure 4.
20 subjects (average age = 27.2 years, 14 female and 6 male)
with different levels of experience in gesture control and
varying constitutions of their forearm (circumference and
hairiness) participated in this study. Two participants were
left-handed and 18 right-handed.
Nine of the subjects noticed differences related to handedness after the second run of this study, regarding an easier understanding of the hand movements (hand gestures),
Figure 4: Recognition rates for each hand gesture within our functionality study. The pie charts visualize correctly detected gestures and false detections (Double Tap = dark blue, Fist = light blue, Spread Fingers = orange, Wave In = grey, Wave Out = yellow).
and some users had difficulties using the armband on the
non-dominant arm. The Double Tap gesture had the lowest correct recognition rate (56.04%), which is why a double-lock system was applied to prevent unintentional interaction. This means that an interaction is only possible if the viewer is unlocked and a function is selected and parameterized. It should be noted that this gesture took the longest to learn in the training phase. Both Wave gestures achieved good recognition rates (71.23% and 86.40%). The Fist (78.84%) and Spread Fingers (71.76%) gestures also achieved similarly good recognition rates. It should be mentioned that Fist and Spread Fingers are mutually confused in about 11% of cases due to the contraction of neighboring muscles. According to our data, the recognition rate depends on the training time and can be improved by a longer practice period in which users familiarize themselves with the device.
The average training time to familiarize with each gesture was 111 s with a standard deviation of σ = 60 s for the dominant arm, and an average of 98 s with σ = 58 s for the
non-dominant arm. We assume that the differences occurred
because the hand movements were known after the first test.
The collected data about the subjects’ arms, including circumference (mean 25.75 cm, σ = 1.72 cm) and hairiness, did not influence the results of our experiments and therefore provide no additional value. For thinner arms, we improved the fit of the armband by applying clips to make it tighter and thus establish better skin contact. Comments collected from the questionnaires and
the think-aloud protocol included issues with the wearing comfort of the Myo armband, depending on how long the armband is worn, and related pain or unpleasant sensations in the arm. Minor problems were reported regarding the form of individual gestures and a resulting unpleasant hand posture. Six subjects experienced a constricting sensation, and two mentioned that the Fist and Spread Fingers gestures are exhausting because of the strong exertion required to execute them. The Wave Out gesture was easier to execute than the
Wave In gesture for most subjects in this study. Moreover, tenosynovitis can make the hand movements painful due to hyperextension of the wrist.
Our subject pool for the interaction study consisted of 10
medical domain experts, i.e., medical students and assistant
physicians (average age = 23.8 years, 6 female and 4 male).
None of these subjects participated in our first study. The training time for understanding each gesture was similar to that in our functionality study. However, the training time for the interaction differed from subject to subject. The mean training time was 4:51 minutes with σ = 1:59 min.
This is sufficient for our non-security-sensitive purposes.
The times for each interaction task of the study are shown
in Table 2. Subjects needed the most time (2:14 min) to ro-
tate the 3D model to the given orientation. This might be
explained by the fact that the rotation had to be performed
on two axes rather than via the usual trackball metaphor. The
interaction with the 2D slices, however, succeeded in most
cases without any problems.
Table 2: Measured times for each interaction task (time in
minutes) during the interaction study.
Task Mean Time Standard Deviation σ
Training 4:51 1:59
1 1:06 0:29
2 2:03 1:20
3 2:14 1:10
4 0:53 0:36
We used a Likert scale from 1 to 5 for the simplicity, naturalness, memorability, and understanding of each gesture, as well as the weariness of using a gesture. The results are pre-
sented in Table 3. The findings of the interaction study are
in line with the results of the functionality study. This can
Table 3: Questionnaire results for each gesture (interaction study). Rating is based on a 5-point Likert scale from 1 = strongly
disagree to 5 = strongly agree.
Double Tap Fist Spread Fingers Wave In Wave Out Mean ± SD
Simplicity 3.9 4.9 4.1 4.4 4.5 4.36 ± 0.385
Naturalness 3.9 4.8 4.2 4.2 4.4 4.30 ± 0.332
Memorability 4.7 4.9 4.7 4.6 4.6 4.70 ± 0.122
Understanding 3.8 4.4 4.0 4.2 4.2 4.12 ± 0.228
Weariness (not tiring) 3.8 4.6 4.1 4.0 4.1 4.12 ± 0.295
be seen, e.g., in the ratings for the Double Tap gesture, which had the worst recognition rate of the five gestures. This led to obstructions in the workflow while solving the four given tasks, because unintentionally executed gestures triggered unwanted behavior.
The results from the clinical test, particularly the analy-
sis of the logging data, shed light on the relation between
intra-operative workflow steps and recognized gestures (see
Table 4). A major problem is the Unknown gesture, which indicates a connection loss between the armband and the host computer. During a radiological intervention, sev-
eral physicians and assistants with radiation protection vests
can obscure the Bluetooth signal. Also, a too large distance
between the receiving host PC and the physician wearing
the armband can lead to a connection loss (Bluetooth range).
The Double Tap gesture was recognized most often (first intervention), because movements such as knocking a syringe or tapping devices like a touch screen produce muscle contractions similar to those of the gesture and are performed frequently during this kind of intervention. The two gestures Fist and Spread Fin-
gers do have a chance of mutual recognition. Both gestures
are recognized in similar procedure steps consistently, e.g.,
when inserting a catheter or using a syringe to administer a
contrast agent for vessel imaging (full tension of the fore-
arm). It can be assumed that those two gestures are recog-
nized most frequently during minimally invasive interventions
if no additional intervention system is used. The Wave ges-
tures are recognized when using the angiography system,
e.g., when positioning the table with a joystick or interacting
with the image data. In some cases, those gestures are also recognized when pointing at the monitors or gesticulating.
For a qualitative analysis, the operating physicians an-
swered questions about the wearing comfort and a possible future use of the Myo armband as an interaction device. De-
pending on the circumference of the forearm (tight fit), wear-
ing the Myo armband during a whole intervention could be
constricting, but did not affect any procedure step.
5. Discussion
The results of the functionality study showed that there are
only minor problems concerning the wearing comfort of the
armband. However, these problems were not confirmed by the feedback
Table 4: Log analysis of two neuroradiological interven-
tions. The table shows the quantity of recognized gestures
during the procedure.
Gesture Intervention 1 Intervention 2
Unknown 8 2
Double Tap 132 203
Fist 62 131
Spread Fingers 108 440
Wave In 28 152
Wave Out 26 89
Overall 364 1017
we received from the physicians after the interventions dur-
ing the clinical test. The physicians reported no problems with the Myo armband as a device, and no interference with the clinical workflow was observed. The haptic feedback was not actively noticed by the physicians during the operation; accordingly, an adaptation of the vibration feedback is necessary.
The interaction study showed that the proposed concept of a minimal gesture set is a viable alternative to individual gestures for each task. One benefit of this concept is its expandability with new functionality, as far as logically practicable, e.g., modification of the window level. The individual gestures of the used set were consis-
tently rated as a good match for the functions, easy to ex-
ecute and remember, and overall a good option to interact
with the visualization through simple hand gestures. Only
the Double Tap gesture was rated inferior because of the in-
sufficient recognition rate and the resulting disturbances in
the workflow. Although the Double Tap gesture performed
badly in the functionality study, the authors decided to use it
as unlock gesture, because the other available gestures were
already used as logically connected controls for the soft-
ware functions. Delineation and unambiguity of the gestures
should be preserved. Minor drawbacks were sometimes an
unpleasant hand posture and problems with the precise exe-
cution of a function. Our defined requirements were fulfilled,
except for the robustness of the system, which is one of the
most crucial aspects. Formal feedback from the physicians
after the clinical tests indicates that the proposed concept has
the potential to improve the workflow in an OR. If physicians
could navigate directly without delegating interaction tasks,
assistants could prepare upcoming procedure steps instead.
This might therefore shorten the intervention time and reduce intervention costs. Compared to in-
teraction devices with a fixed position and varying distance
to the user (such as a control panel placed on the operating
table), or camera-based systems with a limited field of view,
the proposed system enables a very flexible and mobile in-
teraction in the OR.
6. Conclusion and Future Work
Direct interaction with medical images in a sterile environ-
ment is a challenging task. We presented and evaluated a
concept for myoelectrically controlled touchless interaction
with medical image data. Our results prove its applicability
and may inspire future research.
Future improvements concerning the robustness of the
Myo Armband are necessary to ensure a trouble-free
workflow, without misinterpreted gestures or accidentally executed functions. For example, a connection loss is not
acceptable for security-sensitive purposes. However, robust-
ness and recognition rate may increase for future versions of
the device and SDK.
Concepts for multimodal user interfaces (or the use of the
remaining inertial measurement unit sensors in the armband)
should be considered to further improve this system. Further-
more, a transfer of the proposed gesture set to other input
devices would enable a systematic comparison of different
interaction devices.
The willingness of the physicians to use the armband dur-
ing radiological interventions showed its potential for a real
clinical trial. This would allow us to acquire more quantita-
tive data and to evaluate the benefit of using a myoelectrical
device for direct interaction compared to task delegation.
Acknowledgments
We would like to thank the participants of the user stud-
ies, and all involved clinicians for their assistance. This
work is funded by the German Federal Ministry of Education and Research (BMBF)
within the STIMULATE research campus (grant number
03FO16102A).
References
[BKLP04] BOWMAN D. A., KRUIJFF E., LAVIOLA J. J., POUPYREV I.: 3D User Interfaces: Theory and Practice. Addison Wesley Longman Publishing Co., Inc., Redwood City, CA, USA, 2004.
[CL09] CHOJECKI P., LEINER U.: Touchless gesture-interaction in the operating room. Journal of i-com (Zeitschrift für interaktive und kooperative Medien) 8, 1 (2009), 13–18.
[CZZ∗07] CHEN X., ZHANG X., ZHAO Z.-Y., YANG J.-H., LANTZ V., WANG K.-Q.: Multiple Hand Gesture Recognition Based on Surface EMG Signal. In Bioinformatics and Biomedical Engineering (2007), pp. 506–509.
[EHA∗12] EBERT L. C., HATCH G., AMPANOZI G., THALI M. J., ROSS S.: You can’t touch this: touch-free navigation through radiological images. Surg Innov 19, 3 (Sep 2012), 301–307.
[FKG93] FONTEYN M. E., KUIPERS B., GROBE S. J.: A description of think aloud method and protocol analysis. Qualitative Health Research 3, 4 (1993), 430–441.
[GDPM08] GALLO L., DE PIETRO G., MARRA I.: 3D Interaction with Volumetric Medical Data: Experiencing the Wiimote. In Proceedings of the 1st International Conference on Ambient Media and Systems (2008), Ambi-Sys ’08, pp. 14:1–14:6.
[GPC11] GALLO L., PLACITELLI A., CIAMPI M.: Controller-free exploration of medical image data: Experiencing the Kinect. In Computer-Based Medical Systems (CBMS), 2011 24th International Symposium on (Bristol, June 2011), pp. 1–6.
[HHB∗14] HÜBLER A., HANSEN C., BEUING O., SKALEJ M., PREIM B.: Workflow Analysis for Interventional Neuroradiology using Frequent Pattern Mining. In 13. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboterassistierte Chirurgie (CURAC) (München, 11.–13. September 2014), pp. 165–168.
[HKS∗08] HANSEN C., KÖHN A., SCHLICHTING S., WEILER F., KONRAD O., ZIDOWITZ S., PEITGEN H.-O.: Intraoperative Modification of Resection Plans for Liver Surgery. Int J Comput Assist Radiol Surg 2, 3-4 (2008), 291–297.
[HLUF14] HERNICZEK S. K., LASSO A., UNGI T., FICHTINGER G.: Feasibility of a touch-free user interface for ultrasound snapshot-guided nephrostomy. In SPIE Medical Imaging (2014), vol. 9036, p. 90362F.
[HPMD13] HÖTKER A. M., PITTON M. B., MILDENBERGER P., DÜBER C.: Speech and motion control for interventional radiology: requirements and feasibility. International Journal of Computer Assisted Radiology and Surgery 8, 6 (Nov 2013), 997–1002.
[MSR∗15] MEWES A., SAALFELD P., RIABIKIN O., SKALEJ M., HANSEN C.: A gesture-controlled projection display for CT-guided interventions. International Journal of Computer Assisted Radiology and Surgery (2015), 1–8.
[NKA10] NAIK G., KUMAR D., ARJUNAN S.: Pattern classification of Myo-Electrical signal during different Maximum Voluntary Contractions: A study using BSS techniques. Measurement Science Review (2010), 1–6.
[NMWL07] NOWATSCHIN S., MARKERT M., WEBER S., LÜTH T. C.: A system for analyzing intraoperative B-mode ultrasound scans of the liver. In Proc. IEEE Eng Med Biol Soc (2007), pp. 1346–1349.
[OGS∗14] O’HARA K., GONZALEZ G., SELLEN A., PENNEY G., VARNAVAS A., MENTIS H., CRIMINISI A., CORISH R., ROUNCEFIELD M., DASTUR N., CARRELL T.: Touchless Interaction in Surgery. Commun. ACM 57, 1 (Jan. 2014), 70–77.
[OH07] OSKOEI M. A., HU H.: Myoelectric control systems – A survey. Biomedical Signal Processing and Control 2 (2007), 275–294.
[OH09] OSKOEI M., HU H.: Adaptive myoelectric human-machine interface for video games. In Mechatronics and Automation, 2009. ICMA 2009. International Conference on (Aug 2009), pp. 1015–1020.
[PD15] PREIM B., DACHSELT R.: Interaktive Systeme: Band 2: User Interface Engineering, 3D-Interaktion, Natural User Interfaces. Springer-Verlag, 2015.
[Prü97] PRÜMPER J.: Der Benutzungsfragebogen ISONORM 9241/10: Ergebnisse zur Reliabilität und Validität. In Software-Ergonomie ’97: Usability Engineering: Integration von Mensch-Computer-Interaktion und Software-Entwicklung (Stuttgart, 1997), pp. 254–262.
[PSS∗09] PENNE J., SOUTSCHEK S., STÜRMER M., SCHALLER C., PLACHT S., KORNHUBER J., HORNEGGER J.: Touchless 3D gesture interaction for the operation room. Journal of i-com (Zeitschrift für interaktive und kooperative Medien) 8, 1 (2009), 19–23.
[RBH∗11] RITTER F., BOSKAMP T., HOMEYER A., LAUE H., SCHWIER M., LINK F., PEITGEN H. O.: Medical image analysis. IEEE Pulse 2, 6 (Nov 2011), 60–70.
[RHW∗09] RITTER F., HANSEN C., WILKENS K., KÖHN A., PEITGEN H.: User interfaces for direct interaction with 3D planning data in the operation room. Journal of i-com (Zeitschrift für interaktive und kooperative Medien) 8, 1 (2009), 24–31.
[RWGW06] RUTALA W. A., WHITE M. S., GERGEN M. F., WEBER D. J.: Bacterial Contamination of Keyboards: Efficacy and Functional Impact of Disinfectants. Infection Control & Hospital Epidemiology 27, 4 (2006), 372–377.
[SBN11] SCHWARZ L., BIGDELOU A., NAVAB N.: Learning Gestures for Customizable Human-Computer Interaction in the Operating Room. In Medical Image Computing and Computer-Assisted Intervention – MICCAI, vol. 6891 of Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2011, pp. 129–136.
[SK14] SAMADANI A.-A., KULIC D.: Hand gesture recognition based on surface electromyography. In Engineering in Medicine and Biology Society (EMBC), 36th Annual International Conference of the IEEE (2014), pp. 4196–4199.
[TCZ∗13] TAN J. H., CHAO C., ZAWAIDEH M., ROBERTS A. C., KINNEY T. B.: Informatics in Radiology: Developing a Touchless User Interface for Intraoperative Image Control during Interventional Radiology Procedures. RadioGraphics 33, 2 (2013), E61–E70. PMID: 23264282.
[Whe03] WHEELER K.: Device control using gestures sensed from EMG. In IEEE International Workshop on Soft Computing in Industrial Applications (June 2003), pp. 21–26.
[WW04] WALLIN M. K., WAJNTRAUB S.: Evaluation of Bluetooth as a replacement for cables in intensive care and surgery. Anesthesia & Analgesia 98, 3 (2004), 763–767.
[WW11] WIGDOR D., WIXON D.: Brave NUI World: Designing Natural User Interfaces for Touch and Gesture, 1. ed. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2011.
© The Eurographics Association 2015.