Creating Mixed Reality Manikins for Medical Education
Andrei Sherstyuk
University of Hawaii
andreis@hawaii.edu
Dale Vincent
University of Hawaii
dvincent@hawaii.edu
Benjamin Berg
University of Hawaii
bwberg@hawaii.edu
Abstract
In medical education, human patient simulators, or manikins, are a well-established method of teaching medical skills. The current state-of-the-art manikins are limited in their functions by a fixed number of built-in sensors and actuators that control the manikin's behaviors and responses. We describe how applying standard techniques from the fields of Virtual and Mixed Reality can significantly expand manikin functionality at relatively low cost. We describe a working prototype of a Mixed Reality Manikin, with technical implementation details and one complete scenario. We also discuss a number of extensions and applications of our technique.
1. Introduction
Medical manikins are realistic-looking, life-size replicas of a human body, equipped with a large number of electronic, pneumatic and mechanical devices, controlled from a host computer. Manikins can be programmed to simulate a variety of conditions. The level of visual realism and physiological fidelity varies between models, but in general, manikins can provide a range of convincingly accurate responses to medical interventions.
Most of a manikin's capabilities for interaction, including physical examination, are implemented in hardware. All interactions between a human and a manikin are mediated by dedicated mechanical or electronic devices installed in the manikin. For example, the SimMan line of products by Laerdal Medical Corporation [1] has touch-sensitive elements installed at both wrists. These sensors allow a person performing an examination to check the manikin's pulse by physically touching its wrists. The manikin "feels" that its pulse is being taken and responds by providing the pulse data to the host computer.
In addition to checking pulses, healthcare persons in training are expected to learn how to collect other data using physical examination techniques. Manual examination may be as simple as touching the patient at different locations and asking whether it hurts. Nevertheless, these techniques are not supported even in advanced manikins, because the user's hands are not part of the system. Figuratively speaking, manikins are not aware of their own bodies as tangible objects. To compensate for the absence of feedback from the manikins, it is common teaching practice for an instructor to observe student examination techniques from behind a one-way mirror. If a student is palpating a simulated appendicitis and presses on the tender location, the instructor can provide a cry of pain using a microphone.
The need for such continuous and close human facilitation during the course of the exercise has many disadvantages. First, it requires undivided attention from the instructor, which makes it difficult to supervise more than one student at a time. As a result, manikin-based training is very resource intensive. Secondly, visual monitoring, even with video recording equipment, may not always capture all student actions, which reduces the quality of debriefing and performance evaluations. Finally, examination techniques may be subtle and require precise positioning on the patient's body. Such details are also easy to miss in visual observation alone.
All of these issues can be solved by making manikins sense where and how they are touched, allowing them to respond autonomously and keep logs of these events. We suggest filling this gap in manikin functionality by employing methods known from the Mixed Reality (MR) and Augmented Reality (AR) fields. Briefly, to make a manikin touch-sensitive at selected locations, we reproduce real physical examination procedures in the 3D domain. The geometric surface models of the manikin and the user's hands are checked for collisions, which gives the locations of the points of contact. A gesture recognition process, running in real time, determines which examination procedure is currently being applied. With this information, the simulation software that controls the manikin's behavior can trigger an appropriate response function, such as a cry of pain in the appendicitis scenario.
The paper is organized as follows. In the next section, we review related work in the area of applying MR and AR methods to medical education. In section 3, we describe our MR manikins, including hardware and software components, with special attention to the implementation of virtual hands. One complete training scenario is described in section 4, followed by a discussion of possible extensions and applications of our method.
2. Related work
Medicine and medical education are fertile ground for VR techniques, for an important reason: the cost of human error is high. In the last few years, medical VR has experienced a rapid expansion, driven by advances in hardware (tracking, haptics, displays [2]), new concepts in user interface design, such as the Tangible User Interface (TUI) [3], and a palette of new interface metaphors and display techniques, including the MagicLens [4] and the Virtual Mirror [5]. These advances made it possible to visualize invisible, obscured or abstract objects and data, such as the flow of gases in a Mixed Reality anesthesia machine simulator [6]. Another example of visual augmentation is the system described by Bichlmeier et al., which allows surgeons literally to see into a living human patient, using a Head Mounted Display and CT scans of the patient [7]. Besides hand-held displays [4, 5, 6] and Head Mounted Displays [7], video projection of 3D content onto curved surfaces has been successfully employed, for example, in the Virtual Anatomical Model developed by Kondo, Kijima and Takahashi [8]. The authors used a human-shaped surface as a screen for displaying internal organs, dynamically adapting the view to the user's position and orientation and to the shape of the screen [9]. Although the projection is monoscopic, due to motion parallax the projected organs appear as if they lie inside the torso shape.
Visual overlays of medical imaging data, such as CT scans and ultrasound scans [10], onto human patients were among the first applications of Augmented Reality [11]. In addition to visual display, other input modalities were explored, including the sense of touch [12]. The SpiderWorld VR system for treating arachnophobia, described by Carlin, Hoffman and Weghorst [13], is one of the earliest examples of using tactile augmentation for medical purposes. In SpiderWorld, immersed VR patients interacted with a virtual spider, which was co-located and synchronized in movement with a replica of a palm-sized tarantula made of a furry material. During contact with a user's hand, the visual input received strong reinforcement from the tactile feedback.
One of the recent developments in mixing VR with tactile-based interfaces was presented by Lok and Kotranza [14]. Their system integrated a physical tangible model of a human breast with a life-size virtual patient, displayed on a screen. The virtual patient communicated with a student performing a breast examination for cancer, showing signs of distress and anxiety. This work mostly focused on improving student communication skills. The authors reported that many students readily accepted the tactile modality in their interactions with the Mixed Reality Humans, as they named their touch-enhanced simulator. Students naturally used gentle stroking and touching motions to calm the "patient".
Following the classic AR taxonomy by Milgram et al. [11], both SpiderWorld [13] and Mixed Reality Humans [14] belong to the 'mostly-virtual' side of the virtual-to-real continuum of environments. As discussed in the Introduction, our goal is to enrich and expand the hands-on experience that medical students have when working with human manikins. Thus, our work lies closer to the 'mostly-real' end of the range, taking advantage of the realistic appearance and rich tactile feedback provided by the manikins.
Traditional (i.e., non-VR) medical simulators, including human manikins, are also evolving rapidly. Manikins are becoming more sophisticated and beginning to take advantage of methods from the VR field. For example, the latest 3G model of the SimMan line of manikins [1] uses RFID tags for identifying syringes for the virtual administration of pharmaceuticals. This is done by attaching a labeled syringe to an IV port on one of its arms. This dedicated IV arm has an RFID antenna installed under the skin surface, which allows the manikin to detect the presence of the labeled drug and measure the administered amount by capturing the elapsed time while in contact. Such virtual medication with proximity-based tracking falls in the same category as our method. However, the localization precision of RFID-based tracking is not sufficient for our purposes. Thus, we chose a more precise magnetic tracking solution [16] for user activity recognition and classification.
Reliable recognition of user activity is another important component of a successful medical training system, as discussed by Navab et al. [15]. The pulse-taking and drug-administration actions described above are detected and processed by dedicated devices, such as pressure-sensitive elements and RFID antennas, installed in well-known locations. In order to recognize palpation, the Virtual Anatomical Model simulator [8] also makes use of pressure sensors implemented in hardware. Two sensors are used, one for simulated appendicitis and the other for cholecystitis, installed in the lower and upper abdominal areas, respectively.
Our main contribution is a novel approach to processing tactile interaction in software. This approach effectively removes limitations on the number of touch-sensitive locations and makes more medical scenarios available for simulation.
3. Mixed reality manikins
We already briefly described our method of making manikins touch-sensitive by echoing physical user-manikin interactions in the 3D domain. In this section, we present our system in full detail.
Figure 1. Anne Torso, a realistic life-size CPR trainer from Laerdal [1], augmented with a tangible user interface. System components: the manikin object, a Flock of Birds tracking system with two sensors Velcroed onto sports gloves, a laptop PC, and speakers. Below: the manikin in working position for physical examination, with a debug view of the 3D models on the laptop screen.
3.1. System configuration
A mixed reality manikin consists of three parts: a tangible interface object (the manikin itself), a motion tracking system, and a software module which processes user input and simulates the manikin's responses. These responses are pre-programmed according to the specifications of the training scenario.
A prototype of our system is shown in Figure 1. It includes an Anne Torso, a life-size female manikin for cardiopulmonary resuscitation (CPR) training by Laerdal [1], and a Flock of Birds tracking system from Ascension [16] with a tracking range of 4 feet in all directions. The software module is implemented in Flatland, an open source VR system [17], with added user gesture-recognition capabilities [18]. The system runs on a Linux laptop PC with a 1.86 GHz CPU and 1 GB of RAM.
The 3D models of the user hands and the manikin surface are shown for illustrative purposes only (Figure 1). During system use, students do not look at the screen – they work with the manikin directly, as shown in Figure 4.
3.2. Virtual hands
A virtual hand is one of the oldest metaphors in VR [19]. It remains by far the most popular technique for direct manipulation of objects in close proximity, which is exactly the case with human manikins. Virtual hands are the most important and delicate part of our system, because users expect them to be as sensitive and versatile as their real hands. High-end manikins have a very realistic-looking surface made of elastic, skin-like material. Some models even mimic the distribution of human soft and hard tissues under the skin. Thus, when users touch the manikin, the sensation is very rich and life-like. As a result, users involuntarily expect the manikin to reciprocate and "feel back" the hand-surface contact event, with the same level of tactile fidelity and spatial resolution.
A carefully implemented virtual hand control system can create and support this illusion by recognizing stereotypical physical examination gestures and making the manikin react promptly. Below, we discuss implementation issues that are specific to our application.
3.3. Spatial resolution requirements for hand-surface contact
During physical examination, the spatial resolution required for hand positioning varies between simulated conditions and the techniques used for their detection. In many cases, these requirements are surprisingly low.
For some cases, the area of hand localization may be as big as the whole abdomen (e.g., simulated peritonitis); for others, one quadrant of the abdomen (e.g., the left upper quadrant for splenic rupture, the right lower quadrant for appendicitis). These conditions are commonly diagnosed using palpation techniques, consisting of applying gentle pressure to the areas of interest. During palpation, the hands move in unison and are held in a crossed position. Palpation can be captured in VR by placing a motion sensor close to the center of each user hand and monitoring the mutual proximity of both hands and their collisions with the surface. In pilot tests, contact spheres the size of a tennis ball yielded reliable three-way collision detection (hand-hand-surface) for virtual palpation.
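For illustration, this three-way test can be sketched as follows, assuming the closest surface points are supplied by the collision module; the radius value and all names are illustrative and not taken from our Flatland code.

#include <math.h>
#include <stdbool.h>

typedef struct { float x, y, z; } vec3;

/* Tennis-ball sized contact sphere around each hand sensor (illustrative radius, cm). */
#define HAND_RADIUS_CM 6.5f

static float dist(vec3 a, vec3 b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return sqrtf(dx * dx + dy * dy + dz * dz);
}

/* Hands move in unison when their contact spheres intersect. */
static bool hands_in_unison(vec3 left, vec3 right)
{
    return dist(left, right) < 2.0f * HAND_RADIUS_CM;
}

/* A hand touches the surface when its sphere reaches the closest surface point. */
static bool hand_on_surface(vec3 hand, vec3 closest_surface_point)
{
    return dist(hand, closest_surface_point) < HAND_RADIUS_CM;
}

/* Three-way test (hand-hand-surface) used for virtual palpation. */
bool palpation_contact(vec3 left, vec3 right,
                       vec3 surf_near_left, vec3 surf_near_right)
{
    return hands_in_unison(left, right) &&
           hand_on_surface(left, surf_near_left) &&
           hand_on_surface(right, surf_near_right);
}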
Other examination techniques need higher precision in the localization of the contact area. For example, when applying percussion, the non-dominant hand is placed palm down on the designated area, while the other hand taps over that area. The tip of the middle finger on the moving hand must hit the center of the middle finger on the resting hand. Thus, in order to detect percussion in VR, the system must be able to locate not only the user's hands, but the fingers as well.
This may be achieved by direct tracking of the user's fingertips with miniature sensors, such as those used in the Ascension Mini Bird 800 system [16]; their sensors are the size of a fingernail and weigh 1.2 grams. The tracking range is 76 cm in any direction, which is sufficient for our purposes. Another solution is to track the hands as solid objects and obtain the fingertip locations with a CyberGlove [20] fit to a skeletal model of the hand. This configuration, however, may be very expensive. We experimented briefly with a budget virtual glove [21], which measures finger bending angles, and found it less useful than expected. Among other issues, we encountered problems with the stability of tracking, which was critical for reliable detection and processing of hand actions. Instead of direct finger tracking, a combined solution was chosen, described next.
3.4. Real hands, virtual fingers
In our system, we implemented a combined tracking solution. Each hand is tracked with a single motion sensor, covering an area of 4 feet in each direction from the center of the manikin. Magnetic tracking gives the general hand position and orientation. By using an anatomically correct skeletal model of a human hand, the system infers the locations of all virtual fingers needed to process the current hand activity. The virtual fingers are represented by small invisible cubic shapes, attached to strategically important joints of the hand skeleton, such as the end joints of each finger.
Thus, our hand tracking is implemented partially in hardware, using magnetic sensors attached with Velcro to the top of regular sports gloves (Figure 1), and then refined in software, using a hierarchical skeletal model of the human hand (Figure 2). The skeletal hand model is also used to update the visible skin of each virtual hand, primarily for debugging and monitoring purposes.
Figure 2. Virtual hands in flat and neutral poses. Left: skin surface. Right: skeleton and wireframe views. Small cubes represent virtual fingertips, attached to skeletal joints for precise localization of contact points. The circles show where the motion sensors are attached.
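For illustration, the placement of the virtual fingertips from a single sensor per hand can be sketched as follows; the fixed per-pose fingertip offsets stand in for the anatomically correct skeletal model, and all names are illustrative rather than taken from the actual implementation.

typedef struct { float x, y, z; } vec3;
typedef struct { float m[3][3]; } mat3;   /* sensor orientation as a rotation matrix */

#define NUM_FINGERS 5

/* Fingertip offsets in the hand's local frame for the current pose
   (flat or neutral); in the real system these come from the skeletal hand model. */
typedef struct {
    vec3 tip_offset[NUM_FINGERS];
} hand_pose;

/* Rotate a local offset into world space and add the sensor position. */
static vec3 transform(mat3 rot, vec3 pos, vec3 local)
{
    vec3 w;
    w.x = rot.m[0][0]*local.x + rot.m[0][1]*local.y + rot.m[0][2]*local.z + pos.x;
    w.y = rot.m[1][0]*local.x + rot.m[1][1]*local.y + rot.m[1][2]*local.z + pos.y;
    w.z = rot.m[2][0]*local.x + rot.m[2][1]*local.y + rot.m[2][2]*local.z + pos.z;
    return w;
}

/* Place the small invisible fingertip cubes for one hand, given the sensor
   position/orientation reported by the tracker and the pose selected by
   the activity recognizer. */
void update_virtual_fingers(vec3 sensor_pos, mat3 sensor_rot,
                            const hand_pose *pose, vec3 tips_out[NUM_FINGERS])
{
    for (int i = 0; i < NUM_FINGERS; i++)
        tips_out[i] = transform(sensor_rot, sensor_pos, pose->tip_offset[i]);
}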
3.5. Activity recognition and hand processing loop
The key element in our 'real-hand, virtual-finger' solution is real-time activity recognition. The system analyzes user hand location, orientation and velocity, as reported by the Flock of Birds, and checks for collisions with the 3D geometry model of the manikin. With this information, the system infers the current user activity and updates the hand pose accordingly. For example, when one of the hands is found to be resting on the manikin's abdomen (the hand collides with the surface and its velocity is close to zero), the corresponding virtual hand assumes a flat pose (Figure 2, top left). When the user's hand is moving freely, its virtual counterpart is set to a neutral pose (Figure 2, bottom left). Note the close match between the guessed shapes of the virtual hands (flat and neutral) and the actual poses assumed by the hands of a real user performing percussion, as seen in Figure 4.
Presently, the system recognizes the following examination procedures: percussion, shallow and deep palpation, pulse check, and the press-and-sudden-release gesture.
On every cycle of the main simulation loop, the system
goes through the following routine:
1. For each hand, check for collisions between its bounding sphere and the 3D model of the manikin; if no collisions are detected, set the hand pose to neutral and return.
2. Check the hand orientation and velocity (both relative and absolute); determine the intended action and update the hand pose accordingly; update the locations of all virtual fingers.
3. For each virtual finger involved in the current activity, check for collisions between the manikin surface model and the finger shape; if no collisions are detected, return.
4. Process the collisions and invoke the appropriate functions to simulate the manikin's response (a minimal sketch of this loop follows).
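A minimal, self-contained sketch of this per-frame routine is given below; the collision tests are reduced to boolean inputs, the speed threshold is illustrative, and all names are assumptions, so this is an outline of the logic rather than the actual Flatland code.

#include <stdbool.h>
#include <math.h>

typedef struct { float x, y, z; } vec3;

typedef enum { POSE_NEUTRAL, POSE_FLAT, POSE_PERCUSSING } hand_pose_id;

typedef struct {
    vec3         velocity;   /* finite-differenced between frames */
    hand_pose_id pose;
} hand_state;

/* Illustrative per-frame routine for one hand. */
void process_hand(hand_state *hand,
                  bool collides_with_manikin,            /* step 1 input  */
                  bool fingers_touch_surface,            /* step 3 input  */
                  void (*respond)(const hand_state *))   /* step 4 action */
{
    /* Step 1: no contact with the manikin's bounding geometry. */
    if (!collides_with_manikin) {
        hand->pose = POSE_NEUTRAL;
        return;
    }

    /* Step 2: velocity (and, in the full system, orientation) decides
       the intended action and the hand pose. */
    float speed = sqrtf(hand->velocity.x * hand->velocity.x +
                        hand->velocity.y * hand->velocity.y +
                        hand->velocity.z * hand->velocity.z);
    hand->pose = (speed < 1.0f) ? POSE_FLAT : POSE_PERCUSSING;
    /* ...virtual finger locations would be updated here (section 3.4). */

    /* Step 3: per-finger collision test against the manikin surface. */
    if (!fingers_touch_surface)
        return;

    /* Step 4: trigger the scripted manikin response. */
    respond(hand);
}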
In section 4, one particular case will be described in detail, including a code sample for the simulated abdominal pain case.
Figure 3. During calibration, the virtual hands are adjusted to accommodate the thickness of the user's palms (left) and the length of their fingers (right). The virtual hands are moved along specified directions until the virtual fingertips touch each other, to match the current user pose. Calibration also fixes the problem of unevenly attached motion sensors.
3.6. Hand calibration and alignment
Calibration is performed for each new user, after he or she puts on the gloves and straps the motion sensors onto them. During calibration, users are asked to put their hands in a 'praying' position and keep them in this pose for 10 seconds (Figure 3, left). During that time, the system measures the distance between the tips of the virtual middle fingers and translates the virtual hands along Y until these two points coincide. This step accommodates users with different palm thicknesses. During the next step (Figure 3, right), the virtual hands are translated along the Z direction, adjusting for finger length. Translations are performed for both hands, in the coordinate system of the corresponding motion sensor. The calibration process takes a few seconds and is fully automated. The ten-second iteration loop ensures that the system collects enough samples of the specific hand positions and computes a useful average value.
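The first calibration step reduces to averaging tracker samples and splitting the measured gap between the two hands; a minimal sketch, with all names and sampling details assumed:

#include <math.h>
#include <stddef.h>

typedef struct { float x, y, z; } vec3;

static float dist(vec3 a, vec3 b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return sqrtf(dx * dx + dy * dy + dz * dz);
}

/* Average the gap between the two virtual middle fingertips over the samples
   collected during the ten-second 'praying' pose, and return the per-hand
   correction: each virtual hand is shifted by half the gap along the Y axis
   of its own sensor frame so that the fingertips coincide. */
float palm_thickness_correction(const vec3 *left_tips,
                                const vec3 *right_tips,
                                size_t n_samples)
{
    float gap = 0.0f;
    for (size_t i = 0; i < n_samples; i++)
        gap += dist(left_tips[i], right_tips[i]);
    gap /= (float)n_samples;

    return 0.5f * gap;   /* Y offset applied to each hand, toward the other */
}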
Alignment is performed once per system installation, after the manikin is placed in a working position and the magnetic transmitter is installed in close proximity, as shown in Figure 1. The alignment procedure registers the virtual hands with the physical location of the manikin and the magnetic transmitter, which defines the origin of the tracked space. In order to align the hands with the manikin model, the user must touch a dedicated spot on the manikin surface with one of the motion sensors, making physical contact. The system captures the offset between the current location of the sensor and the virtual landmark. Then, both hands are translated by that offset, making contact in VR. If the debug view is open, users can see their hands 'snap' onto the dedicated location. For that purpose, we use the manikin's navel, an easy-to-find and centrally located feature. The system is now ready for use.
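The alignment step itself is a single offset computation; a minimal sketch, assuming the navel landmark position is stored with the 3D manikin model:

typedef struct { float x, y, z; } vec3;

/* Offset between the navel landmark on the 3D manikin model and the sensor
   touching the physical navel; afterwards both virtual hands are translated
   by this offset. */
vec3 compute_alignment_offset(vec3 virtual_navel, vec3 sensor_at_navel)
{
    vec3 off;
    off.x = virtual_navel.x - sensor_at_navel.x;
    off.y = virtual_navel.y - sensor_at_navel.y;
    off.z = virtual_navel.z - sensor_at_navel.z;
    return off;
}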
4. An example: simulated abdominal pain
A prototype of a mixed reality manikin was first presented to the public at the Medical Simulation Workshop, Asia Pacific Military Medicine Conference, held in Singapore in April 2008 [22]. The workshop audience consisted mostly of medical educators and health-care providers. The simulated patient was programmed to have abdominal pain, randomly assigned to different locations. In some cases, the simulated patient was pain free. Workshop attendees were invited to examine the patient using the percussion technique and decide whether the patient was non-tender (healthy) or tender (had abdominal pain). One of the sessions is shown in Figure 4. For that scenario, we used a very simple model of the manikin's abdominal surface, a union of nine spheres, shown in Figure 5. The tender zone was randomly assigned to one of the spheres. When a user tapped on a non-tender location, the system responded with a neutral 'knock' sound, indicating that the tapping event was detected but the location is not sore. When a painful zone was encountered, the program played back one of the prerecorded sounds of pain. At this moment, most participants stopped and declared the examination complete.
Figure 4. The augmented manikin was first presented at the Medical Simulation Workshop held at the Singapore Medical Training Institute, April 16th, 2008. A young cadet is performing percussion on the Anne Torso manikin, searching for sore spots.
Informal observations of the participants gave us very useful feedback:
The concept of mixed reality manikins was well received. Over thirty medical professionals participated in the exercise. Practically all of them accepted the 'magic' of performing live percussion on a plastic, inanimate object. Only one person lost interest during the exercise and quit; the remaining participants continued with the examination until they were able to decide on the patient's condition.
Calibration must be done for all users. The default placements of the virtual hands on the tracker may work adequately for the developers, but for most other users these settings need adjustment, as described in section 3.6.
Variability of motion. The percussion technique apparently allows for certain variations in hand movements. Some users tapped very fast and their motions failed to register with the system, which expected the hitting hand to stay within a certain speed range. This suggests that the gesture recognition system could benefit from a training phase, in which each new user gives a few sample strokes. These samples can be captured, measured and memorized by the system.
Figure 5. Debug view of the user hands and the touch-sensitive zones used in the simulated abdominal pain case. Top: hands are idle, no contact, no action detected. Bottom: tapping event detected, hitting the lower right zone in the abdominal area, highlighted and circled.
OBJECT *LH;      // left hand object (tracked)
OBJECT *RH;      // right hand object (tracked)
OBJECT *AO;      // abdomen object: union of zones
boolean tapping; // are hands tapping now?
OBJECT *zone;    // current zone being probed
boolean sore;    // is current zone painful to touch?

if (in_collision(LH, AO) && in_collision(RH, AO)) {
    // both hands are touching the abdomen, check movements
    tapping = detect_percussion_gesture(LH, RH);
    if (tapping) {
        zone = find_closest_object(AO, LH, RH);
        sore = is_sore(zone);
        // touching a sensitive zone, provide audio response
        if (sore) {
            play_painful_sound();
        } else {
            play_neutral_sound();
        }
        if (debug) {
            // provide visual responses
            if (sore) {
                high_light_object(zone, RED);
            } else {
                high_light_object(zone, GREEN);
            }
        }
    }
}
Figure 6. An outline of the hand processing code for the simulated abdominal pain case.
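The outline in Figure 6 relies on detect_percussion_gesture(), whose internals are not shown. Based on the description in section 3.3 (one hand resting on the surface while the other taps within an expected speed range), a hypothetical, self-contained version could look like the sketch below; the threshold values and the float-based signature are assumptions, not part of the Flatland implementation.

#include <stdbool.h>

/* Illustrative speed thresholds (cm/s); in practice these would be tuned,
   possibly per user, as suggested by the workshop observations. */
#define REST_SPEED_MAX   2.0f
#define TAP_SPEED_MIN   10.0f
#define TAP_SPEED_MAX   80.0f

/* Percussion: the non-dominant hand rests on the surface while the other
   hand taps downward within an expected speed range. */
bool detect_percussion_gesture(float resting_speed, float tapping_speed,
                               float tapping_vertical_velocity)
{
    bool resting_ok = resting_speed < REST_SPEED_MAX;
    bool tapping_ok = tapping_speed > TAP_SPEED_MIN &&
                      tapping_speed < TAP_SPEED_MAX &&
                      tapping_vertical_velocity < 0.0f;   /* moving downward */
    return resting_ok && tapping_ok;
}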
5. Improved manikin surface model
A simple union of contact spheres was quite sufficient for the simulated abdominal pain scenario. However, other medical conditions and examination techniques may require higher precision in the localization of hand-surface contact points, as discussed in section 3.3.
In order to simplify the process of manikin surface modeling, we developed a new technique, which effectively turned the motion tracking equipment into a surface scanner. The main idea behind our method is to approximate the working area of the manikin by a heightfield over a plane. In order to build the heightfield in 3D, a user moves one of the motion sensors over the area of interest, such as the manikin's torso. The system finds the closest vertex on the heightfield grid and snaps this vertex vertically to the current location of the motion sensor. The whole process happens in real time and is monitored visually.
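For illustration, the vertex-snapping step can be sketched as follows; the regular grid over the XZ plane with height stored in Y, the grid size, and all names are assumptions made for this sketch rather than details of the actual scanner code.

#include <math.h>

#define GRID_N 40            /* 40 x 40 points, matching Figure 7 */

typedef struct { float x, y, z; } vec3;

typedef struct {
    vec3  origin;                   /* corner of the scanned area */
    float spacing;                  /* distance between grid points (cm) */
    float height[GRID_N][GRID_N];   /* Y values over the XZ plane */
} heightfield;

/* Snap the grid vertex closest (in XZ) to the sensor position up or down
   to the sensor's current height. Called on every tracker update. */
void scan_update(heightfield *hf, vec3 sensor)
{
    int i = (int)roundf((sensor.x - hf->origin.x) / hf->spacing);
    int j = (int)roundf((sensor.z - hf->origin.z) / hf->spacing);

    if (i < 0 || i >= GRID_N || j < 0 || j >= GRID_N)
        return;                     /* sensor is outside the scanned area */

    hf->height[i][j] = sensor.y;
}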
Using this technique, a detailed surface model of the Anne Torso manikin was created in under 10 minutes, as shown in Figures 7 and 8. Besides its speed, our semi-automatic surface scanning technique has the following features: it is cost-effective, requires no special skills or equipment, and is easy to learn and use. In addition, models created with this method are already 'pre-calibrated' for use with the magnetic tracker, because all distortions and irregularities in the magnetic environment around the working area are imprinted into the vertex coordinates of the model. Full details on this technique are forthcoming [23].
We believe that high-quality surface models, in conjunction with high-resolution tracking, may result in a new family of applications, such as acupuncture training.
Figure 7. Scanning of Anne Torso with a magnetic sensor in a plastic enclosure: initial contour (top), intermediate shape (middle), final mesh (bottom). The 3D shapes are shown as-is, without retouching. Mesh size 40 x 40 cm, 40 x 40 points. Time taken: about 8 minutes.
Figure 8. Wireframe views of the 3D scans of Anne Torso. The
bottom-right shape is smoothed with a low-pass filter.
6. Applications and extensions
Mixing real and virtual elements in medical simulators equipped with a tracker yields a multitude of interesting extensions. Below, we list a few that immediately follow from our basic technique.
• Tool tracking. A stethoscope, a reflex hammer, a scalpel – all these tools may be tracked and processed for collisions with the manikin surface model in the same manner as user hands. Adding the use of medical tools to training scenarios will expand manikin capabilities even further.
• Instant programming of training scenarios. By touching various areas on the manikin and recording his or her own vocal annotations, an instructor can "teach" the manikin how to respond to different examination procedures, according to the simulated condition. These location-action-response mappings may be saved for later use.
• Non-contact interaction. Tracking of user hands and hand-held instruments also allows non-contact examination techniques to be processed. Examples include: clapping hands to check hearing; making the patient's eyes follow a moving object; simulating pupil contraction in response to a tracked penlight.
• Measuring movements for performance evaluation. Hand tracking provides a unique opportunity to measure user actions precisely. For example, in CPR training, the system can measure and log the location, rate and depth of applied chest compressions (a sketch follows this list).
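As an illustration of the CPR example, a minimal sketch of how compression depth and rate could be derived from the tracked hand height relative to the chest rest position; the thresholds, names and structure here are hypothetical, not part of our system.

#include <stdbool.h>

/* Illustrative threshold: a compression is counted when the hand dips
   below the chest rest height by more than this minimum depth (cm). */
#define MIN_DEPTH_CM 3.0f

typedef struct {        /* assumed to be zero-initialized by the caller */
    float chest_rest_y; /* hand height when resting on the chest */
    bool  pressing;     /* currently inside a compression? */
    int   count;
    float max_depth;    /* deepest compression so far */
    float first_time, last_time;
} cpr_log;

/* Feed one tracker sample (hand height and timestamp) per frame. */
void cpr_update(cpr_log *log, float hand_y, float t)
{
    float depth = log->chest_rest_y - hand_y;

    if (!log->pressing && depth > MIN_DEPTH_CM) {
        log->pressing = true;
        if (log->count == 0) log->first_time = t;
        log->count++;
        log->last_time = t;
    } else if (log->pressing && depth < MIN_DEPTH_CM * 0.5f) {
        log->pressing = false;   /* hysteresis on release */
    }
    if (depth > log->max_depth) log->max_depth = depth;
}

/* Compressions per minute over the logged interval. */
float cpr_rate(const cpr_log *log)
{
    float span = log->last_time - log->first_time;
    return (span > 0.0f) ? (log->count - 1) * 60.0f / span : 0.0f;
}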
7. Future work
The next logical step in developing mixed reality manikins is integration with the native host computer supplied by the manufacturer. Such integration may start with sharing log files that keep records of all user activities. Further steps may include access to the manikin's actuators. For example, the 3G SimMan manikin has an "aggressive patient" behavior, in which the manikin moves its arms violently, imitating hostile intentions towards the examiner. These extreme responses may be provoked by incorrect or clumsy user hand maneuvers, for example, inflicting too much pain on a tender area while performing palpation.
There are other interesting research areas related to multi-modal interactions with manikins. For example, the skin-like surface of a manikin is well suited for projecting additional video material: blood, wounds, scars, etc., both in real time and on a fast-forward time scale, in order to show how a wound will heal, depending on the depth of a virtual incision made with a tracked scalpel tool.
Additional viewing modalities are yet another topic of future work, including simulated X-ray vision by projecting bone structures into the area of interest, following the results described by Kondo and Kijima [9].
8. Conclusions
We presented a new technique for adding touch-sensitivity to manikin simulators, with the following features:
• Multi-purpose: a standard human manikin can be programmed to simulate a large number of medical conditions and examination procedures.
• Multi-user: adding more motion sensors will allow several users to share the same working space.
• Relatively inexpensive: a fraction of the cost of a
manikin.
• Portable: may be shared between manikins.
With our technique, a human manikin simulator becomes one big tangible interface object, with programmable sensitivity at arbitrary locations and flexible responses to physical examination.
References
[1] Laerdal Medical Corporation, http://www.laerdal.com
[2] R. Azuma, Y. Baillot, R. Behringer, S. Feiner, S. Julier and B. MacIntyre. Recent Advances in Augmented Reality. IEEE Computer Graphics and Applications, Vol. 21, No. 6, pp. 34–47, 2001.
[3] B. Ullmer and H. Ishii. Emerging frameworks for tangible user interfaces. IBM Systems Journal, Vol. 39, No. 3&4, pp. 915–931, 2000.
[4] J. Looser, M. Billinghurst and A. Cockburn. Through the looking glass: the use of lenses as an interface tool for Augmented Reality interfaces. Proceedings of the 2nd International Conference on Computer Graphics and Interactive Techniques in Australasia and South East Asia, June 15-18, Singapore, 2004.
[5] N. Navab, M. Feuerstein, C. Bichlmeier. Laparoscopic Virtual Mirror – New Interaction Paradigm for Monitor Based Augmented Reality. Proceedings of the IEEE VR Conference, Charlotte, North Carolina, USA, March 10-14, 2007.
[6] J. Quarles, S. Lampotang, I. Fischler, P. Fishwick, B. Lok. A Mixed Reality Approach for Merging Abstract and Concrete Knowledge. Proceedings of the IEEE VR Conference, Reno, Nevada, pp. 27–34, 2008.
[7] C. Bichlmeier, F. Wimmer, S.M. Heining, N. Navab. Contextual Anatomic Mimesis: Hybrid In-Situ Visualization Method for Improving Multi-Sensory Depth Perception in Medical Augmented Reality. Proceedings of the Sixth IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR '07), Nara, Japan, Nov. 13-16, 2007.
[8] D. Kondo, R. Kijima, Y. Takahashi. Dynamic Anatomical Model for Medical Education using Free Form Projection Display. Proceedings of the 13th International Conference on Virtual Systems and Multimedia, Brisbane, Australia, Sept. 23-26, 2007.
[9] D. Kondo and R. Kijima. Proposal of a Free Form Projection Display Using the Principle of Duality Rendering. Proceedings of the 9th International Conference on Virtual Systems and MultiMedia, pp. 346–352, 2002.
[10] M. Bajura, H. Fuchs, and R. Ohbuchi. Merging virtual objects with the real world: Seeing ultrasound imagery within the patient. Computer Graphics, 26(2), 1992.
[11] P. Milgram, H. Takemura, A. Utsumi, F. Kishino. Augmented Reality: A Class of Displays on the Reality-Virtuality Continuum. SPIE Vol. 2351, Telemanipulator and Telepresence Technologies, pp. 282–292, 1994.
[12] H. Hoffman. Physically touching virtual objects using tactile augmentation enhances the realism of virtual environments. Proceedings of the IEEE Virtual Reality Annual International Symposium, Atlanta, GA, pp. 59–63. IEEE Computer Society, Los Alamitos, California, 1998.
[13] A. Carlin, H. Hoffman, S. Weghorst. Virtual reality and tactile augmentation in the treatment of spider phobia: A case study. Behaviour Research and Therapy, 35, pp. 153–158, 1997.
[14] B. Lok and A. Kotranza. Virtual Human + Tangible Interface = Mixed Reality Human: An Initial Exploration with a Virtual Breast Exam Patient. Proceedings of the IEEE VR Conference, Reno, Nevada, pp. 99–106, 2008.
[15] N. Navab, J. Traub, T. Sielhorst, M. Feuerstein, C. Bichlmeier. Action- and Workflow-Driven Augmented Reality for Computer-Aided Medical Procedures. IEEE Computer Graphics and Applications, Vol. 27, No. 5, pp. 10–14, Sept/Oct 2007.
[16] Ascension Technology Corporation, http://www.ascension-tech.com
[17] Flatland Project, http://www.hpc.unm.edu/homunculus/
[18] A. Sherstyuk, D. Vincent, J. Hwa Lui, K. Connolly, K. Wang, S. Saiki, T. Caudell. Design and Development of a Pose-Based Command Language for Triage Training in Virtual Reality. Proceedings of the IEEE Symposium on 3D User Interfaces, March 10-14, 2007.
[19] R. Jacoby, M. Ferneau and J. Humphries. Gestural Interaction in a Virtual Environment. Stereoscopic Displays and Virtual Reality Systems, SPIE 2177, pp. 355–364, 1994.
[20] Immersion Corporation, http://www.immersion.com/
[21] P5 Data Glove, http://www.vrealities.com/P5.html
[22] Workshop on Medical Simulation Systems at the 18th Annual Asia Pacific Military Medicine Conference, Singapore, April 2008, http://www.apmmc.org/
[23] A. Sherstyuk, A. Treskunov, B. Berg. Fast Geometry Acquisition for Mixed Reality Applications Using Motion Tracking. Proceedings of the Seventh IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR '08), Cambridge, UK, Sept. 15-18, 2008.