Creating Mixed Reality Manikins for Medical Education
Andrei Sherstyuk
University of Hawaii
andreis@hawaii.edu
Dale Vincent
University of Hawaii
dvincent@hawaii.edu
Benjamin Berg
University of Hawaii
bwberg@hawaii.edu
Abstract
In medical education, human patient simulators, or manikins, are a well established method of teaching medical skills. Current state-of-the-art manikins are limited in their functions by a fixed number of in-built sensors and actuators that control the manikin behaviors and responses. We describe how applying standard techniques from the fields of Virtual and Mixed Reality can significantly expand manikin functionality, at relatively low cost. We describe a working prototype of a Mixed Reality Manikin, with technical implementation details and one complete scenario. Also, we discuss a number of extensions and applications of our technique.
1. Introduction
Medical manikins are realistic looking life-size replicas of a human body, equipped with a large number of electronic, pneumatic and mechanical devices, controlled from a host computer. Manikins can be programmed to simulate a variety of conditions. The level of visual realism and physiological fidelity varies between models, but in general, manikins can provide a range of convincingly accurate responses to medical interventions.
Most of a manikin's capabilities for interaction, including physical examination, are implemented in hardware. All interactions between a human and a manikin are mediated by dedicated mechanical or electronic devices installed in the manikin. For example, the SimMan line of products by Laerdal Medical Corporation [1] has touch-sensitive elements installed at both wrists. These sensors allow a person doing an examination to check the manikin's pulse by physically touching its wrists. The manikin "feels" that its pulse is being felt and responds by providing the pulse data to the host computer.
In addition to checking pulses, healthcare persons in training are expected to learn how to collect other data using physical examination techniques. Manual examination may be as simple as touching the patient at different locations and asking whether it hurts. Nevertheless, these techniques are not supported even in advanced manikins, because user hands are not part of the system. Figuratively speaking, manikins are not aware of their own bodies as tangible objects. To compensate for the absence of feedback from the manikins, it is a common teaching practice for an instructor to observe student examination techniques from behind a one-way mirror. If a student is palpating a simulated appendicitis and presses on the tender location, the instructor can provide a cry of pain using a microphone.
The need for such continuous and close human facilitation during the course of the exercise has many disadvantages. First, it requires undivided attention from the instructor, which makes it difficult to supervise more than one student at a time. As a result, manikin-based training is very resource intensive. Secondly, visual monitoring, even with video recording equipment, may not always capture all student actions, which reduces the quality of debriefing and performance evaluations. Finally, examination techniques may be subtle and require precise positioning on the patient's body. Such details are also easy to miss in visual observation alone.
All of these issues can be solved by making manikins sense where and how they are touched, allowing them to respond autonomously and keep logs of these events. We suggest filling this gap in manikin functionality by employing methods known from the Mixed Reality (MR) and Augmented Reality (AR) fields. Briefly, to make a manikin touch-sensitive at selected locations, we reproduce real physical examination procedures in the 3D domain. The geometry surface model of the manikin and user hands are checked for collisions, which gives the location of points of contact. A gesture recognition process, running in real time, determines which examination procedure is currently being applied. With this information, the simulation software that controls the manikin's behavior is able to trigger an appropriate response function, such as a cry of pain in the appendicitis scenario.
The paper is organized as follows. In the next section, we review related work in the area of applying MR and AR methods to medical education. In section 3, we describe our MR manikins, including hardware and software components, with special attention to the implementation of virtual hands. One complete training scenario is described in section 4, followed by a discussion of possible extensions and applications of our method.
2. Related work
Medicine and medical education are a fertile ground for VR techniques to grow, for an important reason: the cost of human error is high. In the last few years, medical VR experienced a rapid expansion, driven by advances in hardware (tracking, haptics, displays [2]), new concepts in user interface design, such as the Tangible User Interface (TUI) [3], and a palette of new interface metaphors and display techniques, including MagicLens [4] and Virtual Mirror [5]. These advances made it possible to visualize invisible, obscured or abstract objects and data, such as a flow of gases in a Mixed Reality anesthesia machine simulator [6]. Another example of visual augmentation is a system described by Bichlmeier et al. that allows surgeons to literally see into a living human patient, using a Head Mounted Display and CT scans of the patient [7]. Besides hand-held displays [4, 5, 6] and Head Mounted Displays [7], video projection of 3D content onto curved surfaces was successfully employed, for example, in the Virtual Anatomical Model developed by Kondo, Kijima and Takahashi [8]. The authors used a human-shaped surface as a screen for displaying internal organs, dynamically adapting the view for the user's position and orientation, and the shape of the screen [9]. Although the projection is monoscopic, due to motion parallax the projected organs appear as if they lie inside the torso shape.
Visual overlays of medical imaging data, such as CT scans and ultrasound scans [10], onto human patients were among the first applications of Augmented Reality [11]. In addition to visual display, other input modalities were explored, including the sense of touch [12]. The SpiderWorld VR system for treating arachnophobia, described by Carlin, Hoffman and Weghorst [13], exemplifies one of the earliest examples of using tactile augmentation for medical purposes. In SpiderWorld, immersed VR patients interacted with a virtual spider, which was co-located and synchronized in movements with a replica of a palm-sized tarantula, made of a furry material. During contact with a user hand, the visual input received strong reinforcement from the tactile feedback.
One of the recent developments in mixing VR with tactile-based interfaces was presented by Lok and Kotranza [14]. Their system integrated a physical tangible model of a human breast with a life-size virtual patient, displayed on a screen. The virtual patient communicated with a student performing a breast examination for cancer, showing signs of distress and anxiety. This work mostly focused on improving student communication skills. The authors reported that many students readily accepted the tactile modality in their interactions with the Mixed Reality Humans, as they named their touch-enhanced simulator. Students naturally used gentle stroking and touching motions to calm the "patient".
Following the classic AR taxonomy by Milgram et al. [11], both the SpiderWorld [13] and Mixed Reality Humans [14] belong to the 'mostly-virtual' side of the virtual-to-real continuum of environments. As discussed in the Introduction, our goal is to enrich and expand the hands-on experience that medical students have when working with human manikins. Thus, our work lies closer to the 'mostly-real' end of the range, taking advantage of the realistic appearance and rich tactile feedback provided by the manikins.
Traditional (i.e., non-VR) medical simulators, including human manikins, are also evolving rapidly. Manikins become more sophisticated and begin to take advantage of methods from the VR field. For example, the latest 3G model of the SimMan line of manikins [1] uses RFID tags for identifying syringes for the virtual administration of pharmaceuticals. This is done by attaching a labeled syringe to an IV-port on one of the manikin's arms. This dedicated IV-arm has an RFID antenna installed under the skin surface, which allows the manikin to detect the presence of the labeled drug and measure the administered amount, by capturing elapsed time while in contact. Such virtual medication with proximity-based tracking falls in the same category as our method. However, the localization precision of RFID-based tracking is not sufficient for our purposes. Thus, we chose a more precise magnetic tracking solution [16] for user activity recognition and classification.
Reliable recognition of user activity is another important component of a successful medical training system, as discussed by Navab et al. [15]. The pulse taking and drug administration actions described above are detected and processed by dedicated devices, such as pressure-sensitive elements and RFID antennas, installed in well-known locations. In order to recognize palpation, the Virtual Anatomical Model simulator [8] also makes use of pressure sensors implemented in hardware. Two sensors are used, one for simulated appendicitis and the other for cholecystitis, installed in the lower and upper abdominal areas, respectively.
Our main contribution is a novel approach to processing tactile interaction in software. This approach effectively removes limitations on the number of touch-sensitive locations, and makes more medical scenarios available for simulation.
3. Mixed reality manikins
We have already briefly described our method of making manikins touch-sensitive by echoing physical user-manikin interactions in the 3D domain. In this section, we present our system in full detail.
Figure 1. Anne Torso, a realistic life-size CPR trainer from Laerdal [1], augmented with a tangible user interface. System components: the manikin object, Flock of Birds tracking system with two sensors Velcroed onto sports gloves, laptop PC, speakers. Below: the manikin in working position for physical examination, with a debug view of the 3D models on the laptop screen.
3.1. System configuration

A mixed reality manikin consists of three parts: a tangible interface object (the manikin itself), a motion tracking system, and a software module which processes user input and simulates the manikin's responses. These responses are pre-programmed according to specifications of the training scenario.
A prototype of our system is shown in Figure 1. It includes an Anne Torso, a life-size female manikin for cardiopulmonary resuscitation (CPR) training by Laerdal [1], and a Flock of Birds system from Ascension [16] with a tracking range of 4 feet in all directions. The software module is implemented in Flatland, an open source VR system [17], with added user gesture-recognition capabilities [18]. The system runs on a Linux laptop PC with a 1.86 GHz CPU and 1 GB of RAM.
The 3D models of user hands and the manikin surface are shown for illustrative purposes only (Figure 1). During system use, students do not look at the screen; they work with the manikin directly, as shown in Figure 4.
3.2. Virtual hands
A virtual hand is one of the oldest metaphors in VR [19]. It remains by far the most popular technique for direct manipulation of objects in close proximity, which is exactly the case with human manikins. Virtual hands are the most important and delicate part of our system, because users expect them to be as sensitive and versatile as their real hands. High-end manikins have a very realistic looking surface made of elastic skin-like material. Some models even mimic the distribution of human soft and hard tissues under the skin. Thus, when users touch the manikin, the sensation is very rich and life-like. As a result, users involuntarily expect the manikin to reciprocate and "feel back" the hand-surface contact event, with the same level of tactile fidelity and spatial resolution.

A carefully implemented virtual hand control system can create and support this illusion, by recognizing stereotypical physical examination gestures and making the manikin react promptly. Below, we discuss implementation issues that are specific to our application.
3.3. Spatial resolution requirements for hand-surface contact
During physical examination, spatial resolution for hand positioning varies between simulated conditions and the techniques used for their detection. In many cases, these requirements are surprisingly low.

For some cases, the area of hand localization may be as big as the whole abdomen (e.g., simulated peritonitis); for others, one quadrant of the abdomen (e.g., left upper quadrant for splenic rupture, right lower quadrant for appendicitis). These conditions are commonly diagnosed using palpation techniques, consisting of applying gentle pressure on the areas of interest. During palpation, the hands move in unison and are held in a crossed position. Palpation can be captured in VR by placing a motion sensor close to the center of the user hand, and monitoring the mutual proximity of both hands and their collisions with the surface. In pilot tests, contact spheres the size of a tennis ball yielded reliable three-way collision detection (hand-hand-surface) for virtual palpation.
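To illustrate, the three-way test reduces to a few sphere-sphere overlap checks. The sketch below is a minimal C version, assuming the abdomen is modeled as a union of spheres, as in the scenario of section 4; the names and the tennis-ball radius constant are our illustrative assumptions, not the actual system code.

#include <math.h>
#include <stdbool.h>

typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 c; float r; } Sphere;

/* Euclidean distance between two points. */
static float dist(Vec3 a, Vec3 b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return sqrtf(dx * dx + dy * dy + dz * dz);
}

/* Two spheres overlap if their centers are closer than the radius sum. */
static bool overlap(Sphere a, Sphere b) {
    return dist(a.c, b.c) < a.r + b.r;
}

#define HAND_R 0.035f   /* contact sphere radius in meters, roughly a tennis ball */

/* Three-way test: the hand spheres touch each other, and each also
   touches the surface model (a union of zone spheres). */
bool detect_palpation_contact(Vec3 lh, Vec3 rh,
                              const Sphere *zones, int nzones) {
    Sphere L = { lh, HAND_R };
    Sphere R = { rh, HAND_R };
    if (!overlap(L, R)) return false;     /* hands not held together */
    bool l_touch = false, r_touch = false;
    for (int i = 0; i < nzones; ++i) {
        if (overlap(L, zones[i])) l_touch = true;
        if (overlap(R, zones[i])) r_touch = true;
    }
    return l_touch && r_touch;
}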
Other examination techniques need higher precision in localization of the contact area. For example, when applying percussion, a non-dominant hand is placed palm down on the designated area, while the other hand taps over that area. The tip of the middle finger on the moving hand must hit the center of the middle finger on the resting hand. Thus, in order to detect percussion in VR, the system must be able to locate not only the user hands, but fingers as well.
This may be achieved by direct tracking of user fingertips with miniature sensors, such as those used in the Ascension Mini Bird 800 system [16]; their sensors are the size of a fingernail and weigh 1.2 grams. The tracking range is 76 cm in any direction, which is sufficient for our purposes. Another solution is to track hands as solid objects and obtain the fingertip locations with a CyberGlove [20], fit to a skeletal model of the hand. This configuration, however, may be very expensive. We experimented briefly with a budget virtual glove [21], which measures finger bending angles, and found it less useful than expected. Among other issues, we encountered problems with tracking stability, which was critical for reliable detection and processing of hand actions. Instead of direct finger tracking, a combined solution was chosen, described next.
3.4. Real hands, virtual fingers
In our system, we implemented a combined tracking solution. Each hand is tracked with a single motion sensor, covering an area of 4 feet in each direction from the center of the manikin. Magnetic tracking gives the general hand position and orientation. By using an anatomically correct skeletal model of a human hand, the system infers the locations of all virtual fingers needed to process the current hand activity. The virtual fingers are represented by small invisible cubic shapes, attached to strategically important joints of the hand skeleton, such as the end joints of each finger.
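The inference step amounts to a change of coordinates: each virtual fingertip has a fixed offset in the hand's local frame (read off the posed skeletal model), and the sensor supplies the frame's position and orientation. A minimal C sketch follows; the offset values are hypothetical placeholders, not taken from the actual skeleton.

typedef struct { float x, y, z; } Vec3;
typedef struct { float m[3][3]; } Mat3;  /* sensor orientation (rotation matrix) */

/* world = sensor position + R * local offset */
static Vec3 to_world(Vec3 p, Mat3 R, Vec3 v) {
    Vec3 w;
    w.x = p.x + R.m[0][0] * v.x + R.m[0][1] * v.y + R.m[0][2] * v.z;
    w.y = p.y + R.m[1][0] * v.x + R.m[1][1] * v.y + R.m[1][2] * v.z;
    w.z = p.z + R.m[2][0] * v.x + R.m[2][1] * v.y + R.m[2][2] * v.z;
    return w;
}

/* Fingertip offsets for the flat pose, in meters, in the hand's local
   frame (illustrative values; the real system derives them from the
   posed skeletal hand model). */
static const Vec3 FLAT_TIP_OFFSETS[5] = {
    { 0.05f, 0.00f,  0.03f },   /* thumb  */
    { 0.09f, 0.00f,  0.02f },   /* index  */
    { 0.10f, 0.00f,  0.00f },   /* middle */
    { 0.09f, 0.00f, -0.02f },   /* ring   */
    { 0.07f, 0.00f, -0.04f },   /* little */
};

void update_virtual_fingertips(Vec3 sensor_pos, Mat3 sensor_rot, Vec3 tips[5]) {
    for (int i = 0; i < 5; ++i)
        tips[i] = to_world(sensor_pos, sensor_rot, FLAT_TIP_OFFSETS[i]);
}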
Thus, our hand tracking is implemented partially in hardware, using magnetic sensors attached with Velcro to the top of regular sports gloves (Figure 1), and then refined in software, using a hierarchical skeletal model of a human hand (Figure 2). The skeletal hand model is also used to update the visible skin of each virtual hand, primarily for debugging and monitoring purposes.
Figure 2. Virtual hands in flat and neutral poses. Left: skin surface. Right: skeleton and wireframe views. Small cubes represent virtual fingertips, attached to skeletal joints for precise localization of contact points. The circles show where motion sensors are attached.
3.5. Activity recognition and hand processing loop

The key element in our 'real-hand, virtual-finger' solution is real-time activity recognition. The system analyzes user hand location, orientation and velocity, as reported by the Flock of Birds, and checks for collisions with the 3D geometry model of the manikin. With this information, the system infers the current user activity and updates the hand pose accordingly. For example, when one of the hands is found to be resting on the manikin's abdomen (the hand collides with the surface and its velocity is close to zero), the corresponding virtual hand assumes a flat pose (Figure 2, top left). When the user hand is moving freely, its virtual counterpart is set to a neutral pose (Figure 2, bottom left). Note the close match between the guessed shapes of the virtual hands (flat and neutral) and the actual poses assumed by the hands of a real user performing percussion, as seen in Figure 4.

Presently, the system recognizes the following examination procedures: percussion, shallow and deep palpation, pulse check, and the press-and-sudden-release gesture.
On every cycle of the main simulation loop, the system goes through the following routine (a sketch in C is given after the list):

1. For each hand, check for collisions between its bounding sphere and the 3D model of the manikin; if no collisions are detected, set the hand pose to neutral and return.

2. Check the hand orientation and velocity (both relative and absolute); determine the intended action and update the hand pose accordingly; update the locations of all virtual fingers.

3. For each virtual finger involved in the current activity, check for collisions between the manikin surface model and the finger shape; if no collisions are detected, return.

4. Process collisions and evoke appropriate functions to simulate the manikin response.
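A compact version of this routine, with stubbed-out helpers, might look as follows. This is a sketch in the spirit of Figure 6; the types, helper names and activity set are our assumptions, not the actual Flatland code.

#include <stdbool.h>

typedef struct { float x, y, z; } Vec3;

typedef enum { POSE_NEUTRAL, POSE_FLAT } Pose;
typedef enum { ACT_NONE, ACT_PERCUSSION, ACT_PALPATION, ACT_PULSE_CHECK } Activity;

typedef struct {
    Vec3 pos, vel;       /* from the motion sensor */
    Pose pose;
    Vec3 fingertip[5];   /* virtual fingers, see section 3.4 */
} Hand;

/* Stubs standing in for the real collision and recognition code. */
static bool bounding_sphere_hits_manikin(const Hand *h) { (void)h; return false; }
static bool fingertip_hits_manikin(Vec3 p)              { (void)p; return false; }
static Activity classify_activity(const Hand *h)        { (void)h; return ACT_NONE; }
static void update_virtual_fingers(Hand *h)             { (void)h; }
static void trigger_response(Activity a, Vec3 contact)  { (void)a; (void)contact; }

/* One pass of steps 1-4 for a single hand. */
void process_hand(Hand *h) {
    /* Step 1: coarse bounding-sphere test. */
    if (!bounding_sphere_hits_manikin(h)) {
        h->pose = POSE_NEUTRAL;
        return;
    }
    /* Step 2: infer the action, update the pose and virtual fingers. */
    Activity act = classify_activity(h);
    h->pose = (act == ACT_NONE) ? POSE_NEUTRAL : POSE_FLAT;
    update_virtual_fingers(h);

    /* Steps 3-4: fine per-finger test, then the scripted response. */
    for (int i = 0; i < 5; ++i)
        if (fingertip_hits_manikin(h->fingertip[i]))
            trigger_response(act, h->fingertip[i]);
}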
In section 4, one particular case will be described in detail, including a code sample for the simulated abdominal pain.
Figure 3. During calibration, virtual hands are adjusted to accommodate the thickness of user palms (left) and the length of their fingers (right). The virtual hands are moved along specified directions, until virtual fingertips touch each other, to match the current user pose. Calibration also fixes the problem of unevenly attached motion sensors.
3.6. Hand calibration and alignment
Calibration is performed for each new user, after he or she puts on the gloves and straps the motion sensors onto them. During calibration, users are asked to put their hands in a 'praying' position and keep them in this pose for 10 seconds (Figure 3, left). During that time, the system measures the distance between the tips of the virtual middle fingers, and translates the virtual hands along the Y-axis until these two points coincide. This step accommodates users with different palm thickness. During the next step (Figure 3, right), the virtual hands are translated along the Z-axis, adjusting for finger length. Translations are performed for both hands, in the coordinate system of the corresponding motion sensor. The calibration process takes a few seconds and is fully automated. The ten-second iteration loop ensures that the system collects enough samples of the specific hand positions and computes a useful average value.
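In essence, each step averages a gap measured over the ten-second hold and converts it into a per-hand translation. A minimal sketch of the palm-thickness step, with assumed names (the paper does not publish this code):

#include <stddef.h>

/* Average gap between the two virtual middle fingertips, sampled
   while the user holds the 'praying' pose. Assumes n > 0. */
float average_gap(const float *samples, size_t n) {
    float sum = 0.0f;
    for (size_t i = 0; i < n; ++i) sum += samples[i];
    return sum / (float)n;
}

/* Split the correction between the hands: each virtual hand is
   translated toward the other along its sensor's Y-axis by half of
   the averaged gap, so the fingertips coincide. */
void apply_palm_offsets(float avg_gap, float *left_y, float *right_y) {
    *left_y  += 0.5f * avg_gap;
    *right_y -= 0.5f * avg_gap;
}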
Alignment is performed once per system installation, after the manikin is placed in a working position and the magnetic transmitter is installed in its close proximity, as shown in Figure 1. The alignment procedure registers the virtual hands with the physical location of the manikin and the magnetic transmitter, which defines the origin of the tracked space. In order to align the hands with the manikin model, the user must touch a dedicated spot on the manikin surface with one of the motion sensors, making a physical contact. The system captures the offset between the current location of the sensor and the virtual landmark. Then, both hands are translated by that offset, making contact in VR. If the debug view is open, users can see their hands 'snap' onto the dedicated location. For that purpose, we use the manikin's navel, an easy-to-find and centrally located feature. The system is now ready for use.
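The registration itself is a single vector offset. A sketch under the same assumptions as the earlier snippets (Vec3 and the landmark and sensor names are ours):

typedef struct { float x, y, z; } Vec3;

/* Offset from the sensor's reported position (while it physically
   touches the navel) to the virtual navel landmark. */
Vec3 alignment_offset(Vec3 virtual_navel, Vec3 sensor_at_navel) {
    Vec3 d = { virtual_navel.x - sensor_at_navel.x,
               virtual_navel.y - sensor_at_navel.y,
               virtual_navel.z - sensor_at_navel.z };
    return d;
}

/* Applied to every subsequent sensor reading, for both hands. */
Vec3 apply_offset(Vec3 p, Vec3 d) {
    Vec3 q = { p.x + d.x, p.y + d.y, p.z + d.z };
    return q;
}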
4. An example: simulated abdominal pain
A prototype of a mixed reality manik in was first pre-
sented to public at the Medical Simulation Workshop, Asia
Pacific Military Medicine Conference held in Singapore in
April 2008 [
22]. The audience of the workshop were mostly
medical educators an d health -care providers. The simulated
patient was programmed to have abdom inal pain, randomly
assigned to different locations. In some cases, the simulated
patient was pa in free. Workshop attendees were invited to
examine the patient, using percussion technique, and de-
cide whether the patient was non-tender (healthy) or tender
(had abdominal pain). One of the sessions is shown in Fig-
ure
4. For that scenario, we used a very simple model of the
manikin abdominal surface, a union of nine spheres, shown
in Figure 5. The tender zone was randomly assigned to one
of the spheres. When a user tapped on a non-tender loca-
tion, the system responded with a neutral ‘knock’ sound,
indicating that the tapping event was detected, but the loca-
Figure 4. T he augmented manikin was first presented at Medical
Simulation Workshop held in Singapore Medical Training Insti-
tute, April 16th, 2008. A young cadet is performing percussion of
Anne Torso manikin, searching for sore spots.
18th International Conference on Artificial Reality and Telexistence 2008
214
ICAT 2008
Dec. 1-3, Yokohama, Japan
ISSN: 1345-1278
tion is not sore. When a painful zone was encountered, the
program played back one of the prerecorded sounds of pain.
At this moment, most participants stopped and declared the
examination complete.
Informal observations of the participants gave us very useful feedback:

The concept of mixed reality manikins was well received. Over thirty medical professionals participated in the exercise. Practically all of them accepted the 'magic' of performing live percussion on a plastic inanimate object. Only one person lost interest during the exercise and quit; the remaining participants continued with the examination until they were able to decide on the patient's condition.

Calibration must be done for all users. The default placements of virtual hands on the tracker may work adequately for the developers, but for most other users, these settings need adjustment, as described in section 3.6.

Variability of motion. The percussion technique apparently allows for certain variations in hand movements. Some users tapped very fast and their motions failed to register with the system, which expected the hitting hand to stay within a certain speed range. This suggests that the gesture recognition system could benefit from a training phase, in which each new user gives a few sample strokes. These samples can be captured, measured and memorized by the system.
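Such a training phase could be as simple as fitting a per-user speed window from the sample strokes. The sketch below is entirely our assumption; the paper only suggests the idea.

#include <math.h>
#include <stdbool.h>

typedef struct { float lo, hi; } SpeedRange;

/* Fit an accepted speed window of mean +/- 2 standard deviations
   from a few recorded sample strokes. Assumes n > 0. */
SpeedRange learn_tap_speeds(const float *speeds, int n) {
    float mean = 0.0f, var = 0.0f;
    for (int i = 0; i < n; ++i) mean += speeds[i];
    mean /= (float)n;
    for (int i = 0; i < n; ++i) {
        float d = speeds[i] - mean;
        var += d * d;
    }
    float sd = sqrtf(var / (float)n);
    SpeedRange r = { mean - 2.0f * sd, mean + 2.0f * sd };
    return r;
}

/* Accept a tap only if its hitting-hand speed falls in the window. */
bool tap_speed_accepted(SpeedRange r, float speed) {
    return speed >= r.lo && speed <= r.hi;
}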
Figure 5. Debug view of user hands and touch-sensitive zones used in the simulated abdominal pain case. Top: hands are idle, no contact, no action detected. Bottom: tapping event detected, hitting the lower right zone in the abdominal area, highlighted and circled.
OBJECT *LH;       // left hand object (tracked)
OBJECT *RH;       // right hand object (tracked)
OBJECT *AO;       // abdomen object: union of zones
boolean tapping;  // are hands tapping now?
OBJECT *zone;     // current zone being probed
boolean sore;     // is the current zone painful to touch?

if (in_collision(LH, AO) && in_collision(RH, AO)) {
    // both hands are touching the abdomen, check movements
    tapping = detect_percussion_gesture(LH, RH);
    if (tapping) {
        zone = find_closest_object(AO, LH, RH);
        sore = is_sore(zone);
        // touching a sensitive zone, provide audio response
        if (sore) {
            play_painful_sound();
        } else {
            play_neutral_sound();
        }
        if (debug) {
            // provide visual response in the debug view
            if (sore) {
                high_light_object(zone, RED);
            } else {
                high_light_object(zone, GREEN);
            }
        }
    }
}
Figure 6. An outline of the hand processing code for the simulated abdominal pain case.
5. Improved manikin surface model
A simple union of contact spheres was quite sufficient for
simulated abdominal pain scenario. However, other medical
conditions and examination techniques may require higher
precision in localization o f han d-surface contact points, as
discussed in section
3.3.
In order to simplify the process of manikin surface modeling, we developed a new technique, which effectively turned the motion tracking equipment into a surface scanner. The main idea behind our method is to approximate the working area of the manikin by a heightfield over a plane. In order to build the heightfield in 3D, a user moves one of the motion sensors over the area of interest, such as the manikin's torso. The system finds the closest vertex on the heightfield grid and snaps this vertex vertically to the current location of the motion sensor. The whole process happens in real time and is monitored visually.
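The per-sample update is a nearest-vertex snap. Below is a minimal C sketch, using the 40 x 40 grid over a 40 x 40 cm area reported in the Figure 7 caption; the function name and the grid origin at the area corner are our assumptions.

#include <math.h>

#define GRID_N  40                        /* 40 x 40 vertices */
#define AREA_M  0.40f                     /* 40 x 40 cm working area */
#define CELL    (AREA_M / (GRID_N - 1))   /* grid spacing in meters */

/* Heights over the ground plane; starts as the initial flat contour. */
static float height[GRID_N][GRID_N];

/* Snap the grid vertex nearest to the sensor's (x, z) position
   vertically to the sensor's current height y. Coordinates are
   measured from one corner of the scanned area. */
void scan_sample(float x, float z, float y) {
    int i = (int)roundf(x / CELL);
    int j = (int)roundf(z / CELL);
    if (i < 0) i = 0; else if (i >= GRID_N) i = GRID_N - 1;
    if (j < 0) j = 0; else if (j >= GRID_N) j = GRID_N - 1;
    height[i][j] = y;
}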
Using this technique, a detailed surface model of the Anne Torso manikin was created in under 10 minutes, as shown in Figures 7 and 8. Besides its speed, our semi-automatic surface scanning technique has the following features: it is cost-effective, requires no special skills or equipment, and is easy to learn and use. In addition, models created with this method are already 'pre-calibrated' for use with the magnetic tracker, because all distortions and irregularities in the magnetic environment around the working area are imprinted into the vertex coordinates of the model. Full details on this technique are forthcoming [23].
We believe that high quality surface models, in conjunction with high resolution tracking, may result in a new family of applications, such as acupuncture training.
Figure 7. Scanning of the Anne Torso with a magnetic sensor in a plastic enclosure: initial contour (top), intermediate shape (middle), final mesh (bottom). The 3D shapes are shown as-is, without retouching. Mesh size 40 x 40 cm, 40 x 40 points. Time taken: about 8 minutes.
Figure 8. Wireframe views of the 3D scans of the Anne Torso. The bottom-right shape is smoothed with a low-pass filter.
6. Applications and extensions
Mixing real and virtual elements in medical simulators equipped with a tracker yields a multitude of interesting extensions. Below, we list a few that immediately follow from our basic technique.

Tool tracking. A stethoscope, a reflex hammer, a scalpel: all these tools may be tracked and processed for collisions with the manikin surface model in the same manner as user hands. Adding the use of medical tools to training scenarios will expand manikin capabilities even more.

Instant programming of training scenarios. By touching various areas on the manikin and recording his or her own vocal annotations, an instructor can "teach" the manikin how to respond to different examination procedures, according to the simulated condition. These location-action-response mappings may be saved for later use.

Non-contact interaction. Tracking of user hands and hand-held instruments makes it possible to process non-contact examination techniques as well. Examples include: clapping hands to check hearing; making the patient's eyes follow a moving object; simulating pupil contraction as a response to a tracked penlight.

Measuring movements for performance evaluation. Hand tracking provides a unique opportunity to measure user actions precisely. For example, in CPR training, the system can measure and log the location, rate and depth of applied chest compressions (a sketch of such logging follows below).
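As an illustration of the last item, compressions could be counted, and their depth logged, from the vertical component of the tracked hand position alone; the rate then follows from the timestamps of successive counts. The sketch below detects downward-to-upward reversals; the 2 cm noise threshold and all names are our assumptions.

#include <stdbool.h>
#include <stdio.h>

#define MIN_DEPTH 0.02f   /* ignore excursions shallower than 2 cm */

typedef struct {
    float top, bottom;    /* recent extremes of hand height, meters */
    bool  going_down;
    int   count;          /* compressions detected so far */
} CompressionLog;

/* Feed one tracker sample: hand height y at time t (seconds).
   Initialize the log with {0} and feed samples in temporal order. */
void compression_sample(CompressionLog *c, float y, float t) {
    if (c->going_down) {
        if (y < c->bottom) {
            c->bottom = y;                 /* still descending */
        } else {
            /* reversal at the bottom: one full downstroke */
            if (c->top - c->bottom > MIN_DEPTH) {
                c->count++;
                printf("t=%.2f s  compression #%d, depth %.3f m\n",
                       t, c->count, c->top - c->bottom);
            }
            c->going_down = false;
            c->top = y;
        }
    } else {
        if (y > c->top) c->top = y;        /* still rising */
        else { c->going_down = true; c->bottom = y; }
    }
}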
7. Future work
The next logical step in developing mixed reality manikins is integration with the native host computer supplied by the manufacturer. Such integration may start with sharing log files that keep records of all user activities. Further steps may include access to the manikin's actuators. For example, a 3G SimMan manikin has an "aggressive patient" behavior, in which the manikin moves its arms violently, imitating hostile intentions towards the examiner. These extreme responses may be provoked by incorrect or clumsy user hand maneuvers, for example, inflicting too much pain on a tender area while performing palpation.

There are other interesting research areas related to multi-modal interactions with manikins. For example, the skin-like surface of a manikin is well suited for projecting additional video material: blood, wounds, scars, etc., both in real time and on a fast-forward time scale, in order to show how a wound will heal, depending on the depth of a virtual incision made with a tracked scalpel tool.
Additional viewing modalities are yet another topic of future work, including simulated x-ray vision achieved by projecting bone structures into the area of interest, following the results described by Kondo and Kijima [9].
8. Conclusions
We presented a new technique for adding touch sensitivity to manikin simulators, with the following features:

Multi-purpose: a standard human manikin can be programmed to simulate a large number of medical conditions and examination procedures.

Multi-user: adding more motion sensors will allow several users to share the same working space.

Relatively inexpensive: a fraction of the cost of a manikin.

Portable: may be shared between manikins.

With our technique, a human manikin simulator becomes one big tangible interface object, with programmable sensitivity at arbitrary locations and flexible responses to physical examination.
References

[1] Laerdal Medical Corporation, http://www.laerdal.com

[2] R. Azuma, Y. Baillot, R. Behringer, S. Feiner, S. Julier and B. MacIntyre. Recent Advances in Augmented Reality. IEEE Computer Graphics and Applications, Vol. 21, No. 6, pp. 34–47, 2001.

[3] B. Ullmer and H. Ishii. Emerging frameworks for tangible user interfaces. IBM Systems Journal, Vol. 39, No. 3&4, pp. 915–931, 2000.

[4] J. Looser, M. Billinghurst and A. Cockburn. Through the looking glass: the use of lenses as an interface tool for Augmented Reality interfaces. Proceedings of the 2nd International Conference on Computer Graphics and Interactive Techniques in Australasia and SouthEast Asia, June 15-18, Singapore, 2004.

[5] N. Navab, M. Feuerstein, C. Bichlmeier. Laparoscopic Virtual Mirror: New Interaction Paradigm for Monitor Based Augmented Reality. Proceedings of the IEEE VR Conference, Charlotte, North Carolina, USA, March 10-14, 2007.

[6] J. Quarles, S. Lampotang, I. Fischler, P. Fishwick, B. Lok. A Mixed Reality Approach for Merging Abstract and Concrete Knowledge. Proceedings of the IEEE VR Conference, Reno, Nevada, pp. 27–34, 2008.

[7] C. Bichlmeier, F. Wimmer, S.M. Heining, N. Navab. Contextual Anatomic Mimesis: Hybrid In-Situ Visualization Method for Improving Multi-Sensory Depth Perception in Medical Augmented Reality. Proceedings of the Sixth IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR '07), Nara, Japan, Nov. 13-16, 2007.

[8] D. Kondo, R. Kijima, Y. Takahashi. Dynamic Anatomical Model for Medical Education using Free Form Projection Display. Proceedings of the 13th International Conference on Virtual Systems and Multimedia, Brisbane, Australia, Sept. 23-26, 2007.

[9] D. Kondo and R. Kijima. Proposal of a Free Form Projection Display Using the Principle of Duality Rendering. Proceedings of the 9th International Conference on Virtual Systems and MultiMedia, pp. 346–352, 2002.

[10] M. Bajura, H. Fuchs, and R. Ohbuchi. Merging virtual objects with the real world: Seeing ultrasound imagery within the patient. Computer Graphics, 26(2), 1992.

[11] P. Milgram, H. Takemura, A. Utsumi, F. Kishino. Augmented Reality: A Class of Displays on the Reality-Virtuality Continuum. SPIE Vol. 2351, Telemanipulator and Telepresence Technologies, pp. 282–292, 1994.

[12] H. Hoffman. Physically touching virtual objects using tactile augmentation enhances the realism of virtual environments. Proceedings of the IEEE Virtual Reality Annual International Symposium, Atlanta, GA, pp. 59–63. IEEE Computer Society, Los Alamitos, California, 1998.

[13] A. Carlin, H. Hoffman, S. Weghorst. Virtual reality and tactile augmentation in the treatment of spider phobia: A case study. Behaviour Research and Therapy, 35, pp. 153–158, 1997.

[14] B. Lok and A. Kotranza. Virtual Human + Tangible Interface = Mixed Reality Human: An Initial Exploration with a Virtual Breast Exam Patient. Proceedings of the IEEE VR Conference, Reno, Nevada, pp. 99–106, 2008.

[15] N. Navab, J. Traub, T. Sielhorst, M. Feuerstein, C. Bichlmeier. Action- and Workflow-Driven Augmented Reality for Computer-Aided Medical Procedures. IEEE Computer Graphics and Applications, Vol. 27, No. 5, pp. 10–14, Sept/Oct 2007.

[16] Ascension Technology Corporation, http://www.ascension-tech.com

[17] Flatland Project, http://www.hpc.unm.edu/homunculus/

[18] A. Sherstyuk, D. Vincent, J. Hwa Lui, K. Connolly, K. Wang, S. Saiki, T. Caudell. Design and Development of a Pose-Based Command Language for Triage Training in Virtual Reality. Proceedings of the IEEE Symposium on 3D User Interfaces, March 10-14, 2007.

[19] R. Jacoby, M. Ferneau and J. Humphries. Gestural Interaction in a Virtual Environment. Stereoscopic Displays and Virtual Reality Systems, SPIE 2177, pp. 355–364, 1994.

[20] Immersion Corporation, http://www.immersion.com/

[21] P5 Data Glove, http://www.vrealities.com/P5.html

[22] Workshop on Medical Simulation Systems at the 18th Annual Asia Pacific Military Medicine Conference, Singapore, April 2008, http://www.apmmc.org/

[23] A. Sherstyuk, A. Treskunov, B. Berg. Fast Geometry Acquisition for Mixed Reality Applications Using Motion Tracking. Proceedings of the Seventh IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR '08), Cambridge, UK, Sept. 15-18, 2008.
This paper reports on the preparation, creation and first applications of the Visible Korean Human Phantom -VKHP that pro-vides a realistic environment for the development and evaluation of med-ical augmented reality technology. We consider realistic development and evaluation environments as an essential premise for the progressive in-vestigation of high quality visualization and intra operative navigation systems in medical AR. This helps us to avoid targeting wrong objectives in an early stage, to detect real problems of the final user and environ-ment and to determine the potentials of AR technology. The true-scale VKHP was printed with the rapid prototyping technique "laser sinter" from the Visible Korean Human CT data set. This allows us to aug-ment the VKHP with real medical imaging data such as MRI and CT. Thanks to the VKHP, advanced AR visualization techniques have been developed to augment real CT data on the phantom. In addition, we used the phantom within the scope of a feasibility study investigating the integration of an AR system into the operating room.