Interactive Scenario Immersion:
Health Emergency Decision Training in JUST Project
Michal Ponder (*), Bruno Herbelin (*), Tom Molet (**),
Sebastien Schertenleib (*), Branislav Ulicny (*), George Papagiannakis (**),
Nadia Magnenat-Thalmann (**), Daniel Thalmann (*)
(*) Virtual Reality Lab (VRlab)
Swiss Federal Institute of Technology (EPFL)
(**) MIRALab, University of Geneva
Abstract
The paper discusses the main requirements, constraints and challenges involved in
the practical realization of an immersive VR situation training system that supports
the simulation of interactive scenarios of various types. Special attention is paid to
the demanding health emergency decision training domain. As an example, the immersive
JUST VR health emergency training system built in the frame of the EU IST JUST project
is presented in more detail.
Keywords
VR situation training system, health emergency, VR immersion, interactive scenario
1 Introduction
One of the effects of the rapid technological progress we are witnessing today is a
quickly rising need for more effective training methodologies covering a broad and
continuously growing spectrum of human activities. Using such novel methodologies,
individuals should be able to learn and then continuously maintain and upgrade their
knowledge and skills.
Common training methods today include formal classes, books, multimedia
applications, interactive simulations, on-the-job training, etc. The latter is
particularly effective for complex tasks where a great deal of independence is
granted to the task performer. Unfortunately, on-the-job training is also the most
expensive. Furthermore, it is limited by the need for experienced personnel able to
conduct such training, as well as by the frequent unavailability of the training
context itself.
For these reasons, many Virtual Reality (VR) applications have been implemented in
recent years to address the challenge of effective training under today's
requirements. Success stories as well as new attempts can be found in a very broad
range of domains such as space and aviation (e.g. flight and mission simulators),
medicine (e.g. surgery), industry (e.g. machine operation, mining), psychology
(e.g. public speech training) and emergency response (e.g. fire fighting, mining,
health emergency). An overview of many techniques used to implement VR applications
in different domains is given in [1][2].
The use of VR is particularly beneficial when the training domain is complex and
difficult to master and when the audio-visual features of the training environment
(possibly assisted by haptic feedback) are crucial to the overall training success.
This makes virtual environments the solution of choice for practicing and learning
in domains where the context of the training is not easily available, such as health
emergency situations.
Most of the VR-based medical trainers developed to date target the training of
specific manual procedures and skills involving advanced medical instruments (e.g.
laparoscopy, neurosurgery, orthopedics) [3][4][5][6]. They fall into the category of
task training. In contrast, health emergency trainers should allow trainees to
develop the psychological skills required to face the reality of an emergency site.
In effect, the training should lower psychological barriers by focusing on rapid
situation assessment and decision-making under highly stressful conditions. This
type of application falls into the situation training category.
In the following sections of this paper we discuss the main issues involved in the
practical realization of a VR situation training system targeting the health
emergency domain. We conclude with a concrete example: the health emergency training
system (JUST VR System) developed in the frame of the EU IST JUST project. JUST
addresses the training of civil, non-professional health emergency operators
(citizen volunteers).
2 Problem: Requirements, Constraints, Open Issues
2.1 Health Emergency Training: Requirements
In health emergency care, a clear distinction is made between a) the advanced
training aimed at highly qualified professionals and b) the training of para-medical
personnel (including citizen volunteers). The scope of the second type of training
includes the following three key aspects:
- knowledge,
- skills,
- facing reality.
While knowledge and skills can be efficiently acquired through the methods
currently at hand (e.g. courses, books, multimedia applications, use of manikins and
real devices), the third aspect of the training is still achieved through risky
on-the-job training, which leaves no place for mistakes.
In the course of on-the-job training, a trainee must face the reality of an
emergency situation in a usually difficult-to-predict context (e.g. day, night, busy
street, metro station, apartment, empty park, crying people). The main stress
factors are the uncertainty of a health emergency situation (limited cues,
confusion), the pressure of passing time and the consequences of mistakes (risk,
responsibility). In such conditions the trainee should be able to assess the
situation and make a sequence of decisions in order to perform the most suitable
medical rescue procedure. Through on-the-job experience a trainee is supposed to
lower psychological barriers, handle stress, apply knowledge, make decisions, take
responsibility and, as a result, build self-confidence. Unfortunately, the inherent
problems of on-the-job training are that it leaves no room for mistakes and that the
training context is not easily available (random, not controlled), which makes it
very risky, long and expensive.
2.2 VR Situation Training: Promising Option
Taking into account the recent rapid advancement of real-time computer graphics
and virtual character simulation technologies, VR is becoming a promising option for
addressing the problems of health emergency on-the-job training. In short, the main
objective of the required VR situation training system would be the immersion of the
trainee inside an interactive scenario. The system would need to provide the
following three functional elements:
(1) Immerse a trainee in a virtual environment:
Given the current state of the art of VR immersion hardware, and the main
requirement of VR situation training to support various combinations of medical
cases and contexts, this part requires special attention and will be treated in more
detail further on.
(2) Simulate a health emergency situation:
The provision of an interactive, multi-path, believable health emergency scenario is
a very challenging task. The system must seamlessly combine multiple hardware
devices and various heterogeneous simulation and interaction technologies that,
while present, must at a certain moment become invisible to the trainee in order to
generate the required sense of presence. Special attention must be paid to: 3D
graphical rendering, 3D surround sound rendering, realistic virtual human modeling
and animation, consistent and natural behaviors of the virtual humans and of the
elements of the virtual scene, intuitive and possibly invisible multi-modal
interaction and, finally, tools allowing for the authoring, execution and control of
the interactive scenarios.
(3) Let the trainee act and handle stress factors:
When using the system, the actions of a trainee should be focused on situation
assessment and decision making. The stress factors would be a direct result of the
uncertainty of the health emergency situation, the surrounding virtual environment
conditions, the pressure of passing time, and the fear of making decisions combined
with the need to take responsibility for the decisions made.
2.3 VR Immersive Technologies: Constraints
From the technological point of view, the VR challenge has not changed much since
Sutherland’s first experiments [7]: trying to achieve a sense of presence as an
interface metaphor to a virtual world. Despite the significant technological
improvements over the last 40 years, the intrinsic limitations of immersive hardware
and the complexity of man-machine communication are still there. As Gobbetti et al.
[8] conclude, it is common to have misconceptions about what VR can and cannot do,
and to react negatively when noticing that VR “is not that real”.
In the case of health emergency situation training, we face the same major classes
of problems. The available interaction devices do not allow for large scene
exploration and direct manipulation simultaneously; the two are in fact mutually
exclusive because of the cumbersomeness of kinaesthetic haptic feedback interfaces.
Finally, medical users expect a highly realistic simulation, particularly of human
body appearance and behaviour. This of course conflicts with the real-time
requirements of such interactive applications and thus leads to certain trade-offs.
2.4 Existing Approaches and Solutions: Some Examples
The existing systems targeting health emergency training vary substantially in the
scope of immersion, interaction and scenarios offered to the trainees. The majority
of them are console based, featuring a strong multimedia character with 3D graphics
and sound as the main immersion methods. The Virtual Medical Trainer/Trauma Patient
Simulator from the Research Triangle Institute is a console-based application
showing a 3D model of a virtual human with realistic visible injuries and internal
trauma that exhibits medical signs and symptoms with real-time physiological
behavior. LIMIT from the HIT Lab (HITL) at the University of Washington uses
Augmented Reality (AR) and focuses rather on medical data representation, providing
only rudimentary models of virtual humans. The Medical Readiness Trainer from the
University of Michigan focuses on combining state-of-the-art instrumented manikin
technology with some elements of VR.
Medical Emergency Training: MediSim and BioSimMER Projects
MediSim and BioSimMER from Sandia National Laboratories are designed for training
first responders in military situations (battlefield and bio-terrorism). In order to
constrain the VR simulation requirements and objectives, the range of MediSim cases
has been limited to medical assessment and stabilization tasks. In BioSimMER, the
trainees enter an airport contaminated with an unknown agent to evacuate the
victims.
Stansfield et al. [9] present the VR platform used for this application. The VR
interface is immersive: the medic/trainee wears a Head Mounted Display (HMD) and a
set of four position trackers. Navigation in the Virtual Environment (VE) is
provided by two locomotion modes interfaced with body movements (fly and walk
modes). Through specific interaction paradigms employing inverse kinematics and
interpretation of hand motion, the trainee can act directly on the patient and
perform actions such as head bandaging or J-tube insertion into the mouth. The
BioSimMER system is also capable of supporting multiple trainees, who can see each
other and act collaboratively using specific signs.
As for any VR application, critical choices had to be made in the design and
development of the system. The hardware configuration uses SGI Octane workstations
running multiple instances of the program across a LAN. The simulation software is
based on SGI Performer (currently being ported to PC OpenGL) and takes full
advantage of Ethernet multicasting.
The decision was made to adopt fully immersive devices (HMD and sensors). This is
also motivated by the particular protective dressing of emergency responders in
contaminated environments, which limits the field of view in much the same way as an
HMD. Natural navigation is used for close interactions, but additional locomotion
paradigms are introduced to move within the VE beyond the physical limitations
imposed by the range of the VR tracking devices. The trainee is represented as a
tracked full-body avatar; he sees through its eyes and controls its movements. To
achieve this with a limited set of sensors (on the head, the lower back, and each
hand), the avatar driver computes pelvis and leg positions with inverse kinematics
algorithms and interpolates the torso and head positions. In order to fit the
animation skeleton to the user’s body (i.e. resize its height), the system imposes a
calibration phase in four steps. The trainee can then act in the VE through his
avatar; proximity-based grasping and a “let go” gesture (twisting the wrist) are the
only required paradigms. Indeed, smart objects are designed to perform the actions
automatically: they pop into place (J-tube in the mouth), dock at the appropriate
place at the appropriate time (injection), initiate an action of the avatar
(unrolling the bandage), or change their state. This simplifies the interactions but
also introduces some artefacts. For example, when performing an injection, the
avatar’s hand remains bound in position to the auto-injector even when the user
moves his hand. The only solution to this would be a force-feedback interface, which
would introduce many other problems. Particular attention is paid to the
interactions with the virtual patients. Limited voice interaction is possible thanks
to commercial speech-recognition software: given a list of recognized words, the
trainee may check the responsiveness of the patient, who can be programmed to answer
to a specific keyword. The virtual humans are also specifically designed for the
medical purpose (e.g. a positive pupillary response test).
Across the 23 test subjects, the overall opinion of the system was satisfactory.
However, according to Stansfield et al., the comments collected from the
questionnaires point to a “need for improvements in input devices, naturalistic
handling and use of virtual objects”.
As there is no equivalent system in existence, it is not possible to compare and
determine whether or not this kind of training is effective. Nevertheless, this work
shows the complexity of such a system and presents interesting solutions to the
intrinsic difficulties of VR.
2.5 Open Issues
Immersion
The “direct immersion” paradigm, usually implemented through a Head Mounted
Display (HMD) and various combinations of human body tracking technologies,
introduces many limitations and artifacts: the mapping between the virtual and the
real, although direct, is usually very imprecise, leading to user confusion; the
field of view is very limited; and cyber-sickness effects may even lead to injuries.
Direct immersion, even when combined with today’s limited and cumbersome haptic
devices, does not solve the problem of how to interact with virtual objects.
Moreover, the use of any haptic device automatically limits the range of scenarios
that the system may support, which conflicts with the main requirement of situation
training. In effect, it seems that immersion must be based on certain metaphors and
on extensive use of the audio-visual sensory channels. Ideally, the technology
should be hidden as much as possible; the equipment should be light and the
interface intuitive.
Haptic Feedback
As already mentioned above, the use of haptic feedback devices in generic
situation training is difficult if not prohibitive due to the functional constraints
that such devices impose. In the case of health emergency training one could
consider an instrumented manikin as the most natural haptic feedback device.
Unfortunately, with HMD-based immersion this would require tracking of the manikin
posture, leading to virtual-real correspondence problems due to tracking
imprecision; and even then, procedures like mouth-to-mouth breathing would remain
impossible due to the HMD dimensions.
Interactive scenario
In the majority of VR applications, and particularly in skill training systems,
the user faces an interactive scene. Such a scene contains multiple virtual objects
that undergo interaction and respond with certain behaviors (interaction with VR
space). In the case of situation training we need to bring this idea to a higher
level: the interactive scenario. Ideally, an interactive scenario should tell a
timeline story of a pedagogical nature while leaving clear places for the trainee’s
interactions and decisions that affect the scenario direction (interaction with VR
space and time). At present, interactive storytelling is an intensive research topic
with promising results, but there are still no clear and widely accepted guidelines
on how to capture the problem and deliver a practical implementation.
Multi-modal interaction
Speech recognition and generation are evolving and promising research topics.
However, their limitations force the user to adopt a limited vocabulary that is not
sufficient in many cases. Moreover, users in an emergency situation may not have
time to think about the exact words or to repeat a command several times. Gestures
are also often used as a sign language in specific professional spheres; it would be
appropriate to investigate this possibility in addition to vocal interaction.
3 JUST VR System
3.1 Approach and Concept
The main goal of the JUST VR situation training system is the provision of a
quasi-random health emergency scenario exposing a trainee to an immersive VR
simulation that allows his decision-making capabilities to be tested under stressful
conditions as close to real as possible.
As the JUST project targets civilian applications, we had to face a very
challenging list of requirements (advanced VR system, easy to use, easy to maintain,
extendible) and constraints (limits of VR hardware, limited budget, affordable PC
platform). Finally, it was important that the high-end VR hardware components could
be replaced by low-end ones with the system remaining fully functional, so that
particular installations of the same system may range from the high VR end (e.g.
stereo vision, surround sound, natural navigation) to the low multimedia end (e.g.
monitor, stereo speakers, mouse navigation).
Fig. 1 JUST VR System concept: key elements (trainee, navigation inside the
virtual environment, natural voice communication, virtual assistant, virtual
victim, rear stereo projection, simulation PC, simulation control PC, simulation
supervisor).
Being conscious of the limitations of “direct immersion” and the technological
limits of haptic feedback, and taking into account the above list of requirements
and constraints, we have proposed a system concept captured schematically in Fig. 1.
In short, during the
interactive scenario the trainee faces a large rear-projection screen displaying
stereo images of the simulation. He is immersed in surround sound. In the course of
the simulation
he is able to navigate freely around the virtual environment in order to locate the
emergency site. The intuitive navigation paradigm is based on a single magnetic
tracker attached to the trainee’s head. At the emergency site the trainee interacts
with his Virtual Assistant (VA) using natural voice commands and hears the
respective replies and prompts from the VA. The role of the trainee is to assess the
situation and give commands to the VA, who executes them showing proper manual
skills. The simulation supervisor stays behind the scenes and has the following
tasks: directing the scenario path, “speech recognition” of the voice commands given
by the trainee and triggering the respective actions to be executed by the VA (the
supervisor can be regarded as the “ears of the VA”), and putting pressure on the
trainee by triggering prompts spoken by the VA or triggering “disturbing” virtual
events.
Audio-Visual Immersion
The visual immersion is realized through rear projection and active stereo. The
trainee wears lightweight (compared to an HMD) shutter glasses and faces a screen of
roughly 3.0x2.8 m. Thanks to the rear projection he is able to approach the screen
very closely without casting any shadow on it. Compared to an HMD this solution is
less cumbersome, and we avoid the problems of a very limited field of view,
cyber-sickness (related to the lack of peripheral vision) and claustrophobic effects
in some more sensitive trainees. The trainee faces a virtual wall that forms a
natural extension of reality into a different, virtual dimension. Finally, compared
to an HMD this solution offers high scalability, from a stereo-capable high-end CRT
projector through a cheap LCD projector (losing stereo) down to a simple monitor.
Concerning audio immersion, the trainee is surrounded by a 5.1 home cinema
speaker system that can likewise be scaled down to a simple stereo one.
Navigation in Virtual Environment
The navigation paradigm is based on a single magnetic tracker attached to the
trainee’s head (Fig. 2); an Ascension pcBIRD is used for this purpose. In order to
“walk around” the virtual environment the trainee steps into the navigation ring,
which triggers camera motion in the desired direction; the trainee can still move
inside the central area of the ring without causing any camera motion. In order to
“look around” the trainee looks at the margins of the projection screen, which
analogously triggers horizontal and vertical camera rotation.
The paradigm is lightweight and intuitive. Moreover, following the requirement of
modularity and scalability, the magnetic tracker can be replaced by a wireless
handheld mouse or a normal mouse when scaling down.
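To make the mapping concrete, the following minimal C++ sketch shows one way the
tracked head position could drive camera motion under the navigation ring and
sliding margin metaphors of Fig. 2. All names, parameters and thresholds are
illustrative assumptions, not the actual JUST implementation.

    #include <cmath>

    // Illustrative parameters; the actual JUST values are not published.
    const float kRingRadius   = 0.5f;  // dead-zone radius around the standing point [m]
    const float kMarginExtent = 0.3f;  // width of the "sliding margin" at screen edges [m]
    const float kMoveGain     = 1.5f;  // translation speed per meter beyond the ring
    const float kTurnGain     = 0.8f;  // rotation speed per meter into the margin

    struct CameraDelta { float forward, strafe, yaw, pitch; };

    // headX/headY: tracked head position relative to the neutral standing point [m].
    // gazeX/gazeY: point on the screen the head is oriented towards, relative to the
    // screen center [m]; halfScreenW/halfScreenH: half screen dimensions [m].
    CameraDelta updateCamera(float headX, float headY, float gazeX, float gazeY,
                             float halfScreenW, float halfScreenH)
    {
        CameraDelta d = {0.0f, 0.0f, 0.0f, 0.0f};

        // "Walking around": stepping out of the central dead zone into the ring
        // translates the camera in the direction of the step.
        float r = std::sqrt(headX * headX + headY * headY);
        if (r > kRingRadius) {
            float excess = r - kRingRadius;
            d.forward = kMoveGain * excess * (headY / r);
            d.strafe  = kMoveGain * excess * (headX / r);
        }

        // "Looking around": gazing at the screen margins rotates the camera
        // horizontally (yaw) or vertically (pitch).
        float mx = std::fabs(gazeX) - (halfScreenW - kMarginExtent);
        float my = std::fabs(gazeY) - (halfScreenH - kMarginExtent);
        if (mx > 0.0f) d.yaw   = kTurnGain * mx * (gazeX > 0.0f ? 1.0f : -1.0f);
        if (my > 0.0f) d.pitch = kTurnGain * my * (gazeY > 0.0f ? 1.0f : -1.0f);
        return d;
    }

The same camera deltas could equally be produced by a handheld mouse, which is what
makes the scale-down mentioned above straightforward.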
Multi-modal Interaction: Trainee, Virtual Assistant, Simulation Supervisor
While targeting situation training we had to address the inherent lack of a
generic haptic feedback solution that would support the whole spectrum of scenarios
and simulation contexts. We did so by clearly separating decision making from
decision execution through the introduction of the Virtual Assistant (VA) concept.
In the course of the simulation the trainee (decision maker) navigates, assesses the
health emergency situation and issues natural voice commands to the VA. He may use
any wording or language (which facilitates localization). The VA (decision executor)
waits for commands and executes them showing correct medical skills. It refuses to
execute erroneous decisions, prompting the trainee to retry. If the trainee is slow
to react or unable to act, the VA respectively prompts him to hurry up or
auto-executes the proper actions. The simulation supervisor (the “ears of the VA”)
directs and advances the interactive scenario execution by triggering the actions
that are available at the given scenario step and ordered by the trainee.
Fig. 2 JUST VR System navigation paradigm: “navigation ring” metaphor
allowing for “walking around” the virtual environment; “sliding margin”
metaphor allowing for “looking around” the virtual environment.
Interactive Scenario: Authoring and Execution
As already mentioned above, the definition and handling of an interactive scenario
(as opposed to an interactive scene) is still an open issue. Nevertheless, for the
purpose of the JUST VR System we elaborated our own semantics allowing the
representation of an interactive scenario as a tree-like graph of states. During
execution the scenario advances through subsequent scenario steps. At each scenario
step there is a set of available actions (defined by the medical procedures) and the
trainee must command his VA to execute one of them, e.g. “Do chest compressions”,
“Check responsiveness”, “Make mouth-to-mouth breathing”, etc.
Fig. 3 JUST VR System scenario execution: the main GUI used by the simulation
supervisor to direct the interactive scenario execution (the scenario tree expands
step by step in the course of execution; for the current scenario step the GUI shows
the available scenario actions and the always-available optional actions).
Only correct actions are available at each step. In this way the trainee is
protected from making bad decisions, which has a very important educational impact.
In the case of an attempt to make a bad decision, the supervisor may trigger one of
the optional actions that make the VA say e.g. “No, I do not think I should do
that”, “Are you sure?”, etc. The optional actions, which are always available, are
used by the simulation supervisor to prompt the trainee to hurry up and in general
to put some pressure on the trainee, e.g. “OK, and now?!”, “Hurry up, tell me what
to do!”. In case the trainee is not able to
make a correct decision, the supervisor advances the scenario automatically, which
is another stress-generating factor for the trainee, who starts to understand that
he cannot keep up with the scenario pace.
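To make these mechanics concrete, the following minimal C++ sketch shows one way a
tree-like graph of scenario steps with per-step correct actions, always-available
optional prompts and supervisor-driven auto-advance could be structured. All type
and function names are hypothetical; the actual JUST data model is not described at
this level of detail.

    #include <iostream>
    #include <string>
    #include <vector>

    // One node of the tree-like scenario graph: a scenario step together with the
    // correct actions available at that step. Bad decisions are simply not
    // represented, so the trainee cannot advance through them.
    struct ScenarioStep {
        std::string description;
        struct Action {
            std::string command;        // e.g. "Check responsiveness"
            ScenarioStep* nextStep;     // step reached after the VA executes it
        };
        std::vector<Action> actions;    // only correct actions are listed
    };

    // Optional actions are always available regardless of the current step; the
    // supervisor uses them to put pressure on the trainee without advancing.
    const std::vector<std::string> kOptionalActions = {
        "No, I do not think I should do that",
        "Hurry up, tell me what to do!"
    };

    // The supervisor triggers the action matching the trainee's voice command;
    // if the trainee stalls, the scenario is advanced automatically.
    ScenarioStep* advance(ScenarioStep* current, const std::string& command,
                          bool traineeStalled)
    {
        for (const ScenarioStep::Action& a : current->actions)
            if (a.command == command)
                return a.nextStep;                 // correct decision: advance
        if (traineeStalled && !current->actions.empty())
            return current->actions[0].nextStep;   // auto-advance under time pressure
        return current;                            // otherwise: stay, VA refuses
    }

    int main()
    {
        ScenarioStep assess{"Assess breathing", {}};
        ScenarioStep start{"Victim found unconscious",
                           {{"Check responsiveness", &assess}}};
        ScenarioStep* s = advance(&start, "Check responsiveness", false);
        std::cout << s->description << "\n";       // prints "Assess breathing"
        return 0;
    }

The essential design choice captured here is that the graph encodes only the correct
paths, so protection from bad decisions is structural rather than enforced by
runtime checks.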
3.2 Implementation Details
Hardware Platform
One of the challenges we had to face in the JUST project was the implementation of
an advanced immersive VR system on an affordable PC platform using off-the-shelf VR
devices. It was important as well that the system be modular enough to migrate
smoothly, if necessary, from the high VR end to the low multimedia end of the
spectrum. As a result, the system is implemented on two networked PCs running
Windows 2000. One PC, equipped with a high-end graphics board (Quadro4 900 XGL) and
an EAX-compatible 5.1 surround sound board (Sound Blaster Audigy), is responsible
for hosting and executing the simulation. The other, a standard PC, is responsible
for simulation control and for displaying the respective control GUIs operated by
the simulation supervisor.
Software Platform
The overall JUST VR System software architecture is based on the VHD++ real-time
development framework [10], a proprietary middleware solution of the MIRALab and
VRlab laboratories. VHD++ is a highly flexible and extendible real-time framework
supporting component-based development of interactive audio-visual simulation
applications in the VR/AR domain, with a particular focus on virtual character
simulation technologies. It uses C++ as the main implementation language, with the
possibility of scripting in Python. The most important features and functionalities
of the VHD++ framework are: a) support for real-time audio-visual applications,
b) an extendible spectrum of heterogeneous simulation technologies provided in the
form of plug-able vhdServices, c) extendibility and scalability, d) runtime
flexibility: XML-based system and content configuration, e) complexity curbing:
multiple design patterns improve clarity while abstraction levels simplify
implementation constructs, and f) large-scale code reuse: fundamental components
providing core system-level functionalities and ready-made components (vhdServices)
encapsulating heterogeneous simulation-level technologies.
The JUST system architecture is designed around a vhdRuntimeEngine, an active
software element powering a set of plug-able vhdServices that encapsulate the
following main simulation technologies: 3D stereoscopic rendering, 3D sound, VR
navigation, virtual human animation, skin deformation, face animation, speech,
behaviors, interactive scenario authoring and execution, etc.
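Since VHD++ is proprietary middleware and its API is not published here, the
following C++ fragment is only a hypothetical sketch of what a plug-able service
encapsulating a single simulation technology might look like; all interface and
class names are invented for illustration.

    #include <iostream>
    #include <string>

    // Hypothetical plug-in interface; the real VHD++ vhdService API is not public.
    class vhdService {
    public:
        virtual ~vhdService() {}
        virtual std::string name() const = 0;
        virtual void init(const std::string& xmlConfig) = 0;  // XML-driven setup
        virtual void update(double dtSeconds) = 0;            // called every frame
    };

    // Example service encapsulating a single simulation technology (VR navigation).
    class NavigationService : public vhdService {
    public:
        std::string name() const { return "NavigationService"; }
        void init(const std::string& xmlConfig) {
            // In the VHD++ spirit, the input device (magnetic tracker vs. mouse)
            // would be selected from the XML configuration, allowing the same
            // system to scale from the high VR end down to a desktop setup.
            std::cout << name() << " configured from " << xmlConfig << "\n";
        }
        void update(double dtSeconds) {
            (void)dtSeconds;  // poll the tracker or mouse, update the camera (omitted)
        }
    };

The pattern is what matters: the runtime engine owns the services, configures each
from XML and drives them with a per-frame update, so the simulation technologies
listed above remain independently plug-able and replaceable.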
4 First Results
At the moment the JUST VR System is entering its validation phase, focusing mainly
on usability, sense of presence and educational aspects. Currently the system offers
two multi-path interactive scenarios built around Basic Life Support medical
procedures and involving cardiac arrest. The first scenario takes place late at
night on an empty office floor; the second is situated in a city park at night.
Snapshots showing the system in use and the two respective scenarios are presented
in Fig. 4, Fig. 5 and Fig. 6.
It is important to note from the outset that, given the current state of the art
of motion capture technology (we used optical motion capture: Vicon), it is still
very difficult to generate the high-quality human motion data required to visualize
precise medical procedures (especially for the hands). Nevertheless, this seems to
be of secondary importance in situation training, where the focus is on situation
assessment and decision taking. It is also very difficult to synchronize such
high-quality motion data when played simultaneously on the Virtual Victim and the
Virtual Assistant. Finally, concerning the usability tests, the assessment may be
difficult because there are no similar systems that could be used for comparison.
After the very first tests, the overall system seems easy to use and maintain. The
navigation and interaction paradigms seem intuitive, and trainees get used to them
quickly. As the system supports scalability, it can be used equally well with
high-end VR immersion hardware and in a desktop configuration by simply changing
configuration files (which also helps development). The technology is hidden from
the trainee, and the trainee-VA natural speech communication, as expected, is
natural and robust.
Fig. 4 A trainee interacting with the system: hidden technology, natural voice
interaction with the virtual assistant, natural navigation in the virtual
environment.
Fig. 5 Creating a health emergency situation training context: late evening, empty
office, the trainee and his virtual assistant try to help the victim found on
the floor.
Fig. 6 Creating a health emergency situation training context: city at night, a
victim found in the public park, the trainee and his virtual assistant try to
assess the situation and handle the emergency.
5 Conclusions
In this paper we have presented the main requirements, constraints and challenges
involved in the practical realization of an immersive VR situation training system,
targeting in particular the health emergency training domain, as a promising
alternative to risky, long and costly on-the-job training.
We showed as well that, despite the revolutionary progress in real-time graphics
and virtual character simulation technologies, many open issues remain, related
mainly to the limitations of VR immersion, haptic feedback, interactive scenarios
(as opposed to interactive scenes) and multi-modal interaction paradigms.
Finally, using the example of the JUST VR health emergency training system, we
showed that some of these problems can be avoided by careful system
conceptualization, selection of proper hardware and its use in the context of proper
interaction paradigms.
Concerning the future development of the JUST VR system, we plan to: a) elaborate
more complex health emergency scenarios, b) investigate system usability in more
detail (sense of presence, educational aspects, stress factors), and c) try to use
the system in applications related to mental health and rehabilitation that require
immersion and an active role of the user as the main decision maker, e.g. public
speaking, phobias, etc.
The latter is based on our strong belief that the JUST VR system can address a
much broader scope of applications, reaching far beyond the initially targeted
health emergency training domain.
6 Acknowledgments
The presented JUST VR system research and development is supported by the Swiss
Federal Office for Education and Science in the frame of the EU IST JUST project.
7 References
[1] R.S. Kalawsky, The Science of Virtual Reality and Virtual Environments,
Addison-Wesley, 1993, ISBN 0-201-63171-7
[2] G. Burdea, P. Coiffet, Virtual Reality Technology, Wiley Interscience, John
Wiley & Sons, Inc., 1993, ISBN 2-866601-386-7
[3] M. Dinsmore, N. Langrana, G. Burdea, J. Ladeji, Virtual Reality Training
Simulation for Palpation of Subsurface Tumors, Proceedings of the Virtual
Reality Annual International Symposium, VRAIS’95, March 1-5, Albuquerque,
New Mexico, 54-60, 1995
[4] N. J. Avis, I. P. Logan, D. P. M. Wills, The use of Virtual Environments to Assist
the Teaching of Knee Arthroscopy Procedures, Royal Society conference on VR
in Society, Engineering and Science, Chapman-Hall, London, July 1995
[5] S. L. Delp, J. P. Loan, M. G. Hoy, F. E. Zajac, E. L. Topp, and J. M. Rosen, An
Interactive Graphics-Based Model of the Lower Extremity to Study Orthopaedics
Surgical Procedures, IEEE Transactions on Biomedical Engineering, vol 37, no 8,
August, 1990
[6] H. Delingette, S. Cotin, and N. Ayache, A Hybrid Elastic Model Allowing Real-
Time Cutting, Deformations, and Force-Feedback for Surgery Training and
Simulation, Proceedings of Computer Animation’99, Geneva, 70-81, 1999
[7] I. E. Sutherland, The Ultimate Display, Proceedings of the IFIP Congress, New
York City, NY, May 1965, vol. 2, pp. 506-508
[8] E. Gobbetti, R. Scateni, Virtual Reality: Past, Present, and Future, in
G. Riva, B. K. Wiederhold, E. Molinari (eds.), Virtual Environments in Clinical
Psychology and Neuroscience: Methods and Techniques in Advanced Patient-Therapist
Interaction, pp. 3-20, IOS, Amsterdam, The Netherlands, November 1998
[9] S. Stansfield, D. Shawver, A. Sobel, M. Prasad, L. Tapia, Design and
Implementation of a Virtual Reality System and Its Application to Training Medical
First Responders, Presence: Teleoperators and Virtual Environments, vol 9(6),
pp. 524-556, MIT Press, December 2000
[10] M. Ponder, G. Papagiannakis, T. Molet, N. Magnenat-Thalmann, D. Thalmann,
VHD++ Framework: Extendible Game Engine with Reusable Components for VR/AR R&D
featuring Advanced Virtual Character Simulation Technologies, submitted to IEEE VR
2003