MSc (Engg) in Computer Graphics and Virtual Reality
Virtual Surgery on Geometric Model of Real Human Organ Data
Brian Gee Chacko 1, Harshal Sawant 2
1-Student, M.Sc. (Engg.) MSRSAS, 2-Assistant Professor & Programme Manager - MSRSAS.
Computer Graphics and Virtual Reality Centre, M. S. Ramaiah School of Advanced Studies, Bangalore.
Traditional surgery methods involve collection of patient data, study of the data and provision of inferences to a surgeon to perform a specific procedure if required. General issues while conducting surgery are a limited degree of freedom and side effects such as infection, haemorrhage etc. Virtual Reality technology has aided these methods by incorporating techniques such as 3D model visualisation, training in synthetic environments and pre-operative planning, thereby assisting medical personnel in conducting a real surgery.
Existing Virtual Reality (VR) systems are implemented on two fronts: visualisation, where real datasets are analysed in 2D/3D, and surgery simulation, where pre-modelled datasets are used for simulating surgery scenarios. The literature documents that only a few attempts have been made to use real datasets for both visualisation and simulation. Hence, there exists a need to bridge this gap. The paper presents the development of a surgery application that processes any real dataset in its native format into a geometric model, deploying realistic surgery scenarios.
Computed Tomography (CT) images in Digital Imaging and Communications in Medicine (DICOM) format, generated by the OsiriX Image Navigation Software, are processed into a geometric model containing contour surfaces. The model is deployed in the visualisation and surgery procedure scenes. The scenes are developed such that the user can perform 3D visualisation of the input dataset, navigate in the Virtual Environment and also conduct an incision procedure for user training and study. The implementation also involves a scalpel model used to conduct the procedure and textured environments applied for realism. A menu and text based Graphical User Interface (GUI) and VR device interfacing are implemented in the system for user interaction.
The system proves the concept of virtual surgery. Enhancements can be made in future to develop a full-fledged system that would benefit surgeons in pre-operative planning and in performing mock surgery on real data. Medical students can also acquire surgical experience by exercising surgery procedures.
Key Words: Virtual Surgery, Virtual Reality, Surgery, Surgery Procedure, Incision Simulation
1. INTRODUCTION
Several diseases are diagnosed by medical professionals, resulting in the treatment of affected areas through careful observation, procedure affirmation and procedure implementation using the quality instruments needed for the task. Operations are considered a success or failure after the systematic procedures followed by surgeons, performed either as scheduled procedures or as emergency procedures depending on the need.
A usual scenario would be a patient being brought
for diagnosis with complaints of pain in specific areas.
The surgeon would then schedule his or her task
depending on the seriousness of the pain after
preliminary diagnostic procedures have been conducted.
Careful observation and validation of the patient data
acquired by scanning the area under investigation is
done. Surgeons would then have to operate on the
patient with the given information using surgical
instruments by following standard surgical procedures.
Common difficulties during operations are obstruction by other organ parts while conducting the surgery, and a surgeon's limited view and degree of freedom in accessing tissue structures.
Moreover, an operation performed at its best requires a considerable amount of practice, so that the surgeon can conduct it without hesitation. Medical students are most affected, since they need to gain experience for the first time, while young surgeons need to develop and practise their skills. Practitioners are trained on animal cadavers or mannequins to gain experience before performing an actual surgery. For surgeons, visualising on light boxes requires placement or removal of scanned images for study and imagination of the actual anatomy. Animal cadavers used by medical students exhibit a different anatomy, and simulation on mannequins is expensive. Fig. 1 shows traditional practices used in surgery. In Fig. 1 (a), professionals are studying the patient data on a light box, and in (b), a mannequin is being used as part of a human patient simulator.
Fig. 1 Early Surgery Methods (a) Surgeons viewing
data on light boxes (b) A mannequin used as
part of a simulator 
VR Technology assists traditional approaches by
generating graphical images (2D or 3D) of real patient
data, allowing user immersion in a synthetic
environment, giving the practitioner a realistic feel of
human organs. The Medical Practitioner can conduct
virtual training on 3D models to gain experience on any
specific procedure. Systems have been developed that help visualise complex patient data as closely as possible to reality, thereby helping the surgeon or medical practitioner gain understanding of the data. Surgeons can conduct a pre-operative surgery before touching the real patient. The entire process is cost-effective, since the only requirements are computing resources with high processing capability and storage capacity, and interfacing (haptic) devices to generate feedback. Practitioners can develop their skills by undergoing virtual training on basic procedures such as cutting, grasping, suturing etc. Fig. 2 shows session screenshots of Virtual Environment (VE) generation and dataset visualisation.
Fig. 2 Current VR Techniques (a) Rendering a VE in KISMET (b) Dataset Visualisation in Mimics
Existing VR systems are developed along two main streams: the first incorporates the development of synthetic trainers using pre-developed 3D models of human organs; the second addresses the visualisation of real human organ datasets. The paper discusses the development of a Virtual Surgery application using real human organ datasets, bringing together the best of the two streams. This enables the surgeon not only to understand and analyse an anatomical structure but also to practise mock surgery.
2. LITERATURE REVIEW
2.1 Review Classification
The literature documents a wide spectrum of work carried out in the past with respect to the present scenario. The literature review is classified as (1) Choice of Library/Toolkit, (2) Current VR Systems and (3) Methods used for Simulating Surgical Incisions.
2.1.1 Choice of Library/Toolkit
Bierbaum and Just discuss vital facts in the development of VR systems. Firstly, effective immersive environments need high frame rates (15 Hz or better) and low latency. Secondly, the developed environment should be flexible enough to adapt to many hardware and software configurations. Finally, the developed system should be easy to configure and to learn. A developer can choose between high-level interfaces (e.g. Virtools, Quest3D) that contain scripting languages, and low-level interfaces that require rigorous development for a particular event. Modelling can likewise be done either in external modelling tools or through interfaces that contain an API for modelling. The authors discuss several VR software packages (Iris Performer, Alice, the Cave Automatic Virtual Environment (CAVE) Library etc.) along with their strengths and limitations. The paper gives a clear understanding of the features to look for when choosing a software package.
Dreeson presents a study of open-source software for medical imaging. The author describes the features of the open-source software libraries The Visualization Toolkit (VTK) and the Insight Segmentation and Registration Toolkit (ITK). VTK is an open-source library containing many algorithms for 2D and 3D image processing and visualisation. The library implements the concept of a data-processing pipeline and contains numerous filters for reading, modifying and writing data. ITK, often used together with VTK, is intended for image analysis. The author describes software applications developed from these open-source libraries, i.e. OsiriX, Julius and (X)MedCon. A clear understanding of the software libraries VTK and ITK is given, along with their usability on a variety of platforms.
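The data-processing pipeline concept can be illustrated with a small sketch. The classes below are hypothetical stand-ins written for illustration only, not the real VTK API; they show how data flows from a source, through a filter, to a sink when output is requested.

```python
# Minimal sketch of a data-processing pipeline in the style of VTK.
# The class names are hypothetical, chosen for illustration.

class Reader:
    """Source stage: produces raw data."""
    def __init__(self, data):
        self.data = data

    def output(self):
        return list(self.data)

class ScaleFilter:
    """Filter stage: transforms data pulled from an upstream stage."""
    def __init__(self, upstream, factor):
        self.upstream = upstream
        self.factor = factor

    def output(self):
        return [v * self.factor for v in self.upstream.output()]

class Writer:
    """Sink stage: consumes the final data."""
    def __init__(self, upstream):
        self.upstream = upstream

    def write(self):
        return ",".join(str(v) for v in self.upstream.output())

# Chain the stages: reader -> filter -> writer.
reader = Reader([1, 2, 3])
scaled = ScaleFilter(reader, factor=10)
writer = Writer(scaled)
print(writer.write())  # -> 10,20,30
```

In the real toolkit the same idea applies: a reader's output port is connected to a filter's input, and execution is triggered on demand when the sink requests data.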
2.1.2 Current VR Systems
Robb discusses virtual endoscopy (or computed endoscopy), a method of diagnosis using computer processing of 3D image datasets (CT or Magnetic Resonance Imaging (MRI) scans) to provide simulated visualisations of patient-specific organs similar or equivalent to those produced by standard endoscopic procedures. The system provides viewing controls and options such as direction and angle of view, scale of view, immediate translocation to new views, lighting and measurement. Visible Human Male (VHM) datasets from the National Library of Medicine, US, are taken as the input datasets.
Fig. 3 Visualisation Scenario in Virtual Endoscopy (a) Different sections of the stomach (b) Different views of human organ models with an isolated colon model
The datasets are first pre-processed by transforming them to isotropic elements. The images are then brought into spatial synchrony and segmented to reduce the dataset to the desired anatomic structure(s). Surfaces are extracted by segmenting single anatomic objects from the 3D images. These surfaces are then converted to a meshwork of polygons, manipulating information such as colour, lighting, textural patterns etc. A preview of the rendering is examined by a radiologist, surgeon or endoscopist to confirm whether the model is acceptable. If acceptable, visualisation of the models is done using advanced display procedures; otherwise the datasets are re-processed. The virtual endoscopy models of the torso and its contents acquired from the VHM have been used for interactive visualisation. Simulations are done using fly-throughs captured of the segmented and modelled stomach (Fig. 3 (a)), selected locations of the trachea, oesophagus, colon (Fig. 3 (b)) and aorta.
Kühnapfel and Maaß discuss a VR system for simulation and training. The system is used for simulating Minimally Invasive Surgery (MIS) using the simulation software KISMET. The software includes a spline-based modeller, KisMo, for the creation of surgical scenes containing deformable anatomical organ models; besides the geometry, it generates a spatial mass-spring network of the objects for elastodynamic simulation in KISMET. Organ behaviour on interaction with virtual instruments is simulated using a dynamic spring/mass-node system solved by second-order ordinary differential equations (Lagrange equations). A predictive approach (Newton-Euler) is used for determining the velocity and position of the masses at specific intervals of time. The elastostatic Finite Element Method (FEM) is avoided due to its high computational demand and the risk of numerical instability. Modelling sessions using KisMo can be seen in Fig. 4.
Fig. 4 KisMo modelling session (a) Modelling the venal tree (b) Rendering a volume block
2.1.3 Methods used for Simulating Surgical Incisions
A paper presented by Bielser et al. describes ways by which real-time modelling and interactive cutting of 3D soft tissue can be used for surgery simulation within a physical framework. The method of tetrahedral subdivision is used to perform this simulation. First, collision is detected between the trajectory of the virtual scalpel and individual edges of the tetrahedral mesh, followed by a re-tetrahedralisation of the mesh that modifies its geometry and topology. Cutting of the mesh is done by the scalpel intersection at each edge, where each cut action is stored in a look-up table entry. The actions include insertion of new mass nodes, connectivity assignment and insertion of new faces. Mesh manipulation is implemented using three operations: edge splitting, face splitting and face insertion. Tissue elasticity is simulated by applying mass-spring systems governed by second-order ordinary differential equations. The equations are solved using a two-level Runge-Kutta method, in which the new position and velocity of each mass node are determined. This equation solver differs from that of Kühnapfel and Maaß, who use the Newton-Euler predictive method. The tetrahedral approach to cutting is useful, but considerable mesh computation is involved, especially in manipulating geometry and topology.
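The mass-spring integration described above can be sketched for a single node. A minimal Python sketch follows, with a two-level (midpoint, second-order Runge-Kutta) step; the stiffness, damping and mass values are illustrative, not taken from the cited papers.

```python
# One mass node on a damped spring, integrated with a midpoint (RK2) scheme.
# k (stiffness), c (damping) and m (mass) are illustrative values.

def accel(x, v, k=10.0, c=0.5, m=1.0, rest=0.0):
    """Spring force toward the rest position plus viscous damping."""
    return (-k * (x - rest) - c * v) / m

def rk2_step(x, v, dt):
    """Midpoint (two-level Runge-Kutta) step for position and velocity."""
    # Half step (predictor) using the current derivatives.
    xm = x + 0.5 * dt * v
    vm = v + 0.5 * dt * accel(x, v)
    # Full step using the midpoint derivatives.
    return x + dt * vm, v + dt * accel(xm, vm)

# A displaced node relaxes toward the rest position at x = 0.
x, v = 1.0, 0.0
for _ in range(2000):
    x, v = rk2_step(x, v, dt=0.01)
print(round(x, 3), round(v, 3))
```

In a full simulator the same step is applied to every node, with forces accumulated over all springs incident on it.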
Nienhuys and van der Stappen present a Delaunay approach to interactive cutting in triangulated surfaces. The authors propose Delaunay Triangulation (DT) to simulate cutting, defining the DT of a set of points as a triangulation of that set in which the circumcircle of every triangle contains no other point from the set. The approach is simulated on a 2D mesh using a line object as the scalpel. The line object is intersected with the mesh, and the active (intersected) nodes are moved. This is followed by local re-meshing of the active triangles. The re-meshing procedure involves edge flipping, node removal and node insertion for a split. The methodology is also implemented on 3D surfaces using single and multiple incisions. The authors use the Delaunay approach primarily to produce well-shaped meshes with few elements.
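The circumcircle criterion that defines the DT can be written directly as a determinant test. The function below is the standard 2D in-circumcircle formulation, shown here purely as an illustration of the definition given above.

```python
def in_circumcircle(a, b, c, d):
    """Return True if point d lies strictly inside the circumcircle of
    the counter-clockwise triangle (a, b, c) -- the Delaunay criterion."""
    # Translate so that d is at the origin, then evaluate the 3x3 determinant.
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
           - (bx * bx + by * by) * (ax * cy - cx * ay)
           + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det > 0

# The circumcentre of this right triangle lies inside its circumcircle;
# a far-away point does not.
print(in_circumcircle((0, 0), (1, 0), (0, 1), (0.5, 0.5)))  # -> True
print(in_circumcircle((0, 0), (1, 0), (0, 1), (2, 2)))      # -> False
```

Edge flipping in a DT-maintaining mesher amounts to flipping any edge whose opposite vertex makes this test return True.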
The summary of the literature review is given based on the review classification.
Choice of Library/Toolkits
- The literature review gives a clear understanding of the choice of toolkits based on certain criteria such as software performance, flexibility, import and export capability and the capability to interface with different VR devices.
- Libraries/toolkits can be divided into two categories, i.e. (1) high-level and (2) low-level.
Current VR Systems
- Preliminary tasks of acquiring organ data, handling large datasets and pre-processing datasets need to be given importance in the development of a surgery system.
- Few systems have used real datasets for both visualisation and surgical procedure simulation.
Methods used for Simulating Surgical Incisions
- From the literature study, different approaches are used for simulating real-life surgery. Incisions are simulated using subdivision and Delaunay mesh refinement.
- Organ datasets are modelled based on physics principles, i.e. mass-spring and finite element methods, to simulate organ deformation.
3. PROBLEM DEFINITION
3.1 Problem Definition
Current simulators make use of pre-modelled datasets for training procedures and simulating real surgery scenes, whereas real datasets are used mostly for interactive visualisation. Only a few attempts have been made in the past to use real human organ data both for visualisation and for performing mock surgeries. Hence, there exists a need to bridge this gap.
3.2 Problem Statement and Objectives
The aim is to develop a VR application for visualisation and practising surgery on a geometric model of real human organ data. The objectives are:
- To develop an application that processes a patient's real data into a usable form.
- To visualise the processed data through user-controlled camera navigation using appropriate toolkits.
- To develop a Virtual Surgery procedure and a surgical tool to conduct the procedure.
- To develop the application GUI and VR device interfacing.
3.3 Methodology
The methodologies used for the development are:
1. The patient's real data in DICOM format is processed into a geometric format, which is then rendered in a synthetic environment.
2. The visualisation scene is developed making use of a user-controlled virtual camera for user navigation and dataset visualisation in the 3D environment. Textured environments, model material properties and lighting effects are applied to the virtual scene for realism.
3. The algorithm for the surgery procedure is developed, for which a real organ dataset is taken and manipulated for simulation. A synthetic surgical instrument is modelled using basic geometric primitives: cylinder, cube and sphere.
4. The GUI is developed for dataset input and camera control specification.
5. VR device interfacing is done for controlling camera movements in the visualisation module and the virtual instrument in the surgery training module.
3.4 Scope of Development
The following scope is used for the development of the application.
- An exclusive surgery procedure is considered for simulation, and the surgery instrument is modelled for conducting this procedure.
- Two datasets are considered for the testing and implementation of the application. The datasets are chosen based on size, commercial use, memory consumption and computation criteria.
- The application is built with hardware resources of appropriate configuration to deploy realistic surgery scenarios.
4. SYSTEM ANALYSIS AND DESIGN
4.1 System Analysis
A structured analysis is required for the development of the application. This stage of system development helps in fully realising the system that is to be designed, developed and implemented.
4.1.1 System Parameters
This part of the analysis includes the input/output requirements as well as the processes involved in the development of the application. Each of the requirements is specified below.
Inputs:
- 2D DICOM image datasets
- Real-time user inputs from VR/traditional input devices
Outputs:
- 3D geometric model construction of the dataset
- Surgery simulation in real time
Processes:
- Surface generation of the input dataset
- Model property setting
- Virtual light setting
- Real-time collision with the dataset
- Real-time dataset manipulation
- Surgical tool modelling
4.1.2 Hardware Requirements
- RAM: ~4 GB
- Graphics card: ~512 MB
- Processor speed: ~2 GHz
- VR devices: pressure-sensing DataGlove and joystick
4.1.3 Software Requirements Analysis
Choice of Toolkit
The preliminary task involves the study of the different software libraries and tools available for the development of this application. A comparative study of various toolkits is done, and it is found that the OpenGL (Open Graphics Library), VTK and ITK software libraries are useful for the development of the application. These tools are chosen for the following reasons:
- The libraries are open source; hence users benefit from their adaptability to various operating systems and their availability to the general public free of cost.
- Each of the tools has its specific area of implementation that can be incorporated in the application development: VTK for complex visualisations, ITK for medical image analysis, and OpenGL for its ability to interact with the former libraries, interface user-interaction devices and support GUI development.
4.2 System Design
4.2.1 Application Design Flow
The application begins with an initial display scene, with continuous polling of the interface devices. The user inputs an event choice, i.e. visualising a dataset or performing a surgery procedure. The visualisation process consists of dataset visualisation, which is implemented by providing user specification of the folder containing the DICOM images. The surgical procedure module consists of training and simulation, where the user can perform a real-life surgical procedure. The system ends when the user chooses the close option from the application. Fig. 5 shows the overall design of the application, which is further divided into the visualisation process and the surgery procedure module.
Visualisation Process Design Flow
Visualisation of the real human organ dataset is an essential component of a virtual surgery application. Four operations are done in the visualisation module: reading the dataset, writing a volume, camera navigation and virtual scene setup. The folder containing the DICOM images is set by the user and the images are read by the system. The volume is written either to memory or to the storage disk for visualisation. Camera movements are given to the user to navigate in the virtual environment. To give a realistic impression of the actual surgery, virtual environments are created using organ textures mapped onto 3D objects. These textured objects, along with the input dataset, are placed in the scene for visualisation. Fig. 6 shows the visualisation design.
Surgical Procedure Design Flow
The development of surgical events involves three main operations: instrument collision with the dataset, manipulation of the dataset geometry and generation of a visual response. The design is shown in Fig. 7, where the system receives the coordinates of the virtual surgical instrument in real time. A real-time collision check has to be performed to detect instrument intersection with the dataset. When this step is achieved, a visual response due to dataset manipulation is given to the user.
Fig. 5 Application Design Flow
Fig. 6 Visualisation Module Design Flow
Fig. 7 Virtual Surgery Scene Design Flow
4.2.2 Application GUI Design Flow
The GUI design of the system consists primarily of two parts, i.e. the input GUI and the output GUI. The input GUI handles the user input of the dataset directory for visualisation or of a surgery event. A choice of either of these two events leads to a 3D environment setup scene where the user can visualise. Fig. 8 shows the layout of the GUI. The input GUI consists of menu-based selection of the directory list or an event.
Fig. 8 System GUI Design
The output GUI consists of application run-time information and control specifications for the devices. The control specifications list the user inputs for camera navigation and instrument motion.
5. IMPLEMENTATION AND RESULTS
Data acquisition, geometric model creation, camera navigation and incision simulation are the tasks developed. The stages of development of these tasks are described below.
5.1.1 Stages Of Development
1. Data Acquisition
Experimental DICOM datasets are obtained from a CT/MRI source. These datasets are used as sample datasets for the OsiriX Image Navigation Software (Mac OS X). The images contained in the datasets are segregated into specific folders for user input of the dataset directory path. The first dataset is a surgical repair of a facial deformity, of size 25.4 MB with a total of 48 images (Fig. 9 (a)). The second dataset is a normal CT coronary study acquired on a 16-detector CT scanner, of size 110 MB with a total of 220 images (Fig. 9 (b)).
Fig. 9 CT Images in DICOM format  (a)
Surgical-Repair-of-Facial-Deformity (b) Coronary
2. Rendering the Geometric Model
The directory containing the dataset images is set using the input class of the ITK Grassroots DICOM (GDCM) library. The series of DICOM images is read and copied to computer memory as a volume dataset, and is also saved as a 3D model (.vtk) file for evaluation. The volume dataset in memory is exported to the VTK pipeline for visualisation, where the contouring technique of visualisation is used. The contoured datasets can be seen in Fig. 10.
Fig. 10 Contoured Datasets (a) Facial-deformity
dataset (b) Coronary dataset
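The core operation of the contouring technique can be illustrated in miniature: along every grid edge whose endpoint scalars straddle the isovalue, a crossing point is found by linear interpolation. The sketch below is a simplified 2D illustration of that operation, not the VTK implementation.

```python
def edge_crossings(grid, iso):
    """Scan the horizontal and vertical edges of a 2D scalar grid and
    return the interpolated points where the isovalue is crossed."""
    pts = []
    rows, cols = len(grid), len(grid[0])
    for y in range(rows):
        for x in range(cols):
            s0 = grid[y][x]
            # Horizontal edge to (x + 1, y).
            if x + 1 < cols:
                s1 = grid[y][x + 1]
                if (s0 - iso) * (s1 - iso) < 0:
                    t = (iso - s0) / (s1 - s0)  # linear interpolation
                    pts.append((x + t, float(y)))
            # Vertical edge to (x, y + 1).
            if y + 1 < rows:
                s1 = grid[y + 1][x]
                if (s0 - iso) * (s1 - iso) < 0:
                    t = (iso - s0) / (s1 - s0)
                    pts.append((float(x), y + t))
    return pts

# A 2x2 grid with one corner above the isovalue 0.5.
print(edge_crossings([[0.0, 0.0], [0.0, 1.0]], 0.5))
```

A full contour filter additionally connects these crossing points into line segments (in 2D) or triangles (in 3D), but the interpolation step is the same.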
3. Implementing the Visualisation Event
Programming Camera Controls:-
Camera movements (azimuth, elevation, yaw, pitch, roll and dolly) are provided for the user to move the virtual camera in the synthetic environment. These movements are controlled using keyboard inputs.
Rendering a realistic surgery scene:-
A texture source is applied in the visualisation scene for realism. A human organ image is mapped onto a 3D sphere object as a texture. The radius of the sphere is set to a value larger than the bounds of the input dataset.
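The azimuth movement, for instance, amounts to rotating the camera position about the focal point around the view-up axis. A minimal sketch follows, assuming the world z axis as view-up (an illustrative choice, not necessarily the one used in the application):

```python
import math

def azimuth(position, focal, degrees):
    """Rotate the camera position about the focal point around the
    world z (view-up) axis -- the 'azimuth' camera movement."""
    rad = math.radians(degrees)
    # Work in focal-point-relative coordinates.
    dx = position[0] - focal[0]
    dy = position[1] - focal[1]
    dz = position[2] - focal[2]
    rx = dx * math.cos(rad) - dy * math.sin(rad)
    ry = dx * math.sin(rad) + dy * math.cos(rad)
    return (focal[0] + rx, focal[1] + ry, focal[2] + dz)

# A camera on the +x axis orbits 90 degrees toward the +y axis.
pos = azimuth((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), 90.0)
print(tuple(round(c, 6) for c in pos))
```

Elevation is the analogous rotation about the horizontal axis perpendicular to the view direction, and dolly is a translation along the view direction.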
4. Implementing the Surgery Training event
The geometric model of one of the test datasets is used as the evaluation dataset for the simulation event. Dataset reading is achieved by making use of the dataset reader class in VTK. Part of the dataset is extracted and processed using the contouring technique for visualisation.
Determining line object coordinates in real-time:-
A line object is used as a preliminary virtual tool for detecting collision with the dataset. The end points of the line object are determined in real time and checked to see whether any part of the dataset is intersected. Line-dataset collision (intersection) is achieved in real time by using a cell search. The point along the line object that intersects a dataset cell is stored in a 1D array of size 3, one element each for the x, y and z coordinates. The closest points surrounding the intersection point in the dataset, within a specific radius, are determined for cell removal to achieve the incision simulation. This is done by making use of the point selection algorithm. Fig. 11 shows the implementation in real time.
Fig. 11 Scalpel Collision Detection Process
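The line-cell intersection at the heart of the collision check can be sketched with the standard Möller-Trumbore segment/triangle test. This is a stand-in illustration of the idea for one triangular cell, not the cell-search code used in the application.

```python
def segment_hits_triangle(p0, p1, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore test: does the segment p0->p1 cross the triangle
    (v0, v1, v2)?  Returns the intersection point, or None on a miss."""
    def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                             a[2] * b[0] - a[0] * b[2],
                             a[0] * b[1] - a[1] * b[0])
    def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    d = sub(p1, p0)                  # segment direction
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(d, e2)
    a = dot(e1, h)
    if abs(a) < eps:                 # segment parallel to triangle plane
        return None
    f = 1.0 / a
    s = sub(p0, v0)
    u = f * dot(s, h)                # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = f * dot(d, q)                # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(e2, q)               # parameter along the segment
    if t < 0.0 or t > 1.0:
        return None
    return (p0[0] + t * d[0], p0[1] + t * d[1], p0[2] + t * d[2])

# A vertical segment crossing a triangle lying in the z = 0 plane.
print(segment_hits_triangle((0.2, 0.2, 1.0), (0.2, 0.2, -1.0),
                            (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))
```

A cell search accelerates this by only testing the cells near the segment rather than every cell in the dataset.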
Virtual Instrument Modelling:-
A virtual scalpel is modelled using a 3D sphere for the scalpel blade and a cylinder for the scalpel handle. The sphere and cylinder geometry are modified to look like the scalpel blade and handle. The two objects are combined in an assembly and then translated, rotated and scaled to be rendered in the scene. The virtual scalpel is shown in Fig. 12 (a).
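The placement of the assembled instrument can be sketched as a simple composition of transforms applied to a point. The order (scale, then rotate, then translate) and the values below are illustrative only:

```python
import math

def transform(point, scale=1.0, rotate_z_deg=0.0, translate=(0.0, 0.0, 0.0)):
    """Apply scale, then a rotation about z, then a translation to a
    point -- a minimal stand-in for placing an assembly in the scene."""
    x, y, z = (c * scale for c in point)
    r = math.radians(rotate_z_deg)
    x, y = x * math.cos(r) - y * math.sin(r), x * math.sin(r) + y * math.cos(r)
    tx, ty, tz = translate
    return (x + tx, y + ty, z + tz)

# Scale by 2, rotate 90 degrees about z, then move along +z.
p = transform((1.0, 0.0, 0.0), scale=2.0, rotate_z_deg=90.0, translate=(0, 0, 5))
print(tuple(round(c, 6) for c in p))
```

In a scene graph the same composition is stored as a single 4x4 matrix on the assembly node, so that the blade and handle move together.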
Using scalpel coordinates for collision detection:-
The scalpel coordinates are then used to detect collision with the dataset. The surgical incision is performed after collision detection is achieved. The closest points located in the dataset are stored in an array, and the attribute values, i.e. the scalar values corresponding to each of these points, are altered for dataset clipping.
Fig. 12 Incision process (a) Virtual Scalpel (b)
Simulating Incision with the Virtual Scalpel
The end result is a dataset with (i) the non-intersected
points and their corresponding attributes, and (ii) the
intersected points and their altered attributes. The
intersected points are clipped from the dataset by
specifying a clip value. The surgical incision is shown
in Fig. 12 (b).
Realism Implementation Process:-
Two datasets are extracted from the evaluation
dataset to depict the skin and flesh structures. The
diffuse and specular properties of the two surfaces
are set to give a shiny appearance, imitating the
reflection from a camera-light object. A texture
object is applied to a 3D sphere placed in the
virtual scene, with a radius greater than the
coordinate bounds of the flesh surface.
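Choosing a sphere radius that exceeds the dataset's coordinate bounds can be done from the axis-aligned bounding box. A minimal sketch, assuming the VTK-style bounds tuple (xmin, xmax, ymin, ymax, zmin, zmax) and a hypothetical safety margin of 10%:

```python
import math

def enclosing_sphere(bounds, margin=1.1):
    """Centre and radius of a sphere that fully encloses the given
    axis-aligned bounds, so a textured sphere placed there surrounds
    the flesh surface in the scene."""
    xmin, xmax, ymin, ymax, zmin, zmax = bounds
    center = ((xmin + xmax) / 2, (ymin + ymax) / 2, (zmin + zmax) / 2)
    half_diag = math.sqrt(((xmax - xmin) / 2) ** 2
                          + ((ymax - ymin) / 2) ** 2
                          + ((zmax - zmin) / 2) ** 2)
    return center, margin * half_diag
```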
5. VR Device Interfacing
VR devices are interfaced for user interaction with
the application. The joystick device is used to control
the virtual scalpel movement, and the DataGlove is used
to control the camera movements during visualisation.
Each of these devices is configured using its user-interface
library. The devices are then polled to retrieve the
device information once the application begins. The
devices are shown in Fig. 13.
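The per-frame polling pattern can be sketched as follows. This is a hypothetical illustration: the `read_axes` callback and `speed` factor are assumptions standing in for the joystick's actual interface library, and the DataGlove would drive the camera in the same way.

```python
# Hypothetical polling sketch: once per frame, read the normalised
# joystick axes (-1..1) and translate the virtual scalpel accordingly.

def poll_scalpel(read_axes, scalpel_pos, speed=0.1):
    """Return the new scalpel position after one polling step."""
    ax, ay, az = read_axes()
    return (scalpel_pos[0] + ax * speed,
            scalpel_pos[1] + ay * speed,
            scalpel_pos[2] + az * speed)
```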
Fig. 13 VR Devices (a) Joystick Interface (b) DataGlove Interface
6. GUI Development
This stage of development involves user
interaction with the application through display
windows or widgets. The display consists of bitmap
texts rendered on the OpenGL render window, and a menu
consisting of three options: setting the dataset
directory for visualisation, visualising the set dataset
with a different contour value, and performing the
incision simulation event. The dataset directory is
initialised to the system directory 'c:'. The files and
folders contained in the system directory are read by the
VTK directory reader class and copied to a string array.
Each string is added as a menu entry through the OpenGL
menu creation function. The final display menu consists
of a list of the files and folders currently present in the
system directory 'c:'; the DICOM Image Dataset folder
is also included in the list. 2D scene title texts are used
for the Visualisation and Surgery Training scenes.
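Building the menu entries from a directory listing can be sketched as below. This Python illustration stands in for the VTK directory reader feeding the OpenGL menu; pairing each name with an integer id mirrors the value a menu callback would receive when the entry is selected.

```python
import os

def build_menu_entries(directory):
    """Pair each file/folder name in `directory` with the integer id
    it would be given when added as an entry to the display menu."""
    return list(enumerate(sorted(os.listdir(directory))))
```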
The final stages of the application, involving
realistic scene creation, can be seen in Fig. 14 and Fig.
15. Fig. 14 shows the visualisation of the Surgery-
Repair-of-facial-Deformity dataset, and Fig. 15 shows
the incision procedure being implemented. Two parts
of the facial-deformity dataset are extracted and
visualised in the surgery scene, incorporating distinct
material properties. The cutting simulation is
performed, revealing the 'flesh' structure beneath the skin.
Fig. 14 Applying realism (a) Normal Visualisation
Fig. 15 Final Implementation of Surgery Training
7. Conclusion
Systems have been developed that make use of Real
Human Organ data for visualisation and Pre-Modelled
Data for simulation and training. Only a few attempts
have been made to develop applications incorporating
Real Data for both these events. The paper discusses a
solution to this problem and defines a system that serves
both Visualisation and Simulation using Real Datasets.
These datasets are processed from their native format into
a usable form for the development of the application.
The following conclusions can be drawn:
- DICOM datasets are read by the system as input
and processed for visualisation.
- A Virtual Environment (VE) is developed for user
navigation. The environment imitates a real-life
surgery scenario, using appropriate scene settings.
- The surgery procedure is developed, making use of
a modelled surgical instrument to perform the incision.
- The necessary GUI has been developed for the
application. VR devices are also incorporated for
user interaction with the synthetic environment.
The system can be extended to deliver advanced
features compared to the existing ones. These features
can include organ part segmentation, the inclusion of
multiple scene cameras for flexible navigation,
interfacing with other VR devices, etc. 3D hand models
can be incorporated in the surgery scene to simulate a
surgeon's hand movements, tracked by using a motion
tracker. The system GUI can be built in relation to
the developed scenes. The fully developed system can
be beneficial to surgeons for pre-operative planning and
for performing mock surgery, and can also help medical
students practise medical procedures.