Creating a Vision Channel
for Observing Deep-Seated Anatomy
in Medical Augmented Reality
A Cut-Away Technique for In-Situ Visualization
Felix Wimmer1, Christoph Bichlmeier1, Sandro M. Heining2, Nassir Navab1
1Chair for Computer Aided Medical Procedures (CAMP), TU München, Germany
2Trauma Surgery Department, Klinikum Innenstadt, LMU München, Germany
Abstract. The intent of medical Augmented Reality (AR) is to augment
the surgeon’s real view on the patient with the patient’s interior anatomy
resulting from a suitable visualization of medical imaging data. This
paper presents a fast and user-defined clipping technique for medical AR that
allows cutting away any parts of the virtual anatomy, as well as parts of the
real camera image, that hinder the surgeon's view of the deep-seated region of
interest. Modeled on cut-away techniques from scientific
illustrations and computer graphics, the method creates a fixed vision
channel to the inside of the patient. It enables a clear view on the focussed
virtual anatomy and moreover improves the perception of spatial depth.
1 Introduction

Using Augmented Reality (AR) for in-situ visualization of medical data has been
a subject of intensive research during the last two decades [1, 2]. The objective
of this research is the use of AR technology for preoperative diagnoses and sur-
gical planning as well as intraoperative navigation. The purpose of medical AR
is to augment the surgeon’s real view on the patient with the patient’s interior
anatomy. A stereo video see-through head mounted display and an external op-
tical tracking system allow for precise registration of visualized medical imaging
data such as computed tomography (CT), magnetic resonance imaging (MRI)
and ultrasound with the patient.
Improving data presentation for correct perception of depth, relative distances
and layout of "hidden and occluded objects", for instance the position of
anatomy inside the human body from the surgeon's point of view, is a major
issue in AR [4, 5]. In one of the first publications about medical AR [6],
Bajura et al. already identified the problem of misleading depth perception
when virtual anatomy occludes the patient. To handle the problem, they render a
"synthetic hole [...] around ultrasound images in an attempt to avoid
conflicting visual cues" [6].
Following this model of creating a virtual hole inside the body of the patient,
we developed a fast and user-defined clipping technique, which enables the user to
cut away parts of the virtual anatomy and the camera image. The method allows
for clear visibility of the focussed virtual anatomy and furthermore improves
the perception of spatial depth. In a preparative step the medical imaging data
is segmented into a region of the anatomy the user is interested in (the focus)
and an unimportant region. The user defines and positions a volume cutting
away all parts of the pre-segmented unimportant region lying inside of it. The
pre-defined focus region is not clipped and remains visible inside the volume. Shape
and size of the clipping volume can be defined individually by the user. The
remaining part of the unimportant region outside the volume and the borders of
the volume provide context information and thus positional and depth cues.
Medical imaging data usually includes a barrage of information. The present
cut-away technique is capable of reducing this information to an important
fraction of the data set and of revealing the view onto the focussed anatomy.
We combine cut-away views known from technical and medical illustrations and
various approaches in the field of computer graphics [7, 8, 9] with the
beneficial potential of medical AR technology.
2 Materials and Methods
Medical imaging data taken from a CT or MRI scan is presented using a stereo-
scopic video see-through HMD. The entire tracking system, which allows for
tracking the HMD, the patient and several surgical instruments, is described in
detail in [3]. We use pre-segmented surface models to visualize the anatomy.
The application of the present cut-away technique is illustrated in Figure 1.
Different shapes can be used for the clipping volume. Figures 1 (a) and (b) show
a box and Figure 1 (c) a sphere used as clipping volume. The clipping technique
is intended to be applied in conjunction with a technique described in [10], used to
modify the transparency of the camera image. Figure 1 (a) shows a clipping box
without integrating the transparency modulation technique (ghosting), whereas
Figures 1 (b) and (c) illustrate the combination of clipping volumes and the
transparency modulation technique.
Fig. 1. Cut-away with volume clipping: (a) using a box as the clipping volume,
(b) the clipping box in conjunction with ghosting of the skin [10] and (c) the
usage of a clipping sphere in conjunction with ghosting of the skin [10].
The present technique generates a depth map by grabbing the depth buffer
after off-screen rendering of the skin and the chosen clipping volume. This depth
map is used to clip objects in the scene by performing a depth test in a
fragment shader.
2.1 Setting up Size and Position of the Clipping Volume
When using a box for clipping, the user is able to interactively define the coordi-
nates of each vertex of the box by key-pressing, mouse movement or movement
of his head (see Figure 2 (a)). If a sphere or a cylinder has been chosen, the
user can define the position and the radius of the volume. Once the user has
found the right location and size of the volume, it can be fixed permanently at
this position.
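
To make this concrete, the following C++ sketch shows how such key-based
resizing might be implemented. It is our own minimal example; the ClipBox
structure, the key bindings and the step size are illustrative assumptions, not
taken from the paper.

    #include <cstdio>

    // Hypothetical axis-aligned clipping box; field names are illustrative.
    struct ClipBox { float min[3], max[3]; };

    // Nudge one face of the box in response to a key press: lower-case keys
    // grow the box along an axis, upper-case keys shrink it (assumed binding).
    void adjustBox(ClipBox& box, unsigned char key, float step = 2.0f)
    {
        switch (key) {
            case 'x': box.max[0] += step; break;
            case 'X': box.max[0] -= step; break;
            case 'y': box.max[1] += step; break;
            case 'Y': box.max[1] -= step; break;
            case 'z': box.max[2] += step; break;
            case 'Z': box.max[2] -= step; break;
        }
    }

    int main()
    {
        ClipBox box = {{-50.f, -50.f, 0.f}, {50.f, 50.f, 100.f}};
        adjustBox(box, 'x');                        // enlarge along +x
        std::printf("max x = %.1f\n", box.max[0]);  // prints "max x = 52.0"
    }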
2.2 Generation of the Clipping Depth Map
In order to generate the depth map, first all buffers are cleared and the skin is
rendered to the depth buffer. Afterwards the depth test is inverted, meaning
that fragments pass the depth test if they have a greater depth value than the
one already stored in the depth buffer. Subsequently, the clipping volume is
rendered to the depth buffer. The depth values of the clipping volume are
stored in the depth buffer at the corresponding pixel positions if they are
greater than those of the previously rendered skin. Additionally, the stencil
bit¹ is set at these pixel positions. The resulting stencil mask is used to
clip the camera image and to display the volume. The depth values of the skin
remain in the buffer at all pixel positions where they are greater than the
depth values of the clipping volume. The resulting depth buffer is stored to a
designated buffer called the depth map.
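
A minimal OpenGL sketch of this pass is given below. The hooks drawSkin and
drawClippingVolume stand in for the application's actual rendering calls, and
grabbing the depth buffer with glReadPixels is one possible realization; both
are our assumptions rather than the paper's exact implementation.

    #include <GL/gl.h>
    #include <vector>

    void drawSkin();            // assumed hook: renders the skin surface model
    void drawClippingVolume();  // assumed hook: renders the clipping volume

    // Builds the clipping depth map and stencil mask for the current frame.
    std::vector<float> buildClippingDepthMap(int width, int height)
    {
        // 1) Clear all buffers and render the skin into the depth buffer only.
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
        glEnable(GL_DEPTH_TEST);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // depth-only pass
        glDepthFunc(GL_LESS);
        drawSkin();

        // 2) Invert the depth test: fragments of the clipping volume now pass
        //    only where they lie behind the skin, and the stencil bit is set
        //    at exactly those pixel positions.
        glDepthFunc(GL_GREATER);
        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_ALWAYS, 1, 0x1);
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE); // set bit on depth pass
        drawClippingVolume();

        // 3) Grab the resulting depth buffer as the clipping depth map.
        std::vector<float> depthMap(width * height);
        glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT,
                     depthMap.data());

        // Restore state for the main rendering pass.
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthFunc(GL_LESS);
        glDisable(GL_STENCIL_TEST);
        return depthMap;
    }

In practice the grabbed depth values would be uploaded as a texture so that the
fragment shader described in the next section can sample them.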
2.3 Clipping of Virtual Anatomy and the Camera Image
A render mode chosen by the user determines whether a segmented virtual object
should be clipped or not, and thus whether it is a focussed object or not.
When rendering an object that should be clipped, a fragment shader program
performs a second depth test in addition to the standard depth test, using the
previously created depth map as an input parameter. This shader performs a
lookup into the depth map at the screen coordinates of the fragment and
compares the stored depth value with the depth value of the fragment. If the
fragment's depth value is smaller (closer to the eye), the fragment is
discarded. Consequently, every fragment that is closer to the eye than the
corresponding fragment of the clipping volume is discarded, and thus the part
of the data set lying inside the clipping volume is clipped.
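
Expressed as a legacy GLSL fragment shader (embedded here as a C++ string
constant), this second depth test could look as follows. The uniform names and
the assumption that the depth map is available as a 2D texture are ours; the
paper does not specify them.

    // GLSL source for the second depth test; uDepthMap and uViewport are
    // illustrative names.
    const char* kClipFragmentShader = R"(
        uniform sampler2D uDepthMap;  // depth map generated in Sec. 2.2
        uniform vec2      uViewport;  // viewport size in pixels

        void main()
        {
            // Look up the depth map at this fragment's screen coordinates.
            float mapDepth = texture2D(uDepthMap,
                                       gl_FragCoord.xy / uViewport).r;

            // Discard fragments closer to the eye than the clipping volume,
            // i.e. everything lying inside the vision channel.
            if (gl_FragCoord.z < mapDepth)
                discard;

            gl_FragColor = gl_Color; // placeholder shading
        }
    )";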
¹ The stencil buffer is an additional buffer besides the color buffer and the
depth buffer, available on common modern graphics hardware.
Fig. 2. (a) Setting up the clipping volume and (b) texturing the clipping volume.
The camera image is only clipped if the clipping volume is not used in
conjunction with ghosting of the skin. If applied without ghosting of the skin,
the generated stencil mask can be utilized to display the camera image only at
those screen positions where the stencil bit is not set. In this way, every
part of the skin lying inside the clipping volume is clipped and a window in
the skin is generated, offering the depth cue of occlusion for improved
perception of the virtual anatomy (see Figure 1 (a)).
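
A sketch of this masking step, assuming the camera image is drawn as a textured
full-screen quad by a helper of our own naming:

    #include <GL/gl.h>

    void drawCameraImage(); // assumed hook: textured full-screen quad

    // Draw the camera image only where the stencil bit is NOT set, so the
    // pixels covered by the clipping volume stay open as a window in the skin.
    void drawMaskedCameraImage()
    {
        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_EQUAL, 0, 0x1);        // pass only where bit == 0
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP); // never modify the mask here
        drawCameraImage();
        glDisable(GL_STENCIL_TEST);
    }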
2.4 Displaying and Texturing the Clipping Volume
In order to provide further depth cues, the borders and planes of the clipping
volume can also be displayed. For this purpose, fragments of the clipping
volume lying in front of the skin are removed using the generated stencil mask. The borders
of the clipping volume can be displayed in a semi-transparent or opaque way.
Shading and texturing the clipping volume provides further context information
for perceiving the focussed anatomy. Figure 2 (b) illustrates these effects.
Here, a semi-transparent texture showing a linear gradient from white (close to
the observer) to dark (distant from the observer) was chosen to improve depth
estimation.
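
For illustration, such a semi-transparent gradient could be realized as a small
1D luminance-alpha texture; the resolution, the format and the constant alpha
value below are our assumptions.

    #include <GL/gl.h>
    #include <vector>

    // Creates a 1D texture fading from white to dark with constant
    // semi-transparency, to be mapped onto the clipping volume along the
    // viewing direction.
    GLuint createGradientTexture(int size = 256)
    {
        std::vector<unsigned char> texels(size * 2);
        for (int i = 0; i < size; ++i) {
            unsigned char shade = static_cast<unsigned char>(
                255 - (255 * i) / (size - 1));  // white (near) -> dark (far)
            texels[2 * i + 0] = shade;          // luminance
            texels[2 * i + 1] = 128;            // constant semi-transparency
        }

        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_1D, tex);
        glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage1D(GL_TEXTURE_1D, 0, GL_LUMINANCE_ALPHA, size, 0,
                     GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, texels.data());
        return tex;
    }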
3 Results

We used a thorax phantom to qualitatively evaluate the present visualization
technique. Originally, the phantom only contained an installed spinal column.
We therefore extended the virtual anatomy by surface models segmented from the
Visible Korean Human (VKH)² data set. Virtual models are registered manually
with the thorax phantom, which is in our case sufficient for evaluating the
visualization method.
² The Visible Korean Human project provides different full body data sets: CT,
MRI and anatomical images (http://vkh3.kisti.re.kr/new).
4 Discussion

The present method uses polygonal surface models to allow for real-time
visualization. Surface models have to be segmented and triangulated before the
visualization can be performed. Future work will include the integration of
this approach into a ray-casting based volume renderer in order to render
volumetric CT or MRI data directly, without time-consuming preparative steps.
However, such a rendering technique requires powerful hardware to obtain
real-time rendering.
Acknowledgement. We want to express our gratitude to the radiologists and
surgeons of Klinikum Innenstadt München, Frank Sauer, Christopher Stapleton,
Mohammad Rustaee and the NARVIS group for their support. This work was funded
by the BFS within the NARVIS project (www.narvis.org).
References

1. Cleary K, Chung HY, Mun SK. OR 2020: The operating room of the future.
Laparoendosc Adv Surg Tech. 2005;15(5):495–500.
2. Peters TM. Image-guidance for surgical procedures. Phys Med Biol.
2006;51(14):R505–40.
3. Sauer F, Khamene A, Bascle B, et al. Augmented reality visualization in iMRI oper-
ating room: System description and pre-clinical testing. Proc SPIE. 2002;4681:446–
4. Furmanski C, Azuma R, Daily M. Augmented-reality visualizations guided by
cognition: Perceptual heuristics for combining visible and obscured information.
In: Proc. IEEE and ACM Int’l Symp. on Mixed and Augmented Reality (ISMAR).
Washington, DC, USA: IEEE Computer Society; 2002. p. 215.
5. Sielhorst T, Bichlmeier C, Heining S, et al. Depth perception, a major issue
in medical AR: Evaluation study by twenty surgeons. Lect Notes Computer Sci.
2006; p. 364–72.
6. Bajura M, Fuchs H, Ohbuchi R. Merging virtual objects with the real world: Seeing
ultrasound imagery within the patient. Proc SIGGRAPH. 1992; p. 203–10.
7. Feiner S, Seligmann DD. Cutaways and ghosting: Satisfying visibility constraints
in dynamic 3D illustrations. Vis Comp. 1992;8(5&6):292–302.
8. Diepstraten J, Weiskopf D, Ertl T. Interactive cutaway illustrations. Comp
Graph Forum. 2003;22(3):523–32.
9. Weiskopf D, Engel K, Ertl T. Volume clipping via per-fragment operations in
texture-based volume visualization. In: Proc IEEE Vis. Washington, DC, USA:
IEEE Computer Society; 2002. p. 93–100.
10. Bichlmeier C, Wimmer FJ, Heining SM, et al. Contextual anatomic mimesis:
Hybrid in-situ visualization method for improving multi-sensory depth
perception in medical augmented reality. In: Proc. IEEE and ACM Int'l Symp. on
Mixed and Augmented Reality (ISMAR); 2007. p. 129–38.