
Nonphotorealistic Rendering of Medical Volume Data

Authors:
  • Feng Dong, Gordon J. Clapworthy, Hai Lin, and Meleagros A. Krokos (De Montfort University)

Volumetric hatching produces pen-and-ink illustrations suited for medical illustration. This approach accounts for data beneath the surface, producing images showing the subject's shape and reflecting its character.
Abstract

The article introduces volumetric hatching, a novel technique that produces pen-and-ink-style images from medical volume data. Unlike previous approaches that generate full-surface models, our technique uses characteristics of the volume near the stroke being produced to generate a local intermediate surface. Because global isosurfaces can't exactly model many medical subjects, our volume-based method has considerable advantages. Our method is largely insensitive to surface artifacts. We focus on hatching with line strokes to portray muscles, intestines, brains, and so on. Hatching with line strokes requires determining not just the position of the line strokes, but also their orientation. Thus, the strokes not only illustrate the subject's shape, but also describe its character in some way - for example, by displaying fiber orientations for muscles.
Nonphotorealistic rendering, which produces images in the manner of traditional styles such as painting or drawing, is proving to be a useful alternative to conventional volume or surface rendering in medical visualization. Typically, such illustrations use simple lines to demonstrate medical shapes and features, omitting a great deal of detail. They frequently highlight the most relevant information better than glossy, colored images. Medical illustration has long regarded pen and ink as an important medium. Thus, because medical professionals are familiar with pen-and-ink illustrations, they could readily accept them as an alternative to conventional rendering of medical images.

Several authors, such as Treavett and Chen [1], have begun using NPR in medical visualization (see the "Related Work in Nonphotorealistic Rendering" sidebar for a discussion of other visualization techniques). The general approach has been to apply existing surface-hatching techniques to surface models created from the volume data by marching cubes [2] or a similar method. Surface visualization, however, might not be well suited to portraying soft tissues, which are difficult to model with isosurfaces.
This article introduces volumetric hatching, a novel technique that produces pen-and-ink-style images from medical volume data. Unlike previous approaches that generate full-surface models, our technique uses the characteristics of the volume near the stroke being produced to generate a local intermediate surface. Because global isosurfaces can't exactly model many medical subjects, our volume-based method has considerable advantages. Our method is largely insensitive to surface artifacts. We focus on hatching with line strokes to portray muscles, intestines, brains, and so on. Hatching with line strokes requires determining not just the position of the line strokes, but also their orientation. Thus, the strokes not only illustrate the subject's shape, but also describe its character in some way, for example, by displaying fiber orientations for muscles.

Volumetric hatching can't replace conventional medical visualization. Rather, it can supply an alternative description that can enhance medical understanding. It might be particularly useful with the vector-based images of functional anatomy, where it could both describe motion sequences more compactly and focus on specific medical features demonstrated in the animation.
Volumetric hatching
Figure 1 shows the volumetric hatching pipeline. Volume data is a 3D grid comprising a number of 2D images. A set of eight adjacent volumetric data points (VDPs), sample points in the volume data, makes up a volume cube, the basic unit of volume data.

[Figure 1. Overview of volumetric hatching. Volume data is the input for producing silhouette points and strokes, which feed into the rendering module.]

The silhouette, or outline of the subject, provides information about the subject, enough that some objects are recognizable from their silhouettes alone. The silhouette changes if the subject is viewed from a different direction.
Silhouette computation identifies a set of 3D points (silhouette points) that pass to the rendering module to generate silhouette lines in the final image. During silhouette computation, we collect 3D silhouette points instead of lines. We project the points on the final image and generate 2D lines connecting the projections during rendering. We have had poor results with the alternative approach of tracing lines from one volume cube to another to obtain 3D lines and projecting these lines during rendering.
A direction field defines the overall stroke orientations. The direction field gives the stroke direction at each VDP and thus dictates how the hatching will occur. The direction field to be used is determined either by computation or by user interaction.
Related Work in Nonphotorealistic Rendering

Work related to volumetric hatching includes surface hatching, volume data hatching, and more general uses of nonphotorealistic rendering in medical visualization. Although NPR can take many forms, we concentrate on pen-and-ink illustrations.
Surface hatching
Surface hatching in the pen-and-ink style illustrates a 3D surface using strokes instead of colors, with the hatching occurring on the surface.

A crucial problem in surface hatching is defining stroke direction to best illustrate surface shape and features. Although some authors have suggested isoparameter lines, in general, the two directions of principal curvature appear to be the favored approach [1, 2]. The main obstacles to fast surface hatching are silhouette detection and visibility computation. Although many approaches address these problems, none stands out as a definitive solution.

We can apply surface hatching to many surface models, including parametric and implicit surfaces. To date, however, relatively few researchers have applied pen-and-ink illustration to 3D volume data. Treavett and Chen generated pen-and-ink images via volume rendering of 3D medical data sets [3]. They performed 3D drawing, which generates 3D strokes as textures in object space and then projects them on the image plane during rendering, and 2.5D rendering, which forms 2D strokes in 2D image space using information gained from the 3D object during prerendering.
Volume data hatching
Interrante used strokes to illustrate a surface shape within volume data [2]. She defined 3D scan-converted textures by computing directions associated with particles predistributed on the subject; she suggested the directions of principal curvature as the best candidates for stroke directions. Interrante then applied these textures to the volume to increase the opacity during the rendering stage. When displaying a transparent surface, the strokes help enhance surface shape. This technique is useful for visualizing layered isosurfaces in volume data.
NPR in medical visualization
Saito proposed an NPR-related method to preview volume data in real time [4]. He collected sample points uniformly from an isosurface and projected them on the image plane as geometric primitives such as lines. Because the primitives' orientation relies on the isosurface's local geometry, the method is restricted to isosurfaces.

Girshick and colleagues used principal-direction line drawings to show curvature flow over the surface. The technique aims at surfaces represented by 3D volume data or a polygon surface mesh [5].

Lu and colleagues presented an interactive direct-volume illustration system that simulates stipple drawing [6]. They explored several feature-enhancement techniques for creating effective, interactive visualizations of scientific and medical data sets, and introduced a mechanism for generating appropriate point lists for all resolutions.

Researchers have believed for some time that NPR could be useful in medical visualization. Levoy and colleagues proposed NPR in radiation treatment planning [7]. More recently, Interrante and colleagues enhanced transparent skin surfaces with ridge and valley lines [8]. Ebert and Rheingans used volume illustration, basically a feature-enhanced form of volume rendering, to highlight features such as boundaries, silhouettes, and depth and orientation cues [9]. The results were not pen-and-ink illustrations. Other methods have enhanced object contours within the data.
References
1. G. Elber, "Line Art Illustrations of Parametric and Implicit Forms," IEEE Trans. Visualization & Computer Graphics, vol. 4, no. 1, 1998, pp. 71-81.
2. V. Interrante, "Illustrating Surface Shape in Volume Data via Principal Direction-Driven 3D Line Integral Convolution," Proc. Siggraph, ACM Press, 1997, pp. 109-116.
3. S.M.F. Treavett and M. Chen, "Pen-and-Ink Rendering in Volume Visualization," Proc. IEEE Visualization, IEEE CS Press, 2000, pp. 203-210.
4. T. Saito, "Real-Time Previewing for Volume Visualization," Proc. Symp. Volume Visualization, IEEE Press, 1994, pp. 99-106.
5. A. Girshick et al., "Line Direction Matters: An Argument for the Use of Principal Directions in 3D Line Drawings," Proc. NPAR 2000: First Int'l Symp. Nonphotorealistic Animation & Rendering, ACM Press, 2000, pp. 43-52.
6. A. Lu et al., "Nonphotorealistic Volume Rendering Using Stippling Techniques," Proc. IEEE Visualization 2002, IEEE CS Press, 2002, pp. 211-218.
7. M. Levoy et al., "Volume Rendering in Radiation Treatment Planning," Proc. 1st Conf. Visualization in Biomedical Computing, IEEE CS Press, 1990, pp. 4-10.
8. V. Interrante, H. Fuchs, and S. Pizer, "Enhancing Transparent Skin Surfaces with Ridge and Valley Lines," Proc. IEEE Visualization, IEEE CS Press, 1995, pp. 52-59.
9. D. Ebert and P. Rheingans, "Volume Illustration: Nonphotorealistic Rendering of Volume Models," Proc. IEEE Visualization, IEEE CS Press, 2000, pp. 195-202.
The stroke generation module generates the 3D strokes following the directions indicated by the direction field. The illumination defined for 3D strokes filters the generated strokes, and their projections contribute to the final image. This provides visual coherence: if we move the viewpoint, we need only reproject the 3D strokes on a new image plane, thus maintaining consistency between images.
Detecting silhouette points
The silhouette conveys the most important information about a subject's shape, and might be its simplest portrayal. The silhouette depends on the viewpoint and exists only at the subject's visible boundary. We consider only VDPs for silhouette points because medical data volume cubes are quite small, and therefore a set of VDPs represents silhouettes with sufficient accuracy.
To find silhouette points, we first mark the VDPs and volume cubes. VDPs belonging to the subject are in, and those outside the subject are out, based on the data segmentation. We further categorize in VDPs as follows:

  • Boundary point. The VDP is on the subject's boundary; that is, it neighbors an out VDP.
  • Inner point. The VDP's neighbors are all in.

We categorize the volume cubes as follows:

  • Interior cube. All eight VDPs are in.
  • Exterior cube. At least one VDP is out.

Because these categorizations don't change with the viewpoint, we perform them only once.
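To make the bookkeeping concrete, here is a minimal numpy sketch of this one-time classification, assuming a boolean mask `inside` produced by the segmentation and 6-connected VDP neighborhoods; the function name and array layout are illustrative, not the authors' code.

```python
import numpy as np

def classify(inside):
    """One-time classification of VDPs and volume cubes.
    `inside` is a boolean grid: True for VDPs segmented as in."""
    nx, ny, nz = inside.shape
    # Out-of-volume neighbors count as out (pad with False).
    pad = np.pad(inside, 1, constant_values=False)
    core = (slice(1, -1),) * 3
    has_out_neighbor = np.zeros_like(inside)
    for axis in range(3):                  # 6-connected neighborhood
        for step in (-1, 1):
            sl = list(core)
            sl[axis] = slice(1 + step, pad.shape[axis] - 1 + step)
            has_out_neighbor |= ~pad[tuple(sl)]
    boundary = inside & has_out_neighbor   # boundary points
    inner = inside & ~has_out_neighbor     # inner points
    # A cube is interior if all eight corner VDPs are in.
    interior_cube = np.ones((nx - 1, ny - 1, nz - 1), dtype=bool)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                interior_cube &= inside[dx:nx - 1 + dx,
                                        dy:ny - 1 + dy,
                                        dz:nz - 1 + dz]
    exterior_cube = ~interior_cube         # at least one corner is out
    return boundary, inner, interior_cube, exterior_cube
```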
During silhouette detection, we search only among the boundary points in the exterior cubes. Because these form a small fraction of the total VDPs, searching is fast.

To check whether a VDP is a silhouette point, we cast a view line from the viewpoint toward the VDP. If the line pierces any cubes belonging to the subject before it reaches the VDP, the VDP can't be a silhouette point. If it doesn't, we check the two cubes immediately following the VDP along the line, such as cubes A and B after the VDP P in Figure 2. If neither cube belongs to the subject, the VDP is a silhouette point. In Figure 2, Q is a silhouette point because cubes C and D are outside the subject; P is not a silhouette point.

[Figure 2. Finding silhouette points. Because the two cubes (C and D) immediately following Q along the view line are outside the subject, Q is a silhouette point; because cube B is inside the subject, P is not.]

This process ensures that a concave subject is dealt with properly. A view line could hit an interior cube positioned on the farther branch of the concavity; for example, the line through Q strikes interior cube E. The resulting image will show one part of the subject silhouetted against the other part.

In practice, it's more efficient to check the volume cubes along the view line from P to the viewpoint. So, we check the volume cube neighboring P first, then the next cube toward the viewpoint, and so on. This way, we don't have to find the first volume cube along the view line, which is typically quite time consuming.
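The view-line test can be sketched as follows. We assume a boolean grid `subject_cube` marking cubes that belong to the subject (for instance, the interior cubes from the classification above); the fixed-step ray march is an illustrative simplification of an exact cube-by-cube traversal, and the helper names are our own.

```python
import numpy as np

def ray_blocked(blocking, p, viewpoint, step=0.5):
    """March from VDP p toward the viewpoint and report whether the
    view line pierces any blocking cube. `blocking` is a boolean grid
    over cubes."""
    p = np.asarray(p, dtype=float)
    d = np.asarray(viewpoint, dtype=float) - p
    length = np.linalg.norm(d)
    d = d / max(length, 1e-12)
    for k in range(1, int(length / step)):
        cube = np.floor(p + k * step * d).astype(int)
        if (cube >= 0).all() and (cube < blocking.shape).all() \
                and blocking[tuple(cube)]:
            return True
    return False

def is_silhouette_point(subject_cube, p, viewpoint):
    """A boundary VDP p is a silhouette point if the view line is
    unobstructed and the two cubes immediately behind p along the
    line (cubes A and B in Figure 2) lie outside the subject."""
    if ray_blocked(subject_cube, p, viewpoint):
        return False
    p = np.asarray(p, dtype=float)
    away = p - np.asarray(viewpoint, dtype=float)
    away = away / max(np.linalg.norm(away), 1e-12)
    for k in (1, 2):
        cube = np.floor(p + k * away).astype(int)
        in_grid = (cube >= 0).all() and (cube < subject_cube.shape).all()
        if in_grid and subject_cube[tuple(cube)]:
            return False
    return True
```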
Drawing the silhouette
To draw the silhouette, we project the 3D silhouette points on the final image. During projection, we check each point's visibility. Because many of the silhouette points come from the same volume cubes and are connected in the 3D volume, we can form silhouette lines by connecting the points' projections using straight lines.

Because many silhouette points can project close to each other in the image, the projections can be dense in places, and we might create many unnecessary lines. To overcome this problem, we remove some silhouette points in the dense areas before connecting the projections.
We define two thresholds to identify areas in which there are too many projections:

  • A float number, dist, defines a minimum distance between projections. We use the volume cube's size as the unit of measure to keep dist independent of sample size.
  • An integer, Neigh, defines next(P, Neigh), a set of VDPs that are close to silhouette point P.

If Neigh = 1, the set next(P, 1) consists only of P's neighboring VDPs. If Neigh = 2, then next(P, 2) includes, in addition to P's neighbors, the neighbors of the VDPs in next(P, 1). We likewise extend the definition for larger values of Neigh.
Given a fixed dist and Neigh, for each silhouette point P we consider the set of silhouette points that are in next(P, Neigh) and discard the points that project within a distance dist of the projection of P. After removing these points, we join the remaining points using straight lines to create the silhouette. Removing the points can create some gaps in the silhouette. Increasing dist and Neigh removes more unnecessary lines, but creates larger gaps. Typically, dist is between 0 and 1 and Neigh is between 1 and 3. Because silhouette detection and drawing is quick, users can readily experiment with these values.
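A sketch of this thinning pass, under assumed helpers: `project` maps a 3D silhouette point to image coordinates measured in volume-cube units, and `neigh_of(i, n)` returns the indices of the silhouette points lying in next(points[i], n).

```python
import numpy as np

def thin_silhouette_points(points, project, neigh_of, dist=0.6, neigh=2):
    """Drop silhouette points whose projections crowd the image."""
    proj = [np.asarray(project(p), dtype=float) for p in points]
    kept = set(range(len(points)))
    for i in range(len(points)):
        if i not in kept:
            continue
        for j in neigh_of(i, neigh):
            # Discard a neighboring silhouette point whose projection
            # lands within dist of the projection of points[i].
            if j != i and j in kept and \
                    np.linalg.norm(proj[i] - proj[j]) < dist:
                kept.discard(j)
    # The survivors' projections are then joined by straight lines.
    return [points[i] for i in sorted(kept)]
```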
Determining stroke directions
Determining stroke directions relies on both the subject (for instance, to portray muscles, medical artists typically use strokes indicating muscle fiber direction) and the individual, as different artists have their own hatching styles. Hence, designing a general algorithm to automatically define stroke directions for any subject is difficult. Rather, stroke direction decisions should depend on the character of the subjects.
For muscles, stroke orientation must follow the direction of the muscle fibers. In earlier work, we describe a method for detecting muscle fiber orientation from volume data [3]. The process involves quickly estimating an approximate fiber orientation for each VDP and refining it into a more accurate direction. We then associate each VDP inside the muscle volume with a direction indicating the fiber orientation at that point. For muscles, we perform this process only for those VDPs within a prescribed distance of the muscle surface.
For other subjects, such as a human brain, strokes follow the direction of principal curvature, which we calculate from the volume data using Thirion and Gourdon's method [4]. They derived principal curvature formulas that use only partial differentials of the 3D volume; hence, we can compute the principal curvature directly from the volume without extracting a surface. The stroke lies along one of the principal directions associated with the principal curvature.
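To illustrate the idea, here is a minimal numpy sketch of estimating principal curvature directions from volume partials alone. It forms the shape operator in the tangent plane of the local isosurface and takes its eigenvectors; this follows the spirit of Thirion and Gourdon's derivation, not their exact closed-form expressions, and the function name and array layout are our own.

```python
import numpy as np

def principal_directions(vol):
    """Principal curvature directions at every voxel, computed from
    partial derivatives of the scalar volume only."""
    gx, gy, gz = np.gradient(vol.astype(float))
    grad = np.stack([gx, gy, gz], axis=-1)               # (..., 3)
    # Hessian from second partials of the volume.
    H = np.empty(vol.shape + (3, 3))
    for i, gi in enumerate((gx, gy, gz)):
        for j, dgi in enumerate(np.gradient(gi)):
            H[..., i, j] = dgi
    norm = np.linalg.norm(grad, axis=-1, keepdims=True)
    n = grad / np.maximum(norm, 1e-12)                   # unit normals
    # Shape operator restricted to the isosurface tangent plane:
    # S = P H P / |grad|, with projector P = I - n n^T.
    P = np.eye(3) - n[..., :, None] * n[..., None, :]
    S = P @ H @ P / np.maximum(norm[..., None], 1e-12)
    # Eigenvectors of the symmetrized operator; the two eigenvalues of
    # largest magnitude give the principal directions (the third, near
    # zero, points along the surface normal).
    curvatures, directions = np.linalg.eigh(0.5 * (S + np.swapaxes(S, -1, -2)))
    return curvatures, directions
```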
Both methods are very versatile: the principal-curvature method has been widely used to illustrate different subjects, and the fiber-orientation method works on any subject with line textures.
To produce a hatching style for a particular subject, you should adopt an approach tailored to that subject. If this requires introducing a new approach, the rest of the algorithm will be unaffected, as hatching style is independent of the other components.
Producing strokes
In volumetric hatching, unlike surface hatching, interior data make contributions. Some interior strokes are portrayed to improve rendering quality.

In general, to produce a stroke at a VDP, we fit a local surface patch approximating the geometry at the VDP and then intersect that patch with a normal plane following the stroke direction. The intersection of the patch and the plane defines the stroke.
In Figure 3, we use a linear surface patch (blue) to estimate the geometry at VDP P. The patch intersects the plane (green) containing the stroke direction and the gradient at P to produce the stroke. The final form of each stroke is a piecewise succession of straight lines across the patch's tessellated surface.
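As a concrete illustration, here is a numpy sketch of extracting one stroke as the plane/patch intersection. The data layout (a height-field patch over a regular xy grid) matches the sidebar's setup; the zero-crossing collection and the chaining-by-projection at the end are our own simplifications, not the authors' implementation.

```python
import numpy as np

def stroke_from_patch(h, origin, spacing, p, stroke_dir, grad):
    """Intersect a height-field patch with the plane through VDP p
    spanned by the stroke direction and the gradient.

    h      : (ny, nx) patch heights over a regular xy grid
    origin : (x0, y0) of the grid; spacing: grid step
    """
    n = np.cross(stroke_dir, grad)            # normal of cutting plane
    n = n / np.linalg.norm(n)
    ny, nx = h.shape
    xs = origin[0] + spacing * np.arange(nx)
    ys = origin[1] + spacing * np.arange(ny)
    X, Y = np.meshgrid(xs, ys)
    pts = np.stack([X, Y, h], axis=-1)        # patch vertices (ny, nx, 3)
    f = (pts - np.asarray(p, float)) @ n      # signed distance to plane
    segs = []
    # Collect plane crossings on horizontal and vertical grid edges.
    for a, b in (((slice(None), slice(None, -1)), (slice(None), slice(1, None))),
                 ((slice(None, -1), slice(None)), (slice(1, None), slice(None)))):
        fa, fb = f[a], f[b]
        cross = fa * fb < 0
        t = fa[cross] / (fa[cross] - fb[cross])
        segs.append(pts[a][cross] + t[:, None] * (pts[b][cross] - pts[a][cross]))
    pts3 = np.concatenate(segs)
    # Order the crossings along the stroke direction to form a
    # piecewise-linear stroke.
    order = np.argsort(pts3 @ (stroke_dir / np.linalg.norm(stroke_dir)))
    return pts3[order]
```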
In this step, the main work is generating a useful patch. Simply creating an isosurface patch that passes through the VDP gives unsmooth results. To generate a smooth patch around the VDP, we fit the patch from the smoothed volume gradient. The sidebar "Smooth Patches for Generating Strokes" gives technical details.
[Figure 3. Stroke generation. To produce a stroke, we intersect a linear surface patch with the plane containing the stroke direction and the gradient at VDP P.]
Smooth Patches for Generating Strokes
Before generating a smooth patch at a volumetric data point (VDP), we must decide the patch's primary orientation so we can represent the smooth patch in terms of a height field.

The patch's primary orientation is the main direction the patch faces: x, y, or z. We determine the orientation by checking the gradient at the VDP. If the gradient's z component is greater than its x and y components, the patch's primary orientation is z.

Here, we assume the primary orientation is z, but you can apply the procedure equally to x or y orientations. The regular grid of the height field's 2D domain is then in the xy plane.
Creating an isosurface patch
Because we describe the smooth patch as a height field, to create it we must find the discrete height values at grid points G(x, y) in the 2D domain. From these, generating a mesh is straightforward.

We first generate an isosurface patch local to the VDP, using the gradients on the isosurface to estimate its surface normals. Because these gradients are smooth, they provide a good foundation for generating the smooth patch.

At each G(x, y), we compute the isosurface patch height, denoted h_iso(x, y), to produce the 3D points S(x, y, h_iso). We then obtain the normals to the isosurface by calculating the gradient at the points S(x, y, h_iso) using linear interpolation. The normal n associated with grid point G(x, y) is denoted n(x, y).
Fitting a smooth patch

We fit a smooth patch such that the normals on the patch are close to those obtained in the previous step. We regard n(x, y) as the normal of the smooth patch at T(x, y, h_smo), so we can expand the xyz components of n(x, y) as linear combinations of h_smo. We find the height values h_smo for the grid points G(x, y) using least-squares fitting, thus defining the 3D points T(x, y, h_smo). Once we've found all the h_smo(x, y), making the surface patch is simple.
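A least-squares sketch of this step: for a height field z = h(x, y), the unnormalized normal is (-dh/dx, -dh/dy, 1), so matching the patch normals to the target normals n(x, y) reduces to a linear system in the heights. Grid spacing of 1 and the center-pinning constraint are our assumptions, not details from the article.

```python
import numpy as np

def fit_smooth_heights(normals, h_anchor):
    """Least-squares heights h_smo from target normals.

    normals : (ny, nx, 3) unit normals n(x, y) from the isosurface
              patch, primary orientation z (so n_z != 0)
    h_anchor: height pinned at the grid center so the system has a
              unique solution (e.g., the isosurface height at the VDP)
    """
    ny, nx = normals.shape[:2]
    sx = -normals[..., 0] / normals[..., 2]   # target dh/dx
    sy = -normals[..., 1] / normals[..., 2]   # target dh/dy
    idx = np.arange(ny * nx).reshape(ny, nx)
    rows, rhs = [], []
    for j in range(ny):                       # x-direction grid edges
        for i in range(nx - 1):
            r = np.zeros(ny * nx)
            r[idx[j, i + 1]], r[idx[j, i]] = 1.0, -1.0
            rows.append(r); rhs.append(0.5 * (sx[j, i] + sx[j, i + 1]))
    for j in range(ny - 1):                   # y-direction grid edges
        for i in range(nx):
            r = np.zeros(ny * nx)
            r[idx[j + 1, i]], r[idx[j, i]] = 1.0, -1.0
            rows.append(r); rhs.append(0.5 * (sy[j, i] + sy[j + 1, i]))
    r = np.zeros(ny * nx); r[idx[ny // 2, nx // 2]] = 1.0  # pin center
    rows.append(r); rhs.append(h_anchor)
    h, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return h.reshape(ny, nx)
```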
Figure 4 shows volumetric hatching of an image with and without smoothing.

[Figure 4. Volumetric hatching of an image (a) without smoothing and (b) with smoothing.]
Figure 5 shows how we would fit a smooth patch at VDP P. The yellow polygon is the isosurface patch at P, and the green arrows are surface normals generated on the isosurface patch. We fit the smooth patch from these normals. The red grid is the height field domain.

[Figure 5. Fitting a smooth patch. Fitting the yellow isosurface patch gives the blue smooth patch.]

A height field h(x, y) gives a height value for each point (x, y) in a 2D domain. It therefore defines a set of 3D points (x, y, h(x, y)). Obviously, we can easily make a mesh from the height field by joining these 3D points.
Strokes are produced only at VDPs. If users require more strokes to build their desired tones, we can insert more data points into the cubes (via trilinear interpolation) and produce more strokes from these points. If users prefer a lighter tone, they can filter out some of the strokes. Another alternative for tone building is illumination, as described in the next section.

In Figures 3 and 5, computing strokes at P uses 2 × 2 × 2 neighboring cubes. In practice, if you prefer longer strokes, you can use more neighboring cubes. We've often used 4 × 4 × 4.
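For reference, a minimal trilinear interpolation helper of the kind this step needs; the indexing conventions are illustrative.

```python
import numpy as np

def trilinear(vol, p):
    """Trilinearly interpolate a scalar volume at fractional position
    p = (x, y, z). Assumes p lies strictly inside the grid."""
    p = np.asarray(p, dtype=float)
    i0 = np.floor(p).astype(int)              # lower corner of the cube
    t = p - i0                                # fractional offsets in [0, 1)
    x0, y0, z0 = i0
    c = vol[x0:x0 + 2, y0:y0 + 2, z0:z0 + 2]  # the eight surrounding VDPs
    # Blend along x, then y, then z.
    cx = c[0] * (1 - t[0]) + c[1] * t[0]
    cy = cx[0] * (1 - t[1]) + cx[1] * t[1]
    return cy[0] * (1 - t[2]) + cy[1] * t[2]
```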
Rendering
During the rendering stage, the 3D silhouette points and strokes are presented in a 2D final image. This involves several processes: illuminating the strokes, determining the contribution of the strokes to the final image, and drawing the silhouette.
Stroke illumination. Lighting is fundamental to providing a 3D feel to a subject. In pen-and-ink illustration, including more or fewer strokes in an area produces darker or lighter intensities. Thus, adjusting the number of strokes that pass through an area controls the intensity associated with that area.

We apply a volumetric illumination method based in object space; that is, we calculate each volume cube's lighting intensity. We linearly convert each cube's lighting intensity to a number of strokes for the cube. If this number is less than the number of existing strokes, we reduce the number of strokes in the cube until it corresponds to the lighting intensity at the cube.

Because the illumination occurs in object space and results in filtered strokes, we can reuse the strokes even if we reposition the viewpoint, as long as the viewing distance and lighting sources remain approximately unchanged. The sidebar "Calculating Stroke Illumination" provides further details.
VDP contribution. A VDP's contribution to the final image is the projection of its associated stroke. As in volume rendering, exterior VDPs occlude the contributions of interior VDPs. We therefore only consider contributions from the interior VDPs near the subject's surface; that is, those within the user-defined distance depth of the surface. During volume data segmentation, identifying these VDPs, which form a set called the shell, is straightforward. We need strokes only at the VDPs within the shell.

In volume rendering, opacity controls the visibility of internal structures. Likewise, volumetric hatching presents only the data within a certain distance beneath the subject's exterior surface, as controlled by the parameter depth. A proper depth choice lets users portray subject parts that lie just below the surface, but nevertheless influence the subject's appearance, while excluding parts that lie deep inside the subject. Because this part of the process is fast, we can manually adjust the depth value during rendering.

We further classify interior cubes as

  • shell cubes, which have all eight VDPs in the shell, and
  • core cubes, which have at least one VDP not in the shell.

To calculate the contribution of a VDP in the shell, given the viewpoint and image plane position, we again use a view line. If the line doesn't encounter any core cubes before it reaches the VDP, the VDP is visible and its stroke is projected. In Figure 6, the view line toward P hits only the shell cube A before it reaches P, so P is a visible point and its associated stroke contributes to the final image. If the line hits a core cube before it reaches the VDP, the VDP is invisible and doesn't contribute to the final image (for example, VDP Q in Figure 6 is invisible).

[Figure 6. Computing VDP contribution. Because the view line hits only the shell cube A before reaching P, P is a visible point and its associated stroke contributes to the image.]
For speed, we perform the calculation in the reverse direction, from the cube neighboring the VDP toward the viewpoint, as in silhouette detection.
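This visibility test is the same traversal as in the silhouette sketch earlier, only with core cubes as the blockers; reusing that hypothetical ray_blocked helper:

```python
# Reusing the hypothetical ray_blocked() helper from the silhouette
# sketch: a shell VDP contributes only if no core cube blocks the
# view line between the VDP and the viewpoint.
def vdp_visible(core_cube, p, viewpoint):
    return not ray_blocked(core_cube, p, viewpoint)
```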
Experimental results
We've applied volumetric hatching to various sets of medical data, including segmented muscles from Visible Human data, a human brain from a magnetic resonance imaging data set, and part of the human digestive system from a CT scan data set.

Figure 7 gives the results of a silhouette computation applied to a muscle data set; the silhouette lines display the subject in a simple form. The data set is 190 × 162 × 500 pixels. Figure 8 compares our method for creating silhouettes with directly projecting 3D silhouette lines on the final image. Figure 8a shows a result from 3D silhouette lines, while Figure 8b shows the result for our method, with parameters Neigh = 2 and dist = 0.6. Figure 8b looks much better, as Figure 8a has too many unnecessary lines.

[Figure 7. Muscle data presented using silhouettes. The silhouette lines display the basic outline of the subject.]

[Figure 8. Comparison of silhouette generation methods on medical data: (a) 3D silhouette lines and (b) volumetric hatching using silhouette points.]

Figure 9 illustrates the muscles at the front of an upper leg. As we segmented the data, we removed muscles that were not to be displayed. We calculated strokes using 4 × 4 × 4 neighboring cubes and a depth of 16 for the rendering shell, because the effects of surface muscle fibers penetrate a few volume cubes beneath the surface.

[Figure 9. Volumetric hatching of a human leg using different ratios of the lighting intensity to the number of strokes. Because the ratio in (a) is smaller than in (b), (b) has greater contrast.]
Calculating Stroke Illumination
To perform stroke illumination, we first convert the intensity of a volume cube into the number of strokes in the cube. We define cube intensity as the average lighting intensity at the cube's eight VDPs. If we normalize the range of cube intensity values to [0, 1], cube intensity is related to the stroke number by

    StrokesNum = (1 - cube intensity) × ratio + base        (1)

where ratio defines the relation between the intensity and the number of strokes, and base is the ambient tone, which controls the number of strokes at the brightest cube. Increasing ratio gives greater contrast. We typically set base between 0 and 3.
We apply this illumination model to the strokes we've created. The strokes are fairly evenly distributed because we generate a stroke at each VDP. If we obtain the average number of strokes in the volume cubes before illumination and regard this as the number of strokes at the darkest cube (that is, where cube intensity is 0), we calculate ratio as

    ratio = average stroke number - base

The illumination process then removes strokes from the volume cubes in which cube intensity is greater than 0. The number to be removed at a cube is equal to the difference between the number of strokes before illumination and the number of strokes at the cube, which we determine from Equation 1.
If the number of strokes in the volume cube is larger than StrokesNum (overtoned), we remove strokes to reduce the number of strokes at the cube. That is, we select a stroke and remove the segment that lies within the cube. In practice, it doesn't matter which stroke we cut, so we cut them randomly.

If we select a stroke for removal using this method, we check the cubes through which it passes. If any of them are overtoned, we cut the stroke from it, too. Thus, if a stroke is cut from a volume cube, it becomes the first candidate to be cut from other volume cubes. In Figure A, the bold stroke is cut from cube A and also from the overtoned cubes B and C, through which it passes. This lessens the number of strokes to be cut and reduces the proliferation of small, scattered line segments.
When we reposition the viewpoint, we don't need to recompute the illumination if the distance between the subject and the viewpoint remains nearly unchanged. If the subject moves closer, however, we need more strokes. We create extra strokes inside the volume cubes by generating extra data points. If the subject moves away from the observer, we use a smaller ratio to generate a lighter tone.
[Figure A. A 2D illustration of stroke removal.]
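To make the sidebar's model concrete, here is a small Python sketch of Equation 1 plus the preferential cutting rule. The stroke and cube data structures are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import defaultdict

def illuminate(strokes, cube_intensity, ratio, base=1.0):
    """Stroke-based tone control. Each stroke is represented by the
    set of cube ids its segments pass through; removal means dropping
    the segment inside a cube. Returns cube_id -> stroke ids cut."""
    # Target stroke count per cube (Equation 1).
    target = {c: (1.0 - i) * ratio + base
              for c, i in cube_intensity.items()}
    count = defaultdict(int)                   # current strokes per cube
    for cubes in strokes:
        for c in cubes:
            count[c] += 1
    removed = defaultdict(set)
    already_cut = set()                        # strokes cut somewhere
    for c in cube_intensity:
        # Cut strokes while the cube is overtoned; prefer strokes
        # already cut elsewhere, to limit scattered short fragments.
        candidates = [s for s in range(len(strokes)) if c in strokes[s]]
        random.shuffle(candidates)
        candidates.sort(key=lambda s: s not in already_cut)
        k = 0
        while count[c] > target[c] and k < len(candidates):
            s = candidates[k]; k += 1
            if s in removed[c]:
                continue
            removed[c].add(s)
            already_cut.add(s)
            count[c] -= 1
    return removed
```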
In Figure 9a, the ratio of the cube's lighting intensity to the number of strokes in the cube is smaller than in Figure 9b, giving Figure 9b greater contrast.
Figure 10a shows part of the human digestive system, and Figure 10b is a human brain. Because illustrations of these organs rarely use many strokes, we set a small value for the intensity-stroke ratio to reduce the number of strokes, and set base (the ambient tone) to 0. We calculated strokes using principal curvature directions and 2 × 2 × 2 neighboring cubes. Because we needed a light tone and were interested only in VDPs close to the surface, we used a rendering shell depth of 1.

[Figure 10. Volumetric hatching examples: (a) human digestive system from a CT scan data set and (b) human brain from a magnetic resonance imaging data set.]
Table 1 gives the computational times associated with some images in Figures 9 and 10. As the table shows, stroke generation is the most computationally expensive part of the process. Fortunately, because a user can store strokes, subsequent reexamination of the data wouldn't require regenerating them, making viewing much faster.

Table 1. Timing results for volumetric hatching.

  Figure   Data size          Stroke generation (s)   Rendering part 1 (s)   Rendering part 2 (s)
  9a       190 × 162 × 500    305                     52                     5
  9b       190 × 162 × 500    305                     63                     2
  10a      120 × 82 × 300     78                      13                     2
  10b      150 × 102 × 200    82                      15                     2

Rendering consists of two parts: the first corresponds to illumination calculation, and the second to VDP projection. Although illumination calculation is rather time consuming (see the "Calculating Stroke Illumination" sidebar), the results are reusable as long as the viewing distance and lighting sources remain approximately unchanged, in which case the user can skip this part of the process. Part 2, VDP projection, is relatively fast. Although it must be performed each time the viewpoint changes, the rerendering time is acceptable as long as the viewing distance and lighting sources don't change greatly.
Volumetric hatching is also efficient in terms of storage because the images are based purely on the projection of 3D strokes, and thus can be stored in vector form. Table 2 compares the storage requirements of images in vector and raster forms, both compressed and uncompressed. Storing the images in vector form saves a lot of space, even though the raster images are not very large. The space required for images stored in raster form increases with the size of the image, but doesn't change for images stored in vector form.

Table 2. Storage requirements comparison.

  Figure   Size (pixels)   Raster uncompressed (KB)   Raster compressed (KB)   Vector uncompressed (KB)   Vector compressed (KB)
  9a       319 × 783       734                        213                      240                        116
  9b       319 × 783       734                        206                      220                        102
  10a      326 × 444       424                        49                       42                         21
  10b      467 × 317       434                        45                       43                         21
Because volumetric hatching deals directly with volume data, it differs greatly from most existing techniques, which are based on surface hatching. As we mentioned at the start of this article, a straightforward approach to hatching volume data is to generate isosurfaces from the data using marching cubes and then apply standard surface-hatching techniques. Figure 11 compares this method with our volumetric hatching. Hatching on the isosurface generated a poor result (Figure 11a) compared with volumetric hatching (Figure 11b). We applied the same lighting and stroke illumination to the surface and volume. Stroke illumination failed to generate a good result for the strokes embedded on the surface. The result from volumetric hatching is more impressive because the volumetric strokes (including those underneath the surface) better describe the subject.

[Figure 11. Comparison of (a) surface hatching with (b) volumetric hatching. Volumetric strokes better describe the subject, resulting in a more defined image.]
Conclusions and future work

The silhouette computation technique still requires improvement. The current method depends too much on the sample distance of the volume data. Because we choose the silhouette points in a discrete space, errors can't be ignored if the sample distance increases.
To date, the images we've produced have been static. A possibility for future work is to consider visual coherence, particularly in animated sequences. Because we treat illuminated strokes as 3D objects, the techniques have a built-in visual coherence. If the viewpoint moves much closer, however, we will have to generate more strokes in the focused area to retain the required detail.

Another limitation is that volumetric hatching works only with segmented volume data. Because line strokes are designed to indicate a subject's shape, we must identify the subjects before hatching can occur.

Pen-and-ink illustration using line strokes is just one of many NPR styles used in medical illustrations and books. Thus, our future work will focus on a more general approach incorporating many NPR styles.
Acknowledgments
The European Commission, within the MultiMod pro-
ject no. IST-2000-28377 and the Chinese Natural Sci-
ence Foundation, award no. 60003009, supported the
work presented in this article.
References
1. S.M.F. Treavett and M. Chen, "Pen-and-Ink Rendering in Volume Visualization," Proc. IEEE Visualization, IEEE CS Press, 2000, pp. 203-210.
2. W.E. Lorensen and H.E. Cline, "Marching Cubes: A High Resolution 3D Surface Construction Algorithm," Computer Graphics, vol. 21, no. 4, 1987, pp. 163-169.
3. F. Dong, G.J. Clapworthy, and M. Krokos, "Volume Rendering of Fine Details Within Medical Data," Proc. IEEE Visualization, IEEE CS Press, 2001, pp. 387-394.
4. J.P. Thirion and A. Gourdon, "Computing the Differential Characteristics of Isointensity Surfaces," Computer Vision and Image Understanding, vol. 61, no. 2, 1995, pp. 190-202.
Feng Dong is a research fellow in computer graphics in the Department of Computer and Information Sciences, De Montfort University, UK. His research interests include fundamental computer graphics algorithms, medical visualization, volume rendering, human modeling, and virtual reality. Dong received a PhD in computer science from Zhejiang University, China. He is a member of the UK Virtual Reality Special Interest Group (VRSIG).

Gordon J. Clapworthy is a professor of computer graphics in the Department of Computer and Information Sciences, De Montfort University, UK. His research interests include medical visualization, computer animation, biomechanics, virtual reality, surface modeling, and fundamental computer graphics algorithms. Clapworthy received a PhD in aeronautical engineering from the University of London. He is a member of the ACM, ACM Siggraph, Eurographics, and the UK-VRSIG, and is secretary of the British Chapter of the ACM.

Hai Lin is a research fellow in computer graphics in the Department of Computer and Information Sciences, De Montfort University, UK. His research interests include medical visualization, volume rendering, and virtual reality. Lin received a PhD in computer science from Zhejiang University, China.

Meleagros A. Krokos is a research fellow in computer graphics in the Department of Computer and Information Sciences, De Montfort University, UK. His research interests include computer-aided geometric modeling of curves and surfaces, medical visualization, and virtual reality. Krokos was educated at the University of London. He is a member of the IEEE Computer Society, ACM Siggraph, and the UK-VRSIG.
Readers may contact Feng Dong at the Dept. of Computer and Information Sciences, De Montfort Univ., UK, MK7 6HP; fdong@dmu.ac.uk.

For further information on this or any other computing topic, please visit our Digital Library at http://computer.org/publications/dlib.
Concerns the development of non-photorealistic rendering techniques for volume visualisation. In particular, we present two pen-and-ink rendering methods, a 3D method based on non-photorealistic solid textures, and a 2<sup>+</sup>D method that involves two rendering phases in the object space and the image space respectively. As both techniques utilize volume- and image-based data representations, they can be built upon a traditional volume rendering pipeline, and can be integrated with the photorealistic methods available in such a pipeline. We demonstrate that such an integration facilitates an effective mechanism for enhancing visualisation and its interpretation.