Creating a high-resolution spatial/symbolic model
of the inner organs based on the Visible Human
Andreas Pommert, Karl Heinz Höhne, Bernhard Pflesser,
Ernst Richter, Martin Riemer, Thomas Schiemann,
Rainer Schubert, Udo Schumacher, Ulf Tiede
Institute of Mathematics and Computer Science in Medicine (IMDM)
University Hospital Hamburg-Eppendorf, Hamburg, Germany
Dept. of Pediatric Radiology
University Hospital Hamburg-Eppendorf, Hamburg, Germany
Institute of Anatomy
University Hospital Hamburg-Eppendorf, Hamburg, Germany
Abstract
Computerized three-dimensional models of the human body, based on the Visible Human
Project of the National Library of Medicine, so far do not reflect the rich anatomical de-
tail of the original cross-sectional images. In this paper, a spatial/symbolic model of the
inner organs is developed, which is based on more than 1000 cryosections and congruent
fresh and frozen CT images of the male Visible Human. The spatial description is created
using color-space segmentation, graphic modeling, and a matched volume visualization
with subvoxel resolution. It is linked to a symbolic knowledge base, providing an ontology
of anatomical terms. With over 650 three-dimensional anatomical constituents, this model
offers an unsurpassed photorealistic presentation and level of detail. A three-dimensional
atlas of anatomy and radiology based on this model is available as a PC-based program.
Key words: Visible Human, three-dimensional body model, anatomical atlas, color-space
segmentation, volume visualization
1 Introduction
While in classical medicine, knowledge about the human body is represented in
books and atlases, present-day computer science allows for new, more powerful and
Email address: pommert@uke.uni-hamburg.de (Andreas Pommert).
Article published in Med. Image Anal. 5 (3), 221-228, 2001
versatile computer-based representations of knowledge. Their most simple man-
ifestations are multimedia CD-ROMs containing collections of classical pictures
and text, which may be browsed arbitrarily or according to various criteria. Al-
though computerized, such media still follow the old paradigm of text printed on
pages accompanied by pictures. This genre includes impressive atlases of cross-
sectional anatomy, notably from the photographic cross-sections of the Visible Hu-
man Project (Ackerman, 1991; Spitzer et al., 1996).
In the past years, however, it has been shown that spatial knowledge, especially
about the structure of the human body, may be much more efficiently represented
by computerized three-dimensional models (Höhne et al., 1995). These can be
constructed from cross-sectional images generated by computer tomography (CT),
magnetic resonance imaging (MRI), or histologic cryosectioning, as in the case of
the Visible Human Project. Such models may be used interactively on a computer
screen or in virtual reality environments. If such models are connected to a knowl-
edge base of descriptive information, they can even be interrogated or disassembled
by addressing names of organs (Höhne et al., 1995; Brinkley et al., 1999; Golland
et al., 1999). They can thus be regarded as a “self-explaining body”.
Until now, the Visible Human Project has not reported three-dimensional models
that reflect the rich anatomical detail of the original cross-sectional images. This
is largely due to the fact that, for the majority of anatomical objects contained in
the data, the cross-sectional images could not be converted into a set of coherent
realistic surfaces. If we succeed in converting all the detail into a 3D model, we
gain an unsurpassed representation of human structure that opens new possibilities
for learning anatomy and simulating interventions or radiological examinations.
2 Earlier Work
Building a comprehensive model of the inner organs of the Visible Human requires both a spatial description consisting of three-dimensional objects, which are displayed using methods of volume visualization, and a linked symbolic description of relevant anatomical terms and their relations.
In general, volume visualization may or may not include a segmentation step. In
volume rendering, transparency values are assigned to the individual voxels ac-
cording to the intensity values and changes at the object borders (Levoy, 1988). In
the case of the Visible Human, this method yields semitransparent views, which
are suitable e.g. for visualization of the outer surface and the musculoskeletal sys-
tem (Stewart et al., 1996; Tsiaras, 1997). This way, impressive animations could
be created (Gagvani and Silver, 2000; Tsiaras, 2000). It fails, however, to display
internal structures properly. In addition, organ borders are not explicitly indicated,
thus making the removal or exclusive display of an organ impossible.
Segmentation, i. e. the exact determination of the surface location of an organ, is
therefore crucial for building a realistic model. So far, fully automatic segmentation using methods of computer vision is suitable only for very specialized application areas, and could not be used to build an extensive model of the human
body. The brute force approach to segmentation is manual outlining of objects on
the cross-sections (Mullick and Nguyen, 1996; Seymour and Kriebel, 1998). Be-
sides the fact that this procedure is tedious and very time consuming, it is largely
observer-dependent and, even more important, does not yield exact and continuous
surfaces. Furthermore, despite the high resolution of the dataset, important details
such as nerves and small blood vessels cannot be identified clearly, because they are too small and show too little contrast.
So far, no symbolic description of the inner organs which is suitable for our pur-
poses is available. A general discussion of the problems arising, focusing on the
thorax, may be found elsewhere (Rosse et al., 1998).
3 Methods and Materials
We therefore aimed at a method that yields surfaces for the segmentable organs
that are as exact as possible and textured with their original color. In order to arrive
at a complete model, we decided to model non-segmentable objects like nerves
and small blood vessels artificially on the basis of landmarks present in the image
volume. Even though none of the methods presented here is entirely new, building
a complex model required a number of substantial improvements.
3.1 Data
The original dataset of the male Visible Human consists of 1871 photographic
cross-sections with a slice distance of 1 mm and a spatial resolution of 0.33 mm
(Figure 1, left). For reasons of data storage and computing capacity, the resolution of the cross-sections was reduced to 1 mm by averaging 3 × 3 pixels. From 1049 such slices, an image volume of 573 × 330 × 1049 voxels of 1 mm³ size was composed, where each voxel is represented by a set of red, green and blue intensities (RGB-tuple).
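As a rough illustration of this preprocessing step, the following sketch (our own, not the authors' code) averages 3 × 3 pixel blocks of each RGB cross-section and stacks the reduced slices into one volume; the array shapes and function names are assumptions.

```python
import numpy as np

def downsample_3x3(slice_rgb):
    """Average non-overlapping 3 x 3 pixel blocks of one RGB cross-section,
    reducing the in-plane resolution from 0.33 mm to about 1 mm."""
    h, w, _ = slice_rgb.shape
    h, w = h - h % 3, w - w % 3                      # crop to a multiple of 3
    blocks = slice_rgb[:h, :w].reshape(h // 3, 3, w // 3, 3, 3)
    return blocks.mean(axis=(1, 3)).astype(np.uint8)

def build_rgb_volume(slices):
    """Stack downsampled cross-sections into an image volume in which every
    voxel is an RGB-tuple (resulting shape: x, y, z, 3)."""
    reduced = [downsample_3x3(s) for s in slices]
    return np.stack(reduced, axis=2)
```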
The Visible Human dataset also includes two sets of computer tomographic images
of 1 mm slice distance, one taken from the fresh, the other (like the photographic
one) from the frozen cadaver. Both were transformed into an image volume con-
gruent with the photographic one, using an interactive, landmark-based registration
(Schiemann et al., 1994). Since the frozen body was cut into four large blocks be-
fore image acquisition, all these parts had to be aligned individually, leaving some
noticeable gaps in the data volume.
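The registration itself was interactive; purely to illustrate the landmark-based idea, a least-squares rigid fit between corresponding landmark points could look as follows (our sketch using the standard SVD/Kabsch solution; the function name is ours).

```python
import numpy as np

def rigid_landmark_fit(src, dst):
    """Least-squares rigid transform (rotation R, translation t) that maps the
    landmark points 'src' onto 'dst'; both are (N, 3) arrays of corresponding
    points, e.g. picked in the CT and in the photographic volume."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)              # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t                                       # maps x to R @ x + t
```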
Fig. 1. Left: Photographic cross-section of the abdomen of the male Visible Human. Right:
Parameterized ellipsoids in color-space, used for classification of various tissue types in
the abdomen. Many objects show similar colors, resulting in overlapping ellipsoids.
3.2 Segmentation
The image volume thus created was segmented with an interactive tool, based on
classification in color-space (Schiemann et al., 1997). It can be summarized as fol-
lows: On one or several cross-sections, an expert marks a typical region of the organ
under consideration. All voxels in the volume with similar RGB-tuples are then col-
lected by the program and shown as a painted three-dimensional mask. This mask
usually needs to be refined by repeating this procedure in order to discriminate the
target organ from the surrounding structures more clearly.
A cluster thus defined in color-space usually has an ellipsoidal shape, due to the cor-
relation of the color components. Since a set of tuples is difficult to handle during
subsequent visualization, this cluster is approximated by a parameterized ellipsoid,
which is described by its center and three axis vectors. In general, there are other regions present in the volume which also match this color-space description. If they are not connected to the target organ, it can easily be isolated by a 3D connected component analysis; otherwise, borders are manually sculpted using a volume editor.
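The following sketch illustrates the two steps just described for one organ: testing RGB-tuples against a parameterized color-space ellipsoid and isolating the target by 3D connected component analysis (our own illustration, not the original tool; the seed voxel and the axis-vector representation are assumptions).

```python
import numpy as np
from scipy import ndimage

def inside_ellipsoid(rgb, center, axes):
    """True where RGB-tuples lie inside the color-space ellipsoid given by its
    center and three (non-unit) axis vectors; the length of each axis vector
    encodes the extent of the ellipsoid in that direction."""
    d = rgb - center
    coeff = np.stack([d @ a / (a @ a) for a in axes], axis=-1)
    return (coeff ** 2).sum(axis=-1) <= 1.0

def segment_organ(volume_rgb, center, axes, seed):
    """Collect all voxels matching the ellipsoid, then keep only the connected
    component containing a seed voxel known to lie inside the target organ."""
    mask = inside_ellipsoid(volume_rgb, center, axes)
    components, _ = ndimage.label(mask)
    return components == components[seed]
```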
The result of this procedure is a description of an object in terms of an ellipsoid
in color-space and a set of voxels, which are marked by object membership la-
bels. Some of the ellipsoids defined for segmentation of the abdomen are shown in
Figure 1 (right). As can be seen, there are anatomical constituents like the intes-
tine which could not be described using one ellipsoid only; in this case, actually
seven ellipsoids were required. On the other hand, the same ellipsoid may be valid
for (parts of) various anatomical constituents, such as small intestine and colon, or
even for hundreds of muscles.
As a general strategy, we applied our segmentation procedure going from simple to difficult tasks. This way, borders already defined could be used to facilitate segmen-
tation of other objects. As a first step, several tissue classes such as fat, muscles,
cartilage etc. were defined, for which the ellipsoids could be easily determined
within a few minutes. For segmentation of bone, it proved easier to use the frozen
CT dataset, applying a threshold value.
Since many objects show similar colors, the resulting ellipsoids are often overlap-
ping (Figure 1, right). Therefore, some regions such as the anterior parts of the lung
or the pericardium could not be segmented this way. In the case of the lung, the miss-
ing parts could be determined using the frozen CT dataset and a threshold. For the
pericardium and similar cases, the volume editor was used.
3.3 Graphic modeling
For several small constituents such as nerves and blood vessels, which were con-
sidered essential for a comprehensive anatomical model, our color-space segmen-
tation proved impossible. As regards nerves, this is mostly due to very low contrast
between nervous and fat tissues, while many small arteries are collapsed as a post-
mortem artifact. Both problems also appear for the full-resolution data.
For these cases, we developed a tube editor which allows us to include tube-like
structures into the model (Figure 2). Ball-shaped markers of variable diameter are
imposed by an expert onto the landmarks still visible on the cross-sections or on the
3D image. These markers are subsequently automatically connected using Over-
hauser splines (Yamaguchi, 1988). If one of the markers is moved, these splines
will cause only local changes, which makes them easy to handle. Unlike the seg-
mented objects, which are represented as sets of voxels, objects modeled with the
tube editor are represented as polygon surfaces.
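As an illustration of how the marker spheres might be connected (our sketch; the paper cites Overhauser splines, for which the Catmull-Rom formulation below is one common local variant), each segment is interpolated from four neighbouring markers, so moving a marker only changes nearby segments.

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Local cubic (Overhauser/Catmull-Rom style) interpolation between p1 and
    p2; p0 and p3 are the neighbouring control points."""
    t2, t3 = t * t, t * t * t
    return 0.5 * (2 * p1 + (p2 - p0) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                  + (3 * p1 - 3 * p2 + p3 - p0) * t3)

def tube_centerline(markers, samples_per_segment=10):
    """markers: (N, 4) array of ball markers (x, y, z, radius) placed on the
    visible landmarks. Returns densely interpolated centers and radii, from
    which a polygonal tube surface can be built."""
    m = np.asarray(markers, dtype=float)
    m = np.vstack([m[:1], m, m[-1:]])            # repeat end points
    samples = []
    for i in range(1, len(m) - 2):
        for t in np.linspace(0.0, 1.0, samples_per_segment, endpoint=False):
            samples.append(catmull_rom(m[i - 1], m[i], m[i + 1], m[i + 2], t))
    samples.append(m[-2])                         # include the last marker
    return np.asarray(samples)
```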
Fig. 2. Small nerves or arteries which could not be segmented were interactively modeled
using a tube editor. Tubes are defined by placing spheres of varying diameter into the
volume, which are connected by interpolating splines.
3.4 Volume visualization
The volume visualization algorithm we developed renders surfaces directly from the volume data, using a ray casting approach (Tiede et al., 1998). Local surface texture (color) and inclination, as needed for surface shading, are calculated from the RGB-tuples at the segmented border.
A decisive quality improvement is achieved by determining the surface positions
with subvoxel resolution. This is done by considering both the ellipsoids (or thresh-
olds, for CT) and the object membership labels. If a surface was created using la-
bels only, it would appear blocky, especially when zooming into the scene. On the
other hand, if only the ellipsoids were used, objects usually could not be identified
without ambiguity.
In order to avoid these problems, ellipsoids and labels are combined using a color-
driven algorithm (Schiemann et al., 1997; Tiede et al., 1998). Depending on the
RGB-tuple found at a sampling point on a viewing ray, all ellipsoids enclosing
this tuple in color-space are collected, defining a set of “object candidates”. In a
second step, it is tested whether a matching object label is present in the vicinity
of the sampling point. In that case, an object has been found. Its subvoxel surface
position is determined by interpolating the color at the sampling point (inside the
ellipsoid) and the color at the previous sampling point on the viewing ray (outside
the ellipsoid), such that the color at the surface is representing the object border (on
the surface of the ellipsoid). Since this approach considers colors (or intensities,for
CT) before labels, a smooth, continuous surface is obtained, which is not limited
by voxel size.
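A highly simplified per-ray sketch of this decision is given below (our own illustration, not the original implementation): the color at a sampling point selects candidate ellipsoids, a matching label in the vicinity confirms the object, and the surface position is found between the last outside and first inside sample by bisecting on the interpolated color.

```python
import numpy as np

def boundary_fraction(c_out, c_in, inside, iters=8):
    """Bisect on the color interpolated between an outside and an inside
    sample to estimate where the ray crosses the ellipsoid surface."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if inside((1 - mid) * c_out + mid * c_in):
            hi = mid
        else:
            lo = mid
    return hi

def first_surface_hit(colors, positions, labels_nearby, objects):
    """Walk along the pre-sampled points of one viewing ray.

    colors, positions : per-sample RGB-tuple and 3D position
    labels_nearby     : per-sample set of object labels found in its vicinity
    objects           : dict name -> (inside(color) test, object label)
    Returns the first object hit and its subvoxel surface position.
    """
    for i in range(1, len(colors)):
        for name, (inside, label) in objects.items():
            if inside(colors[i]) and label in labels_nearby[i]:
                f = boundary_fraction(colors[i - 1], colors[i], inside)
                pos = (1 - f) * positions[i - 1] + f * positions[i]
                return name, pos
    return None
```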
The objects modeled with the tube editor are visualized with standard computer
graphics methods within the context of the segmented objects. The visualization
program, an extended version of the VOXEL-MAN system (Höhne et al., 1995),
runs on Linux workstations. Because of the size and resolution of the model, com-
putation of a single image may take several minutes, even on a high-end worksta-
tion.
3.5 Knowledge modeling
While segmentation and graphic modeling provide a spatial description of anatom-
ical objects, a comprehensive model also requires a linked symbolic description
regarding anatomical terms and their relations. For this purpose, we developed a
knowledge base system, using a semantic network approach (Pommert et al., 1994; Höhne et al., 1995). Among others, an object is described by:
- names (preferred terms, synonyms, colloquial terms) in various languages
- pointers to related medical information (texts, histological images, references, etc.)
- segmentation and visualization parameters (ellipsoid or threshold, object label, shading method, etc.)
For choosing anatomical terms, we built on standardized nomenclature wherever
available (Federative Committee on Anatomical Terminology, 1998).
The knowledge base describes not only elementary parts found in the spatial model
(e.g. left rib 3), but also compositions of these objects (e.g. true ribs, ribs, thoracic
skeleton, thoracic wall, body wall, body), thus building a part hierarchy. This ontol-
ogy is composed of several subnets, modeling various “views” commonly used in
anatomy. For example, the kidneys can be seen according to structural or functional criteria:
- regional anatomy: in this view, the kidneys are shown as part of the abdominal viscera
- systemic anatomy: in this view, the kidneys are shown as part of the urogenital system
- relation to peritoneum: in this view, the kidneys are shown as part of the primary retroperitoneal organs.
Views are represented as attributes of relations. Besides the “part of” relation type,
our model also contains a “branching from” type, modeling the arterial blood flow.
As was pointed out earlier, an anatomical constituent may be a combination of sev-
eral segmented objects, each with an individual name, ellipsoid, and object label.
In order to hide these rather technical objects from a user, a relation type "hidden part of" was introduced, which extends the part hierarchy. For a user, an
anatomical constituent constructed of several hidden parts appears as one single
entity.
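To make this structure concrete, the following toy sketch (our own; the object names, attributes, and API are invented for illustration) stores objects with their attributes and typed, view-annotated relations, and resolves "part of" queries while skipping "hidden part of" links by default.

```python
from collections import defaultdict

class KnowledgeBase:
    """Toy semantic network: objects carry attributes; relations are typed
    ('part of', 'branching from', 'hidden part of') and annotated with a view."""

    def __init__(self):
        self.attributes = defaultdict(dict)   # object name -> {attribute: value}
        self.relations = []                    # (child, relation, parent, view)

    def add_object(self, name, **attrs):
        self.attributes[name].update(attrs)

    def relate(self, child, relation, parent, view=None):
        self.relations.append((child, relation, parent, view))

    def parts_of(self, parent, view=None, include_hidden=False):
        """Direct constituents of 'parent', optionally restricted to one view;
        'hidden part of' links are skipped unless explicitly requested."""
        wanted = {"part of"} | ({"hidden part of"} if include_hidden else set())
        return [c for c, rel, p, v in self.relations
                if p == parent and rel in wanted
                and (view is None or v == view)]

kb = KnowledgeBase()
kb.add_object("left kidney", label=42, ellipsoid="kidney_rgb", shading="Phong")
kb.relate("left kidney", "part of", "abdominal viscera", view="regional anatomy")
kb.relate("left kidney", "part of", "urogenital system", view="systemic anatomy")
kb.relate("renal artery", "branching from", "abdominal aorta")
print(kb.parts_of("abdominal viscera", view="regional anatomy"))
```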
4 Results
Using the methods described above, we built a model of the inner organs of the
male Visible Human. It contains more than 650 three-dimensional anatomical constituents and more than 2000 relations between them. The size of the segmented anatomical constituents varies between 3.8 million voxels (or mm³, equivalent to 3.8 liters) for visceral fat and 124 voxels for the cystic duct. Preparation of the model using the described methods involved up to 10 people and required about 5 man years.
Figure 3 gives an impression of image quality and the level of detail (see also the
movie in the electronic annex - available via www.elsevier.com/locate/media).
Fig. 3. The model of the inner organs contains more than 650 anatomical constituents, with
a spatial resolution of 1 mm³. It can be viewed from any direction, cuts may be placed
in any number and direction, and objects may be removed or added. Annotations may be
called by mouse click.
Since the model is volume-based, cut planes, which can be placed in any number
and direction, show the texture of the original photographic images and thus look
realistic. This virtual dissection capability not only allows an interactive dissection
for learning purposes, but can also be used for the rehearsal of a surgical procedure.
In addition, the concept of a "self-explaining body" allows us to inquire about com-
plex anatomical facts. The more traditional way of annotating structures of interest
is demonstrated within the user-specified scene in Figure 3. These annotations can
be obtained simply by pointing and clicking with the mouse on the structure of
interest. Likewise, objects may be painted. Pressing another button of the mouse
will call several popup menus, which provide structured knowledge about anatomy
and function (Figure 4). Such information is available because every voxel, and
therefore any visible point of any user-created 3D scene, is linked to the knowledge
base.
Vice versa, the user may navigate through the contents of the knowledge base, go-
ing to more general or more specific terms in systemic or regional part hierarchies.
Images may be composed by selecting terms from the knowledge base (Figure 5).
Fig. 4. Exploring the semantic network behind the spatial model. The user has clicked onto a blood vessel and a nerve and received information about systemic (red) and regional (blue) anatomy.

Fig. 5. Visualization of various terms, selected from the knowledge base. Left to right: cardiovascular system; nervous system (with skeleton and iliopsoas muscles); thoracic organs; abdominal viscera.

A special feature of the model is the possibility of simulating radiological examinations. Since the absorption values for every voxel are available in the original tomographic data, artificial X-ray images can be computed from any direction (Figure 6, left; see also the movie in the electronic annex). Based on the information in the model, both the contributing anatomical structures and the extent of their contribution to the final absorption can be calculated. Similarly, the information present in computer tomographic images can be clarified by presenting them in the corresponding context of 3D anatomy (Figure 6, right). For an improved spatial impression, stereoscopic views can also be created.
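As a minimal illustration of this simulation (our sketch, not the VOXEL-MAN implementation; the conversion from CT numbers to attenuation and the parallel-ray geometry are simplifying assumptions), absorption is integrated along rays and split per labeled object.

```python
import numpy as np

def simulated_xray(ct_hu, labels, axis=1):
    """Parallel-projection X-ray simulation from a CT volume in Hounsfield
    units, with per-object absorption bookkeeping.

    ct_hu  : 3D array of CT numbers, congruent with the labeled model
    labels : 3D array of object membership labels
    axis   : projection direction
    """
    mu_water = 0.02                                   # 1/mm, illustrative value
    mu = np.clip(mu_water * (1.0 + ct_hu / 1000.0), 0.0, None)
    path = mu.sum(axis=axis)                          # line integral per ray
    image = np.exp(-path)                             # transmitted intensity

    # share of the total absorption contributed by each anatomical object
    contributions = {obj: np.where(labels == obj, mu, 0.0).sum(axis=axis)
                     for obj in np.unique(labels)}
    return image, contributions
```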
5 Conclusions
In this paper, we presented an approach for creating a high-resolution model of
the inner organs, based on the Visible Human data. The following features of this
model represent innovations:
- Because of the exact color-space segmentation and the matched visualization method, the visual impression is one of unsurpassed realism.
- There is, to date, no computer model of the inner organs that contains and describes so many three-dimensional anatomical constituents.
- The model is space-filling, i.e. any voxel is labeled as an element of a three-dimensional object.
- The integrated formal organization of spatial and symbolic information allows a virtually unlimited number of ways of using the model.
Fig. 6. Different viewing modes such as X-ray imaging (left) or computer tomography
(right) may be chosen from any direction and for any part of the model.
The model is a general knowledge representation of gross anatomy, from which all
classical representations (pictures, movies, solid models) may be derived via mouse
click. The versatility of the approach makes it suitable for anatomy and radiology
teaching as well as for simulation of interventional procedures. While the general
principle was reported earlier (Höhne et al., 1995), the model we describe is the first to offer sufficient detail and comprehensiveness to serve these purposes seriously.
A three-dimensional atlas of anatomy and radiology based on this model, called
VOXEL-MAN 3D-Navigator: Inner Organs, is available as a PC-based program
(H¨ohne et al., 2000).
Yet there are still improvements to be made. First of all, from an anatomist’s point
of view, an even more detailed segmentation would be desirable for many applica-
tions. Currently, improvements are under way. A more serious limitation is the fact
that the data is derived from one single individual. The inter-individual variability
of organ shape and topology in space and time is thus not yet part of the model.
Inclusion of variability into three-dimensional models is a difficult problem not yet
generally solved. So far, most progress has been achieved for 3D atlases of the brain
(Mazziotta et al., 1995; Styner and Gerig, 2001).
However, the current model should be an excellent basis for further developments.
One such development is the inclusion of physiology, e. g. the modeling of blood
flow or propagation of electrical fields throughout the body (Spitzer and Whit-
lock, 1998). Applications such as the computation of body surface potential maps
(Sachse et al., 2000) should profit from the increased level of detail. Furthermore,
because of the more detailed characterization of tissues, a more realistic surgi-
cal simulation involving cutting (Pflesser et al., 2000) and soft tissue deformation
(Cotin et al., 1999) can be achieved. This approach is thus an important, albeit early
step towards computer models that not only look real, but also act like a real body.
Acknowledgements
We thank Victor Spitzer and David Whitlock, University of Colorado, and Michael
Ackerman, National Library of Medicine (US), for providing the Visible Human
dataset. We are also grateful to Jochen Dormeier, Jan Freudenberg, Sebastian Gehrmann, Stefan Noster, and Norman von Sternberg-Gospos, who substantially contributed to
the segmentation and modeling work. The tube editor was implemented by Klaus
Rheinwald. The movie in the electronic annex was produced by Andreas Petersik.
The knowledge modeling work was supported by the German Research Council
(DFG) under grant number Ho 899/4-1. An earlier version of this work was pre-
sented at The Third Visible Human Project Conference, Bethesda, MD, October
2000.
References
Ackerman, M. J., 1991. Viewpoint: The Visible Human Project. J. Biocommun. 18, 14.
Brinkley, J. F., Wong, B. A., Hinshaw, K. P., Rosse, C., 1999. Design of an anatomy
information system. IEEE Comput. Graphics Appl. 19 (3), 38–48.
Cotin, S., Delingette, H., Ayache, N., 1999. Real-time elastic deformations of
soft tissues for surgery simulation. IEEE Trans. Visualization Comput. Graph-
ics 5 (1), 62–73.
Federative Committee on Anatomical Terminology (Ed.), 1998. Terminologia
Anatomica: International Anatomical Terminology. Thieme, Stuttgart.
Gagvani, N., Silver, D., 2000. Animating the Visible Human Dataset (VHD). In:
Banvard, R. A. (Ed.), The Third Visible Human Project Conference Proceedings.
National Library of Medicine (US), Office of High Performance Computing and
Communications, Bethesda, MD, (CD-ROM, ISSN 1524-9008).
Golland, P., Kikinis, R., Halle, M., Umans, C., Grimson, W. E. L., Shenton, M. E.,
Richolt, J. A., 1999. AnatomyBrowser: A novel approach to visualization and
integration of medical information. Comput. Aided Surg. 4 (3), 129–143.
Höhne, K. H., Pflesser, B., Pommert, A., Priesmeyer, K., Riemer, M., Schiemann,
T., Schubert, R., Tiede, U., Frederking, H., Gehrmann, S., Noster, S., Schu-
macher, U., 2000. VOXEL-MAN 3D Navigator: Inner Organs. Regional, Sys-
temic and Radiological Anatomy. Springer-Verlag Electronic Media, Heidelberg,
(3 CD-ROMs, ISBN 3-540-14759-4).
Höhne, K. H., Pflesser, B., Pommert, A., Riemer, M., Schiemann, T., Schubert, R.,
Tiede, U., 1995. A new representation of knowledge concerning human anatomy
and function. Nat. Med. 1 (6), 506–511.
Levoy, M., 1988. Display of surfaces from volume data. IEEE Comput. Graphics
Appl. 8 (3), 29–37.
Mazziotta, J. C., Toga, A. W., Evans, A. C., Fox, P., Lancaster, J., 1995. A proba-
bilistic atlas of the human brain: Theory and rationale for its development. Neu-
roImage 2 (2), 89–101.
Mullick, R., Nguyen, H. T., 1996. Visualization and labeling of the Visible Human
dataset: Challenges and resolves. In: Höhne, K. H., Kikinis, R. (Eds.), Visualiza-
tion in Biomedical Computing, Proc. VBC ’96. Vol. 1131 of Lecture Notes in
Computer Science. Springer-Verlag, Berlin, pp. 75–80.
Pflesser, B., Tiede, U., Höhne, K. H., Leuwer, R., 2000. Volume based planning and
rehearsal of surgical interventions. In: Lemke, H. U., Vannier, M. W., Inamura,
K., Farman, A. G., Doi, K. (Eds.), Computer Assisted Radiology and Surgery,
Proc. CARS 2000. Vol. 1214 of Excerpta Medica International Congress Series.
Elsevier, Amsterdam, pp. 607–612.
Pommert, A., Schubert, R., Riemer, M., Schiemann, T., Tiede, U., Höhne, K. H.,
1994. Symbolic modeling of human anatomy for visualization and simulation.
In: Robb, R. A. (Ed.), Visualization in Biomedical Computing 1994, Proc. SPIE
2359. Rochester, MN, pp. 412–423.
Rosse, C., Mejino, J., Modayur, B., Jakobovits, R., Hinshaw, K., Brinkley, J. F.,
1998. Motivation and organizational principles for anatomical knowledge repre-
sentation: The Digital Anatomist symbolic knowledge base. J. Am. Med. Inform.
Assoc. 5 (1), 17–40.
Sachse, F. B., Werner, C. D., Meyer-Waarden, K., Dössel, O., 2000. Development of a human body model for numerical calculation of electrical fields. Comput.
Med. Imaging Graph. 24 (3), 165–171.
Schiemann, T., Höhne, K. H., Koch, C., Pommert, A., Riemer, M., Schubert, R.,
Tiede, U., 1994. Interpretation of tomographic images using automatic atlas
lookup. In: Robb, R. A. (Ed.), Visualization in Biomedical Computing 1994,
Proc. SPIE 2359. Rochester, MN, pp. 457–465.
Schiemann, T., Tiede, U., Höhne, K. H., 1997. Segmentation of the Visible Human for high quality volume based visualization. Med. Image Anal. 1 (4), 263–271.
Seymour, J., Kriebel, T. L., 1998. Virtual Human: Live volume rendering of the
segmented and classified Visible Human Male in a CD-ROM product for PCs.
In: Banvard, R. A., Pinciroli, F., Cerveri, P. (Eds.), The Second Visible Human
Project Conference Proceedings. National Library of Medicine (US), Office of
High Performance Computing and Communications, Bethesda, MD, (CD-ROM,
ISSN 1524-9808).
Spitzer, V. M., Ackerman, M. J., Scherzinger, A. L., Whitlock, D. G., 1996. The
Visible Human Male: A technical report. J. Am. Med. Inform. Assoc. 3 (2),
118–130.
Spitzer, V. M., Whitlock, D. G., 1998. The Visible Human data set: The anatomical
platform for human simulation. Anat. Rec. 253 (2), 49–57.
Stewart, J. E., Broaddus, W. C., Johnson, J. H., 1996. Rebuilding the Visible Man.
In: Höhne, K. H., Kikinis, R. (Eds.), Visualization in Biomedical Computing,
Proc. VBC ’96. Vol. 1131 of Lecture Notes in Computer Science. Springer-
Verlag, Berlin, pp. 81–85.
Styner, M., Gerig, G., 2001. Medial models incorporating object variability for 3D
shape analysis. In: Insana, M. F., Leahy, R. M. (Eds.), Information Processing
in Medical Imaging, Proc. IPMI 2001. Vol. 2082 of Lecture Notes in Computer
Science. Springer-Verlag, Berlin, pp. 502–516.
Tiede, U., Schiemann, T., Höhne, K. H., 1998. High quality rendering of attributed volume data. In: Ebert, D., Hagen, H., Rushmeier, H. (Eds.), Proc. IEEE Visual-
ization ’98. IEEE Computer Society Press, Los Alamitos, CA, pp. 255–262.
Tsiaras, A., 1997. Body Voyage. Time Warner, New York, NY.
Tsiaras, A., 2000. Volumetric imaging for the media. In: Banvard, R. A. (Ed.),
The Third Visible Human Project Conference Proceedings. National Library of
Medicine (US), Office of High Performance Computing and Communications,
Bethesda, MD, (CD-ROM, ISSN 1524-9008).
Yamaguchi, F., 1988. Curves and Surfaces in Computer Aided Geometric Design.
Springer-Verlag, Berlin.