Creating a high-resolution spatial/symbolic model
of the inner organs based on the Visible Human
Andreas Pommert, Karl Heinz Höhne, Bernhard Pflesser, Ernst Richter, Martin Riemer,
Thomas Schiemann, Rainer Schubert, Udo Schumacher, Ulf Tiede
Institute of Mathematics and Computer Science in Medicine (IMDM)
University Hospital Hamburg-Eppendorf, Hamburg, Germany
Dept. of Pediatric Radiology
University Hospital Hamburg-Eppendorf, Hamburg, Germany
Institute of Anatomy
University Hospital Hamburg-Eppendorf, Hamburg, Germany
Abstract
Computerized three-dimensional models of the human body, based on the Visible Human
Project of the National Library of Medicine, so far do not reflect the rich anatomical de-
tail of the original cross-sectional images. In this paper, a spatial/symbolic model of the
inner organs is developed, which is based on more than 1000 cryosections and congruent
fresh and frozen CT images of the male Visible Human. The spatial description is created
using color-space segmentation, graphic modeling, and a matched volume visualization
with subvoxel resolution. It is linked to a symbolic knowledge base, providing an ontology
of anatomical terms. With over 650 three-dimensional anatomical constituents, this model
offers an unsurpassed photorealistic presentation and level of detail. A three-dimensional
atlas of anatomy and radiology based on this model is available as a PC-based program.
Key words: Visible Human, three-dimensional body model, anatomical atlas, color-space
segmentation, volume visualization
Email address: pommert@uke.uni-hamburg.de (Andreas Pommert).
Article published in Med. Image Anal. 5 (3), 221-228, 2001

1 Introduction

While in classical medicine, knowledge about the human body is represented in
books and atlases, present-day computer science allows for new, more powerful and
versatile computer-based representations of knowledge. Their most simple man-
ifestations are multimedia CD-ROMs containing collections of classical pictures
and text, which may be browsed arbitrarily or according to various criteria. Al-
though computerized, such media still follow the old paradigm of text printed on
pages accompanied by pictures. This genre includes impressive atlases of cross-
sectional anatomy, notably from the photographic cross-sections of the Visible Hu-
man Project (Ackerman, 1991; Spitzer et al., 1996).
In the past years, however, it has been shown that spatial knowledge, especially
about the structure of the human body, may be much more efficiently represented
by computerized three-dimensional models (Höhne et al., 1995). These can be
constructed from cross-sectional images generated by computer tomography (CT),
magnetic resonance imaging (MRI), or histologic cryosectioning, as in the case of
the Visible Human Project. Such models may be used interactively on a computer
screen or in virtual reality environments. If such models are connected to a knowl-
edge base of descriptive information, they can even be interrogated or disassembled
by addressing names of organs (Höhne et al., 1995; Brinkley et al., 1999; Golland
et al., 1999). They can thus be regarded as a “self-explaining body”.
Until now, the Visible Human Project has not reported three-dimensional models
that reflect the rich anatomical detail of the original cross-sectional images. This
is largely due to the fact that, for the majority of anatomical objects contained in
the data, the cross-sectional images could not be converted into a set of coherent
realistic surfaces. If we succeed in converting all the detail into a 3D model, we
gain an unsurpassed representation of human structure that opens new possibilities
for learning anatomy and simulating interventions or radiological examinations.
2 Earlier Work
Building a comprehensive model of the inner organs of the Visible Human requires
both a spatial description consisting of three-dimensional objects, which are dis-
played using methods of volume visualization, and a linked symbolic de-
scription of relevant anatomical terms and their relations.
In general, volume visualization may or may not include a segmentation step. In
volume rendering, transparency values are assigned to the individual voxels ac-
cording to the intensity values and changes at the object borders (Levoy, 1988). In
the case of the Visible Human, this method yields semitransparent views, which
are suitable e.g. for visualization of the outer surface and the musculoskeletal sys-
tem (Stewart et al., 1996; Tsiaras, 1997). This way, impressive animations could
be created (Gagvani and Silver, 2000; Tsiaras, 2000). It fails, however, to display
internal structures properly. In addition, organ borders are not explicitly indicated,
thus making the removal or exclusive display of an organ impossible.
Segmentation, i. e. the exact determination of the surface location of an organ, is
therefore crucial for building a realistic model. So far, complete automatic seg-
mentation using methods of computer vision is suitable for very special applica-
tion areas only, and could not be used to build an extensive model of the human
body. The brute force approach to segmentation is manual outlining of objects on
the cross-sections (Mullick and Nguyen, 1996; Seymour and Kriebel, 1998). Be-
sides the fact that this procedure is tedious and very time consuming, it is largely
observer-dependent and, even more important, does not yield exact and continuous
surfaces. Furthermore, despite the high resolution of the dataset, important details
such as nerves and small blood vessels cannot be identified clearly, because their
size and contrast is too small.
So far, no symbolic description of the inner organs which is suitable for our pur-
poses is available. A general discussion of the problems arising, focusing on the
thorax, may be found elsewhere (Rosse et al., 1998).
3 Methods and Materials
We therefore aimed at a method that yields surfaces for the segmentable organs
that are as exact as possible and textured with their original color. In order to arrive
at a complete model, we decided to model non-segmentable objects like nerves
and small blood vessels artificially on the basis of landmarks present in the image
volume. Even though none of the methods presented here is entirely new, building
a complex model required a number of substantial improvements.
3.1 Data
The original dataset of the male Visible Human consists of 1871 photographic
cross-sections with a slice distance of 1 mm and a spatial resolution of 0.33 mm
(Figure 1, left). For reasons of data storage and computing capacity, the resolution of
the cross-sections was reduced to 1 mm by averaging 3 × 3 pixels. From 1049 such
slices, an image volume of 573 × 330 × 1049 voxels of 1 mm³ was composed, where
each voxel is represented by a set of red, green, and blue intensities (RGB-tuple).
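The 3 × 3 block averaging can be sketched as follows (a minimal NumPy illustration of the downsampling step, not the authors' actual preprocessing code; the function name and toy data are ours):

```python
import numpy as np

def downsample_rgb(slice_rgb: np.ndarray, factor: int = 3) -> np.ndarray:
    """Reduce in-plane resolution by averaging factor x factor pixel blocks.

    slice_rgb: (H, W, 3) array of RGB intensities; H and W are cropped to
    multiples of `factor` before averaging, as a simplification.
    """
    h, w, _ = slice_rgb.shape
    h, w = h - h % factor, w - w % factor
    # Reshape so each (factor x factor) block becomes two axes, then average them
    blocks = slice_rgb[:h, :w].reshape(h // factor, factor, w // factor, factor, 3)
    return blocks.mean(axis=(1, 3))

# Example: a 6x6 toy slice averaged down to 2x2
toy = np.arange(6 * 6 * 3, dtype=float).reshape(6, 6, 3)
small = downsample_rgb(toy, factor=3)
```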
The Visible Human dataset also includes two sets of computer tomographic images
of 1 mm slice distance, one taken from the fresh, the other (like the photographic
one) from the frozen cadaver. Both were transformed into an image volume con-
gruent with the photographic one, using an interactive, landmark-based registration
(Schiemann et al., 1994). Since the frozen body was cut into four large blocks be-
fore image acquisition, all these parts had to be aligned individually, leaving some
noticeable gaps in the data volume.
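Landmark-based rigid registration of this kind is commonly solved in closed form from paired landmarks. The sketch below uses the standard Kabsch algorithm as an illustrative stand-in; the interactive procedure cited above (Schiemann et al., 1994) may differ in detail:

```python
import numpy as np

def rigid_from_landmarks(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform mapping src landmarks onto dst.

    src, dst: (N, 3) arrays of corresponding landmark coordinates.
    Returns rotation R (3x3) and translation t (3,) such that
    dst ~= src @ R.T + t (Kabsch algorithm).
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)  # cross-covariance of centered landmarks
    U, _, Vt = np.linalg.svd(H)
    # Correction term guards against an improper rotation (reflection)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Round-trip check: recover a known rotation about the z-axis plus a shift
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
pts = np.random.default_rng(0).normal(size=(8, 3))
moved = pts @ R_true.T + np.array([5.0, -2.0, 1.0])
R, t = rigid_from_landmarks(pts, moved)
```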
Fig. 1. Left: Photographic cross-section of the abdomen of the male Visible Human. Right:
Parameterized ellipsoids in color-space, used for classification of various tissue types in
the abdomen. Many objects show similar colors, resulting in overlapping ellipsoids.
3.2 Segmentation
The image volume thus created was segmented with an interactive tool, based on
classification in color-space (Schiemann et al., 1997). It can be summarized as fol-
lows: On one or several cross-sections, an expert marks a typical region of the organ
under consideration. All voxels in the volume with similar RGB-tuples are then col-
lected by the program and shown as a painted three-dimensional mask. This mask
usually needs to be refined by repeating this procedure in order to discriminate the
target organ from the surrounding structures more clearly.
A cluster thus defined in color-space usually has an ellipsoidal shape, due to the cor-
relation of the color components. Since a set of tuples is difficult to handle during
subsequent visualization, this cluster is approximated by a parameterized ellipsoid,
which is described by its center and three axis vectors. In general, there are other
regions present in the volume which also match this color-space description. If they
are not connected to the target organ, it can be isolated easily by a 3D connected
component analysis. If not, borders are manually sculptured using a volume editor.
The result of this procedure is a description of an object in terms of an ellipsoid
in color-space and a set of voxels, which are marked by object membership la-
bels. Some of the ellipsoids defined for segmentation of the abdomen are shown in
Figure 1 (right). As can be seen, there are anatomical constituents like the intes-
tine which could not be described using one ellipsoid only; in this case, actually
seven ellipsoids were required. On the other hand, the same ellipsoid may be valid
for (parts of) various anatomical constituents, such as small intestine and colon, or
even for hundreds of muscles.
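The ellipsoid description of a color-space cluster can be illustrated by fitting a mean and covariance to the expert-marked RGB samples and testing membership with a Mahalanobis-style inequality. This is a simplified sketch under our own assumptions; the interactive tool described above is more elaborate:

```python
import numpy as np

def fit_ellipsoid(samples: np.ndarray, scale: float = 3.0):
    """Approximate a color-space cluster by a parameterized ellipsoid.

    samples: (N, 3) RGB tuples marked by the expert. The ellipsoid is
    described by its center (mean) and an inverse shape matrix derived
    from the covariance; `scale` sets how many standard deviations the
    ellipsoid extends (an assumption of this sketch).
    """
    center = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False)
    inv_shape = np.linalg.inv(cov) / scale**2
    return center, inv_shape

def inside(rgb: np.ndarray, center: np.ndarray, inv_shape: np.ndarray) -> bool:
    """Membership test: (x - c)^T A (x - c) <= 1 means the voxel's color
    lies inside the ellipsoid and is a candidate for the object."""
    d = rgb - center
    return float(d @ inv_shape @ d) <= 1.0

# Toy cluster: reddish voxels with correlated channels
rng = np.random.default_rng(1)
reds = rng.normal([200.0, 60.0, 50.0], [10.0, 5.0, 5.0], size=(500, 3))
center, A = fit_ellipsoid(reds)
```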
As a general strategy, we applied our segmentation procedure going from simple to
difficult tasks. This way, borders already defined could be used to facilitate segmentation
of other objects. As a first step, several tissue classes such as fat, muscles,
cartilage etc. were defined, for which the ellipsoids could be easily determined
within a few minutes. For segmentation of bone, it proved easier to use the frozen
CT dataset, applying a threshold value.
Since many objects show similar colors, the resulting ellipsoids are often overlap-
ping (Figure 1, right). Therefore, some regions such as the anterior parts of the lung
or the pericardium could not be segmented this way. In case of the lung, the miss-
ing parts could be determined using the frozen CT dataset and a threshold. For the
pericardium and similar cases, the volume editor was used.
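Threshold-based segmentation followed by a 3D connected component step, of the kind used here for bone and the lung on the frozen CT, can be sketched as follows. This is a generic stand-in using a 6-connected flood fill; the threshold and toy volume are illustrative values, not the ones used for the Visible Human:

```python
from collections import deque
import numpy as np

def largest_component(ct: np.ndarray, threshold: float) -> np.ndarray:
    """Threshold a CT volume and keep the largest 6-connected component,
    returned as a boolean mask (a crude stand-in for the bone/lung step)."""
    fg = ct > threshold
    labels = np.zeros(fg.shape, dtype=int)
    best_label, best_size, next_label = 0, 0, 1
    for seed in zip(*np.nonzero(fg)):
        if labels[seed]:
            continue  # already visited by an earlier flood fill
        size, q = 0, deque([seed])
        labels[seed] = next_label
        while q:
            x, y, z = q.popleft()
            size += 1
            for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (x + dx, y + dy, z + dz)
                if (all(0 <= n[i] < fg.shape[i] for i in range(3))
                        and fg[n] and not labels[n]):
                    labels[n] = next_label
                    q.append(n)
        if size > best_size:
            best_label, best_size = next_label, size
        next_label += 1
    if best_size == 0:
        return np.zeros(fg.shape, dtype=bool)
    return labels == best_label

# Toy volume: one large and one small bright blob
vol = np.zeros((20, 20, 20))
vol[2:10, 2:10, 2:10] = 100.0     # large blob, 512 voxels
vol[15:17, 15:17, 15:17] = 100.0  # small blob, 8 voxels
mask = largest_component(vol, threshold=50.0)
```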
3.3 Graphic modeling
For several small constituents such as nerves and blood vessels, which were con-
sidered essential for a comprehensive anatomical model, our color-space segmen-
tation proved impossible. As regards nerves, this is mostly due to very low contrast
between nervous and fat tissues, while many small arteries are collapsed as a post-
mortem artifact. Both problems also appear for the full-resolution data.
For these cases, we developed a tube editor which allows us to include tube-like
structures into the model (Figure 2). Ball-shaped markers of variable diameter are
imposed by an expert onto the landmarks still visible on the cross-sections or on the
3D image. These markers are subsequently automatically connected using Over-
hauser splines (Yamaguchi, 1988). If one of the markers is moved, these splines
will cause only local changes, which makes them easy to handle. Unlike the seg-
mented objects, which are represented as sets of voxels, objects modeled with the
tube editor are represented as polygon surfaces.
Fig. 2. Small nerves or arteries which could not be segmented were interactively modeled
using a tube editor. Tubes are defined by placing spheres of varying diameter into the
volume, which are connected by interpolating splines.
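Overhauser splines are equivalent to Catmull-Rom splines: each segment depends only on its four surrounding markers, which is why moving one marker causes only local changes. A minimal sketch interpolating marker centers and radii (our own illustration, not the tube editor's code):

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """One Overhauser (Catmull-Rom) segment: interpolates from p1 to p2,
    passing exactly through the markers, with local control."""
    t2, t3 = t * t, t * t * t
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t3)

def sample_tube(markers, samples_per_segment=8):
    """markers: (N, 4) array of (x, y, z, radius) spheres placed by an
    expert; returns interpolated centerline points with radii."""
    m = np.asarray(markers, dtype=float)
    m = np.vstack([m[0], m, m[-1]])  # clamp end points by duplication
    out = []
    for i in range(1, len(m) - 2):
        for t in np.linspace(0.0, 1.0, samples_per_segment, endpoint=False):
            out.append(catmull_rom(m[i - 1], m[i], m[i + 1], m[i + 2], t))
    out.append(m[-2])  # include the final marker itself
    return np.array(out)

markers = np.array([[0, 0, 0, 1.0], [10, 0, 0, 1.5], [10, 10, 0, 2.0]])
tube = sample_tube(markers)
```

Interpolating the fourth component alongside the coordinates gives a smoothly varying tube radius between the ball-shaped markers.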
3.4 Volume visualization
The volume visualization algorithm we developed is characterized by the fact that
it renders surfaces from volume data, using a ray casting approach (Tiede et al.,
1998). Local surface texture (color) and inclination, as needed for surface shading,
are calculated from the RGB-tuples at the segmented border line.
A decisive quality improvement is achieved by determining the surface positions
with subvoxel resolution. This is done by considering both the ellipsoids (or thresh-
olds, for CT) and the object membership labels. If a surface was created using la-
bels only, it would appear blocky, especially when zooming into the scene. On the
other hand, if only the ellipsoids were used, objects usually could not be identified
without ambiguity.
In order to avoid these problems, ellipsoids and labels are combined using a color-
driven algorithm (Schiemann et al., 1997; Tiede et al., 1998). Depending on the
RGB-tuple found at a sampling point on a viewing ray, all ellipsoids enclosing
this tuple in color-space are collected, defining a set of “object candidates”. In a
second step, it is tested whether a matching object label is present in the vicinity
of the sampling point. In that case, an object has been found. Its subvoxel surface
position is determined by interpolating the color at the sampling point (inside the
ellipsoid) and the color at the previous sampling point on the viewing ray (outside
the ellipsoid), such that the color at the surface is representing the object border (on
the surface of the ellipsoid). Since this approach considers colors (or intensities, for
CT) before labels, a smooth, continuous surface is obtained, which is not limited
by voxel size.
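Along each viewing ray, the subvoxel border estimation reduces to a linear interpolation between the last sample outside the ellipsoid and the first sample inside it. A one-dimensional sketch, with a scalar stand-in for the color-to-border comparison (the real algorithm interpolates RGB-tuples against the ellipsoid surface):

```python
def subvoxel_crossing(prev_pos, prev_val, pos, val, iso):
    """Estimate where the sampled value crosses the object border (iso)
    between two successive ray samples, giving a surface position finer
    than the voxel grid."""
    # Fraction of the step at which the interpolated value equals iso
    f = (iso - prev_val) / (val - prev_val)
    return tuple(p + f * (q - p) for p, q in zip(prev_pos, pos))

# Ray samples 1 voxel apart: value 10 outside, 30 inside, border value 25
surface = subvoxel_crossing((0.0, 0.0, 0.0), 10.0, (1.0, 0.0, 0.0), 30.0, 25.0)
```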
The objects modeled with the tube editor are visualized with standard computer
graphics methods within the context of the segmented objects. The visualization
program, an extended version of the VOXEL-MAN system (Höhne et al., 1995),
runs on Linux workstations. Because of the size and resolution of the model, com-
putation of a single image may take several minutes, even on a high-end worksta-
tion.
3.5 Knowledge modeling
While segmentation and graphic modeling provide a spatial description of anatom-
ical objects, a comprehensive model also requires a linked symbolic description
regarding anatomical terms and their relations. For this purpose, we developed a
knowledge base system, using a semantic network approach (Pommert et al., 1994;
Höhne et al., 1995). Among others, an object is described by:
- names (preferred terms, synonyms, colloquial terms) in various languages
- pointers to related medical information (texts, histological images, references, etc.)
- segmentation and visualization parameters (ellipsoid or threshold, object label, shading method, etc.)
For choosing anatomical terms, we built on standardized nomenclature wherever
available (Federative Committee on Anatomical Terminology, 1998).
The knowledge base describes not only elementary parts found in the spatial model
(e.g. left rib 3), but also compositions of these objects (e.g. true ribs, ribs, thoracic
skeleton, thoracic wall, body wall, body), thus building a part hierarchy. This ontol-
ogy is composed of several subnets, modeling various “views” commonly used in
anatomy. For example, the kidneys can be seen according to structural or functional
criteria:
- regional anatomy: in this view, the kidneys are shown as part of the abdominal viscera;
- systemic anatomy: in this view, the kidneys are shown as part of the urogenital system;
- relation to peritoneum: in this view, the kidneys are shown as part of the primary retroperitoneal organs.
Views are represented as attributes of relations. Besides the “part of” relation type,
our model also contains a “branching from” type, modeling the arterial blood flow.
As was pointed out earlier, an anatomical constituent may be a combination of sev-
eral segmented objects, each with an individual name, ellipsoid, and object label.
In order to hide these rather technical objects from a user, a relation type “hid-
den part of” was introduced, which extends the part hierarchy. For a user, an
anatomical constituent constructed of several hidden parts appears as one single
entity.
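The typed relations and the "hidden part of" mechanism can be sketched with a small semantic network. This is a minimal illustration of the idea, not the actual knowledge base system; object names such as "rib 3 segment a" are hypothetical:

```python
from collections import defaultdict

class SemanticNet:
    """Minimal sketch of a symbolic knowledge base: objects linked by
    typed relations. 'hidden part of' children are technical objects
    that are absorbed into their parent when presented to a user."""

    def __init__(self):
        self.relations = defaultdict(list)  # parent -> [(child, relation type)]

    def add(self, child, rel_type, parent):
        self.relations[parent].append((child, rel_type))

    def visible_parts(self, parent):
        """Parts shown to a user: only 'part of' children are listed."""
        return [c for c, r in self.relations[parent] if r == "part of"]

    def all_descendants(self, parent):
        """Full part hierarchy, including hidden technical objects."""
        out = []
        for child, _ in self.relations[parent]:
            out.append(child)
            out.extend(self.all_descendants(child))
        return out

net = SemanticNet()
net.add("ribs", "part of", "thoracic skeleton")
net.add("true ribs", "part of", "ribs")
net.add("left rib 3", "part of", "true ribs")
net.add("rib 3 segment a", "hidden part of", "left rib 3")  # hypothetical technical object
```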
4 Results
Using the methods described above, we built a model of the inner organs of the
male Visible Human. It contains more than 650 three-dimensional anatomical constituents
and more than 2000 relations between them. The size of segmented anatomical
constituents varies between 3.8 million voxels (or mm³, equivalent to 3.8 liters)
for visceral fat and 124 voxels for the cystic duct. Preparation of the model using
the described methods involved up to 10 people and required about 5 man years.
Figure 3 gives an impression of image quality and the level of detail (see also the
movie in the electronic annex - available via www.elsevier.com/locate/media).
Fig. 3. The model of the inner organs contains more than 650 anatomical constituents, with
a spatial resolution of 1 mm³. It can be viewed from any direction, cuts may be placed
in any number and direction, and objects may be removed or added. Annotations may be
called by mouse click.
Since the model is volume-based, cut planes, which can be placed in any number
and direction, show the texture of the original photographic images and thus look
realistic. This virtual dissection capability not only allows an interactive dissection
for learning purposes, but can also be used for the rehearsal of a surgical procedure.
In addition, the image of a “self-explaining body” allows us to inquire about com-
plex anatomical facts. The more traditional way of annotating structures of interest
is demonstrated within the user-specified scene in Figure 3. These annotations can
be obtained simply by pointing and clicking with the mouse on the structure of
interest. Likewise, objects may be painted. Pressing another button of the mouse
will call several popup menus, which provide structured knowledge about anatomy
and function (Figure 4). Such information is available because every voxel, and
therefore any visible point of any user-created 3D scene, is linked to the knowledge
base.
Vice versa, the user may navigate through the contents of the knowledge base, go-
ing to more general or more specific terms in systemic or regional part hierarchies.
Images may be composed by selecting terms from the knowledge base (Figure 5).
A special feature of the model involves the possibility of simulating radiological
examinations. Since the absorption values for every voxel are available in the original
tomographic data, artificial X-ray images from any direction can be computed
(Figure 6, left; see also the movie in the electronic annex). Based on the information
of the model, both the contributing anatomical structures and the extent of
their contribution to the final absorption can be calculated. Similarly, the information
present in computer tomographic images can be clarified by presenting them in
the corresponding context of 3D anatomy (Figure 6, right). For an improved spatial
impression, stereoscopic views can also be created.

Fig. 4. Exploring the semantic network behind the spatial model. The user has clicked onto
a blood vessel and a nerve and received information about systemic (red) and regional
(blue) anatomy.

Fig. 5. Visualization of various terms, selected from the knowledge base. Left to right: cardiovascular
system; nervous system (with skeleton and iliopsoas muscles); thoracic organs;
abdominal viscera.
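At its core, the X-ray simulation amounts to integrating the CT absorption values along each ray, producing a digitally reconstructed radiograph. A toy parallel-projection sketch under our own simplifying assumptions (not the actual VOXEL-MAN renderer, which supports arbitrary directions):

```python
import numpy as np

def simulate_xray(ct: np.ndarray, axis: int = 2) -> np.ndarray:
    """Toy digitally reconstructed radiograph: integrate attenuation along
    parallel rays (here, along one volume axis) and convert the path
    integral to a film-like transmission via the Beer-Lambert law."""
    path_integral = ct.sum(axis=axis)               # total attenuation per ray
    return np.exp(-path_integral / ct.shape[axis])  # normalized transmission

vol = np.zeros((4, 4, 4))
vol[1:3, 1:3, :] = 2.0  # a dense "bone" column aligned with the ray axis
image = simulate_xray(vol)
```

Because every voxel carries an object label, the same ray traversal can also accumulate per-object contributions, which is how the contributing structures in Figure 6 can be identified.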
5 Conclusions
In this paper, we presented an approach for creating a high-resolution model of
the inner organs, based on the Visible Human data. The following features of this
model represent innovations:
- Because of the exact color-space segmentation and the matched visualization method, the visual impression is one of unsurpassed realism.
- There is, to date, no computer model of the inner organs that contains and describes so many three-dimensional anatomical constituents.
- The model is space-filling, i.e. any voxel is labeled as an element of a three-dimensional object.
- The integrated formal organization of spatial and symbolic information allows a virtually unlimited number of ways of using the model.
Fig. 6. Different viewing modes such as X-ray imaging (left) or computer tomography
(right) may be chosen from any direction and for any part of the model.
The model is a general knowledge representation of gross anatomy, from which all
classical representations (pictures, movies, solid models) may be derived via mouse
click. The versatility of the approach makes it suitable for anatomy and radiology
teaching as well as for simulation of interventional procedures. While the general
principle was reported earlier (Höhne et al., 1995), the model we describe is the first
to offer sufficient detail and comprehensiveness to serve these purposes seriously.
A three-dimensional atlas of anatomy and radiology based on this model, called
VOXEL-MAN 3D-Navigator: Inner Organs, is available as a PC-based program
(Höhne et al., 2000).
Yet there are still improvements to be made. First of all, from an anatomist’s point
of view, an even more detailed segmentation would be desirable for many applica-
tions. Currently, improvements are under way. A more serious limitation is the fact
that the data is derived from one single individual. The inter-individual variability
of organ shape and topology in space and time is thus not yet part of the model.
Inclusion of variability into three-dimensional models is a difficult problem not yet
generally solved. So far, most progress has been achieved for 3D atlases of the brain
(Mazziotta et al., 1995; Styner and Gerig, 2001).
However, the current model should be an excellent basis for further developments.
One such development is the inclusion of physiology, e. g. the modeling of blood
flow or propagation of electrical fields throughout the body (Spitzer and Whit-
lock, 1998). Applications such as the computation of body surface potential maps
(Sachse et al., 2000) should profit from the increased level of detail. Furthermore,
because of the more detailed characterization of tissues, a more realistic surgi-
cal simulation involving cutting (Pflesser et al., 2000) and soft tissue deformation
(Cotin et al., 1999) can be achieved. This approach is thus an important, albeit early
step towards computer models that not only look real, but also act like a real body.
Acknowledgements
We thank Victor Spitzer and David Whitlock, University of Colorado, and Michael
Ackerman, National Library of Medicine (US), for providing the Visible Human
dataset. We are also grateful to Jochen Dormeier, Jan Freudenberg, Sebastian Gehrmann,
Stefan Noster, and Norman von Sternberg-Gospos, who substantially contributed to
the segmentation and modeling work. The tube editor was implemented by Klaus
Rheinwald. The movie in the electronic annex was produced by Andreas Petersik.
The knowledge modeling work was supported by the German Research Council
(DFG) under grant number Ho 899/4-1. An earlier version of this work was pre-
sented at The Third Visible Human Project Conference, Bethesda, MD, October
2000.
References
Ackerman, M. J., 1991. Viewpoint: The Visible Human Project. J. Biocommun. 18, 14.
Brinkley, J. F., Wong, B. A., Hinshaw, K. P., Rosse, C., 1999. Design of an anatomy
information system. IEEE Comput. Graphics Appl. 19 (3), 38–48.
Cotin, S., Delingette, H., Ayache, N., 1999. Real-time elastic deformations of
soft tissues for surgery simulation. IEEE Trans. Visualization Comput. Graph-
ics 5 (1), 62–73.
Federative Committee on Anatomical Terminology (Ed.), 1998. Terminologia
Anatomica: International Anatomical Terminology. Thieme, Stuttgart.
Gagvani, N., Silver, D., 2000. Animating the Visible Human Dataset (VHD). In:
Banvard, R. A. (Ed.), The Third Visible Human Project Conference Proceedings.
National Library of Medicine (US), Office of High Performance Computing and
Communications, Bethesda, MD, (CD-ROM, ISSN 1524-9008).
Golland, P., Kikinis, R., Halle, M., Umans, C., Grimson, W. E. L., Shenton, M. E.,
Richolt, J. A., 1999. AnatomyBrowser: A novel approach to visualization and
integration of medical information. Comput. Aided Surg. 4 (3), 129–143.
Höhne, K. H., Pflesser, B., Pommert, A., Priesmeyer, K., Riemer, M., Schiemann,
T., Schubert, R., Tiede, U., Frederking, H., Gehrmann, S., Noster, S., Schumacher,
U., 2000. VOXEL-MAN 3D Navigator: Inner Organs. Regional, Systemic
and Radiological Anatomy. Springer-Verlag Electronic Media, Heidelberg,
(3 CD-ROMs, ISBN 3-540-14759-4).
Höhne, K. H., Pflesser, B., Pommert, A., Riemer, M., Schiemann, T., Schubert, R.,
Tiede, U., 1995. A new representation of knowledge concerning human anatomy
and function. Nat. Med. 1 (6), 506–511.
Levoy, M., 1988. Display of surfaces from volume data. IEEE Comput. Graphics
Appl. 8 (3), 29–37.
Mazziotta, J. C., Toga, A. W., Evans, A. C., Fox, P., Lancaster, J., 1995. A probabilistic
atlas of the human brain: Theory and rationale for its development.
NeuroImage 2 (2), 89–101.
Mullick, R., Nguyen, H. T., 1996. Visualization and labeling of the Visible Human
dataset: Challenges and resolves. In: Höhne, K. H., Kikinis, R. (Eds.), Visualiza-
tion in Biomedical Computing, Proc. VBC ’96. Vol. 1131 of Lecture Notes in
Computer Science. Springer-Verlag, Berlin, pp. 75–80.
Pflesser, B., Tiede, U., Höhne, K. H., Leuwer, R., 2000. Volume based planning and
rehearsal of surgical interventions. In: Lemke, H. U., Vannier, M. W., Inamura,
K., Farman, A. G., Doi, K. (Eds.), Computer Assisted Radiology and Surgery,
Proc. CARS 2000. Vol. 1214 of Excerpta Medica International Congress Series.
Elsevier, Amsterdam, pp. 607–612.
Pommert, A., Schubert, R., Riemer, M., Schiemann, T., Tiede, U., Höhne, K. H.,
1994. Symbolic modeling of human anatomy for visualization and simulation.
In: Robb, R. A. (Ed.), Visualization in Biomedical Computing 1994, Proc. SPIE
2359. Rochester, MN, pp. 412–423.
Rosse, C., Mejino, J., Modayur, B., Jakobovits, R., Hinshaw, K., Brinkley, J. F.,
1998. Motivation and organizational principles for anatomical knowledge repre-
sentation: The Digital Anatomist symbolic knowledge base. J. Am. Med. Inform.
Assoc. 5 (1), 17–40.
Sachse, F. B., Werner, C. D., Meyer-Waarden, K., Dössel, O., 2000. Development
of a human body model for numerical calculation of electrical fields. Comput.
Med. Imaging Graph. 24 (3), 165–171.
Schiemann, T., Höhne, K. H., Koch, C., Pommert, A., Riemer, M., Schubert, R.,
Tiede, U., 1994. Interpretation of tomographic images using automatic atlas
lookup. In: Robb, R. A. (Ed.), Visualization in Biomedical Computing 1994,
Proc. SPIE 2359. Rochester, MN, pp. 457–465.
Schiemann, T., Tiede, U., Höhne, K. H., 1997. Segmentation of the Visible Human
for high quality volume based visualization. Med. Image Anal. 1 (4), 263–271.
Seymour, J., Kriebel, T. L., 1998. Virtual Human: Live volume rendering of the
segmented and classified Visible Human Male in a CD-ROM product for PCs.
In: Banvard, R. A., Pinciroli, F., Cerveri, P. (Eds.), The Second Visible Human
Project Conference Proceedings. National Library of Medicine (US), Office of
High Performance Computing and Communications, Bethesda, MD, (CD-ROM,
ISSN 1524-9808).
Spitzer, V. M., Ackerman, M. J., Scherzinger, A. L., Whitlock, D. G., 1996. The
Visible Human Male: A technical report. J. Am. Med. Inform. Assoc. 3 (2),
118–130.
Spitzer, V. M., Whitlock, D. G., 1998. The Visible Human data set: The anatomical
platform for human simulation. Anat. Rec. 253 (2), 49–57.
Stewart, J. E., Broaddus, W. C., Johnson, J. H., 1996. Rebuilding the Visible Man.
In: Höhne, K. H., Kikinis, R. (Eds.), Visualization in Biomedical Computing,
Proc. VBC ’96. Vol. 1131 of Lecture Notes in Computer Science. Springer-
Verlag, Berlin, pp. 81–85.
Styner, M., Gerig, G., 2001. Medial models incorporating object variability for 3D
shape analysis. In: Insana, M. F., Leahy, R. M. (Eds.), Information Processing
in Medical Imaging, Proc. IPMI 2001. Vol. 2082 of Lecture Notes in Computer
Science. Springer-Verlag, Berlin, pp. 502–516.
Tiede, U., Schiemann, T., Höhne, K. H., 1998. High quality rendering of attributed
volume data. In: Ebert, D., Hagen, H., Rushmeier, H. (Eds.), Proc. IEEE Visualization
'98. IEEE Computer Society Press, Los Alamitos, CA, pp. 255–262.
Tsiaras, A., 1997. Body Voyage. Time Warner, New York, NY.
Tsiaras, A., 2000. Volumetric imaging for the media. In: Banvard, R. A. (Ed.),
The Third Visible Human Project Conference Proceedings. National Library of
Medicine (US), Office of High Performance Computing and Communications,
Bethesda, MD, (CD-ROM, ISSN 1524-9008).
Yamaguchi, F., 1988. Curves and Surfaces in Computer Aided Geometric Design.
Springer-Verlag, Berlin.
... These tools suit several purposes: promote novel educational methods (Papa and Vaccarezza 2013;Chung et al. 2016;Zilverschoon et al. 2017), allow statistical analysis of anatomical variability Shepherd et al. (2012), and support clinical practice to optimize decisions Malmberg et al. (2017). It should be noted that 3DRAS tools are a complementary medium to live dissection, not their replacement (Ackerman 1999;Park et al. 2005;Pflesser et al. 2001;Uhl et al. 2006). 3DRAS make possible the virtual dissection resulting in accurate and interactive 3D anatomical models. ...
... Their system was piloted in a comparative study, where three display paradigms (2D monitor, stereo monitor, and Oculus Rift) and two input devices (space mouse and standard keyboard) were tested. In a master thesis, this idea was further evolved by giving the user more possibilities for manipulation and guidance (Pohlandt 2017). Here, the medical student can choose between several anatomical structures and scale them freely from their original size to large scales. ...
... These tools suit several purposes: promote novel educational methods (Papa and Vaccarezza 2013; Chung et al. 2016;Zilverschoon et al. 2017), allow statistical analysis of anatomical variability Shepherd et al. (2012), and support clinical practice to optimize decisions Malmberg et al. (2017). It should be noted that 3DRAS tools are a complementary medium to live dissection, not their replacement (Ackerman 1999;Park et al. 2005;Pflesser et al. 2001;Uhl et al. 2006). 3DRAS make possible the virtual dissection resulting in accurate and interactive 3D anatomical models. ...
... Their system was piloted in a comparative study, where three display paradigms (2D monitor, stereo monitor, and Oculus Rift) and two input devices (space mouse and standard keyboard) were tested. In a master thesis, this idea was further evolved by giving the user more possibilities for manipulation and guidance (Pohlandt 2017). Here, the medical student can choose between several anatomical structures and scale them freely from their original size to large scales. ...
Chapter
Image-guidance has been the mainstay for most neurosurgical procedures to aid in accuracy and precision. Developments in visualization tools have brought into existence the current microscope and even sophisticated augmented reality devices providing a human–computer interface. The current microscope poses an ergonomic challenge particularly in scenarios like sitting position. Also, the cost associated with the present microscope hinders the accessibility of micro neurosurgery in most low-to-middle-income countries.
... High-fidelity 3D human models have been created using the original transverse cross-sectional images of the VHP male subject (Pommert et al., 2001; Schiemann et al., 1997). Creating such models typically involves labor-intensive manual segmentation work due to the difficulty of isolating tissues. ...
Article
The current study proposes a new method to predict the body shape and mass distribution of the trunk (T1-L5) of a human male using 15 anthropometric measurements acquired at various locations of the body. Trunk cross-sectional images adopted from the Visible Human male project database were segmented into fat, bone, and lean tissue. Assuming that all male subjects have a similar cross-sectional composition at a given body height percentile, areas of the segmented cross-sectional images of the Visible Human male along the trunk were scaled to match those of the predicted body shape. The trunk mass distribution of the target subject can then be computed using the density values of fat, bone, and lean tissue. Comparison of the predicted body shape circumference with ground truth values measured using digital and actual measurements yielded maximum mean errors of 13.3 mm and 30.3 mm, respectively. The accuracy of the image segmentation was evaluated, and the results showed a high Jaccard index (>0.95). The proposed method was able to predict the trunk mass distribution of two volunteers with a maximum deviation of 384 g at the T4 level and a minimum deviation of 12 g at the L4 level, and the corresponding centers of mass fell within the experimental data at most levels. Thus, our method can be considered a feasible option to calculate subject-specific trunk mass distribution.
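The mass-computation step described in this abstract can be sketched in a few lines. This is a hedged illustration, not the authors' code: the tissue density values and the per-slice area dictionaries are invented for the example, and real slices would come from the scaled, segmented Visible Human images.

```python
# Hypothetical sketch: slice-wise trunk mass from segmented tissue areas.
# Densities (g/cm^3) are illustrative values, not taken from the paper.
DENSITY = {"fat": 0.92, "bone": 1.40, "lean": 1.05}

def slice_mass(areas_cm2, thickness_cm):
    """Mass (g) of one cross-sectional slice from per-tissue areas (cm^2)."""
    return sum(areas_cm2[t] * DENSITY[t] for t in DENSITY) * thickness_cm

def trunk_mass_distribution(slices, thickness_cm=1.0):
    """Per-slice masses for a list of {tissue: area} dicts (T1..L5)."""
    return [slice_mass(a, thickness_cm) for a in slices]

def jaccard(a, b):
    """Jaccard index of two binary masks given as sets of voxel indices."""
    return len(a & b) / len(a | b)
```

For a real subject, each slice's areas would first be scaled to match the predicted body shape before the densities are applied.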
Preprint
Full-text available
In this paper, we present a new workflow for the computer-aided generation of physicalizations, addressing nested configurations in anatomical and biological structures. Physicalizations are an important component of anatomical and biological education and edutainment. However, existing approaches have mainly revolved around creating data sculptures through digital fabrication. Only a few recent works have proposed computer-aided pipelines for generating sculptures, such as papercrafts, with affordable and readily available materials. Papercraft generation remains a challenging topic by itself. Yet, anatomical and biological applications pose additional challenges, such as reconstruction complexity and the need to account for multiple, nested structures, which are often present in anatomical and biological specimens. Our workflow comprises the following steps: (i) define the nested configuration of the model and detect its levels, (ii) calculate the viewpoint that provides optimal, unobstructed views of the inner levels, (iii) perform cuts on the outer levels to reveal the inner ones based on the selected viewpoint, (iv) estimate the stability of the cut papercraft to ensure a reliable outcome, (v) generate textures at each level, as a smart visibility mechanism that provides additional information on the inner structures, and (vi) unfold each textured mesh, guaranteeing reconstruction. Our novel approach exploits the interactivity of nested papercraft models for edutainment purposes.
Article
Background Anatomy is a required course for all medicine-related disciplines. In recent decades, the teaching quality and effect of anatomy have been compromised by factors including a decrease in human body specimens, dampened enthusiasm for the discipline, reduced teaching hours of anatomy, the scale expansion of medical education, and obstacles in performing field autopsies and observations. Methods Based on China's digitalized visible human research achievements, this article extracts the boundary information of anatomic structures from tomographic images, constructs three-dimensional (3D) digital anatomical models with authentic texture information, and develops an anatomy assistive teaching system for teachers and students based on the knowledge points of anatomy, to meet the anatomy teaching requirements of different majors at various levels. Results This scientific, complete, and holistic system has produced over 6000 3D digital anatomical models, 5000 anatomy knowledge points, 50 anatomical operation videos, and 150 micro demonstration classes, with teaching content for different majors and levels, such as systematic anatomy, topographic anatomy, sectional anatomy, anatomy of motion, and a virtual anatomical operation table. Ranging from network terminals, desktops, touchscreen 3D displays, and projection 3D volumetric displays to augmented reality, its diversified interactive forms meet the requirements for a learning environment in different settings. Conclusions Covering multiple teaching and learning links, such as the teaching environment, teaching resources, instructional slides, autonomous learning, and learning effect evaluation, this novel teaching system serves as a vital component and a necessary resource in anatomy teaching and functions as an important supplement to traditional anatomy teaching.
Applied and promoted in most medical colleges and schools in China, this system has been recognized and approved by anatomy teachers and students, and plays a positive role in guaranteeing the effect and quality of anatomy teaching.
Chapter
Recent progress in VR and AR hardware enables a wide range of educational applications. Anatomy education, where the complex spatial relations of the human anatomy need to be imagined, may benefit from the immersive experience. Also, the integration of virtual and real information, e.g., muscles and bone overlaid on the user's body, is beneficial for imagining the interplay of various anatomical structures. VR and AR systems for anatomy education compete with other media that support anatomy teaching, such as interactive 3D visualization and anatomy textbooks. We discuss the constraints that must be considered when designing VR and AR systems that enable efficient knowledge transfer.
Conference Paper
Full-text available
In this paper, we develop a method for grounding medical text into a physically meaningful and interpretable space corresponding to a human atlas. We build on text embedding architectures such as BERT and introduce a loss function that allows us to reason about the semantic and spatial relatedness of medical texts by learning a projection of the embedding into a 3D space representing the human body. We quantitatively and qualitatively demonstrate that our proposed method learns a context-sensitive and spatially aware mapping, in both the inter-organ and intra-organ sense, using a large-scale medical text dataset from the "Large-scale online biomedical semantic indexing" track of the 2020 BioASQ challenge. We extend our approach to a self-supervised setting, and find it to be competitive with a classification-based method, and with a fully supervised variant of our approach.
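A toy version of such a spatially aware projection can be sketched as follows. This is only an illustration of the idea under stated assumptions, not the paper's method: the embedding width, the linear projection head, and the organ centroid are all invented, and the paper's actual loss also models semantic relatedness between texts.

```python
import numpy as np

# Illustrative sketch (not the paper's architecture): a linear head projects a
# text embedding e into 3D body coordinates; a spatial loss pulls the
# projection toward the centroid of the organ the text describes.
rng = np.random.default_rng(0)
d = 8                                   # toy embedding width (BERT uses 768)
W = rng.normal(scale=0.1, size=(3, d))  # learnable projection head

e = rng.normal(size=d)
e /= np.linalg.norm(e)                  # unit-norm toy "text embedding"
target = np.array([10.0, -5.0, 30.0])   # made-up organ centroid in atlas space

def project(W, e):
    return W @ e                        # predicted 3D point

def spatial_loss(W, e, target):
    diff = project(W, e) - target
    return float(diff @ diff)           # squared Euclidean distance

def sgd_step(W, e, target, lr=0.1):
    grad = 2.0 * np.outer(project(W, e) - target, e)  # dL/dW
    return W - lr * grad

for _ in range(200):                    # gradient descent on the spatial loss
    W = sgd_step(W, e, target)
```

After training, the projection of this embedding lands at the target coordinates; the real method learns one shared projection over many texts so that related texts land near each other.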
Article
During the last decades, anatomy has become an interesting topic in education, even for laymen or schoolchildren. As medical imaging techniques become increasingly sophisticated, virtual anatomical education applications have emerged. Still, anatomical models are often preferred, as they facilitate 3D localization of anatomical structures. Recently, data physicalizations (i.e., physical visualizations) have proven to be effective and engaging, sometimes even more than their virtual counterparts. So far, medical data physicalizations involve mainly 3D printing, which is still expensive and cumbersome. We investigate alternative forms of physicalizations, which use readily available technologies (home printers) and inexpensive materials (paper or semi-transparent films) to generate crafts for anatomical edutainment. To the best of our knowledge, this is the first computer-generated crafting approach within an anatomical edutainment context. Our approach follows a cost-effective, simple, and easy-to-employ workflow, resulting in assemblable data sculptures (i.e., semi-transparent sliceforms). It primarily supports volumetric data (such as CT or MRI), but mesh data can also be imported. An octree slices the imported volume and an optimization step simplifies the slice configuration, proposing the optimal order for easy assembly. A packing algorithm places the resulting slices with their labels, annotations, and assembly instructions on a paper or transparent film of user-selected size, to be printed, assembled into a sliceform, and explored. We conducted two user studies to assess our approach, demonstrating that it is an initial positive step towards the successful creation of interactive and engaging anatomical physicalizations.
Preprint
Full-text available
During the last decades, anatomy has become an interesting topic in education---even for laymen or schoolchildren. As medical imaging techniques become increasingly sophisticated, virtual anatomical education applications have emerged. Still, anatomical models are often preferred, as they facilitate 3D localization of anatomical structures. Recently, data physicalizations (i.e., physical visualizations) have proven to be effective and engaging---sometimes, even more than their virtual counterparts. So far, medical data physicalizations involve mainly 3D printing, which is still expensive and cumbersome. We investigate alternative forms of physicalizations, which use readily available technologies (home printers) and inexpensive materials (paper or semi-transparent films) to generate crafts for anatomical edutainment. To the best of our knowledge, this is the first computer-generated crafting approach within an anatomical edutainment context. Our approach follows a cost-effective, simple, and easy-to-employ workflow, resulting in assemblable data sculptures (i.e., semi-transparent sliceforms). It primarily supports volumetric data (such as CT or MRI), but mesh data can also be imported. An octree slices the imported volume and an optimization step simplifies the slice configuration, proposing the optimal order for easy assembly. A packing algorithm places the resulting slices with their labels, annotations, and assembly instructions on a paper or transparent film of user-selected size, to be printed, assembled into a sliceform, and explored. We conducted two user studies to assess our approach, demonstrating that it is an initial positive step towards the successful creation of interactive and engaging anatomical physicalizations.
Conference Paper
Full-text available
A combination of interactive classification and supersampling visualization algorithms is described, which delivers greatly enhanced realism of 3D reconstructions of the Visible Human data set. Objects are classified on the basis of ellipsoidal regions in RGB space. The ellipsoids are used for supersampling in the visualization process.
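The classification idea can be illustrated with a minimal sketch, assuming axis-aligned ellipsoids for simplicity (the paper's ellipsoids need not be axis-aligned); the class names, centers, and radii below are invented for the example.

```python
# Minimal sketch of the idea (not the authors' code): a tissue class is an
# ellipsoidal region in RGB space; a color belongs to the class when its
# normalized squared distance from the ellipsoid center is <= 1.
def inside_ellipsoid(rgb, center, radii):
    """Axis-aligned ellipsoid membership test in RGB space."""
    return sum(((v - c) / r) ** 2 for v, c, r in zip(rgb, center, radii)) <= 1.0

def classify(rgb, classes):
    """Return the first tissue class whose ellipsoid contains the color."""
    for name, (center, radii) in classes.items():
        if inside_ellipsoid(rgb, center, radii):
            return name
    return "unclassified"

# Illustrative class definitions (made-up centers and radii):
CLASSES = {
    "muscle": ((150, 40, 40), (60, 30, 30)),
    "fat":    ((230, 200, 120), (40, 40, 50)),
}
```

In the supersampling step, the same ellipsoid test would be evaluated at interpolated sub-voxel colors rather than only at voxel centers.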
Article
Full-text available
We describe a novel method for surgery simulation, including a volumetric model built from medical images and an elastic modeling of the deformations. The physical model is based on elasticity theory, which suitably links the shape of deformable bodies and the forces associated with the deformation. Real-time computation of the deformation is possible thanks to preprocessing of elementary deformations derived from a finite element method. This method has been implemented in a system including a force feedback device and a collision detection algorithm. The simulator works in real time with a high-resolution liver model.
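The precomputation idea admits a compact sketch. Assuming linear elasticity, displacements superpose, so the response to a unit force at each node can be solved for once offline; the online step is then a cheap matrix-vector product. The toy stiffness matrix below is invented; a real simulator would assemble it from the finite element mesh.

```python
import numpy as np

# Sketch of the "elementary deformations" precomputation (illustrative only).
rng = np.random.default_rng(1)
n = 6                                   # toy node count
K = np.eye(n) * 4 + rng.normal(scale=0.1, size=(n, n))
K = (K + K.T) / 2                       # symmetric toy "stiffness" matrix

# Offline: the response matrix R holds, per column, the displacement field
# caused by a unit force at one node (elementary deformation).
R = np.linalg.inv(K)                    # R[:, j] = response to unit force f_j

def deform(R, forces):
    """Online step: superpose precomputed responses (just a mat-vec)."""
    return R @ forces

f = np.zeros(n)
f[2] = 1.0                              # force applied at node 2
u_fast = deform(R, f)                   # real-time path
u_exact = np.linalg.solve(K, f)         # direct solve, for comparison
```

The online cost is independent of the solver used offline, which is what makes real-time force-feedback rates achievable.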
Article
One goal of a medical school education is to teach the anatomy of the living human. With the exception of some surface anatomy, the morphology education that goes on during a surgical procedure, and patient observation, live human anatomy is most often taught by simulation. Medical anatomy courses utilize cadavers to approximate the live human. Case-based curricula simulate a patient and present symptoms, signs, and history to mimic reality for the future practitioner. Radiology has provided images of the morphology, function, and metabolism of living humans, but with images foreign to most novice observers. With the Visible Human database, computer simulation of the live human body will provide revolutionary transformations in anatomical education. © 1998 Wiley-Liss, Inc.
Article
In this work, we describe a method to animate the Visible Human dataset. The animation is done using a volumetric skeleton, which is computed directly from the Visible Human dataset. The skeleton is then imported into a commercial animation package and animated using existing toolkits. The volumetric skeleton can be used for many other applications, since it acts as an advanced data structure for referencing all of the voxels in a volumetric model.
Article
We describe a system that automates atlas look-up when viewing cross-sectional images at a viewing station. Using a simple specification of landmarks, a linear transformation to a volume-based anatomical atlas is performed. As a result, corresponding atlas pictures containing information about structures, function, or blood supply, or classical atlas pages (like Talairach), appear next to the patient data for any chosen slice. In addition, the slices are visible in the 3D context of the VOXEL-MAN 3D atlas, providing all of its functionality.
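The landmark-based linear transformation can be sketched as a least-squares affine fit. This is a hedged illustration of the general technique, not the VOXEL-MAN code: the function names and the homogeneous-coordinate formulation are my own.

```python
import numpy as np

# Estimate the patient-to-atlas affine transform from paired landmarks,
# then map any patient point into atlas space.
def fit_affine(src, dst):
    """src, dst: (n, 3) paired landmarks; returns (A, t) with dst ~ A @ p + t."""
    src_h = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coordinates
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)   # (4, 3) least-squares fit
    return M[:3].T, M[3]                              # 3x3 matrix, 3-vector

def to_atlas(p, A, t):
    """Map a patient-space point p into atlas coordinates."""
    return A @ p + t
```

With at least four non-coplanar landmark pairs the fit is well determined; extra landmarks are absorbed in the least-squares sense.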
Conference Paper
This paper describes a collection of techniques designed to create photo-realistic computer models of the National Library of Medicine's Visible Man. An image segmentation algorithm is described which segments anatomical structures independently of the variation in color contained in each structure. The generation of pseudo-radiographic images from the 24-bit digital color images is also described. Three-dimensional manifold surfaces are generated from these images, with the appropriate anatomical colors assigned to each surface vertex. Finally, three separate smoothing algorithms (surface, normal, and color) are applied to the surface to create surprisingly realistic surfaces. A number of examples are presented, including solid surfaces, surface cutaways, and mixed opaque and translucent models.
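As an illustration of the color-smoothing pass, the sketch below relaxes each vertex color toward the mean of its neighbors' colors. This is a generic Laplacian-style smoother under my own assumptions (adjacency-list input, relaxation factor `lam`), not the authors' implementation.

```python
# Illustrative vertex-color smoothing: each vertex color moves toward the
# mean of its neighbors' colors, controlled by relaxation factor lam.
def smooth_colors(colors, neighbors, lam=0.5, iterations=1):
    """colors: list of (r, g, b) tuples; neighbors: adjacency list of vertex ids."""
    for _ in range(iterations):
        new = []
        for v, col in enumerate(colors):
            ns = neighbors[v]
            mean = tuple(sum(colors[u][k] for u in ns) / len(ns)
                         for k in range(3))
            new.append(tuple(c + lam * (m - c) for c, m in zip(col, mean)))
        colors = new
    return colors
```

The surface and normal smoothing passes have the same structure, operating on vertex positions and vertex normals instead of colors.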