Article

Selective surface normal estimation for volume rendering


Abstract

Three-dimensional volumetric data is discrete by nature and does not preserve a continuous surface. Therefore, to render surfaces realistically, the normal vectors must be estimated before the image is rendered. If the normals are averaged directly during rendering, sharp edges are smoothed out as well. In this paper, we show how to avoid this over-smoothing by analyzing a few neighboring voxels within a predefined kernel and selectively averaging the normals to preserve detail.
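The idea in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' algorithm: central differences stand in for the normal estimator, and the helper names, the 40-degree threshold, and the kernel radius are arbitrary assumptions.

```python
import numpy as np

def central_diff_normals(vol):
    """Per-voxel normal estimates from central-difference gradients
    (a standard estimator; stands in for whatever filter is used)."""
    g = np.stack(np.gradient(vol.astype(float)), axis=-1)   # (X, Y, Z, 3)
    n = np.linalg.norm(g, axis=-1, keepdims=True)
    return g / np.where(n > 0, n, 1.0)

def selective_average(normals, idx, threshold_deg=40.0, radius=1):
    """Average the normal at `idx` with only those kernel neighbors whose
    normals deviate by less than the threshold, so sharp edges survive.
    The threshold and radius are illustrative choices, not from the paper."""
    x, y, z = idx
    center = normals[x, y, z]
    cos_thr = np.cos(np.radians(threshold_deg))
    acc = np.zeros(3)
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            for dz in range(-radius, radius + 1):
                nb = normals[x + dx, y + dy, z + dz]
                if np.dot(center, nb) >= cos_thr:   # keep similar orientations only
                    acc += nb
    n = np.linalg.norm(acc)
    return acc / n if n > 0 else center
```

On a flat ramp every neighbor passes the test and the average equals the central normal; across a sharp edge, dissimilar normals on the far side are excluded instead of being blended in.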


... Voxel-based volume rendering, or direct volume rendering (DVR), is used to display a 2D projection of a 3D discretely sampled dataset using a 3D volume raycasting technique (34)(35)(36). Though volume rendering allows a single visualization to depict all the information within a volumetric dataset (37), its generation inevitably requires time-consuming manual specification of a large number of rendering parameters. ...
Article
Full-text available
Introduction: Three-dimensional (3D) tools have played a significant role in advancing anatomical knowledge, simulation, and clinical practice in Otology. Technology is evolving at a rapid rate, with new applications being reported at an overwhelming pace. It is important to continuously review new applications, assess emerging trends, and identify challenges to innovation so that clinical translation progresses in an efficient and evidence-based manner. Methods: A review of 20 years of literature on 3D technology specific to Otology was undertaken using the Medline, Embase, PubMed, and Google Scholar search engines. Trends in the literature were analyzed as applications are evaluated and adopted into clinical practice. A literature review was conducted to identify barriers to translation. Results: There was an increasing volume of literature reporting innovations in 3D technology in Otology, with a more recent increase in reviews and meta-analyses. The most marked trend was in literature regarding clinical applications of such technology and in 3D printing. While this may indicate that translation of these technologies is adequate, this is not reflected in routine clinical practice or even in education and training platforms. Conclusion: Barriers to translation of 3D tools specific to Otology include ongoing challenges in attaining high-resolution data and rendering parameters and, with the advent of 3D printing, a multitude of new variables in software, printers, and materials that add complexity to selecting the most appropriate options. These need methodical evaluation to selectively customize solutions to clinical challenges so that effective translation, scale, and adoption can occur without causing confusion about choices.
Article
Full-text available
In this paper a feature-preserving volume filtering method is presented. The basic idea is to minimize a three-component global error function penalizing the density and gradient errors and the curvature of the unknown filtered function. The optimization problem leads to a large linear equation system defined by a sparse coefficient matrix. We will show that such an equation system can be efficiently solved in the frequency domain using the fast Fourier transform (FFT). For the sake of clarity, first we illustrate our method on a 2D example, a dedithering problem. Afterwards the 3D extension is discussed in detail, since we propose our method mainly for volume filtering. We will show that the 3D version can be efficiently used for elimination of the typical staircase artifacts of direct volume rendering without losing fine details. Unlike local filtering techniques, our novel approach ensures a global smoothing effect. Previous global 3D methods are restricted to binary volumes or segmented iso-surfaces and are based on area minimization of one single reconstructed surface. In contrast, our method is a general volume-filtering technique, implicitly smoothing all the iso-surfaces at the same time. Although the strength of the presented algorithm is demonstrated on a specific 2D and a specific 3D application, it is considered a general mathematical tool for processing images and volumes.
Conference Paper
Full-text available
The paper presents two methods that improve gradient estimation in 3D voxel space. Gradient estimation is an important step in the rendering and shading process for obtaining realistic and smooth final images of visualized objects. The most widely used gradient estimation methods (gray-level gradient methods, Z-buffer gradient methods, and binary gradient methods) in some cases produce artifacts that appear as dark areas and staircase structures in the final image. To deal with this problem, two new methods for gradient estimation are suggested: the reverse gradient method and the angle difference method. The new methods were tested and compared with other gradient estimation methods. Measurements made on both the data and image levels have shown that both developed methods improve the quality of volume data rendering.
Conference Paper
Full-text available
For applications of volume visualization in medicine, it is important to assure that the 3D images show the true anatomical situation, or at least to know about their limitations. In this paper, various methods for evaluation of image quality are reviewed. They are classified based on the fundamental terms of intelligibility and fidelity, and discussed with respect to the question what clues they provide on how to choose parameters, or improve imaging and visualization procedures.
Conference Paper
Full-text available
The task of reconstructing the derivative of a discrete function is essential for its shading and rendering as well as being widely used in image processing and analysis. We survey the possible methods for normal estimation in volume rendering and divide them into two classes based on the delivered numerical accuracy. The three members of the first class determine the normal in two steps by employing both interpolation and derivative filters. Among these is a new method which has never been realized. The members of the first class are all equally accurate. The second class has only one member and employs a continuous derivative filter obtained through the analytic derivation of an interpolation filter. We use the new method to analytically compare the accuracy of the first class with that of the second. As a result of our analysis we show that even inexpensive schemes can in fact be more accurate than high order methods. We describe the theoretical computational cost of applying the schemes in a volume rendering application and provide guidelines for helping one choose a scheme for estimating derivatives. In particular we find that the new method can be very inexpensive and can compete with the normal estimations which pre-shade and pre-classify the volume (M. Levoy, 1988).
Article
Full-text available
Two-dimensional images of 3D objects require realistic shading to create the illusion of depth. Traditional (object space) shading methods require extra data (normal vectors) to be stored with the object description. When object representations are obtained directly from measured data, these normal vectors may be expensive to compute; if the object is modified interactively, they must be recomputed frequently. To avoid these problems a simple shading method is devised which uses only information available in image space, after coordinates have been transformed, hidden surfaces removed, and a complete pre-image of all objects has been assembled. The method uses both the distance from the light source and the surface orientation as the basis for shading. The theory and its implementation are discussed and shaded images of a number of objects are presented.
Article
Full-text available
With the increasing availability of high-resolution isotropic three- or four-dimensional medical datasets from sources such as magnetic resonance imaging, computed tomography, and ultrasound, volumetric image visualization techniques have increased in importance. Over the past two decades, a number of new algorithms and improvements have been developed for practical clinical image display. More recently, further efficiencies have been attained by designing and implementing volume-rendering algorithms on graphics processing units (GPUs). In this paper, we review volumetric image visualization pipelines, algorithms, and medical applications. We also illustrate our algorithm implementation and evaluation results, and address the advantages and drawbacks of each algorithm in terms of image quality and efficiency. Within the outlined literature review, we have integrated our research results relating to new visualization, classification, enhancement, and multimodal data dynamic rendering. Finally, we illustrate issues related to modern GPU working pipelines, and their applications in the volume visualization domain.
Conference Paper
Full-text available
This paper evaluates and compares four volume rendering algorithms that have become rather popular for rendering datasets described on uniform rectilinear grids: raycasting, splatting, shear-warp, and hardware-assisted 3D texture-mapping. In order to assess both the strengths and the weaknesses of these algorithms in a wide variety of scenarios, a set of real-life benchmark datasets with different characteristics was carefully selected. In the rendering, all algorithm-independent image synthesis parameters, such as viewing matrix, transfer functions, and optical model, were kept constant to enable a fair comparison of the rendering results. Both image quality and computational complexity were evaluated and compared, with the aim of providing both researchers and practitioners with guidelines on which algorithm is most suited in which scenario. Our analysis also indicates the current weakness in each algorithm's pipeline, and possible solutions to these as well as pointers for future research are offered.
Article
Full-text available
Gradient information is used in volume rendering to classify and color samples along a ray. In this paper, we present an analysis of the theoretically ideal gradient estimator and compare it to some commonly used gradient estimators. A new method is presented to calculate the gradient at arbitrary sample positions, using the derivative of the interpolation filter as the basis for the new gradient filter. As an example, we will discuss the use of the derivative of the cubic spline. Comparisons with several other methods are demonstrated. Computational efficiency can be realized, since parts of the interpolation computation can be leveraged in the gradient estimation.
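The "derivative of the interpolation filter" idea can be illustrated in 1D. Here a Catmull-Rom spline serves as the cubic interpolant (a stand-in choice for illustration; the paper discusses the cubic spline): differentiating the interpolation polynomial analytically yields a continuous gradient filter that reuses the interpolation coefficients.

```python
def catmull_rom(p, t):
    """Cubic Catmull-Rom interpolation of samples p[0..3] at t in [0, 1]
    (the interval between p[1] and p[2])."""
    p0, p1, p2, p3 = p
    return 0.5 * (2 * p1 + (p2 - p0) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (3 * p1 - 3 * p2 + p3 - p0) * t ** 3)

def catmull_rom_deriv(p, t):
    """Analytic derivative of the same interpolant: the 'derivative of the
    interpolation filter' used directly as a continuous gradient filter."""
    p0, p1, p2, p3 = p
    return 0.5 * ((p2 - p0)
                  + 2 * (2 * p0 - 5 * p1 + 4 * p2 - p3) * t
                  + 3 * (3 * p1 - 3 * p2 + p3 - p0) * t ** 2)
```

Because interpolation and derivative share the same polynomial coefficients, a renderer that already interpolates a sample gets the gradient component almost for free, which is the efficiency the abstract refers to.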
Article
Full-text available
For the 3D reconstruction of organ surfaces from tomograms, a shading method based on the partial volume effect is presented. In contrast to methods based on the depth and/or the angle of the voxel surface, here the gray-level gradient along the surface is used for shading. It is shown that, at least for bone and soft tissue surfaces, the results are superior to conventional shading. This is due to the high dynamic range of the gray levels within a small spatial neighborhood.
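The gray-level gradient shading idea can be sketched as central differences followed by Lambertian shading. This is an illustrative sketch, not the paper's implementation; the helper name and single-light diffuse model are assumptions.

```python
import numpy as np

def gray_level_gradient_shade(vol, idx, light):
    """Diffuse shade at a voxel using the gray-level gradient (central
    differences) as the surface normal, rather than the depth or the
    orientation of a binary voxel face."""
    x, y, z = idx
    g = np.array([vol[x + 1, y, z] - vol[x - 1, y, z],
                  vol[x, y + 1, z] - vol[x, y - 1, z],
                  vol[x, y, z + 1] - vol[x, y, z - 1]], dtype=float) / 2.0
    n = g / np.linalg.norm(g)
    l = np.asarray(light, dtype=float)
    l = l / np.linalg.norm(l)
    return max(0.0, float(np.dot(n, l)))   # Lambert's cosine law
```

The gradient exploits exactly the partial-volume effect the abstract mentions: gray levels near a tissue boundary vary smoothly, so the difference of neighbors encodes the surface orientation.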
Article
Full-text available
Shaded overlays for maps give the user an immediate appreciation for the surface topography since they appeal to an important visual depth cue. A brief review of the history of manual methods is followed by a discussion of a number of methods that have been proposed for the automatic generation of shaded overlays. These techniques are compared using the reflectance map as a common representation for the dependence of tone or gray level on the orientation of surface elements.
The Fifth International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2002) was held in Tokyo from September 25th to 28th, 2002. This was the first time that the conference was held in Asia since its foundation in 1998. The objective of the conference is to offer clinicians and scientists the opportunity to collaboratively create and explore new medical fields. Specifically, MICCAI offers a forum for the discussion of the state of the art in computer-assisted interventions, medical robotics, and image processing among experts from multidisciplinary professions, including but not limited to clinical doctors, computer scientists, and mechanical and biomedical engineers. The expectations of society are very high; the advancement of medicine will depend on computer and device technology in the coming decades, as it did in the last decades. We received 321 manuscripts, of which 41 were chosen for oral presentation and 143 for poster presentation. Each paper has been included in these proceedings in an eight-page full-paper format, without any differentiation between oral and poster papers. Adherence to this full-paper format, along with the increased number of manuscripts, surpassing all our expectations, has led us to issue two proceedings volumes for the first time in MICCAI's history. Keeping to a single volume by assigning fewer pages to each paper was certainly an option for us considering our budget constraints. However, we decided to increase the volume to offer authors maximum opportunity to present the state of the art in their work and to initiate constructive discussions among the MICCAI audience.
Article
Validation is now considered a mandatory step in any development of new medical image processing methods, systems, or components. It allows studying performance and adequacy with clinical applications, and comparison with similar solutions. Being able to correctly answer these three issues requires proper design, performance, and reporting of validation studies and results. This can be made easier by 1) the use of standardized solutions and 2) the explicit formalization and description of validation study components when designing and reporting such studies. In this paper, we will identify the major components involved in a validation study of a medical image processing (MIP) method, embedded or not in a wider clinical system. Emphasis will be given to the study conditions, and especially to the validation data sets. The main freely available validation data sets usable for MIP validation will be listed and briefly described. Finally, we will outline the need for validating the validation study itself and explain the two main aspects this includes.
Article
The generation of 3D solid objects, and more generally solid geometric modelling, is very important in Computer Aided Design (CAD). This paper presents a simple but effective algorithm for automated display of perspective views of Constructive Solid Geometry (CSG) scene models. This algorithm can be implemented as a module in such a way that it is easily integrated, without any modification, to the present systems of “pixel-based Z-Buffer” CSG renderers. An implementation of the algorithm for such a system is also given in the paper.
Article
Interface problems arise in many applications. For example, when there are two different materials, such as water and oil, or the same material but at different states, such as water and ice, we are dealing with an interface problem. If partial or ordinary differential equations are used to model these applications, the parameters in the governing differential equations are typically discontinuous across the interface separating two materials or two states, and the source terms are often singular to reflect source/sink distributions along codimensional interfaces. Because of these irregularities, the solutions to the differential equations are typically nonsmooth, or even discontinuous as in the example of the pressure inside and outside an inflated balloon. As a result, many standard numerical methods based on the assumption of smoothness of solutions do not work or work poorly for interface problems. Another type of problem involves differential equations defined on irregular domains. Examples include underground water flow passing through different objects such as stones, sponges, etc. In a free boundary problem, not only is the domain arbitrary but it also changes with time. For interface problems and problems defined on irregular domains, analytic solutions are rarely available. The rapid development of modern computers has made it possible to find numerical solutions of these problems. Standard finite difference methods based on simple grids will likely lead to loss of accuracy in a neighborhood of interfaces or near irregular boundaries. While there are some sophisticated methods and software packages for interface and irregular domain problems, the complexity and/or the extra effort needed for learning these methods and software packages are obstacles for nonexperts. 
The cost and limitations of possible mesh generation processes for complicated geometries at every or every other time step are also major concerns for moving interface or free boundary problems.
Conference Paper
As mobile robotics is gradually moving towards a level of semantic environment understanding, robust 3D object recognition plays an increasingly important role. One of the most crucial prerequisites for object recognition is a set of fast algorithms for geometry segmentation and extraction, which in turn rely on surface normal vectors as a fundamental feature. Although there exists a plethora of different approaches for estimating normal vectors from 3D point clouds, it is largely unclear which methods are preferable for online processing on a mobile robot. This paper presents a detailed analysis and comparison of existing methods for surface normal estimation with a special emphasis on the trade-off between quality and speed. The study sheds light on the computational complexity as well as the qualitative differences between methods and provides guidelines on choosing the 'right' algorithm for the robotics practitioner. The robustness of the methods with respect to noise and neighborhood size is analyzed. All algorithms are benchmarked with simulated as well as real 3D laser data obtained from a mobile robot.
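Among the approaches such comparisons cover, the plane-fit (PCA) estimator is a common baseline: the normal is taken as the eigenvector of the local covariance matrix with the smallest eigenvalue. Below is a minimal brute-force sketch, illustrative only (not the paper's benchmarked implementation; the neighborhood size `k` is an arbitrary choice).

```python
import numpy as np

def pca_normals(points, k=8):
    """Normal per point = eigenvector of the local covariance matrix with the
    smallest eigenvalue (plane fit / PCA). Brute-force k-NN for clarity."""
    pts = np.asarray(points, dtype=float)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)   # all pairwise squared distances
    normals = np.empty_like(pts)
    for i in range(len(pts)):
        nb = pts[np.argsort(d2[i])[:k]]          # k nearest (includes the point itself)
        cov = np.cov((nb - nb.mean(axis=0)).T)
        w, v = np.linalg.eigh(cov)               # eigenvalues in ascending order
        normals[i] = v[:, 0]                     # direction of least local variance
    return normals
```

The sign of each normal is ambiguous from the fit alone; in practice it is disambiguated, e.g. by orienting toward the sensor viewpoint. The O(n^2) neighbor search is exactly the kind of cost that the quality-versus-speed trade-off in the paper is about.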
Article
The currently-existing 3-D imaging technologies provide 3-D digital images of scenes in the form of 3-D arrays of numbers. In the representation of an object as a set of voxels, a boundary surface of the object is a set of faces of voxels. In spite of the large number of faces that form a boundary surface of a typical object, fast algorithms for hidden part suppression and shading of such surfaces are made possible by the simplicity of the geometry of the 3-D voxel environment. However, because of the very limited number (only three) of orientations of the faces, the shading rule based on the direction cosine of the face normals and distance of the faces from the observer sometimes produces rough display images of originally smooth surfaces. This causes an undesirable change of smoothness of the display image from one view to another in a dynamic mode of display. We propose a contextual shading scheme which assigns shading to a face based on the local shape of the surface in the neighborhood of the face. The number of computations required per face is kept to a minimum by precomputing and storing all the possible direction cosines used for shading. The new shading algorithm has speeds comparable to that of the algorithm based on three face orientations, and produces far better display images.
Article
Three-dimensional voxel-based objects are inherently discrete and do not maintain any notion of a continuous surface or normal values, which are crucial for the simulation of light behavior. Thus, in volume rendering, the normal vector of the displayed surfaces must be estimated prior to rendering. We survey several methods for normal estimation and analyze their performance. One unique method, the context-sensitive approach, employs segmentation and segment-bounded operators that are based on object and slope discontinuities in order to achieve high fidelity normal estimation for rendering volumetric objects.
Article
The understanding of complex craniofacial deformities has been aided by high resolution computed tomography. Nonetheless, the planar format limits spatial comprehension. Reconstruction of fully three-dimensional bony and soft tissue surfaces from high resolution CT scans has been accomplished by a level slicing edge detector coupled to a hidden surface processor without perspective depth transformation. This method has clarified aberrant anatomy, facilitated surgical planning and improved quantitative postoperative evaluation in more than 200 clinical cases. Advanced computer aided design techniques, originally developed for the manufacture of military aircraft, have been applied to the planning and evaluation of craniofacial procedures as well. This allows the application of interactive digital graphic technology to surgical patient management.
Article
Modern scanning techniques, such as computed tomography, have begun to produce true three-dimensional imagery of internal structures. The first stage in finding structure in these images, like that for standard two-dimensional images, is to evaluate a local edge operator over the image. If an edge segment in two dimensions is modeled as an oriented unit line segment that separates unit squares (i.e., pixels) of different intensities, then a three-dimensional edge segment is an oriented unit plane that separates unit volumes (i.e., voxels) of different intensities. In this correspondence we derive an operator that finds the best oriented plane at each point in the image. This operator, which is based directly on the 3-D problem, complements other approaches that are either interactive or heuristic extensions of 2-D techniques.
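As a concrete, simplified instance of such a 3D operator, 2D derivative kernels can be extended to voxels as a separable 3D Sobel-style filter. This illustrates the idea of a volumetric edge operator only; it is not the authors' optimal oriented-plane derivation.

```python
import numpy as np

def sobel3d_kernels():
    """Separable 3D Sobel-style kernels: derivative [-1, 0, 1] along one
    axis, smoothing [1, 2, 1] along the other two."""
    d = np.array([-1.0, 0.0, 1.0])
    s = np.array([1.0, 2.0, 1.0])
    kx = d[:, None, None] * s[None, :, None] * s[None, None, :]
    # Permute axes to get the derivative along y and z instead of x.
    return kx, kx.transpose(1, 0, 2), kx.transpose(1, 2, 0)

def edge_vector(vol, x, y, z):
    """Unnormalized orientation of the best-fitting intensity step at a
    voxel: correlate the 3x3x3 neighborhood with each kernel."""
    patch = vol[x - 1:x + 2, y - 1:y + 2, z - 1:z + 2]
    return np.array([(k * patch).sum() for k in sobel3d_kernels()])
```

The resulting vector points across the separating plane between voxels of different intensities, and its magnitude grows with the intensity contrast.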
Article
A method for the qualitative understanding of low-resolution binary objects is presented. To the observer the object appears as a pseudophotograph of a continuous object. A good impression of the 3D shape of the object is conveyed by the shaded planar image. By using a fast storage and boundary extraction scheme, the shaded image is produced quickly.
Article
In this paper a new gradient estimation method is presented which is based on linear regression. Previous contextual shading techniques try to fit an approximate function to a set of surface points in the neighborhood of a given voxel. Therefore a system of linear equations has to be solved using the computationally expensive Gaussian elimination. In contrast, our method approximates the density function itself in a local neighborhood with a 3D regression hyperplane. This approach also leads to a system of linear equations, but we will show that it can be solved with an efficient convolution. Our method provides at each voxel location the normal vector and the translation of the regression hyperplane, which are considered a gradient and a filtered density value, respectively. Therefore this technique can be used for surface smoothing and gradient estimation at the same time.
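To make the convolution claim concrete: for a 3x3x3 neighborhood with uniform weights, the least-squares plane fit decouples, and each gradient component becomes a correlation of the data with the offset pattern, normalized by the sum of squared offsets (18 here). The sketch below uses explicit loops over interior voxels in place of an efficient convolution, and uniform weights as a simplifying assumption.

```python
import numpy as np

def regression_gradient(vol):
    """Gradient by fitting a regression hyperplane f ~ a*dx + b*dy + c*dz + d
    over each 3x3x3 neighborhood. With uniform weights the least-squares
    solution decouples: a = sum(dx * f) / sum(dx^2), and sum(dx^2) = 18."""
    vol = vol.astype(float)
    off = np.array([-1.0, 0.0, 1.0])
    grad = np.zeros(vol.shape + (3,))
    for x in range(1, vol.shape[0] - 1):
        for y in range(1, vol.shape[1] - 1):
            for z in range(1, vol.shape[2] - 1):
                patch = vol[x - 1:x + 2, y - 1:y + 2, z - 1:z + 2]
                grad[x, y, z, 0] = (off[:, None, None] * patch).sum() / 18.0
                grad[x, y, z, 1] = (off[None, :, None] * patch).sum() / 18.0
                grad[x, y, z, 2] = (off[None, None, :] * patch).sum() / 18.0
    return grad
```

Because every component is a fixed kernel applied to the data, the whole fit can indeed be evaluated as three convolutions, which is the efficiency argument of the paper.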
Article
This paper presents a novel technique for estimating normals on unorganized point clouds. Methods from robust statistics are used to detect the best local tangent plane for each point. Therefore the algorithm is capable of dealing with points located in high-curvature regions or near/on complex sharp features, while being highly robust with respect to noise and outliers. In particular, the presented method reliably recovers sharp features but does not require the tedious manual parameter tuning required by current methods. The key ingredients of our approach are a robust noise-scale estimator and a kernel density estimation (KDE) based objective function. In contrast to previous approaches, the noise-scale estimation is not affected by sharp features and achieves high accuracy even in the presence of outliers. In addition, our normal estimation procedure allows detection and elimination of outliers. We confirm the validity and reliability of our approach on synthetic and measured data and demonstrate applications to point cloud denoising.
Article
Various shaded hidden-surface display techniques have been used to render voxel data. In this paper, an approach to using general ray tracing for the rendering of voxel data is presented. Central to this approach is the interpolation of a surface with respect to the volume of a given voxel and its neighbors. Nine columns (of three voxel volumes each) provide sufficient constraints on the integral of a biquadratic function over the column's base to solve its specific coefficients. The use of these locally interpolated surfaces to define a scene to be ray traced is investigated.
Article
A shading technique for voxel-based images, termed congradient shading, is presented. As surface information is not available in a voxel representation, the surface normal must be recovered from the 3D discrete voxel map itself. The technique defines the normal as one of a finite set of neighborhood-estimated gradients and can thus employ precalculated look-up tables. Furthermore, a table-driven mechanism permits changing the light source parameters by merely redefining the look-up table. The technique uses only simple arithmetic operations and is thus suitable for hardware implementation. Since it has been implemented not as a post-processor but as part of the projection pipeline of the cube architecture, congradient shading can be executed in real time. Two versions of the technique have been conceived and implemented: unidirectional shading, in which the gradient is estimated only from neighborhoods along the scan-lines; and bidirectional shading, in which both horizontal and vertical components of the gradient are considered. In spite of the simplicity of the technique, the results are practically indistinguishable from images generated by conventional techniques.
Article
The use of fully interactive 3-D workstations with true real-time performance will become increasingly common as technology matures and economical commercial systems become available. This paper provides a comprehensive introduction to high speed approaches to the display and manipulation of 3-D medical objects obtained from tomographic data acquisition systems such as CT, MR, and PET. A variety of techniques are outlined including the use of software on conventional minicomputers, hardware assist devices such as array processors and programmable frame buffers, and special purpose computer architecture for dedicated high performance systems. While both algorithms and architectures are addressed, the major theme centers around the utilization of hardware-based approaches including parallel processors for the implementation of true real-time systems.
Article
Stair-step artifacts in helical computed tomography (CT) are associated with inclined surfaces in longitudinal sections. The authors investigated the origin and the characteristics of the artifacts. A cone phantom and a skull were dry-scanned with a helical CT scanner, and images were reconstructed by using the half-scan interpolation algorithm with combinations of detector collimation (1 and 5 mm), table feed (1, 2, 5, and 10 mm), and reconstruction interval (1, 2, 5, and 10 mm). Stair-step artifacts were perceived in most instances. Stair-step artifacts arose from two sources: large reconstruction intervals and asymmetric helix interpolation, forming isoclosed curves and spirallike patterns in three-dimensional axial views, respectively. To eliminate the stair-step artifacts, both the collimation and the table feed should be less than the longitudinal dimension of the important feature on inclined surfaces, and the reconstruction interval should be less than the table feed. Adaptive interpolation may correct the artifacts.
Conference Paper
To visualize volume data acquired from computation or sampling, it is necessary to estimate normals at the points corresponding to object surfaces. Volume data does not hold geometric information for the points comprising a surface, so normals must be calculated using local information at each point. Existing normal estimation methods have the problem of estimating incorrect normals at discontinuous, aliased, or noisy points. Yagel et al. (1992) solved some of these problems with their context-sensitive method. However, this method requires too much processing time, and it loses some information on detailed parts of the object surfaces. This paper proposes a surface-characteristic-sensitive normal estimation method which applies different operators for normal calculation according to the characteristics of each surface. This method has the same advantages as the context-sensitive method, along with others such as less processing time and reduced information loss on detailed parts.
Article
Introduces a new concept for alias-free voxelization of geometric objects based on a voxelization model (V-model). The V-model of an object is its representation in 3D continuous space by a trivariate density function. This function is sampled during the voxelization and the resulting values are stored in a volume buffer. This concept enables us to study general issues of sampling and rendering separately from object-specific design issues. It provides us with a possibility to design such V-models, which are correct from the point of view of both the sampling and rendering, thus leading to both alias-free volumetric representation and alias-free rendered images. We performed numerous experiments with different combinations of V-models and reconstruction techniques. We have shown that the V-model with a Gaussian surface density profile combined with tricubic interpolation and Gabor derivative reconstruction outperforms the previously published technique with a linear density profile. This enables higher fidelity of images rendered from volume data due to increased sharpness of edges and thinner surface patches
Article
Medical imaging devices (such as computed tomography scanners) often produce data on real objects by assigning values to small, rectangular, abutting volumes of space. The cuberille model, in which space is dissected by three mutually orthogonal sets of parallel planes, represents such data quite well. Segmentation of the cuberille into object and background, followed by boundary detection, gives an approximation (consisting of faces in the cuberille) to the real object's surface. In order to render the detected boundary surface on a display screen so that details are retained but the appearance of a real object (as opposed to one consisting of cubes) is provided, the authors propose a number of shading methods. Their performance is compared with that of existing methods using medical and artificial objects. Overall, they find the new normal-based contextual shading method superior to the others they have tested; it provides images practically indistinguishable from those produced by Phong shading at significantly reduced cost.
Article
This paper is a survey of volume visualization. It includes an introduction to volumetric data; surface rendering techniques for volume data; volume rendering techniques, including image-order, object-order, and domain techniques; optimization methods for volume rendering; special-purpose volume rendering hardware; global illumination of volumetric data, including volumetric ray tracing and volumetric radiosity; irregular grid rendering; and volume graphics, with several volume modeling techniques, such as voxelization, texture mapping, amorphous phenomena, block operations, constructive solid modeling, and volume sculpting.
Article
This paper is a survey of volume visualization, volume graphics, and volume rendering techniques. It focuses specifically on the use of the voxel representation and volumetric techniques for geometric applications. 1. Introduction Volume data are 3D entities that may have information inside them, might not consist of surfaces and edges, or might be too voluminous to be represented geometrically. Volume visualization is a method of extracting meaningful information from volumetric data using interactive graphics and imaging, and it is concerned with volume data representation, modeling, manipulation, and rendering [49]. Volume data are obtained by sampling, simulation, or modeling techniques. For example, a sequence of 2D slices obtained from Magnetic Resonance Imaging (MRI) or Computed Tomography (CT) is 3D reconstructed into a volume model and visualized for diagnostic purposes or for planning of treatment or surgery. The same technology is often used with industrial CT for non-des...
Article
Three-dimensional voxel-based objects are inherently discrete and do not maintain any notion of a continuous surface or normal values, which are crucial for the simulation of light behavior. Thus in volume rendering, the normal vector of the displayed surfaces must be estimated prior to rendering. We survey several methods for normal estimation and analyze their performance. One unique method, the context sensitive approach, employs segmentation and segment-bounded operators that are based on object and slope discontinuities in order to achieve high fidelity normal estimation for rendering volumetric objects. Key Words: discrete shading, volume rendering, filtering, segmentation, volume visualization. 1. INTRODUCTION The use of volume representation in graphics and imaging has seen great progress in the last decade. The availability of multi-dimensional scanners, mainly in the biomedical fields, coupled with enhanced computing power for t ...
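The survey above concerns estimating normals from discrete voxel data. A common baseline in this literature is the gray-level gradient computed with central differences, with the gradient direction taken as the surface normal. The following is a minimal sketch of that baseline (the function name and array shapes are illustrative, not taken from the surveyed paper):

```python
import numpy as np

def gradient_normals(vol):
    """Estimate a normal at every interior voxel of a 3D scalar volume
    using the central-difference (gray-level gradient) operator."""
    # Central differences along each axis, evaluated on the interior voxels
    gx = (vol[2:, 1:-1, 1:-1] - vol[:-2, 1:-1, 1:-1]) * 0.5
    gy = (vol[1:-1, 2:, 1:-1] - vol[1:-1, :-2, 1:-1]) * 0.5
    gz = (vol[1:-1, 1:-1, 2:] - vol[1:-1, 1:-1, :-2]) * 0.5
    g = np.stack([gx, gy, gz], axis=-1)
    # Normalize; the epsilon avoids division by zero in flat regions
    norm = np.linalg.norm(g, axis=-1, keepdims=True)
    return g / np.maximum(norm, 1e-12)
```

Because this operator averages across a voxel's 6-neighborhood without regard to discontinuities, it smooths normals across sharp edges, which is exactly the over-smoothing problem that selective, context-sensitive schemes aim to avoid.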
Image quality metrics encyclopedia of imaging science and technology
  • N Burningham
  • Z Pizlo
  • J P Allebach
Burningham, N., Pizlo, Z., & Allebach, J. P. (2002). Image quality metrics encyclopedia of imaging science and technology. New York: John Wiley and Sons Ltd.
Efficient normal estimation using variable-size operator
  • B S Shin
Shin, B. S. (1999). Efficient normal estimation using variable-size operator. The Journal of Visualization and Computer Animation, 10, 91-107. doi:10.1002/(SICI)1099-1778(199904/06)10:2<91::AID-VIS199>3.0.CO;2-W
Volume graphics: Field-based modeling and rendering (Unpublished doctoral dissertation)
  • A S Winter
Winter, A. S. (2002). Volume graphics: Field-based modeling and rendering (Unpublished doctoral dissertation). University of Wales, Swansea.
Feature-preserving volume filtering
  • L Neumann
  • B Csebfalvi
  • I Viola
  • M Mlejnek
  • E Gröller
Neumann, L., Csebfalvi, B., Viola, I., Mlejnek, M., & Gröller, E. (2002). Feature-preserving volume filtering. In Proceedings of the Symposium on Data Visualization (pp. 105-114).
Three-dimensional display of medical image volumes
  • D S Schlusselberg
  • K Smith
  • D J Woodward
Schlusselberg, D. S., Smith, K., & Woodward, D. J. (1986). Three-dimensional display of medical image volumes. In Proceedings of the NCGA Conference (Vol. 3, pp. 114-123).
Three dimensional computer graphics for craniofacial surgical planning and evaluation
  • M W Vannier
  • J L Marsh
  • J O Warren
Vannier, M. W., Marsh, J. L., & Warren, J. O. (1983). Three dimensional computer graphics for craniofacial surgical planning and evaluation. ACM SIGGRAPH Computer Graphics, 17, 263-273. doi:10.1145/800059.801157