Figure 2 - uploaded by Kenny Gruchalla
1: A volume rendering of the visible human male CT data. A two-dimensional transfer function was used to visualize the skeletal structure and the surface of the skin from this data. Many existing volume rendering techniques are tailored to isolating material boundaries in medical data, and are far less applicable to CFD data in which the structures of interest are not characterized by distinct boundary transitions. Data courtesy of National Library of Medicine and the National Institutes of Health.
Source publication
One of the barriers to visualization-enabled scientific discovery is the difficulty in clearly and quantitatively articulating the meaning of a visualization, particularly in the exploration of relationships between multiple variables in large-scale data sets. This issue becomes more complicated in the visualization of three-dimensional turbulence...
Contexts in source publication
Context 1
... vast majority of applied volume rendering research has been in the field of medical imaging. The canonical example is visualization of medical data obtained by computed tomography (CT), magnetic resonance imaging (MRI), or positron emission tomography (PET), which can be constructed into a volume model and visualized in three dimensions (see Figure 2.1) for diagnostic or surgical planning purposes. ...
Context 2
... discrete volumetric data set is a set of samples (x,y,z,v), or voxels, representing the value v of some property at the spatial position (x,y,z). A voxel, the three-dimensional analogue to the two-dimensional pixel (see Figure 2.2), is most correctly [Smith, 1995] represented as a point sample in three space. The sample point value may represent a scalar property (e.g., density, heat, or pressure) or a vector property (e.g., a velocity or magnetic field vector). ...
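The voxel-as-point-sample model described above can be sketched in a few lines. This is a minimal illustration with hypothetical data, not the CT volume from the figure; a scalar property is stored in a 3D array, and a vector property would simply add a trailing component axis.

```python
import numpy as np

# A discrete volumetric data set: each voxel (x, y, z, v) is a point sample
# of some property v at spatial position (x, y, z). Hypothetical example data.
volume = np.zeros((4, 4, 4), dtype=np.float32)
volume[2, 1, 3] = 0.5  # scalar value v = 0.5 at position (x=2, y=1, z=3)

# A vector property (e.g., velocity) adds a component axis: shape (nx, ny, nz, 3).
velocity = np.zeros((4, 4, 4, 3), dtype=np.float32)

def sample(vol, x, y, z):
    """Return the point-sample value v at integer voxel coordinates."""
    return float(vol[x, y, z])

print(sample(volume, 2, 1, 3))  # 0.5
```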
Context 3
... equation is called the volume rendering integral. It computes the color and intensity, I(x, y, r), from a ray of light, r, passing through the volume that is received at the point (x, y) on the viewing plane (see Figure 2.3). How the ray is generated varies among volume rendering techniques, which can generally be classified as image-order or object-order algorithms (see Table 2 ...
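An image-order (ray-casting) evaluation of the discretized volume rendering integral reduces, per ray, to front-to-back compositing of emitted color attenuated by accumulated transparency. The sketch below uses hypothetical sample values along one ray; it is not tied to any particular renderer.

```python
def composite_ray(colors, opacities):
    """Front-to-back compositing: a discretized form of the volume rendering
    integral for one ray r, yielding the intensity I received at pixel (x, y)."""
    I = 0.0  # accumulated intensity
    T = 1.0  # accumulated transmittance (transparency so far)
    for c, a in zip(colors, opacities):
        I += T * a * c
        T *= (1.0 - a)
        if T < 1e-4:  # early ray termination: the rest of the ray is occluded
            break
    return I

# Two hypothetical samples along a ray: 0.5 + 0.5 * 0.5 * 0.5 = 0.625
print(composite_ray([1.0, 0.5], [0.5, 0.5]))  # 0.625
```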
Context 4
... most common is the central differences method. A volume can be rendered without using the diffuse and specular terms of the illumination model (Equation 2.3), and many volume rendering implementations do not provide illumination support; however, these terms can substantially increase the realism and understanding of a volume rendering for some types of data (see Figure 2.4). ...
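The central differences method mentioned above estimates each gradient component from the two neighboring voxels along that axis. A minimal sketch, using a hypothetical linear-ramp volume so the expected gradient is known:

```python
import numpy as np

def gradient_central_differences(vol, x, y, z):
    """Estimate the gradient at an interior voxel by central differences;
    the normalized gradient typically serves as the surface normal for shading."""
    gx = (vol[x + 1, y, z] - vol[x - 1, y, z]) / 2.0
    gy = (vol[x, y + 1, z] - vol[x, y - 1, z]) / 2.0
    gz = (vol[x, y, z + 1] - vol[x, y, z - 1]) / 2.0
    return np.array([gx, gy, gz])

# Hypothetical volume: a linear ramp along x, so the gradient is (1, 0, 0).
vol = np.fromfunction(lambda x, y, z: x * 1.0, (5, 5, 5))
print(gradient_central_differences(vol, 2, 2, 2))  # [1. 0. 0.]
```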
Context 5
... evaluation of this optical model can be viewed as a pipeline of computational and data flow stages. The common pipeline stages of the volume rendering process are: data traversal, interpolation, gradient estimation, classification, shading, and compositing [Pfister, 2005] (see Figure 2.5). In the data traversal stage, sampling positions are chosen throughout the volume, creating the basis for the discretization of the volume rendering integral. ...
Context 6
... Transfer functions have many degrees of freedom in which the user can become lost. For example, a simple transfer function defined as a series of linear ramps (see Figure 2.6) adds two degrees of freedom for every control point. ...
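A transfer function built from linear ramps can be sketched as piecewise-linear interpolation over a list of control points; each (value, opacity) pair contributes the two degrees of freedom noted above. The control points here are hypothetical:

```python
import numpy as np

# A 1D transfer function as a series of linear ramps. Each control point
# (data value, opacity) adds two degrees of freedom the user must manage.
control_values    = [0.0, 0.25, 0.5, 1.0]
control_opacities = [0.0, 0.0,  0.8, 0.2]

def transfer_function(v):
    """Map a scalar data value to opacity by piecewise-linear interpolation."""
    return float(np.interp(v, control_values, control_opacities))

# Midway up the 0.25 -> 0.5 ramp: halfway between opacity 0.0 and 0.8.
print(transfer_function(0.375))  # 0.4
```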
Context 7
... gradient characterizes how values change in a localized region. A homogeneous neighborhood will have a small gradient magnitude, while a neighborhood near a boundary transition will have a large gradient magnitude (see Figure 2.7). Multidimensional transfer functions can be constructed, as Levoy suggests, on a single scalar value using various derivative measures of that variable (e.g., the gradient magnitude, components from the Hessian) as the second and third dimensions, or they can be constructed from multiple data variables of a multivariate data set. ...
Context 8
... transfer functions remained relatively unused until the introduction of the programmable graphics processing unit (GPU). GPUs make the vertex transformation, lighting, and fragment-processing stages of the graphics hardware pipeline programmable (see Figure 2.8), replacing fixed hardware functionality with two types of user programs: vertex shaders and fragment shaders. ...
Context 9
... the programmability and the power of commodity GPU hardware have improved, two-dimensional and three-dimensional transfer functions have received growing attention. These transfer functions can be implemented in the fragment processor as a dependent texture look-up [Kniss et al., 2002]: the texture value at that re-sampled point is used as a texture coordinate to index into the texture that represents the transfer function (see Figure 2.9). The gradient for each re-sampled position can be calculated on-the-fly in the GPU, or precomputed and stored in the 3D texture. ...
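The dependent texture look-up can be emulated on the CPU to make the idea concrete: a 2D transfer function is stored as a small lookup table ("texture") indexed by the re-sampled data value and its gradient magnitude. The table contents below are hypothetical, not from any particular shader:

```python
import numpy as np

# A 2D transfer-function "texture": rows indexed by data value, columns by
# gradient magnitude. Hypothetical contents: opaque only at high |gradient|,
# so boundary transitions are emphasized and homogeneous regions vanish.
N = 16
tf_texture = np.zeros((N, N), dtype=np.float64)
tf_texture[:, N // 2:] = 1.0

def lookup(value, grad_mag):
    """Use the re-sampled value and gradient magnitude (both in [0, 1])
    as texture coordinates into the transfer-function texture."""
    i = min(int(value * (N - 1)), N - 1)
    j = min(int(grad_mag * (N - 1)), N - 1)
    return tf_texture[i, j]

print(lookup(0.5, 0.9))  # 1.0 -> near a boundary (high gradient): opaque
print(lookup(0.5, 0.1))  # 0.0 -> homogeneous region: transparent
```

On a GPU the same table would live in a 2D texture and the indexing would happen per fragment; this sketch only mirrors the addressing logic.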
Context 10
... and Durkin [1998] developed a semi-automatic transfer function model based on the principles of edge detection. They were able to demonstrate that meaningful boundaries could be isolated using a multidimensional transfer function defined by a scalar data value, gradient magnitude, and second-order derivative along the gradient direction (see Figure 2.10). Kniss et al. ...
Context 11
... are many approaches to feature-based visualization. Selective visualization, a general approach described by van Walsum [1995], divides the visualization process into a pipeline with four stages: selection, clustering, attribute calculation, and iconic mapping (as shown in Figure 2.11). Post et al. [2003] separate specific feature-based visualization techniques into three categories based on image processing, topological analysis, and physical characteristics. ...
Context 12
... type of approach often relies on the detection and classification of critical points [Helman and Hesselink, 1989], where the vector magnitude is zero. A critical point can be characterized by the flow in its immediate neighborhood through an eigenanalysis of the Jacobian, which classifies a critical point as an attracting node, attracting focus, repelling node, repelling focus, center, or saddle point (see Figure 2.12). This concept has been used in the detection, classification, and visualization of critical points in steady and unsteady two-dimensional flows [Helman and Hesselink, 1989], in steady three-dimensional flows [Helman and Hesselink, 1991], and a complete classification of three-dimensional critical points has been introduced [Weinkauf et al., 2004, 2005]. ...
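The eigenanalysis-based classification can be sketched for the two-dimensional case: the signs of the real parts of the Jacobian's eigenvalues determine attraction or repulsion, and nonzero imaginary parts indicate rotation. This is a simplified sketch of the Helman and Hesselink scheme, with hypothetical Jacobians:

```python
import numpy as np

def classify_critical_point(J, tol=1e-9):
    """Classify a 2D critical point by eigenanalysis of the Jacobian J.
    Simplified: ignores degenerate (zero-eigenvalue) cases."""
    eig = np.linalg.eigvals(J)
    re, im = eig.real, eig.imag
    if np.all(np.abs(im) > tol):            # complex pair: local rotation
        if np.all(np.abs(re) < tol):
            return "center"
        return "attracting focus" if np.all(re < 0) else "repelling focus"
    if re[0] * re[1] < 0:                   # real eigenvalues of opposite sign
        return "saddle point"
    return "attracting node" if np.all(re < 0) else "repelling node"

# Hypothetical Jacobians at a critical point:
print(classify_critical_point(np.array([[0.0, -1.0], [1.0, 0.0]])))   # center
print(classify_critical_point(np.array([[1.0, 0.0], [0.0, -1.0]])))   # saddle point
print(classify_critical_point(np.array([[-1.0, 0.0], [0.0, -2.0]])))  # attracting node
```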
Citations
The volume of data and the velocity with which it is being generated by computational experiments on high performance computing (HPC) systems is quickly outpacing our ability to effectively store this information in its full fidelity. Therefore, it is critically important to identify and study compression methodologies that retain as much information as possible, particularly in the most salient regions of the simulation space. In this paper, we cast this in terms of a general decision-theoretic problem and discuss a wavelet-based compression strategy for its solution. We provide a heuristic argument as justification and illustrate our methodology on several examples. Finally, we will discuss how our proposed methodology may be utilized in an HPC environment on large-scale computational experiments.