Article

Accelerating volume rendering by ray leaping with back steps



Abstract

The methods for visualizing sampled spatial scientific data are known as volume rendering, where images are generated by computing 2D projections of 3D volume data. Since all the discrete data cells participate in the generation of each image, rendering time grows linearly with the resolution and complexity of the dataset. Empty cells in the data, which do not contribute to the final image, are among the important factors that increase rendering time. In recent years, researchers have concentrated heavily on improving the performance of these methods to achieve real-time rendering. Skipping the empty space, known as space leaping, provides significant speedup but requires the implementation of special data structures and pre-processing. This paper presents a simple and efficient technique, which we name "ray leaping," that accelerates the total rendering process and eliminates the need for special data structures and pre-processing.
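One illustrative reading of leaping with back steps, sketched in 1-D: march along a ray with a large stride over (apparently) empty space; when a leap lands inside an object, step back at the fine stride to recover the boundary samples the leap jumped over. This is a hedged sketch of the general idea only, not the paper's actual algorithm; the function name, the toy ray and the stride are all hypothetical.

```python
def march_with_leaps(volume, leap=4, threshold=0.0):
    """Return indices of samples along a 1-D ray that would be composited.
    Empty samples are skipped 'leap' at a time; a hit triggers back steps."""
    hits = []
    i, n = 0, len(volume)
    while i < n:
        if volume[i] > threshold:          # leap landed in occupied space
            j = i - 1                      # back-step to find where it began
            while j >= 0 and volume[j] > threshold and j > i - leap:
                j -= 1
            for k in range(j + 1, i + 1):  # samples recovered by back-stepping
                hits.append(k)
            i += 1
            while i < n and volume[i] > threshold:  # fine steps inside object
                hits.append(i)
                i += 1
        else:
            i += leap                      # leap over empty space
    return hits

ray = [0, 0, 0, 0, 0, 0.2, 0.8, 0.9, 0.1, 0, 0, 0]
print(march_with_leaps(ray))   # → [5, 6, 7, 8]
```

Note that the leap at index 4 lands on an empty sample and the next leap lands inside the object at index 8, so the back steps recover samples 5-7 that a plain skipping scheme would have missed.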


... For example, an image with a PSNR value of 25 dB may look better than another image with a PSNR value of 35 dB. The corresponding PSNR metric for acceptable images for use in digital radiology varies from 40 dB to 50 dB depending on the metric used [36]. In comparing an original picture and a coded picture, the PSNR figures typically range from +25 to +35 dB [37]. ...
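For reference on the dB figures quoted above, PSNR can be computed as follows; this is a minimal sketch in which the flat-list image representation and the 8-bit `max_value` default are assumptions.

```python
import math

def psnr(original, coded, max_value=255.0):
    """Peak signal-to-noise ratio in dB between two equal-sized images,
    given here as flat lists of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(original, coded)) / len(original)
    if mse == 0:
        return float("inf")                # identical images
    return 10.0 * math.log10(max_value ** 2 / mse)

a = [52, 55, 61, 59]
b = [50, 54, 60, 58]
print(round(psnr(a, b), 2))   # ≈ 45.7 dB
```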
Article
Full-text available
In this study, a new and rapid hidden resource decomposition method has been proposed to determine noisy pixels by adopting the extreme learning machines (ELM) method. The goal of this method is not only to determine noisy pixels, but also to protect critical structural information that can be used for disease diagnosis. To facilitate the diagnosis and treatment of patients in medicine, two-dimensional (2-D) computed tomography (CT) images, obtained using medical imaging techniques, were used. Utilizing a large number of CT images, promising results have been obtained from these experiments. The proposed method has shown a significant improvement in mean squared error and peak signal-to-noise ratio. The experimental results indicate that the proposed method is statistically efficient and performs well with a high learning speed. In the experiments, the results demonstrated that remarkable success rates were obtained through the ELM method.
... This method is called the stepping technique. Intensity values are sampled at evenly spaced intervals along the ray by trilinear interpolation of the surrounding voxels [5]. Transfer functions are used to map the sampled values to an appropriate color and opacity. ...
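The trilinear interpolation of the surrounding voxels mentioned here can be sketched as below; this is an illustrative pure-Python version, and the `volume[z][y][x]` indexing convention is an assumption.

```python
def trilinear(volume, x, y, z):
    """Trilinearly interpolate a scalar volume (nested lists indexed
    volume[z][y][x]) at a continuous position (x, y, z)."""
    x0, y0, z0 = int(x), int(y), int(z)
    fx, fy, fz = x - x0, y - y0, z - z0
    def v(i, j, k):                        # corner voxel of the enclosing cell
        return volume[z0 + k][y0 + j][x0 + i]
    # interpolate along x, then y, then z
    c00 = v(0, 0, 0) * (1 - fx) + v(1, 0, 0) * fx
    c10 = v(0, 1, 0) * (1 - fx) + v(1, 1, 0) * fx
    c01 = v(0, 0, 1) * (1 - fx) + v(1, 0, 1) * fx
    c11 = v(0, 1, 1) * (1 - fx) + v(1, 1, 1) * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz

# 2x2x2 cell whose corner values equal x + 10*y + 100*z, so the value
# interpolated at the cell centre is 0.5 + 5 + 50 = 55.5
vol = [[[0, 1], [10, 11]], [[100, 101], [110, 111]]]
print(trilinear(vol, 0.5, 0.5, 0.5))   # → 55.5
```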
Article
Full-text available
Displaying useful and meaningful information from 3D data is known as volume rendering. Ray casting is one of the most frequently used direct volume rendering methods. It consists of data preparation, sampling, classification, compositing, and shading steps. Normal values are needed for efficient shading. However, 3D volumetric data are discrete and cannot be used directly for shading. Hence, the estimation of normal values, at each voxel on the surface, is needed for realistic shading. In normal estimation, the use of small voxel neighborhoods results in the staircase effect. On the other hand, the use of larger voxel neighborhoods causes loss of details in the final image. In this work, an alternative normal estimation method that uses large voxel neighborhoods is proposed for providing smoother images without losing details.
... The ideas of parallel processing and constant information transfer in pervasive communication networks will pave the way to integrated health information systems which enable personalized care. On the medical side, these systems will acquire physiological data, support diagnosis and treatment monitoring. On the infrastructure side, these systems will generate, distribute and store health records; thereby they shape clinical real-time work flows. ...
Article
Purpose: The concept of real-time is very important, as it deals with the realizability of computer-based health care systems. Method: In this paper we review biomedical real-time systems with a meta-analysis on Computational Complexity (CC), Delay (Δ) and Speedup (Sp). Results: During the review we found that, in the majority of papers, the term real-time is part of the thesis indicating that a proposed system or algorithm is practical. However, these papers were not considered for detailed scrutiny. Our detailed analysis focused on papers which support their claim of achieving real-time with a discussion on CC or Sp. These papers were analyzed in terms of: processing system used, Application Area (AA), CC, Δ, Sp, Implementation/Algorithm (I/A) and competition. Conclusions: The results show that the ideas of parallel processing and algorithm delay were only recently introduced and journal papers focus more on Algorithm (A) development than on Implementation (I). Most authors compete on Big O notation (O) and Processing Time (PT). Based on these results, we adopt the position that the concept of real-time will continue to play an important role in biomedical systems design. We predict that parallel processing considerations, such as Sp and algorithm scaling, will become more important.
Article
The ray casting algorithm is one of the basic algorithms used in the volume rendering of three-dimensional data; it is simple to describe and operate, but has obvious problems with rendering speed and precision. In this paper, a new intelligent volume rendering method is described that integrates particle swarm optimization for transfer function design with the ray-leaping rendering algorithm, making up for the lack of rendering speed and precision in the traditional ray casting algorithm. The MITK toolkit is then used to render a skull CT image sequence.
Article
Full-text available
In this paper the GPU implementation of a real-time iso-surface volume-rendering system is described in detail, which aims at autostereoscopic displays. Since autostereoscopic displays provide images for many views, and thus require different camera settings in each pixel, and even in the three color channels of a pixel, naive rendering approaches would slow down the rendering process by a factor of the number of views of the display. To maintain interactive rendering, our approach is image centric, that is, we independently set the eye position for each pixel and implement iso-surface ray-casting in the pixel shader of the GPU. To handle the different camera settings for different color channels, geometric and color computation processes are decomposed into multiple rendering passes. This solution allows rendering rates that are independent of the number of main views of the autostereoscopic display, i.e. we cannot observe speed degradation when real 3D images are generated.
Conference Paper
Full-text available
Rendering deformable volume data currently needs separate processes for deformation and rendering, and is expensive in terms of both computational and memory costs. Recognizing the importance of unifying these processes, we present a new approach to the direct rendering of deformable volumes without explicitly constructing the intermediate deformed volumes. The volume deformation is done by a radial basis function that is piecewise linearly approximated by an adaptive subdivision of the octree encoded target volume. The octree blocks in the target volume are then projected, reverse morphed and texture mapped, using the SGI 3D texture mapping hardware, in a back-to-front order. A template-based Z-plane/block intersection method is used to expedite the block projection computation.
Conference Paper
Full-text available
For applications of volume visualization in medicine, it is important to assure that the 3D images show the true anatomical situation, or at least to know about their limitations. In this paper, various methods for evaluation of image quality are reviewed. They are classified based on the fundamental terms of intelligibility and fidelity, and discussed with respect to the question what clues they provide on how to choose parameters, or improve imaging and visualization procedures.
Conference Paper
Full-text available
Virtual endoscopy has proven to be a very powerful tool in endoscopic surgery. However, most virtual endoscopy systems are restricted to rendering isosurfaces or require segmentation in order to visualize additional objects behind occluding tissue. This paper presents a system for real-time perspective direct volume and isosurface rendering, which makes it possible to simultaneously visualize both the interesting tissue and everything behind it. Large volume data can be viewed seamlessly from inside or outside the volume without any pre-computation or segmentation. Our system uses a novel ray-casting pipeline for GPUs that has been optimized for the requirements of virtual endoscopy and also allows easy incorporation of auxiliary geometry, e.g., for displaying parts of the endoscopic device, pointers, or grid lines for orientation purposes. We present three main applications of this system and the underlying ray-casting algorithm. Although our ray-casting approach is of general applicability, we have specifically applied it to virtual colonoscopy, virtual angioscopy, and virtual pituitary surgery.
Article
Full-text available
The increasing capabilities of magnetic resonance (MR) imaging and multisection spiral computed tomography (CT) to acquire volumetric data with near-isotropic voxels make three-dimensional (3D) postprocessing a necessity, especially in studies of complex structures like intracranial vessels. Since most modern CT and MR imagers provide limited postprocessing capabilities, 3D visualization with interactive direct volume rendering requires expensive graphics workstations that are not available at many institutions. An approach has been developed that combines fast visualization on a low-cost PC system with high-quality visualization on a high-end graphics workstation that is directly accessed and remotely controlled from the PC environment via the Internet by using a Java client. For comparison of quality, both techniques were applied to several neuroradiologic studies: visualization of structures related to the inner ear, intracranial aneurysms, and the brainstem and surrounding neurovascular structures. The results of pure PC-based visualization were comparable with those of many commercially available volume-rendering systems. In addition, the high-end graphics workstation with 3D texture-mapping capabilities provides visualization results of the highest quality. Combining local and remote 3D visualization allows even small radiologic institutions to achieve low-cost but high-quality 3D visualization of volumetric data.
Article
Full-text available
This paper describes a multistage perceptual quality assessment (MPQA) model for compressed images. The motivation for the development of a perceptual quality assessment is to measure (in)visible differences between original and processed images. The MPQA produces visible distortion maps and quantitative error measures informed by considerations of the human visual system (HVS). Original and decompressed images are decomposed into different spatial frequency bands and orientations modeling the human cortex. Contrast errors are calculated for each frequency and orientation, and masked as a function of contrast sensitivity and background uncertainty. Spatially masked contrast error measurements are then made across frequency bands and orientations to produce a single perceptual distortion visibility map (PDVM). A perceptual quality rating (PQR) is calculated from the PDVM and transformed into a one to five scale, PQR(1-5), for direct comparison with the mean opinion score, generally used in subjective ratings. The proposed MPQA model is based on existing perceptual quality assessment models, while it is differentiated by the inclusion of contrast masking as a function of background uncertainty. A pilot study of clinical experiments on wavelet-compressed digital angiograms has been performed on a sample set of angiogram images to identify diagnostically acceptable reconstruction. Our results show that the PQR(1-5) of diagnostically acceptable lossy image reconstructions have better agreement with cardiologists' responses than objective error measurement methods, such as peak signal-to-noise ratio. A Perceptual thresholding and CSF-based Uniform quantization (PCU) method is also proposed using the vision models presented in this paper. The vision models are implemented in the thresholding and quantization stages of a compression algorithm and shown to produce improved compression ratio performance with less visible distortion than that of the embedded zerotree wavelet (EZW).
Article
Full-text available
Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/∼lcv/ssim/.
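The structural similarity computation described here can be sketched over a single global window; note that the published index is computed over local windows and averaged, so this simplification, the flat-list image format and the function name are all assumptions.

```python
def ssim_global(x, y, max_value=255.0):
    """Single-window SSIM between two equal-sized images (flat lists)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    c1 = (0.01 * max_value) ** 2          # stabilising constants from the paper
    c2 = (0.03 * max_value) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = [52, 55, 61, 59, 79, 61, 76, 61]
print(round(ssim_global(img, img), 4))   # identical images → 1.0
```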
Chapter
Digital video data, stored in video databases and distributed through communication networks, is subject to various kinds of distortions during acquisition, compression, processing, transmission and reproduction. For example, lossy video compression techniques, which are almost always used to reduce the bandwidth needed to store or transmit video data, may degrade the quality during the quantization process. For another instance, the digital video bitstreams delivered over error-prone channels, such as wireless channels, may be received imperfectly due to the impairment occurred during transmission. Package-switched communication networks, such as the Internet, can cause loss or severe delay of received data packages, depending on the network conditions and the quality of services. All these transmission errors may result in distortions in the received video data. It is therefore imperative for a video service system to be able to realize and quantify the video quality degradations that occur in the system, so that it can maintain, control and possibly enhance the quality of the video data. An effective image and video quality metric is crucial for this purpose.
Article
For applications of volume visualization in medicine, it is important to assure that the 3D images show the true anatomical situation, or at least to know about their limitations. In this paper, various methods for evaluation of image quality are reviewed. They are classified based on the fundamental terms of diagnostic and technical image quality, and discussed with respect to the question what clues they provide on how to choose parameters, or improve imaging and visualization procedures.
Article
The fundamentals of hierarchical data structures are reviewed and it is shown how they are used in the implementation of some basic operations in computer graphics. The properties of hierarchical structures are discussed, focusing on quadtrees and octrees. The latter are defined, some of the more common ways in which they are implemented are examined, and an explanation of the quadtree/octree complexity theorem is provided. Vector quadtrees and vector octrees are discussed. The performance of basic operations using quadtrees is considered.
Article
Virtual endoscopy (VE) is a new method of diagnosis using computer processing of 3D image datasets (such as CT or MRI scans) to provide simulated visualizations of patient specific organs similar or equivalent to those produced by standard endoscopic procedures. Conventional endoscopy is invasive and often uncomfortable for patients. It sometimes has serious side effects such as perforation, infection and hemorrhage. VE visualization avoids these risks and can minimize difficulties and decrease morbidity when used before actual endoscopic procedures. In addition, there are many body regions not compatible with real endoscopy that can be explored with VE. Eventually, VE may replace many forms of real endoscopy. There remains a critical need to refine and validate VE visualizations for routine clinical use. We have used the Visible Human Dataset from the National Library of Medicine to develop and test these procedures and to evaluate their use in a variety of clinical applications. We have developed specific clinical protocols to compare virtual endoscopy with real endoscopy. We have developed informative and dynamic on-screen navigation guides to help the surgeon or physician interactively determine body orientation and precise anatomical localization while performing the VE procedures. Additionally, the adjunctive value of full 3D imaging (e.g. looking "outside" of the normal field of view) during the VE exam is being evaluated. Quantitative analyses of local geometric and densitometric properties obtained from the virtual procedures ("virtual biopsy") are being developed and compared with other direct measures. Preliminary results suggest that these virtual procedures can provide accurate, reproducible and clinically useful visualizations and measurements. These studies will help drive improvements in and lend credibility to VE procedures and simulations as routine clinical tools. VE holds significant promise for optimizing endoscopic diagnostic procedures, minimizing patient risk and morbidity, and reducing health care costs.
Article
Several existing volume rendering algorithms operate by factoring the viewing transformation into a 3D shear parallel to the data slices, a projection to form an intermediate but distorted image, and a 2D warp to form an undistorted final image. We extend this class of algorithms in three ways. First, we describe a new object-order rendering algorithm based on the factorization that is significantly faster than published algorithms with minimal loss of image quality. Shear-warp factorizations have the property that rows of voxels in the volume are aligned with rows of pixels in the intermediate image. We use this fact to construct a scanline-based algorithm that traverses the volume and the intermediate image in synchrony, taking advantage of the spatial coherence present in both. We use spatial data structures based on run-length encoding for both the volume and the intermediate image. Our implementation running on an SGI Indigo workstation renders a 256³ voxel medical data set in one second. Our second extension is a shear-warp factorization for perspective viewing transformations, and we show how our rendering algorithm can support this extension. Third, we introduce a data structure for encoding spatial coherence in unclassified volumes (i.e. scalar fields with no precomputed opacity). When combined with our shear-warp rendering algorithm this data structure allows us to classify and render a 256³ voxel volume in three seconds. The method extends to support mixed volumes and geometry and is parallelizable.
Article
This paper introduces a novel approach for speeding up the ray casting process commonly used in volume visualization methods. This new method called Ray Acceleration by Distance Coding (RADC) uses a 3-D distance transform to determine the minimum distance to the nearest interesting object; the implementation of a fast and accurate distance transform is described in detail. High distance values, typically found at off-center parts of the volume, cause many sample points to be skipped, thus significantly reducing the number of samples to be evaluated during the ray casting step. The minimum distance values that are encountered while traversing the volume can be used for the identification of rays that do not hit objects. Our experiments indicate that the RADC method can reduce the number of sample points by a factor between 5 and 20.
Article
This paper presents a new parallel volume rendering algorithm that can render 256³ voxel medical data sets at over 10 Hz and 128³ voxel data sets at over 30 Hz on a 16-processor Silicon Graphics Challenge. The algorithm achieves these results by minimizing each of the three components of execution time: computation time, synchronization time, and data communication time. Computation time is low because the parallel algorithm is based on the recently-reported shear-warp serial volume rendering algorithm which is over five times faster than previous serial algorithms. Synchronization time is minimized by using dynamic load balancing and a task partition that minimizes synchronization events. Data communication costs are low because the algorithm is implemented for shared-memory multiprocessors, a class of machines with hardware support for low-latency fine-grain communication and hardware caching to hide latency. We draw two conclusions from our implementation. First, we find that on shared-memory architectures data redistribution and communication costs do not dominate rendering time. Second, we find that cache locality requirements impose a limit on parallelism in volume rendering algorithms. Specifically, our results indicate that shared-memory machines with hundreds of processors would be useful only for rendering very large data sets.
Article
In this paper we present a new method for the acceleration of ray traversal through a regular 3D grid. A distance transformation is precomputed and mapped onto the empty grid space. A ray traversing the empty space is assisted by the distance values which permit it to perform long skips along the ray direction. We show that the City-Block metric simplifies the preprocessing with no penalty at the traversal phase. Different schemes are discussed and the trade-off between the preprocessing time and the speed-up is analyzed.
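The pre-computed City-Block distance transform described here can be sketched with the standard two-pass chamfer scheme; this is an illustrative 2-D version under assumed names and a toy occupancy grid, not the paper's implementation.

```python
def cityblock_distance_2d(grid):
    """Two-pass City-Block distance transform of a 2-D occupancy grid
    (1 = object, 0 = empty). Returns, for every cell, the L1 distance to
    the nearest object cell; a ray may safely skip that many cells."""
    h, w = len(grid), len(grid[0])
    inf = h + w                            # larger than any possible distance
    d = [[0 if grid[y][x] else inf for x in range(w)] for y in range(h)]
    for y in range(h):                     # forward pass: top/left neighbours
        for x in range(w):
            if y > 0:
                d[y][x] = min(d[y][x], d[y - 1][x] + 1)
            if x > 0:
                d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    for y in range(h - 1, -1, -1):         # backward pass: bottom/right
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y][x] = min(d[y][x], d[y + 1][x] + 1)
            if x < w - 1:
                d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d

g = [[0, 0, 0],
     [0, 1, 0],
     [0, 0, 0]]
print(cityblock_distance_2d(g))   # → [[2, 1, 2], [1, 0, 1], [2, 1, 2]]
```

The two sweeps visit each cell only twice, which is what makes the City-Block metric attractive for preprocessing compared to an exact Euclidean transform.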
Conference Paper
We present a fast volume rendering algorithm for time-varying fields. We propose a new data structure, called time-space partitioning (TSP) tree, that can effectively capture both the spatial and the temporal coherence from a time-varying field. Using the proposed data structure, the rendering speed is substantially improved. In addition, our data structure helps to maintain the memory access locality and to provide the sparse data traversal so that our algorithm becomes suitable for large-scale out-of-core applications. Finally, our algorithm allows flexible error control for both the temporal and the spatial coherence so that a trade-off between image quality and rendering speed is possible. We demonstrate the utility and speed of our algorithm with data from several time-varying CFD simulations. Our rendering algorithm can achieve substantial speedup while the storage space overhead for the TSP tree is kept at a minimum.
Conference Paper
We present new volume rendering techniques for efficiently generating high quality stereoscopic images and propose criteria to evaluate stereo volume rendering algorithms. Specifically, we present fast stereo volume ray casting algorithms using segment composition and linearly-interpolated re-projection. A fast stereo shear-warp volume rendering algorithm is also presented and discussed.
Conference Paper
This paper presents a method concerning the volume rendering of fine details, such as blood vessels and nerves, from medical data. The realistic and efficient visualization of such structures is often of great medical interest, and conventional rendering techniques do not always deal with them adequately. Our method uses preprocessing to reconstruct fine details that are difficult to segment and label. It detects the presence of fine geometrical structures, such as cracks or cylinders that suggest the existence of, for example, blood vessels or nerves; the subsequent volume rendering then displays fine geometrical objects that lie on a surface. The method can also show structures within the volume, using a special "integration sampling" scheme to portray reconstructed volume texture, such as that exhibited by muscle fibers. By combining the surface structure and volume texture in the rendering, realistic results can be produced; examples are provided.
Conference Paper
In this paper we present a new extended space leaping method which allows a drastic speed-up and efficient load balancing by performing skipping processes not only in the data space domain but also in the screen space domain, and, based on this method, we present a fast and well-balanced parallel ray casting algorithm. We propose a novel forward projection technique for computing information on skipping processes in both domains very fast by combining run-length encoding and a line drawing algorithm, and show that it can be implemented in parallel with ease and efficiency by the proper distribution of the encoded data, while the information produced by the technique is exploited to provide load balancing. We also implemented our algorithms on PVM (Parallel Virtual Machine), and show our experimental results.
Article
We present an efficient three-phase algorithm for volume viewing that is based on exploiting coherency between rays in parallel projection. The algorithm starts by building a ray template and determining a special plane for projection -- the base-plane. Parallel rays are cast into the volume from within the projected region of the volume on the base-plane, by repeating the sequence of steps specified in the ray-template. We carefully choose the type of line to be employed and the way the template is being placed on the base-plane in order to assure uniform sampling of the volume by the discrete rays. We conclude by describing an optimized software implementation of our algorithm and reporting its performance. Keywords: volume rendering, ray casting, template, parallel projection.
Article
There are several optimization techniques available for improving rendering speed of direct volume rendering. An acceleration method using the hierarchical min-max map requires little preprocessing and data storage while preserving image quality. However, this method introduces computational overhead because of unnecessary comparison and level shift between blocks. In this paper, we propose an efficient space-leaping method using optimal-sized blocks. To determine the size of blocks, our method partitions an image plane into several uniform grids and computes the minimum and the maximum depth values for each grid. We acquire optimal block sets suitable for individual rays from these values. Experimental results show that our method reduces rendering time when compared with the previous min-max octree method.
Article
Volume rendering is a technique for visualizing sampled scalar or vector fields of three spatial dimensions without fitting geometric primitives to the data. A subset of these techniques generates images by computing 2-D projections of a colored semitransparent volume, where the color and opacity at each point are derived from the data using local operators. Since all voxels participate in the generation of each image, rendering time grows linearly with the size of the dataset. This paper presents a front-to-back image-order volume-rendering algorithm and discusses two techniques for improving its performance. The first technique employs a pyramid of binary volumes to encode spatial coherence present in the data, and the second technique uses an opacity threshold to adaptively terminate ray tracing. Although the actual time saved depends on the data, speedups of an order of magnitude have been observed for datasets of useful size and complexity. Examples from two applications are given: medical imaging and molecular graphics.
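The opacity-threshold early termination described here can be sketched with standard front-to-back alpha compositing; this is an illustrative version in which the function name, the (colour, alpha) sample format and the threshold value are assumptions.

```python
def composite_front_to_back(samples, opacity_threshold=0.98):
    """Front-to-back compositing of (colour, alpha) samples along a ray,
    terminating early once accumulated opacity passes the threshold."""
    color, alpha = 0.0, 0.0
    used = 0
    for c, a in samples:
        color += (1.0 - alpha) * a * c     # standard front-to-back blending
        alpha += (1.0 - alpha) * a
        used += 1
        if alpha >= opacity_threshold:     # ray is effectively opaque: stop
            break
    return color, alpha, used

ray = [(1.0, 0.5), (0.5, 0.75), (0.2, 0.9), (0.8, 0.5)]
color, alpha, used = composite_front_to_back(ray, opacity_threshold=0.85)
print(used)   # → 2: the last two samples are never composited
```

Because samples behind an opaque accumulation cannot change the pixel, the skipped tail of the ray is exactly the saving the adaptive termination technique exploits.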
Article
The authors discuss the assessment of the contribution of diagnostic imaging to the patient management process. A hierarchical model of efficacy is presented as an organizing structure for appraisal of the literature on efficacy of imaging. Demonstration of efficacy at each lower level in this hierarchy is logically necessary, but not sufficient, to assure efficacy at higher levels. Level 1 concerns technical quality of the images; Level 2 addresses diagnostic accuracy, sensitivity, and specificity associated with interpretation of the images. Next, Level 3 focuses on whether the information produces change in the referring physician's diagnostic thinking. Such a change is a logical prerequisite for Level 4 efficacy, which concerns effect on the patient management plan. Level 5 efficacy studies measure (or compute) effect of the information on patient outcomes. Finally, at Level 6, analyses examine societal costs and benefits of a diagnostic imaging technology. The pioneering contributions of Dr. Lee B. Lusted in the study of diagnostic imaging efficacy are highlighted.
Article
Three-dimensional (3D) medical images of computed tomographic (CT) data sets can be generated with a variety of computer algorithms. The three most commonly used techniques are shaded surface display, maximum intensity projection, and, more recently, 3D volume rendering. Implementation of 3D volume rendering involves volume data management, which relates to operations including acquisition, resampling, and editing of the data set; rendering parameters including window width and level, opacity, brightness, and percentage classification; and image display, which comprises techniques such as "fly-through" and "fly-around," multiple-view display, obscured structure and shading depth cues, and kinetic and stereo depth cues. An understanding of both the theory and method of 3D volume rendering is essential for accurate evaluation of the resulting images. Three-dimensional volume rendering is useful in a wide variety of applications but is just now being incorporated into commercially available software packages for medical imaging. Although further research is needed to determine the efficacy of 3D volume rendering in clinical applications, with wider availability and improved cost-to-performance ratios in computing, 3D volume rendering is likely to enjoy widespread acceptance in the medical community.
Article
Virtual endoscopy (VE) is a new method of diagnosis using computer processing of 3D image datasets (such as CT or MRI scans) to provide simulated visualizations of patient specific organs similar or equivalent to those produced by standard endoscopic procedures. Conventional endoscopy is invasive and often uncomfortable for patients. It sometimes has serious side effects such as perforation, infection and hemorrhage. VE visualization avoids these risks and can minimize difficulties and decrease morbidity when used before actual endoscopic procedures. In addition, there are many body regions not compatible with real endoscopy that can be explored with VE. Eventually, VE may replace many forms of real endoscopy.
Article
A numerical measure, which is able to predict diagnostic accuracy rather than subjective quality, is required for compressed medical image assessment. The objective of this study is to present a proposal for a new vector measure of image quality, reflecting diagnostic accuracy. Construction of such measure includes the formation of a diagnostic quality pattern based on the subjective ratings of local image features playing an essential role in the detection and classification of any lesion. Experimental results contain the opinions of 9 radiologists: 2 test designers and 7 observers who rated digital mammograms. The correlation coefficient between the numerical equivalent of the vector measure and subjective pattern is over 0.9.
Article
Image quality measurement methods are reviewed and difficulties in various approaches are highlighted. The main emphasis of the paper is on objective image quality measurements; however, subjective assessment methods are also discussed briefly.
Conference Paper
This paper describes a new hardware volume rendering algorithm for time-varying data. The algorithm uses the Time-Space Partitioning (TSP) tree data structure to identify regions within the data that have spatial or temporal coherence. By using this coherence, the rendering algorithm can improve performance when the volume data are larger than the texture memory capacity by decreasing the amount of textures required. This coherence can also allow improved speed by appropriately rendering flat-shaded polygons instead of textured polygons, and by not rendering transparent regions. To reduce the polygonization overhead caused by the use of the hierarchical data structure, we use a fast incremental polygon slicing algorithm. The paper also introduces new color-based error metrics, which more accurately identify coherent regions compared to the earlier scalar-based metrics. By showing experimental results from runs using different data sets and error metrics, we demonstrate that the new methods give substantial improvements in volume rendering performance.
Conference Paper
We present a space leaping technique for accelerating volume rendering with very low space and run-time complexity. Our technique exploits the ray coherence during ray casting by using the distance a ray traverses in empty space to leap its neighboring rays. Our technique works with parallel as well as perspective volume rendering, does not require any preprocessing or 3D data structures, and is independent of the transfer function. Being an image-space technique, it is independent of the complexity of the data being rendered. It can be used to accelerate both time-coherent and noncoherent animation sequences.
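The neighbor-ray idea above can be sketched in one dimension: each ray records how far it traveled through empty space before its first hit, and the next ray starts at that distance minus a small safety margin (a "back step"). This is an illustrative simulation, not the paper's implementation; the function name, input format, and margin are assumptions:

```python
def render_scanline(ray_hits, safety=1):
    """Illustrative neighbor-ray space leaping on one scanline.

    `ray_hits[i]` is the (hypothetical) depth of the first non-empty
    sample along ray i. We count how many samples each ray visits when
    it starts at its left neighbor's empty distance minus a margin.
    """
    steps = []
    prev_leap = 0
    for hit in ray_hits:
        start = max(0, prev_leap - safety)  # conservative back step
        steps.append(hit - start + 1)       # samples actually visited
        prev_leap = hit                     # empty distance found by this ray
    return steps
```

For coherent neighboring rays the hit depths are similar, so every ray after the first visits only a handful of samples instead of marching from the volume boundary.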
Conference Paper
We present a fast and reliable space-leaping scheme to accelerate ray casting during interactive navigation in a complex volumetric scene, where we combine innovative space-leaping techniques in a number of ways. First, we derive most of the pixel depths at the current frame by exploiting the temporal coherence during navigation, where we employ a novel fast cell-based reprojection scheme that is more reliable than the traditional intersection-point based reprojection. Next, we exploit the object space coherence to quickly detect the remaining pixel depths, by using a precomputed accurate distance field that stores the Euclidean distance from each empty (background) voxel toward its nearest object boundary. In addition, we propose an effective solution to the challenging new-incoming-objects problem during navigation. Our algorithm has been implemented on a 16-processor SGI Power Challenge and reached interactive rendering rates at more than 10 Hz during the navigation inside 512³ volume data sets acquired from both a simulation phantom and actual patients.
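The distance-field leaping described above rests on a simple invariant: if an empty voxel stores the distance to the nearest object, a ray at that voxel can safely skip that many cells. A 1-D pure-Python sketch (the paper precomputes a 3-D Euclidean field; the brute-force transform and function names here are illustrative):

```python
def distance_field(occupied):
    """Brute-force distance from each cell to the nearest occupied cell
    (1-D for clarity; the paper uses a precomputed 3-D Euclidean field)."""
    objs = [i for i, o in enumerate(occupied) if o]
    return [0 if o else min(abs(i - j) for j in objs)
            for i, o in enumerate(occupied)]

def traverse(occupied, dist):
    """Ray marching that leaps by the stored distance: from an empty cell
    at position p, the nearest object is at least dist[p] cells away,
    so the ray may skip that many cells without missing anything."""
    visited, p = [], 0
    while p < len(occupied):
        visited.append(p)
        if occupied[p]:
            break
        p += max(1, dist[p])  # safe leap through empty space
    return visited
```

Because the stored distance is a lower bound on the gap to the nearest object in any direction, the field is valid for every viewing direction and never causes an overshoot.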
Conference Paper
This paper introduces the field of volume visualization, volumetric data representations, and volume rendering algorithms. It further discusses volume graphics and its underlying voxelization algorithms. Special-purpose volume rendering architectures have been researched for over two decades. Recently, commercial real-time volume rendering boards have been introduced, most notably the VolumePro board which is based on the Cube-4 architecture developed at Stony Brook University.
Conference Paper
In this work we present a method for speeding the process of volume animation. It exploits coherency between consecutive images to shorten the path rays take through the volume. Rays are provided with the information needed to leap over the empty space and commence volume traversal at the vicinity of meaningful data. The algorithm starts by projecting the volume onto a C-buffer (coordinates-buffer) which stores the object-space coordinates of the first non-empty voxel visible from a pixel. Following a change in the viewing parameters, the C-buffer is transformed accordingly. Next, coordinates that possibly became hidden are discarded. The remaining values serve as an estimate of the point where the new rays should start their volume traversal. This method does not require 3-D preprocessing and does not suffer from any image degradation. It can be combined with existing acceleration techniques and can support any ray traversal algorithm and material modeling scheme
Article
Magnetic resonance imaging (MRI) has been used in clinical applications for several years. It offers a physician excellent anatomic detail and tissue characterisation. Within the past decade, advances in MRI have allowed us to image with a submillimeter, in-plane resolution. Achieving such reduction in voxel volume requires novel technological developments in every aspect of the MRI acquisition process: radio frequency (RF) and gradient hardware, and pulse sequence software. We have developed new techniques for inserting a custom-designed and custom-built, high-strength gradient coil into an existing clinical MR imager. Such inserts can produce strong gradient fields that lead to maximum spatial resolution and excellent signal sensitivity. Here, we describe the interfacing and calibration methods that we have developed for our custom insert into a 1.5 T General Electric MR scanner and show that all the tests have met the specifications of the clinical gradient coil. The implementation of our method allows for switching between clinical and research modes without having to purchase and maintain another MR system.
Article
For Part I, see ibid., vol. 8, no. 3, pp. 48-68, May 1988. Advanced applications for preliminary display methods are focused on, with emphasis on the octree. Topics include use of the quadtree as a basis for hidden-surface algorithms, parallel and perspective projection methods to display a collection of objects represented by an octree, and the use of octrees to facilitate such image-rendering techniques as ray tracing and radiosity.
Article
Coherency (data locality) is one of the most important factors that influences the performance of distributed ray tracing systems, especially when object dataflow approach is employed. The enormous cost associated with remote fetches must be reduced to improve the efficiency of the parallel renderer. Objects once fetched should be maximally utilized before replacing them with other objects. In this paper, we report on the results obtained from the implementation of two coherent parallel rendering algorithms on distributed memory architectures. The algorithms are tested with several experimental datasets, and the timings delineate the coherent nature of our algorithms. Several network issues like latency hiding, communication overheads, load balancing, scalability, and limited channel bandwidth are considered during the design process. Our algorithms optimize communication between nodes, eliminate the need for any explicit synchronization, and support latency hiding in an efficient mann...
Article
In this paper we present a method for speeding the process of volume rendering a sequence of images. Speedup is based on exploiting coherency between consecutive images to shorten the path rays take through the volume. This is achieved by providing each ray with information needed to leap over the empty space and commence volume traversal at the vicinity of meaningful data. The algorithm starts by projecting the volume into a C-buffer (Coordinates buffer) which stores, at each pixel location, the object-space coordinates of the first non-empty voxel visible from that pixel. For each change in the viewing parameters, the C-buffer is transformed accordingly. In the case of rotation, the transformed C-buffer goes through a process of eliminating coordinates that possibly became hidden. The remaining values in the C-buffer serve as an estimate of the point where the new rays should start their volume traversal. This space-leaping method can be combined with existing...
Article
The task of real time rendering of today's volumetric datasets is still being tackled by several research groups. A quick calculation of the amount of computation required for real-time rendering of a high resolution volume puts us in the teraflop range. Yet, the demand to support such rendering capabilities is increasing due to emerging technologies such as virtual surgery simulation and rapid prototyping. There are five main approaches to overcoming this seemingly insurmountable performance barrier: (i) data reduction by means of model extraction or data simplification, (ii) realization of special-purpose volume rendering engines, (iii) software-based algorithm optimization and acceleration, (iv) implementation on general purpose parallel architectures, and (v) use of contemporary off-the-shelf graphics hardware. In this presentation we first describe the vision of real-time high-resolution volume rendering and estimate the computing power it demands. We survey the state-of-the-art in...
Article
The task of the rendering process is to display the primitives used to represent the 3D volumetric scene onto a 2D screen. Rendering is composed of a viewing process, which is the subject of this paper, and the shading process. The projection process determines, for each screen pixel, which objects are seen by the sight ray cast from this pixel into the scene. The viewing algorithm is heavily dependent on the display primitives used to represent the volume and whether volume rendering or surface rendering are employed. Conventional viewing algorithms and graphics engines can be utilized to display geometric primitives, typically employing surface rendering. However, when volume primitives are displayed directly, a special volume viewing algorithm should be employed. This algorithm should capture the contents of the voxels on the surface as well as the inside of the volumetric object being visualized. This paper surveys and compares previous work in the field of direct volume viewing...
Article
We examine various simple algorithms that exploit homogeneity and accumulated opacity for tracing rays through shaded volumes. Most of these methods have error criteria which allow them to trade quality for speed. The time vs. quality tradeoff for these adaptive methods is compared to fixed step multiresolution methods. These methods are also useful for general light transport in volumes. 1 Introduction We are interested in speeding volume ray tracing computations. We concentrate on the one dimensional problem of tracing a single ray, or computing the intensity at a point from a single direction. In addition to being the kernel of a simple volume ray tracer, this computation can be used to generate shadow volumes and as an element in more general light transport problems. Our data structures will be view independent to speed the production of animations of preshaded volumes and interactive viewing. In [11] Levoy introduced two key concepts which we will be expanding on: presence accel...
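Trading quality for speed as accumulated opacity grows can be sketched as adaptive step-size marching: once the ray is nearly saturated, remaining samples contribute little, so the step is allowed to widen. The growth rule, function names, and the opacity correction below are illustrative assumptions, not the paper's exact error criteria:

```python
def adaptive_ray(sample, length, base_step=1.0, max_step=4.0):
    """Adaptive stepping sketch. `sample(t)` returns a hypothetical
    (color, alpha_per_unit_length) pair at ray parameter t; the step
    widens as accumulated opacity grows (illustrative growth rule)."""
    color, alpha, t = 0.0, 0.0, 0.0
    while t < length and alpha < 0.99:
        step = min(max_step, base_step / (1.0 - alpha))  # widen as ray saturates
        c, a_unit = sample(t)
        a = min(1.0, a_unit * step)          # opacity scaled to the step taken
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        t += step
    return color, alpha
```

The `(1.0 - alpha)` weight bounds the error introduced by each coarse step, which is what makes the time-versus-quality trade-off controllable.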