Temporally Coherent Interactive Ray Tracing

Temporally Coherent Interactive
Ray Tracing
W. Martin S. Parker E. Reinhard
P. Shirley W. Thompson
School of Computing
University of Utah
Salt Lake City, UT 84112 USA
May 15, 2001
Although ray tracing has been successfully applied to interactively render large datasets,
supersampling pixels will not be practical in interactive applications for some time. Because
large datasets tend to have subpixel detail, one-sample-per-pixel ray tracing can produce
visually distracting popping and scintillation. We present an algorithm that directs primary
rays toward locations rendered in previous frames, thereby increasing temporal coherence.
Our method tracks intersection points over time, and these tracked points are used as an
oracle to aim rays for the next frame. This is in contrast to traditional image-based rendering
techniques, which advocate color reuse. We thus achieve coherence between frames, which
reduces temporal artifacts without introducing significant processing overhead or causing
unnecessary blur.
1 Introduction
Improvements in general-purpose computer systems have recently allowed ray tracing
to be used in interactive applications [10,13,18]. For some applications, such as isosurface
rendering [14], ray tracing outperforms even highly optimized z-buffer methods.
In this paper we present an algorithm to reduce dynamic artifacts in an interactive ray
tracing system. This algorithm is relatively simple, and attacks what we believe is the
dominant visual artifact in current interactive ray tracing systems.
Because ray tracing has a computational cost roughly linear in the number of sam-
ples (primary rays), interactive implementations have used one primary ray per pixel
and relatively small image sizes, with 512 by 512 images being common. As raw pro-
cessor performance increases, these systems can improve by creating larger images,
increasing frame rate, or by taking multiple samples per pixel. Once the frame rate is
at interactive levels (approximately 15 frames per second or higher), our experience is
that increasing the number of pixels is usually a better option than using multiple pri-
mary rays per pixel. Current systems that ray trace large datasets are approximately
one hundred times too slow to render to the highest-resolution screens at thirty frames
per second. This suggests that even if Moore's Law continues to hold for another ten
years, we will still be taking only one primary ray per pixel until at least that time.
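As a rough check of this estimate: with performance doubling every 18 months (an assumed Moore's-law rate; the paper does not state the doubling period), closing a 100x gap takes roughly ten years:

```python
import math

# Back-of-envelope check of the claim above: if current systems are
# about 100x too slow, and performance doubles roughly every 18
# months, how long until one ray per pixel no longer limits us?
shortfall = 100.0        # factor by which current systems are too slow
doubling_period = 1.5    # years per performance doubling (assumed)

doublings_needed = math.log2(shortfall)            # about 6.6 doublings
years_needed = doublings_needed * doubling_period  # about 10 years

print(f"{doublings_needed:.1f} doublings -> {years_needed:.1f} years")
```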
Unfortunately, one sample per pixel ray tracing introduces aliasing issues which
need to be addressed. Most anti-aliasing approaches were developed in an era where
the available processing power allowed only relatively simple scenes to be rendered and
displayed. As a consequence, the ratio of polygons to pixels was low, usually resulting
in long straight edges. Aliasing issues are particularly evident in such images, revealing
themselves in the form of jaggies and Moiré patterns [1,2]. Animations of such scenes
cause the running ants effect along edges and visually distracting scintillations when
texture mapping is applied.
(Footnote: Recent advances in display technology have produced LCD displays with
approximately 9 million pixels, a factor of 3 to 8 higher than is currently common.)
Fig. 1. Point tracking algorithm. (The figure shows 3D hit points from the last frame
reprojected through the new eye onto the new window; each retained point yields a new
sample and a new 3D hit point, while points that lose a conflict or fall outside the new
window are discarded, along with their 3D hit points.)
Two solutions are common: spatially filtering the scene before sampling, or spa-
tially filtering the image after sampling [5, 9,11,12]. Pre-filtering the scene, e.g. using
LOD management or MIP-mapping, may not always be possible, as is the case for the
35 million sphere crack propagation dataset used in Parker et al. [14]. Super-sampling
the image is considered too costly for interactive systems, as argued above. Another
trade-off exists in the form of stochastic sampling to turn aliasing into visually less of-
fensive noise [4,6]. A by-product of spatial filtering is that temporal artifacts are also
reduced, although temporal anti-aliasing methods have received some attention [8,16].
Since the development of these anti-aliasing schemes, two things have been chang-
ing. Scene complexity now commonly far exceeds pixel resolutions and display devices
are about to become available at much higher resolutions. The combination of these two
leads to the presence of an approximate continuum of high frequencies in the image,
which causes spatial aliasing to be visually masked [3,7]. Whenever this occurs, spatial
filtering produces images that contain more blur than necessary. Higher scene complex-
ity and device resolution do not tend to mask temporal aliasing artifacts as well. A good
solution to these issues would therefore preserve spatial crispness while at the same time
minimizing temporal artifacts.
Our approach is to revert to point sampling where sample locations in the current
frame are guided by results from previous frames. Such guidance introduces temporal
coherence that will reduce scintillation and popping. It is also computationally inexpen-
sive so that most of the processing power can be devoted to producing point samples of
highly complex scenes. Scene complexity in turn masks spatial aliasing. The method is
demonstrated using an interactive ray tracing engine for complex terrain models.
2 Point tracking algorithm
For high polygon-to-pixel ratios, the scene is sparsely sampled. In interactively ray
traced animations, a brute-force approach is likely to possess little temporal coherence:
with high probability, each pixel will have a different scene feature projected onto it
from frame to frame. The resulting popping and scintillation is visually distracting.
This effect can be minimized by using an algorithm which projects the same feature
within a region of a scene onto the screen over the course of a number of frames. We
therefore seek a method which keeps track of which points in the scene were sampled
during previous frames. This can be easily accomplished within a ray tracing frame-
work by storing the intersection points of the primary rays with their corresponding
pixels. Prior to rendering the next frame, these intersection points are associated with
new pixels using the perspective projection determined by the modified camera param-
eters. Instead of painting the reprojected points directly on the screen [19], which may
lead to inaccuracies, this reprojection serves the sole purpose of directing new rays
towards objects that have been previously hit.
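The reprojection step can be sketched as follows. This is an illustrative Python sketch assuming a simple pinhole camera; the function name and parameterization are ours, not the paper's.

```python
import numpy as np

def reproject_hit_points(hit_points, eye, view, up, fov_y, width, height):
    """Project last frame's 3D intersection points through the *new*
    camera, returning (px, py, depth) pixel coordinates, or None for
    points that end up behind the eye or outside the new window."""
    # Build an orthonormal camera basis for the new frame.
    w = view / np.linalg.norm(view)                # viewing direction
    u = np.cross(w, up); u /= np.linalg.norm(u)    # screen right
    v = np.cross(u, w)                             # screen up
    half_h = np.tan(np.radians(fov_y) / 2)
    half_w = half_h * width / height

    pixels = []
    for p in hit_points:
        d = p - eye
        depth = np.dot(d, w)
        if depth <= 0:                 # behind the new eye: discard
            pixels.append(None)
            continue
        x = np.dot(d, u) / depth       # perspective divide
        y = np.dot(d, v) / depth
        px = int((x / half_w * 0.5 + 0.5) * width)
        py = int((y / half_h * 0.5 + 0.5) * height)
        if 0 <= px < width and 0 <= py < height:
            pixels.append((px, py, depth))
        else:
            pixels.append(None)        # outside the new window: discard
    return pixels
```

A point straight ahead of the new eye lands at the center pixel, as expected.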
After the reprojection of the last frame’s intersection points, for each pixel one of
three situations may arise:
- One intersection point was projected onto the pixel. This is the ideal case, in
  which scintillation is avoided.
- Two or more intersection points were projected onto the pixel. Here, an oracle
  is required to choose which of the points will be used. We use a simple heuristic
  that chooses the point with the smallest distance to the camera origin; points
  further away are more likely to be occluded by intervening edges and are
  therefore less suitable candidates.
- Zero points were projected onto the pixel. For such pixels no guidance can be
  provided, and a regular or jittered ray is chosen instead.
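The distance-based oracle for the second case can be sketched as follows (illustrative Python with hypothetical names; the paper's system would store this per pixel rather than in a dictionary):

```python
def select_guides(projections):
    """Resolve conflicts when several reprojected points land on the
    same pixel: keep the point nearest the camera, on the assumption
    that farther points are more likely to be occluded.

    `projections` is a list of (px, py, depth, point3d) tuples.
    Returns a dict mapping pixel -> point3d; pixels absent from the
    dict get a fresh regular or jittered ray instead."""
    guides = {}
    for px, py, depth, point in projections:
        key = (px, py)
        if key not in guides or depth < guides[key][0]:
            guides[key] = (depth, point)   # nearer point wins
    return {key: point for key, (depth, point) in guides.items()}
```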
For each pixel a new ray is traced. Instead of using image space heuristics to choose
ray directions, an object space criterion is used: we aim for the intersection points that
were projected onto each pixel. A new regular or jittered ray direction is chosen only
in the absence of a reprojected point for a given pixel.
Once stored intersection points have been used for the current frame, they are dis-
carded. The ray tracing process produces shaded pixel information for the current frame
as well as a list of intersection points to be used for the next frame.
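Taken together, the per-frame procedure might look like the following sketch, where `trace` and `reproject` stand in for the renderer's ray caster and camera projection (all names here are illustrative, not the paper's):

```python
import random

def render_frame(camera, prev_hits, trace, reproject, width, height):
    """One frame of the point-tracking loop.  Returns the frame buffer
    and the hit points to guide the next frame."""
    # 1. Reproject last frame's hit points into the new view; when more
    #    than one point lands on a pixel, the nearest point wins.
    guides = {}
    for point in prev_hits:
        hit = reproject(camera, point)       # -> (px, py, depth) or None
        if hit is None:
            continue                         # left the window: discard
        px, py, depth = hit
        if (px, py) not in guides or depth < guides[(px, py)][0]:
            guides[(px, py)] = (depth, point)

    # 2. Trace one ray per pixel: aim at the guide point if one exists,
    #    otherwise fall back to a jittered ray through the pixel.
    frame, new_hits = {}, []
    for py in range(height):
        for px in range(width):
            if (px, py) in guides:
                target = guides[(px, py)][1]
                color, hit_point = trace(camera, px, py, toward=target)
            else:
                jitter = (random.random(), random.random())
                color, hit_point = trace(camera, px, py, jitter=jitter)
            frame[(px, py)] = color
            if hit_point is not None:
                new_hits.append(hit_point)   # guides for the next frame
    return frame, new_hits
```

The old hit points are simply dropped once consumed; only the freshly traced intersections survive into the next frame, matching the discard rule above.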
3 Results
To demonstrate the effectiveness of this point tracking method, we use the interactive
ray tracer of Parker et al. [13] to render two different terrain models, which are called
Hogum and Range-400. Images of these data sets are given in Figure 2 and their geo-
metric complexity is given in Table 1. The intersection routine is similar to that used for
isosurface rendering [14]. The height field is stored in a multiply nested 2D regular grid
with the minimum and maximum heights recorded for each grid cell. When a leaf cell
is an intersection candidate, the exact intersection is computed with the bilinear patch
defined by the four points. The measurements presented in this paper are all collected
using a 32 processor SGI Origin 3800. Each processor is an R12k running at 400 MHz.
We used 30 processors for each run, plus an additional processor running the display.
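The min/max culling test for a candidate height-field cell can be sketched as follows (an illustrative sketch of the idea described above, not the system's actual routine):

```python
def cell_is_candidate(ray_org_z, ray_dir_z, t_enter, t_exit, min_h, max_h):
    """Min/max culling test for one height-field grid cell.

    The ray crosses the cell's footprint over the parameter interval
    [t_enter, t_exit]; the cell can only contain an intersection with
    the bilinear patch if the ray's height span over that interval
    overlaps the cell's recorded [min_h, max_h] height range."""
    z0 = ray_org_z + t_enter * ray_dir_z   # ray height entering the cell
    z1 = ray_org_z + t_exit * ray_dir_z    # ray height leaving the cell
    lo, hi = min(z0, z1), max(z0, z1)
    return lo <= max_h and hi >= min_h     # interval overlap test
```

Only cells passing this test pay for the exact bilinear-patch intersection.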
The computational cost of the point tracking system is assessed in Figure 3. In
this graph, we compare the interactive ray tracer with and without point tracking for
different image resolutions. For both data sets it appears that satisfactory frame rates
can be achieved for image sizes up to 512 × 512 pixels.
The performance penalty of using the point tracking algorithm is roughly 20%,
which is small enough to warrant its use. Although we compare with point sampled
images, to obtain similar (but more blurred) results, one would have to compare its
performance with super-sampled images, whose cost is linear in the number of samples
taken per pixel. Both single sample images with and without point tracking are vastly
less expensive than super-sampled images.
The visual appearance of the animations with and without point tracking is presented
in the accompanying video. (Footnote: The video is in SVHS format and was captured
using the direct SVHS output of the Origin system. This does cause some blurring and
also some extra popping. However, the relative magnitude of the visual artifacts is
approximately what we observe on the actual display.) Assuming a worst-case scenario,
the point tracking algorithm is most effective if a single intersection point projects
onto a given pixel. This
Fig. 2. Hogum and Range-400 terrain datasets.
Terrain      Resolution     Texture resolution
Hogum        175 × 199      1740 × 1980
Range-400    3275 × 2163    3275 × 2163

Table 1. Terrain complexity.
[Figure: plot of time per frame (seconds/frame) versus image size (in pixels), with
four curves: Hogum and Range-400, each rendered by brute force and with point
tracking; annotated frame rates range from 44-57 fps at the smallest image size down
to 2.8 fps at the largest.]

Fig. 3. Hogum and Range-400 data sets rendered with and without point tracking at
image resolutions of 256 × 256, 384 × 384, 512 × 512, 640 × 640, 768 × 768, 896 × 896,
and 1024 × 1024 pixels. Shown is the average frame rate achieved for each animation.
             Intersection points per pixel
               0        1        2       3       4
Hogum       21.93%   58.22%   18.18%   1.63%   0.04%
Range-400   19.68%   64.73%   14.21%   1.33%   0.05%

Table 2. Percentage of pixels that have 0 to 4 intersection points reproject to them.
The ideal case is where 1 intersection point projects to a pixel.
is when perfect frame-to-frame coherence is achieved. When zero points project to a
pixel, a ray cast for that pixel is likely to produce a contrast change, causing scintil-
lation. When more than one point reprojects to a single pixel, some information will
locally disappear for the next frame. This may also lead to scintillation.
For the above experiments, roughly 60% of the pixels in each frame are covered
by exactly one intersection point (Table 2). In the remaining cases, either zero or more
than one intersection point reprojects to the same pixel. Hence, one could argue that our
algorithm presents a 60% visual improvement over standard point sampled animated
imagery. Note that these numbers are dependent on both the model and camera speed.
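Statistics like those in Table 2 can be gathered by counting how many reprojected points land on each pixel; a sketch with illustrative names:

```python
from collections import Counter

def reprojection_histogram(pixel_hits, width, height, max_count=4):
    """Tally the fraction of pixels receiving 0, 1, 2, ... reprojected
    points.  `pixel_hits` is a list of (px, py) reprojection targets;
    counts above `max_count` are clamped into the last bucket.

    Returns a list of fractions indexed by point count."""
    per_pixel = Counter(pixel_hits)          # points landing on each pixel
    total = width * height
    counts = [0] * (max_count + 1)
    for n in per_pixel.values():
        counts[min(n, max_count)] += 1
    counts[0] = total - len(per_pixel)       # pixels that received nothing
    return [c / total for c in counts]
```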
4 Conclusions
Our paper has assumed that multisampled spatial filtering will not soon be practical for
interactive ray tracing. Anticipating a trend towards higher resolution display devices,
it is also likely that spatial filtering to achieve anti-aliasing will become less necessary,
because the effect of spatial aliasing will largely be masked. However, the same is not
true for temporal aliasing. Combining the performance advantage of point sampling
over super-sampling with a mechanism to reduce temporal aliasing, we have achieved
visually superior animations at a cost that is only slightly higher than point sampling
alone. Moreover, its computational expense is much less than the cost of conventional
anti-aliasing schemes such as super-sampling. Using this point tracking algorithm, the
increase in computational power which is to be expected according to Moore’s Law can
be spent on rendering larger images, rather than on expensive filtering operations.
We speculate that, in the future, the point tracking technique may be applied inde-
pendent of the reconstruction used for display. If we allow tracked points to persist even
when they co-occupy a pixel, then more sophisticated reconstruction methods could be
applied to increase point reuse. This would be in the spirit of Simmons et al. [17]
and Popescu et al. [15]. We have not pursued this in the present work, noting that the
former use sparser sample sets than we do, and the latter technique requires special pur-
pose hardware. Nonetheless, we believe progress in computational performance will
continue to outpace improvements in screen resolution, and conclude that such sophis-
ticated reconstruction using tracked points will eventually be possible.
Acknowledgments

The authors would like to thank Brian Smits for insightful discussions. This work was
supported in part by the NSF Science and Technology Center for Computer Graph-
ics and Scientific Visualization (ASC-89-20219), and NSF grants 97-31859, 99-78099,
and 97-20192. All opinions, findings, conclusions or recommendations expressed in
this document are those of the authors and do not necessarily reflect the views of the
sponsoring agencies.
References

1. Blinn, J. F. Return of the jaggy. IEEE Computer Graphics and Applications 9, 2
   (March 1989), 82-89.
2. Blinn, J. F. What we need around here is more aliasing. IEEE Computer Graphics
   and Applications 9, 1 (January 1989), 75-79.
3. Bolin, M. R., and Meyer, G. W. A perceptually based adaptive sampling algorithm.
   In Proceedings of SIGGRAPH 98 (July 1998), pp. 299-310. Held in Orlando, Florida.
4. Cook, R. L. Stochastic sampling in computer graphics. ACM Transactions on
   Graphics 5, 1 (January 1986), 51-72.
5. Crow, F. C. A comparison of antialiasing techniques. IEEE Computer Graphics and
   Applications 1, 1 (January 1981), 40-48.
6. Dippé, M. A. Z., and Wold, E. H. Antialiasing through stochastic sampling. In
   Computer Graphics (SIGGRAPH '85 Proceedings) (July 1985), B. A. Barsky, Ed.,
   vol. 19, pp. 69-78.
7. Ferwerda, J. A., Shirley, P., Pattanaik, S. N., and Greenberg, D. P. A model of
   visual masking for computer graphics. In Proceedings of SIGGRAPH 97 (August
   1997), pp. 143-152. Held in Los Angeles, California.
8. Grant, C. W. Integrated analytic spatial and temporal anti-aliasing for polyhedra
   in 4-space. In Computer Graphics (SIGGRAPH '85 Proceedings) (July 1985), B. A.
   Barsky, Ed., vol. 19, pp. 79-84.
9. Mitchell, D. P. Generating antialiased images at low sampling densities. In
   Computer Graphics (SIGGRAPH '87 Proceedings) (July 1987), M. C. Stone, Ed.,
   vol. 21, pp. 65-72. Held in Anaheim, California.
10. Muuss, M. J. Towards real-time ray-tracing of combinatorial solid geometric
    models. In Proceedings of BRL-CAD Symposium (June 1995).
11. Norton, A., Rockwood, A. P., and Skolmoski, P. T. Clamping: A method of
    antialiasing textured surfaces by bandwidth limiting in object space. In Computer
    Graphics (SIGGRAPH '82 Proceedings) (July 1982), vol. 16, pp. 1-8.
12. Painter, J., and Sloan, K. Antialiased ray tracing by adaptive progressive
    refinement. In Computer Graphics (SIGGRAPH '89 Proceedings) (July 1989),
    J. Lane, Ed., vol. 23, pp. 281-288. Held in Boston, Massachusetts.
13. Parker, S., Martin, W., Sloan, P.-P., Shirley, P., Smits, B., and Hansen, C.
    Interactive ray tracing. In 1999 ACM Symposium on Interactive 3D Graphics
    (April 1999).
14. Parker, S., Parker, M., Livnat, Y., Sloan, P.-P., Hansen, C., and Shirley, P.
    Interactive ray tracing for volume visualization. IEEE Transactions on
    Visualization and Computer Graphics (July-September 1999).
15. Popescu, V., Eyles, J., Lastra, A., Steinhurst, J., England, N., and Nyland, L.
    The WarpEngine: An architecture for the post-polygonal age. In Proceedings of
    SIGGRAPH 2000 (July 2000), pp. 433-442.
16. Shinya, M. Spatial anti-aliasing for animation sequences with spatio-temporal
    filtering. In Computer Graphics (SIGGRAPH '93 Proceedings) (1993), J. T. Kajiya,
    Ed., vol. 27, pp. 289-296.
17. Simmons, M., and Séquin, C. H. Tapestry: A dynamic mesh-based display
    representation for interactive rendering. In Rendering Techniques 2000: 11th
    Eurographics Workshop on Rendering (June 2000), pp. 329-340.
18. Wald, I., Slusallek, P., Benthin, C., and Wagner, M. Interactive rendering with
    coherent ray tracing. In Eurographics 2001 (2001).
19. Walter, B., Drettakis, G., and Parker, S. Interactive rendering using the render
    cache. In Rendering Techniques '99 (1999), D. Lischinski and G. W. Larson, Eds.,
    Eurographics, Springer-Verlag, pp. 19-30.