Steven G. Parker

NVIDIA, Santa Clara, California, United States

Publications (75) · 43.44 Total Impact Points

  • ABSTRACT: The NVIDIA® OptiX™ ray tracing engine is a programmable system designed for NVIDIA GPUs and other highly parallel architectures. The OptiX engine builds on the key observation that most ray tracing algorithms can be implemented using a small set of programmable operations. Consequently, the core of OptiX is a domain-specific just-in-time compiler that generates custom ray tracing kernels by combining user-supplied programs for ray generation, material shading, object intersection, and scene traversal. This enables the implementation of a highly diverse set of ray tracing-based algorithms and applications, including interactive rendering, offline rendering, collision detection systems, artificial intelligence queries, and scientific simulations such as sound propagation. OptiX achieves high performance through a compact object model and application of several ray tracing-specific compiler optimizations. For ease of use it exposes a single-ray programming model with full support for recursion and a dynamic dispatch mechanism similar to virtual function calls.
    No preview · Article · May 2013 · Communications of the ACM · (see sketch after this list)
  • Source
    ABSTRACT: The NVIDIA® OptiX™ ray tracing engine is a programmable system designed for NVIDIA GPUs and other highly parallel architectures. The OptiX engine builds on the key observation that most ray tracing algorithms can be implemented using a small set of programmable operations. Consequently, the core of OptiX is a domain-specific just-in-time compiler that generates custom ray tracing kernels by combining user-supplied programs for ray generation, material shading, object intersection, and scene traversal. This enables the implementation of a highly diverse set of ray tracing-based algorithms and applications, including interactive rendering, offline rendering, collision detection systems, artificial intelligence queries, and scientific simulations such as sound propagation. OptiX achieves high performance through a compact object model and application of several ray tracing-specific compiler optimizations. For ease of use it exposes a single-ray programming model with full support for recursion and a dynamic dispatch mechanism similar to virtual function calls.
    Full-text · Article · Jul 2010 · ACM Transactions on Graphics · (see sketch after this list)
  • ABSTRACT: We present a physically based method for visualizing deformation in particle simulations, such as those describing structural mechanics simulations. The method uses the deformation gradient tensor to transform carefully chosen glyphs representing each particle. The visualization approximates how simulated objects responding to applied forces might look in reality, allowing for a better understanding of material deformation, an important indicator of, for example, material failure. It can also help highlight possible errors and numerical deficiencies in the simulation itself, suggesting how simulations might be changed to yield more accurate results.
    No preview · Article · Jul 2010 · Computer Modeling in Engineering and Sciences · (see sketch after this list)
  • Source
    Vincent Pegoraro · Mathias Schott · Steven G. Parker
    ABSTRACT: Due to the intricate nature of the equation governing light transport in participating media, accurately and efficiently simulating radiative energy transfer remains very challenging in spite of its broad range of applications. As an alternative to traditional numerical estimation methods such as ray-marching and volume-slicing, a few analytical approaches to solving single scattering have been proposed but current techniques are limited to the assumption of isotropy, rely on simplifying approximations and/or require substantial numerical precomputation and storage. In this paper, we present the very first closed-form solution to the air-light integral in homogeneous media for general 1-D anisotropic phase functions and punctual light sources. By addressing an open problem in the overall light transport literature, this novel theoretical result enables the analytical computation of exact solutions to complex scattering phenomena while achieving semi-interactive performance on graphics hardware for several common scattering modes.
    Preview · Article · Jun 2010 · Computer Graphics Forum · (see sketch after this list)
  • Conference Paper: OptiX

    No preview · Conference Paper · Jan 2010
  • Source
    ABSTRACT: The OptiX™ engine is a programmable ray tracing system designed for NVIDIA® GPUs and other highly parallel architectures. OptiX builds on the key observation that most ray tracing algorithms can be implemented using a small set of programmable operations. Consequently, the core of OptiX is a domain-specific just-in-time compiler that generates custom ray tracing kernels by combining user-supplied programs for ray generation, material shading, object intersection, and scene traversal. This enables the implementation of a highly diverse set of ray tracing-based algorithms and applications, including interactive rendering, offline rendering, collision detection systems, artificial intelligence queries, and scientific simulations such as sound propagation. OptiX achieves high performance through a compact object model and application of several ray tracing-specific compiler optimizations. For ease of use it exposes a single-ray programming model with full support for recursion and a dynamic dispatch mechanism similar to virtual function calls.
    Full-text · Conference Paper · Jan 2010 · (see sketch after this list)

  • No preview · Chapter · Dec 2009
  • Source

    Full-text · Chapter · Dec 2009
  • Source
    ABSTRACT: Ray tracing has long been a method of choice for off-line rendering, but traditionally was too slow for interactive use. With faster hardware and algorithmic improvements this has recently changed, and real-time ray tracing is finally within reach. However, real-time capability also opens up new problems that do not exist in an off-line environment. In particular real-time ray tracing offers the opportunity to interactively ray trace moving/animated scene content. This presents a challenge to the data structures that have been developed for ray tracing over the past few decades. Spatial data structures crucial for fast ray tracing must be rebuilt or updated as the scene changes, and this can become a bottleneck for the speed of ray tracing. This bottleneck has received much recent attention by researchers that has resulted in a multitude of different algorithms, data structures, and strategies for handling animated scenes. The effectiveness of techniques for ray tracing dynamic scenes vary dramatically depending on details such as scene complexity, model structure, type of motion, and the coherency of the rays. Consequently, there is so far no approach that is best in all cases, and determining the best technique for a particular problem can be a challenge. In this STAR, we aim to survey the different approaches to ray tracing animated scenes, discussing their strengths and weaknesses, and their relationship to other approaches. The overall goal is to help the reader choose the best approach depending on the situation, and to expose promising areas where there is potential for algorithmic improvements.
    Full-text · Article · Sep 2009 · Computer Graphics Forum
  • Vincent Pegoraro · Steven G. Parker
    ABSTRACT: Despite their numerous applications, efficiently rendering participating media remains a challenging task due to the intricacy of the radiative transport equation. As they provide a generic means of solving a wide variety of problems, numerical methods are most often used to solve the air-light integral even under simplifying assumptions. In this paper, we present a novel analytical approach to single scattering from isotropic point light sources in homogeneous media. We derive the first closed-form solution to the air-light integral in isotropic media and extend this formulation to anisotropic phase functions. The technique relies neither on pre-computation nor on storage, and we provide a practical implementation allowing for an explicit control on the accuracy of the solutions. Finally, we demonstrate its quantitative and qualitative benefits over both previous numerical and analytical approaches.
    No preview · Article · Apr 2009 · Computer Graphics Forum · (see sketch after this list)
  • Source
    Siu Yau · Vijay Karamcheti · Denis Zorin · Kostadin Damevski · Steven G. Parker
    ABSTRACT: This paper presents a system deployed on parallel clusters to manage a collection of parallel simulations that make up a computational study. It explores how such a system can extend traditional parallel job scheduling and resource allocation techniques to incorporate knowledge specific to the study. Using a UINTAH-based helium gas simulation code (ARCHES) and the SimX system for multi-experiment computational studies, this paper demonstrates that, by using application-specific knowledge in resource allocation and scheduling decisions, one can reduce the run time of a computational study from over 20 hours to under 4.5 hours on a 32-processor cluster, and from almost 11 hours to just over 3.5 hours on a 64-processor cluster.
    Full-text · Conference Paper · Feb 2009
  • Source
    Vincent Pegoraro · Mathias Schott · Steven G. Parker
    ABSTRACT: Despite their numerous applications, efficiently rendering participating media remains a challenging task due to the intricacy of the radiative transport equation. While numerical techniques remain the method of choice for addressing complex problems, a closed-form solution to the air-light integral in optically thin isotropic media was recently derived. In this paper, we extend this work and present a novel analytical approach to single scattering from point light sources in homogeneous media. We propose a combined formulation of the air-light integral which allows both anisotropic phase functions and light distributions to be adequately handled. The technique relies neither on precomputation nor on storage, and we provide a robust and efficient implementation allowing for an explicit control on the accuracy of the results. Finally, the performance characteristics of the method on graphics hardware are evaluated and demonstrate its suitability to real-time applications.
    Preview · Conference Paper · Jan 2009 · (see sketch after this list)
  • Source
    Thiago Ize · Ingo Wald · Steven G. Parker
    ABSTRACT: One of the most fundamental concepts in computer graphics is binary space subdivision. In its purest form, this concept leads to binary space partitioning trees (BSP trees) with arbitrarily oriented space partitioning planes. In practice, however, most algorithms use kd-trees—a special case of BSP trees that restrict themselves to axis-aligned planes—since BSP trees are believed to be numerically unstable, costly to traverse, and intractable to build well. In this paper, we show that this is not true. Furthermore, after optimizing our general BSP traversal to also have a fast kd-tree style traversal path for axis-aligned splitting planes, we show it is indeed possible to build a general BSP based ray tracer that is highly competitive with state of the art BVH and kd-tree based systems. We demonstrate our ray tracer on a variety of scenes, and show that it is always competitive with—and often superior to—state of the art BVH and kd-tree based ray tracers.
    Full-text · Conference Paper · Sep 2008 · (see sketch after this list)
  • Source
    A.N.M. Imroz Choudhury · Steven G. Parker
    ABSTRACT: Though the goal of ray tracing and other physically based rendering techniques is ultimately to produce photorealistic images, it is often helpful to use non-photorealistic rendering techniques to illustrate or highlight certain features in a rendering. We present a method for ray tracing constant screen-width NPR-style feature lines on top of regularly rendered scenes, demonstrating how a variant of line rasterization can be included in a ray tracer, thus allowing for the inclusion of NPR-style enhancements. We are able to render silhouette edges, marking the boundary of an object in screen space against the background (or against farther parts of the same object), intersection lines, marking the curves along which two primitives intersect, and crease edges, indicating curves along which a primitive’s normal field is discontinuous. Including these lines gives the viewer an additional cue to relative positions of objects within the scene, and also enhances particular features within objects, such as sharp corners. The method in this paper was developed in particular for enhancing glyph-based scientific visualization; however, the basic technique can be adapted for many illustrative purposes in different settings.
    Preview · Conference Paper · Sep 2008 · (see sketch after this list)
  • Source
    ABSTRACT: This paper presents a novel method that effectively combines both control variates and importance sampling in a sequential Monte Carlo context while handling general single-bounce global illumination effects. The radiance estimates computed during the rendering process are cached in an adaptive per-pixel structure that defines dynamic predicate functions for both variance reduction techniques and guarantees well-behaved PDFs, yielding continually increasing efficiencies thanks to a marginal computational overhead. While remaining unbiased, the technique is effective within a single pass as both estimation and caching are done online, exploiting the coherency in illumination while being independent of the actual scene representation. The method is relatively easy to implement and to tune via a single parameter, and we demonstrate its practical benefits with important gains in convergence rate and applications to both off-line and progressive interactive rendering.
    Preview · Conference Paper · Sep 2008 · (see sketch after this list)
  • Source
    Vincent Pegoraro · Ingo Wald · Steven G. Parker
    ABSTRACT: This paper presents a novel method that effectively combines both control variates and importance sampling in a sequential Monte Carlo context. The radiance estimates computed during the rendering process are cached in a 5D adaptive hierarchical structure that defines dynamic predicate functions for both variance reduction techniques and guarantees well-behaved PDFs, yielding continually increasing efficiencies thanks to a marginal computational overhead. While remaining unbiased, the technique is effective within a single pass as both estimation and caching are done online, exploiting the coherency in illumination while being independent of the actual scene representation. The method is relatively easy to implement and to tune via a single parameter, and we demonstrate its practical benefits with important gains in convergence rate and competitive results with state of the art techniques.
    Full-text · Article · Jun 2008 · Computer Graphics Forum · (see sketch after this list)
  • Source
    ABSTRACT: We present the Memory Trace Visualizer (MTV), a tool that provides interactive visualization and analysis of the sequence of memory operations performed by a program as it runs. As improvements in processor performance continue to outpace improvements in memory performance, tools to understand memory access patterns are increasingly important for optimizing data intensive programs such as those found in scientific computing. Using visual representations of abstract data structures, a simulated cache, and animating memory operations, MTV can expose memory performance bottlenecks and guide programmers toward memory system optimization opportunities. Visualization of detailed memory operations provides a powerful and intuitive way to expose patterns and discover bottlenecks, and is an important addition to existing statistical performance measurements.
    Full-text · Article · May 2008 · Computer Graphics Forum · (see sketch after this list)
  • Source
    ABSTRACT: This paper presents a system supporting reuse of simulation results in multi-experiment computational studies involving independent simulations and explores the benefits of such reuse. Using a SCIRun-based defibrillator device simulation code (DefibSim) and the SimX system for computational studies, this paper demonstrates how aggressive reuse between and within computational studies can enable interactive rates for such studies on a moderate-sized 128-node processor cluster; a brute-force approach to the problem would require two thousand nodes or more on a massively parallel machine for similar performance. Key to realizing these performance improvements is exploiting optimization opportunities that present themselves at the level of the overall workflow of the study as opposed to focusing on individual simulations. Such global optimization approaches are likely to become increasingly important with the shift towards interactive and universal parallel computing.
    Full-text · Conference Paper · Apr 2008 · (see sketch after this list)
  • Source
    Ingo Wald · Thiago Ize · Steven G. Parker
    ABSTRACT: Recent developments have produced several techniques for interactive ray tracing of dynamic scenes. In particular, bounding volume hierarchies (BVHs) are efficient acceleration structures that handle complex triangle distributions and can accommodate deformable scenes by updating (refitting) the bounding primitive without restructuring the entire tree. Unfortunately, updating only the bounding primitive can result in a degradation of the quality of the BVH, and in some scenes will result in a dramatic deterioration of rendering performance. In this paper, we present three different orthogonal techniques to avoid that deterioration: (a) quickly rebuilding the BVH using a fast, binning-based approach; (b) a parallel variant of that build to better exploit the multi-core architecture of modern CPUs; (c) asynchronously rebuilding the BVH concurrently with rendering and animation, allowing it to scale to even larger models by stretching the (parallel) BVH build over one or more frames. Our approach is particularly targeted toward future "many-core" architectures, and allows for flexibly allocating how many cores are used for rebuilding vs. how many are used for rendering.
    Full-text · Article · Feb 2008 · Computers & Graphics · (see sketch after this list)
  • Source
    Christiaan P. Gribble · Carson Brownlee · Steven G. Parker
    ABSTRACT: Particle-based simulation methods are used to model a wide range of complex phenomena and to solve time-dependent problems of various scales. Effective visualizations of the resulting state will communicate subtle changes in the three-dimensional structure, spatial organization, and qualitative trends within a simulation as it evolves. We present two algorithms targeting upcoming, highly parallel multicore desktop systems to enable interactive navigation and exploration of large particle data sets with global illumination effects. Monte Carlo path tracing and texture mapping are used to capture computationally expensive illumination effects such as soft shadows and diffuse interreflection. The first approach is based on precomputation of luminance textures and removes expensive illumination calculations from the interactive rendering pipeline. The second approach is based on dynamic luminance texture generation and decouples interactive rendering from the computation of global illumination effects. These algorithms provide visual cues that enhance the ability to perform analysis and feature detection tasks while interrogating the data at interactive rates. We explore the performance of these algorithms and demonstrate their effectiveness using several large data sets.
    Full-text · Article · Feb 2008 · Computers & Graphics · (see sketch after this list)
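
Technique Sketches

The three OptiX entries above all describe a core built from a small set of user-supplied programs: ray generation, object intersection, material shading, and traversal, fused by a domain-specific just-in-time compiler. The following minimal C++ sketch shows the shape of that single-ray programming model; every type and name in it is a hypothetical stand-in for illustration, not the actual OptiX API, and plain std::function dispatch stands in for the compiled dynamic dispatch the papers describe.

    #include <cstdio>
    #include <functional>
    #include <vector>

    // Hypothetical stand-ins for the programmable operations the papers name.
    struct Ray { float o[3], d[3]; };
    struct Hit { bool valid = false; float t = 1e30f; int material = 0; };

    using IntersectFn = std::function<void(const Ray&, Hit&)>;        // object intersection
    using ShadeFn     = std::function<float(const Ray&, const Hit&)>; // shading / miss

    // "Traversal" visits every object; a real system would walk an
    // acceleration structure instead of a flat list.
    float trace(const Ray& ray,
                const std::vector<IntersectFn>& objects,
                const std::vector<ShadeFn>& materials,
                const ShadeFn& miss) {
        Hit hit;
        for (const auto& intersect : objects)
            intersect(ray, hit);                 // each may shrink hit.t
        return hit.valid ? materials[hit.material](ray, hit) : miss(ray, hit);
    }

    int main() {
        // One "object": the plane z = 5, hit when the ray points toward it.
        std::vector<IntersectFn> objects = {[](const Ray& r, Hit& h) {
            if (r.d[2] > 0.0f) {
                float t = (5.0f - r.o[2]) / r.d[2];
                if (t < h.t) { h.valid = true; h.t = t; h.material = 0; }
            }
        }};
        std::vector<ShadeFn> materials = {
            [](const Ray&, const Hit& h) { return 1.0f / h.t; }};
        ShadeFn miss = [](const Ray&, const Hit&) { return 0.0f; };

        Ray ray{{0, 0, 0}, {0, 0, 1}};           // "ray generation" for one pixel
        std::printf("radiance = %f\n", trace(ray, objects, materials, miss));
    }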
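
The deformation-visualization entry transforms a carefully chosen glyph at each particle by the deformation gradient tensor, so stretched and sheared glyphs approximate how the material itself deforms. A minimal sketch of that per-vertex transform, x = F * X, with made-up tensor values:

    #include <array>
    #include <cstdio>

    using Vec3 = std::array<float, 3>;
    using Mat3 = std::array<Vec3, 3>;  // row-major 3x3 tensor

    // Map a glyph vertex from its reference shape through the deformation
    // gradient: x = F * X. An identity F leaves the glyph undeformed.
    Vec3 deform(const Mat3& F, const Vec3& X) {
        Vec3 x{};
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                x[i] += F[i][j] * X[j];
        return x;
    }

    int main() {
        // Made-up tensor: 2x stretch along x plus a shear of y into x.
        Mat3 F = {{{2.0f, 0.5f, 0.0f},
                   {0.0f, 1.0f, 0.0f},
                   {0.0f, 0.0f, 1.0f}}};
        Vec3 v = deform(F, {1.0f, 1.0f, 0.0f});  // one vertex of a unit glyph
        std::printf("deformed vertex: (%g, %g, %g)\n", v[0], v[1], v[2]);
    }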
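
The three single-scattering entries above (Pegoraro et al.) derive closed-form solutions to the air-light integral; those closed forms are the papers' contribution and too involved to restate here, so this sketch instead shows the traditional ray-marching estimator they replace and compare against, for a point light in a homogeneous medium with the standard Henyey-Greenstein anisotropic phase function. The scene parameters in main are arbitrary.

    #include <cmath>
    #include <cstdio>

    const double kPi = 3.14159265358979323846;

    // Henyey-Greenstein phase function, the standard 1-D anisotropic model.
    double hg(double cosTheta, double g) {
        double denom = 1.0 + g * g - 2.0 * g * cosTheta;
        return (1.0 - g * g) / (4.0 * kPi * denom * std::sqrt(denom));
    }

    // March the air-light integral along a unit view ray from the origin:
    // at each sample, attenuate to the eye, scatter toward the eye with the
    // phase function, and attenuate the point light's contribution.
    double airlight(const double d[3], const double light[3], double I,
                    double sigmaS, double sigmaT, double g,
                    double s, int steps) {
        double dt = s / steps, L = 0.0;
        for (int i = 0; i < steps; ++i) {
            double t = (i + 0.5) * dt;  // midpoint rule
            double w[3], dist2 = 0.0;
            for (int k = 0; k < 3; ++k) { w[k] = light[k] - d[k] * t; dist2 += w[k] * w[k]; }
            double dist = std::sqrt(dist2);
            double cosTheta = (d[0] * w[0] + d[1] * w[1] + d[2] * w[2]) / dist;
            L += std::exp(-sigmaT * t)                 // transmittance to the eye
               * sigmaS * hg(cosTheta, g)              // in-scattering
               * I * std::exp(-sigmaT * dist) / dist2  // light transmittance and falloff
               * dt;
        }
        return L;
    }

    int main() {
        // Arbitrary scene: unit ray along x, point light 1 unit off the ray.
        double dir[3] = {1, 0, 0}, light[3] = {5, 1, 0};
        std::printf("L = %g\n", airlight(dir, light, 10.0, 0.2, 0.4, 0.3, 10.0, 1000));
    }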
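
The BSP-tree entry (Ize, Wald, and Parker) argues that traversal with arbitrarily oriented planes can be as robust and fast as a kd-tree's. A minimal recursive traversal sketch under an assumed node layout: classify the ray segment against the splitting plane, descend into the near side first, and visit the far side only if the segment crosses the plane.

    #include <cstdio>
    #include <vector>

    struct Vec3  { double x, y, z; };
    struct Plane { Vec3 n; double d; };  // dot(n, p) + d = 0; n need not be axis-aligned

    struct Node {                        // hypothetical BSP node layout
        Plane plane{};
        int child[2] = {-1, -1};         // negative / positive half-space
        bool leaf = false;
        int primitive = -1;              // leaf payload
    };

    double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Visit the leaves overlapped by the ray segment [tMin, tMax], near side first.
    void traverse(const std::vector<Node>& tree, int node,
                  const Vec3& o, const Vec3& dir, double tMin, double tMax) {
        if (node < 0 || tMin > tMax) return;
        const Node& n = tree[node];
        if (n.leaf) { std::printf("leaf %d over [%g, %g]\n", n.primitive, tMin, tMax); return; }

        double denom = dot(n.plane.n, dir);
        double dist  = dot(n.plane.n, o) + n.plane.d;
        int sideMin = (dist + tMin * denom) < 0.0 ? 0 : 1;  // side of segment start
        int sideMax = (dist + tMax * denom) < 0.0 ? 0 : 1;  // side of segment end
        if (sideMin == sideMax) {                           // segment stays on one side
            traverse(tree, n.child[sideMin], o, dir, tMin, tMax);
        } else {                                            // segment crosses the plane
            double tSplit = -dist / denom;
            traverse(tree, n.child[sideMin], o, dir, tMin, tSplit);  // near half first
            traverse(tree, n.child[sideMax], o, dir, tSplit, tMax);
        }
    }

    int main() {
        std::vector<Node> tree(3);
        tree[0].plane = {{1, 1, 0}, -1.0};   // oblique plane x + y = 1
        tree[0].child[0] = 1; tree[0].child[1] = 2;
        tree[1].leaf = true; tree[1].primitive = 0;
        tree[2].leaf = true; tree[2].primitive = 1;
        traverse(tree, 0, {0, 0, 0}, {1, 0.5, 0}, 0.0, 10.0);
    }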
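
The feature-line entry detects silhouettes, intersections, and creases during ray tracing. One simple way to realize the silhouette and crease cases (a sketch, not necessarily the paper's exact test) is to compare what neighboring primary rays hit: a different object marks a silhouette, a sharply different normal marks a crease.

    #include <cstdio>
    #include <vector>

    // What a ray tracer records per pixel for its primary ray.
    struct Sample { int objectId; float nx, ny, nz; };  // objectId -1 = background

    // Feature test against the 8-neighborhood; creaseCos is the cosine
    // threshold beyond which a normal change counts as a crease edge.
    bool isFeature(const std::vector<Sample>& img, int w, int h,
                   int x, int y, float creaseCos) {
        const Sample& c = img[y * w + x];
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx) {
                int px = x + dx, py = y + dy;
                if (px < 0 || py < 0 || px >= w || py >= h || (dx == 0 && dy == 0))
                    continue;
                const Sample& s = img[py * w + px];
                if (s.objectId != c.objectId) return true;          // silhouette edge
                float d = s.nx * c.nx + s.ny * c.ny + s.nz * c.nz;
                if (c.objectId >= 0 && d < creaseCos) return true;  // crease edge
            }
        return false;
    }

    int main() {
        // Two pixels, two different objects: the boundary is a silhouette.
        std::vector<Sample> img = {{0, 0.f, 0.f, 1.f}, {1, 0.f, 0.f, 1.f}};
        std::printf("edge at (0,0): %d\n", (int)isFeature(img, 2, 1, 0, 0, 0.9f));
    }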
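
The two combined control-variates/importance-sampling entries rest on one estimator: draw samples from a PDF p, estimate the integral of f - g by importance sampling, and add back the exact integral of the control variate g. A 1-D toy sketch with hypothetical f, g, and p (the papers build these adaptively per pixel from cached radiance):

    #include <cmath>
    #include <cstdio>
    #include <random>

    // Hypothetical 1-D stand-ins: integrand f, control variate g with a
    // known integral G, and sampling PDF p(x) = 2x on [0, 1].
    double f(double x) { return x * std::exp(x); }  // exact integral over [0,1] is 1
    double g(double x) { return x; }                // control variate
    const double G = 0.5;                           // exact integral of g

    int main() {
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> uni(0.0, 1.0);
        const int N = 100000;
        double sum = 0.0;
        for (int i = 0; i < N; ++i) {
            double x = std::sqrt(1.0 - uni(rng));   // x ~ p via inverse CDF, x > 0
            sum += (f(x) - g(x)) / (2.0 * x);       // IS estimate of integral of f - g
        }
        // Combined estimator: exact control-variate part plus the residual.
        std::printf("estimate = %f (exact = 1.0)\n", G + sum / N);
    }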
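
The Memory Trace Visualizer entry mentions driving a simulated cache with the program's memory operations. A minimal direct-mapped cache simulator over an address trace, counting hits and misses; the cache geometry and trace are arbitrary examples:

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Direct-mapped cache model: 256 lines of 64 bytes, tags only.
    struct Cache {
        std::vector<uint64_t> tag;
        int hits = 0, misses = 0;
        Cache() : tag(256, ~0ull) {}

        void access(uint64_t addr) {
            uint64_t line = addr / 64;       // cache line the address maps to
            uint64_t set  = line % 256;
            if (tag[set] == line) ++hits;
            else { ++misses; tag[set] = line; }  // fill on miss
        }
    };

    int main() {
        Cache c;
        // Toy trace: two sequential passes over a 16 KB array at stride 8.
        // The array exactly fits, so the second pass hits on every access.
        for (int pass = 0; pass < 2; ++pass)
            for (uint64_t a = 0; a < 256 * 64; a += 8)
                c.access(a);
        std::printf("hits=%d misses=%d\n", c.hits, c.misses);
    }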
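
The result-reuse entry's central idea is to avoid re-running simulations whose parameters have already been evaluated, within and across studies. A toy memoization layer keyed on the parameter vector; the simulate function here is a stand-in for an expensive run such as one DefibSim evaluation:

    #include <cstdio>
    #include <map>
    #include <vector>

    // Stand-in for one expensive simulation run.
    double simulate(const std::vector<double>& params) {
        double s = 0.0;
        for (double p : params) s += p * p;  // pretend this takes minutes
        return s;
    }

    // Reuse layer: identical parameter vectors never run twice.
    struct Study {
        std::map<std::vector<double>, double> cache;
        int runs = 0;

        double evaluate(const std::vector<double>& params) {
            auto it = cache.find(params);
            if (it != cache.end()) return it->second;  // reuse a prior result
            ++runs;
            return cache[params] = simulate(params);
        }
    };

    int main() {
        Study study;
        study.evaluate({1.0, 2.0});
        study.evaluate({1.0, 2.0});  // identical request: served from the cache
        study.evaluate({2.0, 1.0});
        std::printf("simulation runs: %d for 3 requests\n", study.runs);
    }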
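
The dynamic-BVH entry contrasts refitting (update bounds, keep topology) with fast binned rebuilds. A sketch of the refit step under an assumed layout in which children are stored after their parent, so a single reverse sweep updates every node bottom-up:

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct AABB {
        float lo[3], hi[3];
        void grow(const AABB& b) {
            for (int i = 0; i < 3; ++i) {
                lo[i] = std::min(lo[i], b.lo[i]);
                hi[i] = std::max(hi[i], b.hi[i]);
            }
        }
    };

    struct Node {                      // assumed layout: children after parent
        int left = -1, right = -1;     // interior node if left >= 0
        int prim = -1;                 // otherwise a leaf's primitive index
        AABB box{};
    };

    // Bottom-up refit: one reverse sweep, since children follow their parent.
    // Topology is untouched, so tree quality can degrade under large motion,
    // which is exactly what motivates the paper's fast rebuilds.
    void refit(std::vector<Node>& nodes, const std::vector<AABB>& primBounds) {
        for (int i = (int)nodes.size() - 1; i >= 0; --i) {
            Node& n = nodes[i];
            if (n.left < 0) { n.box = primBounds[n.prim]; continue; }
            n.box = nodes[n.left].box;
            n.box.grow(nodes[n.right].box);
        }
    }

    int main() {
        std::vector<Node> nodes(3);            // one root with two leaves
        nodes[0].left = 1; nodes[0].right = 2;
        nodes[1].prim = 0; nodes[2].prim = 1;
        std::vector<AABB> prims = {{{0, 0, 0}, {1, 1, 1}},
                                   {{2, 0, 0}, {3, 1, 1}}};
        refit(nodes, prims);
        std::printf("root x-range: [%g, %g]\n", nodes[0].box.lo[0], nodes[0].box.hi[0]);
    }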
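
The particle global-illumination entry bakes expensive Monte Carlo lighting into per-particle luminance textures so that interactive shading reduces to a lookup. A sketch of that precompute/lookup split; the bake function is a trivial stand-in for the path tracer:

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Offline stage (stand-in for the Monte Carlo path tracer): bake one
    // luminance value per particle and direction bin into a texture.
    std::vector<float> bakeLuminance(int particles, int bins) {
        std::vector<float> tex(particles * bins);
        for (int p = 0; p < particles; ++p)
            for (int b = 0; b < bins; ++b)
                tex[p * bins + b] = 0.5f + 0.5f * std::cos(0.1f * b + p);  // fake lighting
        return tex;
    }

    // Interactive stage: shading is a texture fetch, so frame rate no longer
    // depends on the cost of the global-illumination computation.
    // angle is assumed to lie in [0, 2*pi).
    float shade(const std::vector<float>& tex, int bins, int particle, float angle) {
        int b = (int)(angle * bins / 6.2831853f);
        return tex[particle * bins + b];
    }

    int main() {
        auto tex = bakeLuminance(1000, 32);  // done once, ahead of interaction
        std::printf("particle 7 luminance: %f\n", shade(tex, 32, 7, 1.0f));
    }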

Publication Stats

2k Citations
43.44 Total Impact Points

Institutions

  • 2008-2013
    • NVIDIA
      Santa Clara, California, United States
    • Mission College
      Santa Clara, California, United States
  • 1970-2010
    • University of Utah
      • School of Computing
      • Scientific Computing and Imaging Institute
      Salt Lake City, Utah, United States
  • 2006
    • Indiana University Bloomington
      • Department of Computer Science
      Bloomington, Indiana, United States
  • 2003
    • University of Oregon
      • Department of Computer and Information Sciences
      Eugene, Oregon, United States