Per H. Christensen’s research while affiliated with Pixar Animation Studios and other places

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping. If you are this author and want this page removed, please let us know.

Publications (13)


Figure captions from "Orthogonal Array Sampling for Monte Carlo Rendering":

We create high‐dimensional samples which simultaneously stratify all bivariate projections, here shown for a set of 25 4D samples, along with their six stratified 2D projections and expected power spectra. We can achieve 2D jittered stratifications (a), optionally with stratified 1D (multi‐jittered) projections (b). We can further improve stratification using correlated multi‐jittered (c) offsets for primary dimension pairs (xy and uv) while maintaining multi‐jittered properties for cross dimension pairs (xu, xv, yu, yv). In contrast to random padding, which degrades to white noise or Latin hypercube sampling in cross dimensional projections (cf. Fig. 2), we maintain high‐quality stratification and spectral properties in all 2D projections.

A common way to create samples for higher‐dimensional integration is to pad together high‐quality 2D point sets. (a) Kollig and Keller [KK02] proposed scrambling a (0,2) sequence by XORing each dimension with a different random bit vector. While odd‐even dimension pairings produce high‐quality point sets, even‐even (xu) or odd‐odd (yv) dimensions have severe deficiencies. These issues can be eliminated by randomly shuffling the points across dimension pairs (b and c), but this decorrelates all cross dimensions, providing no 2D stratification.

For a CMJ3D pattern with 3³ = 27 points, the nesting property ensures that all 3 z‐slices produce CMJ2D points when projected onto xy. The same nesting property simultaneously holds for the x and y slices when projected onto yz and xz.

Variance behavior of 11 samplers on 4D analytic integrands of different complexity (columns) and continuity (rows). We only show one OA sampler for each strength since these tend to perform similarly (see supplemental for additional variants). We list the best‐fit slope of each technique, which generally matches the theoretically predicted convergence rates (Table 3). Our samplers always perform better than traditional padding approaches, but are asymptotically inferior to high‐dimensional QMC sequences for general high‐dimensional integrands. When strength t < d (right two columns), convergence degrades to 𝒪(N⁻¹), but higher strengths attain lower constant factors.

Variance behavior and best‐fit slope of various samplers for a pixel in the yellow inset in BlueSpheres and the blue inset of CornellBox in Fig. 6. Our samplers always perform better than traditional padding approaches and even outperform the global Halton and Sobol samplers in CornellBox.


Orthogonal Array Sampling for Monte Carlo Rendering
  • Article

July 2019 · 81 Reads · 10 Citations

Wojciech Jarosz · Afnan Enayet · Andrew Kensler · [...] · Per Christensen

We generalize N‐rooks, jittered, and (correlated) multi‐jittered sampling to higher dimensions by importing and improving upon a class of techniques called orthogonal arrays from the statistics literature. Renderers typically combine or “pad” a collection of lower‐dimensional (e.g. 2D and 1D) stratified patterns to form higher‐dimensional samples for integration. This maintains stratification in the original dimension pairs, but loses it for all other dimension pairs. For truly multi‐dimensional integrands like those in rendering, this increases variance and deteriorates its rate of convergence to that of pure random sampling. Care must therefore be taken to assign the primary dimension pairs to the dimensions with the most integrand variation, but this complicates implementations. We tackle this problem by developing a collection of practical, in‐place multi‐dimensional sample generation routines that stratify points on all t‐dimensional and 1‐dimensional projections simultaneously. For instance, when t=2, any 2D projection of our samples is a (correlated) multi‐jittered point set. This property not only reduces variance, but also simplifies implementations since sample dimensions can now be assigned to integrand dimensions arbitrarily while maintaining the same level of stratification. Our techniques reduce variance compared to traditional 2D padding approaches like PBRT's (0,2) and Stratified samplers, and provide quality nearly equal to state‐of‐the‐art QMC samplers like Sobol and Halton while avoiding their structured artifacts as commonly seen when using a single sample set to cover an entire image. While in this work we focus on constructing finite sampling point sets, we also discuss potential avenues for extending our work to progressive sequences (more suitable for incremental rendering) in the future.
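The multi‐jittered property the abstract builds on can be illustrated in 2D. Below is a minimal sketch of one correlated multi‐jittered construction in the spirit of Kensler's CMJ — the function name and shuffling scheme are illustrative, not the paper's d‐dimensional orthogonal‐array routines. Every point lands in its own cell of the coarse nx × ny grid, and both 1D projections are simultaneously stratified into nx·ny fine strata.

```python
import random

def cmj_2d(nx, ny, seed=7):
    """Correlated multi-jittered point set on [0,1)^2 (after Kensler 2013).

    Produces nx*ny points, one per cell of the coarse nx x ny grid, whose
    x and y projections are each stratified into nx*ny fine strata.  The
    same shuffled sub-stratum offsets are reused across every row and
    column -- the "correlated" part of the construction.
    """
    rng = random.Random(seed)
    xs = list(range(nx))   # sub-stratum offset for column i, used by y
    ys = list(range(ny))   # sub-stratum offset for row j, used by x
    rng.shuffle(xs)
    rng.shuffle(ys)
    pts = []
    for j in range(ny):
        for i in range(nx):
            x = (i + (ys[j] + rng.random()) / ny) / nx
            y = (j + (xs[i] + rng.random()) / nx) / ny
            pts.append((x, y))
    return pts
```

Checking `int(nx*ny * x)` over all points confirms the Latin‐hypercube property of each axis; checking `(int(nx*x), int(ny*y))` confirms the jittered (one‐point‐per‐cell) property.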


RenderMan: An Advanced Path-Tracing Architecture for Movie Rendering

August 2018 · 581 Reads · 94 Citations

ACM Transactions on Graphics

Pixar’s RenderMan renderer is used to render all of Pixar’s films and by many film studios to render visual effects for live-action movies. RenderMan started as a scanline renderer based on the Reyes algorithm, and it was extended over the years with ray tracing and several global illumination algorithms. This article describes the modern version of RenderMan, a new architecture for an extensible and programmable path tracer with many features that are essential to handle the fiercely complex scenes in movie production. Users can write their own materials using a bxdf interface and their own light transport algorithms using an integrator interface—or they can use the materials and light transport algorithms provided with RenderMan. Complex geometry and textures are handled with efficient multi-resolution representations, with resolution chosen using path differentials. We trace rays and shade ray hit points in medium-sized groups, which provides the benefits of SIMD execution without excessive memory overhead or data streaming. The path-tracing architecture handles surface, subsurface, and volume scattering. We show examples of the use of path tracing, bidirectional path tracing, VCM, and UPBP light transport algorithms. We also describe our progressive rendering for interactive use and our adaptation of denoising techniques.
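As a rough illustration of the plugin split the article describes — materials behind a bxdf interface, light transport behind an integrator interface that shades hits in groups — here is a hypothetical Python sketch. All names are illustrative; RenderMan's actual interfaces are C++ and far richer.

```python
from abc import ABC, abstractmethod
import math

class Bxdf(ABC):
    """Material plugin: evaluates scattering at a shading point."""
    @abstractmethod
    def evaluate(self, wi, wo):
        """Return the BRDF value for directions wi (in) and wo (out)."""

class Lambertian(Bxdf):
    def __init__(self, albedo):
        self.albedo = albedo
    def evaluate(self, wi, wo):
        return self.albedo / math.pi  # ideal diffuse, direction-independent

class Integrator(ABC):
    """Light-transport plugin: shades batches of ray hits."""
    @abstractmethod
    def shade(self, hits):
        """Shade a medium-sized group of hits; return radiance values."""

class DirectLighting(Integrator):
    """Trivial single-bounce integrator for illustration."""
    def __init__(self, light_radiance):
        self.light_radiance = light_radiance
    def shade(self, hits):
        # Each hit pairs a material with the cosine toward the light.
        return [bxdf.evaluate(None, None) * self.light_radiance * cos_l
                for bxdf, cos_l in hits]
```

Shading hits in batches, as `shade` suggests, is what enables the SIMD execution the article describes without streaming the whole scene.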


Progressive Multi-Jittered Sample Sequences

July 2018 · 111 Reads · 37 Citations

Computer Graphics Forum

We introduce three new families of stochastic algorithms to generate progressive 2D sample point sequences. This opens a general framework that researchers and practitioners may find useful when developing future sample sequences. Our best sequences have the same low sampling error as the best known sequence (a particular randomization of the Sobol’ (0,2) sequence). The sample points are generated using a simple, diagonally alternating strategy that progressively fills in holes in increasingly fine stratifications. The sequences are progressive (hierarchical): any prefix is well distributed, making them suitable for incremental rendering and adaptive sampling. The first sample family is only jittered in 2D; we call it progressive jittered. It is nearly identical to existing sample sequences. The second family is multi‐jittered: the samples are stratified in both 1D and 2D; we call it progressive multi‐jittered. The third family is stratified in all elementary intervals in base 2, hence we call it progressive multi‐jittered (0,2). We compare sampling error and convergence of our sequences with uniform random, best candidates, randomized quasi‐random sequences (Halton and Sobol'), Ahmed's ART sequences, and Perrier's LDBN sequences. We test the sequences on function integration and in two settings that are typical for computer graphics: pixel sampling and area light sampling. Within this new framework we present variations that generate visually pleasing samples with blue noise spectra, and well‐stratified interleaved multi‐class samples; we also suggest possible future variations.
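The diagonal hole-filling strategy can be sketched for the simplest family, progressive jittered. This is a simplified reading of the published algorithm — the multi-jittered and (0,2) variants add more bookkeeping: each pass quadruples the point count, first placing a sample diagonally opposite each existing one within its grid cell, then filling the two remaining sub-quadrants.

```python
import math
import random

def _jitter_in(i, j, xhalf, yhalf, n, rng):
    # Random point in sub-quadrant (xhalf, yhalf) of cell (i, j) of an n x n grid.
    x = (i + 0.5 * (xhalf + rng.random())) / n
    y = (j + 0.5 * (yhalf + rng.random())) / n
    return (x, y)

def progressive_jittered(num, seed=1):
    """Simplified sketch of the progressive jittered ('PJ') family: every
    power-of-4 prefix of the returned sequence is a jittered point set."""
    rng = random.Random(seed)
    samples = [(rng.random(), rng.random())]
    n_current = 1  # samples before this pass; grid is n x n with n = sqrt(n_current)
    while len(samples) < num:
        n = int(round(math.sqrt(n_current)))
        # Pass 1: a new sample diagonally opposite each existing one.
        for s in range(n_current):
            x, y = samples[s]
            i, j = int(n * x), int(n * y)
            xh, yh = int(2 * (n * x - i)), int(2 * (n * y - j))
            samples.append(_jitter_in(i, j, 1 - xh, 1 - yh, n, rng))
        # Pass 2: fill the two remaining sub-quadrants, in random order.
        for s in range(n_current):
            x, y = samples[s]
            i, j = int(n * x), int(n * y)
            xh, yh = int(2 * (n * x - i)), int(2 * (n * y - j))
            if rng.random() < 0.5:
                xh = 1 - xh
            else:
                yh = 1 - yh
            samples.append(_jitter_in(i, j, xh, yh, n, rng))
            samples.append(_jitter_in(i, j, 1 - xh, 1 - yh, n, rng))
        n_current *= 4
    return samples[:num]
```

After the first pass from 1 to 4 samples there is one point per quadrant; after the next, the 16-sample prefix has one point in each cell of the 4×4 grid, and so on.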


The Path to Path-Traced Movies

January 2016 · 16 Reads · 24 Citations

Path tracing is one of several techniques to render photorealistic images by simulating the physics of light propagation within a scene. The roots of path tracing are outside of computer graphics, in the Monte Carlo simulations developed for neutron transport. A great strength of path tracing is that it is conceptually, mathematically, and oftentimes algorithmically simple and elegant, yet it is very general. Until recently, however, brute-force path tracing techniques were simply too noisy and slow to be practical for movie production rendering. They therefore received little usage outside of academia, except perhaps to generate an occasional reference image to validate the correctness of other (faster but less general) rendering algorithms. The last ten years have seen a dramatic shift in this balance, and path tracing techniques are now widely used. This shift was partially fueled by steadily increasing computational power and memory, but also by significant improvements in sampling, rendering, and denoising techniques. The Path to Path-Traced Movies provides the reader with an overview of path tracing and highlights important milestones in its development that have led to it becoming the preferred movie rendering technique today. It identifies major hurdles that stood in the way of that transition, describing the technical milestones that pushed the field forward over the last couple of decades, and discusses the combination of circumstances that came together to propel the CG and VFX movie industry into a path-traced world. Since the journey is not yet complete, it also discusses on-going challenges and open questions that practitioners and researchers will need to address in the years to come.


The Path to Path-Traced Movies

January 2016 · 106 Reads · 41 Citations

Foundations and Trends® in Computer Graphics and Vision

Path tracing is one of several techniques to render photorealistic images by simulating the physics of light propagation within a scene. The roots of path tracing are outside of computer graphics, in the Monte Carlo simulations developed for neutron transport. A great strength of path tracing is that it is conceptually, mathematically, and oftentimes algorithmically simple and elegant, yet it is very general. Until recently, however, brute-force path tracing techniques were simply too noisy and slow to be practical for movie production rendering. They therefore received little usage outside of academia, except perhaps to generate an occasional reference image to validate the correctness of other (faster but less general) rendering algorithms. The last ten years have seen a dramatic shift in this balance, and path tracing techniques are now widely used. This shift was partially fueled by steadily increasing computational power and memory, but also by significant improvements in sampling, rendering, and denoising techniques. In this survey, we provide an overview of path tracing and highlight important milestones in its development that have led to it becoming the preferred movie rendering technique today.


An approximate reflectance profile for efficient subsurface scattering

July 2015 · 78 Reads · 36 Citations

Computer graphics researchers have developed increasingly sophisticated and accurate physically-based subsurface scattering BSSRDF models: from the simple dipole diffusion model [Jensen et al. 2001] to the quantized diffusion [d'Eon and Irving 2011] and beam diffusion [Habel et al. 2013] models. We present a BSSRDF model based on an empirical reflectance profile that is as simple as the dipole but matches brute-force Monte Carlo references better than even beam diffusion.
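The empirical profile in question is a normalized sum of two exponentials. Below is a sketch of that functional shape; the mapping from surface albedo and mean free path to the scaling parameter s, which the published model fits, is omitted here, so treat the parameterization as illustrative.

```python
import math

def reflectance_profile(r, A, s):
    """Radially symmetric BSSRDF profile of the two-exponential form:
        R(r) = A * s * (exp(-s*r) + exp(-s*r/3)) / (8 * pi * r)
    Integrating R(r) over the plane (i.e. against 2*pi*r dr) yields A,
    so A plays the role of the total diffuse albedo."""
    return A * s * (math.exp(-s * r) + math.exp(-s * r / 3.0)) / (8.0 * math.pi * r)

def total_reflectance(A, s, rmax=200.0, steps=200000):
    """Midpoint-rule check that the profile integrates to A over the plane."""
    dr = rmax / steps
    total = 0.0
    for k in range(steps):
        r = (k + 0.5) * dr
        total += reflectance_profile(r, A, s) * 2.0 * math.pi * r * dr
    return total
```

The 1/r singularity at the origin is integrable, which is what makes the closed-form normalization (and cheap importance sampling of the two exponentials) possible.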


Photon Beam Diffusion: A Hybrid Monte Carlo Method for Subsurface Scattering

July 2013 · 69 Reads · 67 Citations

We present photon beam diffusion, an efficient numerical method for accurately rendering translucent materials. Our approach interprets incident light as a continuous beam of photons inside the material. Numerically integrating diffusion from such extended sources has long been assumed computationally prohibitive, leading to the ubiquitous single-depth dipole approximation and the recent analytic sum-of-Gaussians approach employed by Quantized Diffusion. In this paper, we show that numerical integration of the extended beam is not only feasible, but provides increased speed, flexibility, numerical stability, and ease of implementation, while retaining the benefits of previous approaches. We leverage the improved diffusion model, but propose an efficient and numerically stable Monte Carlo integration scheme that gives equivalent results using only 3–5 samples instead of 20–60 Gaussians as in previous work. Our method can account for finite and multi-layer materials, and additionally supports directional incident effects at surfaces. We also propose a novel diffuse exact single-scattering term which can be integrated in tandem with the multi-scattering approximation. Our numerical approach furthermore allows us to easily correct inaccuracies of the diffusion model and even combine it with more general Monte Carlo rendering algorithms. We provide practical details necessary for efficient implementation, and demonstrate the versatility of our technique by incorporating it on top of several rendering algorithms in both research and production rendering systems.
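One ingredient behind getting away with only 3–5 samples along the beam is placing them at equal angular spacing as seen from the evaluation point, i.e. importance sampling the 1/(distance²) falloff. A sketch of that mapping, in the spirit of equiangular sampling (Kulla and Fajardo) rather than a transcription of the paper's scheme:

```python
import math

def equiangular_sample(u, a, b, delta, dist):
    """Map a uniform u in [0,1] to a distance t in [a, b] along a beam,
    with pdf proportional to 1 / (dist^2 + (t - delta)^2).  Here delta is
    the beam parameter closest to the evaluation point and dist is the
    point's distance from the beam.  Returns (t, pdf)."""
    theta_a = math.atan((a - delta) / dist)
    theta_b = math.atan((b - delta) / dist)
    theta = theta_a + u * (theta_b - theta_a)
    t = delta + dist * math.tan(theta)
    pdf = dist / ((theta_b - theta_a) * (dist * dist + (t - delta) ** 2))
    return t, pdf
```

Feeding in a handful of stratified u values yields the few, well-placed beam samples the method's Monte Carlo integration relies on.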



State of the art in photon density estimation

August 2012 · 55 Reads · 6 Citations

Photon-density estimation techniques are a popular choice for simulating light transport in scenes with complicated geometry and materials. This class of algorithms can be used to accurately simulate inter-reflections, caustics, color bleeding, scattering in participating media, and subsurface scattering. Since its introduction, photon-density estimation has been significantly extended in computer graphics with the introduction of: specialized techniques that intelligently modify the positions or bandwidths to reduce visual error using a small number of photons, approaches that eliminate error completely in the limit, and methods that use higher-order samples and queries to reduce error in participating media. This two-part course explains how to implement all these latest advances in photon-density estimation. It begins with a short introduction using classical photon mapping, but the remainder of the course provides new, hands-on explanations of the latest developments in this area by experts in each technique. Attendees gain concrete and practical understanding of the latest developments in photon-density-estimation techniques that have not been presented before in SIGGRAPH courses.
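The classical estimate that the course's extensions build on reconstructs density from the k nearest photons. A minimal sketch with a flat kernel and none of the bias-reduction or progressive refinements the course covers (function name illustrative):

```python
import math

def photon_density_estimate(photons, x, y, k=50):
    """Classical k-nearest-neighbor photon density estimate at (x, y):
    total flux of the k nearest photons divided by the area pi*r^2 of the
    smallest disc that encloses them (the Jensen-style radiance estimate,
    minus the BRDF factor)."""
    by_dist = sorted(((px - x) ** 2 + (py - y) ** 2, flux) for px, py, flux in photons)
    nearest = by_dist[:k]
    r2 = nearest[-1][0]  # squared distance to the k-th nearest photon
    return sum(flux for _, flux in nearest) / (math.pi * r2)
```

The bandwidth here is the disc radius r, chosen implicitly by k; the "specialized techniques that intelligently modify the positions or bandwidths" mentioned above replace exactly this fixed-k choice.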


Ray Tracing for the Movie 'Cars'

September 2006 · 265 Reads · 70 Citations

This paper describes how we extended Pixar's RenderMan renderer with ray tracing abilities. In order to ray trace highly complex scenes we use multiresolution geometry and texture caches, and use ray differentials to determine the appropriate resolution. With this method we are able to efficiently ray trace scenes with much more geometry and texture data than there is main memory. Movie-quality rendering of scenes of such complexity had only previously been possible with pure scanline rendering algorithms. Adding ray tracing to the renderer enables many additional effects such as accurate reflections, detailed shadows, and ambient occlusion. The ray tracing functionality has been used in many recent movies, including Pixar's latest movie 'Cars'. This paper also describes some of the practical ray tracing issues from the production of 'Cars'.
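The resolution-selection idea — a ray differential's footprint at the hit point picks the coarsest cache level whose texels are still no larger than the footprint — can be sketched as a log2 ratio clamped to the available levels. This is an illustration of the principle, not the renderer's actual cache logic:

```python
import math

def mip_level(footprint, base_texel_size, num_levels):
    """Choose a multiresolution cache level from a ray-differential
    footprint: the coarsest level whose texel size (base_texel_size *
    2**level) still does not exceed the footprint, clamped to the
    available levels (0 = finest)."""
    if footprint <= base_texel_size:
        return 0
    level = int(math.log2(footprint / base_texel_size))
    return min(level, num_levels - 1)
```

Because incoherent secondary rays tend to have wide footprints, they read mostly coarse levels, which is what keeps the caches small relative to the full scene data.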


Citations (12)


... Different operation scenarios are generated based on the orthogonal array sampling (OAS) method [25]. It aims to make the sample distribution more uniform and improve topology identifiers' generalization ability, then collects the measurements from AMI, SCADA, etc., and computes the power flow of scenarios. ...

Reference:

Data-Model Hybrid Driven Topology Identification Framework for Distribution Networks
Orthogonal Array Sampling for Monte Carlo Rendering

... Finding surface parameterizations of a 3D object is a challenging topological problem, and there is a large body of work tackling specific issues associated with it (e.g., identifying cuts or finding low-distortion mappings [34,35,56,65,66,74]). Due to the intrinsic difficulties of surface parameterization, non-parametric texturing has been proposed [82]. These approaches, such as Renderman's Ptex [6,11] and mesh colors [80,81] use per-primitive textures, circumventing all topological issues entirely. Though non-parametric textures do not offer editing in UV space, projective painting [17,21,61,77] and procedural texture synthesis [20,52,53] are sufficient for production-level quality. ...

RenderMan: An Advanced Path-Tracing Architecture for Movie Rendering
  • Citing Article
  • August 2018

ACM Transactions on Graphics

... On the other hand, the disadvantages of these sequences include (i) the absence of known CS recovery guarantees and (ii) a logistically challenging implementation of these schemes in the field due to off-grid locations of sources (or receivers). Furthermore, the numerical studies on CS reconstruction with the Hammersley points demonstrate discouraging results [38] and, finally, low-discrepancy sequences may require additional transformation to reduce aliasing [39]. ...

Progressive Multi-Jittered Sample Sequences
  • Citing Article
  • July 2018

Computer Graphics Forum

... RAY OPTICS OR WAVE OPTICS? the same can not be said about evaluating the wave equations. Therefore, given its appropriateness, the ray model is far preferable, allowing us to form strikingly realistic images for various graphics applications [9,10]. We completely agree with this argument. ...

The Path to Path-Traced Movies
  • Citing Article
  • January 2016

Foundations and Trends® in Computer Graphics and Vision

... Phong shading [Phong 1998] interpolates surface normals for smooth highlights, followed by Physically Based Rendering (PBR) [Pharr et al. 2016] which models realistic light interactions, and later, Bidirectional Surface Scattering Reflectance Distribution Function (BSSRDF) [Jensen et al. 2001] extends this by considering light that penetrates and scatters within surfaces. Some following works [Borshukov and Lewis 2005;Habel et al. 2013;Hanrahan and Krueger 2023] focus on facial rendering, which significantly influence the color and realism of facial assets in digital imagery. d'Eon and Luebke [2007] enhances the realism of specular reflections on the skin, adapting them to different lighting conditions and angles. ...

Photon Beam Diffusion: A Hybrid Monte Carlo Method for Subsurface Scattering
  • Citing Article
  • July 2013

... Ray tracing is a graphics rendering technology that simulates the propagation, reflection and refraction of light. In a scene with complex ray reflection, the image performance of ray tracing is very close to the real scene [1][2][3]. Although the image display effect of ray tracing is very good, its disadvantage is that it requires too much computation [4]. ...

Ray Tracing for the Movie 'Cars'
  • Citing Conference Paper
  • September 2006

... The last few years have seen a decisive move of the movie-making industry towards rendering utilizing physically-based approaches, mostly implemented in terms of the path tracing algorithm [1][2][3] . Besides, because of its generality, fast start-up, and progressive nature, path tracing has been an important method in many applications in scientific visualization, such as video, games et al. [4][5] . Unfortunately, such methods take a prohibitively long time to obtain images of better quality because of the large number of samples required per pixel [6] . ...

Multiresolution Radiosity Caching for Global Illumination in Movies
  • Citing Article
  • August 2012