
Light Transport Editing with Ray Portals

CGI2015 manuscript No. (will be inserted by the editor)
Thomas Subileau · Nicolas Mellado · David Vanderhaeghe · Mathias Paulin
Abstract Physically based rendering, using the path-space formulation of global illumination, has become a standard technique for high-quality computer-generated imagery. Nonetheless, controlling and editing the resulting picture so that it corresponds to the artist's vision is still a tedious trial-and-error process. We show how the manipulation of light transport translates into the path-space integral formulation of the rendering equation. We introduce portals as a path-space manipulation tool to edit and control renderings, and show how our editing tool unifies and extends previous work on lighting editing. Portals allow the artist to precisely control the final aspect of the image without modifying either the scene geometry or the lighting setup. Given two geometric handles and a simple path selection filter, portals capture specific lightpaths and teleport them through 3D space. We implement portals in major path-based algorithms (Photon Mapping, Progressive Photon Mapping and Bi-directional Path Tracing) and demonstrate the wide range of control this technique allows on various lighting effects, from low-frequency color bleeding to high-frequency caustics as well as view-dependent reflections.

Keywords rendering · global illumination · editing · manipulation · physically-based
1 Introduction
High-quality digital content, such as movies, relies on the ability and creativity of the artist to fully exploit the capabilities offered by content creation and rendering software. In the current content creation workflow, the setup of scene parameters (e.g. geometry, materials, camera) is separated from that of light parameters. Once the scene parameters are entirely fixed, lighting designers add and tune light sources one by one [2].
T. Subileau, N. Mellado, D. Vanderhaeghe, M. Paulin
Université de Toulouse; UPS; IRIT
Fig. 1 Portals allow the manipulation of lighting effects in a scene. Here, the caustic created by the sphere is captured (blue portal) and moved (pink portal) under the cube, revealing the shiny Suzanne monkey. The edit is smoothly integrated into the rendering of the scene.
Even when lighting parameters are perfectly set, a physically-based rendering does not necessarily match the artistic goal. In other words, artistic freedom is constrained by the physical simulation of light transport. As a consequence, methods have been developed to modify either the renderer input (scene configuration) or its output (layered images) to obtain a desired result.

Although widely used, post-processing techniques are limited to image-based processing and cannot provide the full set of changes artists would need. On the other hand, the lighting design is also tweaked and tricked, for instance by adding lights that do not cast shadows [4], lights that only give specular lighting, or lights that only affect a given object (i.e. light linking). The trial-and-error process involved is tedious, requires a lot of experience, and entails several time-consuming recomputations of physically-based renderings.
Following the pioneering methods exploring the editing and control of direct illumination, recent results propose to modify the global illumination of a scene in order to add artistic control to the process. However, despite their efficiency, these approaches are limited to a restricted set of effects.
To tackle these limitations, we propose to edit lighting effects directly in path-space, i.e. by changing how light propagates in the scene. These changes affect only the rendering stage and require editing neither the geometry nor the lighting setup.

Our key idea is to let artists directly manipulate the light propagation (i.e. geometrical optics) of user-selected lightpaths through the use of portals that capture and teleport lightpaths through 3D space (Figures 1 and 2). We define a portal as a manipulator composed of an input surface, a selection filter and an output surface. Every path that hits the input surface and matches the user-defined path filter is teleported to the output surface.

While modifications might not be physically correct, we still rely on physically-based renderers, so edits remain visually coherent in the resulting rendering. Moreover, unless intentionally modified by the user, light energy is conserved and sampling functions are left unchanged. This preserves the consistency and convergence properties of the original rendering technique.
This paper presents two main contributions: a reformulation of the path-space integral enabling light propagation editing (Section 3.2), and a mechanism to alter light transport which we call portals (Section 3.3). Our formulation is versatile and allows a wide range of manipulations. For instance, we show that existing related techniques can be defined as specific portal configurations. By definition, portals are fully decoupled from the scene description. They can be easily tuned and animated with keyframing. The technique correctly handles shadow rays (Section 4) and can be integrated in any path-based renderer.

Finally, we show how our approach can be used for advanced modifications of complex lighting effects resulting from global illumination (Section 5).
Fig. 2 Portals alter the light propagation. When a ray intersects $P_i$ (in blue) and matches the selection filter, it is teleported to $P_o$ (in pink).
2 Previous work
In the context of digital content creation, artists have to manually set up the material and light parameters of a 3D scene to obtain the expected rendering effects. This work can be tedious and requires correctly estimating how light interacts and propagates in the scene.

Previous work on intuitive editing allows the user to paint an expected lighting effect in the scene, and optimizes its configuration to produce an as-correct-as-possible rendering output [17,15,16,19]. These approaches provide efficient design interfaces but remain limited by the impact of scene parameters on the final rendering. This impact can be hard to control, especially in a global illumination context. Our work is orthogonal to these methods: we choose not to modify the light and material descriptions but only to edit light propagation during the rendering process. Several editing methods [18,14,12] focus on shadow manipulation. These methods provide different means to control shadow shape, position and smoothness. We focus on the editing of light transport; shadow shape cannot be directly manipulated in our approach.
Seminal lighting editing algorithms, limited to either direct lighting or specular reflection, have inspired our work. Extending the early work of Barzel [1], BendyLights [9] offers direct lighting manipulation by bending the light propagation of spotlights, and hence facilitates the control of direct illumination effects. Ritschel et al. [21] allow the user to change the reflected direction of purely specular surfaces.

Ritschel et al. [22] define on-surface signal deformation to manipulate the appearance of any signal over the surface of 3D objects. The manipulation itself stays on the surface and is prone to sliding artifacts when applied to animated surfaces. As our approach is entirely defined in 3D space, we do not suffer from this kind of drawback. Using a robust visualization tool, Reiner et al. [20] show that light propagation can be well comprehended and that particle flows creating specific lighting effects can be spatially and semantically clustered. Following this approach, Schmidt et al. [23] propose a path retargeting technique: the user manipulation of a shading effect (referred to as a lighting effect in our paper) affects the outgoing tangent frame of the previous interaction surface. While allowing various edits, the freedom of manipulation is limited. For instance, moving a lighting effect below or through an object is not achievable, as the object will intersect the edited paths. To overcome this limitation, the authors add a specific proxy-object approach to specify that an object does not interact with the edited paths.
We define a general theoretical framework for path-space editing, and formulate previous approaches [22,23] within this framework. We propose a manipulator which provides a flexible mechanism to edit path-space. This mechanism allows artists to position the manipulator anywhere between the source surface (as in Schmidt et al. [23]) and the destination surface (as in Ritschel et al. [22]) of an effect, and handles all sorts of situations in between. This editing freedom addresses the limitations of previous work stated above.
3 Editing light propagation in path-space
3.1 Path-space formulation
As introduced by Veach [24], the light transport problem can be expressed as an integral over path-space. This path-integral defines the color of a pixel on the computed image as the integral of the flux transported by each lightpath from the sources to this pixel. The measurement $I_j$ for each pixel $j$ of an image is written as
$$I_j = \int_{\Omega} f_j(\bar{x})\, d\mu(\bar{x}), \qquad (1)$$
with $\Omega$ the set of all transport paths, also called path-space, and $d\mu$ the area-product measure of a path $\bar{x}$. Paths are of the form $\bar{x} = x_0 x_1 \dots x_k$, with $1 \le k < \infty$ and $x_i \in \mathcal{S}$, $\mathcal{S}$ being the union of all surfaces. The $i$-th segment $\langle x_i, x_{i+1} \rangle$ of a path is a straight line between the two consecutive positions $x_i$ and $x_{i+1}$. The contribution of a lightpath is measured as
$$f_j(\bar{x}) = L(x_0 \to x_1)\, G(x_0 \to x_1) \left( \prod_{i=1}^{k-1} f_s(x_{i-1} \to x_i \to x_{i+1})\, G(x_i \to x_{i+1}) \right) W(x_{k-1} \to x_k),$$
with $L$ the emitted light, $G$ the geometric or propagation term, $f_s$ the scattering function (e.g. the BSDF) and $W$ the importance of the path for the pixel $j$.
If we ignore participating media, the propagation of light between two surfaces along a path segment is influenced only by the relative incoming and outgoing light directions, the length of the segment, and the surface visibility $V$ at the interaction points:
$$G(x_i \to x_j) = V(x_i \to x_j)\, \frac{(n_i \cdot |x_j - x_i|)\,(n_j \cdot |x_i - x_j|)}{\|x_i - x_j\|^2},$$
with $\cdot$ the dot product, $|x| = \frac{x}{\|x\|}$, $n_i$ the normal vector of the surface at $x_i$, and $V(x_i \to x_j)$ the visibility function.
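As a concrete reading of this definition, the geometric term can be evaluated numerically as in the following sketch. This is our own minimal illustration, not the paper's implementation: the helper functions are ours, and the visibility $V$ is stood in by a boolean flag.

```python
import math

def normalize(v):
    """Return |v| = v / ||v||, the unit vector along v."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def geometric_term(x_i, n_i, x_j, n_j, visible=True):
    """Propagation term G(x_i -> x_j) between two surface points.

    n_i and n_j are unit normals; the visibility V(x_i -> x_j) is
    folded into the boolean `visible` for this sketch.
    """
    if not visible:
        return 0.0
    d = sub(x_j, x_i)                        # segment from x_i to x_j
    dist2 = dot(d, d)                        # ||x_j - x_i||^2
    w = normalize(d)                         # |x_j - x_i|
    cos_i = dot(n_i, w)                      # n_i . |x_j - x_i|
    cos_j = dot(n_j, tuple(-c for c in w))   # n_j . |x_i - x_j|
    return cos_i * cos_j / dist2

# Two parallel patches facing each other at unit distance: G = 1.
g = geometric_term((0, 0, 0), (0, 0, 1), (0, 0, 1), (0, 0, -1))
```

Note the inverse-square falloff: doubling the distance between the same two facing patches divides the term by four.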
3.2 Path-based propagation editing
As seen in Section 2, lighting can be edited by changing the light properties $L$, the scattering functions $f_s$ or the propagation $G$. Artists usually tune light and material properties to obtain a desired lighting effect, while propagation is left untouched. In this work, we propose a new formalism enabling the definition of a large panel of transformations that modify how light propagates in the scene, independently of the lighting and material configuration. The general idea is to capture light flux somewhere in the scene and release it somewhere else; this idea translates in the path-integral as a modification of the propagation function.

Changing how light is transmitted without changing materials and lights is equivalent to moving a light contribution from a point $x_j$ to another point $x_e \in \mathcal{S}$. This can be done by replacing the propagation function $G(x_i \to x_j)$ with a new editable propagation function $\widetilde{G}(x_i \to x_j, x_e)$, where the contribution reaching $x_j$ is moved to $x_e$. As shown in Figure 3, modifications of the light propagation can be applied from any point $p_e = x_i + t\,|x_j - x_i|$ on the segment $\langle x_i, x_j \rangle$.

Let us now assume that we know the edited receiver position $x_e$ and the editing position $p_e$. To edit the propagation of the initial lightpath, we remove its contribution at the position $x_j$ and add it at the new receiver position $x_e$. As a consequence, the propagation between $x_i$ and $x_j$ is cancelled, hence $G(x_i \to x_j) = 0$. The light is now transmitted to $x_e$ and combined with other incoming paths. We now present how this concept translates to the lightpath contribution measure $f_j(\bar{x})$, and then focus on the definition of $\widetilde{G}$.
Edited lightpath propagation – We want to modify only how the light is received, so the scattering function $f_s$ is evaluated at $x_i$ with the initial incoming and outgoing directions, even for edited segments. Thus we replace, in the contribution measure of Section 3.1, the term $f_s(x_{i-1} \to x_i \to x_{i+1})\, G(x_i \to x_{i+1})$ with
$$f_s(x_{i-1} \to x_i \to x_{i+1})\, G(x_i \to x_{i+1}) + \sum_{n=1}^{m_i} f_s(x_{i-1} \to x_i \to x^n_{i,i+1})\, \widetilde{G}(x_i \to x^n_{i,i+1}, x_{i+1}),$$
where $m_i$ is the number of segments starting from $x_i$ and edited to $x_{i+1}$. $\widetilde{G}$ is the editing propagation function describing how the light propagation from $x_i$ to the $n$-th original position $x^n_{i,i+1}$ is modified to reach $x_{i+1}$ (see Figure 3). We apply an analogous modification to the first segment $x_0 x_1$
Fig. 3 (a) Non-edited scene, where the light propagation along segments $\langle x_i, x_j \rangle$ and $\langle x_i, x_e \rangle$ is computed using $G$. (b) Edited scene: propagation along $\langle x_i, x_j \rangle$ is cancelled at $p_e$, and transformed to $p'_e$ in order to finally fall on $x_e$. This whole process is defined by the edited propagation term $\widetilde{G}$. Red lines represent propagation segments, potentially shared between multiple lightpaths.
Fig. 4 Several path-space modification configurations. An input scene is modified by changing how light is transmitted to a target surface. Red segments are actual lightpaths. Blue dashed lines correspond to lightpath transformations. By increasing complexity: (a) Light rays are re-oriented. (b) The light field is sliced and teleported. (c) Each ray is teleported and locally re-oriented according to the geometry.
and note $\widetilde{f}_j(\bar{x})$ the resulting contribution measure (see the full definition in the Appendix).
Editable transmission function – One could define $\widetilde{G}(x_i \to x_j, x_e)$ as $G(x_i \to x_e)$ but, as shown in Figure 4a, this covers only a subset of the possible modifications. As already stated, we want to modify only how the light is received, so we need to keep $|x_j - x_i|$ as the outgoing direction to evaluate the propagation from $x_i$. We note $n_e$ the normal vector at $x_e$, and define the editable propagation function as
$$\widetilde{G}(x_i \to x_j, x_e) = V(x_i \to p_e)\, V(p'_e \to x_e)\, \frac{(n_i \cdot |x_j - x_i|)\,(n_e \cdot |d_e|)}{(\|x_i - p_e\| + \|p'_e - x_e\|)^2} \qquad (2)$$
where $d_e$ is the edited incoming light direction at $x_e$. We define $p'_e$ and $d_e$ as
$$p'_e = M_{x_j, x_e}(p_e) \qquad (3)$$
$$d_e = M_{x_j, x_e}(p_e) - x_e,$$
where $M_{x_j, x_e}$ transforms the light segment initially arriving at $x_j$ so that it arrives at $x_e$. $M_{x_j, x_e}$ is what we call a portal, and it can be configured to obtain a wide range of modifications of the light propagation, described in the next section. By construction, the non-edited scenario is retrieved by setting $M_{x_j, x_e}$ to the identity function and $x_e$ to $x_j$.
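To make Equation 2 concrete, the editable propagation term can be sketched as below, with the portal $M_{x_j,x_e}$ passed in as an arbitrary point-mapping function. All names are our illustrative assumptions; the visibility terms are again stood in by booleans, and a real implementation lives inside a renderer.

```python
import math

def _norm(v):
    return math.sqrt(sum(c * c for c in v))

def _sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _unit(v):
    n = _norm(v)
    return tuple(c / n for c in v)

def edited_propagation(x_i, n_i, x_j, x_e, n_e, p_e, portal_M,
                       v_in=True, v_out=True):
    """Editable propagation term G~(x_i -> x_j, x_e) of Eq. 2.

    `portal_M` plays the role of M_{x_j,x_e}: it maps the editing
    point p_e on the input surface to p'_e on the output surface.
    `v_in` and `v_out` stand in for V(x_i -> p_e) and V(p'_e -> x_e).
    """
    if not (v_in and v_out):
        return 0.0
    p_e_out = portal_M(p_e)                      # p'_e = M(p_e)   (Eq. 3)
    d_e = _sub(p_e_out, x_e)                     # d_e = M(p_e) - x_e
    cos_i = _dot(n_i, _unit(_sub(x_j, x_i)))     # n_i . |x_j - x_i|
    cos_e = _dot(n_e, _unit(d_e))                # n_e . |d_e|
    path_len = _norm(_sub(p_e, x_i)) + _norm(_sub(x_e, p_e_out))
    return cos_i * cos_e / (path_len ** 2)
```

With the identity portal and $x_e = x_j$, the two visibility factors collapse into one and the term reduces to the unedited $G(x_i \to x_j)$, matching the construction stated at the end of the paragraph.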
3.3 Portal-based propagation editing
According to Reiner et al. [20], humans are good at identifying and grouping lightpaths corresponding to lighting effects (e.g. caustics) in the 3D scene. Assuming a path selection mechanism, portals can be applied to groups of lightpaths in two ways.

In a general setting, one can define $M_{x_j, x_e}$ per path in order to locally adapt the transformation to the scene, e.g. to the geometry surrounding $x_j$ and $x_e$. Here the artistic freedom is maximal; however, care must be taken to define a transformation that is practicable for path-based rendering algorithms. For instance, Ritschel et al. [22] evaluate lighting at $x_e$ as if it were physically located at $x_j$ (see Figure 4c). Hence $p_e = x_j$, and $d_e$ makes the same angle with $n_e$ as $|x_i - x_j|$ does with $n_j$. As a result, $M_{x_j, x_e}$ defines a mapping of the shading configuration from $x_j$ to $x_e$. This function is not trivial to define, and is computed by optimization in Ritschel's approach.

In a simplified setting, the propagation of the light can be edited uniformly within a group of paths; in that case we note $M$ the functional transformation applied to the group. According to Reiner et al. [20], grouped paths usually share common properties (e.g. geometry), and can thus be edited uniformly to transform the resulting lighting effect. For instance, Schmidt et al. [23] propose to rotate the output direction from $x_i$ to reach $x_e$ (see Figure 4a). In other words, $M$ is defined as the identity function and $p_e = x_i$, thus
$$p'_e = M(p_e) = x_i \qquad (4)$$
$$d_e = M(p_e) - x_e = x_i - x_e.$$

Using our formalism, one can see that both aforementioned techniques represent very specific edits and do not span a wide range of modifications of the light propagation. We propose to edit the light propagation with a more expressive portal definition, according to the following constraints. First, portals must be compatible with any path-based rendering algorithm, so we need to ensure that $M_{x_j, x_e}$ is invertible. This allows bi-directional path traversal, a step usually required to compute shadow rays. Second, we want to include the path selection mechanism in our definition, to provide a unified path-based selection and editing metaphor. Third, we want to combine both geometric and photometric effects at once.
Geometrically, we define portals as a pair of surfaces parametrized over the same domain, the former $P_i$ grabbing incoming light segments and the latter $P_o$ releasing them somewhere else in the scene. Selecting a group of paths is achieved by intersecting them with $P_i$. The resulting intersection points define the editing positions $p_e$ used in Equation 2. We note $G_{x_j, x_e}$ the invertible geometric transformation moving a segment from $x_j$ to $x_e$ through $P_i$ and $P_o$. We also define $R_{x_j, x_e}$, an arbitrary photometric transformation of the incoming light segment, leading to
$$M_{x_j, x_e}(p_e) = G_{x_j, x_e}(p_e)\, R_{x_j, x_e}(p_e).$$
In practice, the complexity of $G_{x_j, x_e}$ can be adapted according to the geometric properties of the surfaces. For instance, uniformly parameterized planar surfaces define $G_{x_j, x_e}$ as a linear transformation matrix encoding the rotation, translation and scale between the surfaces, noted $M$ in the following. We can attach to portals additional embedded functions to easily supplement the definition of $M_{x_j, x_e}$. These transformations may be used to modify both the geometric term $G_{x_j, x_e}$ and the photometric term $R_{x_j, x_e}$ (Section 5).

Fig. 5 When evaluating the propagation between two points, we need to take portals into account. Here, $G(x_1 \to x_2)$ is nil but, going through the portals, a shadow ray lands on $x_2$ and transmits light from $x_1$ to $x_2$.
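For planar portals, one possible sketch of such a frame-to-frame map $M$, and of its inverse, is the following. The function name, the frame convention (origin plus two in-plane vectors and a normal) and the Cramer's-rule solve are our illustrative assumptions; the paper's actual implementation lives inside Mitsuba.

```python
def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def frame_map(O_i, u_i, v_i, n_i, O_o, u_o, v_o, n_o):
    """Affine map M sending the input-portal frame onto the output frame.

    A point p is expressed in coordinates (a, b, c) of the input frame
    (O_i; u_i, v_i, n_i), then rebuilt from the output frame
    (O_o; u_o, v_o, n_o). This encodes the rotation, translation and
    scale between the two planar surfaces. M^{-1}, needed for shadow
    rays, is obtained by simply swapping the two frames.
    """
    det = _dot(_cross(u_i, v_i), n_i)
    def M(p):
        d = _sub(p, O_i)
        # Cramer's rule for d = a*u_i + b*v_i + c*n_i.
        a = _dot(_cross(d, v_i), n_i) / det
        b = _dot(_cross(u_i, d), n_i) / det
        c = _dot(_cross(u_i, v_i), d) / det
        return tuple(O_o[k] + a*u_o[k] + b*v_o[k] + c*n_o[k]
                     for k in range(3))
    return M

# Input portal at the origin, output portal translated to x = 5 and
# rotated 90 degrees about z:
M = frame_map((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1),
              (5, 0, 0), (0, 1, 0), (-1, 0, 0), (0, 0, 1))
# M((1, 0, 0)) -> (5.0, 1.0, 0.0)
```

Because the map is built from an invertible linear part plus a translation, it satisfies the invertibility constraint stated above by construction.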
Finally, spatial path selection can be extended with a semantic path selection using path regular expressions. This notion was introduced by Heckbert [6] and extended by Veach [24]. Each path is characterized by a string of symbols, where each symbol represents the interaction occurring at a vertex of the path. Regular expressions based on these symbols can then be attached to any portal and used to filter each incoming path independently. We use the syntax proposed by Schmidt et al. [23], which adds light and object identifiers to each interaction, plus a specific symbol to represent portal traversal. Since paths sharing similar scattering events and similar trajectories in 3D space produce a coherent variation of shading in the 3D scene, we formally define a lighting effect as the set of lightpaths going through a given region in 3D space, i.e. intersecting $P_i$, and matching a given regular expression.
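Such a filter can be sketched with ordinary regular expressions over the Heckbert symbol string of a path. This is a simplified illustration of our own: it keeps only the basic symbols (L light, E eye, S specular, D diffuse) and omits the light/object identifiers and the portal-traversal symbol of the full syntax.

```python
import re

def path_syntax(interactions):
    """Build the Heckbert symbol string of a path from its vertex
    interactions, e.g. ["L", "S", "S"] -> "LSS"."""
    return "".join(interactions)

def portal_filter(expression):
    """Compile a portal's selection filter as an anchored regex over
    the whole symbol string seen so far."""
    pattern = re.compile(expression + r"$")
    return lambda path: pattern.match(path_syntax(path)) is not None

# A caustic filter: emission followed by one or more specular bounces.
is_caustic = portal_filter(r"LS+")

is_caustic(["L", "S", "S"])   # specular chain from the light: matches
is_caustic(["L", "D", "S"])   # a diffuse bounce breaks the match
```

Anchoring the expression at both ends makes it a prefix filter in the sense of Section 5.4: it classifies the path interactions seen up to the portal, not the full path through the scene.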
4 Implementation
We have implemented portals in the Mitsuba renderer [7] for three rendering algorithms: Photon mapping [8], Progressive photon mapping [5] and Bi-directional path tracing [10]. We have interfaced portals with Blender [3] and the Mitsuba export add-on to allow an interactive setup and manipulation of portals. For each portal, we add a manipulator in the 3D scene defined as a pair of planar surfaces $P_i$, $P_o$ parameterized over the same domain, typically $[0,1]^2$. We note $P_i(u,v)$ the point defined by the parametric coordinates $(u,v)$ on $P_i$.

Theoretically, the path-integral formulation of light transport considers the path-space $\Omega$ to be entirely known. In practice, it is partially evaluated by sampling. The implemented algorithms construct samples of $\Omega$ with a two-step routine:
– During the first step, paths are built by casting rays from a source (light or camera) toward the scene and bouncing iteratively on the geometry. The correct evaluation of $G$ during this step is direct. When a ray intersects $P_i$ at position $p_e = P_i(u,v)$, it is recast from the edited position $p'_e$ in the same direction relative to the local tangent frame.

Fig. 6 We are searching for the ray leaving $x_1$, intersecting $P_i$ and transformed by the portal such that, when leaving $P_o$, it lands on $x_2$. It is defined by the intersection between the segment $\langle x_1, M^{-1} x_2 \rangle$ and $P_i$.
– During the second step, a shadow ray is cast between each couple of vertices $x_1$ and $x_2$ to evaluate whether the segment $\langle x_1, x_2 \rangle$ transports energy. In Photon mapping, this corresponds to final gathering, whereas in Bi-directional path tracing, this corresponds to the connection step between the camera subpath and the light subpath.
When using portals, we need to correctly evaluate the propagation between $x_1$ and $x_2$. As shown in Figure 3, we need to evaluate $G(x_1 \to x_2)$ and $\widetilde{G}(x_1 \to x_j, x_2)$ for every edit. Evaluating $G$ is done as usual by casting a ray from $x_1$ toward $x_2$. Evaluating $\widetilde{G}$ corresponds to finding the rays that leave $x_1$ toward unknown points $x_j$ and intersect $P_i$ such that, when leaving $P_o$, they land on $x_2$. Figure 5 represents the geometric setup of this case. Finding all the shadow rays that connect the two vertices ensures the correct evaluation of the rendering equation.
To solve this problem, we define $g(u,v) \in \mathbb{R}^+$, a function that returns the distance between a position along the ray $r(u,v)$ and the vertex $x_2$, with $r(u,v) = P_o(u,v) + t\, M(P_i(u,v) - x_1)$ and $t \in \mathbb{R}^+$. The set of rays possibly connecting $x_1$ and $x_2$ is defined by $(u,v) \in \ker(g)$, i.e. $(u,v)$ such that $g(u,v) = 0$.
We have implemented the solution for our planar portal objects. Considering the points $p_e$ and $p'_e$ defined as
$$p_e = P_i(u,v) = O_i + u\, u_i + v\, v_i$$
$$p'_e = P_o(u,v) = O_o + u\, u_o + v\, v_o,$$
with $O_i$ the origin of portal $P_i$ and $u_i$, $v_i$ its parameterization vectors, respectively $O_o$, $u_o$ and $v_o$ for portal $P_o$, we are searching for $u$, $v$ and $t$ such that
$$x_2 = p'_e + t\, M(p_e - x_1).$$
Multiplying both sides by $M^{-1}$:
$$M^{-1} x_2 = p_e + t\,(p_e - x_1).$$
Fig. 7 The same portal is used in a scene with three different objects
projecting a caustic. The modification applies consistently and shows
that portals are highly independent from the scene.
Developing and factorizing by $p_e$:
$$\frac{1}{1+t}\, M^{-1} x_2 + \frac{t}{1+t}\, x_1 = p_e.$$
This corresponds to the intersection between the segment $\langle x_1, M^{-1} x_2 \rangle$ and the surface of $P_i$, as shown in Figure 6. We solve this equation using the algorithm presented by Lagae and Dutré [11]. After finding the $(u,v)$ coordinates, we need to evaluate $V(x_1 \to p_e)$ and $V(p'_e \to x_2)$ to verify that no geometry blocks the visibility.
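This intersection can be sketched as a small Cramer's-rule solve for $(u, v, s)$, where $s$ parameterizes the segment $\langle x_1, M^{-1}x_2 \rangle$. This is our own stand-alone illustration (the paper uses Lagae and Dutré's ray/parallelogram test), and the visibility checks $V$ described above are left out.

```python
def _sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def portal_shadow_ray(x1, x2_back, O_i, u_i, v_i, eps=1e-9):
    """Find the (u, v) on the input portal P_i connecting x1 to x2.

    `x2_back` is M^{-1} x_2, i.e. x_2 pulled back through the portal.
    Solves O_i + u*u_i + v*v_i = x1 + s*(x2_back - x1) by Cramer's
    rule; returns (u, v) when the hit lies on the segment and inside
    the [0,1]^2 parameter domain, else None.
    """
    d = _sub(x2_back, x1)            # segment direction x1 -> M^{-1} x2
    rhs = _sub(x1, O_i)
    n = _cross(u_i, v_i)             # plane normal of P_i
    det = -_dot(n, d)                # det of the 3x3 system [u_i v_i -d]
    if abs(det) < eps:               # segment parallel to the portal plane
        return None
    s = _dot(n, rhs) / det
    neg_d = tuple(-c for c in d)
    u = _dot(_cross(rhs, v_i), neg_d) / det
    v = _dot(_cross(u_i, rhs), neg_d) / det
    if 0.0 <= s <= 1.0 and 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0:
        return (u, v)
    return None
```

For example, with $P_i$ the unit square of the plane $z = 1$ and a vertical segment crossing it, the solve recovers the parametric hit point; if the pulled-back endpoint stops below the plane, no connecting shadow ray exists and the function returns None.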
5 Results
In this section, we present several results showing how portals can reproduce previous work (Section 5.1), as well as various other geometric (Section 5.2) and photometric (Section 5.3) transformations. In every example, portals are freely positioned in 3D space and are thus highly independent from the scene. This is shown in Figure 7, where the same portal is applied to different geometries.
5.1 Implementation of previous work
Ritschel et al. [21] allow the user to define what part of the scene is seen through mirror reflections. This translates into editing the propagation term $G(x_{k-1} \to x_k)$ in a path of length $k$, and only if the interaction at $x_{k-1}$ is a specular reflection. Such edits are handled with portals filtering ES paths. Examples are shown in the supplemental video.

As shown in Section 3, Schmidt et al. [23] translates smoothly into our formulation. We show an example of light retargeting using portals in Figure 8b. Here, $M$ is defined as the identity matrix and the edited rays are rotated using a constant function embedded in the portal. Light retargeting does not conserve the geometric terms in the computation of $\widetilde{G}$, i.e. neither the distance nor the relative directions to the surfaces are preserved (see Eq. 2). Therefore, after retargeting, the illuminance of the lighting effect is not conserved and the resulting caustic is visually different from the original. In Figure 8c, we move the output portal in order to conserve all terms for one point $x_i$. For other points, the terms will vary depending on the curvature of the surfaces. We can see that after teleportation, the caustic illuminance remains visually similar to the original.
Fig. 8 Application of Figure 4. (a) Original rendering. (b) Light retargeting with portals. (c) Light field teleportation with portals.
Fig. 9 A portal captures the caustic and replicates it with multiple output surfaces, disposed around the glass egg. (a) Original scene. (b) Duplication. (c) Duplication and spectrum hue shift.
5.2 Geometric transformation
A typical editing of the propagation using portals is illustrated in Figure 1. Caustic paths are teleported onto the Suzanne model. We can see that, after transformation, paths continue to interact with the geometry, creating caustic sparkles on the walls due to the faceted model. As we evaluate the propagation toward the original position, the blue cube does not block the propagation. The setup of this portal is shown in the supplemental video.

Portals also allow replicating a lighting effect. To do this, we associate in a portal one input surface with multiple output surfaces, each resending the captured rays. Figure 9b shows how a lighting effect is copied and pasted in the scene. Duplicated output surfaces act as new lights and thus do not conserve overall energy.

Portals are finite surfaces and may create sharp discontinuities when overlapping lighting effect boundaries. For instance, if the input portal covers only one part of the caustic (Figure 10a), it will create a sharp visual discontinuity after editing, as shown in Figure 10b. A soft selection can be done
Fig. 10 Using a stochastic map allows progressive selection and transformation of the paths. (a) Original result. (b) Raw editing. (c) Smooth editing.
Fig. 11 Using textures to control a lighting effect. (a) Unmodified scene. (b) Using a texture to control the intensity. (c) Using a texture to control the hue. (d) Using a normal map to modify outgoing directions.
using a stochastic function defining the probability for a ray to be edited or not. In practice, this function is defined using a grayscale texture. Figure 10c shows how such a probability map allows the editing to be applied smoothly when overlapping a lighting effect.
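Such a probability map can be sketched as a per-ray Bernoulli test against a texture lookup at the portal hit point. The names, the nearest-texel sampling and the row/column layout are our illustrative assumptions; in the paper the map is an arbitrary grayscale texture.

```python
import random

def make_soft_selector(prob_map, width, height, seed=None):
    """Per-ray stochastic selection from a grayscale probability map.

    `prob_map[row][col]` holds the edit probability in [0, 1]. The
    returned function takes the portal-space (u, v) of a ray hit and
    decides whether that ray is edited.
    """
    rng = random.Random(seed)
    def select(u, v):
        col = min(int(u * width), width - 1)    # nearest-texel lookup
        row = min(int(v * height), height - 1)
        return rng.random() < prob_map[row][col]
    return select

# A 1x2 map: left half of the portal never edited, right half always.
select = make_soft_selector([[0.0, 1.0]], width=2, height=1, seed=7)
```

A grayscale ramp across the caustic boundary then edits a progressively larger fraction of the rays, which is what removes the hard cut of Figure 10b.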
A normal map can also be used to modify the outgoing local tangent frame and thus tilt ray directions, as shown in Figure 11d. In this case, $M$ is modified non-uniformly and is no longer a linear transformation; hence shadow rays are not edited by portals in this result.
5.3 Photometric transformation
Textures can also be used to transform the intensity (Figure 11b) or the hue (Figure 11c) of lightpaths. An example of duplication coupled with different photometric transformations is shown in Figure 9c. The statue example (Figure 12) shows how color bleeding is recolored. The portal input and output surfaces are placed in front of the statue's head, and a texture is used to transform the hue of the color bleeding from the green wall. Other examples of photometric transformations are shown in Figure 7 and in the supplemental video.
5.4 Discussion
User control – In the current implementation, portals are manipulated directly through the use of two geometric handles, similarly to the positioning of objects in 3D modeling software (as shown using Blender in the supplemental video). The regular expression filter can be defined using presets (e.g. mirror reflections are ES+, purely specular caustics are LS+) or manually. A more intuitive automatic user interface could be envisioned. First, the user selects an area on the 3D surface, for instance with a sketch in image-space. Second, the portal is extracted from the paths going through this area: the regular expression from the prevailing syntax of the paths, and the surface from the footprint of the paths. Third, the drag-and-drop of the area in the scene defines the transformation $G_{x_j, x_e}$ associated with the portal. Such a process is similar to the interface proposed and validated by Schmidt et al. [23].
Prefixed regular expression – When a ray hits an input portal, we compare the expression of the path from its starting point to the last known vertex to decide whether the path is to be transformed. The regular expression thus represents the path interactions before intersecting the portal, not the full path within the scene. A back-tracking mechanism would allow computing the light scattering events occurring after the portal surface and filtering the path afterward based on its full definition. However, we found in our experiments that capturing lighting effects with a prefix regular expression is effective in practice.
Bi-directional traversal – Consequently, portals apply either to light subpaths (filter starting with L) or camera subpaths (filter starting with E). It is possible for subpaths to intersect opposite-type filtering input portals backwards and then be linked with opposite-type subpaths, potentially making the full path eligible for the portal transformation. To address this problem, segments that cross opposite-type input portals backwards are tagged. After the linking step, tagged segments are re-evaluated both ways and, if they match the filter, visibility is recomputed as explained in Section 4.
Performance – In our experimental implementation, using portals adds a computational overhead directly proportional to the number of transformed rays, varying from 5% to 15% for our examples. The memory overhead is negligible, as we only need to store one path syntax at a time.
6 Conclusion and Future work
We have analyzed the problem of editing light transport and have shown how it can be uniformly defined within the path-integral formulation of the rendering equation. Based on this definition, we have proposed a method called ray portals that allows the capture and modification of lightpaths both in 3D-space and path-space.

Fig. 12 Color bleeding is edited through a spectral transformation. The regular expression filter is LD. A color shift transforms the greenish bleeding into a reddish one. From left to right: unmodified scene, portal surfaces (in red), result.
In the current implementation, portals do not handle participating media. However, the proposed formulation could be adapted by ensuring that the editing function $M_{x_j, x_e}$ is continuous and computable for any point $p'_e$ between $p_e$ and $x_e$. An approach similar to light beam manipulation [13] could be envisioned for such an extension.
Another interesting direction for future work is extending portals to more varied geometries. In particular, non-planar geometries would allow the definition of complex editing functions $M_{x_j, x_e}$, such as the one used in Ritschel et al. [22], fitted to match the target surface. This would allow edits, as shown in Figure 4c, that locally conserve the illuminance of a lighting effect through editing. It would also ease the capture and manipulation of multidirectional effects, such as low-frequency ambient lighting. However, to fully integrate within a path-based rendering framework, portal transformations need to be inverted, which can be challenging for a complex $M_{x_j, x_e}$.
Ray portals provide a unified and efficient solution that
allows complex editing of light transport phenomena through
the manipulation of simple geometric portals and user-defined
controls. The proposed method complements and extends
previous state-of-the-art techniques, especially as it is largely
scene-independent and ensures a correct evaluation of light
propagation.
Appendix
The edited path contribution measure is defined as
$$
f_j(\bar{x}) = \left[ L(x_0 \to x_1)\, G(x_0 \leftrightarrow x_1) + \sum_{n=1}^{m_0} L(x_0 \to x_1^n)\, G(x_0 \leftrightarrow x_1^n, x_1) \right]
\prod_{i=1}^{k-1} \left[ f_s(x_{i-1} \to x_i \to x_{i+1})\, G(x_i \leftrightarrow x_{i+1}) + \sum_{n=1}^{m_i} f_s(x_{i-1} \to x_i \to x_{i+1}^n)\, G(x_i \leftrightarrow x_{i+1}^n, x_{i+1}) \right]
W(x_{k-1} \to x_k).
$$
References
1. Barzel, R.: Lighting controls for computer cinematography. J.
Graph. Tools 2(1), 1–20 (1997)
2. Birn, J.: Digital Lighting and Rendering (2nd Edition). New Rid-
ers Publishing, Thousand Oaks, CA, USA (2005)
3. Blender Online Community: Blender - a 3d modelling and render-
ing package (2015). URL http://www.blender.org
4. Damez, C., Slusallek, P., Walter, B.J., Myszkowski, K., Wald, I.,
Christensen, P.H.: Global illumination for interactive applications
and high-quality animations (2003)
5. Hachisuka, T., Ogaki, S., Jensen, H.W.: Progressive photon map-
ping. ACM Trans. Graph. 27(5), 130:1–130:8 (2008)
6. Heckbert, P.S.: Adaptive radiosity textures for bidirectional ray
tracing. SIGGRAPH Comput. Graph. 24(4), 145–154 (1990)
7. Jakob, W.: Mitsuba renderer (2010). http://www.mitsuba-renderer.org
8. Jensen, H.W.: A practical guide to global illumination using ray
tracing and photon mapping. In: ACM SIGGRAPH 2004 Course
Notes, SIGGRAPH ’04. ACM, New York, NY, USA (2004)
9. Kerr, W.B., Pellacini, F., Denning, J.D.: Bendylights: Artistic con-
trol of direct illumination by curving light rays. Computer Graph-
ics Forum 29(4), 1451–1459 (2010)
10. Lafortune, E.P., Willems, Y.D.: Bi-directional path tracing. In:
Proceedings Conference on Computational Graphics and Visual-
ization Techniques, pp. 145–153 (1993)
11. Lagae, A., Dutré, P.: An efficient ray-quadrilateral intersection
test. Journal of Graphics Tools 10(4), 23–32 (2005)
12. Mattausch, O., Igarashi, T., Wimmer, M.: Freeform shadow
boundary editing. Computer Graphics Forum 32, 175–184 (2013)
13. Nowrouzezahrai, D., Johnson, J., Selle, A., Lacewell, D.,
Kaschalk, M., Jarosz, W.: A programmable system for artistic vol-
umetric lighting. ACM Trans. Graph. 30(4), 29:1–29:8 (2011)
14. Obert, J., Pellacini, F., Pattanaik, S.: Visibility editing for all-
frequency shadow design. In: Proceedings of the 21st Eurograph-
ics Conference on Rendering, EGSR’10 (2010)
15. Okabe, M., Matsushita, Y., Shen, L., Igarashi, T.: Illumination
brush: Interactive design of all-frequency lighting. In: Proceedings
of the 15th Pacific Conference on Computer Graphics and Appli-
cations, PG ’07, pp. 171–180. IEEE Computer Society (2007)
16. Pellacini, F.: envylight: An interface for editing natural illumina-
tion. ACM Trans. Graph. 29(4), 34:1–34:8 (2010)
17. Pellacini, F., Battaglia, F., Morley, R.K., Finkelstein, A.: Lighting
with paint. ACM Trans. Graph. 26(2) (2007)
18. Pellacini, F., Tole, P., Greenberg, D.P.: A user interface for inter-
active cinematic shadow design. ACM Trans. Graph. 21
19. Raymond, B., Guennebaud, G., Barla, P., Pacanowski, R., Granier,
X.: Optimizing BRDF Orientations for the Manipulation of
Anisotropic Highlights. Computer Graphics Forum (2014)
20. Reiner, T., Kaplanyan, A., Reinhard, M., Dachsbacher, C.: Selec-
tive inspection and interactive visualization of light transport in
virtual scenes. Comp. Graph. Forum 31(2pt4), 711–718 (2012)
21. Ritschel, T., Okabe, M., Thormählen, T., Seidel, H.P.: Interactive
reflection editing. ACM Trans. Graph. 28(5), 129:1–129:7 (2009)
22. Ritschel, T., Thormählen, T., Dachsbacher, C., Kautz, J., Seidel, H.P.: Interactive on-surface signal deformation. ACM Trans.
Graph. 29(4), 36:1–36:8 (2010)
23. Schmidt, T.W., Novák, J., Meng, J., Kaplanyan, A.S., Reiner, T.,
Nowrouzezahrai, D., Dachsbacher, C.: Path-space manipulation of
physically-based light transport. ACM Trans. Graph. 32
24. Veach, E.: Robust monte carlo methods for light transport simula-
tion. Ph.D. thesis, Stanford, CA, USA (1998). Chap. 4,8