
A Planar Light Probe

Neil G. Alldrin    David J. Kriegman

nalldrin@cs.ucsd.edu

University of California, San Diego

9500 Gilman Dr. Dept. #0404, La Jolla, CA 92093

kriegman@cs.ucsd.edu

Abstract

We develop a novel technique for measuring lighting that

exploits the interaction of light with a set of custom BRDFs.

This enables the construction of a planar light probe with

certain advantages over existing methods for measuring

lighting. To facilitate the construction of our light probe,

we derive a new class of bi-directional reflectance functions

based on the interaction of light through two planar sur-

faces separated by a transparent medium. Under certain

assumptions and proper selection of the two surfaces, we

show how to recover Fourier series coefficients of the inci-

dent lighting parameterized over the plane. The results are

experimentally validated by imaging a sheet of glass with

spatially varying patterns printed on either side.

1. Introduction

Images of a scene depend not only on the viewpoint
and scene content, but also upon the way it is illuminated.

Knowledge of the lighting is directly used in computer vi-

sion techniques such as shape from shading, shape from

shadows, shape from specularities, and photometric stereo

and can be indirectly exploited in tasks such as recognition.

In 3-D graphics, lighting is specified during rendering, and

for photo-realistic augmented or mixed reality, the light-

ing of rendered objects must match the lighting in the real

scene. In general, lighting is a function on the 4-D space

of rays or directed line segments within a scene. However,

it is common and often effective to treat sources as being

far from the scene elements in which case lighting can be

viewed as a positive function on a sphere. Unless otherwise
stated, we will consider the latter situation here. The

dynamic range of the lighting in a single scene can be very

large, ranging from direct illumination from, say, the sun to
indirect illumination from, say, a dark shadowed region. Yet the

total energy in the darker regions may be significant because

only a very small fraction of the light source directions will

correspond to a bright, direct illuminant.

The output of an illumination estimation process is some

representation of the lighting. Most generally, lighting esti-

mation is viewed as a sampling of the 4-D (or 2-D) lighting

space, and a large number of direct measurements are re-

turned as a radiance or environment map [3]. Other work assumes

that there are just a small number of point light sources lo-

cated nearby or at a distance [17, 27, 8, 12, 26, 24], and

so these approaches return the coordinates and strengths

of the sources. In general, these methods can be viewed

as assuming a parameterized generative model of lighting

and attempt to estimate the parameters. Others take a non-

parametric approach by representing lighting as a linear su-

perposition of some set of basis functions, and lighting esti-

mation amounts to estimating the coefficients for each basis

function [11]. Lighting and reflectance (BRDF) have been

well characterized by a spherical harmonic basis [1, 18],

and theoretical results [1] and empirical evidence [4, 10]

support the idea that a low order expansion of lighting

(3rd order) is sufficient for many Lambertian scenes. Haar

wavelets have also been used to estimate lighting from cast

shadows [15].
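The basis-coefficient view above can be made concrete with a short sketch. The following is our own illustration (not from the paper): it fits first-order real spherical harmonic coefficients to sampled radiance values by least squares; the function names and toy data are assumptions for demonstration only.

```python
import numpy as np

def sh_basis(dirs):
    """First-order real spherical harmonics evaluated at unit directions (N, 3)."""
    x, y, z = dirs.T
    return np.stack([
        0.282095 * np.ones_like(x),   # Y_0,0
        0.488603 * y,                 # Y_1,-1
        0.488603 * z,                 # Y_1,0
        0.488603 * x,                 # Y_1,1
    ], axis=1)

def estimate_lighting(dirs, radiance):
    """Least-squares fit of basis coefficients to sampled radiance."""
    B = sh_basis(dirs)
    coeffs, *_ = np.linalg.lstsq(B, radiance, rcond=None)
    return coeffs

# Toy check: lighting that is exactly 0.5*Y_0,0 + 0.2*Y_1,0 is recovered.
rng = np.random.default_rng(0)
d = rng.normal(size=(500, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
true = np.array([0.5, 0.0, 0.2, 0.0])
L = sh_basis(d) @ true
print(np.round(estimate_lighting(d, L), 3))  # ≈ [0.5, 0.0, 0.2, 0.0]
```

A higher-order expansion simply adds more basis columns; the estimation step is unchanged.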

A wide variety of techniques have emerged for measur-

ing lighting, and to understand their relative advantage, it

is helpful to first consider the different attributes of illumi-

nation capture. (1) Does the technique require inserting a

physical probe in the scene, require knowledge of the scene

geometry, or can it passively infer lighting directly from im-

ages of an unknown scene? (2) Does it provide lighting as a

2-D or 4-D function? (3) Does it produce a low or high dy-

namic range (LDR or HDR) illumination map? (4) Does it

require a single image (implying applicability to video with

dynamic lighting) or does it use multiple images, perhaps

to construct an HDR image? (5) What is the size, bulk, and

cost of the probe? (6) What is the resolution (spatial fre-

quency response) of the output? An ideal technique would

passively infer lighting from a single image, would provide

a high resolution 4-D light field, would produce HDR out-

put, would be applicable to video, and would be low cost.

No technique meets this ideal. Without a probe, the problem
is ill-posed and requires some sort of prior to arrive at


a solution [17]; so here we will consider techniques that
require insertion of a probe into the scene and that treat lighting
as a function on a sphere. The most straightforward technique
for distant lighting is simply to use a camera to directly
measure the light, perhaps using a fish-eye lens [5, 6],

or as part of a catadioptric omni-cam [14]. Alternatively,

a camera can observe a mirrored sphere which has been

placed in the scene, that reflects light from all directions

(though at a lower resolution toward the occluding contour).

While these techniques provide high spatial resolution, they

require HDR imaging [3, 13] to fully characterize the dynamic
range of lighting; this is accomplished by capturing multiple
LDR images and is therefore unsuitable for video. In
addition, cameras or spherical probes in the scene can be
relatively expensive and/or bulky.

A second approach is to introduce a matte light probe

into the scene which can be a sphere [27, 26, 24] or an ob-

ject with known shape [25]. When considered in terms of

the spherical harmonic expansion of the BRDF [1, 18], mir-

rored and Lambertian balls couldn’t be more different. The

mirrored ball, whose impulse response is akin to a delta

function, passes all spatial frequencies whereas the Lam-

bertian ball acts as a low pass filter and severely attenuates

high frequencies. It is argued that only an expansion to 3rd

order is feasible. However, the advantage of a Lambertian

probe is that the dynamic range of the images of a sphere
under any lighting condition is low, and so lighting can be
estimated from a single image, making the technique suitable
for video. The authors of [2] image multiple spheres with varying
reflectance (matte, glossy, and specular) to recover lighting.

A third way of constructing a probe is to take ad-

vantage of non-convex geometry and the resulting shad-

ows [20, 21, 22, 15]. Consider a sundial. The irradiance

arriving at a particular point on the underlying surface is

the product of the incident lighting with the visibility func-

tion induced by the geometry of the sundial. In [15] the

relation between lighting and cast shadows is analyzed in

the frequency domain in terms of spherical harmonics and

Haar wavelet bases. [19] analyzes cast shadows in Fourier

space.

In the paper presented here, we introduce a fourth mech-

anism for measuring lighting by using a multi-layered,

transparent medium which differentially absorbs and re-

flects light. In particular, the homogeneous middle layer of
the planar probe is a transparent medium (e.g., air or glass);
the top layer is patterned to partially absorb and transmit
light differently at different locations (e.g., a transparency);
and the bottom layer is patterned to reflect light differently
at different locations (e.g., a piece of printed paper).

The key idea is that the design of the two patterns leads

to a spatially varying effective BRDF. That is, when image

plane irradiance is averaged over an area (within a single

pixel or over multiple pixels), the manufactured probe can

Transparent Medium

Top Pattern

Bottom Pattern

Figure 1. The imaging setup for our light probe.

be treated as a mesoscopic geometric structure akin to the

micro-facet or pit models used in constructing models of

BRDFs [23, 16, 9]. Given a set of these BRDFs distributed

over the plane, we can recover lighting over the upper hemi-

sphere by treating each BRDF as a basis function. A special

case of the analysis is when the upper and lower patterns are

binary (completely opaque or transparent) in which case the

effective BRDF is solely the result of shadowing and mask-

ing [16].

The advantage of such a light probe is that its capabil-

ities lie between a mirrored and Lambertian probe in that

it can measure higher order frequencies than a Lambertian

ball, yet high dynamic range imaging is not needed. As a
consequence, it is suitable for capturing lighting in video.

Furthermore, a particular application of this probe is aug-

mented reality where it is common to include a planar ge-

ometric probe with fiducial markers (See for example AR

Toolkit [7]) for determining relative orientation, and our il-

lumination probe could be readily integrated for lighting es-

timation and photo-consistent rendering.

In the rest of this paper, we first introduce a design for

a probe, and characterize the effective BRDF of the probe

as a function of the patterned upper and lower layers. We

then show how the probe is designed, and how lighting can

be estimated from the probe. Finally, we report on results

of experiments that validate the potential of the probe.

2. Designing a BRDF for Lighting Recovery

Suppose we have a material consisting of two parallel

layers, referred to as the top and bottom layers, separated

by a transparent medium (see figure 1). Further suppose

both layers are spatially varying so that the top layer reflects

and transmits light as a function of position and the bottom

layer reflects light as a function of position. We assume

that the wavelength of light is much smaller than both the

thickness of the transparent layer and the spatially varying

patterns so that no diffractive effects occur. When light hits

the top layer, some is directly reflected and some is transmitted through the transparent medium where it is reflected by the bottom layer, travels back through the transparent medium, and finally passes out through the top layer. We seek to analyze the reflectance properties of such a material with the ultimate goal of recovering the lighting from a set of distinct BRDFs constructed in this way.

Figure 2. Mapping of the incident lighting onto the plane via refraction.

We denote the position on the plane by $\vec{x}$, the incident radiance from direction $\vec{\omega}_i$ arriving at position $\vec{x}$ as $l_i(\vec{x},\vec{\omega}_i)$, and the reflected radiance in direction $\vec{\omega}_r$ exiting at position $\vec{x}$ as $l_r(\vec{x},\vec{\omega}_r)$. We relate the incident radiance from differential solid angle $d\vec{\omega}_i$ at position $\vec{x}$ to the reflected radiance at position $\vec{x}'$ in direction $\vec{\omega}_r$ through the BSSRDF $S(\vec{x}, d\vec{\omega}_i \to \vec{x}', \vec{\omega}_r)$, so that the reflected radiance at position $\vec{x}$ in direction $\vec{\omega}_r$ is

$$l_r(\vec{x},\vec{\omega}_r) = \int_{\vec{x}' \in A} \int_{d\vec{\omega}_i \in \Omega} l_i(\vec{x}', d\vec{\omega}_i)\, S(\vec{x}', d\vec{\omega}_i; \vec{x}, \vec{\omega}_r)\, d\omega_i^N\, dA \qquad (1)$$

where $d\omega_i^N$ is the projected solid angle onto the plane with surface normal $N$. This expression states that all light arriving in some area $A$ contributes to the reflected radiance at position $\vec{x}$ according to BSSRDF $S$.

We now begin to specify $S$. First, we split it into a reflective term and a scattering term,

$$S(\vec{x}, d\vec{\omega}_i \to \vec{x}', \vec{\omega}_r) = f_r(\vec{x}, d\vec{\omega}_i \to \vec{\omega}_r) + f_s(\vec{x}, d\vec{\omega}_i \to \vec{x}', \vec{\omega}_r). \qquad (2)$$
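For intuition, equation (1) can be discretized as a weighted sum over surface patches and incident-direction bins. The sketch below is our own illustration: the array shapes, the `einsum` contraction, and the toy numbers are assumptions for demonstration, not part of the paper.

```python
import numpy as np

def reflected_radiance(l_i, S, dw_proj, dA):
    """
    Discrete version of equation (1).
    l_i     : (Nxp, Nw)            incident radiance per position/direction bin
    S       : (Nxp, Nw, Nx, Nwr)   sampled BSSRDF
    dw_proj : (Nw,)                projected solid angle of each incident bin
    dA      : float                area of each surface patch
    returns : (Nx, Nwr)            reflected radiance
    """
    return np.einsum('pw,pwxr,w->xr', l_i, S, dw_proj) * dA

# Toy numbers just to exercise the shapes and the summation.
Nxp, Nw, Nx, Nwr = 4, 6, 4, 3
l_i = np.ones((Nxp, Nw))
S = np.full((Nxp, Nw, Nx, Nwr), 0.1)
l_r = reflected_radiance(l_i, S, np.full(Nw, 0.05), dA=0.25)
print(l_r.shape)  # (4, 3)
```

Each output entry sums the contributions of every incident position and direction, weighted by the sampled BSSRDF, exactly as the double integral prescribes.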

The reflective term, $f_r(\vec{x}, d\vec{\omega}_i \to \vec{\omega}_r)$, is just a standard spatially varying BRDF and represents the light directly reflected off the top surface. The scattering term, $f_s(\vec{x}, d\vec{\omega}_i \to \vec{x}', \vec{\omega}_r)$, represents light passing through the top layer at position $\vec{x}'$ with solid angle $d\vec{\omega}_i$ and re-emerging at position $\vec{x}$ in direction $\vec{\omega}_r$.

Assuming no absorption as light travels through the transparent medium, specular transmission through the top layer, and that scattering beyond the initial reflection at the bottom layer is negligible, we can split the scattering term $f_s$ into the product of three parts,

$$f_s(\vec{x}, d\vec{\omega}_i \to \vec{x}', \vec{\omega}_r) = f_t(\vec{x}, d\vec{\omega}_i)\, F_i(\vec{\omega}_i) \cdot g_r(\vec{x} + \vec{\Delta}_i, d\vec{\omega}_{i'}, \vec{\omega}_{r'})\, d\omega_{i'}^N \cdot F_r(\vec{\omega}_r)\, f_t(\vec{x}', d\vec{\omega}_r)\, d\omega_{r'}^N \qquad (3)$$

where $f_t(\vec{x}, d\vec{\omega}_i)$ represents the initial transmission through the top layer, $g_r(\vec{x} + \vec{\Delta}_i, d\vec{\omega}_{i'}, \vec{\omega}_{r'})$ is the reflection from the bottom layer, $f_t(\vec{x}', d\vec{\omega}_r)$ is the transmission out through the top layer, and $F_i(\vec{\omega}_i)$ and $F_r(\vec{\omega}_r)$ are Fresnel transmission terms for entering and leaving the transparent medium. $\vec{x} + \vec{\Delta}(\vec{\omega}_i)$ is the position where a given ray of light hits the bottom layer.

Figure 3. Relationships between various angles and distances.

Now observe that if we fix the exitant angle, then only a single incident angle contributes to the scattering term. To understand this, note that only a single position $\vec{x}'$ contributes to the exitant radiance, which is completely specified by $\vec{\omega}_i$, $\vec{\omega}_r$, the thickness of the transparent medium $h$, and Snell's law (specified with the indices of refraction $\eta_1$ and $\eta_2$ for the outside medium and transparent medium respectively). We can take this idea one step further by parameterizing the incident and exitant hemispheres in terms of $\vec{\Delta}_i$ and $\vec{\Delta}_r$ respectively, which denote the displacement along the plane between where a ray intersects the top layer and the bottom layer. We can now specify the scattering term as a function of $\vec{x}$, $\vec{\Delta}_i$, and $\vec{\Delta}_r$,

$$f_s(\vec{x}, \vec{\Delta}_i, \vec{\Delta}_r) = f_t(\vec{x}, \vec{\Delta}_i)\, F_i(\vec{\Delta}_i) \cdot g_r(\vec{x} + \vec{\Delta}_i, \vec{\Delta}_i, \vec{\Delta}_r)\, d\omega_{i'}^N \cdot F_r(\vec{\Delta}_r)\, f_t(\vec{x} + \vec{\Delta}_r + \vec{\Delta}_i, \vec{\Delta}_r)\, d\omega_{r'}^N. \qquad (4)$$

If the incident lighting is distant, then it remains constant across the plane and we can parameterize it entirely in terms of $\vec{\Delta}_i$. Furthermore, if the refractive index of the transparent layer is higher than that of the outside environment, then because of Snell's law, $|\vec{\Delta}_i|$ will be bounded by some finite value dictated by the critical angle and the thickness of the medium. Without loss of generality, we scale the coordinate system along the plane so that $|\vec{\Delta}_i|_{\max} = \frac{1}{2}$. To simplify subsequent integrals, we define the incident lighting so that $l(\vec{\Delta}) = 0$ for all $|\vec{\Delta}| > \frac{1}{2}$ (where it was previously undefined).

Putting everything together, we can rewrite equation 1 as

$$l_r(\vec{x}, \vec{\Delta}_r) = \int_{\vec{\Delta}_i \in A} l_i(\vec{\Delta}_i)\, S(\vec{x}, \vec{\Delta}_i, \vec{\Delta}_r)\, d\omega_i^N\, dA \qquad (5)$$

where $A = [-\frac{1}{2}, \frac{1}{2}] \times [-\frac{1}{2}, \frac{1}{2}]$.
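The bound on $|\vec{\Delta}_i|$ can be made concrete: a grazing ray refracts to the critical angle inside the slab, so the lateral displacement between the top and bottom intersections is capped at $h \tan(\arcsin(\eta_1/\eta_2))$. A small sketch of this (the function name and unit choices are our own; the paper then rescales coordinates so that this bound equals 1/2):

```python
import math

def lateral_displacement(theta_i, h, eta1=1.0, eta2=1.52):
    """Displacement along the plane for incident angle theta_i (radians),
    for a slab of thickness h with refractive index eta2 in a medium eta1."""
    sin_t = (eta1 / eta2) * math.sin(theta_i)   # Snell's law
    return h * math.tan(math.asin(sin_t))

h = 0.096  # slab thickness in inches, matching the experiments in Section 3
d_max = lateral_displacement(math.pi / 2, h)    # grazing incidence
print(round(d_max, 4))  # ≈ 0.0839
```

Because $\eta_2 > \eta_1$, `sin_t` never reaches 1 and the displacement stays finite for every physically possible incident angle.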

2.1. Fourier Analysis

In the previous section we formulated the exitant radiance as the integral of the incident lighting $l_i(\vec{\Delta}_i)$ and the simplified BSSRDF $S(\vec{x}, \vec{\Delta}_i, \vec{\Delta}_r)$. We now turn our attention to the average behavior of the system. To simplify the math, we fold the projected solid angle terms and the Fresnel transmission terms into modified versions of $f_r$, $f_t$, and $g_r$, so that

$$\tilde{f}_r(\vec{x}, \vec{\Delta}_i, \vec{\Delta}_r) = f_r(\vec{x}, \vec{\Delta}, \vec{\omega}_r)\, d\omega_i^N\, dA \qquad (6)$$
$$\tilde{f}_t(\vec{x}, \vec{\Delta}) = f_t(\vec{x}, \vec{\Delta}) \qquad (7)$$
$$\tilde{g}_r(\vec{x}, \vec{\Delta}_i, \vec{\Delta}_r) = g_r(\cdots)\, F_i(\vec{\Delta}_i)\, F_r(\vec{\Delta}_r)\, d\omega_i^N\, dA. \qquad (8)$$

We can then write equation 5 as

$$l_r(\vec{x}, \vec{\Delta}_r) = \int_{\vec{\Delta}_i \in A} l_i(\vec{\Delta}_i)\, \tilde{S}(\vec{x}, \vec{\Delta}_i, \vec{\Delta}_r)\, dA \qquad (9)$$

where

$$\tilde{S}(\vec{x}, \vec{\Delta}_i, \vec{\Delta}_r) = \tilde{f}_r(\vec{x}, \vec{\Delta}_i, \vec{\Delta}_r) + \tilde{f}_t(\vec{x}, \vec{\Delta}_i) \cdot \tilde{g}_r(\vec{x} + \vec{\Delta}_i, \vec{\Delta}_i, \vec{\Delta}_r) \cdot \tilde{f}_t(\vec{x} + \vec{\Delta}_i + \vec{\Delta}_r, \vec{\Delta}_r). \qquad (10)$$

If $\tilde{S}$ varies spatially with period 1 in the $x$ and $y$ directions, then the average exitant radiance is

$$\bar{l}_r(\vec{\Delta}_r) = \int_{\vec{x} \in A} \int_{\vec{\Delta}_i \in A} l_i(\vec{\Delta}_i)\, \tilde{S}(\vec{x}, \vec{\Delta}_i, \vec{\Delta}_r)\, d\vec{\Delta}_i\, d\vec{x} \qquad (11)$$

where $A = [-\frac{1}{2}, \frac{1}{2}] \times [-\frac{1}{2}, \frac{1}{2}]$.

2.1.1 An Ideal Case

Suppose we are free to choose any form for $\tilde{f}_r$, $\tilde{f}_t$, and $\tilde{g}_r$ so long as each varies in $x$ and $y$ with period 1. To simplify things, let $\tilde{f}_r = 0$, $\vec{\Delta}_r = \vec{0}$, and suppose $\tilde{g}_r(\vec{x}, \vec{\Delta}_i, \vec{\Delta}_r) = \tilde{g}_r(\vec{x}, \vec{\Delta}_r)$ and $\tilde{f}_t(\vec{x}, \vec{\Delta}_i) = \tilde{f}_t(\vec{x})$ do not depend on $\vec{\Delta}_i$. Then the average exitant radiance is

$$\bar{l}_r(\vec{0}) = \int_{\vec{x} \in A} \tilde{f}_t(\vec{x})\, \tilde{g}_r(\vec{x}, \vec{0}) \int_{\vec{\Delta}_i \in A} l_i(\vec{\Delta}_i)\, \tilde{f}_t(\vec{x} - \vec{\Delta}_i)\, d\vec{\Delta}_i\, d\vec{x}. \qquad (12, 13)$$

Recalling our goal of recovering $l_i$, if we choose $\tilde{f}_t(\vec{x}) = \delta(\vec{x}) + \delta(\vec{x} - \vec{x}')$ and $\tilde{g}_r(\vec{x}) = \delta(\vec{x} - \vec{x}')$, where $\delta$ is the Kronecker delta function, then we get

$$\bar{l}_r(\vec{0}) = l_i(\vec{x}') + l_i(\vec{0}). \qquad (14)$$

Thus, setting $\tilde{f}_t$ and $\tilde{g}_r$ to delta functions enables point sampling of the incident radiance. Assuming we could measure the effects of a delta function, this would allow full recovery of the lighting (or a sampled version if $\vec{x}'$ is restricted to a finite set of values). However, this is an unrealistic solution in practice because delta-like functions would result in very subtle changes in image intensity which would be hard to recover with an image sensor.

Another possible choice for $\tilde{f}_t$ is

$$\tilde{f}_t(\vec{x}, \vec{\Delta}_i) = e^{-i2\pi \vec{n} \cdot \vec{x}} = e^{-i2\pi u x - i2\pi v y} \qquad (15)$$

where $\vec{n} = \{u, v\}$ and $u, v$ are integers. In this case the average exitant radiance is

$$\bar{l}_r(\vec{0}) = \int_{\vec{x} \in A} e^{-i2\pi \vec{n} \cdot \vec{x}}\, \tilde{g}_r(\vec{x}, \vec{0}) \int_{\vec{\Delta}_i \in A} l_i(\vec{\Delta}_i)\, e^{-i2\pi \vec{n} \cdot (\vec{x} - \vec{\Delta}_i)}\, d\vec{\Delta}_i\, d\vec{x} \qquad (16)$$

$$\bar{l}_r(\vec{0}) = \int_{\vec{x} \in A} e^{-i2\pi 2\vec{n} \cdot \vec{x}}\, \tilde{g}_r(\vec{x}, \vec{0}) \int_{\vec{\Delta}_i \in A} l_i(\vec{\Delta}_i)\, e^{i2\pi \vec{n} \cdot \vec{\Delta}_i}\, d\vec{\Delta}_i\, d\vec{x} \qquad (17)$$

$$\bar{l}_r(\vec{0}) = \tilde{G}_{2\vec{n}}\, L^*_{\vec{n}} \qquad (18)$$

where $L_{\vec{n}} = L(u,v)$ is the $(u,v)$th 2D Fourier series coefficient of the lighting, $\tilde{G}_{2\vec{n}} = \tilde{G}(2u, 2v)$ is the $(2u, 2v)$th Fourier series coefficient of $\tilde{g}_r$, and $*$ denotes the conjugate operator. To recover $L_{\vec{n}}$ we simply need to choose $\tilde{g}_r$ so that it contains frequencies of $2\vec{n}$: the most logical choice is to set $\tilde{g}_r(\vec{x}, \vec{0}) = e^{i4\pi \vec{n} \cdot \vec{x}}$, yielding $\tilde{G}_{2\vec{n}} = 1$ and

$$\bar{l}_r(\vec{0}) = L^*_{\vec{n}}. \qquad (19)$$

From this equation we directly obtain $L_{\vec{n}}$. While the assumptions used to reach this result are unrealistic¹, it does provide hope that one can construct a BRDF that directly outputs frequency components of the lighting. Since low frequency lighting is often sufficient for rendering purposes, we should be able to obtain a useful lighting representation using only a small set of such BRDFs.

¹ Not only is positivity violated, but imaginary numbers are used!
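The ideal-case result in equations (16)-(19) can be checked numerically. The sketch below is our own illustration with an arbitrary test lighting: it verifies that choosing $\tilde{f}_t = e^{-i2\pi \vec{n} \cdot \vec{x}}$ and $\tilde{g}_r = e^{i4\pi \vec{n} \cdot \vec{x}}$ makes the averaged exitant radiance equal the conjugate Fourier coefficient $L^*_{\vec{n}}$.

```python
import numpy as np

M = 256
s = (np.arange(M) + 0.5) / M - 0.5            # midpoint samples of [-1/2, 1/2]
X, Y = np.meshgrid(s, s, indexing='ij')
dA = 1.0 / M**2
n = (1, 2)                                    # frequency to probe

# An arbitrary positive test lighting on the square A.
l_i = 1.0 + 0.5 * np.cos(2 * np.pi * (X + 2 * Y)) + 0.25 * np.sin(2 * np.pi * X)

f_t = np.exp(-2j * np.pi * (n[0] * X + n[1] * Y))   # top pattern
g_r = np.exp(+4j * np.pi * (n[0] * X + n[1] * Y))   # bottom pattern

# Separable form of the averaged radiance: the inner integral over Delta_i
# contributes a factor exp(+i 2 pi n . Delta_i), leaving an outer integral
# over x of f_t * g_r * f_t (which is identically 1 for this choice).
outer = np.sum(f_t * g_r * f_t) * dA
inner = np.sum(l_i * np.exp(+2j * np.pi * (n[0] * X + n[1] * Y))) * dA
l_bar = outer * inner

# Fourier series coefficient L_n of the lighting, computed directly.
L_n = np.sum(l_i * np.exp(-2j * np.pi * (n[0] * X + n[1] * Y))) * dA
print(np.allclose(l_bar, np.conj(L_n)))  # True
```

Here the cosine component of the test lighting has frequency $(1, 2)$, so $L_{\vec{n}} = 0.25$ and the averaged radiance recovers it exactly (up to discretization error).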

2.1.2 A More Realistic Case

To satisfy the laws of physics, we must modify the above formulation in a number of ways:

1. Positivity and conservation of energy must be enforced. A valid BRDF or BTDF is greater than or equal to zero for all inputs, and the integral of a BRDF or BTDF over all incident directions must be 1 or less.

2. Fresnel reflectance varies with incident angle, so our assumption that the top and bottom layers are constant across incident angles is violated.

3. We must depend exclusively on non-imaginary numbers.

Assumption 1 can be met simply by adding a constant term to $\tilde{f}_t$ or $\tilde{g}_r$ and then scaling the signal with a multiplicative factor. Thus, we will get a new signal $k_m(k_a + f(\vec{x}))$ that satisfies positivity and conservation of energy. Assumption 2 implies that we can no longer factor $\tilde{f}_t$ and $\tilde{g}_r$ out of the inner integral. However, in many cases we can factor these terms into a spatially varying component that doesn't vary with incident angle and a non-spatially varying component that remains inside the integral. If the spatially varying components have appropriate signals we can recover the product of the lighting with the non-spatially varying components of $\tilde{f}_t$ and $\tilde{g}_r$. Once this is recovered we can divide out the undesired terms and recover the original lighting. Assumption 3 is easily overcome by using sinusoids instead of complex exponentials (i.e., the real-valued form of the Fourier series).
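The positivity fix amounts to shifting and scaling each signal into a physically printable range, i.e. $k_m(k_a + f(\vec{x}))$. A minimal sketch of constructing such a pattern (the resolution and frequencies are illustrative choices of ours):

```python
import numpy as np

def printable_pattern(u, v, res=64):
    """Real sinusoid shifted and scaled into [0, 1]: k_m (k_a + f(x))
    with k_a = 1 and k_m = 1/2 for f(x) = sin(-2*pi*(u*x + v*y))."""
    x = (np.arange(res) + 0.5) / res
    X, Y = np.meshgrid(x, x, indexing='ij')
    f = np.sin(-2 * np.pi * (u * X + v * Y))   # zero-mean signal
    return 0.5 * (1.0 + f)                     # valid transmittance in [0, 1]

p = printable_pattern(1, 2)
print(p.min() >= 0.0 and p.max() <= 1.0)       # True
```

The added constant contributes only a DC term to the measurements, which can be estimated and subtracted before dividing out the scale factor.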

3. Experimental Validation

3.1. Setup

To validate our theory, we printed a set of sinusoidal pat-

terns on a transparency sheet and on a sheet of matte pa-

per and separated the two patterns with a sheet of glass

(see figure 4). The thickness of the glass was measured

to be 0.096 inches and the refractive index assumed to be

1.52. To flatten the transparency sheet we also placed a

sheet of glass above the transparency. Thus, our light probe

consists of two sheets of glass, a transparency sheet and a

piece of matte paper. For each frequency² (u, v) we devote

² Minus redundant frequencies caused by conjugate symmetry.

Figure 4. Our experimental setup. We have our planar light probe

next to a mirrored ball, which is used to capture the baseline light-

ing.

Figure 5. Closeup of the patterns on our planar light probe.

two regions where the top layer is a sinusoid of the form $\frac{1}{2}(1 + \sin(-2\pi(ux + vy)))$ and the bottom is a sinusoid of the form $\frac{1}{2}(1 + \sin(4\pi(ux + vy) + \tau))$. If we assume the bottom layer is Lambertian, then it can be shown that the reflected radiance is of the form $aL_0 + bL_{a\vec{n}} + cL_{b\vec{n}} + s(x)$, where $L_0$ is the average incident radiance, $L_{a\vec{n}}$ and $L_{b\vec{n}}$ are the even and odd portions of the Fourier coefficients, and $s(x)$ represents the specularities that occur at the surface of the glass. We add a spatial dependence on the sur-

face of the glass. We add a spatial dependence on the sur-

face reflection because while we assume the light and cam-

era are distant, in practice this assumption is violated and

surface reflections vary spatially across the surface (albeit

slowly). To counteract the effect of the spatially varying

specularity term, we sample the specular reflection by plac-

ing unpatterned regions at uniform intervals across the light

probe. There are four types of unpatterned region: (top

clear, bottom white), (top clear, bottom dark), (top dark,

bottom white), (top dark, bottom dark). Because Fresnel re-

flection occurs at the top surface of the glass, by subtracting

the average intensity in a constant region from some other

type of region, we effectively cancel the specular reflection

term.
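The cancellation described above reduces to a subtraction of region averages. A toy sketch (all values invented for illustration; the function name is our own):

```python
import numpy as np

def cancel_specular(patterned_pixels, constant_pixels):
    """Subtract the local specular estimate (mean of a nearby unpatterned
    region) from a patterned region; the slowly varying surface reflection
    s(x) is common to both and cancels."""
    return patterned_pixels - np.mean(constant_pixels)

specular = 0.12                                    # local Fresnel reflection
signal = np.array([0.30, 0.42, 0.35])              # true pattern response
observed_pattern = signal + specular
observed_constant = np.full(5, specular)           # (top dark, bottom dark) region
print(cancel_specular(observed_pattern, observed_constant))  # ≈ signal
```

Because $s(x)$ varies slowly, the unpatterned samples only need to be close to the patterned region, not co-located with it.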

We lay out the planar light probe in terms of blocks,