A Planar Light Probe
Neil G. Alldrin and David J. Kriegman
University of California, San Diego
9500 Gilman Dr. Dept. #0404, La Jolla, CA 92093
Abstract

We develop a novel technique for measuring lighting that exploits the interaction of light with a set of custom BRDFs. This enables the construction of a planar light probe with certain advantages over existing methods for measuring lighting. To facilitate the construction of our light probe, we derive a new class of bi-directional reflectance functions based on the interaction of light with two planar surfaces separated by a transparent medium. Under certain assumptions and proper selection of the two surfaces, we show how to recover Fourier series coefficients of the incident lighting parameterized over the plane. The results are experimentally validated by imaging a sheet of glass with spatially varying patterns printed on either side.
Introduction

Images of a scene depend not only on the viewpoint and scene content, but also on the way the scene is illuminated. Knowledge of the lighting is used directly in computer vision techniques such as shape from shading, shape from shadows, shape from specularities, and photometric stereo, and can be exploited indirectly in tasks such as recognition. In 3-D graphics, lighting is specified during rendering, and for photo-realistic augmented or mixed reality, the lighting of rendered objects must match the lighting in the real scene. In general, lighting is a function on the 4-D space of rays or directed line segments within a scene. However, it is common and often effective to treat sources as being far from the scene elements, in which case lighting can be viewed as a positive function on a sphere. Unless otherwise stated, we will consider the latter situation here. The dynamic range of the lighting in a single scene can be very large, spanning direct illumination from, say, the sun to indirect illumination from a dark, shadowed region. Yet only a very small fraction of the light source directions will correspond to a bright, direct illuminant.
The output of an illumination estimation process is some representation of the lighting. Most generally, lighting estimation is viewed as a sampling of the 4-D (or 2-D) lighting space, and a large number of direct measurements are returned as a radiance or environment map. Other work assumes that there are just a small number of point light sources located nearby or at a distance [17, 27, 8, 12, 26, 24], and so these approaches return the coordinates and strengths of the sources. In general, these methods can be viewed as assuming a parameterized generative model of lighting and attempting to estimate its parameters. Others take a non-parametric approach by representing lighting as a linear superposition of some set of basis functions, so that lighting estimation amounts to estimating the coefficient of each basis function. Lighting and reflectance (BRDF) have been well characterized by a spherical harmonic basis [1, 18], and theoretical results and empirical evidence [4, 10] support the idea that a low-order expansion of lighting (3rd order) is sufficient for many Lambertian scenes. Haar wavelets have also been used to estimate lighting from cast shadows [15].
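As a concrete illustration of the non-parametric view (our sketch, not code from any cited work): distant lighting, treated as a function on the sphere, is written as a superposition of basis functions, and estimation reduces to a least-squares fit of the coefficients. Here the basis is the first four real spherical harmonics (orders l = 0, 1), and the radiance samples are simulated rather than measured.

```python
import numpy as np

def sh_basis(theta, phi):
    """Evaluate the first four real spherical harmonics at (theta, phi)."""
    x = np.sin(theta) * np.cos(phi)
    y = np.sin(theta) * np.sin(phi)
    z = np.cos(theta)
    return np.stack([
        0.282095 * np.ones_like(z),  # Y_0^0 (constant term)
        0.488603 * y,                # Y_1^{-1}
        0.488603 * z,                # Y_1^0
        0.488603 * x,                # Y_1^1
    ], axis=-1)

rng = np.random.default_rng(0)
theta = np.arccos(rng.uniform(-1.0, 1.0, 500))  # uniform directions on sphere
phi = rng.uniform(0.0, 2.0 * np.pi, 500)

true_coeffs = np.array([1.0, 0.3, -0.5, 0.2])   # hypothetical lighting
B = sh_basis(theta, phi)                        # 500 x 4 design matrix
radiance = B @ true_coeffs                      # simulated measurements

# Lighting estimation = least-squares fit of the basis coefficients.
est_coeffs, *_ = np.linalg.lstsq(B, radiance, rcond=None)
```

With noiseless samples the fit recovers the coefficients exactly; in practice the measurements come from image intensities and the fit is regularized or truncated at low order.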
A wide variety of techniques have emerged for measuring lighting, and to understand their relative advantages, it is helpful to first consider the different attributes of illumination capture. (1) Does the technique require inserting a
physical probe in the scene, require knowledge of the scene
geometry, or can it passively infer lighting directly from im-
ages of an unknown scene? (2) Does it provide lighting as a
2-D or 4-D function? (3) Does it produce a low or high dy-
namic range (LDR or HDR) illumination map? (4) Does it
require a single image (implying applicability to video with
dynamic lighting) or does it use multiple images, perhaps
to construct an HDR image? (5) What is the size, bulk, and
cost of the probe? (6) What is the resolution (spatial fre-
quency response) of the output? An ideal technique would
passively infer lighting from a single image, would provide
a high resolution 4-D light field, would produce HDR out-
put, would be applicable to video, and would be low cost.
No technique meets this ideal. Without a probe, the problem is ill-posed and requires some sort of prior to arrive at a solution. Estimating lighting directly from scene objects of known geometry and reflectance should yield the best results; however, it requires tedious and error-prone BRDF measurements that can be difficult to get right.
We have presented theory suggesting that BRDFs can be manufactured to be sensitive to specific frequencies of the lighting. Based on this theory, we constructed a planar light probe capable of estimating the low-frequency components of the lighting. Such a probe is useful for many applications, particularly those requiring lighting estimation from low-dynamic-range images. For most materials, a fifth-order frequency approximation of the lighting is enough to render them photo-realistically, so pushing our probe a little further would make it a truly useful device.
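The fifth-order claim can be illustrated with a 1-D analogy (our illustration, not the paper's method): truncating a smooth, clamped-cosine "lighting" profile to its first five Fourier harmonics loses very little of the signal.

```python
import numpy as np

# Truncate a smooth periodic "lighting" profile to its first five
# Fourier harmonics and measure the approximation error.
N = 1024
t = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
signal = np.maximum(0.0, np.cos(t)) ** 4   # clamped-cosine lighting profile

coeffs = np.fft.rfft(signal)               # complex Fourier coefficients
order = 5
truncated = coeffs.copy()
truncated[order + 1:] = 0.0                # keep harmonics 0..5 only
approx = np.fft.irfft(truncated, n=N)

max_err = np.max(np.abs(approx - signal))  # small: low orders dominate
```

Because the clamped profile is smooth, its Fourier coefficients decay rapidly and the fifth-order reconstruction is visually indistinguishable from the original; high-frequency lighting (e.g., a visible point source) would of course need many more terms.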
Acknowledgments

This work was supported in part by National Science Foundation grants IIS-0308185 and EIA-0224431.
References

[1] R. Basri and D. W. Jacobs. Lambertian reflectance and linear subspaces. IEEE Trans. Pattern Anal. Mach. Intell., 25(2):218–233, February 2003.
[2] P. Debevec et al. Estimating surface reflectance properties of a complex scene under captured natural illumination. Technical Report ICT-TR-06.2004, University of Southern California ICT, December 2004.
[3] P. E. Debevec and J. Malik. Recovering high dynamic range radiance maps from photographs. In SIGGRAPH '97: Proceedings of the 24th annual conference on Computer graphics and interactive techniques, pages 369–378, New York, NY, USA, 1997. ACM Press/Addison-Wesley Publishing Co.
[4] R. Epstein, P. Hallinan, and A. Yuille. 5+/-2 eigenimages suffice: An empirical investigation of low-dimensional lighting models. In PBMCV, 1995.
[5] N. Greene. Environment mapping and other applications of world projections. IEEE Comput. Graph. Appl., 6(11):21–29, 1986.
[6] P. Haeberli and M. Segal. Texture mapping as a fundamental drawing primitive. In Fourth Eurographics Workshop on Rendering, pages 259–266. Eurographics, June 1993.
[7] H. Kato. ARToolKit.
[8] C. Kim, A. P. Petrov, H. Choh, Y. Seo, and I. Kweon. Illuminant direction and shape of a bump. Optical Society of America Journal A, 15:2341–2350, September 1998.
[9] J. J. Koenderink, A. J. V. Doorn, K. J. Dana, and S. Nayar. Bidirectional reflection distribution function of thoroughly pitted surfaces. Int. J. Comput. Vision, 31(2-3):129–144, 1999.
[10] K. Lee, J. Ho, and D. Kriegman. Acquiring linear subspaces for face recognition under variable lighting. IEEE Trans. Pattern Analysis & Machine Intelligence, 27(5):684–698, May 2005.
[11] S. R. Marschner and D. P. Greenberg. Inverse lighting for photography. In Proceedings of the Fifth Color Imaging Conference, Society for Imaging Science and Technology, 1997.
[12] D. Miyazaki, R. T. Tan, K. Hara, and K. Ikeuchi. Polarization-based inverse rendering from a single view. In ICCV, pages 982–987, 2003.
[13] S. Nayar and T. Mitsunaga. High dynamic range imaging: Spatially varying pixel exposures. In Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, pages 472–479, 2000.
[14] S. K. Nayar. Catadioptric omnidirectional camera. In CVPR '97: Proceedings of the 1997 Conference on Computer Vision and Pattern Recognition, page 482, Washington, DC, USA, 1997. IEEE Computer Society.
[15] T. Okabe, I. Sato, and Y. Sato. Spherical harmonics vs. Haar wavelets: Basis for recovering illumination from cast shadows. In CVPR (1), pages 50–57, 2004.
[16] M. Oren and S. K. Nayar. Generalization of Lambert's reflectance model. In SIGGRAPH '94: Proceedings of the 21st annual conference on Computer graphics and interactive techniques, pages 239–246, New York, NY, USA, 1994.
[17] A. Pentland. Finding the illuminant direction. J. Optical Soc. Am., 72:448–455, 1982.
[18] R. Ramamoorthi. A signal-processing framework for forward and inverse rendering. PhD thesis, Stanford, 2002.
[19] R. Ramamoorthi, M. Koudelka, and P. Belhumeur. A Fourier theory for cast shadows. IEEE Trans. Pattern Anal. Mach. Intell., 27(2):288–295, 2005.
[20] I. Sato, Y. Sato, and K. Ikeuchi. Illumination distribution from brightness in shadows: adaptive estimation of illumination distribution with unknown reflectance properties in shadow regions. In Proceedings of IEEE ICCV '99, volume 2, pages 875–882, September 1999.
[21] I. Sato, Y. Sato, and K. Ikeuchi. Illumination distribution from shadows. Kluwer Academic Publishers, 2001.
[22] I. Sato, Y. Sato, and K. Ikeuchi. Illumination from shadows. IEEE Trans. Pattern Anal. Mach. Intell., 25(3):290–300, 2003.
[23] K. E. Torrance and E. M. Sparrow. Theory for off-specular reflection from roughened surfaces. Journal of the Optical Society of America, 57(9):1105–1114, 1967.
[24] Y. Wang and D. Samaras. Estimation of multiple illuminants from a single image of arbitrary known geometry. In ECCV '02: Proceedings of the 7th European Conference on Computer Vision-Part III, pages 272–288. Springer-Verlag, 2002.
[25] M. Weber and R. Cipolla. A practical method for estimation of point light-sources. In BMVC, 2001.
[26] Y. Zhang and Y.-H. Yang. Multiple illuminant direction detection with application to image synthesis. IEEE Trans. Pattern Anal. Mach. Intell., 23(8):915–920, 2001.
[27] Q. Zheng and R. Chellappa. Estimation of illuminant direction, albedo, and shape from shading. IEEE Trans. Pattern Anal. Mach. Intell., 13(7):680–702, 1991.