Specular Surface Recovery from Reflections of a Planar Pattern Undergoing an Unknown Pure Translation.
ABSTRACT This paper addresses the problem of specular surface recovery and proposes a novel solution based on observing the reflections
of a translating planar pattern. Previous works have demonstrated that a specular surface can be recovered from the reflections
of two calibrated planar patterns. In this paper, however, only one reference planar pattern is assumed to have been calibrated
against a fixed camera observing the specular surface. Instead of introducing and calibrating a second pattern, the reference
pattern is allowed to undergo an unknown pure translation, and a closed-form solution is derived for recovering such a motion.
Unlike previous methods, which estimate the shape by directly triangulating the visual rays and the reflection rays, a novel method
based on computing the projections of the visual rays on the translating pattern is introduced. This yields a depth range
for each pixel, which also provides a measure of the accuracy of the estimation. The proposed approach enables a simple auto-calibration
of the translating pattern, and data redundancy resulting from the translating pattern can improve both the robustness and
accuracy of the shape estimation. Experimental results on both synthetic and real data are presented to demonstrate the effectiveness
of the proposed approach.
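To make the geometry concrete, the following is a minimal sketch of the ray-triangulation baseline that the abstract contrasts with: once the pattern has been observed at two positions, the reflection ray is fixed by the two pattern points it meets, and the surface point can be recovered as the point closest to both the visual ray and the reflection ray. The function names and the toy coordinates below are hypothetical illustrations, not the authors' implementation.

```python
# Toy triangulation of a specular surface point from a visual ray and a
# reflection ray. The reflection ray is fixed by the two pattern points
# q1, q2 that it meets before and after the pattern translation; the
# surface point is taken as the midpoint of the closest points of the
# two rays.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sub(u, v):
    return tuple(a - b for a, b in zip(u, v))

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(u, k):
    return tuple(a * k for a in u)

def triangulate(o1, d1, o2, d2):
    """Midpoint of the closest points of rays o1 + s*d1 and o2 + t*d2."""
    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = add(o1, scale(d1, s))     # closest point on the visual ray
    p2 = add(o2, scale(d2, t))     # closest point on the reflection ray
    return scale(add(p1, p2), 0.5)

# Visual ray: camera at the origin looking along +z.
o_cam, d_cam = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)

# Reflection ray through the pattern points observed at the two
# pattern positions (hypothetical toy values).
q1, q2 = (1.0, 0.0, 1.0), (2.0, 0.0, 0.0)
o_ref, d_ref = q1, sub(q2, q1)

surface_point = triangulate(o_cam, d_cam, o_ref, d_ref)
print(surface_point)  # the rays meet at (0, 0, 2)
```

In this toy configuration the two rays intersect exactly, so the midpoint equals the intersection; with noisy correspondences the gap between the two closest points gives a residual, which is the kind of per-pixel accuracy measure the depth-range formulation above is designed to expose directly.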