Conference Paper

A line integration based method for depth recovery from surface normals

Dept. of Inf. Electron., Tsinghua Univ., Beijing
DOI: 10.1109/ICPR.1988.28301
Conference: 9th International Conference on Pattern Recognition, 1988
Source: IEEE Xplore

ABSTRACT A method for constructing a depth map from surface normals is described. In this depth recovery method, an arbitrary depth must first be preset for a point somewhere in the image; path-independent line integrals are then computed to obtain the relative depth at every point in the image. The validity of the proposed method is discussed and its efficiency is tested using surface normals obtained by a shape-from-shading algorithm. A comparison with previous methods is made. Theoretical analysis and experimental results show that the present method is both powerful and easy to implement.
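The core operation described in the abstract, integrating a gradient field derived from surface normals along paths starting at a point with a preset depth, can be made concrete with a short sketch. The code below is a minimal illustration under stated assumptions, not the paper's algorithm: gradients are taken as p = -nx/nz and q = -ny/nz, and a simple row-then-column integration path replaces whatever path construction the paper uses; the function name, trapezoidal stepping, and boundary handling are choices made here.

    import numpy as np

    def depth_from_normals(normals, z0=0.0):
        """Recover a relative depth map by line integration of surface gradients.

        Minimal sketch, not the paper's exact algorithm: gradients come from
        unit normals (p = -nx/nz, q = -ny/nz) and are integrated along a simple
        row-then-column path starting from a preset depth z0 at pixel (0, 0).
        """
        nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
        nz = np.where(np.abs(nz) < 1e-6, 1e-6, nz)   # guard against division by zero
        p = -nx / nz                                  # dz/dx (along columns)
        q = -ny / nz                                  # dz/dy (along rows)

        h, w = p.shape
        z = np.full((h, w), float(z0))
        # integrate p along the first row (trapezoidal steps) ...
        z[0, 1:] = z0 + np.cumsum(0.5 * (p[0, 1:] + p[0, :-1]))
        # ... then integrate q down every column from that row
        z[1:, :] = z[0, :] + np.cumsum(0.5 * (q[1:, :] + q[:-1, :]), axis=0)
        return z

    # usage on a synthetic plane z = 0.5*x + 0.2*y, whose unnormalized normal is (-0.5, -0.2, 1)
    yy, xx = np.mgrid[0:64, 0:64]
    n = np.dstack([np.full(xx.shape, -0.5), np.full(xx.shape, -0.2), np.ones(xx.shape)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    z = depth_from_normals(n)       # recovers the plane up to the preset offset z0

For an exactly integrable gradient field the result does not depend on the chosen path; with noisy normals different paths generally disagree, which is why the construction of path-independent integrals is the central issue the paper addresses.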

  • ABSTRACT: We introduce an example-based photometric stereo approach that does not require explicit reference objects. Instead, we use a robust multi-view stereo technique to create a partial reconstruction of the scene which serves as scene-intrinsic reference geometry. Similar to the standard approach, we then transfer normals from reconstructed to unreconstructed regions based on robust photometric matching. In contrast to traditional reference objects, the scene-intrinsic reference geometry is neither noise free nor does it necessarily contain all possible normal directions for given materials. We therefore propose several modifications that allow us to reconstruct high quality normal maps. During integration, we combine both normal and positional information, yielding high quality reconstructions. We show results on several datasets, including an example based on data solely collected from the Internet.
    Proceedings of the 11th European conference on Trends and Topics in Computer Vision - Volume Part II; 09/2010
  • ABSTRACT: This paper presents three new methods for regularizing the least squares solution of the reconstruction of a surface from its gradient field: firstly, spectral regularization based on discrete generalized Fourier series (e.g., Gram polynomials, Haar functions, etc.); secondly, Tikhonov regularization applied directly to the 2D domain problem; and thirdly, regularization via constraints such as arbitrary Dirichlet boundary conditions. It is shown that the solutions to the aforementioned problems all satisfy Sylvester equations, which leads to substantial computational gains; specifically, the solution of the Sylvester equation is direct (non-iterative) and, for an m × n surface, is of the same complexity as computing an SVD of the same size. (A sketch of this Sylvester-equation formulation is given after this list.)
    The 24th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2011, Colorado Springs, CO, USA, 20-25 June 2011; 01/2011
  • ABSTRACT: We describe a robust method for the recovery of the depth map (or height map) from a gradient map (or normal map) of a scene, such as would be obtained by photometric stereo or interferometry. Our method allows for uncertain or missing samples, which are often present in experimentally measured gradient maps, and also for sharp discontinuities in the scene's depth, e.g. along object silhouette edges. By using a multi-scale approach, our integration algorithm achieves linear time and memory costs. A key feature of our method is the allowance for a given weight map that flags unreliable or missing gradient samples. We also describe several integration methods from the literature that are commonly used for this task. Based on theoretical analysis and tests with various synthetic and measured gradient maps, we argue that our algorithm is as accurate as the best existing methods, handling incomplete data and discontinuities, and is more efficient in time and memory usage, especially for large gradient maps. (A simple weighted least-squares baseline illustrating the weight-map idea is sketched after this list.)
    Computer Vision and Image Understanding 08/2012; 116(8):882–895.
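The second item above recasts least-squares reconstruction from a gradient field so that the normal equations become a Sylvester equation. The sketch below shows that formulation with simple forward-difference operators and Tikhonov regularization; it is an illustration under those assumptions, not the authors' implementation, and the operator construction, boundary handling, and function names are choices made here.

    import numpy as np
    from scipy.linalg import solve_sylvester

    def forward_diff(n):
        """n x n forward-difference operator; the last row reuses the previous step."""
        D = np.eye(n, k=1) - np.eye(n)
        D[-1, :] = D[-2, :]            # crude one-sided closure at the boundary
        return D

    def surface_from_gradient_sylvester(gx, gy, lam=1e-6):
        """Tikhonov-regularized least-squares surface from gradients gx, gy (m x n).

        Minimizes ||Dy Z - gy||^2 + ||Z Dx^T - gx||^2 + lam ||Z||^2, whose normal
        equations are the Sylvester equation
            (Dy^T Dy + lam I) Z + Z (Dx^T Dx) = Dy^T gy + gx Dx.
        """
        m, n = gx.shape
        Dy = forward_diff(m)           # differences down the rows (y direction)
        Dx = forward_diff(n)           # differences across the columns (x direction)
        A = Dy.T @ Dy + lam * np.eye(m)
        B = Dx.T @ Dx
        C = Dy.T @ gy + gx @ Dx
        return solve_sylvester(A, B, C)   # direct (non-iterative) solve

With lam > 0 the eigenvalues of A are strictly positive and those of B are non-negative, so the Sylvester equation has a unique solution; the cited paper treats the other regularizations it lists (spectral, Dirichlet constraints) within the same framework.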
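The third item above integrates a gradient map under a weight map that flags missing or unreliable samples. The sketch below is not the cited multi-scale algorithm; it is a plain sparse weighted least-squares baseline included only to make the weight-map idea concrete, with the function name and the pixel-pinning convention chosen here.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import lsqr

    def integrate_weighted(gx, gy, w=None):
        """Weighted least-squares integration of a gradient field (gx, gy).

        Each finite difference z[i, j+1] - z[i, j] ~ gx[i, j] and
        z[i+1, j] - z[i, j] ~ gy[i, j] becomes one weighted equation; w holds
        per-pixel reliabilities (0 = ignore, 1 = trust). z[0, 0] is pinned to 0
        to remove the unknown additive constant.
        """
        h, wd = gx.shape
        if w is None:
            w = np.ones((h, wd))
        idx = np.arange(h * wd).reshape(h, wd)

        rows, cols, vals, rhs = [], [], [], []
        eq = 0
        for i in range(h):                      # x-direction equations
            for j in range(wd - 1):
                a = np.sqrt(w[i, j] * w[i, j + 1])
                rows += [eq, eq]; cols += [idx[i, j + 1], idx[i, j]]
                vals += [a, -a]; rhs.append(a * gx[i, j]); eq += 1
        for i in range(h - 1):                  # y-direction equations
            for j in range(wd):
                a = np.sqrt(w[i, j] * w[i + 1, j])
                rows += [eq, eq]; cols += [idx[i + 1, j], idx[i, j]]
                vals += [a, -a]; rhs.append(a * gy[i, j]); eq += 1
        rows.append(eq); cols.append(0); vals.append(1.0); rhs.append(0.0); eq += 1

        A = sp.coo_matrix((vals, (rows, cols)), shape=(eq, h * wd)).tocsr()
        z = lsqr(A, np.asarray(rhs))[0]
        return z.reshape(h, wd)

Building and solving the system directly like this becomes slow for large maps; the linear time and memory costs claimed in the cited abstract are precisely what its multi-scale scheme is designed to achieve.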