Article

Spacetime stereo: a unifying framework for depth from triangulation.

Honda Research Institute, Mountain View, CA 94041, USA.
IEEE Transactions on Pattern Analysis and Machine Intelligence. 03/2005; 27(2):296-302. DOI: 10.1109/TPAMI.2005.37
Source: PubMed

ABSTRACT: Depth from triangulation has traditionally been investigated in a number of independent threads of research, with methods such as stereo, laser scanning, and coded structured light considered separately. In this paper, we propose a common framework called spacetime stereo that unifies and generalizes many of these previous methods. To show the practical utility of the framework, we develop two new algorithms for depth estimation: depth from unstructured illumination change and depth estimation in dynamic scenes. Based on our analysis, we show that methods derived from the spacetime stereo framework can be used to recover depth in situations in which existing methods perform poorly.
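The unifying idea of the framework is that a triangulation-based matching cost can be aggregated over a window that extends in time as well as space. The minimal Python sketch below illustrates that idea under simplifying assumptions (rectified left/right image stacks, a plain sum-of-squared-differences cost, disparity held constant over the window); all names and parameters are illustrative, not the paper's implementation.

    # Illustrative spacetime stereo matching cost (not the authors' code).
    # Assumes rectified video stacks `left` and `right` of shape (T, H, W).
    import numpy as np

    def spacetime_ssd(left, right, x, y, d, half_win=2, frames=3):
        """SSD cost over a spatiotemporal window at pixel (x, y) and disparity d."""
        L = left[:frames, y - half_win:y + half_win + 1, x - half_win:x + half_win + 1]
        R = right[:frames, y - half_win:y + half_win + 1, x - d - half_win:x - d + half_win + 1]
        return np.sum((L.astype(np.float64) - R.astype(np.float64)) ** 2)

    def disparity_at(left, right, x, y, d_max=64):
        """Winner-take-all disparity using the spacetime matching cost."""
        costs = [spacetime_ssd(left, right, x, y, d) for d in range(d_max)]
        return int(np.argmin(costs))

In this view, a purely spatial stereo method corresponds to frames=1, while a purely temporal matcher (as in structured-light or unstructured illumination change) corresponds to half_win=0 with many frames; intermediate choices trade spatial against temporal support.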

Related publications
  • ABSTRACT: This paper presents a novel approach to recovering estimates of the 3D structure and motion of a dynamic scene from a sequence of binocular stereo images. The approach matches spatiotemporal orientation distributions between the left and right temporal image streams, which encapsulate both local spatial and local temporal structure for disparity estimation. By capturing spatial and temporal structure in this unified fashion, the two sources of information combine to yield disparity estimates that are naturally temporally coherent, while helping to resolve matches that would be ambiguous if either source were considered alone. Further, by allowing subsets of the orientation measurements to support different disparity estimates, the approach recovers multilayer disparity from spacetime stereo. Similarly, the matched distributions allow direct recovery of dense, robust estimates of 3D scene flow. The approach has been implemented with real-time performance on commodity GPUs using OpenCL. Empirical evaluation shows that it yields qualitatively and quantitatively superior estimates compared with various alternative approaches, including accurate multilayer estimates in the presence of (semi)transparent and specular surfaces.
    IEEE Transactions on Pattern Analysis and Machine Intelligence 11/2014; 36(11):2241-2254.
  • ABSTRACT: An active 3D scanning method that can capture fast motion is in strong demand across a wide range of fields. Since most commercial products based on a laser projector reconstruct the shape one point or one line per scan, they cannot capture fast motion in principle. Extended methods using structured light can drastically reduce the number of projected patterns, but they still require several patterns and therefore also cannot capture fast motion. One solution is to use a single pattern (one-shot scanning). Although one-shot scanning methods have been studied intensively, they often suffer from stability problems and their results tend to have low resolution. In this paper, we develop a new system that achieves dense and robust 3D measurement from a single grid pattern. 3D reconstruction is achieved by identifying grid patterns using coplanarity constraints. We also propose a coarse-to-fine method to increase the density of the shape with a single pattern.
    01/2008;
  • ABSTRACT: This paper explores a simple yet effective way to generate temporally coherent disparity maps from binocular video sequences based on kinematic constraints. Given the disparity map at a certain frame, the proposed approach computes the set of possible disparity values for each pixel in the subsequent frame, assuming a maximum displacement constraint (in world coordinates) allowed for each object. These disparity sets are then used to guide the stereo matching procedure in the subsequent frame, producing a temporally coherent disparity map (a simplified sketch of this disparity-bound computation appears after this list). Experimental results indicate that the proposed approach produces temporally coherent disparity maps comparable to or better than those of competing methods.
    IEEE International Conference on Acoustics, Speech, and Signal Processing, Florence, Italy; 05/2014
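For the kinematic-constraint approach in the last item above, the essential computation is a per-pixel disparity search range for the next frame, derived from the previous disparity map and a maximum allowed world-space displacement. The sketch below is a simplified illustration only (it bounds just the depth component of motion and assumes a rectified rig with known focal length and baseline); the function name and parameters are hypothetical and not taken from the paper.

    # Illustrative per-pixel disparity bounds from a kinematic constraint.
    # Assumes a rectified stereo rig: depth z = f * b / d, with focal length f
    # in pixels and baseline b in meters; max_disp_m is the maximum displacement
    # (in meters) an object may undergo between consecutive frames.
    import numpy as np

    def disparity_bounds(prev_disp, f, b, max_disp_m, d_min=1.0, d_max=128.0):
        """Return per-pixel [low, high] disparity ranges for the next frame."""
        d = np.clip(prev_disp.astype(np.float64), d_min, d_max)
        z = f * b / d                               # depth at the current frame
        z_near = np.maximum(z - max_disp_m, 1e-3)   # closest admissible depth
        z_far = z + max_disp_m                      # farthest admissible depth
        lo = np.clip(f * b / z_far, d_min, d_max)   # smallest admissible disparity
        hi = np.clip(f * b / z_near, d_min, d_max)  # largest admissible disparity
        return lo, hi                               # matcher searches only [lo, hi]

Restricting each pixel's search to such a range is what yields the temporal coherence described in the abstract, since estimated disparities cannot jump by more than the allowed physical displacement between consecutive frames.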
