Spacetime stereo: A unifying framework for depth from triangulation

Honda Research Institute, Mountain View, CA 94041, USA.
IEEE Transactions on Pattern Analysis and Machine Intelligence (Impact Factor: 5.78). 03/2005; 27(2):296-302. DOI: 10.1109/TPAMI.2005.37
Source: PubMed


Depth from triangulation has traditionally been investigated in a number of independent threads of research, with methods such as stereo, laser scanning, and coded structured light considered separately. In this paper, we propose a common framework called spacetime stereo that unifies and generalizes many of these previous methods. To show the practical utility of the framework, we develop two new algorithms for depth estimation: depth from unstructured illumination change and depth estimation in dynamic scenes. Based on our analysis, we show that methods derived from the spacetime stereo framework can be used to recover depth in situations in which existing methods perform poorly.
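In the spacetime stereo framework, the familiar spatial matching window is extended along the time axis: a candidate disparity is scored by comparing spatiotemporal blocks of pixels across the rectified left and right image sequences. A minimal sketch of such a matching cost, assuming rectified frame sequences stored as NumPy arrays and integer disparities (the names `spacetime_ssd` and `best_disparity` are illustrative, not from the paper):

```python
import numpy as np

def spacetime_ssd(left, right, x, y, t, d, half_w=2, half_t=1):
    # Sum of squared differences over a (2*half_t+1) x (2*half_w+1) x (2*half_w+1)
    # spacetime window centered on pixel (x, y) at frame t, for disparity d.
    # left, right: rectified sequences of shape (T, H, W).
    wl = left[t-half_t:t+half_t+1, y-half_w:y+half_w+1, x-half_w:x+half_w+1]
    wr = right[t-half_t:t+half_t+1, y-half_w:y+half_w+1, x-d-half_w:x-d+half_w+1]
    return float(np.sum((wl - wr) ** 2))

def best_disparity(left, right, x, y, t, d_max, **kw):
    # Brute-force winner-take-all search over integer disparities 0..d_max.
    costs = [spacetime_ssd(left, right, x, y, t, d, **kw) for d in range(d_max + 1)]
    return int(np.argmin(costs))
```

The temporal extent `half_t` is the key knob: under unstructured illumination change on a static scene, a longer temporal window disambiguates otherwise textureless regions, while in a dynamic scene the window must stay short enough that the surface is effectively static over it.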

    • "The initial reconstructions are then corrected using an analytically precomputed lookup table that compensates for the phase-estimation error. Spatiotemporal approaches [13] [1] seek to validate the spatial observations over time. A more comprehensive overview of active stereo methods is given by Salvi et al. [7]. "
    ABSTRACT: Phase-shifting profilometry is a proven method for dense and precise surface reconstruction. However, the scene must remain still while a sequence of several images is acquired. Active stereo methods exist that lift this static-scene constraint, but they impose other limitations, such as continuity of the surface and texture, or a considerably reduced reconstruction resolution. We present a new reconstruction technique that is as dense and precise as phase-shifting profilometry while allowing the scene to translate during acquisition of the image sequence. This makes it attractive for industrial applications. We study its performance through simulations and demonstrate it on a real example.
    ORASIS 2013; 06/2013
  • Source
    • "The third group is composed of methods of spatiotemporal stereo that do not estimate the motion explicitly, but exploit a local spatiotemporal neighbourhood of pixels to increase the discriminability of the similarity statistics. Paper [5] projects an artificial pattern varying over time onto the static scene and temporally aggregates the statistic. The similarity statistic (based on bilateral filtering) is also temporally aggregated in [9], where adjacent frames are weighted by a Gaussian kernel to cope with small motion. "
    ABSTRACT: Stereo matching is a challenging problem, especially in the presence of noise or of weakly textured objects. Using temporal information in a binocular video sequence to increase the discriminability for matching has been introduced in the recent past, but all the proposed methods assume either constant disparity over time, or small object motions, which is not always true. We introduce a novel stereo algorithm that exploits temporal information by robustly aggregating a similarity statistic over time, in order to improve the matching accuracy for weak data, while preserving regions undergoing large motions without introducing artifacts.
    Pattern Recognition (ICPR), 2012 21st International Conference on; 11/2012
  • Source
    • "That would be true especially when the projected video contains a lot of texture allowing for simple greedy methods to work. Such methods are known as spatio-temporal stereovision [2] [23]. "
    ABSTRACT: This paper presents a novel approach for matching 2-D points between a video projector and a digital camera. Our method is motivated by camera–projector applications for which the projected image needs to be warped to prevent geometric distortion. Since the warping process often needs geometric information on the 3-D scene obtained from a triangulation, we propose a technique for matching points in the projector to points in the camera based on arbitrary video sequences. The novelty of our method lies in the fact that it does not require the use of pre-designed structured light patterns as is usually the case. The backbone of our application lies in a function that matches activity patterns instead of colors. This makes our method robust to pose, severe photometric and geometric distortions. It also does not require calibration of the color response curve of the camera–projector system. We present quantitative and qualitative results with synthetic and real-life examples, and compare the proposed method with the scale invariant feature transform (SIFT) method and with a state-of-the-art structured light technique. We show that our method performs almost as well as structured light methods and significantly outperforms SIFT when the contrast of the video captured by the camera is degraded.
    Machine Vision and Applications 09/2012; 23(5). DOI:10.1007/s00138-011-0358-4 · 1.35 Impact Factor
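The phase-shifting profilometry mentioned in the first excerpt above recovers a per-pixel phase from N projected sinusoidal patterns. A sketch of the standard N-step estimator (a textbook formula, not code from any of the papers listed here):

```python
import numpy as np

def phase_from_shifts(images):
    # images: N frames captured under patterns I_n = A + B*cos(phi - 2*pi*n/N).
    # Returns the wrapped phase phi in (-pi, pi] at every pixel, via the N-step
    # estimator phi = atan2(sum_n I_n*sin(d_n), sum_n I_n*cos(d_n)), d_n = 2*pi*n/N.
    I = np.asarray(images, dtype=float)
    N = I.shape[0]
    delta = (2 * np.pi * np.arange(N) / N).reshape((N,) + (1,) * (I.ndim - 1))
    return np.arctan2((I * np.sin(delta)).sum(axis=0),
                      (I * np.cos(delta)).sum(axis=0))
```

Because N shifted exposures of the same surface point are needed, any scene motion between exposures corrupts the phase; this is the static-scene constraint that the excerpt's spatiotemporal validation and the spacetime stereo framework both aim to relax.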