Spacetime stereo: A unifying framework for depth from triangulation

Honda Research Institute, Mountain View, CA 94041, USA.
IEEE Transactions on Pattern Analysis and Machine Intelligence (Impact Factor: 5.78). 03/2005; 27(2):296-302. DOI: 10.1109/TPAMI.2005.37
Source: PubMed

ABSTRACT: Depth from triangulation has traditionally been investigated in a number of independent threads of research, with methods such as stereo, laser scanning, and coded structured light considered separately. In this paper, we propose a common framework called spacetime stereo that unifies and generalizes many of these previous methods. To show the practical utility of the framework, we develop two new algorithms for depth estimation: depth from unstructured illumination change and depth estimation in dynamic scenes. Based on our analysis, we show that methods derived from the spacetime stereo framework can be used to recover depth in situations in which existing methods perform poorly.
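The framework's central move is to aggregate matching costs over a window in space *and* time rather than space alone, so that temporal appearance variation (e.g. unstructured illumination change) disambiguates otherwise ambiguous matches. A minimal sketch of such a spacetime SSD block matcher is below, assuming rectified grayscale sequences; the function name, parameters, and NumPy-based implementation are illustrative inventions, not the authors' code:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def spacetime_ssd_disparity(left, right, max_disp, win=2, t_win=1):
    """Spacetime SSD block matching (illustrative sketch).

    left, right: rectified grayscale sequences of shape (T, H, W).
    Costs are summed over a (2*t_win+1) x (2*win+1) x (2*win+1)
    spatio-temporal window before a winner-take-all disparity choice
    for the central frame.
    """
    left = left.astype(float)
    right = right.astype(float)
    T, H, W = left.shape
    t0 = T // 2                              # estimate depth for the middle frame
    frames = slice(t0 - t_win, t0 + t_win + 1)
    k = 2 * win + 1
    best = np.full((H - 2 * win, W - 2 * win), np.inf)
    disp = np.zeros_like(best, dtype=int)
    for d in range(max_disp + 1):
        # per-pixel squared difference at disparity d, summed over time
        diff = np.full((H, W), 1e9)          # large cost where no match exists
        diff[:, d:] = ((left[frames, :, d:] - right[frames, :, :W - d]) ** 2).sum(axis=0)
        # aggregate over the spatial window (valid interior only)
        cost = sliding_window_view(diff, (k, k)).sum(axis=(2, 3))
        better = cost < best
        disp[better], best[better] = d, cost[better]
    return disp                              # disparity map for the interior of the central frame
```

Summing the cost over neighbouring frames is what lets a purely temporal intensity change (a moving unstructured light pattern, say) resolve matches that a single-frame spatial window cannot; shrinking the spatial window and growing the temporal one recovers structured-light-like behaviour, which is the sense in which the framework unifies the triangulation methods above.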

    • "The initial reconstructions are then corrected using an analytically precomputed look-up table that compensates for the phase-estimation error. Spatiotemporal approaches [13] [1] seek to validate the spatial observations over time. A more exhaustive global overview of active stereo methods is given by Salvi et al. [7]."
    ABSTRACT: Phase-shifting profilometry is a proven method for reconstructing surfaces densely and precisely. However, the scene must remain static while a sequence of several images is acquired. There are active stereo methods that remove this immobility constraint but impose other limitations, such as continuity of the surface and texture or a considerably reduced reconstruction resolution. We present a new reconstruction technique that is as dense and precise as phase-shifting profilometry while allowing the scene to translate during acquisition of the image sequence, which makes it attractive for industrial applications. We study its performance through simulations and demonstrate it on a real example.
    ORASIS 2013; 06/2013
    • "That would be true especially when the projected video contains a lot of texture allowing for simple greedy methods to work. Such methods are known as spatio-temporal stereovision [2] [23]. "
    ABSTRACT: This paper presents a novel approach for matching 2-D points between a video projector and a digital camera. Our method is motivated by camera–projector applications for which the projected image needs to be warped to prevent geometric distortion. Since the warping process often needs geometric information on the 3-D scene obtained from a triangulation, we propose a technique for matching points in the projector to points in the camera based on arbitrary video sequences. The novelty of our method lies in the fact that it does not require the use of pre-designed structured light patterns as is usually the case. The backbone of our application lies in a function that matches activity patterns instead of colors. This makes our method robust to pose, severe photometric and geometric distortions. It also does not require calibration of the color response curve of the camera–projector system. We present quantitative and qualitative results with synthetic and real-life examples, and compare the proposed method with the scale invariant feature transform (SIFT) method and with a state-of-the-art structured light technique. We show that our method performs almost as well as structured light methods and significantly outperforms SIFT when the contrast of the video captured by the camera is degraded.
    Machine Vision and Applications 09/2012; 23(5). DOI:10.1007/s00138-011-0358-4 · 1.35 Impact Factor
    • "Early attempts in seeded stereo algorithms made little or no attempt to track the motion of the camera or scene when propagating disparity information through time. Such an approach quickly leads to errors should any significant motion be present [24] [7]. Several attempts have been made to integrate motion estimates into the process, including using 2D optical flow [6] [19] and 3D disparity flow maps [16], both having some success."
    ABSTRACT: Algorithms for stereo video image processing typically assume that the various tasks (calibration, static stereo matching, and egomotion) are independent black boxes. In particular, the task of computing disparity estimates is normally performed independently of ongoing egomotion and environmental recovery processes. Can information from these processes be exploited in the notoriously hard problem of disparity field estimation? Here we explore the use of feedback from the environmental model being constructed to the static stereopsis task. A prior estimate of the disparity field is used to seed the stereo-matching process within a probabilistic framework. Experimental results on simulated and real data demonstrate the potential of the approach.
    Proceedings of the 2012 Joint International Conference on Human-Centered Computer Environments, Aizu-Wakamatsu, Japan; 03/2012