Article

Spacetime stereo: a unifying framework for depth from triangulation.

Honda Research Institute, Mountain View, CA 94041, USA.
IEEE Transactions on Pattern Analysis and Machine Intelligence (Impact Factor: 4.8). 03/2005; 27(2):296-302. DOI: 10.1109/TPAMI.2005.37
Source: PubMed

ABSTRACT: Depth from triangulation has traditionally been investigated in a number of independent threads of research, with methods such as stereo, laser scanning, and coded structured light considered separately. In this paper, we propose a common framework called spacetime stereo that unifies and generalizes many of these previous methods. To show the practical utility of the framework, we develop two new algorithms for depth estimation: depth from unstructured illumination change and depth estimation in dynamic scenes. Based on our analysis, we show that methods derived from the spacetime stereo framework can be used to recover depth in situations in which existing methods perform poorly.
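
The central idea is that the familiar per-frame matching window generalizes to a volume spanning several frames, so temporal intensity variation (from motion or changing illumination) helps disambiguate matches. Below is a minimal sketch of spacetime window matching, assuming rectified grayscale sequences of shape (T, H, W); the function and parameter names are illustrative, not taken from the paper:

    import numpy as np

    def spacetime_ssd_disparity(left_seq, right_seq, y, x, max_disp,
                                half_win=2, half_time=1):
        # left_seq, right_seq: rectified sequences, shape (T, H, W).
        # The cost sums squared differences over a window that spans
        # space AND time -- the spacetime generalization of SSD stereo.
        T = left_seq.shape[0]
        t = T // 2                                  # depth at the middle frame
        ts = slice(max(t - half_time, 0), min(t + half_time + 1, T))
        ys = slice(y - half_win, y + half_win + 1)  # assumes window fits in image
        ref = left_seq[ts, ys, x - half_win:x + half_win + 1]
        best_d, best_cost = 0, np.inf
        for d in range(max_disp + 1):
            if x - d - half_win < 0:
                break
            cand = right_seq[ts, ys, x - d - half_win:x - d + half_win + 1]
            cost = np.sum((ref - cand) ** 2)
            if cost < best_cost:
                best_cost, best_d = cost, d
        return best_d

With half_time=0 this reduces to ordinary window-based stereo; for a static scene under varying illumination, even a purely temporal window (half_win=0) can yield unique matches, which is the regime the paper's unstructured-illumination algorithm exploits.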

  • ABSTRACT: Stereo camera systems are widely used in many real applications, including indoor and outdoor robotics. They are easy to use and provide accurate depth estimates on well-textured scenes, but often fail when the scene lacks texture. The system can be helped in this situation by actively projecting light patterns that create artificial texture on the scene surface. The question this paper tries to answer is what the best pattern(s) to project would be. It introduces optimized projection patterns based on a novel concept of (symmetric) non-recurring De Bruijn sequences and describes algorithms to generate such sequences. A projected pattern creates an artificial texture that contains no duplicate windows along epipolar lines within a certain range, which makes correspondence matching simple and unique. The proposed patterns are compatible with most existing stereo algorithms, meaning they can be used without any changes to the stereo algorithm, immediately yielding much denser depth estimates at no additional computational cost. It is also argued that the proposed patterns are optimal binary patterns, and a few experimental results using stereo and space-time stereo algorithms are presented. (A reference De Bruijn construction is sketched after this list.)
    Proceedings of the 2009 IEEE International Conference on Robotics and Automation; 05/2009
  • ABSTRACT: This paper surveys the state of the art in evaluating the performance of scene flow estimation and points out the difficulties in generating benchmarks with ground truth, which have so far prevented the development of general, reliable solutions. Hopefully, the renewed interest in dynamic 3D content, which has led to increased research in this area, will also lead to more rigorous evaluation and more effective algorithms. We begin by classifying methods that estimate depth, motion, or both from multi-view sequences according to their parameterization of shape and motion. We then present several criteria for their evaluation, discuss their strengths and weaknesses, and conclude with recommendations.
    Proceedings of the 12th International Conference on Computer Vision - Volume 2; 10/2012
  • ABSTRACT: This paper presents a novel approach for matching 2-D points between a video projector and a digital camera. Our method is motivated by camera–projector applications in which the projected image must be warped to prevent geometric distortion. Since the warping process often needs geometric information about the 3-D scene obtained from a triangulation, we propose a technique for matching points in the projector to points in the camera based on arbitrary video sequences. The novelty of our method lies in the fact that it does not require pre-designed structured light patterns, as is usually the case. The backbone of our application is a function that matches activity patterns instead of colors, which makes our method robust to pose changes and to severe photometric and geometric distortions. It also does not require calibration of the color response curve of the camera–projector system. We present quantitative and qualitative results on synthetic and real-life examples, and compare the proposed method with the scale-invariant feature transform (SIFT) method and with a state-of-the-art structured light technique. We show that our method performs almost as well as structured light methods and significantly outperforms SIFT when the contrast of the video captured by the camera is degraded. (A minimal activity-matching sketch also follows this list.)
    Machine Vision and Applications (Impact Factor: 1.10). 09/2012; 23(5).
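
The uniqueness property exploited in the first cited paper above — no repeated window along an epipolar line — is what De Bruijn sequences provide by construction. As a reference point, here is the classic FKM (Lyndon-word) construction of a De Bruijn sequence B(k, n), in which every length-n window over a k-symbol alphabet occurs exactly once per cycle; the paper's symmetric non-recurring variant adds further constraints that this sketch does not attempt:

    def de_bruijn(k, n):
        # Classic FKM construction: concatenating the Lyndon words whose
        # length divides n yields a cyclic sequence of length k**n in
        # which every length-n window appears exactly once.
        a = [0] * (k * n)
        seq = []

        def db(t, p):
            if t > n:
                if n % p == 0:
                    seq.extend(a[1:p + 1])
            else:
                a[t] = a[t - p]
                db(t + 1, p)
                for j in range(a[t - p] + 1, k):
                    a[t] = j
                    db(t + 1, t)

        db(1, 1)
        return seq

    # de_bruijn(2, 4) -> 16 bits; every 4-bit window is unique (cyclically)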
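
The activity-matching idea in the last item can likewise be illustrated compactly: describe each pixel by when its intensity changes rather than by what color it is, then match projector and camera pixels whose temporal signatures correlate best. A minimal sketch under assumed names and a hand-picked threshold (the authors' actual matching function is more sophisticated):

    import numpy as np

    def activity_signature(seq, thresh=10.0):
        # seq: intensity sequence, shape (T, H, W). The signature marks,
        # per pixel, the frames at which intensity changes noticeably --
        # a color-independent description of "what happened when".
        return (np.abs(np.diff(seq, axis=0)) > thresh).astype(np.float32)

    def match_pixel(cam_sig, proj_sig, y, x):
        # Return the projector pixel whose activity vector has the
        # highest normalized correlation with camera pixel (y, x).
        L, H, W = proj_sig.shape
        v = cam_sig[:, y, x]
        flat = proj_sig.reshape(L, -1)
        eps = 1e-8                     # guards all-zero (inactive) vectors
        v_n = (v - v.mean()) / (v.std() + eps)
        f_n = (flat - flat.mean(axis=0)) / (flat.std(axis=0) + eps)
        scores = f_n.T @ v_n / L
        return divmod(int(np.argmax(scores)), W)   # (row, col) in projector

Because the signature is binarized per pixel, the match depends only on the timing of changes, which is what makes the approach insensitive to the color response and photometric distortions of the camera–projector pair.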
