Real-time joint disparity and disparity flow estimation on programmable graphics hardware

Department of Computer Science, Memorial University of Newfoundland, St. John’s, Nfld, Canada A1A 3X5
Computer Vision and Image Understanding (Impact Factor: 1.54). 01/2009; DOI: 10.1016/j.cviu.2008.07.007
Source: DBLP


Disparity flow depicts the 3D motion of a scene in the disparity space of a given view and can be considered view-dependent scene flow. A novel algorithm is presented to compute disparity maps and disparity flow maps in an integrated process. Consequently, the disparity flow maps obtained help to enforce temporal consistency between disparity maps of adjacent frames. The disparity maps found also provide the spatial correspondence information that can be used to cross-validate disparity flow maps of different views. Two different optimization approaches are integrated into the presented algorithm for searching for optimal disparity values and disparity flows: the local winner-take-all approach runs faster, whereas the global dynamic-programming-based approach produces better results. All major computations are performed in the image space of the given view, leading to an efficient implementation on programmable graphics hardware. Experimental results on captured stereo sequences demonstrate the algorithm's capability of estimating both 3D depth and 3D motion in real time. A quantitative performance evaluation using synthetic data with ground truth is also provided.
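The local winner-take-all approach mentioned in the abstract can be illustrated with a minimal CPU sketch: for every pixel, each candidate disparity is scored with a matching cost and the cheapest one wins independently of its neighbours. This is only an illustration of the general technique, not the paper's GPU implementation; the function and parameter names are my own, and a real implementation would aggregate costs over a support window rather than use per-pixel absolute differences.

```python
import numpy as np

def wta_disparity(left, right, max_disp):
    """Local winner-take-all disparity search (illustrative sketch).

    left, right : 2-D float arrays, a rectified grayscale stereo pair.
    Returns, for each left-image pixel, the disparity in 0..max_disp
    with the lowest absolute-difference matching cost.
    """
    h, w = left.shape
    # cost[d, y, x] = matching cost of assigning disparity d to pixel (y, x)
    cost = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        # a left pixel at column x matches the right pixel at column x - d,
        # so the comparison is only valid for columns x >= d
        cost[d, :, d:] = np.abs(left[:, d:] - right[:, : w - d])
    # winner-take-all: each pixel independently picks its cheapest disparity
    return np.argmin(cost, axis=0)
```

The dynamic-programming alternative described in the abstract replaces the independent per-pixel `argmin` with an optimization along each scan line, trading speed for smoother, better-regularized results.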

    • "However, a few methods address dynamic programming for 2D motion estimation, e.g. [23] [28], but the first method that was able to deal with reasonably large 2D displacement fields was proposed in [12] and was then extended for scene flow estimation in [11]. Still, both methods are restricted to displacement vectors of only 25 and 10 pixels, respectively."
    ABSTRACT: Scan-line optimization via cost accumulation has become very popular for stereo estimation in computer vision applications and is often combined with a semi-global cost integration strategy, known as SGM. This paper introduces this combination as a general and effective optimization technique. It is the first time that this concept is applied to 3D medical image registration. The presented algorithm, SGM-3D, employs a coarse-to-fine strategy and reduces the search space dimension for consecutive pyramid levels by a fixed linear rate. This allows it to handle large displacements to an extent that is required for clinical applications in high dimensional data. SGM-3D is evaluated in context of pulmonary motion analysis on the recently extended DIR-lab benchmark that provides ten 4D computed tomography (CT) image data sets, as well as ten challenging 3D CT scan pairs from the COPDgene study archive. Results show that both registration errors as well as run-time performance are very competitive with current state-of-the-art methods.
    Computer Vision and Pattern Recognition, Columbus, Ohio; 06/2014
    • "Real-time sub-pixel accurate scene flow algorithms, such as the one presented in Rabe et al. (2007), provide only sparse results both for the disparity and the displacement estimates. The only real-time scene flow algorithm presented in the literature so far is the disparity flow algorithm in Gong (2009), which is an extension of Gong and Yang (2006). This method is a discrete, combinatorial method and requires, a priori, the allowed range (and discretisation) of values."
    ABSTRACT: Building upon recent developments in optical flow and stereo matching estimation, we propose a variational framework for the estimation of stereoscopic scene flow, i.e., the motion of points in the three-dimensional world from stereo image sequences. The proposed algorithm takes into account image pairs from two consecutive times and computes both depth and a 3D motion vector associated with each point in the image. In contrast to previous works, we partially decouple the depth estimation from the motion estimation, which has many practical advantages. The variational formulation is quite flexible and can handle both sparse or dense disparity maps. The proposed method is very efficient; with the depth map being computed on an FPGA, and the scene flow computed on the GPU, the proposed algorithm runs at frame rates of 20 frames per second on QVGA images (320×240 pixels). Furthermore, we present solutions to two important problems in scene flow estimation: violations of intensity consistency between input images, and the uncertainty measures for the scene flow result.
    International Journal of Computer Vision 10/2011; 95(1):29-51. DOI:10.1007/s11263-010-0404-0 · 3.81 Impact Factor
    • "Scene flow was introduced in [16] as a dense 3D motion field. It can be estimated with: (1) variational methods [1] [6] [13], which are usually well suited for simple scenes with a dominant surface; (2) discrete MRF formulations [10] [7], which involve expensive discrete optimization, and (3) local methods finding the correspondences greedily, which are efficient [5] but not so accurate. "
    ABSTRACT: A simple seed growing algorithm for estimating scene flow in a stereo setup is presented. Two calibrated and synchronized cameras observe a scene and output a sequence of image pairs. The algorithm simultaneously computes a disparity map between the image pairs and optical flow maps between consecutive images. This, together with calibration data, is an equivalent representation of the 3D scene flow, i.e. a 3D velocity vector is associated with each reconstructed point. The proposed method starts from correspondence seeds and propagates these correspondences to their neighborhood. It is accurate for complex scenes with large motions and produces temporally-coherent stereo disparity and optical flow results. The algorithm is fast due to inherent search space reduction. An explicit comparison with recent methods of spatiotemporal stereo and variational optical and scene flow is provided.
    The 24th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2011, Colorado Springs, CO, USA, 20-25 June 2011; 06/2011
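The scan-line cost accumulation that the first citing abstract builds on (and that underlies the dynamic-programming variant of the paper itself) can be sketched for a single left-to-right pass along one image row. This is a hedged illustration of the general SGM-style recurrence, not any of the cited implementations; the function name, penalty values `p1`/`p2`, and the tiny cost volume in the usage example are all my own assumptions.

```python
import numpy as np

def scanline_aggregate(cost, p1=1.0, p2=4.0):
    """One left-to-right scan-line pass of SGM-style cost accumulation.

    cost : array of shape (width, ndisp), per-pixel matching costs along
           one image row.
    p1   : small penalty for a +-1 disparity change between neighbours.
    p2   : larger penalty for any bigger disparity jump.
    Returns accumulated costs of the same shape.
    """
    w, nd = cost.shape
    agg = np.empty_like(cost)
    agg[0] = cost[0]
    for x in range(1, w):
        prev = agg[x - 1]
        best_prev = prev.min()
        # candidate transitions from the previous pixel:
        same = prev                                      # same disparity
        up = np.concatenate(([np.inf], prev[:-1])) + p1  # disparity - 1
        down = np.concatenate((prev[1:], [np.inf])) + p1 # disparity + 1
        jump = np.full(nd, best_prev + p2)               # any larger jump
        trans = np.minimum.reduce([same, up, down, jump])
        # subtracting best_prev keeps the values from growing unboundedly
        agg[x] = cost[x] + trans - best_prev
    return agg
```

A full semi-global scheme runs this recurrence along several scan-line directions and sums the results before the final per-pixel disparity selection.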