Real-time joint disparity and disparity flow estimation on programmable graphics hardware

Department of Computer Science, Memorial University of Newfoundland, St. John’s, Nfld, Canada A1A 3X5
Computer Vision and Image Understanding 01/2009; DOI: 10.1016/j.cviu.2008.07.007
Source: DBLP

ABSTRACT Disparity flow depicts the 3D motion of a scene in the disparity space of a given view and can be considered view-dependent scene flow. A novel algorithm is presented to compute disparity maps and disparity flow maps in an integrated process. Consequently, the disparity flow maps obtained help to enforce temporal consistency between disparity maps of adjacent frames. The disparity maps found also provide spatial correspondence information that can be used to cross-validate disparity flow maps of different views. Two different optimization approaches are integrated into the presented algorithm for searching for optimal disparity values and disparity flows: the local winner-take-all approach runs faster, whereas the global dynamic-programming-based approach produces better results. All major computations are performed in the image space of the given view, leading to an efficient implementation on programmable graphics hardware. Experimental results on captured stereo sequences demonstrate the algorithm’s capability of estimating both 3D depth and 3D motion in real time. A quantitative performance evaluation using synthetic data with ground truth is also provided.
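To illustrate the faster of the two optimization approaches the abstract mentions, the following is a minimal CPU sketch of local winner-take-all (WTA) disparity estimation: each pixel independently picks the disparity with the lowest matching cost. The function name, the per-pixel absolute-difference cost, and the NumPy formulation are illustrative assumptions; the paper's actual implementation runs on programmable graphics hardware and jointly estimates disparity flow as well.

```python
import numpy as np

def wta_disparity(left, right, max_disp):
    """Illustrative winner-take-all stereo matching (not the paper's GPU code).
    left/right: 2D grayscale arrays of equal shape. For each left-image pixel
    (y, x), returns the disparity d minimizing |left(y, x) - right(y, x - d)|."""
    h, w = left.shape
    cost = np.full((max_disp, h, w), np.inf, dtype=np.float32)
    for d in range(max_disp):
        # Matching cost of left pixel (y, x) against right pixel (y, x - d);
        # columns x < d have no valid match and keep infinite cost.
        cost[d, :, d:] = np.abs(left[:, d:].astype(np.float32) -
                                right[:, :w - d].astype(np.float32))
    # Winner-take-all: per pixel, take the disparity with the lowest cost.
    return np.argmin(cost, axis=0)
```

In practice the per-pixel cost would be aggregated over a support window (or, in the paper's global variant, optimized along scanlines with dynamic programming) to suppress ambiguous matches.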

  • ABSTRACT: This paper presents an effective 3D digitization technique to reconstruct an accurate and reliable 3D environment model from multi-view stereo for an environment-learning mobile robot. The novelty of the paper lies in introducing non-rigid motion analysis into the stereo reconstruction routine. In the proposed scheme, the reconstruction task is decoupled into two stages. First, the range depth of feature points is recovered and used to build a polygonal mesh; second, projection feedback on comparison views, generated under the assumption of the established coarse mesh model, is introduced to deform the primitive mesh model and thereby improve its quality dramatically. The discrepancy between the observations on the comparison views and the corresponding predictive feedback is quantitatively evaluated by an optical flow field and subsequently used to derive the corresponding scene flow vector field, which in turn drives the surface deformation. Because optical flow estimation outperforms traditional dense disparity, owing to its inherent robustness to illumination change and its globally optimized and smoothed solution, the deformed surface is improved in accuracy, as validated by experimental results.
    Advanced Robotics 01/2012; · 0.51 Impact Factor
  • ABSTRACT: This paper describes a trinocular stereo vision system, called the RGBD imager, that uses a single FPGA chip to generate a composite color (RGB) and disparity data stream at video rate. The system uses a triangular configuration of three cameras for synchronous image capture and a trinocular adaptive cooperative algorithm based on local aggregation for smooth and accurate dense disparity mapping. A fine-grained parallel and pipelined FPGA architecture is designed to achieve high computational throughput in real time. A binary floating-point format is customized for data representation to satisfy the wide data range and high precision demands of the disparity calculation. Memory management and data bit-width control are applied to reduce hardware resource consumption and accelerate processing. The system produces dense disparity maps of 320 × 240 pixels over a disparity search range of 64 pixels at 30 frames per second. Keywords: RGBD imager, trinocular stereo vision, cooperative algorithm, FPGA
    Machine Vision and Applications 01/2012; 23(3):513-525. · 1.10 Impact Factor
  • ABSTRACT: This paper surveys the state of the art in evaluating the performance of scene flow estimation and points out the difficulties in generating benchmarks with ground truth, which have so far prevented the development of general, reliable solutions. The renewed interest in dynamic 3D content, which has led to increased research in this area, will hopefully also lead to more rigorous evaluation and more effective algorithms. We begin by classifying methods that estimate depth, motion, or both from multi-view sequences according to their parameterization of shape and motion. We then present several criteria for their evaluation, discuss their strengths and weaknesses, and conclude with recommendations.
    Proceedings of the 12th International Conference on Computer Vision - Volume 2; 10/2012