Article

Multiview panoramic cameras using mirror pyramids.

Computer Vision and Robotics Laboratory, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA.
IEEE Transactions on Pattern Analysis and Machine Intelligence (Impact Factor: 5.69). 08/2004; 26(7):941-6. DOI: 10.1109/TPAMI.2004.33
Source: IEEE Xplore

ABSTRACT A mirror pyramid consists of a set of planar mirror faces arranged around an axis of symmetry and inclined to form a pyramid. By strategically positioning a number of conventional cameras around a mirror pyramid, the viewpoints of the cameras' mirror images can be located at a single point within the pyramid and their optical axes pointed in different directions to effectively form a virtual camera with a panoramic field of view. Mirror pyramid-based panoramic cameras have a number of attractive properties, including single-viewpoint imaging, high resolution, and video rate capture. It is also possible to place multiple viewpoints within a single mirror pyramid, yielding compact designs for simultaneous multiview panoramic video rate imaging. Nalwa [4] first described some of the basic ideas behind mirror pyramid cameras. In this paper, we analyze the general class of multiview panoramic cameras, provide a method for designing these cameras, and present experimental results using a prototype we have developed to validate single-pyramid multiview designs. We first give a description of mirror pyramid cameras, including the imaging geometry, and investigate the relationship between the placement of viewpoints within the pyramid and the cameras' field of view (FOV), using simulations to illustrate the concepts. A method for maximizing sensor utilization in a mirror pyramid-based multiview panoramic camera is also presented. Images acquired using the experimental prototype for two viewpoints are shown.
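The single-viewpoint property described in the abstract rests on a basic geometric fact: a planar mirror face maps a camera's optical centre to its mirror image, and that mirror image acts as the camera's virtual viewpoint. A minimal sketch of the reflection, assuming a plane given by a normal and offset (the function name and example values are illustrative, not from the paper):

```python
import numpy as np

def reflect_point(p, n, d):
    """Mirror image of point p across the plane {x : n.x = d}.

    For a planar mirror face, the virtual viewpoint of a camera with
    optical centre p is exactly this reflection of p.
    """
    p = np.asarray(p, dtype=float)
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)  # normalize to a unit normal
    return p - 2.0 * (np.dot(n, p) - d) * n

# A camera 1 unit above the plane z = 0 has its virtual viewpoint
# 1 unit below the plane:
print(reflect_point([0.0, 0.0, 1.0], [0.0, 0.0, 1.0], 0.0))  # [ 0.  0. -1.]
```

Placing several cameras so that all of their reflected centres coincide at one point inside the pyramid is what yields the single effective panoramic viewpoint.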

  • Source
    • "Omnidirectional stereo is a suitable sensing method for such tasks because it can acquire images and ranges of surrounding areas simultaneously. For omnidirectional stereo vision, an obvious method is to use two (or more) cameras in place of each conventional camera (K. Tan et al., 2004; J. Gluckman et al., 1998; H. Koyasu et al., 2002; A. Jagmohan et al., 2004). Such two-camera (or more-camera) stereo systems are relatively costly and complicated compared to single-camera stereo systems. "
    ABSTRACT: In this paper, we have developed a graph-representable three-variable smoothness model for graph cuts that fits the smoothness assumption for our omnidirectional images taken by a novel vision sensor. We further develop MZNCC as a suitable similarity measure, together with the necessary modifications, including a deformed matching template and an adaptable scale. Experiments demonstrate the effectiveness of our algorithm, based on which the regenerated obstacle map for a mobile robot is finer.
    Motion Planning, 06/2008; ISBN: 978-953-7619-01-5
  • Source
    • "Omnidirectional vision sensors have been constructed in many different ways. Tan et al. [13] use a pyramid of mirrors and point multiple cameras at the pyramid. This configuration offers high resolution and the possibility of a single viewpoint, but it is not isotropic, and the registration and the physical arrangement of the cameras can be difficult. "
    ABSTRACT: This paper presents two related methods for autonomous visual guidance of robots: localization by trilateration, and interframe motion estimation. Both methods use coaxial omnidirectional stereopsis (omnistereo), which returns the range r to objects or guiding points detected in the images. The trilateration method achieves self-localization using r from the three nearest objects at known positions. The interframe motion estimation is more general, being able to use any features in an unknown environment. The guiding points are detected automatically on the basis of their perceptual significance, so they need neither special markings nor placement at known locations. The interframe motion estimation does not require previous motion history, making it well suited for detecting acceleration (in a 20th of a second) and thus supporting dynamic models of robot motion, which will gain in importance when autonomous robots achieve useful speeds. An initial estimate of the robot's rotation ω (the visual compass) is obtained from the angular optic flow in an omnidirectional image. A new noniterative optic flow method has been developed for this purpose. Adding ω to all observed (robot-relative) bearings θ gives true bearings towards objects (relative to a fixed coordinate frame). The rotation ω and the r, θ coordinates obtained at two frames for a single fixed point at an unknown location are sufficient to estimate the translation of the robot. However, a large number of guiding points are typically detected and matched in most real images. Each such point provides a solution for the robot's translation. The solutions are combined by a robust clustering algorithm, Clumat, which reduces rotation and translation errors. Simulator experiments are included for all the presented methods. Real images obtained from a ScitosG5 autonomously moving robot were used to test the interframe rotation and to show that the presented vision methods are applicable to real images in real robotics scenarios.
    Robotics and Autonomous Systems 09/2007; 55(9):667-674. DOI:10.1016/j.robot.2007.05.009 · 1.11 Impact Factor
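The trilateration step in the abstract above (self-localization from ranges to three objects at known positions) can be sketched in the plane with the standard linearization: subtracting the first circle equation from the other two turns the quadratic system into a linear one. This is an illustrative sketch, not the paper's implementation; the function name and example values are assumptions.

```python
import numpy as np

def trilaterate_2d(anchors, ranges):
    """Position from ranges r_i to three known 2-D anchor points.

    Subtracting the first circle equation from the other two
    linearizes (x - x_i)^2 + (y - y_i)^2 = r_i^2 into A [x, y]^T = b.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = ranges
    A = np.array([[2.0 * (x2 - x1), 2.0 * (y2 - y1)],
                  [2.0 * (x3 - x1), 2.0 * (y3 - y1)]])
    b = np.array([r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A, b)

# A robot at (1, 1) measuring ranges to three known anchors:
anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
ranges = [np.sqrt(2.0), np.sqrt(10.0), np.sqrt(5.0)]
print(trilaterate_2d(anchors, ranges))  # [1. 1.]
```

With noisy ranges from more than three anchors, the same linearization extends naturally to a least-squares solve.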
  • Source
    • "Omnidirectional stereo is a suitable sensing method for such tasks because it can acquire images and ranges of surrounding areas simultaneously. For omnidirectional stereo vision, an obvious method is to use two (or more) cameras in place of each conventional camera [2]-[5]. Such two-camera (or more-camera) stereo systems are relatively costly and complicated compared to single-camera stereo systems. "
    ABSTRACT: An integrated framework, focusing mainly on stereo matching, is presented in this paper to obtain dense depth maps for a mobile robot equipped with a novel omnidirectional stereo vision sensor designed to obtain height information. The vision sensor is composed of a common perspective camera and two hyperbolic mirrors, which are separately fixed inside a glass cylinder. As the separation between the two mirrors provides a much enlarged baseline, the precision of the system improves correspondingly. Nevertheless, the large disparity space and image particularities that differ from those of a general stereo vision system result in poor performance with common methods. To satisfy the reliability requirement of mobile robot navigation, we use an improved graph cuts method, in which a more appropriate three-variable smoothness model is proposed for general priors corresponding to a more reasonable piecewise smoothness assumption, since the well-known swap move algorithm can be applied to a wider class of functions. We also show the modifications necessary to handle panoramic images, including a deformed matching template and an adaptable template scale. Experiments show that the proposed vision system is feasible as a practical stereo sensor for accurate depth map generation.
    IEEE 11th International Conference on Computer Vision, ICCV 2007, Rio de Janeiro, Brazil, October 14-20, 2007; 01/2007
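The abstracts above rely on an MZNCC similarity score for stereo matching; the modified variant is not specified here, but the underlying zero-mean normalized cross-correlation (ZNCC) is standard and easy to sketch. The function name and example patches below are illustrative assumptions.

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches.

    The score is invariant to affine photometric changes (gain and
    bias), which is why ZNCC-style measures suit stereo matching
    across views with differing exposure or illumination.
    """
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a = a - a.mean()  # remove the mean (bias invariance)
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)  # normalize (gain invariance)
    return float(np.dot(a, b) / denom) if denom > 0.0 else 0.0

patch = [1.0, 2.0, 3.0, 4.0]
brighter = [2.0 * v + 5.0 for v in patch]  # gain 2, bias 5
print(round(zncc(patch, brighter), 3))  # 1.0
```

In a matching loop, the reference patch is scored against candidate patches along the epipolar curve and the highest ZNCC wins; the deformed templates mentioned above adapt the patch shape to the panoramic image geometry.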