Localization for Multirobot Formations in Indoor Environment

Joint Adv. Res. Center, City Univ. of Hong Kong, Kowloon, China
IEEE/ASME Transactions on Mechatronics (Impact Factor: 3.43). 09/2010; 15(4):561-574. DOI: 10.1109/TMECH.2009.2030584
Source: IEEE Xplore


Localization is a key issue in multirobot formations, but it has not yet been sufficiently studied. In this paper, we propose a ceiling-vision-based simultaneous localization and mapping (SLAM) methodology for solving the global localization problem in multirobot formations. First, an efficient data-association method is developed to reach an optimistic feature-match hypothesis quickly and accurately. Then, the relative poses among the robots are calculated using a match-based approach for local localization. To achieve global localization, three strategies are proposed. The first is to globally localize only one robot (i.e., the leader) and then localize the others from the relative poses among the robots. The second is for each robot to globally localize itself by running SLAM individually. The third is to use a common SLAM server, which may be installed on one of the robots, to globally localize all the robots simultaneously based on a shared global map. Finally, experiments are performed on a group of mobile robots to demonstrate the effectiveness of the proposed approaches.
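All three strategies ultimately reduce to composing poses across reference frames. As a minimal illustration of the first (leader-based) strategy, the Python sketch below places followers in the global frame by composing the leader's SLAM-estimated global pose with measured leader-to-follower relative poses. The 2-D (x, y, θ) representation, the `compose_pose` helper, and all numbers are illustrative assumptions, not the paper's implementation.

```python
import math

def compose_pose(a, b):
    """SE(2) pose composition a ⊕ b: pose b, expressed in a's frame,
    returned in the global frame. Poses are (x, y, theta) tuples."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            (at + bt + math.pi) % (2 * math.pi) - math.pi)  # wrap to [-pi, pi)

# Strategy 1: only the leader runs ceiling-vision SLAM; followers are
# globally localized by composing its pose with their relative poses.
leader_global = (2.0, 1.5, 0.3)        # hypothetical SLAM estimate
relative_poses = [(-0.5, -0.8, 0.0),   # follower 1, measured in leader frame
                  (-0.5,  0.8, 0.0)]   # follower 2, measured in leader frame
followers = [compose_pose(leader_global, r) for r in relative_poses]
print(followers)
```

Note one consequence that follows directly from the composition: any error in the leader's global pose or in a relative measurement propagates into the corresponding follower's global estimate.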

  • "The use of onboard visual systems is an interesting alternative to simultaneously estimate position and orientation. In such solutions [8]-[10], onboard cameras are used to localize either markers placed at predefined positions in the target space or features of this space (e.g., elements of the illumination system or structural components), and the UGV's pose is determined accordingly."
    ABSTRACT: The ability to perform accurate localization is a fundamental requirement of navigation systems intended to guide unmanned ground vehicles in a given environment. Currently, vision-based systems are a very suitable alternative for some indoor applications. This paper presents a novel distributed FPGA-based embedded image processing system for accurate and fast simultaneous estimation of the position and orientation of remotely controlled vehicles in indoor spaces. It is based on a network of distributed image processing nodes, which minimize the amount of data transmitted over the communication network and hence improve dynamic response, providing a simple, flexible, low-cost, and very efficient solution. The proposed system works properly under variable or nonhomogeneous illumination conditions, which simplifies deployment. Experimental results on a real scenario are presented and discussed. They demonstrate that the system clearly outperforms existing solutions of similar complexity; only much more complex and expensive systems achieve similar performance.
    IEEE Transactions on Industrial Informatics (Impact Factor: 8.79). 05/2014; 10(2):1033-1043. DOI: 10.1109/TII.2013.2294112
  • ABSTRACT: Networked mobile robots can determine their poses (i.e., position and orientation) with the help of a well-configured environment with distributed sensors. Before the distributed sensors can localize a mobile robot, however, the environment must hold prior knowledge about that robot; without it, the robot's current pose cannot be determined. To remove this restriction, as a preprocessing step for indoor localization, we propose motion-based identification of multiple mobile robots using trajectory analysis. The proposed system identifies the robots by relating their identities to their positions, which are estimated from the trajectories the robots trace while following designated paths that serve as identifying signs. The primary feature of the proposed system is that networked mobile robots can quickly and simultaneously determine their poses in well-configured environments. Experimental results show that our proposed system simultaneously identifies multiple mobile robots and approximately estimates each of their poses as an initial state for autonomous localization. (A sketch of this trajectory-matching step appears after this list.)
    International Journal of Control, Automation and Systems (Impact Factor: 0.95). 08/2012; 10(4). DOI: 10.1007/s12555-012-0415-4
  • ABSTRACT: In this paper, a motion estimation approach is introduced for a vision-aided inertial navigation system. The system consists of a ground-facing monocular camera mounted on an inertial measurement unit (IMU) to form an IMU-camera sensor fusion system. The motion estimation procedure fuses inertial data from the IMU with planar features on the ground captured by the camera. The main contribution of this paper is a novel closed-form measurement model based on the image data and the IMU output signals. In contrast to existing methods, the algorithm is independent of the underlying vision algorithm used for image motion estimation, such as optical-flow-based camera motion estimation. The algorithm is implemented using an unscented Kalman filter that propagates both the current state and the state updated at the previous measurement instant. The validity of the proposed navigation method is evaluated both in simulation studies and in real experiments. (A simplified predict/update sketch of this kind of fusion filter appears after this list.)
    IEEE/ASME Transactions on Mechatronics (Impact Factor: 3.43). 08/2014; 19(4):1-10. DOI: 10.1109/TMECH.2013.2276404
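The trajectory-based identification described in the second entry above can be pictured as a nearest-template assignment: each robot drives a designated path, and an anonymous track is labeled with the robot whose path it most resembles. The sketch below is a deliberately simple stand-in using a mean point-to-point distance over equal-length 2-D paths; the function names, the distance score, and the data are assumptions, not the paper's algorithm.

```python
import math

def trajectory_distance(observed, template):
    """Mean point-to-point distance between two equal-length 2-D paths."""
    return sum(math.dist(p, q) for p, q in zip(observed, template)) / len(template)

def identify_robots(observed_trajectories, designated_paths):
    """Assign each observed track the identity of its closest designated path.
    observed_trajectories: {track_id: [(x, y), ...]}
    designated_paths:      {robot_id: [(x, y), ...]}"""
    identities = {}
    for track_id, traj in observed_trajectories.items():
        identities[track_id] = min(
            designated_paths,
            key=lambda rid: trajectory_distance(traj, designated_paths[rid]))
    return identities

# Two anonymous tracks; robot A was told to drive straight, robot B to climb.
paths = {"A": [(0, 0), (1, 0), (2, 0)], "B": [(0, 0), (1, 1), (2, 2)]}
tracks = {1: [(0.1, 0.0), (1.0, 0.1), (2.1, -0.1)],
          2: [(0.0, 0.1), (0.9, 1.0), (2.0, 2.1)]}
print(identify_robots(tracks, paths))  # -> {1: 'A', 2: 'B'}
```

Once a track is identified, its latest estimated pose can seed that robot's autonomous localization, which is the preprocessing role the entry describes.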
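The IMU-camera fusion in the third entry above runs an unscented Kalman filter over a closed-form measurement model. As a much-simplified stand-in (a plain linear Kalman filter on a 1-D constant-velocity state, not the paper's nonlinear 6-DOF UKF), the sketch below shows the structure such a fusion loop shares: high-rate inertial prediction interleaved with lower-rate camera position updates. Every matrix, rate, and value here is an illustrative assumption.

```python
import numpy as np

class SimpleFusionKF:
    """Toy linear Kalman filter: state [position, velocity], IMU acceleration
    as control input, camera-derived position as the measurement."""
    def __init__(self, dt=0.01):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
        self.B = np.array([[0.5 * dt**2], [dt]])     # IMU acceleration input
        self.H = np.array([[1.0, 0.0]])              # camera measures position
        self.Q = 1e-4 * np.eye(2)                    # process noise
        self.R = np.array([[1e-2]])                  # measurement noise
        self.x = np.zeros((2, 1))                    # state estimate
        self.P = np.eye(2)                           # state covariance

    def predict(self, accel):
        """Propagate the state with an IMU acceleration sample."""
        self.x = self.F @ self.x + self.B * accel
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        """Correct the state with a camera-derived position measurement."""
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P

kf = SimpleFusionKF()
for k in range(100):
    kf.predict(accel=0.1)                            # IMU rate: every step
    if k % 10 == 0:                                  # camera rate: every 10th step
        t = 0.01 * k
        kf.update(np.array([[0.5 * 0.1 * t**2]]))    # true position for a = 0.1
print(kf.x.ravel())                                  # ~[0.05, 0.1] at t = 1 s
```

In the paper's setting the process and measurement models are nonlinear, which is what the unscented transform handles; the predict/update rhythm between inertial and visual data is the same.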