Article

Localization for Multirobot Formations in Indoor Environment

Joint Adv. Res. Center, City Univ. of Hong Kong, Kowloon, China
IEEE/ASME Transactions on Mechatronics (Impact Factor: 3.65). 09/2010; DOI: 10.1109/TMECH.2009.2030584
Source: IEEE Xplore

ABSTRACT: Localization is a key issue in multirobot formations, but it has not yet been sufficiently studied. In this paper, we propose a ceiling vision-based simultaneous localization and mapping (SLAM) methodology for solving the global localization problem in multirobot formations. First, an efficient data-association method is developed to reach an optimistic feature-match hypothesis quickly and accurately. Then, the relative poses among the robots are calculated using a match-based approach, for local localization. To achieve global localization, three strategies are proposed. The first is to globally localize only one robot (i.e., the leader) and then localize the others from the relative poses among the robots. The second is for each robot to globally localize itself by running SLAM individually. The third is to use a common SLAM server, which may be installed on one of the robots, to globally localize all the robots simultaneously from a shared global map. Finally, experiments are performed on a group of mobile robots to demonstrate the effectiveness of the proposed approaches.
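The first strategy amounts to composing the leader's globally estimated pose with the match-based relative poses of the other robots. The Python sketch below illustrates only that composition step in SE(2); the function name and the pose values are hypothetical and are not taken from the paper.

```python
import math

def compose_se2(pose_a, pose_ab):
    """Compose a global pose with a relative pose, both given as (x, y, theta) in SE(2).

    pose_a  -- global pose of robot A (e.g., the leader) in the world frame
    pose_ab -- pose of robot B expressed in robot A's frame (the match-based relative pose)
    Returns the global pose of robot B in the world frame.
    """
    xa, ya, ta = pose_a
    xab, yab, tab = pose_ab
    xb = xa + math.cos(ta) * xab - math.sin(ta) * yab
    yb = ya + math.sin(ta) * xab + math.cos(ta) * yab
    tb = (ta + tab + math.pi) % (2.0 * math.pi) - math.pi  # wrap heading to [-pi, pi)
    return (xb, yb, tb)

# Hypothetical values: the leader is globally localized by ceiling-vision SLAM,
# and a follower is placed in the global frame using only its relative pose.
leader_global = (2.0, 1.5, math.radians(30.0))
follower_relative = (-0.8, 0.4, math.radians(-10.0))
print(compose_se2(leader_global, follower_relative))
```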

  • ABSTRACT: Localization is a crucial ability for autonomous robots, and landmark-based localization is effective because it requires only landmark information. Omnidirectional vision systems are commonly used to obtain information about the surrounding environment, but they are expensive and suffer from distortion. A fish-eye lens vision system is an alternative; compared with an omnidirectional system, however, it captures fewer landmarks at a time and therefore needs a localization algorithm that works with less landmark information. To address this, this paper proposes a novel landmark-based particle localization algorithm for the global localization problem (relocation) that can localize the robot using only two landmarks. Bearing-angle and distance information for the landmarks is used to compute the area in which the robot may be located; particles, each representing a robot pose, are then randomly distributed in that area, and the robot's pose is identified as the particle with the highest importance weight. Computer simulations and experiments demonstrate the effectiveness of the proposed algorithm.
    (A brief illustrative sketch of the relocation step is given after this list.)
    IEEE/ASME Transactions on Mechatronics 12/2013; 18(6):1745-1756. · 3.65 Impact Factor
  • ABSTRACT: In some situations the landmarks used in simultaneous localization and mapping (SLAM) have their own classes, and this is usually the case in ceiling-view (CV)-based navigation: home and office ceilings carry circular landmarks such as lamps, speakers, fire alarms, and smoke alarms, yet to our knowledge these classes have not been fully exploited in SLAM data association. This paper proposes a new SLAM method that exploits landmark classes and applies it to ceiling-view-based SLAM (cvSLAM). The fact that landmark classification is not always correct is also taken into account and formulated within the FastSLAM framework. Finally, simulations and experiments demonstrate the validity of the proposed method.
    (An illustrative sketch of class-aware data association follows after this list.)
    Advanced Robotics 10/2013; 27(14):1073-1086. · 0.56 Impact Factor
  • ABSTRACT: Accurate localization is a fundamental requirement of navigation systems that guide unmanned ground vehicles in a given environment, and vision-based systems are currently a very suitable option for some indoor applications. This paper presents a novel distributed FPGA-based embedded image-processing system for fast, accurate, simultaneous estimation of the position and orientation of remotely controlled vehicles in indoor spaces. It is built on a network of distributed image-processing nodes that minimize the amount of data transmitted over the communication network, improving dynamic response and providing a simple, flexible, low-cost, and very efficient solution. The system works properly under variable or nonhomogeneous illumination, which simplifies deployment. Experimental results in a real scenario demonstrate that the system clearly outperforms existing solutions of similar complexity; only much more complex and expensive systems achieve comparable performance.
    (A toy illustration of the per-node data reduction follows after this list.)
    IEEE Transactions on Industrial Informatics 05/2014; 10(2):1033-1043. · 8.79 Impact Factor
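For the two-landmark relocation algorithm summarized in the first entry above, the sketch below shows one plausible reading of the described steps: intersect the two range circles to obtain candidate positions, fix the heading from a bearing, scatter particles around the candidates, and keep the particle with the highest importance weight. The geometry, noise parameters, and weighting are assumptions for illustration, not the authors' implementation.

```python
import math
import random

def wrap(angle):
    """Wrap an angle to [-pi, pi)."""
    return (angle + math.pi) % (2.0 * math.pi) - math.pi

def circle_intersections(p1, r1, p2, r2):
    """Intersection points of two range circles; the robot must lie on one of them."""
    (x1, y1), (x2, y2) = p1, p2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0.0 or d > r1 + r2 or d < abs(r1 - r2):
        return []
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2.0 * d)
    h = math.sqrt(max(r1 ** 2 - a ** 2, 0.0))
    xm, ym = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    return [(xm + h * (y2 - y1) / d, ym - h * (x2 - x1) / d),
            (xm - h * (y2 - y1) / d, ym + h * (x2 - x1) / d)]

def relocate(lm1, d1, b1, lm2, d2, b2, n_particles=200, spread=0.15):
    """Two-landmark relocation: scatter particles near the feasible positions, keep the best.

    lm1, lm2 -- known landmark positions; d*, b* -- measured range and bearing (robot frame).
    """
    best, best_w = None, -1.0
    for (cx, cy) in circle_intersections(lm1, d1, lm2, d2):
        heading = wrap(math.atan2(lm1[1] - cy, lm1[0] - cx) - b1)  # bearing fixes orientation
        for _ in range(n_particles):
            x = cx + random.gauss(0.0, spread)
            y = cy + random.gauss(0.0, spread)
            th = wrap(heading + random.gauss(0.0, 0.05))
            # Importance weight: agreement with the second landmark's range and bearing.
            err = abs(math.hypot(lm2[0] - x, lm2[1] - y) - d2) \
                + abs(wrap(math.atan2(lm2[1] - y, lm2[0] - x) - th - b2))
            w = math.exp(-err)
            if w > best_w:
                best, best_w = (x, y, th), w
    return best
```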
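For the class-aware cvSLAM data association described in the second entry, a minimal sketch of the general idea follows: the per-landmark association likelihood is scaled by a class-confusion probability, so misclassification is tolerated rather than assumed away. The confusion table and the scalar innovation model are made up for illustration; the paper's FastSLAM formulation is not reproduced here.

```python
import math

# Hypothetical class-confusion model: P(observed class | true landmark class).
# The classes and probabilities below are invented; the point is that classification
# is treated as uncertain rather than assumed correct.
CONFUSION = {
    ("lamp", "lamp"): 0.85, ("lamp", "speaker"): 0.10, ("lamp", "alarm"): 0.05,
    ("speaker", "lamp"): 0.10, ("speaker", "speaker"): 0.80, ("speaker", "alarm"): 0.10,
    ("alarm", "lamp"): 0.05, ("alarm", "speaker"): 0.10, ("alarm", "alarm"): 0.85,
}

def association_likelihood(innovation, innovation_var, obs_class, map_class):
    """Geometric match likelihood scaled by the class-confusion term."""
    geometric = math.exp(-0.5 * innovation ** 2 / innovation_var) \
        / math.sqrt(2.0 * math.pi * innovation_var)
    return geometric * CONFUSION.get((obs_class, map_class), 0.01)

def associate(observation, candidates):
    """Pick the mapped landmark that maximizes the combined likelihood."""
    return max(candidates,
               key=lambda c: association_likelihood(c["innovation"], c["innovation_var"],
                                                    observation["class"], c["class"]))
```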
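Finally, the third entry's data-reduction argument (each node processes images locally and transmits only the detection result) can be illustrated with a toy publisher. The message fields, host, and port are hypothetical, and the real system runs on FPGA nodes rather than Python; the sketch only shows how little data per frame needs to cross the network.

```python
import json
import socket

def publish_detection(node_id, centroid_px, heading_rad, host="192.168.0.10", port=5005):
    """Send only the per-frame detection result (a few dozen bytes), not the raw image.

    host/port and the message fields are hypothetical; this merely illustrates the
    bandwidth saving of processing images at the node and transmitting results only.
    """
    msg = json.dumps({"node": node_id,
                      "u": centroid_px[0], "v": centroid_px[1],
                      "theta": heading_rad}).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(msg, (host, port))

# Example: node 3 reports a marker centroid at pixel (412, 228) with heading 0.42 rad.
publish_detection(3, (412, 228), 0.42)
```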