Article

Object-Centered Spectral Matching for Efficient Environmental Information Fusion in Multiple Small Robot Systems

Conference Paper
Full-text available
Extremely variant image pairs include distorted, deteriorated, and corrupted scenes that have experienced severe geometric, photometric, or non-geometric-non-photometric transformations with respect to their originals. Real-world visual data can become extremely dusty, smoky, dark, noisy, motion-blurred, affine, JPEG-compressed, occluded, shadowed, virtually invisible, etc. Matching of extremely variant scenes is therefore an important problem, and computer vision solutions must be able to yield robust results no matter how complex the visual input is. Similarly, there is a need to evaluate feature detectors under such complex conditions. With standard settings, feature detection, description, and matching algorithms typically fail to produce a significant number of correct matches in these types of images. However, if the full potential of the algorithms is exploited by using extremely low thresholds, very encouraging results are obtained. In this paper, the potential of 14 feature detectors: SIFT, SURF, KAZE, AKAZE, ORB, BRISK, AGAST, FAST, MSER, MSD, GFTT, Harris Corner Detector based GFTT, Harris Laplace Detector, and CenSurE has been evaluated for matching 10 extremely variant image pairs. MSD detected more than 1 million keypoints in one of the images, and SIFT exhibited a repeatability score of 99.76% for the extremely noisy image pair but failed to yield a high quantity of correct matches. Rich information is presented in terms of feature quantity, total feature matches, correct matches, and repeatability scores. Moreover, computational costs of 25 diverse feature detectors are reported towards the end, which can be used as a benchmark for comparison studies.
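The repeatability score reported above can be sketched in a few lines. The sketch below uses a generic definition (warp the first image's keypoints by the ground-truth homography and count how many land near a keypoint detected in the second image); it is not the exact protocol of the paper, and the function name `repeatability` is illustrative.

```python
import numpy as np

def repeatability(kps_a, kps_b, H, eps=2.0):
    """Fraction of keypoints from image A that, after warping by the
    ground-truth homography H, land within eps pixels of some keypoint
    detected in image B (a common repeatability definition)."""
    pts = np.hstack([kps_a, np.ones((len(kps_a), 1))])   # to homogeneous
    warped = (H @ pts.T).T
    warped = warped[:, :2] / warped[:, 2:3]              # back to Cartesian
    # distance of every warped A-keypoint to every B-keypoint
    d = np.linalg.norm(warped[:, None, :] - kps_b[None, :, :], axis=2)
    return np.mean(d.min(axis=1) <= eps)

# Identity homography: every keypoint should be "repeated" exactly.
kps = np.array([[10.0, 20.0], [30.0, 40.0], [50.0, 60.0]])
score = repeatability(kps, kps, np.eye(3))
print(score)  # 1.0
```

With real detectors the two keypoint sets would come from the detector under test on each image of the pair.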
Article
Full-text available
This paper deals with the problem of grid map merging in multi-robot SLAM (simultaneous localization and mapping) where the initial relative pose between robots is unknown. When robots encounter each other, it is easy to obtain a map transformation between robots for grid map merging if bilateral observation measurements are available between robots. However, since bilateral observation measurements are obtained through encounters between robots, they may limit the availability of using multi-robot systems. To overcome this limitation, spectra-based map merging can be applied without any observation measurements between robots. However, it requires sufficient overlapping areas between the individual maps of the robots, which can also limit the availability of using multi-robot systems. In this paper, therefore, to overcome both limitations, an extension of spectra-based map merging is proposed that uses unilateral rather than bilateral observation measurements. The proposed method was tested with datasets obtained from real experiments with mobile robots equipped with a sensor fusion system that can obtain unilateral observation measurements of other robots. Experimental results showed that the proposed map merging method works successfully without any bilateral observation measurements.
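The unilateral-observation idea can be illustrated with a small SE(2) sketch of a simplified setting: robot A measures robot B's pose in A's map frame while B reports its own pose in its own map, and composing the two yields the map-B-to-map-A transformation. The helper names (`se2`, `map_transform`) are illustrative, not from the paper.

```python
import numpy as np

def se2(x, y, th):
    """Homogeneous 3x3 matrix for a 2D pose (x, y, heading th)."""
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def map_transform(obs_b_in_a, pose_b_in_b):
    """Map-B -> map-A transform from a single unilateral observation:
    robot A measures robot B's pose in frame A (obs_b_in_a), while B
    reports its own pose in its map B (pose_b_in_b).  Then
    T_map = T_obs @ inv(T_pose)."""
    return se2(*obs_b_in_a) @ np.linalg.inv(se2(*pose_b_in_b))

# If B sits at its own map origin, the map transform is just the
# observed pose of B in A's frame.
T = map_transform((2.0, 1.0, np.pi / 2), (0.0, 0.0, 0.0))
print(np.allclose(T, se2(2.0, 1.0, np.pi / 2)))  # True
```

A single such measurement fixes the rigid map transformation, which is why no bilateral rendezvous is needed.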
Article
Full-text available
For map building of an unknown indoor environment, multi-robot collaborative mapping is more efficient than single-robot mapping. Map merging is one of the fundamental problems in multi-robot collaborative mapping. However, in the process of grid map merging, image processing methods such as feature matching, used as a basic method, are challenged by a low feature matching rate. Driven by this challenge, a novel map merging method is proposed based on a suppositional box constructed from right-angled points and vertical lines. The paper first extracts the right-angled points of the suppositional box, selected from the vertical points at the intersections of the vertical lines. Secondly, based on the common edge characteristics between the right-angled points, the suppositional box in the map is constructed. Then the transformation matrix is obtained according to the matching pairs of suppositional boxes. Finally, to handle matching errors based on the lengths of the pairs, a Kalman filter is used to optimize the transformation matrix. Experimental results show that this method can effectively merge maps of different scenes and that its successful matching rate is greater than that of other features.
Article
Full-text available
The subject of research is image normalization based on key point analysis. The purpose is the development of mathematical models and their software implementation for normalization of image geometric transformations based on the analysis of SIFT, SURF, ORB, BRISK, KAZE, and AKAZE descriptors; the application of the models for comparative analysis of the descriptors based on expert assessments of normalization quality, time costs, and other indicators; the construction and use in experiments of an own dataset with 100 real image pairs containing scenes of five types: buildings, plane images outside, plane images inside, natural textures, and artificial textures; and conclusions about the performance of the considered descriptors in solving the normalization problem. The following methods are applied: SIFT, SURF, ORB, BRISK, KAZE, and AKAZE descriptors for describing key points; the Nearest Neighbor Distance Ratio method or a symmetric method for finding corresponding pairs of key points from different images; the RANSAC method for rejecting false correspondences and obtaining a homography matrix; similarity measures; and software modeling. The results obtained: experimental normalization results for the SIFT, SURF, ORB, BRISK, KAZE, and AKAZE descriptors on 100 real pairs of the own dataset (normalized images, their overlaps, quantitative descriptor evaluation, precision and recall estimation, time cost estimation, expert quality assessment, and conversion of all indicator values to an 8-point rating scale); summary diagrams and conclusions about the advantages and weaknesses of the compared descriptors; and recommendations for selecting the most suitable algorithm for the normalization problem in specific cases.
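The Nearest Neighbor Distance Ratio test mentioned above is the standard Lowe-style ratio test. A minimal numpy sketch, with toy 2-D descriptors and illustrative function names, is:

```python
import numpy as np

def nndr_match(desc_a, desc_b, ratio=0.8):
    """Lowe-style Nearest-Neighbor-Distance-Ratio matching: accept a
    match only if the best neighbor is clearly closer than the second
    best.  Returns (index_in_a, index_in_b) pairs."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    order = np.argsort(d, axis=1)
    best, second = order[:, 0], order[:, 1]
    rows = np.arange(len(d))
    keep = d[rows, best] < ratio * d[rows, second]
    return [(int(i), int(best[i])) for i in np.flatnonzero(keep)]

# Two clean correspondences plus one distractor descriptor in B.
desc_a = np.array([[0.0, 0.0], [5.0, 5.0]])
desc_b = np.array([[0.1, 0.0], [5.0, 4.9], [9.0, 9.0]])
print(nndr_match(desc_a, desc_b))  # [(0, 0), (1, 1)]
```

Real SIFT/SURF descriptors are 64- or 128-dimensional, but the ratio test is identical.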
Conference Paper
Full-text available
Image registration is the process of matching, aligning and overlaying two or more images of a scene, which are captured from different viewpoints. It is extensively used in numerous vision based applications. Image registration has five main stages: Feature Detection and Description; Feature Matching; Outlier Rejection; Derivation of Transformation Function; and Image Reconstruction. Timing and accuracy of feature-based Image Registration mainly depend on computational efficiency and robustness of the selected feature-detector-descriptor, respectively. Therefore, the choice of feature-detector-descriptor is a critical decision in feature-matching applications. This article presents a comprehensive comparison of SIFT, SURF, KAZE, AKAZE, ORB, and BRISK algorithms. It also elucidates a critical dilemma: Which algorithm is more invariant to scale, rotation and viewpoint changes? To investigate this problem, image matching has been performed with these features to match the scaled versions (5% to 500%), rotated versions (0° to 360°), and perspective-transformed versions of standard images with the original ones. Experiments have been conducted on diverse images taken from benchmark datasets: University of OXFORD, MATLAB, VLFeat, and OpenCV. Nearest-Neighbor-Distance-Ratio has been used as the feature-matching strategy while RANSAC has been applied for rejecting outliers and fitting the transformation models. Results are presented in terms of quantitative comparison, feature-detection-description time, feature-matching time, time of outlier-rejection and model fitting, repeatability, and error in recovered results as compared to the ground-truths. SIFT and BRISK are found to be the most accurate algorithms while ORB and BRISK are most efficient. The article comprises rich information that will be very useful for making important decisions in vision based applications, and the main aim of this work is to set a benchmark for researchers, regardless of any particular area.
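The RANSAC outlier-rejection stage used above can be sketched in miniature. The toy below fits only a pure 2-D translation (the study fits full transformation models such as homographies); the function name and parameters are illustrative.

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=1.0, seed=0):
    """Toy RANSAC: fit a pure 2D translation between matched point
    sets, keeping the hypothesis with the most inliers."""
    rng = np.random.default_rng(seed)
    best_t, best_inliers = None, -1
    for _ in range(iters):
        i = rng.integers(len(src))          # minimal sample: one match
        t = dst[i] - src[i]                 # candidate translation
        inliers = np.sum(np.linalg.norm(src + t - dst, axis=1) < tol)
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t, best_inliers

src = np.array([[0.0, 0], [1, 0], [0, 1], [1, 1], [5, 5]])
dst = src + np.array([2.0, 3.0])
dst[4] = [-40.0, 7.0]                       # one gross outlier
t, n = ransac_translation(src, dst)
print(t, n)  # t ≈ [2, 3] with 4 inliers
```

The same hypothesize-score-keep loop applies unchanged to affine or homography models; only the minimal sample size and the fitting step grow.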
Article
Full-text available
In 1962 Hough earned the patent for a method [1], popularly called the Hough Transform (HT), that efficiently identifies lines in images. It remains an important tool even after its golden jubilee year, as evidenced by more than 2500 research papers dealing with its variants, generalizations, properties and applications in diverse fields. The current paper is a survey of HT and its variants, their limitations and the modifications made to overcome them, the implementation issues in software and hardware, and applications in various fields. Our survey, along with more than 200 references, will help researchers and students to get a comprehensive view on HT and guide them in applying it properly to their problems of interest.
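The core voting scheme of the Hough Transform can be sketched as follows, assuming a simple point-set input rather than an edge image; the discretization parameters are arbitrary choices for illustration.

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=100, rho_max=50.0):
    """Classical Hough Transform: each point votes for all lines
    rho = x*cos(theta) + y*sin(theta) passing through it; peaks in
    the accumulator correspond to lines supported by many points."""
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        bins = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        ok = (bins >= 0) & (bins < n_rho)
        acc[np.flatnonzero(ok), bins[ok]] += 1
    return acc, thetas

# Ten collinear points on the vertical line x = 10.
pts = [(10.0, float(y)) for y in range(10)]
acc, thetas = hough_lines(pts)
t, r = np.unravel_index(acc.argmax(), acc.shape)
print(thetas[t])  # peak at theta = 0, i.e. a vertical line
```

In practice the accumulator is fed by edge pixels and gradient directions, which is where the many HT variants surveyed above differ.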
Conference Paper
Full-text available
Geometric alignment of 3D pointclouds, obtained using a depth sensor such as a time-of-flight camera, is a challenging task with important applications in robotics and computer vision. Due to the recent advent of cheap depth sensing devices, many different 3D registration algorithms have been proposed in the literature, focusing on different domains such as localization and mapping or image registration. In this survey paper, we review the state-of-the-art registration algorithms and discuss their common mathematical foundation. Starting from simple deterministic methods, such as Principal Component Analysis (PCA) and Singular Value Decomposition (SVD), more recently introduced approaches such as Iterative Closest Point (ICP) and its variants are analyzed and compared. The main contribution of this paper therefore consists of an overview of registration algorithms that are of interest in the field of computer vision and robotics, for example Simultaneous Localization and Mapping. Keywords: 3D pointcloud; PCL; 3D registration; rigid transformation; survey paper. I. INTRODUCTION. With the advent of inexpensive depth sensing devices, robotics, computer vision, and ambient application technology research has shifted from 2D imaging and Laser Imaging Detection And Ranging (LIDAR) scanning towards real-time reconstruction of the environment based on 3D pointcloud data. On the one hand, there are structured-light sensors such as the Microsoft Kinect and Asus Xtion, which generate a structured point cloud sampled on a regular grid; on the other hand, there are time-of-flight sensors such as the Softkinetic Depthsense camera, which yield an unstructured pointcloud. These pointclouds can either be used directly to detect and recognize objects in the environment where ambient technology is being used, or can be integrated over time to completely reconstruct a 3D map of the camera's surroundings [1], [2], [3].
In the latter case, however, point clouds obtained at different time instances need to be aligned, a process which is often referred to as registration. Registration algorithms are able to estimate the ego-motion of the robot by calculating the transformation that optimally maps two pointclouds, each of which is subject to camera noise. These registration algorithms can be classified coarsely into rigid and non-rigid approaches. Rigid approaches assume a rigid environment such that the transformation can be modeled using only 6 Degrees Of Freedom (DOF). Non-rigid methods, on the other hand, are able to cope with articulated objects or soft bodies that change shape over time. Registration algorithms are used in different fields and applications, such as 3D object scanning, 3D mapping, 3D localization and ego-motion estimation, and human body detection. Most of these state-of-the-art applications employ either a
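The SVD-based closed-form step that underlies ICP-style rigid registration (often called the Kabsch solution) can be sketched as follows for known correspondences; this is the textbook construction, not code from the survey.

```python
import numpy as np

def kabsch(P, Q):
    """SVD-based rigid registration (the closed-form step inside ICP):
    find rotation R and translation t minimising ||R P + t - Q|| for
    known point correspondences P[i] <-> Q[i]."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Rotate a small 3D cloud by 30 deg about z, shift it, recover the motion.
th = np.pi / 6
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1.0]])
P = np.random.default_rng(1).normal(size=(20, 3))
Q = P @ R_true.T + np.array([1.0, 2.0, 3.0])
R, t = kabsch(P, Q)
print(np.allclose(R, R_true), np.allclose(t, [1.0, 2.0, 3.0]))  # True True
```

Full ICP simply alternates a nearest-neighbour correspondence search with this closed-form solve until convergence.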
Article
Full-text available
Multisensor data fusion is an emerging technology applied to Department of Defense (DoD) areas such as automated target recognition, battlefield surveillance, and guidance and control of autonomous vehicles, and to non-DoD applications such as monitoring of complex machinery, medical diagnosis, and smart buildings. Techniques for multisensor data fusion are drawn from a wide range of areas including artificial intelligence, pattern recognition, statistical estimation, and other areas. This paper provides a tutorial on data fusion, introducing data fusion applications, process models, and identification of applicable techniques. Comments are made on the state-of-the-art in data fusion.
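As a minimal concrete instance of the statistical-estimation techniques surveyed above, the inverse-variance weighted average is the textbook building block for fusing redundant scalar sensor readings; the sketch below is generic, not drawn from the tutorial itself.

```python
import numpy as np

def fuse(estimates, variances):
    """Inverse-variance weighted fusion of scalar sensor estimates:
    each sensor is weighted by its confidence (1/variance), and the
    fused variance is smaller than any single sensor's variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    fused_var = 1.0 / np.sum(w)
    return fused, fused_var

# A precise sensor (var 1) dominates a noisy one (var 9).
x, v = fuse([10.0, 20.0], [1.0, 9.0])
print(x, v)  # ≈ 11.0, 0.9
```

The same weighting generalizes to vectors via covariance matrices, which is the measurement-update step of a Kalman filter.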
Article
This paper addresses the problem of grid map merging for multi-robot systems, which can be resolved by acquiring the map transformation matrix (MTM) among robot maps. Without initial correspondence or any rendezvous among robots, the only way to acquire the MTM is to find and match the common regions of the individual robot maps. This paper proposes a novel map merging technique which is capable of merging individual robot maps by matching the spectral information of the robot maps. The proposed technique extracts the spectra of the robot maps and enhances the extracted spectra using visual landmarks. Then, the MTM is accurately acquired by finding the maximum cross-correlation among the enhanced spectra. Experimental results in outdoor environments show that the proposed technique performs successfully. Also, the comparison results show that map merging errors were significantly reduced by the proposed technique.
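The spectral matching idea (extract a rotation-dependent signature of each map, then find the relative rotation as the peak of a circular cross-correlation) can be illustrated with a toy example. The particular spectrum used here, a bearing histogram around the centroid, is a simplified stand-in for the enhanced spectra of the paper, and all names are illustrative.

```python
import numpy as np

def angular_spectrum(cells, n_bins=360):
    """Histogram of occupied-cell bearings around the map centroid:
    a simple stand-in for a spectra-based map signature."""
    c = cells.mean(axis=0)
    ang = np.arctan2(cells[:, 1] - c[1], cells[:, 0] - c[0])
    hist, _ = np.histogram(ang, bins=n_bins, range=(-np.pi, np.pi))
    return hist.astype(float)

def rotation_from_spectra(sa, sb):
    """Relative rotation as the peak of the circular cross-correlation
    of two spectra, evaluated efficiently with the FFT."""
    corr = np.fft.ifft(np.fft.fft(sa) * np.conj(np.fft.fft(sb))).real
    return 2 * np.pi * np.argmax(corr) / len(sa)

rng = np.random.default_rng(0)
A = rng.normal(size=(500, 2))                    # toy occupied cells
th = np.pi / 3                                   # ground-truth rotation
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
B = A @ R.T                                      # rotated copy of the map
est = rotation_from_spectra(angular_spectrum(B), angular_spectrum(A))
print(est)  # close to pi/3
```

Because the correlation is computed with the FFT, the search over all candidate rotations is non-iterative and fast, which is the practical appeal of spectra-based merging.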
Article
A new grid map-merging technique which consists of virtual emphasis by one-way observation, curvature-based map matching and particle swarm optimisation is proposed. The proposed technique can improve not only the accuracy of map merging, but also the flexibility of multi-robot systems. The improved performance is verified by showing higher similarities than the existing map-merging techniques in experiments.
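The particle swarm optimisation stage mentioned above can be sketched generically. The toy cost below stands in for a curvature-based map-matching cost, and all coefficients (inertia 0.7, acceleration 1.5) are conventional choices, not values taken from the paper.

```python
import numpy as np

def pso(cost, dim, n=30, iters=100, seed=0):
    """Minimal particle swarm optimisation: particles track their own
    best position and are pulled toward the swarm's global best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-np.pi, np.pi, (n, dim))    # particle positions
    v = np.zeros_like(x)                        # particle velocities
    pbest = x.copy()
    pcost = np.apply_along_axis(cost, 1, x)
    g = pbest[np.argmin(pcost)]                 # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        c = np.apply_along_axis(cost, 1, x)
        improved = c < pcost
        pbest[improved], pcost[improved] = x[improved], c[improved]
        g = pbest[np.argmin(pcost)]
    return g

# Toy "map mismatch" cost with a minimum at a 0.5 rad rotation offset.
best = pso(lambda a: (a[0] - 0.5) ** 2, dim=1)
print(best)  # close to [0.5]
```

In the map-merging setting the cost would score the overlap of the two grids under a candidate transformation instead of this quadratic stand-in.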
Article
The iRobot PackBot is a combat-tested, man-portable UGV that has been deployed in Afghanistan and Iraq. The PackBot is also a versatile platform for mobile robotics research and development that supports a wide range of payloads suitable for many different mission types. In this paper, we describe four R&D projects that developed experimental payloads and software using the PackBot platform. CHARS was a rapid development project to develop a chemical/radiation sensor for the PackBot. We developed the CHARS payload in six weeks and deployed it to Iraq to search for chemical and nuclear weapons. Griffon was a research project to develop a flying PackBot that combined the capabilities of a UGV and a UAV. We developed a Griffon prototype equipped with a steerable parafoil and gasoline-powered motor, and we completed successful flight tests including remote-controlled launch, ascent, cruising, descent, and landing. Valkyrie is an ongoing research and development project to develop a PackBot payload that will assist medics in retrieving casualties from the battlefield. Wayfarer is an applied research project to develop autonomous urban navigation capabilities for the PackBot using laser, stereo vision, GPS, and INS sensors.
Article
US military branches are undergoing a shift in structure and missions that is designed to help them become lighter and more agile, able to move easily and quickly to hot spots. Long-range planning to prepare for modern warfare includes developing robotics for military use. For instance, the Army's Future Combat Systems program plans to make a third of its ground forces robotic within about 15 years. The army's 20-year plan envisions 10 steps of robotic development, starting with completely human-controlled systems and ending with autonomous, armed, cooperative robots. The Robotics Institute has developed a small, unmanned ground vehicle called a "throwbot" that can be tossed into buildings to gather and relay information back to soldiers before they enter the building. The institute is also developing larger robotic vehicles that can carry out reconnaissance and breaching missions, including a robotic helicopter that can generate 3D models from the air.
Article
In this paper, we are concerned with the registration of two 3D data sets with large-scale stretches and noise. First, by incorporating a scale factor into the standard iterative closest point (ICP) algorithm, we formulate the registration as a constrained optimization problem over a 7D nonlinear space. Then, we apply the singular value decomposition (SVD) approach to iteratively solve this optimization problem. Finally, we establish a new ICP algorithm, named the Scale-ICP algorithm, for registration of data sets with isotropic stretches. In order to achieve global convergence for the proposed algorithm, we propose a way to select the initial registrations. To demonstrate the performance and efficiency of the proposed algorithm, we give several comparative experiments between the Scale-ICP algorithm and the standard ICP algorithm.
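The closed-form similarity step behind a Scale-ICP-style iteration (often attributed to Umeyama) recovers rotation, isotropic scale, and translation from known correspondences. The sketch below shows one such step, not the paper's full 7D iterative scheme; the function name is illustrative.

```python
import numpy as np

def umeyama(P, Q):
    """Similarity registration (rotation R, scale s, translation t)
    minimising ||s R P + t - Q|| for known correspondences."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    X, Y = P - cp, Q - cq
    U, S, Vt = np.linalg.svd(X.T @ Y / len(P))  # cross-covariance SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # avoid reflections
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    var_p = (X ** 2).sum() / len(P)             # source variance
    s = np.trace(np.diag(S) @ D) / var_p
    t = cq - s * R @ cp
    return R, s, t

# Pure scale + shift: the solver should recover them exactly.
P = np.random.default_rng(2).normal(size=(15, 3))
Q = 2.5 * P + np.array([0.5, -1.0, 2.0])
R, s, t = umeyama(P, Q)
print(np.allclose(R, np.eye(3)), round(s, 6))  # True 2.5
```

A Scale-ICP-style loop would alternate nearest-neighbour correspondence search with this closed-form solve, just as standard ICP does with the rigid solution.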
Article
This paper addresses the problem of feature map merging, which is one of the essential techniques for multi-robot systems. If inter-robot measurements are not available for feature map merging, the only way to obtain the map transformation matrix is feature map matching. However, the conventional feature map matching technique requires too much computation time because it has to be performed iteratively to compute the degree of mismatch between multiple feature maps. This paper proposes a non-iterative feature map merging technique using virtual supporting lines (VSLs) that is also accurate and robust. The proposed technique extracts the spectral information of multiple feature maps using VSLs and obtains the map transformation matrix using the circular cross-correlation between the extracted spectral information of the multiple feature maps. The proposed technique was tested on feature maps produced in experiments with vision sensors, and the merging was performed non-iteratively. In addition, it consistently showed a high acceptance index, which indicates a high degree of accuracy for feature map merging.
Article
We present a new algorithm for merging occupancy grid maps produced by multiple robots exploring the same environment. The algorithm produces a set of possible transformations needed to merge two maps, i.e., translations and rotations. Each transformation is weighted, which makes it possible to distinguish uncertain situations and to track multiple hypotheses when ambiguities arise. Transformations are produced by extracting spectral information from the maps. The approach is deterministic, non-iterative, and fast. The algorithm has been tested on publicly available datasets, as well as on maps produced by two robots concurrently exploring both indoor and outdoor environments. Throughout the experimental validation stage, the proposed technique consistently merged maps exhibiting very different characteristics.
Conference Paper
Military forces have always tried to use new gadgets and weapons to reduce the risk of casualties and to defeat their enemies. With the development of sophisticated technology, they mostly rely on high-tech weapons and machinery. Robotics is one of the most active fields of the modern age, one that nations are concentrating on for military purposes in times of war and peace. Robots have been in use for some time for demining and rescue operations, but are now being researched for combat and spy missions. Today's modern military forces use different kinds of robots for applications ranging from mine detection, surveillance, and logistics to rescue operations. In the future they will be used for reconnaissance and surveillance, logistics and support, communications infrastructure, forward-deployed offensive operations, and as tactical decoys to conceal maneuver by manned assets. To prepare robots for the unpredictable, cluttered environment of the battlefield, different aspects of robots are under investigation in laboratories so that they can do their jobs autonomously, as efficiently as a human-operated machine. The latest techniques are being investigated to obtain advanced and intelligent robots for different operations. This paper presents the different kinds of robotic technologies used in the three main forces: Navy, Army, and Air Force. Some of the robots discussed are also being used in the wars in Afghanistan and Iraq, along with robots under investigation in laboratories for future military operations, where they are being studied for autonomous and cooperative operation. We focus our attention on the uses of robots in war and peace, as well as their impact on society.
Map-merging for multi-robot system
  • H Jiří