Article

Daytime Mud Detection for Unmanned Ground Vehicle Autonomous Navigation


Abstract

Detecting mud hazards is a significant challenge for UGV autonomous off-road navigation. A military UGV stuck in a mud body during a mission may need to be sacrificed or rescued, both unattractive options. JPL is currently developing a daytime mud detection capability under the RCTA program using UGV-mounted sensors. To perform robust mud detection under all conditions, we expect multiple sensors will be necessary. A passive mud detection solution is desirable to meet the FCS-ANS requirements. To characterize the advantages and disadvantages of candidate passive sensors, data collections have been performed on wet and dry soil using visible, multi-spectral (including near-infrared), shortwave infrared, mid-wave infrared, long-wave infrared, polarization, and stereo sensors. In this paper, we examine the cues for mud detection each of these sensors provides, along with their deficiencies, and we illustrate localizing detected mud in a world model that can be used by a UGV to plan safe paths.


... These hazards can be detected by comparing the degree of polarization and the similarity of the polarization phases. A comparison between different approaches to water and mud hazard detection is given in (Matthies, Bellutta, & McHenry, 2003; Rankin & Matthies, 2008), and a survey in (Iqbal, Morel, & Meriaudeau, 2010). ...
... Mud is highly polarized, and hence polarization-based techniques can be used. In (Rankin & Matthies, 2008), the authors proposed using multiple sensors for mud detection, including a polarization camera. At the pixel level, partial linear polarization is measured from the radiance transmitted through a polarization filter. ...
... Regions that have a significantly higher DOP can be a potential cue for water or mud. In the experiments of (Rankin & Matthies, 2008), two polarization sensors were used: a SAMBA polarization camera and a SALSA linear Stokes polarization camera. The SAMBA camera provides a "polarization contrast" image, and the SALSA camera provides DOP, intensity, and orientation images. ...
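The DOP cue described above can be computed from a linear Stokes vector. A minimal sketch using the standard four-angle polarizer-stack formulation (the SALSA camera's internal processing is not described here, so this is an illustrative reconstruction, not the camera's actual pipeline):

```python
import numpy as np

def stokes_from_polarizer_stack(i0, i45, i90, i135):
    """Linear Stokes parameters from intensities behind four polarizer angles."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90                       # horizontal vs. vertical component
    s2 = i45 - i135                     # +45 deg vs. -45 deg component
    return s0, s1, s2

def degree_of_linear_polarization(s0, s1, s2, eps=1e-9):
    """DOLP in [0, 1]; regions of high DOLP over soil are candidate mud/water cues."""
    return np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + eps)
```

The same functions apply per-pixel to full image arrays, since the arithmetic broadcasts over NumPy arrays unchanged.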
Article
Full-text available
For a long time, it was thought that the sensing of polarization by animals is invariably related to their behavior, such as navigation and orientation. Recently, it was found that polarization can be part of high-level visual perception, permitting a wide area of vision applications. Polarization vision can be used for most tasks of color vision, including object recognition, contrast enhancement, camouflage breaking, and signal detection and discrimination. The polarization-based visual behavior found in the animal kingdom is briefly covered. Then, the authors go in depth into the bio-inspired applications of polarization in computer vision and robotics. The aim is to provide a comprehensive survey highlighting the key principles of polarization-based techniques and how they are biologically inspired.
... In this section, we review methodologies for the classification of mud, which appears as isolated wet soil surrounded by dry soil in nominal weather conditions. Imaging sensors such as stereo cameras were used to segment the darker soil from the surrounding region in the daytime (Rankin and Matthies, 2008). Stereo cameras are usually used to distinguish tall vegetation from mud and soil in cluttered natural scenes. ...
... Therefore, it is difficult to classify mud using only the polarization contrast in the shadows. In (Rankin and Matthies, 2008; Rankin and Matthies, 2010), the polarization contrast was tested for detecting mud in daytime nominal weather conditions along with the stereo camera on the robot. The polarization pixels were projected onto the left stereo camera image to classify the mud within the world map. ...
Article
Full-text available
Understanding the terrain in the upcoming path of a ground robot is one of the most challenging problems in field robotics. Terrain and traversability analysis is a multidisciplinary field combining robotics with image and signal processing, feature extraction, machine learning, three-dimensional (3D) mapping, and 3D geometry. Application scenarios range from autonomous vehicles on urban networks to agriculture, defence, exploration, mining, and search and rescue. Given the broad set of techniques available and the fast progress in this area, in this paper we organize and survey the corresponding literature, define unambiguous key terms, and discuss links among fundamental building blocks ranging from terrain classification to traversability regression. The advantages and the drawbacks of the methods are critically discussed, providing a comprehensive coverage of key aspects, including open code, available datasets for experimentation and comparisons, and important open research issues.
... In [22,23], a SWIR camera was used on an unmanned ground vehicle (UGV) together with sensors in other wavebands (MWIR, LWIR) to analyze soil and to detect and localize mud in a world model that the UGV can use to plan safe trajectories. In [24], the authors introduced the WVU Outdoor SWIR Gait (WOSG) dataset, built with a SWIR camera to evaluate the performance of gait recognition algorithms in SWIR videos. ...
Article
Full-text available
SWIR imaging bears considerable advantages over visible-light (color) and thermal images in certain challenging propagation conditions. Thus, the SWIR imaging channel is frequently used in multi-spectral imaging systems (MSIS) for long-range surveillance in combination with color and thermal imaging to improve the probability of correct operation in various day, night and climate conditions. Integration of deep-learning (DL)-based real-time object detection in MSIS enables an increase in efficient utilization for complex long-range surveillance solutions such as border or critical assets control. Unfortunately, a lack of datasets for DL-based object detection models training for the SWIR channel limits their performance. To overcome this, by using the MSIS setting we propose a new cross-spectral automatic data annotation methodology for SWIR channel training dataset creation, in which the visible-light channel provides a source for detecting object types and bounding boxes which are then transformed to the SWIR channel. A mathematical image transformation that overcomes differences between the SWIR and color channel and their image distortion effects for various magnifications are explained in detail. With the proposed cross-spectral methodology, the goal of the paper is to improve object detection in SWIR images captured in challenging outdoor scenes. Experimental tests for two object types (cars and persons) using a state-of-the-art YOLOX model demonstrate that retraining with the proposed automatic cross-spectrally created SWIR image dataset significantly improves average detection precision. We achieved excellent improvements in detection performance in various variants of the YOLOX model (nano, tiny and x).
... Terrain classification is typically addressed using sensors such as vision [1,2], Light Detection and Ranging (LiDAR) [3], inertial sensors [4], or a combination of such sensors. The choice of sensor modalities depends on the application, and selection of the appropriate sensor suite can improve system performance. ...
... Visual information has been shown to be a good indicator for classifying terrain and predicting slip in several works [1,7,8]. A study of passive visual sensors operating in different wavelengths to detect mud has been presented in [2]. However, vision sensors are dependent on lighting and climatic conditions, and hence classification accuracy will vary with changes in either. ...
Conference Paper
Full-text available
Off-road navigation demands ground robots to traverse complex and often changing terrain. Classification and assessment of terrain can improve path planning strategies by reducing travel time and energy consumption. In this paper we introduce a terrain classification and assessment framework that relies on both exteroceptive and proprioceptive sensor modalities. The robot captures an image of the terrain it is about to traverse and records corresponding vibration data during traversal. These images are manually labelled and used to train a support vector machine (SVM) in an offline training phase. Images have been captured under different lighting conditions and across multiple locations to achieve diversity and robustness in the model. Acceleration data is used to calculate statistical features that capture the roughness of the terrain, whereas angular velocities are used to calculate roll and pitch angles experienced by the robot. These features are used to train a k-means clustering classifier, where k is the number of terrain types the robot is expected to encounter. Human intervention is required only to set k. Cluster indices are auto-labelled during training. In the operating phase, the SVM predicts the terrain that the robot will encounter and the corresponding cluster center predicts the vibration features associated with that terrain type. These features can be used to assess traversability. Vibration features are measured and the clusters are updated as the robot traverses, thus adapting to changes in terrain over time. An accuracy of 92% has been achieved using visual data to classify terrain and 76% to predict the physical properties across different terrain classes.
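The proprioceptive half of the framework above, statistical roughness features clustered with k-means, can be sketched in a few lines. The feature set below (mean, standard deviation, and mean absolute difference of vertical acceleration) is an illustrative assumption, not the paper's exact feature list:

```python
import numpy as np

def vibration_features(accel_z):
    """Illustrative roughness statistics from a segment of vertical acceleration."""
    return np.array([accel_z.mean(), accel_z.std(),
                     np.abs(np.diff(accel_z)).mean()])

def kmeans(x, k, iters=50, seed=0):
    """Minimal k-means: cluster per-traversal feature vectors into k terrain types."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)].astype(float)
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        # Assign each sample to its nearest center, then recompute the centers.
        labels = np.argmin(((x[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return centers, labels
```

In the paper's scheme, the learned cluster centers then serve as the vibration "fingerprints" that the visual SVM predictions are matched against; that second stage is omitted here.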
... These include color stereo, multi-spectral (visible plus near-infrared), short-wave infrared (SWIR), MWIR, LWIR, and polarization. In [14], we characterized the strengths and weaknesses of each passive sensor type for detecting mud under nominal conditions. Our analysis indicated that polarization cameras are particularly useful for segmenting mud surrounded by dry soil since the degree of linear polarization (DOLP) on mud is consistently higher than surrounding dry soil, regardless of sky conditions, the sun position, and the sensor orientation [14]. ...
... In [14], we characterized the strengths and weaknesses of each passive sensor type for detecting mud under nominal conditions. Our analysis indicated that polarization cameras are particularly useful for segmenting mud surrounded by dry soil since the degree of linear polarization (DOLP) on mud is consistently higher than surrounding dry soil, regardless of sky conditions, the sun position, and the sensor orientation [14]. A mud detection algorithm was implemented which thresholds DOLP and back-projects polarization pixels that have a high DOLP into the corresponding rectified left color image (which is registered with the stereo range image). ...
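A minimal sketch of that threshold-and-back-project step, assuming for illustration that the mapping from polarization pixels to the rectified left image can be approximated by a homography `H` (the actual registration in [14] uses the calibrated camera geometry, and the 0.25 threshold below is a placeholder, not the tuned value):

```python
import numpy as np

def mud_candidates(dolp, threshold=0.25):
    """Flag pixels whose degree of linear polarization exceeds a threshold."""
    return dolp > threshold

def backproject(mask, H, target_shape):
    """Map flagged polarization pixels into the rectified left image via H."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys, np.ones_like(xs)]).astype(float)
    uvw = H @ pts                               # homogeneous image coordinates
    u = np.round(uvw[0] / uvw[2]).astype(int)
    v = np.round(uvw[1] / uvw[2]).astype(int)
    out = np.zeros(target_shape, dtype=bool)
    ok = (u >= 0) & (u < target_shape[1]) & (v >= 0) & (v < target_shape[0])
    out[v[ok], u[ok]] = True
    return out
```

Because the left color image is registered with the stereo range image, pixels marked in `out` can be looked up directly in the range data to place the mud detection in the world model.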
Article
Full-text available
The Robotics Collaborative Technology Alliances (RCTA) program, which ran from 2001 to 2009, was funded by the U.S. Army Research Laboratory and managed by General Dynamics Robotic Systems. The alliance brought together a team of government, industrial, and academic institutions to address research and development required to enable the deployment of future military unmanned ground vehicle systems ranging in size from man-portables to ground combat vehicles. Under RCTA, three technology areas critical to the development of future autonomous unmanned systems were addressed: advanced perception, intelligent control architectures and tactical behaviors, and human-robot interaction. The Jet Propulsion Laboratory (JPL) participated as a member for the entire program, working four tasks in the advanced perception technology area: stereo improvements, terrain classification, pedestrian detection in dynamic environments, and long range terrain classification. Under the stereo task, significant improvements were made to the quality of stereo range data used as a front end to the other three tasks. Under the terrain classification task, a multi-cue water detector was developed that fuses cues from color, texture, and stereo range data, and three standalone water detectors were developed based on sky reflections, object reflections (such as trees), and color variation. In addition, a multi-sensor mud detector was developed that fuses cues from color stereo and polarization sensors. Under the long range terrain classification task, a classifier was implemented that uses unsupervised and self-supervised learning of traversability to extend the classification of terrain over which the vehicle drives to the far-field. Under the pedestrian detection task, stereo vision was used to identify regions-of-interest in an image, classify those regions based on shape, and track detected pedestrians in three-dimensional world coordinates. 
To improve the detectability of partially occluded pedestrians and reduce pedestrian false alarms, a vehicle detection algorithm was developed. This paper summarizes JPL's stereo-vision based perception contributions to the RCTA program.
... However, for AGVs, automated terrain classification is essential. Terrain surface classification methods for mobile robots have two main categories: those based on proprioceptive (i.e., vibration and slip) sensors [2], [3], [4], [5], [6], [7], [8], [9] and those based on vision sensors [10], [11], [12], corresponding respectively to "feeling" and "seeing." Unlike proprioceptive sensors, which are often associated with an inertial measurement unit, vision sensor measurements are essentially independent of the speed and load of the vehicle. ...
... The research of [10] used a long-range camera and classified large terrain patches using features such as average red, average color, 3D color histogram, and texture metrics. Parallel research in [11] used cameras at short range to detect terrains in close proximity to planetary rovers in Mars-like environments. In addition, short-wave, mid-wave and long-wave infrared cameras, which do work at night, were used in [12] to classify mud. Additional research presented in [13] proposed an image-retrieval approach that utilizes wavelet signatures to estimate the size of fine particles in mineral ore. ...
Conference Paper
Full-text available
To increase autonomous ground vehicle (AGV) safety and efficiency on outdoor terrains, the vehicle's control system should have settings for individual terrain surfaces. A first step in such a terrain-dependent control system is classification of the surface upon which the AGV is traversing. This paper considers vision-based terrain surface classification for the path directly in front of the vehicle (≈1 m). Most vision-based terrain classification has focused on terrain traversability and not on terrain surface classification. The few approaches to classifying traversable terrain surfaces, with the exception of the use of infrared cameras to classify mud, have relied on stand-alone cameras that are designed for daytime use and are not expected to perform well in the dark. In contrast, this research uses a laser stripe-based structured light sensor, which uses a laser in conjunction with a camera, and hence can work at night. Also, unlike most previous results, the classification here does not rely on color since color changes with illumination and weather, and certain terrains have multiple colors (e.g., sand may be red or white). Instead, it relies only on spatial relationships, specifically spatial frequency response and texture, which captures spatial relationships between different gray levels. Terrain surface classification using each of these features separately is conducted by using a probabilistic neural network. Experimental results based on classifying four outdoor terrains demonstrate the effectiveness of the proposed methods.
... Terrain surface classification methods for mobile robots have two main categories, those based on proprioceptive (i.e., vibration and slip) sensors [2], [3], [4], [5], [6], [7], [8], [9] and those based on vision sensors [10], [11], [12], corresponding respectively to "feeling" and "seeing." Unlike proprioceptive sensors, which are often associated with an inertial measurement unit, vision sensor measurements are essentially independent of the speed and load of the vehicle. ...
... The research of [10] used a long-range camera and classified large terrain patches using features such as average red, average color, 3D color histogram, and texture metrics. Parallel research in [11] used cameras at short range to detect terrains in close proximity to planetary rovers in Mars-like environments. In addition, short-wave, mid-wave and long-wave infrared cameras, which do work at night, were used in [12] to classify mud. The terrains classified in this research are similar to those classified in [10] and [11]. ...
Article
It is necessary for autonomous ground vehicles operating on outdoor terrains to identify and adapt to different terrains in order to improve their mobility and safety. This work presents a classification scheme to identify outdoor terrains and an update rule to reduce the possibility of implementing control modes based on classification inaccuracies. A laser stripe-based structured light sensor, which has a laser and infrared camera component, is used to sense terrains directly in front of the vehicle (1 m). Use of this infrared vision system allows sensing at night, without external lighting, unlike many previous vision-based approaches that rely on stand-alone cameras. Also, unlike many previous results, the classification algorithm presented here does not rely on measures of color, which are subject to illumination and weather conditions. Instead, the method presented here relies on spatial relationships, which are captured in two quantities: spatial frequency from range data and texture from camera data. The presented terrain classification scheme uses a probabilistic neural network classifier to exploit the spatial differences in four terrains: asphalt, grass, gravel and sand. This approach yields empirical results that report a greater than 97% classification accuracy when both spatial frequency and texture features are used. Color robustness and lighting robustness are then shown through additional experiments. Furthermore, instead of implementing control modes directly from the identified terrains, it is shown that the use of current and past terrain detections allows for the rejection of misclassifications with minimal effect on the rate at which a new control mode can be implemented.
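The probabilistic neural network used in these two papers is essentially a Gaussian Parzen-window classifier: each class is scored by the average kernel response of its training exemplars. A compact sketch, where the smoothing parameter `sigma` and the two-dimensional features are illustrative assumptions:

```python
import numpy as np

def pnn_classify(x, train_x, train_y, sigma=0.5):
    """Probabilistic neural network: pick the class with the highest
    mean Gaussian kernel response of its training exemplars."""
    best_class, best_score = None, -1.0
    for c in np.unique(train_y):
        d2 = ((train_x[train_y == c] - x) ** 2).sum(axis=1)  # squared distances
        score = np.exp(-d2 / (2.0 * sigma ** 2)).mean()      # Parzen estimate
        if score > best_score:
            best_class, best_score = int(c), score
    return best_class
```

In the papers the feature vectors would be the spatial-frequency and texture descriptors rather than raw coordinates; the classifier itself is unchanged.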
... There have been several attempts to utilize SWIR sensors in automated analysis. Rankin and Matthies [8] proposed a method of detecting mud areas by attaching multiple sensors, including a SWIR sensor, to a military UGV and analyzing dry and wet soil using the characteristics of the SWIR sensor in an off-road environment. Lemoff et al. [9] proposed a SWIR imaging system that uses facial recognition, generating facial images at distances up to 350 m, and can detect a person at a distance of hundreds of meters. ...
Article
Full-text available
Although Short-Wave Infrared (SWIR) sensors have advantages in terms of robustness in bad weather and low-light conditions, the SWIR images have not been well studied for automated object detection and tracking systems. The majority of previous multi-object tracking studies have focused on pedestrian tracking in visible-spectrum images, but tracking different types of vehicles is also important in city-surveillance scenarios. In addition, the previous studies were based on high-computing-power environments such as GPU workstations or servers, but edge computing should be considered to reduce network bandwidth usage and privacy concerns in city-surveillance scenarios. In this paper, we propose a fast and effective multi-object tracking method, called Multi-Class Distance-based Tracking (MCDTrack), on SWIR images of city-surveillance scenarios in a low-power and low-computation edge-computing environment. Eight-bit integer quantized object detection models are used, and simple distance and IoU-based similarity scores are employed to realize effective multi-object tracking in an edge-computing environment. Our MCDTrack is not only superior to previous multi-object tracking methods but also shows high tracking accuracy of 77.5% MOTA and 80.2% IDF1 although the object detection and tracking are performed on the edge-computing device. Our study results indicate that a robust city-surveillance solution can be developed based on the edge-computing environment and low-frame-rate SWIR images.
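The distance/IoU-based similarity at the heart of the tracker described above can be illustrated with a greedy IoU matcher. The 0.3 gate below is an assumed value, and the real method also incorporates center distance and class identity:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def associate(tracks, detections, iou_min=0.3):
    """Greedily match each track to its best unused detection above the gate."""
    matches, used = [], set()
    for ti, t in enumerate(tracks):
        best, best_j = iou_min, None
        for j, d in enumerate(detections):
            if j in used:
                continue
            s = iou(t, d)
            if s > best:
                best, best_j = s, j
        if best_j is not None:
            matches.append((ti, best_j))
            used.add(best_j)
    return matches
```

Greedy matching is cheap enough for the low-power edge devices the paper targets; a Hungarian solver would give globally optimal assignments at higher cost.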
... The authors of a paper describing path detection as guidance for blind people state that they deal with potholes and water puddles, but the paper did not present these results (Sövény et al. 2015). Two other papers (Rankin and Matthies 2008 and Rankin et al. 2010) deal with larger water-puddle detection in off-road scenarios for navigation purposes using stereo vision methods. They do not deal with other road features, not even road detection itself. ...
Article
Full-text available
A challenge still to be overcome in the field of visual perception for vehicle and robotic navigation on heavily damaged and unpaved roads is the task of reliable path and obstacle detection. The vast majority of the research considers roads in good condition, from developed countries. These works cope with few situations of variation on the road surface and even fewer situations presenting surface damage. In this paper we present an approach for road detection that considers variation in surface types, identifying paved and unpaved surfaces and also detecting damage and other information on the road surface that may be relevant to driving safety. Our approach makes use of Convolutional Neural Networks (CNN) to perform semantic segmentation: we use the U-NET architecture with ResNet34, and in addition we use the technique known as Transfer Learning, where we first train a CNN model without class weights as a basis for a second CNN model that uses weights for each class. We also present a new Ground Truth with image segmentation, used in our approach, that allowed us to evaluate our results. Our results show that it is possible to use passive vision for these purposes, even using images captured with low-cost cameras.
... The authors of a paper describing path detection as guidance for blind people state that they deal with potholes and water puddles, but the paper did not present these results (Sövény et al., 2015). Two other papers (Rankin and Matthies (2008) and Rankin et al. (2010)) deal with larger water-puddle detection in off-road scenarios for navigation purposes using stereo vision methods. They do not deal with other road features, not even road detection itself. ...
Preprint
Full-text available
A challenge still to be overcome in the field of visual perception for vehicle and robotic navigation on heavily damaged and unpaved roads is the task of reliable path and obstacle detection. The vast majority of the research considers roads in good condition, from developed countries. These works cope with few situations of variation on the road surface and even fewer situations presenting surface damage. In this paper we present an approach for road detection that considers variation in surface types, identifying paved and unpaved surfaces and also detecting damage and other information on the road surface that may be relevant to driving safety. We also present a new Ground Truth with image segmentation, used in our approach, that allowed us to evaluate our results. Our results show that it is possible to use passive vision for these purposes, even using images captured with low-cost cameras.
... Visual information is a good indicator for classifying terrain and predicting slip, as presented in [1, 8-10]. A study of passive visual sensors operating in different wavelengths to detect mud has been presented in [2]. However, vision sensors are dependent on lighting and climatic conditions, and hence classification accuracy will vary with changes in either. LiDAR sensors are often used to overcome the light dependence of visual sensors, since they are laser-driven. ...
Conference Paper
Full-text available
Terrain sensing is an important aspect of navigation for autonomous ground vehicles (AGVs) in off-road conditions. Modern AGVs have several sensors that can be used to detect terrain. In this paper, we have implemented terrain classification using a fusion of visual data from a camera and vibrational data from an inertial measurement unit (IMU). The popular supervised learning technique, support vector machine (SVM), has been used due to its high accuracy and relatively small execution time. An image is first captured and the robot then traverses over the region defined by the image to record vibration data. Linear acceleration vectors, perpendicular to the terrain, are extracted from the IMU and statistical features are calculated to make up the vibration data. The images are manually labelled and aligned with the vibration data to create a fused feature vector and train the SVM. Our method has been tested on previously unseen field data and an average accuracy of 90% has been achieved.
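The fusion step described above reduces to concatenating the two per-sample feature vectors before SVM training. A sketch with deliberately coarse stand-in features (per-channel image statistics and three vibration statistics; the paper's actual features are richer):

```python
import numpy as np

def visual_features(image):
    """Per-channel mean and std as a coarse stand-in for richer image features."""
    return np.concatenate([image.mean(axis=(0, 1)), image.std(axis=(0, 1))])

def vibration_features(accel_z):
    """Statistical features from acceleration perpendicular to the terrain."""
    return np.array([accel_z.mean(), accel_z.std(),
                     np.abs(np.diff(accel_z)).mean()])

def fused_sample(image, accel_z):
    """One labelled training vector: image features followed by vibration features,
    for an image and the IMU segment recorded while traversing that patch."""
    return np.concatenate([visual_features(image), vibration_features(accel_z)])
```

Each fused vector pairs an image with the vibration recorded over the terrain that image shows, which is exactly the manual alignment the paper performs before training the SVM.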
... Attempts to detect water surfaces with short-wave infrared or thermal infrared imagery have also been unsatisfactory, owing to a weak capability to deal with reflections from adjacent terrain, complicated operation, and interference from environmental uncertainty [5]. Meanwhile, detection of hazardous aqueous land is an essential but more difficult task for vehicle autonomous navigation, and few effective results can be found on this topic [7]. ...
Conference Paper
Detecting water and aqueous land hazards is a big challenge for outdoor navigation in unknown environments. The advantages of laser scanning are encouraging, primarily its distinguished detection capacity at night and the smaller quantity of data. Experiments found no evident trend in laser remission as the color density of water increased, whether green, red, or dark blue; these colored waters shared the weak mirror-deflection of pure water. However, the remission of a black water surface dropped approximately linearly with increasing color density, and strong mirror-deflection was observed. Experiments also indicated that water turbidity has a great effect on both remission and mirror-deflection, and that there is a significant positive correlation between laser remission and soil moisture content. These phenomena can be explained by the twice-reflected pulse property of the laser range finder and the optical properties of the water-air interface in laser scanning. Based on the experimental results, methods are put forward to distinguish pure or turbid water surfaces from land and to distinguish hazardous aqueous land from safe dry land in natural scenes.
... The assessment covered integration of technology developed over an eight-year research program into autonomous navigation. A week prior to the assessment, a vision-based water detector [18] was integrated in four hours. Integration included linking the output of the detector into the world model, defining the water types as obstacles in a configuration file, and completing successful live field testing. ...
Conference Paper
Full-text available
The Robotics Collaborative Technology Alliance (RCTA) seeks to provide adaptive robot capabilities which move beyond traditional metric algorithms to include cognitive capabilities [1]. Research occurs in 5 main Task Areas: Intelligence, Perception, Dexterous Manipulation and Unique Mobility (DMUM), Human Robot Interaction (HRI), and Integrated Research (IR). This last task of Integrated Research is especially critical and challenging. Individual research components can only be fully assessed when integrated onto a robot where they interact with other aspects of the system to create cross-Task capabilities which move beyond the State of the Art. Adding to the complexity, the RCTA is comprised of 12+ independent organizations across the United States. Each has its own constraints due to development environments, ITAR, “lab” vs “real-time” implementations, and legacy software investments from previous and ongoing programs. We have developed three main components to manage the Integration Task. The first is RFrame, a data-centric transport agnostic middleware which unifies the disparate environments, protocols, and data collection mechanisms. Second is the modular Intelligence Architecture built around the Common World Model (CWM). The CWM instantiates a Common Data Model and provides access services. Third is RIVET, an ITAR free Hardware-In-The-Loop simulator based on 3D game technology. RIVET provides each researcher a common test-bed for development prior to integration, and a regression test mechanism. Once components are integrated and verified, they are released back to the consortium to provide the RIVET baseline for further research. This approach allows Integration of new and legacy systems built upon different architectures, by application of Open Architecture principles.
... Regarding the robotics research literature, to increase autonomous ground vehicle (AGV) safety and efficiency on outdoor terrains, the vehicle's control system should have different strategies and settings for individual terrain surfaces. To enable more autonomous tasks in complex outdoor environments, the vehicle must have more "feeling" and "seeing" [Boley et al., 1989] [Iagnemma & Dubowsky, 2002] [Sadhukhan & Moore, 2003] [Ojeda et al., 2006] [DuPont et al., 2005] [Collins, 2008] [Angelova et al., 2007] [Halatci et al., 2007] [Rankin & Matthies, 2008] [DuPont et al., 2008]. While good terrain models and terrain classification techniques ...
Thesis
Full-text available
This thesis introduces seven novel contributions for two perception tasks, vegetation detection and terrain classification, that are at the core of any control system for efficient autonomous navigation in outdoor environments. Regarding vegetation detection, we first describe a vegetation index-based method (1), which relies on the absorption and reflectance properties of vegetation in visible and near-infrared light, respectively. Second, a 2D/3D feature fusion (2), which imitates the human visual system in vegetation interpretation, is investigated. Alternatively, an integrated vision system (3) is proposed to realise our ambition of combining visual perception-based and multi-spectral methods using only a single device. An in-depth study of the colour and texture features of vegetation has been carried out, leading to robust and fast vegetation detection through an adaptive learning algorithm (4). In addition, a double-check of passable vegetation detection (5) is realised, relying on the compressibility of vegetation: the lower the resistance vegetation offers, the more traversable it is. Regarding terrain classification, we introduce a structure-based method (6) to capture the world scene by inferring its 3D structures through local point statistics on LiDAR data. Finally, a classification-based method (7), which combines LiDAR data and visual information to reconstruct 3D scenes, is presented. Object representation is thereby described in more detail, enabling the classification of more object types. Based on the success of the proposed perceptual inference methods in environmental sensing tasks, we hope that this thesis will serve as a starting point for further development of highly reliable perceptual inference methods.
... Regarding the robotics research literature, to increase autonomous ground vehicle (AGV) safety and efficiency on outdoor terrains, the vehicle's control system should have different strategies and settings for individual terrain surfaces. To enable more autonomous tasks in complex outdoor environments, the vehicle must have more "feeling" and "seeing" [9]-[14]. While good terrain models and terrain classification techniques are already available to deal with a variety of terrain surfaces, the key limitation of outdoor autonomous navigation is coping with domains in which the vehicle has to navigate through tall grass, small bushes, or forested areas. ...
Conference Paper
Full-text available
This paper introduces an active way to detect vegetation in front of the vehicle in order to support better decision-making in navigation. Blowing devices create a strong airflow to agitate vegetation, and motion compensation and motion detection techniques are then applied to detect moving foreground objects, which are presumed to be vegetation. The approach enables a double-check of vegetation detections produced by a multi-spectral approach, with particular emphasis on detecting passable vegetation. In all real-world experiments we carried out, our approach yielded a detection accuracy of over 98%. We furthermore illustrate how this active approach can improve the autonomous navigation capabilities of autonomous ground vehicles.
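The motion-detection step can be sketched with simple differencing between a motion-compensated reference frame and the frame captured while the blower agitates the vegetation (the function name and threshold are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def moving_foreground(ref_frame, blown_frame, threshold=0.1):
    """Pixels that changed markedly between the motion-compensated
    reference frame and the frame taken while the blower runs are
    treated as moving foreground, i.e. candidate vegetation."""
    diff = np.abs(np.asarray(blown_frame, dtype=np.float64)
                  - np.asarray(ref_frame, dtype=np.float64))
    return diff > threshold

# Rigid obstacles stay put under the airflow; vegetation sways.
ref = np.array([[0.2, 0.8], [0.5, 0.5]])
blown = np.array([[0.2, 0.3], [0.5, 0.9]])
print(moving_foreground(ref, blown))
```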
... The work of [25] also analyzes the effectiveness of many types of visual sensing, including visible, multi-spectral (e.g. near infrared), short-wave infrared, mid-wave infrared, long-wave infrared, polarization, and stereo sensors. ...
Article
Autonomous ground vehicles (AGVs) are commonly used for search and rescue, military and forestry purposes. To safely and efficiently perform missions in these environments the AGV must adapt its driving and control strategies based on the traversed terrain. A paradigm for achieving this type of control is to use a terrain classification algorithm to identify the traversed terrain and update the control mode when a new terrain is encountered. Although terrain classification can be performed using vision sensors, the area of terrain classification that has needed the most development is classification using proprioceptive sensors, which detect the internal state of the vehicle. The purpose of this dissertation is to describe at length recent advancements in terrain classification using proprioceptive sensors, including many that are centered around the author's personal research. The discussion starts with the fundamentals and physics behind vibration-based terrain classification, the most proven means of terrain classification using proprioceptive sensors. This physical understanding is then shown to lead to the use of frequency domain features, extracted from measured vehicle vibrations, to distinguish the underlying terrain. By borrowing techniques and concepts from the computer science field of pattern recognition this identification can be automated and performed in real-time. Using comparisons, the author details the benefits of several different pattern recognition classifiers using performance metrics based on accuracy and computational speed. This pattern recognition discussion ultimately leads to a better understanding of how automated terrain classification can be implemented on AGVs. Perhaps the most difficult problem to address before effectively implementing online terrain classification using proprioceptive sensors is that of speed and load dependency, which is characterized by the need to train classification algorithms based on vehicle speed or load. 
This means that a large amount of empirical data must be collected in order to ensure the classification algorithm will be accurate. After addressing why these problems occur, it is shown that a vehicle model along with the measured vehicle vibrations can in theory be used to describe the terrain in a way that is independent of both vehicle speed and load. Alternatively, interpolation techniques may reduce the impact of speed and load dependency, while enabling the use of sensor modalities beyond vehicle vibrations. As a terrain classification system is only beneficial to vehicle control when used in coordination with terrain-dependent driving modes, methods are demonstrated to handle both the online and offline cooperation of these systems. Offline, the terrain classification algorithm can be trained to reduce the likelihood of implementing a control mode with settings drastically different from that of the ideal control mode. Online cooperation can be accomplished by using terrain classification to switch between control modes via an update rule that is both sensitive to terrain transitions and robust to misclassifications made by the terrain classification algorithm.
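The pipeline this dissertation describes — extract frequency-domain features from measured vibrations, then apply a pattern-recognition classifier — can be sketched as follows. The band-averaged spectrum and nearest-centroid classifier are illustrative simplifications (the dissertation compares several more capable classifiers), and normalizing the features is a crude nod to the speed/load-dependency problem it discusses:

```python
import numpy as np

def spectral_features(vibration, n_bands=8):
    """Band-averaged power spectrum of a vertical-acceleration window,
    normalized so the frequency distribution matters more than the
    overall vibration magnitude."""
    power = np.abs(np.fft.rfft(vibration)) ** 2
    bands = np.array_split(power, n_bands)
    feats = np.array([b.mean() for b in bands])
    return feats / (feats.sum() + 1e-12)

def train_centroids(windows, labels):
    """One mean feature vector (centroid) per terrain label."""
    feats = np.array([spectral_features(w) for w in windows])
    labels = np.array(labels)
    return {c: feats[labels == c].mean(axis=0) for c in set(labels)}

def classify(window, centroids):
    """Assign the terrain whose centroid is nearest in feature space."""
    f = spectral_features(window)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

# Synthetic demo: smooth terrain -> low-frequency sway, rough -> broadband noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256, endpoint=False)
smooth = [np.sin(2 * np.pi * 2 * t) + 0.05 * rng.standard_normal(256) for _ in range(5)]
rough = [rng.standard_normal(256) for _ in range(5)]
centroids = train_centroids(smooth + rough, ["smooth"] * 5 + ["rough"] * 5)
print(classify(np.sin(2 * np.pi * 2 * t), centroids))
```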
... Terrain classification for AGVs can be performed through what is seen visually, felt through the vehicle's reactions during traversal, or a combination of both vision and vehicle reactions. Key vision-based techniques for detecting individual terrains include the work of [1], which shows the effectiveness of color-based features, and the work of [2], which analyzes the effectiveness of several types of visual sensing in detecting mud. ...
Conference Paper
Full-text available
The need for terrain-dependent control systems on AGVs is evident when considering the variety of outdoor terrains many AGVs encounter. Although the idea of using terrain classification algorithms to identify the terrain and then update the control modes is well established, the problem of how to intelligently update the control modes based on classifications has been left relatively unaddressed. This paper presents a simple rule, called the update rule, which decides when to change control modes based on past and present terrain classifications and is tuned using empirical data. Using experimental data from the eXperimental Unmanned Vehicle (XUV) mobile robot, this update rule is shown to be both robust to misclassifications and sensitive to terrain transitions. This paper also develops and implements a sliding-horizon approach to reaction-based terrain classification for improved sensitivity to terrain transitions. The update rule structure presented here is applicable to reaction- and vision-based terrain classification of individual terrains.
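The flavor of such an update rule can be sketched with a consecutive-agreement policy: switch control modes only after k identical classifications in a row, which tolerates isolated misclassifications yet still responds at genuine terrain transitions. This is an illustrative stand-in, not the paper's empirically tuned rule:

```python
from collections import deque

class UpdateRule:
    """Change control mode only after `k` consecutive identical terrain
    classifications. A consecutive-agreement rule is one simple way to be
    robust to misclassifications while staying sensitive to transitions;
    the paper's rule is tuned empirically and may differ."""

    def __init__(self, k=3, initial_mode="pavement"):
        self.k = k
        self.mode = initial_mode
        self.history = deque(maxlen=k)  # sliding window of recent labels

    def update(self, classification):
        """Feed one terrain classification; return the (possibly new) mode."""
        self.history.append(classification)
        if (len(self.history) == self.k
                and len(set(self.history)) == 1
                and self.history[0] != self.mode):
            self.mode = self.history[0]
        return self.mode
```

An isolated "gravel" misclassification amid "pavement" labels never flips the mode, while three consecutive "gravel" labels do.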
Article
Obstacle detection is a complex task involving detection of obstacle features, identification of the appropriate sensor, and environmental conditions. The development of water hazard detection over the past two decades can be regarded as a microcosm of the history of obstacle detection in general. This review provides an extensive study of water hazard detection papers spanning a quarter-century (the 1990s to 2021). The review mainly focuses on the width of water hazards, features of water hazards, sensor types for the detection of water hazards, and environmental light. This paper analyses and summarizes the research overview and status of several research institutions in the field over the past 20 years, hoping to provide a reference for UGV water hazard detection. In addition, existing water hazard detection problems are summarized, and future development trends are proposed.
Chapter
For a long time, it was thought that the sensing of polarization by animals was invariably related to behaviors such as navigation and orientation. Recently, it was found that polarization can be part of high-level visual perception, permitting a wide range of vision applications. Polarization vision can be used for most tasks of color vision, including object recognition, contrast enhancement, camouflage breaking, and signal detection and discrimination. The polarization-based visual behavior found in the animal kingdom is briefly covered. Then, the authors go in depth into bio-inspired applications based on polarization in computer vision and robotics. The aim is to provide a comprehensive survey highlighting the key principles of polarization-based techniques and how they are biologically inspired.
Thesis
Full-text available
Identifying a navigable path is an important function in a visual navigation system, with several practical applications, such as autonomous vehicles, driver-assistance systems, or localization and mapping. This task has two main objectives: finding the path ahead of the vehicle and identifying obstacles on that path. This can be achieved with different types of sensors, whether active vision (lasers) or passive vision (cameras). Several challenges must be considered: a constantly changing environment, illumination changes, changes in terrain type, the presence of different obstacles, and potholes or puddles of water on the path. Moreover, the state of the art shows that the vast majority of works use as their setting roads in developed countries, in Europe or North America, which present little damage and few variations in the road surface. This work aims to use passive, image-based vision to identify a navigable path, taking into account variations in the terrain surface such as pavement type and damage. This thesis proposes the use of specific Computer Vision and Machine Learning techniques to solve these problems. As results of this work, two systematic literature reviews were produced, one focused on path detection and the other on obstacle detection. An image database for testing and validation was also created for the experiments, with more than 60 thousand frames. Using images from this new database, three new Ground Truths (GTs) were created: two GTs for image classification approaches, covering surface type and surface quality, and a third GT for an image segmentation approach, focused on segmenting the road and its characteristics. Annotations were also made on images from other databases, mainly for a GT focused on detecting objects with motion and depth information.
Three experimental approaches were considered. The first aimed to identify the characteristics of obstacles unrelated to the road surface, for example other vehicles and pedestrians. This approach used information obtained from an object detector built on the Mask R-CNN neural network architecture, and, based on the detected objects, distance and motion data were extracted using Stereo Vision and Optical Flow. In the second approach, focused on the road surface, a classifier of surface types and road quality was developed. The images were pre-processed by defining a Region of Interest (ROI) and replicated with illumination variations, implementing data augmentation to improve the results. A simple convolutional neural network (CNN) structure was then used to train and classify the images into surface-type and surface-quality labels. The third approach, also focused on the path surface, applied the surface-segmentation GT to a well-known CNN architecture (U-NET with ResNet) with per-class weight parameterization. The results obtained show that passive vision can be used for path detection on roads with surface variations and damage.
Chapter
Researchers have been inspired by nature to build the next generation of smart robots. Based on mechanisms adopted in the animal kingdom, research teams have developed solutions to common problems that autonomous robots face while performing basic tasks. Polarization-based behaviour is one of the most distinctive features of some species in the animal kingdom, and light polarization parameters significantly expand the visual capabilities of autonomous robots. Polarization vision can be used for most tasks of color vision, such as object recognition, contrast enhancement, camouflage breaking, and signal detection and discrimination. In this chapter, the authors briefly cover polarization-based visual behavior in the animal kingdom. They then go in depth into bio-inspired applications based on polarization in computer vision and robotics. The aim is to provide a comprehensive survey highlighting the key principles of polarization-based techniques and how they are biologically inspired.
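A core quantity in these polarization-based techniques — and the quantity the SALSA linear Stokes camera mentioned above reports — is the degree of linear polarization (DOP), computed from the linear Stokes parameters. A minimal sketch from four polarizer-filtered intensity measurements (a standard division-of-time scheme; function and variable names are illustrative):

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from intensities measured behind a
    linear polarizer oriented at 0, 45, 90, and 135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90                        # horizontal vs vertical
    s2 = i45 - i135                      # diagonal components
    return s0, s1, s2

def degree_and_angle(i0, i45, i90, i135, eps=1e-9):
    """Degree of linear polarization (DOP) and angle of polarization.
    Regions with markedly higher DOP than their surroundings are a
    candidate cue for water or mud."""
    s0, s1, s2 = linear_stokes(i0, i45, i90, i135)
    dop = np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + eps)
    aop = 0.5 * np.arctan2(s2, s1)
    return dop, aop
```

For fully horizontally polarized light the DOP is 1; for unpolarized light (all four intensities equal) it is 0.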
Article
An adaptive approach for road extraction inspired by the mechanism of the primary visual cortex (V1) is proposed. The motivation originates from the characteristics of receptive fields in V1. It has been shown that human and primate visual systems can distinguish useful cues in real scenes effortlessly, while traditional computer vision techniques cannot accomplish this task easily. This observation motivates us to design a bio-inspired model for road extraction from remote sensing imagery. The proposed approach is an improved support vector machine (SVM) based on the pooling of feature vectors, using an improved Gaussian radial basis function (RBF) kernel with tuning of synaptic gains. The synaptic gains, which represent the strength and width of the Gaussian RBF kernel, compose the feature vectors through an iterative optimization process and integrate excitatory and inhibitory stimuli based on internal connections in V1. The summation of synaptic gains contributes to the pooling of feature vectors. The experimental results verify the correlation between the synaptic gains and the classification rules, and show better performance in comparison with hidden Markov model, SVM, and fuzzy classification approaches. Our contribution is an automatic approach to road extraction without prelabeling or postprocessing. Another apparent advantage is that our method is robust for images taken even under complex weather conditions such as snow and fog.
Article
Full-text available
Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single-frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.
Article
Models of soil reflectance under changing moisture conditions are needed to better quantify soil and vegetation properties from remote sensing. In this study, measurements of reflected shortwave radiation (400–2500 nm) were acquired in a laboratory setting for four different soils at various moisture contents. The observed changes in soil reflectance revealed a nonlinear dependence on moisture that was well described by an exponential model, and was similar for different soil types when moisture was expressed as degree of saturation. Reflectance saturated at much lower moisture contents in the visible and near‐infrared (VNIR) spectral region than in the shortwave‐infrared (SWIR) spectral region, suggesting that longer wavelengths are better suited for measuring volumetric moisture contents above ∼20%. To explore the potential of a general reflectance model based solely on dry reflectance and moisture, we employed a Monte Carlo analysis that accounted for observed variability in the measured spectra. Modeling results indicated that the SWIR region offers significant potential for relating moisture and reflectance on an operational basis, with uncertainties less than half as large as in the VNIR. The results of this study help to quantify the strong influence of moisture on spectral reflectance and absorption features, and should aid in the development of operational algorithms as well as more physically based models in the future.
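The exponential moisture dependence the study reports can be sketched with one common parameterization, in which reflectance decays from its dry value toward a saturated value as the degree of saturation rises. The specific functional form, function names, and coefficients below are illustrative assumptions, not the study's fitted model:

```python
import numpy as np

def wet_reflectance(r_dry, r_sat, c, s):
    """Exponential soil reflectance model: reflectance decays from the
    dry-soil value r_dry toward the saturated value r_sat as degree of
    saturation s (0..1) rises. The decay coefficient c is band- and
    soil-dependent."""
    return r_sat + (r_dry - r_sat) * np.exp(-c * s)

def saturation_from_reflectance(r, r_dry, r_sat, c):
    """Invert the model to estimate degree of saturation from a measured
    reflectance (valid while r lies strictly between r_sat and r_dry)."""
    return -np.log((r - r_sat) / (r_dry - r_sat)) / c
```

A round trip illustrates the inversion: a soil with r_dry = 0.5, r_sat = 0.1, c = 3 at 40% saturation reflects about 0.22, from which the saturation is recovered exactly.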
Conference Paper
Detecting water hazards for autonomous, off-road navigation of unmanned ground vehicles is a largely unexplored problem. In this paper, we catalog environmental variables that affect the difficulty of this problem, including day vs. night operation, whether the water reflects sky or other terrain features, the size of the water body, and other factors. We briefly survey sensors that are applicable to detecting water hazards in each of these conditions. We then present analyses and results for water detection for four specific sensor cases: (1) using color image classification to recognize sky reflections in water during the day, (2) using ladar to detect the presence of water bodies and to measure their depth, (3) using short-wave infrared (SWIR) imagery to detect water bodies, as well as snow and ice, and (4) using mid-wave infrared (MWIR) imagery to recognize water bodies at night. For color imagery, we demonstrate solid results with a classifier that runs at nearly video rate on a 433 MHz processor. For ladar, we present a detailed propagation analysis that shows the limits of water body detection and depth estimation as a function of lookahead distance, water depth, and ladar wavelength. For SWIR and MWIR, we present sample imagery from a variety of data collections that illustrate the potential of these sensors. These results demonstrate significant progress on this problem.
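For the color-imagery case (1), the cue is that water reflecting sky appears bright and blue-dominant. A toy rule-based sketch of that cue follows; the paper uses a trained classifier, and the function name and thresholds here are illustrative assumptions:

```python
import numpy as np

def sky_reflection_candidates(rgb, brightness_min=0.5, blue_margin=0.05):
    """Flag pixels whose color resembles reflected sky: bright overall
    and with blue exceeding red and green by a margin. Illustrative
    thresholds, not the paper's trained classifier."""
    rgb = np.asarray(rgb, dtype=np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    brightness = rgb.mean(axis=-1)
    return (brightness > brightness_min) & (b > r + blue_margin) & (b > g + blue_margin)

pixels = np.array([[0.40, 0.50, 0.70],   # bright bluish: sky-reflection candidate
                   [0.50, 0.40, 0.30]])  # brownish soil
print(sky_reflection_candidates(pixels))
```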
Article
In this technical note the ability to estimate surface soil moisture (θ) from soil color using image analysis is evaluated. Four natural soils and uniform fine sand were used. Calibration soil samples with θ varying from 0 to 0.40 m³ m⁻³ in 0.05 m³ m⁻³ increments were prepared and photographed. The variations in soil color with θ were investigated in both the RGB (red, green, and blue) and HSV (hue, saturation, and value) color spaces. Generally, all tested soils became [text truncated]. This method has proven very useful for detecting preferential flow paths in the soil. Traditionally, image analysis of the dye photographs has only involved separation between stained and nonstained soil. However, during the 1990s, image analysis improved to the extent that estimation of dye concentration from soil color was possible (Forrer, 1997; Ewing and Horton, 1999). The image analysis involves several corrections of, for example, [text truncated]
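The color-space side of such a calibration can be sketched with the standard RGB-to-HSV conversion from the Python standard library. The qualitative link (wet soil is darker, so the value channel V drops) is the kind of cue this note calibrates; the exact mapping from color to moisture is soil-specific, and the function name below is an illustrative assumption:

```python
import colorsys

def soil_pixel_hsv(r, g, b):
    """Convert an 8-bit RGB soil pixel to HSV (each component in [0, 1]).
    Wet soil is darker than dry soil, so the value channel V in
    particular tends to fall as moisture increases."""
    return colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

# A darker pixel of similar hue reads as lower V (a moisture cue).
dry_hsv = soil_pixel_hsv(160, 120, 80)
wet_hsv = soil_pixel_hsv(100, 75, 50)
print(dry_hsv[2] > wet_hsv[2])  # dry soil is brighter
```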
Article
Soil water content (WC) affects the accuracy of visible (VIS) and near-infrared (NIR) spectroscopic measurement of other soil properties, for example C, N, and other nutrients. This study was conducted to subtract the WC contribution to VIS-NIR spectra by classifying soil spectra into different WC groups. This classification might improve the accuracy of prediction of other soil properties with calibration models established separately for each WC group. A mobile, fiber-type VIS-NIR spectrophotometer (Zeiss Corona 1.7 visnir fiber), with a measurement range of 306.5 to 1710.9 nm, was used to measure the light reflectance of two sample sets: one (275 samples) collected from a single field and the other (360 samples) collected from multiple fields in Belgium and northern France. Partial least squares (PLS) regression analysis and factorial discriminant analysis (FDA) were applied to the VIS-NIR spectra to quantify WC and classify spectra into different WC groups, respectively. Samples were divided into calibration and validation sets with ratios of 10:1 and 3:1 for the PLS and FDA, respectively. The PLS for the single-field sample set provided better estimation of WC (R² = 0.98) than for the multiple-field sample set (R² = 0.88). For the single-field sample set, spectra were successfully classified into six WC groups with correct classification (CC) of 94.1 and 95.6% for the calibration and validation datasets, respectively. Due to the large variability in the multiple-field sample set, soils were successfully classified into only three WC groups. The CC obtained were 88.1 and 79.7% for the calibration and validation sets, respectively. These results suggest that FDA can be successfully used to classify soil VIS-NIR spectra into different WC levels, particularly when soil variability is minimal.
Conference Paper
Unmanned Aerial Vehicle (UAV) capabilities are evolving rapidly, from technical, regulatory, and operational standpoints. It is likely that these platforms will begin to offer new alternatives for agricultural and other applications needing high-spatial-resolution data delivered in near real time. This paper presents an overview of the Microwave Autonomous Copter System (MACS) currently under development at the Center for Hydrology, Soil Climatology and Remote Sensing (HSCaRS), Alabama A&M University. An L-band (1.4 GHz), horizontal-polarization radiometer is one of the sensors that has been proposed for MACS. The UAV helicopter system will be used for monitoring the temporal changes of soil moisture as a function of depth, even in the presence of vegetative cover. These measurements could greatly improve our understanding of soil moisture under vegetation cover, which is needed to complete algorithms for global energy and water balance products used to examine variations in weather and climate. The paper describes the UAV helicopter, the microwave system, and the current status of the development.
Conference Paper
Surface moisture is one of the most important parameters in modeling water-plant coupled land surface processes and land-air coupled climatic systems. The use of remotely sensed data is potentially of great interest in such a context. Several methods have been proposed to estimate soil moisture conditions, but their limitations are evident. In this study, we developed a method to detect soil moisture conditions using surface temperature (Ts) and a vegetation index (NDVI) derived from Landsat ETM+ data. We then applied this method to a semiarid area and mapped the spatial distribution of soil moisture. We further compared the soil moisture distribution with the spatial distribution of desertification and found that desertification in semiarid areas has a great effect on soil moisture conditions.
Article
The requirements of sensors to monitor soil water content from mobile agricultural machinery are reviewed. These sensors can be divided into the two categories of contact and non-contact sensing. The applications of both non-contact and contact sensors are discussed and the developments towards designing such sensors are considered. Most progress has been made in developing contact sensing systems.
Article
This paper presents an approach for slip prediction from a distance for wheeled ground robots using visual information as input. Large amounts of slippage, which can occur on certain surfaces such as sandy slopes, will negatively affect rover mobility. Therefore, obtaining information about slip before entering such terrain can be very useful for better planning and avoiding these areas. To address this problem, terrain appearance and geometry information about map cells are correlated to the slip measured by the rover while traversing each cell. This relationship is learned from previous experience, so slip can be predicted remotely from visual information only. The proposed method consists of terrain type recognition and nonlinear regression modeling. The method has been implemented and tested offline on several off-road terrains including soil, sand, gravel, and woodchips. The final slip prediction error is about 20%. The system is intended for improved navigation on steep slopes and rough terrain for Mars rovers. © 2007 Wiley Periodicals, Inc.
Article
Numerical modelling results are reported from a pilot study investigating the feasibility of developing a technique for daily soil moisture measurement throughout the world, based on GOES infrared data. A detailed one-dimensional boundary layer-surface-soil model was used in order to determine which physical parameters observable from GOES are most sensitive to soil moisture, and which are most affected by seasonal changes, atmospheric effects, and vegetation cover. The results of the sensitivity test show that the mid-morning differential of surface temperature with respect to absorbed solar radiation is optimally sensitive to soil moisture. A case study comparing model results with GOES infrared data confirms the sensitivity of this parameter to soil moisture and also confirms the applicability of the model to predicting area-averaged surface temperature changes. Model measurements of soil moisture are expected to be most accurate for dry or marginal agricultural areas where drought is common. Sources of error, including the advection of clouds, are examined, and methods of minimizing error are discussed.
Article
Red and near-infrared satellite data from the Advanced Very High Resolution Radiometer sensor have been processed over several days and combined to produce spatially continuous cloud-free imagery over large areas with sufficient temporal resolution to study green-vegetation dynamics. The technique minimizes cloud contamination, reduces directional reflectance and off-nadir viewing effects, minimizes sun-angle and shadow effects, and minimizes aerosol and water-vapor effects. The improvement is highly dependent on the state of the atmosphere, the surface-cover type, and the viewing and illumination geometry of the sun, target, and sensor. An example from southern Africa showed an increase of 40 percent from individual image values to the final composite image. Limitations associated with the technique are discussed, and recommendations are given to improve this approach.
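The multi-day compositing described here — keeping, per pixel, the observation least affected by clouds and atmosphere — is commonly realized as a maximum value composite over a vegetation index: clouds, haze, and off-nadir effects mostly depress NDVI, so the per-pixel maximum favors clear, near-nadir views. A minimal sketch (the (days, rows, cols) array layout is an assumption):

```python
import numpy as np

def max_value_composite(ndvi_stack):
    """Per-pixel maximum over a (days, rows, cols) stack of NDVI images.
    Cloud, aerosol, and water-vapor contamination mostly lower NDVI,
    so the maximum tends to select the clearest observation of each pixel."""
    return np.max(np.asarray(ndvi_stack), axis=0)

day1 = np.array([[0.1, 0.6], [0.5, 0.2]])   # partly cloudy scene
day2 = np.array([[0.7, 0.2], [0.4, 0.65]])  # clouds have moved
print(max_value_composite([day1, day2]))
```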
Article
An empirical algorithm for the retrieval of soil moisture content and surface root mean square (RMS) height from remotely sensed radar data was developed using scatterometer data. The algorithm is optimized for bare surfaces and requires two copolarized channels at a frequency between 1.5 and 11 GHz. It gives best results for kh ≤ 2.5, mv ≤ 35%, and θ ≥ 30°. Omitting the usually weaker hv-polarized returns makes the algorithm less sensitive to system cross-talk and system noise, simplifies the calibration process, and adds robustness in the presence of vegetation. However, inversion results indicate that significant amounts of vegetation (NDVI > 0.4) cause the algorithm to underestimate soil moisture and overestimate RMS height. A simple criterion based on the σ⁰hv/σ⁰vv ratio is developed to select the areas where the inversion is not impaired by the vegetation. The inversion accuracy is assessed on the original scatterometer data sets and also on several SAR data sets by comparing the derived soil moisture values with in-situ measurements collected over a variety of scenes between 1991 and 1994. Both spaceborne (SIR-C) and airborne (AIRSAR) data are used in the test. Over this large sample of conditions, the RMS error in the soil moisture estimate is found to be less than 4.2% soil moisture.