Jeffrey A. Edlund’s research while affiliated with California Institute of Technology and other places


Publications (82)


Robust High-Speed State Estimation for Off-Road Navigation Using Radar Velocity Factors
  • Article

December 2024 · 16 Reads · IEEE Robotics and Automation Letters

Jeffrey A. Edlund · Patrick Spieler · [...]

Enabling robot autonomy in complex environments for mission-critical applications requires robust state estimation, particularly under conditions where the exteroceptive sensors that navigation depends on can be degraded by environmental challenges, leading to mission failure. It is precisely in such conditions that the potential of Frequency Modulated Continuous Wave (FMCW) radar sensors is highlighted: a complementary exteroceptive sensing modality with direct velocity-measuring capabilities. In this work we integrate radial speed measurements from an FMCW radar sensor, using a radial speed factor, to provide linear velocity updates to a sliding-window state estimator for fusion with LiDAR pose and IMU measurements. We demonstrate that this augmentation increases the robustness of the state estimator to challenging environmental conditions and the negative effects they can have on vulnerable exteroceptive modalities. The proposed method is extensively evaluated in robotic field experiments using an autonomous, full-scale, off-road vehicle operating at high speeds (~12 m/s) in complex desert environments. Furthermore, the robustness of the approach is demonstrated for both simulated and real-world degradation of LiDAR odometry performance, along with comparison against state-of-the-art methods for radar-inertial odometry on public datasets.
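The core idea behind a radial speed factor can be sketched in a few lines: a Doppler return from a static target constrains the projection of the ego-velocity onto the direction of that target. The snippet below is a toy illustration, not the paper's implementation; the function name, sign convention, and frames are assumptions for the sketch.

```python
import numpy as np

def radial_speed_residual(v_body, d_target, v_r_measured):
    """Residual for a single radar radial-speed factor (illustrative only).

    v_body        : (3,) estimated linear velocity in the sensor frame
    d_target      : (3,) unit direction from the radar to a detected target
    v_r_measured  : measured radial (Doppler) speed of that target

    For a static target, the expected radial speed is the projection of the
    ego-velocity onto the target direction, with a sign flip because a
    sensor moving toward the target sees it closing.
    """
    v_r_expected = -np.dot(d_target, v_body)
    return v_r_expected - v_r_measured

# Example: driving forward at 12 m/s toward a target straight ahead.
v_body = np.array([12.0, 0.0, 0.0])
d_target = np.array([1.0, 0.0, 0.0])
print(radial_speed_residual(v_body, d_target, -12.0))  # -> 0.0
```

In a sliding-window smoother, residuals of this form (one per radar detection) would be minimized jointly with LiDAR pose and IMU preintegration residuals, so the radar directly anchors the velocity states even when LiDAR odometry degrades.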


The class remapping amongst datasets to maintain consistency with our existing multi-biome dataset.
Few-shot Semantic Learning for Robust Multi-Biome 3D Semantic Mapping in Off-Road Environments
  • Preprint
  • File available

November 2024 · 7 Reads

Off-road environments pose significant perception challenges for high-speed autonomous navigation due to unstructured terrain, degraded sensing conditions, and domain shifts among biomes. Learning semantic information across these conditions and biomes can be challenging when a large amount of ground truth data is required. In this work, we propose an approach that leverages a pre-trained Vision Transformer (ViT) with fine-tuning on a small (<500 images), sparse and coarsely labeled (<30% pixels) multi-biome dataset to predict 2D semantic segmentation classes. These classes are fused over time via a novel range-based metric and aggregated into a 3D semantic voxel map. We demonstrate zero-shot out-of-biome 2D semantic segmentation on the Yamaha (52.9 mIoU) and Rellis (55.5 mIoU) datasets, along with few-shot coarse sparse labeling with existing data for improved segmentation performance on Yamaha (66.6 mIoU) and Rellis (67.2 mIoU). We further illustrate the feasibility of using a voxel map with a range-based semantic fusion approach to handle common off-road hazards like pop-up hazards, overhangs, and water features.
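One way to picture range-based fusion of 2D semantic labels into a 3D voxel map is as range-weighted voting per voxel: near observations count for more than distant ones. The sketch below assumes a simple linear falloff with a 50 m cutoff; the weighting, voxel keys, and class names are made up for illustration and may differ from the paper's actual metric.

```python
from collections import defaultdict

def fuse_semantic_observation(voxel_scores, voxel_key, class_id, range_m,
                              max_range=50.0):
    """Accumulate a range-weighted vote for `class_id` in one voxel.

    Closer observations are trusted more: the weight falls off linearly
    with range and clips to zero beyond `max_range`. The voxel's class is
    the argmax over accumulated scores.
    """
    weight = max(0.0, 1.0 - range_m / max_range)
    voxel_scores[voxel_key][class_id] += weight
    return max(voxel_scores[voxel_key], key=voxel_scores[voxel_key].get)

voxel_scores = defaultdict(lambda: defaultdict(float))
# Two distant "water" votes vs. one close "trail" vote for the same voxel:
fuse_semantic_observation(voxel_scores, (4, 2, 0), "water", 45.0)
fuse_semantic_observation(voxel_scores, (4, 2, 0), "water", 48.0)
print(fuse_semantic_observation(voxel_scores, (4, 2, 0), "trail", 5.0))  # -> trail
```

Down-weighting long-range labels like this helps because distant pixels cover more terrain per pixel and are more likely to be misclassified, which matters for hazards such as water that must not be overwritten by noisy far-field votes.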


Robust High-Speed State Estimation for Off-road Navigation using Radar Velocity Factors

September 2024 · 15 Reads

Enabling robot autonomy in complex environments for mission-critical applications requires robust state estimation, particularly under conditions where the exteroceptive sensors that navigation depends on can be degraded by environmental challenges, leading to mission failure. It is precisely in such conditions that the potential of FMCW radar sensors is highlighted: a complementary exteroceptive sensing modality with direct velocity-measuring capabilities. In this work we integrate radial speed measurements from an FMCW radar sensor, using a radial speed factor, to provide linear velocity updates to a sliding-window state estimator for fusion with LiDAR pose and IMU measurements. We demonstrate that this augmentation increases the robustness of the state estimator to challenging environmental conditions and the negative effects they can have on vulnerable exteroceptive modalities. The proposed method is extensively evaluated in robotic field experiments using an autonomous, full-scale, off-road vehicle operating at high speeds (~12 m/s) in complex desert environments. Furthermore, the robustness of the approach is demonstrated for both simulated and real-world degradation of LiDAR odometry performance, along with comparison against state-of-the-art methods for radar-inertial odometry on public datasets.



An Addendum to NeBula: Toward Extending Team CoSTAR’s Solution to Larger Scale Environments

January 2024 · 50 Reads · 4 Citations

This paper presents an appendix to the original NeBula autonomy solution [Agha et al., 2021] developed by TEAM CoSTAR (Collaborative SubTerranean Autonomous Robots), participating in the DARPA Subterranean Challenge. Specifically, this paper presents extensions to NeBula's hardware, software, and algorithmic components that focus on increasing the range and scale of the exploration environment. From the algorithmic perspective, we discuss the following extensions to the original NeBula framework: (i) large-scale geometric and semantic environment mapping; (ii) an adaptive positioning system; (iii) probabilistic traversability analysis and local planning; (iv) large-scale POMDP-based global motion planning and exploration behavior; (v) large-scale networking and decentralized reasoning; (vi) communication-aware mission planning; and (vii) multi-modal ground-aerial exploration solutions. We demonstrate the application and deployment of the presented systems and solutions in various large-scale underground environments, including limestone mine exploration scenarios as well as deployment in the DARPA Subterranean Challenge.




Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data

October 2022 · 142 Reads

Autonomous driving is complex, requiring sophisticated 3D scene understanding, localization, mapping, and control. Rather than explicitly modelling and fusing each of these components, we instead consider an end-to-end approach via reinforcement learning (RL). However, collecting exploratory driving data in the real world is impractical and dangerous. While training in simulation and deploying visual sim-to-real techniques has worked well for robot manipulation, deploying beyond controlled workspace viewpoints remains a challenge. In this paper, we address this challenge by presenting Sim2Seg, a re-imagining of RCAN that crosses the visual reality gap for off-road autonomous driving without using any real-world data. This is done by learning to translate randomized simulation images into simulated segmentation and depth maps, subsequently enabling real-world images to be translated as well. This allows us to train an end-to-end RL policy in simulation and deploy it directly in the real world. Our approach, which can be trained in 48 hours on 1 GPU, performs as well as a classical perception and control stack that took thousands of engineering hours over several months to build. We hope this work motivates future end-to-end autonomous driving research.
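The deployment-time structure described above (real image → canonical segmentation/depth → driving policy) can be sketched as follows. The stand-in translator and policy below are placeholders, not the paper's trained networks; the point is only that the policy consumes a domain-invariant observation rather than raw pixels, which is why the same policy weights transfer from simulation to the real world.

```python
import numpy as np

def deploy_step(rgb_image, translate, policy):
    """One control step of a Sim2Seg-style pipeline (illustrative).

    `translate` maps a real RGB frame into the canonical segmentation/depth
    space the policy was trained on in simulation; `policy` maps that
    canonical observation to a driving action. The policy never sees raw
    pixels, so the visual reality gap is absorbed by the translator.
    """
    canonical = translate(rgb_image)   # domain-invariant observation
    return policy(canonical)           # e.g. (steering, throttle)

# Stand-in networks: a fake "translator" and a trivial thresholding policy.
translate = lambda img: img.mean(axis=-1, keepdims=True)
policy = lambda obs: (0.0, float(obs.mean() > 0.5))

action = deploy_step(np.random.rand(64, 64, 3), translate, policy)
print(action)
```

In the real system both callables would be neural networks, with the translator trained on randomized simulation images so that real imagery falls inside its input distribution.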


Fig. 2. Diagram of software architecture for inter-robot comms. Comms to and from the base is similar.
Fig. 3. Information RoadMap (IRM) constructed collaboratively during exploration of the DARPA SubT Finals competition environment. The environment state is indicated by green, orange, and red comms checkpoints which represent strong, weak, and no comms, respectively.
Fig. 4. Connectivity map of the DARPA SubT Finals competition environment based on predicted signal strength, which can inform autonomy and augment situational awareness for the human supervisor.
Fig. 8. Size of the data reporter buffer (summed over all key and mission-critical topics). The top graph depicts results from Day 1 with the data classification from CHORD and the bottom graph depicts results from Day 2 with the new data classification.
ACHORD: Communication-Aware Multi-Robot Coordination With Intermittent Connectivity

October 2022 · 183 Reads · 30 Citations · IEEE Robotics and Automation Letters

Communication is an important capability for multi-robot exploration because (1) inter-robot communication (comms) improves coverage efficiency and (2) robot-to-base comms improves situational awareness. Exploring comms-restricted (e.g., subterranean) environments requires a multi-robot system to tolerate and anticipate intermittent connectivity, and to carefully consider comms requirements, otherwise mission-critical data may be lost. In this paper, we describe and analyze ACHORD (Autonomous & Collaborative High-Bandwidth Operations with Radio Droppables), a multi-layer networking solution which tightly co-designs the network architecture and high-level decision-making for improved comms. ACHORD provides bandwidth prioritization and timely and reliable data transfer despite intermittent connectivity. Furthermore, it exposes low-layer networking metrics to the application layer to enable robots to autonomously monitor, map, and extend the network via droppable radios, as well as restore connectivity to improve collaborative exploration. We evaluate our solution with respect to the comms performance in several challenging underground environments, including the DARPA SubT Finals competition environment. Our findings support the use of data stratification and flow control to improve bandwidth usage.
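The bandwidth-prioritization idea, draining the most mission-critical topics first whenever a connectivity window opens, can be sketched with a simple priority queue. This is a toy model under assumed message sizes and topic names, not ACHORD's actual flow-control logic.

```python
import heapq

def drain_buffer(messages, link_budget_bytes):
    """Send highest-priority messages first until the link budget is spent.

    `messages` is a list of (priority, size_bytes, topic) tuples, where a
    lower priority value means more critical (e.g., pose telemetry before
    bulk map data). Messages that do not fit stay buffered for the next
    connectivity window.
    """
    heapq.heapify(messages)
    sent, remaining = [], link_budget_bytes
    while messages and messages[0][1] <= remaining:
        _prio, size, topic = heapq.heappop(messages)
        sent.append(topic)
        remaining -= size
    return sent

buffer = [(0, 100, "robot_pose"), (2, 5000, "map_slice"), (1, 300, "artifact_report")]
print(drain_buffer(buffer, 600))  # -> ['robot_pose', 'artifact_report']
```

Stratifying data this way is what lets a robot that reconnects only briefly still deliver the small, mission-critical reports before attempting large transfers like map slices.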



Citations (56)


... The problem of autonomous robot exploration as the primary objective has been widely studied [4], [5], [6], [7], [8], [9]. ...

Reference:

A Multi-Robot Exploration Planner for Space Applications
An Addendum to NeBula: Toward Extending Team CoSTAR’s Solution to Larger Scale Environments
  • Citing Article
  • January 2024

... Radar-to-LiDAR registration was also explored in [19], where the authors developed an end-to-end machine learning method for performing localization through registration of a spinning radar point cloud on a LiDAR map. Our previous work [20] investigated radar-LiDAR fusion for increased robustness in off-road environments utilizing a sliding-window smoother, with the disadvantage of only adding velocity information in the body-frame forward direction. In [21], the authors proposed a method for integrating LiDAR features in a factor graph smoother with radar least squares linear velocity estimates (both with the same measurement frequency) for improving the method's performance in environments with geometric self-similarity or dense fog. ...

ROAMER: Robust Offroad Autonomy using Multimodal State Estimation with Radar Velocity Integration
  • Citing Conference Paper
  • March 2024

... Any realistic sampling strategy needs to be able to make decisions under uncertainty and rapidly adapt to the environment in order to maximize the scientific return from the mission. To address these challenges, recent efforts have focused on the development of capabilities for autonomous excavation site selection [9], [10], autonomy software prototype to execute complex and highly constrained missions with limited human intervention [11], and field testing the functional autonomy stack in terrestrial analog environments [12]. Specifically, our prior work [10] proposed Deep Meta-Learning with Controlled Deployment Gaps (CoDeGa), an adaptive scooping strategy that uses deep Gaussian processes trained with a novel meta-learning approach. ...

Demonstration of Autonomous Sampling Techniques in an Icy Moon Terrestrial Analog
  • Citing Conference Paper
  • March 2023

... Simultaneous Localization and Mapping (SLAM) is a cornerstone of modern robotics, with applications ranging from self-driving vehicles [1] to autonomous subterranean inspection missions [2], and search and rescue operations in challenging environments [3], [4]. As highlighted throughout the literature [5]- [7], the key components for successful SLAM in long-term and large-scale missions are the accurate pose estimation in the front-end and the addition of loop closures, through place recognition (PR) in the back-end, which helps to mitigate any accumulated errors during pose estimation. ...

NeBula: TEAM CoSTAR’s Robotic Autonomy Solution that Won Phase II of DARPA Subterranean Challenge

Field Robotics

... A socket is a connection between two computers over a given network for the purpose of sharing resources such as text, files, and images, among others. A socket is an endpoint that allows these resources to be transmitted or received [16]. ...

ACHORD: Communication-Aware Multi-Robot Coordination With Intermittent Connectivity

IEEE Robotics and Automation Letters

... Predicting acoustic communication performance is a topic that has received limited attention in the literature. The authors in [2], [5]-[8] employ Gaussian processes to predict received signal strength (RSS) for evaluating communication performance. In our prior work [9], [10], we modeled the signal-to-noise ratio (SNR) of successful acoustic communication events using a Gaussian process (GP). ...

PropEM-L: Radio Propagation Environment Modeling and Learning for Communication-Aware Multi-Robot Exploration
  • Citing Conference Paper
  • June 2022

... The FTS (Force-Torque Sensor) system is a force sensor located on the RA, between the turret itself and the turret joint. The readings from the sensor are used to determine the required amount of preload force during drilling / ACA docking operations, and also to detect potential unexpected collisions between the turret and its surroundings (the ground or the rover itself) (Moeller et al., 2021; Schaler et al., 2021). Beyond task-specific design requirements, structural considerations such as material strength, weight, and thermal requirements must also be taken into account; more details are discussed in the following section on structural analysis. ...

Two-stage calibration of a 6-axis force-torque sensor for robust operation in the Mars 2020 robot arm
  • Citing Article
  • September 2021

... Multiple teams participated in the challenge with different robot formations and autonomy solutions [4], [14]-[18]. However, the design of the robot formation and the operation strategy has received less focus in the literature than the more imminent challenges in modular autonomy modules, such as communication [19], [20], SLAM [21]-[23], global planning [11], [24], and traversability analysis [25]. In this work, we model the operation formally to infer an optimal deployment strategy and team formation in a principled way. ...

CHORD: Distributed Data-sharing via Hybrid ROS 1 and 2 for Multi-robot Exploration of Large-scale Complex Environments
  • Citing Article
  • February 2021

IEEE Robotics and Automation Letters

... Semantic representation for environment can be used to abstract raw environment data in terms of traversability [16], robot behaviors [17], [18], or connectivity in communication [19]. The semantics-based approach in the representation allows robots to play in operator-friendly contexts [20]. ...

Traversability-Aware Signal Coverage Planning for Communication Node Deployment in Planetary Cave Exploration

... The objective of minimizing perturbations is important for adversarial examples, as human agents are often assigned to supervise machine learning components (Kaszas and Roberts 2023; Otsu et al. 2020; Schiaretti et al. 2017; Han et al. 2022) (e.g., an airport agent manually reviewing an object detection algorithm's output on an X-ray image). For example, several researchers demonstrated that physical objects, such as clothing, can contain adversarial noise, causing a failure in an object detection model's ability (Thys et al. 2019; Wu et al. 2020). ...

Supervised Autonomy for Communication-degraded Subterranean Exploration by a Robot Team
  • Citing Conference Paper
  • March 2020