Ji Zhang

Carnegie Mellon University · Robotics Institute

PhD in Robotics

About

61 Publications · 78,447 Reads · 5,493 Citations
Introduction
Robot Navigation, Perception and Localization, Lidar Mapping, Computer Vision

Publications (61)
Article
In the era of advancing autonomous driving and increasing reliance on geospatial information, high-precision mapping demands not only accuracy but also flexible construction. Current approaches mainly rely on expensive mapping devices, which are time-consuming for city-scale map construction and vulnerable to erroneous data associations without acc...
Article
Multi-agent exploration of a bounded 3D environment with the unknown initial poses of agents is a challenging problem. It requires both quickly exploring the environment and robustly merging the sub-maps built by the agents. Most existing exploration strategies directly merge two sub-maps built by different agents when a single frame observation i...
Article
Full-text available
The camera is an attractive device for use in beyond-visual-line-of-sight drone operation since cameras are small in size and low in weight, power, and cost. However, state-of-the-art visual localization algorithms have trouble matching visual data that have significantly different appearances due to changes in illumination or viewpoint. This paper presents iS...
Preprint
Full-text available
To navigate in an environment safely and autonomously, robots must accurately estimate where obstacles are and how they move. Instead of using expensive traditional 3D sensors, we explore the use of a much cheaper, faster, and higher resolution alternative: programmable light curtains. Light curtains are a controllable depth sensor that sense only...
Article
Full-text available
Autonomous robot navigation in austere environments is critical to missions like “search and rescue”, yet it remains difficult to achieve. The recent DARPA Subterranean Challenge (SubT) highlights prominent challenges including GPS-denied navigation through rough terrains, rapid exploration in large-scale three-dimensional (3D) space, and interrobo...
Preprint
Full-text available
Multi-agent exploration of a bounded 3D environment with unknown initial positions of agents is a challenging problem. It requires quickly exploring the environment as well as robustly merging the sub-maps built by the agents. We take the view that the existing approaches are either aggressive or conservative: Aggressive strategies merge two sub-m...
Preprint
Full-text available
The visual camera is an attractive device in beyond visual line of sight (B-VLOS) drone operation, since cameras are small in size and low in weight, power, and cost, and can provide a redundant modality against GPS failures. However, state-of-the-art visual localization algorithms are unable to match visual data that have a significantly different appearance due to ill...
Preprint
Under shared autonomy, wheelchair users expect vehicles to provide safe and comfortable rides while following users' high-level navigation plans. To find such a path, vehicles negotiate different terrains and assess their traversal difficulty. Most prior works model surroundings either through geometric representations or semantic classificatio...
Preprint
Full-text available
We present the ALTO dataset, a vision-focused dataset for the development and benchmarking of Visual Place Recognition and Localization methods for Unmanned Aerial Vehicles. The dataset is composed of two long (approximately 150 km and 260 km) trajectories flown by a helicopter over Ohio and Pennsylvania, and it includes high precision GPS-INS ground...
Preprint
Full-text available
We present AutoMerge, a LiDAR data processing framework for assembling a large number of map segments into a complete map. Traditional large-scale map merging methods are fragile to incorrect data associations, and are primarily limited to working only offline. AutoMerge utilizes multi-perspective fusion and adaptive loop closure detection for accu...
Preprint
Full-text available
For long-term autonomy, most place recognition methods are mainly evaluated on simplified scenarios or simulated datasets, which cannot provide solid evidence to evaluate the readiness of current Simultaneous Localization and Mapping (SLAM). In this paper, we present a long-term place recognition dataset for use in mobile localization under large-...
Preprint
Full-text available
Appearance-based visual localization (AVL) is an approach that aligns the visual image against previously saved target images for robotics navigation. Current visual localization methods are easily affected by viewpoint (forward, backward) and environmental condition (illumination, weather) changes, and remain fragile for long-term localization,...
Preprint
Full-text available
Autonomous Exploration Development Environment is an open-source repository released to facilitate the development of high-level planning algorithms and integration of complete autonomous navigation systems. The repository contains representative simulation environment models, fundamental navigation modules, e.g., local planner, terrain traversabil...
Preprint
We present our work on a fast route planner based on a visibility graph. The method extracts edge points around obstacles in the environment to form polygons, with which it dynamically updates a global visibility graph, expanding the graph as navigation proceeds and removing edges that become occluded by dynamic obstacles. When...
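As a minimal, illustrative sketch of the underlying idea only, the fragment below builds a static visibility graph over polygonal obstacle vertices and runs Dijkstra on it; the edge-point extraction and dynamic graph updates described above are not modeled, and all names and the toy obstacle are hypothetical.

```python
# Hypothetical sketch: shortest path over a 2D visibility graph (Dijkstra).
# Obstacles are convex polygons given as vertex lists; extracting polygons
# from sensor data and updating the graph online are not modeled here.
import heapq
import math

def segments_intersect(p, q, a, b):
    """True if segment p-q properly crosses segment a-b."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2 = cross(a, b, p), cross(a, b, q)
    d3, d4 = cross(p, q, a), cross(p, q, b)
    return d1 * d2 < 0 and d3 * d4 < 0

def visible(p, q, polygons):
    """A straight segment is traversable if it crosses no obstacle edge."""
    return not any(
        segments_intersect(p, q, poly[i], poly[(i + 1) % len(poly)])
        for poly in polygons for i in range(len(poly)))

def plan(start, goal, polygons):
    nodes = [start, goal] + [v for poly in polygons for v in poly]
    # Connect every mutually visible pair of nodes with its Euclidean length.
    graph = {i: [] for i in range(len(nodes))}
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if visible(nodes[i], nodes[j], polygons):
                w = math.dist(nodes[i], nodes[j])
                graph[i].append((j, w))
                graph[j].append((i, w))
    # Dijkstra from start (index 0) to goal (index 1).
    dist, prev = {0: 0.0}, {}
    pq = [(0.0, 0)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == 1 or d > dist.get(u, math.inf):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, math.inf):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    if 1 not in prev:
        return []                      # goal unreachable
    path, u = [goal], prev[1]
    while u != 0:
        path.append(nodes[u])
        u = prev[u]
    return [start] + path[::-1]

# Toy example: route around a square obstacle blocking the straight line.
print(plan((0, 0), (10, 0), [[(4, -2), (6, -2), (6, 2), (4, 2)]]))
```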
Conference Paper
Full-text available
We present a method for autonomous exploration in complex three-dimensional (3D) environments. Our method demonstrates faster exploration than the current state of the art using a hierarchical framework: one level maintains data densely and computes a detailed path within a local planning horizon, while another level maintains data sparsely and comp...
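A toy sketch of the two-level structure described above, under the assumption that a greedy nearest-first ordering stands in for both the dense local coverage planner and the sparse global tour solver; all function names, parameters, and coordinates are illustrative.

```python
# Hypothetical two-layer exploration step: a dense layer orders nearby
# frontier points inside a local horizon, and a sparse layer appends a
# coarse tour over far-away subspaces that still contain frontiers.
import math

def plan_local_coverage(robot, frontiers, horizon):
    """Dense layer: greedily order frontier points within the horizon."""
    local = [f for f in frontiers if math.dist(robot, f) <= horizon]
    path, cur = [], robot
    while local:
        nxt = min(local, key=lambda f: math.dist(cur, f))
        path.append(nxt)
        local.remove(nxt)
        cur = nxt
    return path

def plan_global_tour(start, subspace_centers):
    """Sparse layer: coarse greedy tour over subspace centroids."""
    tour, cur, remaining = [], start, list(subspace_centers)
    while remaining:
        nxt = min(remaining, key=lambda c: math.dist(cur, c))
        tour.append(nxt)
        remaining.remove(nxt)
        cur = nxt
    return tour

def exploration_step(robot, frontiers, subspace_centers, horizon=5.0):
    local_path = plan_local_coverage(robot, frontiers, horizon)
    anchor = local_path[-1] if local_path else robot
    # Detailed path inside the horizon, coarse tour beyond it.
    return local_path + plan_global_tour(anchor, subspace_centers)

print(exploration_step((0, 0), [(1, 1), (2, 3), (9, 9)], [(10, 8), (12, 3)]))
```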
Conference Paper
Full-text available
This paper describes a novel framework for autonomous exploration in large and complex environments. We show that the framework is efficient as a result of its hierarchical structure, where one level maintains a sparse representation of the environment while another level uses a dense representation within a local planning horizon around...
Preprint
Full-text available
We present a method for localizing a single camera with respect to a point cloud map in indoor and outdoor scenes. The problem is challenging because correspondences of local invariant features are inconsistent across the image and 3D domains. The problem is even more challenging as the method must handle various environmental conditions su...
Article
Full-text available
Real-time 3D place recognition is a crucial technology to recover from localization failure in applications like autonomous driving, last-mile delivery, and service robots. However, it is challenging for 3D place retrieval methods to be accurate, efficient, and robust to viewpoint differences. In this paper, we propose FusionVLAD, a fu...
Preprint
Full-text available
Lightweight camera localization in existing maps is essential for vision-based navigation. Currently, visual and visual-inertial odometry (VO/VIO) techniques are well developed for state estimation but suffer from inevitable accumulated drift and pose jumps upon loop closure. To overcome these problems, we propose an efficient monocular camera localiza...
Article
Full-text available
We propose a planning method to enable fast autonomous flight in cluttered environments. Typically, autonomous navigation through a complex environment requires a continuous search on a graph generated by a k‐connected grid or a probabilistic scheme. As the vehicle travels, updating the graph with data from onboard sensors is expensive as is the se...
Preprint
Full-text available
In densely populated environments, socially compliant navigation is critical for autonomous robots as driving close to people is unavoidable. This manner of social navigation is challenging given the constraints of human comfort and social rules. Traditional methods based on hand-crafted cost functions to achieve this task have difficulty operat...
Conference Paper
Full-text available
We propose a planning method to enable fast autonomous flight in cluttered environments. Typically, autonomous navigation through a complex environment requires a continuous search on a graph generated by a k-connected grid or a probabilistic scheme. As the vehicle travels, updating the graph with data from onboard sensors is expensive as is the se...
Conference Paper
Full-text available
We here address the issue of air vehicles flying autonomously at high speed in complex environments. Typically, autonomous navigation through a complex environment requires a continuous heuristic search on a graph generated by a k-connected grid or a probabilistic scheme. The process is expensive especially if the paths must be kino-dynamically...
Article
Full-text available
We present a data processing pipeline to online estimate ego-motion and build a map of the traversed environment, leveraging data from a 3D laser scanner, a camera, and an IMU. Different from traditional methods that use a Kalman filter or factor-graph optimization, the proposed method employs a sequential, multi-layer processing pipeline, solving...
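Purely as a schematic of the sequential, multi-layer hand-off described above (not the authors' implementation), the stub below shows each module refining the estimate produced by the previous one, from a coarse IMU prediction to a visual refinement to a final scan-matching refinement; the module bodies are placeholders.

```python
# Schematic coarse-to-fine hand-off between modules; the numeric corrections
# inside each stub are placeholders standing in for real estimators.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float = 0.0
    y: float = 0.0
    yaw: float = 0.0

def imu_predict(prev, imu):
    """Highest-rate, coarsest estimate: integrate inertial increments."""
    return Pose(prev.x + imu["dx"], prev.y + imu["dy"], prev.yaw + imu["dyaw"])

def visual_refine(prior, image):
    """Refine the prior with visual odometry (stubbed small correction)."""
    return Pose(prior.x + 0.01, prior.y, prior.yaw)

def lidar_refine(prior, scan, world_map):
    """Final refinement by matching the scan against the map (stubbed)."""
    return Pose(prior.x, prior.y - 0.005, prior.yaw)

def process_frame(prev, imu, image, scan, world_map):
    pose = imu_predict(prev, imu)               # coarse prior
    pose = visual_refine(pose, image)           # intermediate refinement
    return lidar_refine(pose, scan, world_map)  # finest refinement

print(process_frame(Pose(), {"dx": 0.1, "dy": 0.0, "dyaw": 0.0}, None, None, None))
```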
Conference Paper
Full-text available
We propose a novel method to enable fast autonomous flight in cluttered environments. Typically, autonomous navigation through a complex environment requires a continuous heuristic search on a graph generated by a k-connected grid or a probabilistic scheme. As the vehicle progresses, modification of the graph with data from onboard sensors is expen...
Conference Paper
Full-text available
We here present studies to enable aerial and ground-based collaborative mapping in GPS-denied environments. The work utilizes a system that incorporates a laser scanner, a camera, and a low-grade IMU in a miniature package which can be carried by a light-weight aerial vehicle. We also discuss a processing pipeline that involves multi-layer optimiza...
Conference Paper
Full-text available
We present a data processing pipeline to online estimate ego-motion and build a map of the traversed environment, leveraging data from a 3D laser, a camera, and an IMU. Different from traditional methods that use a Kalman filter or factor-graph optimization, the proposed method employs a sequential, multi-layer processing pipeline, solving for moti...
Article
Full-text available
Here we propose a real-time method for low-drift odometry and mapping using range measurements from a 3D laser scanner moving in 6-DOF. The problem is hard because the range measurements are received at different times, and errors in motion estimation (especially without an external reference such as GPS) cause mis-registration of the resulting poi...
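The motion-distortion issue mentioned above (range measurements received at different times during a sweep) is often handled by re-projecting each point with a pose interpolated at its timestamp. The sketch below illustrates that step under a simplified planar (x, y, yaw) motion model with linear interpolation; it is not the paper's full 6-DOF formulation, and all names are illustrative.

```python
# Illustrative lidar "deskewing": each point is transformed with a pose
# interpolated by its capture time within the sweep (planar model, linear
# interpolation; a simplification of a full 6-DOF treatment).
import math

def deskew(points, pose_start, pose_end, t_start, t_end):
    """points: [(x, y, t)] in the sensor frame; poses: (x, y, yaw)."""
    out = []
    x0, y0, yaw0 = pose_start
    x1, y1, yaw1 = pose_end
    for px, py, t in points:
        s = (t - t_start) / (t_end - t_start)   # interpolation ratio in [0, 1]
        x = x0 + s * (x1 - x0)
        y = y0 + s * (y1 - y0)
        yaw = yaw0 + s * (yaw1 - yaw0)
        # Re-project the point into the common frame at its capture time.
        wx = x + math.cos(yaw) * px - math.sin(yaw) * py
        wy = y + math.sin(yaw) * px + math.cos(yaw) * py
        out.append((wx, wy))
    return out

# One point captured mid-sweep while the sensor translates 1 m along x.
print(deskew([(2.0, 0.0, 0.05)], (0, 0, 0), (1, 0, 0), 0.0, 0.1))
```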
Article
Full-text available
Visual odometry can be augmented by depth information such as provided by RGB-D cameras, or from lidars associated with cameras. However, such depth information can be limited by the sensors, leaving large areas in the visual images where depth is unavailable. Here, we propose a method to utilize the depth, even if sparsely available, in recovery o...
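As a toy illustration of the sparse-depth idea above, the fragment below assigns depth to a tracked image feature only when a sparse depth sample (for example, a projected lidar point) lies within a small pixel radius, and otherwise keeps the feature depth-unknown; how the two feature sets then constrain motion is outside this sketch, and all names are hypothetical.

```python
# Toy depth association: features near a sparse depth sample get a depth
# value (usable as 3D-2D constraints); the rest stay depth-unknown (2D-2D).
import math

def associate_depth(features, depth_samples, radius=3.0):
    """features: [(u, v)] pixels; depth_samples: [(u, v, depth)]."""
    with_depth, without_depth = [], []
    for u, v in features:
        best = None
        for su, sv, d in depth_samples:
            dist = math.hypot(u - su, v - sv)
            if dist <= radius and (best is None or dist < best[0]):
                best = (dist, d)
        if best is not None:
            with_depth.append((u, v, best[1]))
        else:
            without_depth.append((u, v))
    return with_depth, without_depth

features = [(100.0, 50.0), (300.0, 200.0)]
sparse_depth = [(101.5, 50.5, 4.2)]   # only the first feature has nearby depth
print(associate_depth(features, sparse_depth))
```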
Article
Full-text available
The requirement to operate aircraft in GPS-denied environments can be met by using visual odometry. For a full-scale aircraft equipped with a high-accuracy inertial navigation system (INS), the proposed method combines vision and the INS for odometry estimation. With such an INS, the aircraft orientation is accurate with low drift, but it con...
Conference Paper
Full-text available
Here, we present a general framework for combining visual odometry and lidar odometry in a fundamental, first-principles manner. The method shows improvements in performance over the state of the art, particularly in robustness to aggressive motion and temporary lack of visual features. The proposed online method starts with visual odometry to e...
Article
Full-text available
This article presents perception and navigation systems for a family of autonomous orchard vehicles. The systems are customized to enable safe and reliable driving in modern planting environments. The perception system is based on a global positioning system (GPS)-free sensor suite composed of a two-dimensional (2-D) laser scanner, wheel and steerin...
Article
Full-text available
The requirement to operate aircraft in GPS-denied environments can be met by the use of visual odometry. We study the case in which the height of the aircraft above the ground can be measured by an altimeter. Even with a high-quality INS whose orientation drift is negligible, random noise exists in the INS orientation. The noise can lead to the error o...
Conference Paper
Full-text available
Visual odometry can be augmented by depth information such as provided by RGB-D cameras, or from lidars associated with cameras. However, such depth information can be limited by the sensors, leaving large areas in the visual images where depth is unavailable. Here, we propose a method to utilize the depth, even if sparsely available, in recovery o...
Conference Paper
Full-text available
Autonomous orchard vehicles have been shown to increase worker efficiency in tasks including pruning, thinning, tree maintenance, and pheromone placing. To cover entire blocks, they must be able to repeatedly exit an orchard row, turn, and enter the next. The authors' experience deploying autonomous vehicles for five years in commercial and researc...
Conference Paper
Full-text available
Rows of trees, such as those in orchards, planted in straight parallel lines can provide navigation cues for autonomous machines that operate between them. When the tree canopies are well managed, tree rows appear similar to corridor walls and a simple 2D sensing scheme suffices. However, when the tree canopies are three dimensional, or ground vegetati...
Conference Paper
This paper addresses the refactoring of an agricultural vehicle localization system and its deployment and field-testing in apple orchards. The system enables affordable precision agriculture in tree fruit production by providing the vehicle's position in the orchard without the use of expensive differential GPS. The localization methodology depend...
Conference Paper
Full-text available
We present a monocular visual navigation methodology for autonomous orchard vehicles. Modern orchards are usually planted with straight and parallel tree rows that form a corridor-like environment. Our task consists of driving a vehicle autonomously along the tree rows. The original contributions of this paper are: 1) a method to recover vehicle ro...
Conference Paper
Full-text available
Here we present a robust method for monocular visual odometry capable of accurate position estimation even when operating in undulating terrain. Our algorithm uses a steering model to separately recover rotation and translation. Robot 3DOF orientation is recovered by minimizing image projection error, while robot translation is recovered by solvin...
Article
We report a new error-aware monocular visual odometry method that only uses vertical lines, such as vertical edges of buildings and poles in urban areas as landmarks. Since vertical lines are easy to extract, insensitive to lighting conditions/shadows, and sensitive to robot movements on the ground plane, they are robust features if compared with...
Conference Paper
Full-text available
We report a new error-aware monocular visual odometry method that only uses vertical lines, such as vertical edges of buildings and poles in urban areas as landmarks. Since vertical lines are easy to extract, insensitive to lighting conditions/shadows, and sensitive to robot movements on the ground plane, they are robust features if compared with r...
Conference Paper
Full-text available
When a robot travels in an urban area, Global Positioning System (GPS) signals might be obstructed by buildings, so visual odometry is an alternative. We notice that the vertical edges of tall buildings and the poles of street lights are a very stable set of features that can be easily extracted. Thus, we develop a monocular vision-based odometry system tha...
Article
Full-text available
Both the search time and the search result for fixed points during passive walking depend on the initial values. This paper investigates the necessary and sufficient conditions for periodic gaits to obtain constraints between the state variables at the fixed points, and reduces the two-dimensional search space to a one-dimensional space with an algor...
Conference Paper
Full-text available
This paper presents our study of the effect of complementary energy feedback on virtual slope walking, our new biped gait generation method inspired by passive dynamic walking. The energy feedback strength is defined and the walking is modeled as a step-to-step function. The Jacobian matrix eigenvalues of the function...
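The stability test implied above can be illustrated numerically: linearize the step-to-step map around a fixed point by finite differences and check that all Jacobian eigenvalues lie inside the unit circle. In the sketch below the step-to-step map itself is a toy linear placeholder, not the walking model from the paper.

```python
# Finite-difference Jacobian of a step-to-step (return) map at a fixed point;
# the gait is locally stable if every eigenvalue magnitude is below 1.
import numpy as np

X_STAR = np.array([1.0, 0.3])          # assumed fixed point of the toy map

def step_map(x):
    """Toy placeholder return map; in practice this would simulate one stride."""
    A = np.array([[0.6, 0.1], [-0.2, 0.5]])
    return X_STAR + A @ (x - X_STAR)

def jacobian(f, x, eps=1e-6):
    n = len(x)
    J = np.zeros((n, n))
    fx = f(x)
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        J[:, i] = (f(x + dx) - fx) / eps
    return J

eigvals = np.linalg.eigvals(jacobian(step_map, X_STAR))
print("eigenvalue magnitudes:", np.abs(eigvals))
print("locally stable gait:", bool(np.all(np.abs(eigvals) < 1.0)))
```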
Conference Paper
Full-text available
This paper presents the gait generation and mechanical design of a humanoid robot based on a limit-cycle walking method, Virtual Slope Control. This method is inspired by Passive Dynamic Walking. By shortening the swing leg, the robot walking on level ground can be treated as walking on a virtual slope. Parallel double-crank mechanisms and elastic feet a...
Conference Paper
Full-text available
This paper presents a brief description of the gait planning of the quadruped robot Aibo ERS-7, which is a standard platform in the RoboCup 4-legged league. We adopt a spline-shaped locus to reduce the dimension of the parameter optimization space and to solve the problem of the significant bias between the planned locus and the real one. The resu...
