Jianzhu Huai
Wuhan University (WHU) · State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing
Doctor of Philosophy
Consistent subterranean mapping with lidars, radars, and cameras.
About
42 Publications · 15,385 Reads · 443 Citations
Introduction
I am a postdoctoral researcher at the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University. My research focuses on fusing data from multiple sensors to create an accurate and consistent map of an environment. I approach this problem with both traditional robust geometric methods and recent neural graphics primitives. I also work on problems upstream and downstream of consistent mapping, such as simultaneous localization and mapping and motion planning.
Skills and Expertise
Additional affiliations
Education
January 2013 - January 2017
September 2009 - March 2012
September 2005 - June 2009
Publications
Publications (42)
Wide-angle cameras are widely used in photogrammetry and autonomous systems that rely on accurate metric measurements derived from images. To find the geometric relationship between incoming rays and image pixels, geometric camera calibration (GCC) has been actively developed. Aiming to provide practical calibration guidelines, this work surve...
For SLAM systems in robotics and autonomous driving, the accuracy of front-end odometry and back-end loop-closure detection determines the performance of the whole intelligent system. However, LiDAR SLAM can be disturbed by moving objects in the scene, resulting in drift errors and even loop-closure failure. Thus, the ability to detect and segment mov...
4D radars are increasingly favored for odometry and mapping of autonomous systems due to their robustness in harsh weather and dynamic environments. Existing datasets, however, often cover limited areas and are typically captured using a single platform. To address this gap, we present a diverse large-scale dataset specifically designed for 4D rada...
Accurate and robust localization is a critical requirement for autonomous driving and intelligent robots, particularly in complex dynamic environments and various motion scenarios. However, existing LiDAR odometry methods often struggle to promptly respond to changes in the surroundings and motion conditions with fixed parameters through execution,...
The extended Kalman filter (EKF) is a common state estimation method for discrete nonlinear systems. It recursively executes the propagation step as time goes by and the update step when a set of measurements arrives. In the update step, the EKF linearizes the measurement function only once. In contrast, the iterated EKF (IEKF) refines the state in...
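The EKF/IEKF distinction described above can be sketched in a few lines. The following is a generic illustration (not the paper's implementation, whose details are not given here): `h` is the measurement function, `H_jac` its Jacobian, and setting `iters=1` recovers the standard single-linearization EKF update.

```python
import numpy as np

def iekf_update(x, P, z, h, H_jac, R, iters=5):
    """Iterated EKF measurement update: relinearize h at the refined
    estimate on each iteration; iters=1 is the standard EKF update."""
    x0 = x.copy()                        # prior mean (linearization anchor)
    xi = x.copy()                        # current iterate
    for _ in range(iters):
        H = H_jac(xi)                    # Jacobian at the current iterate
        S = H @ P @ H.T + R              # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        # IEKF innovation includes the prior-to-iterate correction term
        xi = x0 + K @ (z - h(xi) - H @ (x0 - xi))
    P_new = (np.eye(len(x0)) - K @ H) @ P
    return xi, P_new
```

With a strongly nonlinear measurement (e.g. a range observation), the extra iterations typically move the estimate closer to the maximum a posteriori solution than the single EKF linearization does.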
In many camera-based applications, it is necessary to find the geometric relationship between incoming rays and image pixels, i.e., the projection model, through the geometric camera calibration (GCC). Aiming to provide practical calibration guidelines, this work surveys and evaluates the existing GCC tools. The survey covers camera models, calibra...
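As a concrete illustration of the projection models such calibration tools estimate, here is a minimal pinhole projection with a single radial distortion coefficient. This is a simplified textbook sketch, not a model taken from the survey itself; the parameter names (`fx`, `fy`, `cx`, `cy`, `k1`) follow common convention.

```python
import numpy as np

def project_pinhole(X, fx, fy, cx, cy, k1=0.0):
    """Project a 3D point in the camera frame to pixel coordinates using
    a pinhole model with one radial distortion coefficient k1."""
    x, y = X[0] / X[2], X[1] / X[2]      # normalized image coordinates
    r2 = x * x + y * y
    d = 1.0 + k1 * r2                    # radial distortion factor
    return np.array([fx * d * x + cx, fy * d * y + cy])
```

Calibration then amounts to estimating these parameters so that projections of known 3D target points match their observed pixels.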
Millimeter-wave radar can measure distances, directions, and Doppler velocities of objects in harsh conditions such as fog. A 4D imaging radar, whose vertical and horizontal data resemble an image, can also measure objects' height. Previous studies have used 3D radars for ego-motion estimation, but few methods have leveraged the rich data of imagin...
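A common building block in radar ego-motion estimation (not necessarily the method of the paper above) uses the Doppler measurements directly: for static targets, each return's radial speed is the negative projection of the sensor's velocity onto the unit direction to the target, so the ego-velocity follows from least squares.

```python
import numpy as np

def ego_velocity_from_doppler(directions, radial_speeds):
    """Estimate sensor ego-velocity from Doppler returns of static targets.
    directions: (N, 3) unit vectors from sensor to targets.
    radial_speeds: (N,) measured radial speeds (closing speeds negative)."""
    A = -np.asarray(directions)              # v_r = -d . v  for static targets
    b = np.asarray(radial_speeds)
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v
```

In practice such a solver is wrapped in a robust scheme (e.g. RANSAC) to reject returns from moving objects.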
Cameras with rolling shutters (RSs) dominate consumer markets but are subject to distortions when capturing motion. Many methods have been proposed to mitigate RS distortions for applications such as vision-aided odometry and three-dimensional (3D) reconstruction. They usually need known line delay d between successive image rows. To calibrate d, s...
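The role of the line delay d is simple to state: successive image rows are exposed and read out d seconds apart, so each row has its own timestamp. A minimal sketch (illustrative, not the paper's calibration procedure):

```python
def row_timestamps(t_frame, num_rows, line_delay):
    """Timestamp of each image row under a rolling shutter: row r is
    read out line_delay seconds after row r-1, starting at t_frame."""
    return [t_frame + r * line_delay for r in range(num_rows)]
```

Motion-compensation methods use these per-row times to associate each row with the correct camera pose.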
Navigation/positioning systems have become critical to many applications, such as autonomous driving, Internet of Things (IoT), Unmanned Aerial Vehicle (UAV), and smart cities. However, it is difficult to provide a robust, accurate, and seamless solution with single navigation/positioning technology. For example, the Global Navigation Satellite Sys...
Many Chinese cities have severe air pollution due to the rapid development of the Chinese economy, urbanization, and industrialization. Particulate matter (PM2.5) is a significant component of air pollutants. It is related to cardiopulmonary and other systemic diseases because of its ability to penetrate the human respiratory system. Forecasting ai...
Camera–inertial measurement unit (IMU) sensor fusion has been extensively studied in recent decades. Numerous observability analysis and fusion schemes for motion estimation with self-calibration have been presented. However, it has been uncertain whether the intrinsic parameters of both the camera and the IMU are observable under general motion. T...
The rapid development of the Bluetooth technology offers a possible solution for indoor localization scenarios. Compared with other indoor localization technologies, such as vision, Light Detection and Ranging, Ultra Wide Band, etc., Bluetooth has been characterized as low cost, easy deployment, low energy consumption and potentially high localizat...
In recent years, indoor positioning has drawn intensive attention for both pedestrian and mobile robot applications. Among various indoor positioning technologies, visible light positioning has many advantages due to its high localization accuracy, high bandwidth, energy-efficiency, long lifetime, and cost-efficiency. For post-processing or semi-re...
Nonlinear systems of affine control inputs overarch many sensor fusion instances. Analyzing whether a state variable in such a nonlinear system can be estimated (i.e., observability) informs better estimator design. Among the research on local observability of nonlinear systems, approaches based on differential geometry have attracted much attentio...
The Rolling Shutter (RS) mechanism is widely used in consumer-grade cameras, which are essential parts in smartphones and autonomous vehicles. RS leads to image distortion when the camera moves relative to the scene while capturing images. This effect needs to be considered in structure-from-motion, and vision-aided odometry, for which recent studi...
Camera-IMU (Inertial Measurement Unit) sensor fusion has been extensively studied in recent decades. Numerous observability analysis and fusion schemes for motion estimation with self-calibration have been presented. However, it has been uncertain whether both camera and IMU intrinsic parameters are observable under general motion. To answer this q...
More and more devices, such as Bluetooth and IEEE 802.15.4 devices forming Wireless Personal Area Networks (WPANs) and IEEE 802.11 devices constituting Wireless Local Area Networks (WLANs), share the 2.4 GHz Industrial, Scientific and Medical (ISM) band in the realm of the Internet of Things (IoT) and Smart Cities. However, the coexistence of these...
The rolling shutter (RS) mechanism is widely used by consumer-grade cameras, which are essential parts in smartphones and autonomous vehicles. The RS effect leads to image distortion upon relative motion between a camera and the scene. This effect needs to be considered in video stabilization, structure from motion, and vision-aided odometry, for w...
Given that the BDS-3 (BeiDou Navigation Satellite System-3) has been completed and works well, there are increasing demands for localization and navigation in daily life. However, BDS-3 signals cannot cover some challenging areas such as urban canyons and indoor environments. To extend the availability of the navigation system, other positioning technologies are r...
In this paper, we propose a real-time, low-drift localization method for a lidar-equipped robot in indoor environments. State-of-the-art lidar localization research mostly uses scan-to-scan matching, which accumulates high drift during robot localization. This is not suitable for robots operating indoors (such as in factory environments) for a...
State estimation problems without absolute position measurements routinely arise in the navigation of unmanned aerial vehicles, autonomous ground vehicles, etc., whose proper operation relies on accurate state estimates and reliable covariances. Unaware of absolute positions, these problems have inherent unobservable directions. Traditional causal esti...
State estimation problems that use relative observations routinely arise in the navigation of unmanned aerial vehicles, autonomous ground vehicles, etc., whose proper operation relies on accurate state estimates and reliable covariances. These problems have inherent unobservable directions. Traditional causal estimators, however, usually gain spurious i...
Motion estimation by fusing data from at least a camera and an Inertial Measurement Unit (IMU) enables many applications in robotics. However, among the multitude of Visual Inertial Odometry (VIO) methods, few efficiently estimate device motion with consistent covariance, and calibrate sensor parameters online for handling data from consumer sensor...
We have observed a common problem of solving for the marginal covariance of parameters introduced in new observations. This problem arises in several situations, including augmenting parameters to a Kalman filter, and computing weight for relative pose constraints. To handle this problem, we derive a solution in a least squares sense. The solution...
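One standard way to obtain the marginal covariance of newly introduced parameters in a least-squares setting is via the Schur complement of the joint information matrix. The sketch below is a generic illustration under that formulation, not necessarily the paper's exact derivation; all variable names here are illustrative.

```python
import numpy as np

def marginal_covariance_of_new_params(Lambda_xx, J_x, J_y, R):
    """Marginal covariance of new parameters y introduced by a measurement
    z = f(x, y) + noise, given prior information on x.
    Lambda_xx: prior information (inverse covariance) of existing params x.
    J_x, J_y : measurement Jacobians w.r.t. x and y.
    R        : measurement noise covariance."""
    Rinv = np.linalg.inv(R)
    # Blocks of the joint information matrix after adding the measurement
    A = Lambda_xx + J_x.T @ Rinv @ J_x
    B = J_x.T @ Rinv @ J_y
    D = J_y.T @ Rinv @ J_y
    # Marginalize out x: invert the Schur complement of A
    Lambda_yy = D - B.T @ np.linalg.solve(A, B)
    return np.linalg.inv(Lambda_yy)
```

As a sanity check, for scalar z = x + y + n with unit prior variance on x and unit noise variance, the marginal variance of y is 2, i.e. the prior and noise uncertainties add.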
In recent years, commodity mobile devices equipped with cameras and inertial measurement units (IMUs) have attracted much research and design effort for augmented reality (AR) and robotics applications. Based on such sensors, many commercial AR toolkits and public benchmark datasets have been made available to accelerate hatching and validating new...
Visual place recognition and simultaneous localization and mapping (SLAM) have recently begun to be used in real-world autonomous navigation tasks like food delivery. Existing datasets for SLAM research are often not representative of in situ operations, leaving a gap between academic research and real-world deployment. In response, this paper pres...
Targeted at operations without adequate global navigation satellite system signals, simultaneous localization and mapping (SLAM) has been widely applied in robotics and navigation. Using data crowdsourced by cameras, collaborative SLAM presents a more appealing solution than SLAM in terms of mapping speed, localization accuracy, and map reuse. To b...
Kinect-style RGB-D cameras have been used to build large-scale dense 3D maps of indoor environments. These maps can serve many purposes such as robot navigation and augmented reality. However, generating dense 3D maps of large-scale environments is still very challenging. In this paper, we present a mapping system for 3D reconstruction that fuse...
This paper presents an inversed quad-tree merging method for hierarchical high-resolution remote sensing image segmentation, in which bottom-up region-based merging techniques are chained. The image segmentation process is mainly composed of three sections: grouping pixels to form image object/region primitives in imagery using inversed...
In order to overcome the complexity of region merging in the segmentation of high resolution remote sensing images, an edge-guided segmentation method for multi-scale and high resolution remote sensing image was proposed. First, SUSAN operator was used to extract feature edges from the original test image. Then, a graph-based segmentation algorithm...
Satellite sensor technology endorsed better discrimination of various landscape objects. Image segmentation approaches to extracting conceptual objects and patterns hence have been explored and a wide variety of such algorithms abound. To this end, in order to effectively utilize edge and topological information in high resolution remote sensing im...
Automatically processing high-resolution remote sensing images is currently of regional and global research priority. This paper presented an algorithm based on adjacency graph partition for high-resolution remote sensing image segmentation. The proposed algorithm utilized both the region geometrical and spectral properties to evaluate the weight o...