Jianzhu Huai

Wuhan University · School of Remote Sensing and Information Engineering

Doctor of Philosophy
Building an autonomous ground robot with a suite of sensors, and looking for collaborators.

About

28
Publications
10,765
Reads
91
Citations
Citations since 2017
21 Research Items
75 Citations
Additional affiliations
January 2013 - present
The Ohio State University
Position
  • PhD Student
Description
  • Integration of GPS and IMU data; offline deterministic and stochastic calibration of IMUs; visual-inertial integration for indoor localization; mapping indoor environments with a Kinect and an IMU
Education
January 2013 - January 2017
The Ohio State University
Field of study
  • geodetic engineering
September 2009 - March 2012
Beihang University (BUAA)
Field of study
  • geomatics
September 2005 - June 2009
Beihang University (BUAA)
Field of study
  • civil engineering

Publications

Publications (28)
Article
Full-text available
Navigation/positioning systems have become critical to many applications, such as autonomous driving, the Internet of Things (IoT), Unmanned Aerial Vehicles (UAVs), and smart cities. However, it is difficult to provide a robust, accurate, and seamless solution with a single navigation/positioning technology. For example, the Global Navigation Satellite Sys...
Article
Cameras with rolling shutters (RSs) dominate consumer markets but are subject to distortions when capturing motion. Many methods have been proposed to mitigate RS distortions for applications such as vision-aided odometry and three-dimensional (3D) reconstruction. They usually need a known line delay d between successive image rows. To calibrate d, ...
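The line delay d in such an RS model fixes the capture time of each image row; a minimal sketch of that timing model (the function name and values here are illustrative, not from the paper):

```python
import numpy as np

def row_capture_times(t_start, line_delay, num_rows):
    """Capture time of each image row under a rolling-shutter model.

    Row r is read out at t_start + r * line_delay, so a feature observed
    on row r should be projected with the camera pose at that time.
    """
    return t_start + line_delay * np.arange(num_rows)

# Example: a 480-row image starting at t = 0 s with a 30-microsecond line delay.
times = row_capture_times(0.0, 30e-6, 480)
```

Calibrating d then amounts to finding the per-row timing that best explains the observed motion distortion.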
Article
Camera–inertial measurement unit (IMU) sensor fusion has been extensively studied in recent decades. Numerous observability analysis and fusion schemes for motion estimation with self-calibration have been presented. However, it has been uncertain whether the intrinsic parameters of both the camera and the IMU are observable under general motion. T...
Article
The rapid development of Bluetooth technology offers a possible solution for indoor localization scenarios. Compared with other indoor localization technologies, such as vision, Light Detection and Ranging, Ultra Wide Band, etc., Bluetooth is characterized by low cost, easy deployment, low energy consumption, and potentially high localizat...
Article
Full-text available
In recent years, indoor positioning has drawn intensive attention for both pedestrian and mobile robot applications. Among various indoor positioning technologies, visible light positioning has many advantages due to its high localization accuracy, high bandwidth, energy-efficiency, long lifetime, and cost-efficiency. For post-processing or semi-re...
Preprint
Full-text available
Nonlinear systems affine in the control inputs cover many sensor fusion instances. Analyzing whether a state variable in such a nonlinear system can be estimated (i.e., its observability) informs better estimator design. Among the research on local observability of nonlinear systems, approaches based on differential geometry have attracted much attentio...
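For a concrete toy instance of the differential-geometry approach, the observability rank condition stacks the gradients of repeated Lie derivatives of the measurement; a sketch with SymPy on a double integrator (a textbook example, not the system analyzed in this preprint):

```python
import sympy as sp

# Toy system affine in the control: x1' = x2, x2' = u, measurement y = x1.
x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, 0])   # drift vector field
h = sp.Matrix([x1])      # measurement function

def lie_derivative(phi, vec):
    """Lie derivative of the scalar function(s) phi along vector field vec."""
    return phi.jacobian(x) * vec

# Observability codistribution: gradients of h and its Lie derivatives.
O = sp.Matrix.vstack(h.jacobian(x), lie_derivative(h, f).jacobian(x))
print(O.rank())  # rank 2 = dim(x), so the toy system is locally weakly observable
```

If the stacked gradient matrix has full column rank at a state, the state is locally weakly observable there.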
Article
The Rolling Shutter (RS) mechanism is widely used in consumer-grade cameras, which are essential parts of smartphones and autonomous vehicles. RS leads to image distortion when the camera moves relative to the scene while capturing images. This effect needs to be considered in structure from motion and vision-aided odometry, for which recent studi...
Preprint
Full-text available
Camera-IMU (Inertial Measurement Unit) sensor fusion has been extensively studied in recent decades. Numerous observability analysis and fusion schemes for motion estimation with self-calibration have been presented. However, it has been uncertain whether both camera and IMU intrinsic parameters are observable under general motion. To answer this q...
Article
Full-text available
More and more devices, such as Bluetooth and IEEE 802.15.4 devices forming Wireless Personal Area Networks (WPANs) and IEEE 802.11 devices constituting Wireless Local Area Networks (WLANs), share the 2.4 GHz Industrial, Scientific and Medical (ISM) band in the realm of the Internet of Things (IoT) and Smart Cities. However, the coexistence of these...
Preprint
Full-text available
The rolling shutter (RS) mechanism is widely used by consumer-grade cameras, which are essential parts in smartphones and autonomous vehicles. The RS effect leads to image distortion upon relative motion between a camera and the scene. This effect needs to be considered in video stabilization, structure from motion, and vision-aided odometry, for w...
Chapter
Given that the BDS-3 (Beidou System-3) has been completed and works well, there are increasing demands for localization and navigation in daily life. However, BDS-3's signals cannot cover some challenging areas such as urban canyons and indoor environments. To extend the availability of the navigation system, other positioning technologies are r...
Chapter
In this paper, we propose a real-time, low-drift localization method for a lidar-equipped robot in indoor environments. State-of-the-art lidar localization research mostly uses a scan-to-scan method, which produces high drift during the localization of the robot. It is not suitable for robots operating indoors (such as in a factory environment) for a...
Article
State estimation problems without absolute position measurements routinely arise in the navigation of unmanned aerial vehicles, autonomous ground vehicles, etc., whose proper operation relies on accurate state estimates and reliable covariances. Without absolute positions, these problems have inherent unobservable directions. Traditional causal esti...
Preprint
Full-text available
State estimation problems that use relative observations routinely arise in the navigation of unmanned aerial vehicles, autonomous ground vehicles, etc., whose proper operation relies on accurate state estimates and reliable covariances. These problems have inherent unobservable directions. Traditional causal estimators, however, usually gain spurious i...
Preprint
Full-text available
Motion estimation by fusing data from at least a camera and an Inertial Measurement Unit (IMU) enables many applications in robotics. However, among the multitude of Visual Inertial Odometry (VIO) methods, few efficiently estimate device motion with consistent covariance, and calibrate sensor parameters online for handling data from consumer sensor...
Preprint
Full-text available
We have observed a common problem of solving for the marginal covariance of parameters introduced by new observations. This problem arises in several situations, including augmenting a Kalman filter with new parameters and computing weights for relative pose constraints. To handle this problem, we derive a solution in a least squares sense. The solution...
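One standard way to obtain the marginal covariance of a parameter block is the Schur complement of the joint information matrix; a minimal sketch of that identity (my illustration, not necessarily the paper's derivation):

```python
import numpy as np

def marginal_covariance(H, new_idx):
    """Marginal covariance of the 'new' parameter block.

    H is the symmetric positive-definite information matrix of all
    parameters; marginalizing out the remaining ('old') block via the
    Schur complement gives Sigma_new = (H_nn - H_no H_oo^{-1} H_on)^{-1}.
    """
    old_idx = np.setdiff1d(np.arange(H.shape[0]), new_idx)
    H_nn = H[np.ix_(new_idx, new_idx)]
    H_no = H[np.ix_(new_idx, old_idx)]
    H_oo = H[np.ix_(old_idx, old_idx)]
    schur = H_nn - H_no @ np.linalg.solve(H_oo, H_no.T)
    return np.linalg.inv(schur)

# Sanity check: the result equals the corresponding block of H^{-1}.
H = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])
Sigma = marginal_covariance(H, np.array([2]))
```

Solving against H_oo instead of inverting it keeps the computation stable and avoids forming the full joint inverse.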
Preprint
In recent years, commodity mobile devices equipped with cameras and inertial measurement units (IMUs) have attracted much research and design effort for augmented reality (AR) and robotics applications. Based on such sensors, many commercial AR toolkits and public benchmark datasets have been made available to accelerate hatching and validating new...
Preprint
Full-text available
Visual place recognition and simultaneous localization and mapping (SLAM) have recently begun to be used in real-world autonomous navigation tasks like food delivery. Existing datasets for SLAM research are often not representative of in situ operations, leaving a gap between academic research and real-world deployment. In response, this paper pres...
Article
Targeted at operations without adequate global navigation satellite system signals, simultaneous localization and mapping (SLAM) has been widely applied in robotics and navigation. Using data crowdsourced by cameras, collaborative SLAM presents a more appealing solution than SLAM in terms of mapping speed, localization accuracy, and map reuse. To b...
Conference Paper
Full-text available
Kinect-style RGB-D cameras have been used to build large-scale dense 3D maps of indoor environments. These maps can serve many purposes, such as robot navigation and augmented reality. However, generating dense 3D maps of large-scale environments is still very challenging. In this paper, we present a mapping system for 3D reconstruction that fuse...
Article
Full-text available
This paper presents an inversed quad-tree merging method for hierarchical high-resolution remote sensing image segmentation, in which bottom-up region-based merging techniques are chained. The image segmentation process is composed of three stages: grouping pixels to form image object/region primitives in imagery using inversed...
Article
To overcome the complexity of region merging in the segmentation of high-resolution remote sensing images, an edge-guided segmentation method for multi-scale, high-resolution remote sensing images is proposed. First, the SUSAN operator is used to extract feature edges from the original test image. Then, a graph-based segmentation algorithm...
Article
Satellite sensor technology has enabled better discrimination of various landscape objects. Image segmentation approaches for extracting conceptual objects and patterns have hence been explored, and a wide variety of such algorithms abound. To this end, in order to effectively utilize edge and topological information in high-resolution remote sensing im...
Article
Automatically processing high-resolution remote sensing images is currently a regional and global research priority. This paper presents an algorithm based on adjacency-graph partitioning for high-resolution remote sensing image segmentation. The proposed algorithm utilizes both region geometrical and spectral properties to evaluate the weight o...

Network

Cited By

Projects

Project (1)
Project
Visual and inertial data are collected by multiple cellphones. Each cellphone calibrates its visual-inertial sensors on site, estimates its own location, and sends key messages to the server. The server collaboratively builds maps and sends corrected poses back to each cellphone, so that these users can play augmented reality games together.