October 2024
·
9 Reads
·
2 Citations
July 2024
·
92 Reads
During production, smart cars are equipped with calibrated LiDARs and cameras. However, due to the vibrations that inevitably occur during driving, the sensors' extrinsic parameters may change slightly over time. Ensuring the ongoing reliability of these systems throughout the car's lifetime is a significant challenge. To address this issue, we propose a self-checking and recalibration algorithm that continuously monitors the sensor data of intelligent vehicles. If miscalibration is detected, the extrinsic parameters can be corrected promptly to ensure the vehicle's reliability. Our self-checking algorithm extracts features from the point cloud and the image and performs pixel-wise comparisons. To improve feature quality, we utilize a patch-wise transformer to enhance local information exchange, which also benefits the subsequent extrinsic recalibration. To facilitate the study, we generate two customized datasets from the KITTI dataset and the Waymo Open Dataset. Experiments on these datasets demonstrate the effectiveness of our method in keeping the LiDAR and camera systems accurately calibrated throughout the car's lifetime. This study is the first to highlight the importance of continually checking the calibrated extrinsic parameters in autonomous driving. Our findings contribute to the broader goal of improving safety and reliability in autonomous driving systems. The dataset and code are available at https://github.com/OpenCalib/LiDAR2camera_self-check.
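The self-checking idea described above — comparing where LiDAR points land in the image against where they should — can be sketched in a few lines. This is a toy illustration, not the paper's transformer-based pipeline: the pinhole intrinsics, the 5-pixel threshold, and the helper names are all illustrative assumptions.

```python
import math

def project(point, extrinsic, fx=700.0, fy=700.0, cx=320.0, cy=240.0):
    """Project a 3-D LiDAR point into the image with a pinhole camera model.
    extrinsic: (R, t) with R a 3x3 row-major rotation and t a 3-vector."""
    R, t = extrinsic
    x, y, z = (sum(R[i][k] * point[k] for k in range(3)) + t[i] for i in range(3))
    return (fx * x / z + cx, fy * y / z + cy)

def miscalibration_score(points, pixels, extrinsic):
    """Mean reprojection error (pixels) between projected points and their
    known image correspondences; a large value suggests drifted extrinsics."""
    errs = []
    for p, (u, v) in zip(points, pixels):
        pu, pv = project(p, extrinsic)
        errs.append(math.hypot(pu - u, pv - v))
    return sum(errs) / len(errs)

def needs_recalibration(points, pixels, extrinsic, threshold_px=5.0):
    """Flag the sensor pair for recalibration when the error is too large."""
    return miscalibration_score(points, pixels, extrinsic) > threshold_px
```

In the actual method the correspondences come from learned point-cloud and image features rather than being given, but the decision rule — monitor a consistency score and recalibrate when it drifts — is the same.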
May 2024
·
5 Reads
·
9 Citations
May 2024
·
11 Reads
·
2 Citations
April 2024
·
44 Reads
Sensor-based environmental perception is a crucial component of autonomous driving systems. To perceive the surrounding environment better, an intelligent system can utilize multiple LiDARs (3D Light Detection and Ranging). The accuracy of perception largely depends on the quality of the sensor calibration. This research aims to develop a robust, fast, automatic, and accurate calibration strategy for multi-LiDAR systems. Our proposed method consists of two stages: rough calibration and refinement. In the first stage, sensors are roughly calibrated from an arbitrary initial position using a deep neural network that does not rely on prior information or constraints on the initial sensor pose. In the second stage, we propose octree-based refinement, an optimization method that accounts for sensor noise and prioritization. Our strategy is robust, fast, and not restricted to any particular environment. Additionally, we collected two datasets covering both real-world and simulated scenarios. Experimental results on both datasets demonstrate the reliability and accuracy of our method. All related datasets and code are open-sourced at https://github.com/OpenCalib/LiDAR2LiDAR.
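The rough-then-refine structure can be illustrated with a toy coarse-to-fine search: start with large steps (coarse cells) and halve the step each level, in the spirit of octree subdivision. This sketch refines only a 2-D translation and assumes known point correspondences — it is not the paper's network-plus-octree method, and all names are illustrative.

```python
def alignment_cost(src, dst, shift):
    """Sum of squared distances between shifted source points and their
    corresponding target points."""
    sx, sy = shift
    return sum((x + sx - u) ** 2 + (y + sy - v) ** 2
               for (x, y), (u, v) in zip(src, dst))

def coarse_to_fine_shift(src, dst, span=4.0, levels=6):
    """Refine a 2-D translation by grid search, halving the step each level
    (coarse cells first, then progressively finer ones)."""
    best = (0.0, 0.0)
    step = span
    for _ in range(levels):
        cands = [(best[0] + i * step, best[1] + j * step)
                 for i in (-1, 0, 1) for j in (-1, 0, 1)]
        best = min(cands, key=lambda s: alignment_cost(src, dst, s))
        step /= 2.0
    return best
```

The real refinement stage replaces this quadratic cost with a noise- and priority-aware objective over full 6-DoF poses, but the coarse-to-fine control flow is analogous.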
April 2024
·
26 Reads
IEEE Robotics and Automation Letters
Properly calibrated sensors are the prerequisite for a dependable autonomous driving system. Besides the extrinsic calibration between sensors, the extrinsics between each sensor and the vehicle are also important, especially the rotation. Most existing sensor-to-vehicle calibration approaches have requirements on facilities, manual work, road features, or vehicle trajectory. In this work, we propose more general and flexible methods for four commonly used sensors: camera, LiDAR, GNSS/INS, and millimeter-wave radar, composing a toolbox named SensorX2car. In each method, the rotation between a sensor and the vehicle is calibrated individually in road scenarios. Experiments on large-scale real-world datasets demonstrate the practicality of our proposed methods. Meanwhile, the related code has been open-sourced to benefit the community. To our knowledge, SensorX2car is the first open-source sensor-to-vehicle calibration toolbox. The code is available at https://github.com/OpenCalib/SensorX2car.
June 2023
·
27 Reads
·
1 Citation
Research on extrinsic calibration between LiDAR (Light Detection and Ranging) and camera is moving toward more accurate, automatic, and generic approaches. Since deep learning has been employed in calibration, the restrictions on the scene are greatly reduced. However, data-driven methods have the drawback of low transferability: they cannot adapt to dataset variations unless additional training is performed. With the advent of foundation models, this problem can be significantly mitigated. By using the Segment Anything Model (SAM), we propose a novel LiDAR-camera calibration method that requires zero extra training and adapts to common scenes. Given an initial guess, we optimize the extrinsic parameters by maximizing the consistency of the points projected inside each image mask. The consistency covers three properties of the point cloud: intensity, normal vector, and categories derived from segmentation methods. Experiments on different datasets demonstrate the generality and comparable accuracy of our method. The code is available at https://github.com/OpenCalib/CalibAnything.
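The mask-consistency objective can be sketched with a toy version of one of the three properties, intensity: a good extrinsic should group points of similar intensity into the same image mask. In this sketch the extrinsic is reduced to a single horizontal shift `tx`, and the projection, masks, and function names are illustrative assumptions rather than the paper's implementation.

```python
def project(point, tx, f=100.0):
    """Toy pinhole projection; the extrinsic is reduced to a horizontal
    shift tx purely for illustration."""
    x, y, z = point
    return (round(f * (x + tx) / z), round(f * y / z))

def consistency_cost(points, intensities, masks, tx):
    """Sum of per-mask intensity variances over the points projected inside
    each mask; a well-calibrated tx groups like intensities together."""
    groups = [[] for _ in masks]
    for p, inten in zip(points, intensities):
        uv = project(p, tx)
        for mid, mask in enumerate(masks):
            if uv in mask:
                groups[mid].append(inten)
                break
    cost = 0.0
    for vals in groups:
        if len(vals) > 1:
            mean = sum(vals) / len(vals)
            cost += sum((v - mean) ** 2 for v in vals)
    return cost

def calibrate_tx(points, intensities, masks, candidates):
    """Pick the candidate extrinsic that makes masks most self-consistent."""
    return min(candidates,
               key=lambda tx: consistency_cost(points, intensities, masks, tx))
```

The real method optimizes a full 6-DoF extrinsic and also scores normal-vector and category consistency against SAM's masks, but the principle — minimize within-mask variance of point attributes — is the one shown here.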
May 2023
·
62 Reads
·
44 Citations
May 2023
·
53 Reads
With the development of autonomous driving technology, sensor calibration has become a key enabler of accurate perception fusion and localization. Accurate calibration ensures that each sensor functions properly and that information from multiple sensors can be aggregated correctly. Among calibration problems, surround-view camera calibration has received extensive attention. In autonomous driving applications, the calibration accuracy of the cameras directly affects the accuracy of perception and depth estimation. For online calibration of surround-view cameras, traditional feature-extraction-based methods suffer from strong distortion when the initial extrinsic-parameter error is large, making them less robust and less accurate. Other existing methods use the sparse direct method to calibrate multiple cameras, which can in principle ensure both accuracy and real-time performance. However, this approach requires a good initial value; an initial estimate with a large error often gets stuck in a local optimum. To this end, we introduce a robust automatic calibration and refinement method for multiple cameras (pinhole or fisheye) in road scenes. We utilize a coarse-to-fine random-search strategy that can handle large disturbances of the initial extrinsic parameters, compensating for the tendency of nonlinear optimization methods to fall into local optima. Finally, quantitative and qualitative experiments in real and simulated environments show that the proposed method achieves accurate and robust performance. The open-source code is available at https://github.com/OpenCalib/SurroundCameraCalib.
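A coarse-to-fine random search is easy to sketch in one dimension: sample candidates widely around the current best, keep the cheapest, then shrink the sampling radius and repeat. The cost function below is a synthetic stand-in (in the real method it would score ground-texture alignment across cameras), and all names, radii, and sample counts are illustrative assumptions.

```python
import math
import random

def photometric_cost(yaw, yaw_true=0.3):
    """Synthetic stand-in for the cross-camera alignment cost, minimized at
    the (here hard-coded) true yaw."""
    return 1.0 - math.cos(yaw - yaw_true)

def coarse_to_fine_random_search(cost, lo=-math.pi, hi=math.pi,
                                 levels=5, samples=200, seed=0):
    """Random search over one parameter: sample around the current best and
    shrink the radius each level, tolerating a large initial error."""
    rng = random.Random(seed)
    best = 0.0
    radius = (hi - lo) / 2.0
    for _ in range(levels):
        cands = [best] + [best + rng.uniform(-radius, radius)
                          for _ in range(samples)]
        best = min(cands, key=cost)   # keeping `best` in cands: never regress
        radius /= 4.0
    return best
```

Because every level re-includes the incumbent, the search never worsens, and the shrinking radius gives the coarse-to-fine behavior that lets it escape the basin problems that trap purely local nonlinear optimizers.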
January 2023
·
111 Reads
The performance of sensors in an autonomous driving system is fundamentally limited by the quality of sensor calibration. Sensors must be well-located with respect to the car-body frame before they can provide meaningful localization and environmental perception. However, while many online methods have been proposed to calibrate the extrinsic parameters between sensors, few studies focus on the calibration between the sensor and the vehicle coordinate system. To this end, we present SensorX2car, a calibration toolbox for the online calibration of sensor-to-car coordinate systems in road scenes. It covers four commonly used sensors: IMU (Inertial Measurement Unit)/GNSS (Global Navigation Satellite System), LiDAR (Light Detection and Ranging), camera, and millimeter-wave radar. We design a method for each sensor, mainly calibrating its rotation to the car body. Real-world and simulated experiments demonstrate the accuracy and generalization capability of the proposed methods. Meanwhile, the related code has been open-sourced to benefit the community. To the best of our knowledge, SensorX2car is the first open-source sensor-to-car calibration toolbox. The code is available at https://github.com/OpenCalib/SensorX2car.
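For the GNSS/INS case, the rotation-to-car idea reduces, in its simplest form, to a heading comparison: while the car drives straight, the car-body x-axis coincides with the direction of travel, so the heading of the accumulated displacement measured in the sensor frame is the sensor's yaw misalignment. The sketch below is a much-simplified, yaw-only illustration; the function name and the straight-driving assumption are ours, not the toolbox's API.

```python
import math

def sensor_to_car_yaw(trajectory):
    """Estimate the sensor-to-vehicle yaw offset from a straight-driving
    segment of sensor-frame positions [(x, y), ...]: the heading of the
    accumulated displacement is the yaw misalignment."""
    dx = sum(b[0] - a[0] for a, b in zip(trajectory, trajectory[1:]))
    dy = sum(b[1] - a[1] for a, b in zip(trajectory, trajectory[1:]))
    return math.atan2(dy, dx)
```

The actual methods handle non-straight trajectories and the remaining rotation axes per sensor, but this captures why rotation to the car body can be recovered from ordinary road driving without targets or facilities.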
... However, while segmentation methods perform well in familiar environments, they struggle with unfamiliar settings or new datasets. To address this issue, Luo et al. [14] proposed a method that achieves high-precision calibration without prior training, focusing on a localized range. Despite its effectiveness, this method is limited when applied to broader areas. ...
May 2024
... Early-stage target-based methods [16], [17], [18] were inspired by camera intrinsic calibration techniques using chessboard patterns. Since then, numerous calibration targets based on different patterns have been introduced to improve the accuracy of feature extraction. Typical examples include planar boards with holes [19], [20], triangles [21], [22], polygons [23], spheres [24] and ArUco [25]. Despite improvements in accuracy, the specialized designs aimed at providing more geometric constraints make the production of targets more challenging. ...
May 2023
... Further examples utilizing the SAM are concealed object segmentation [94], removing shadows in images [107][108][109], matting [110,111], inpainting [112,113], or general image editing [114], LiDAR to 2D camera calibration [115], video super-resolution [116], object tracking or segmentation in videos [117][118][119][120][121][122], object counting [106,123,124], eye-tracking supported labeling of images [125], and open-world object detection [126]. ...
June 2023
... Existing multi-LiDAR-to-LiDAR 1 methods often assume a shared FOV overlap with one main (e.g., rooftop) LiDAR for the feature matching [10], posing challenges for EDGAR's setup due to the lack of overlap between all LiDARs. Motivated by the lack of open-source contributions, we propose a calibration framework that can handle atypical setups like EDGAR's without requiring FOV overlap between all LiDARs or reliance on other external sensors. ...
October 2022
... MFCalib [21], which utilizes depth discontinuities and depth-continuous edges, performed iterative optimization to further reduce calibration errors. There is also a method that improves computational efficiency and performance by extracting lines [22,23] from edges, but it is difficult to recover the extrinsic parameters when features are extracted from a tiny area. Previous studies [24,25] address the challenge of finding an optimal solution, and the optimization problem is particularly difficult in calibration tasks that involve extracting numerous features. ...
July 2022
Software Impacts