Xiaoting Zhang’s research while affiliated with Beijing University of Technology and other places


Publications (3)


Automatic Roadside Camera Calibration with Transformers
  • Article
  • Full-text available

November 2023 · 135 Reads · 2 Citations · Sensors

Yong Li · Yunli Chen · [...] · Rui Tian

Previous camera self-calibration methods have exhibited notable shortcomings. They either exclusively emphasized scene cues or focused solely on vehicle-related cues, resulting in a lack of adaptability to diverse scenarios and a limited number of effective features. Furthermore, these methods either used only geometric features within traffic scenes or extracted only semantic information, failing to consider both aspects; this limited feature extraction from scenes and ultimately reduced calibration accuracy. Additionally, conventional vanishing point-based self-calibration methods often required additional edge-background models and manual parameter tuning, increasing operational complexity and the potential for errors. To address these limitations, we propose a roadside camera self-calibration model based on the Transformer architecture. The model simultaneously learns scene features and vehicle features within traffic scenarios while considering both geometric and semantic information, thereby overcoming the constraints of prior methods and improving calibration accuracy and robustness while reducing operational complexity and the potential for errors. Our method outperforms existing approaches on both real-world scenes and publicly available datasets, demonstrating its effectiveness.
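The abstract's model regresses calibration from learned scene and vehicle features, but the vanishing point geometry it builds on can be sketched compactly. The snippet below is a minimal illustration, not the paper's implementation: the function name and the assumptions of square pixels, zero skew, and a principal point at the image centre are mine. It recovers focal length and camera rotation from two vanishing points of orthogonal road directions.

import numpy as np

def calibrate_from_vanishing_points(vp1, vp2, principal_point):
    """Recover focal length and rotation from two orthogonal vanishing points.

    vp1, vp2: pixel coordinates (u, v) of vanishing points for two mutually
    orthogonal world directions (e.g., along and across the road).
    principal_point: (cx, cy), often taken as the image centre.
    """
    cx, cy = principal_point
    u1, v1 = vp1[0] - cx, vp1[1] - cy
    u2, v2 = vp2[0] - cx, vp2[1] - cy

    # Orthogonality of the two back-projected directions gives
    # (u1*u2 + v1*v2) + f^2 = 0, so f^2 = -(u1*u2 + v1*v2).
    f_sq = -(u1 * u2 + v1 * v2)
    if f_sq <= 0:
        raise ValueError("vanishing points inconsistent with an orthogonal pair")
    f = np.sqrt(f_sq)

    # Columns of the rotation matrix are the normalized back-projected directions.
    r1 = np.array([u1, v1, f]); r1 /= np.linalg.norm(r1)
    r2 = np.array([u2, v2, f]); r2 /= np.linalg.norm(r2)
    r3 = np.cross(r1, r2)
    R = np.column_stack([r1, r2, r3])

    K = np.array([[f, 0.0, cx], [0.0, f, cy], [0.0, 0.0, 1.0]])
    return K, R

In a typical roadside view, vp1 would come from lane markings or vehicle motion along the road and vp2 from structures perpendicular to it.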


A Spatial Alignment Framework Using Geolocation Cues for Roadside Multi-View Multi-Sensor Fusion

October 2023 · 37 Reads · 1 Citation · Conference Record - IEEE Conference on Intelligent Transportation Systems

Multi-modal sensor fusion plays a vital role in achieving high-quality roadside perception for intelligent traffic monitoring. Unlike on-board sensors in autonomous driving, roadside sensors present heightened calibration complexity, which makes spatial alignment for data fusion challenging. Existing spatial alignment methods typically focus on one-to-one alignment between cameras and radar sensors and require precise calibration; when applied to large-scale roadside monitoring networks, they can be difficult to implement and vulnerable to environmental influences. In this paper, we present a spatial alignment framework that utilizes geolocation cues to enable multi-view alignment across distributed multi-sensor systems. In this framework, a deep learning-based camera calibration model combined with angle and distance estimation is used for monocular geolocation estimation. A camera parameter approaching method then searches for pseudo camera parameters that tolerate the calibration errors that are inevitable in practice. Finally, the geolocation information is used for data association between Light Detection and Ranging (LiDAR) and cameras. The framework has been deployed and tested at several intersections in Hangzhou. Experimental results show geolocation estimation errors of less than 1.1 m for vehicles traversing the monitored zone, demonstrating that the framework can accomplish spatial alignment in a single execution and can be applied in large-scale roadside sensor fusion scenarios.
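As a rough illustration of the two generic steps the abstract names, monocular geolocation estimation and geolocation-based association, here is a minimal sketch. It is not the paper's method: the camera parameter approaching step is omitted, and the function names, flat ground plane, equirectangular latitude/longitude approximation, and greedy gated matching are assumptions introduced for illustration only.

import numpy as np

EARTH_RADIUS = 6378137.0  # metres, WGS-84 equatorial radius

def pixel_to_ground_enu(pixel, K, R, cam_height):
    """Back-project an image point onto the z = 0 ground plane.
    Returns the (east, north) offset from the camera in metres.
    K: 3x3 intrinsics; R: rotation from camera frame to a local
    east-north-up frame; cam_height: camera height above ground."""
    ray_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    ray_enu = R @ ray_cam
    if ray_enu[2] >= 0:
        raise ValueError("ray does not hit the ground plane")
    t = cam_height / -ray_enu[2]
    return t * ray_enu[0], t * ray_enu[1]

def enu_to_latlon(east, north, cam_lat_deg, cam_lon_deg):
    """Convert a local metre offset into a latitude/longitude estimate
    using an equirectangular approximation around the camera position."""
    lat0 = np.radians(cam_lat_deg)
    lat = cam_lat_deg + np.degrees(north / EARTH_RADIUS)
    lon = cam_lon_deg + np.degrees(east / (EARTH_RADIUS * np.cos(lat0)))
    return lat, lon

def associate_by_geolocation(cam_xy, lidar_xy, gate_m=2.0):
    """Greedy nearest-neighbour association between camera-derived and
    LiDAR-derived positions, both given in one shared metric frame."""
    pairs, used = [], set()
    for i, c in enumerate(cam_xy):
        dists = [np.linalg.norm(np.asarray(c) - np.asarray(l)) for l in lidar_xy]
        if not dists:
            break
        j = int(np.argmin(dists))
        if j not in used and dists[j] < gate_m:
            pairs.append((i, j))
            used.add(j)
    return pairs

In a real deployment, all camera- and LiDAR-derived positions would first be brought into one common geodetic frame before association.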


Figure 2. The schematic diagram of the vanishing point-based camera self-calibration model.
Figure 3. Examples of three vanishing points in traffic scenarios, with red, green, and blue colors representing the directions of the first, second, and third vanishing points.
Figure 4. The overall architecture diagram of our model in this paper.
Figure 5. Line Segment Descriptor in Vanishing Point Detection
Figure 6. Experimental traffic scenes: (a) Scene 1, (b) Scene 2, and (c) Scene 3.

+3

Automatic Roadside Camera Calibration with Transformers

September 2023 · 137 Reads · 1 Citation


Citations (1)


... By integrating the predicted calibration result T_pred from the network with the initial calibration parameter T_init, the extrinsic calibration parameters between the uncalibrated LiDAR and the camera are derived. These extrinsic calibration parameters are denoted as shown in Equation (13). ...
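The referenced Equation (13) is not reproduced in this snippet. As a hedged illustration only, one standard way to compose a network-predicted correction with an initial guess, when both are expressed as rigid transforms, is

T_{\text{extrinsic}} = T_{\text{pred}} \cdot T_{\text{init}}

but the actual form used by RLCFormer may differ.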

Reference:

RLCFormer: Automatic Roadside LiDAR-Camera Calibration Framework with Transformer
Automatic Roadside Camera Calibration with Transformers

Sensors