Fig 4 - available via license: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
Qualitative detection results from the proposed fusion model. The predictions obtained in the RA view (shown as blue boxes in the top row) have been projected onto the camera images, with ground-truth bounding boxes shown in green.
Source publication
Cameras can be used to perceive the environment around the vehicle, while affordable radar sensors are popular in autonomous driving systems because, unlike cameras, they can withstand adverse weather conditions. However, radar point clouds are sparse, with low azimuth and elevation resolution, and lack the semantic and structural information of the scenes, r...
Similar publications
Multi-camera 3D object detection for autonomous driving is a challenging problem that has garnered notable attention from both academia and industry. A key obstacle in vision-based techniques is the precise extraction of geometry-aware features from RGB images. Recent approaches have utilized geometry-aware image backbones pretr...
Yanli Wu, Junyin Wang, Hui Li, [...], Xiao Li
LiDAR and camera are two key sensors that provide mutually complementary information for 3D detection in autonomous driving. Existing multimodal detection methods often decorate the original point cloud data with camera features to perform detection, ignoring the mutual fusion between camera features and point cloud features. In addition, grou...
Registering light detection and ranging (LiDAR) data with optical camera images enhances spatial awareness in autonomous driving, robotics, and geographic information systems. The current challenges in this field involve aligning 2D-3D data acquired from sources with distinct coordinate systems, orientations, and resolutions. This paper introduces...
Cross-modal data registration has long been a critical task in computer vision, with extensive applications in autonomous driving and robotics. Accurate and robust registration methods are essential for aligning data from different modalities, forming the foundation for multimodal sensor data fusion and enhancing perception systems' accuracy and re...
Surgical automation requires precise guidance and understanding of the scene. Current methods in the literature rely on bulky depth cameras to create maps of the anatomy; however, this does not translate well to space-limited clinical applications. Monocular cameras are small and enable minimally invasive surgery in tight spaces, but additional proc...