Fig 2 - available via license: Creative Commons Attribution 4.0 International
Illustration of two image-like representations of event data. (a) The Time Surface (TS) with exponential decay used in [40, 41]; (b) the proposed Linear Time Surface (LTS). The color coding is explained by the adjacent color bar.
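To make the two representations concrete, the following sketch builds both maps from a per-pixel last-event timestamp. The exact decay constants and normalization are assumptions for illustration, not taken from the paper: the exponential surface is assumed to be exp(-(t_ref - t_last)/tau) as in [40, 41], and the linear surface is assumed to be a ramp clipped to [0, 1] over a fixed window.

```python
import numpy as np

def time_surfaces(events, shape, t_ref, tau, window):
    """Build two image-like maps from events given as (x, y, t) tuples.

    Assumed formulas (illustrative, not verbatim from the paper):
      exponential TS:  exp(-(t_ref - t_last) / tau)
      linear TS:       clip(1 - (t_ref - t_last) / window, 0, 1)
    where t_last is the most recent event time at each pixel.
    """
    t_last = np.full(shape, -np.inf)          # pixels with no event stay -inf
    for x, y, t in events:
        if t <= t_ref:
            t_last[y, x] = max(t_last[y, x], t)
    dt = t_ref - t_last                       # elapsed time since last event
    ts_exp = np.exp(-dt / tau)                # exp(-inf) -> 0 for empty pixels
    ts_lin = np.clip(1.0 - dt / window, 0.0, 1.0)
    return ts_exp, ts_lin

# Tiny 3x3 example with three events before the reference time t_ref = 0.10 s.
events = [(1, 0, 0.00), (2, 1, 0.05), (0, 2, 0.09)]
exp_map, lin_map = time_surfaces(events, (3, 3), t_ref=0.10, tau=0.05, window=0.10)
```

Note how the linear variant decays at a constant rate down to zero, whereas the exponential variant decays fastest immediately after an event; this difference is what the gradient-continuity comparison in the caption refers to.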
Source publication
Predicting a potential collision with leading vehicles is an essential functionality of any autonomous/assisted driving system. One bottleneck of existing vision-based solutions is that their update rate is limited by the frame rate of the standard cameras used. In this paper, we present a novel method that estimates the time to collision using a neu...
Context in source publication
Context 1
... illustrated in Fig. 2, the proposed LTS is, on the one hand, similar to the TS in that it also has the property of a distance field, which enables us to establish event-contour associations efficiently. On the other hand, as shown in Fig. 3, the proposed LTS enhances the continuity of the distance field's gradient in a different way. Unlike [41] that ...