Conference Paper

LUCOOP: Leibniz University Cooperative Perception and Urban Navigation Dataset

... Dataset | Year | Source | V2X | Sensors | Size | Agents | Tasks | PA/R
V2X-Sim 1.0 [22] | 2022 | Sim | V2V | L | 10,000 | 2-5 | OD, OT, SS | ✓/✓
V2X-Sim 2.0 [4] | 2022 | Sim | V2V, V2I | C, L | 10,000 | 2-5 | OD, OT, SS | ✓/✓
OPV2V [23] | 2022 | Sim | V2V | C, L | 11,464 | 2-7 | OD, OT | ✓/✓
DAIR-V2X-C [24] | 2022 | Real | V2I | C, L | 38,845 | 2 | OD | -/-
V2XSet [10] | 2022 | Sim | V2V, V2I | C, L | 11,447 | 2-7 | OD | ✓/-
DOLPHINS [25] | 2023 | Sim | V2V, V2I | C, L | 42,736 | 3 | OD | ✓/✓
LUCOOP [26] | 2023 | Real | V2V | L | 54,000 | 3 | OD, OT | ✓/-
V2V4Real [27] | 2023 | Real | V2V | C, L | 60,000 | 2 | OD, OT, DA | ✓/-
V2X-Seq (SPD) [28] | 2023 | Real | V2I | C, L | 15,000 | 2 | OD, OT, TP | -/-
DeepAccident [29] | 2023 | Sim | V2V, V2I | C, L | 57,000 | 1-5 | OD, OT, SS, MP, DA | ✓/-
TumTraf-V2X [30] | 2024 | ...
... framework, which was developed to address the challenges of temporal asynchrony by using a specialized asynchronous subset from DAIR-V2X-C. Alongside the DAIR-V2X-C subset, the dataset also features the DAIR-V2X-V and DAIR-V2X-I subsets, focusing on vehicle and infrastructure only. ...
... LUCOOP [26] is a large-scale real-world V2V dataset created by Leibniz University. It stands out from other real-world datasets by focusing on multi-vehicle urban navigation and collaborative perception. ...
Conference Paper
Full-text available
This survey offers a comprehensive examination of collaborative perception datasets in the context of Vehicle-to-Infrastructure (V2I), Vehicle-to-Vehicle (V2V), and Vehicle-to-Everything (V2X). It highlights the latest developments in large-scale benchmarks that accelerate advancements in perception tasks for autonomous vehicles. The paper systematically analyzes a variety of datasets, comparing them based on aspects such as diversity, sensor setup, quality, public availability, and their applicability to downstream tasks. It also identifies key challenges such as domain shift, sensor setup limitations, and gaps in dataset diversity and availability. It emphasizes the importance of addressing privacy and security concerns in data sharing and dataset creation. The conclusion underscores the necessity for comprehensive, globally accessible datasets and collaborative efforts from both technological and research communities to overcome these challenges and fully harness the potential of autonomous driving.
... A 6 DoF sensor-to-platform calibration and a platform-to-vehicle calibration were carried out before and after the measurement campaign with a laser tracker, as described in Axmann et al. (2023). The extrinsic camera calibrations were additionally carried out before and after the experiments. ...
Conference Paper
In recent years, the demand for accurate and reliable positioning has increased. Especially with regard to autonomous driving, highly accurate positioning information is needed. While conventional Global Navigation Satellite System (GNSS) positioning solutions provide reliable results in open areas such as highways, this is quite different in urban environments (e.g. urban canyons, tunnels or underground car parks). For these areas, GNSS positioning must be supported by other information in order to guarantee accurate and reliable positioning. For this contribution, sensor fusion is applied to a multi-vehicle network in order to make statements about the opportunities and limitations of collaborative navigation on the accuracy and robustness of position determination. The focus is on collaboration itself and not on communication. We use a Real-Time Kinematic (RTK) algorithm supported by inter-vehicle observations from stereo imagery to aid the position estimation. For this paper, we consider real data collected in a GNSS-challenging environment in Hannover, Germany. The original inter-vehicle and GNSS observations are modified into bias-free observations with Gaussian noise. As Vehicle-to-Vehicle observations, we simulate a total of 18 coordinate differences with different precisions. The estimated position results are verified against a reference solution. This approach is then used to understand the potential and limitations of collaborative positioning. We demonstrate that the improvement of the C-RTK solution as well as its magnitude are determined by the precision of the V2V observations, the amount of available GNSS observations, and the performance of the aiding agents in the network. We show that the use of a collaborative RTK leads to an improvement of the 3D RMSE of up to 51% compared to the single-vehicle (SV) RTK solution. The collaborative solution has a particularly positive effect on the determination of the height component. However, a careful selection of collaboration partners is required, since imprecise and inaccurate aiding agents might lead to a significant deterioration compared to the SV solution.
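As a rough illustration of how such V2V coordinate differences can aid a single-vehicle solution (a minimal sketch with illustrative numbers, not the paper's C-RTK algorithm), a weighted least-squares fusion of an ego RTK position with agent-relative observations might look as follows:

```python
# Minimal sketch: fusing a single-vehicle RTK position with V2V
# coordinate-difference observations to aiding agents by weighted least
# squares. All numbers are illustrative assumptions.
import numpy as np

def fuse_position(p_rtk, cov_rtk, aiding):
    """p_rtk: 3-vector ego RTK position; cov_rtk: 3x3 covariance.
    aiding: list of (p_agent, cov_agent, d_v2v, cov_v2v), where d_v2v is the
    measured coordinate difference ego - agent."""
    info = np.linalg.inv(cov_rtk)          # information (inverse covariance)
    rhs = info @ p_rtk
    for p_agent, cov_agent, d_v2v, cov_v2v in aiding:
        # The V2V observation predicts ego = p_agent + d_v2v, with the
        # combined uncertainty of the agent position and the V2V measurement.
        cov = cov_agent + cov_v2v
        w = np.linalg.inv(cov)
        info += w
        rhs += w @ (p_agent + d_v2v)
    cov_fused = np.linalg.inv(info)
    return cov_fused @ rhs, cov_fused

# Illustrative example: one precise aiding agent mainly improves the height component.
p_rtk = np.array([0.0, 0.0, 2.0])                  # ego RTK solution (m)
cov_rtk = np.diag([0.04, 0.04, 0.25])              # poor height precision
aiding = [(np.array([10.0, 0.0, 0.1]),             # agent position
           np.diag([0.01, 0.01, 0.01]),            # agent covariance
           np.array([-10.0, 0.0, 0.0]),            # measured ego - agent
           np.diag([0.02, 0.02, 0.02]))]           # V2V precision
p_fused, cov_fused = fuse_position(p_rtk, cov_rtk, aiding)
print(p_fused, np.sqrt(np.diag(cov_fused)))
```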
... Moreover, about 80 % of the dataset is recorded on highways, which normally show fewer occlusions and less need for collective perception. 5) LUCOOP: LUCOOP was published by Axmann et al. [28] in 2023. LUCOOP is a real-world dataset which was recorded in the city of Hannover with three CAVs. ...
Preprint
To ensure safe operation of autonomous vehicles in complex urban environments, complete perception of the environment is necessary. However, due to environmental conditions, sensor limitations, and occlusions, this is not always possible from a single point of view. To address this issue, collective perception is an effective method. Realistic and large-scale datasets are essential for training and evaluating collective perception methods. This paper provides the first comprehensive technical review of collective perception datasets in the context of autonomous driving. The survey analyzes existing V2V and V2X datasets, categorizing them based on different criteria such as sensor modalities, environmental conditions, and scenario variety. The focus is on their applicability for the development of connected automated vehicles. This study aims to identify the key criteria of all datasets and to present their strengths, weaknesses, and anomalies. Finally, this survey concludes by making recommendations regarding which dataset is most suitable for collective 3D object detection, tracking, and semantic segmentation.
... However, these simulations cannot generate GNSS raw measurements. To the best of our knowledge, the dataset named LUCOOP in [156] is the first public real-world dataset of GNSS-based CP, which provides multi-modal data collected by three ground vehicles equipped with GNSS, UWB, INS, and LiDAR. Table IV summarizes some works on data simulation, together with the data content and brief descriptions. ...
Article
With the advancements of communication technologies and the Global Navigation Satellite Systems (GNSS), research and development in GNSS-based Cooperative Positioning (CP) is growing increasingly due to its potential of performance improvement compared to single-agent positioning. This paper presents a comprehensive survey that provides an overview of the state-of-the-art works on GNSS-based CP, categorizing them into six blocks: architecture, data sharing, measurements, estimation algorithms, performance evaluation, and integrity monitoring. Multiple solutions for each block are analyzed and compared if possible. Given that GNSS faces challenges mainly in urban areas, such as non-line-of-sight (NLOS), multipath, and cycle-slip, the discussions primarily focus on vehicular networks within urban environments. Finally, the challenges of practical applications and academic experiments of GNSS-based CP are discussed, and some potential research directions are outlined.
... Interest in dedicated datasets for cooperative V2X tasks has recently been increasing [4]. Apart from multiple synthetic cooperative perception datasets [21], [22], [23], V2V4Real (V2V) [24], DAIR-V2X (V2I) [12], and LUCOOP (V2V, V2I) [25] are notable real-world V2X cooperative perception datasets. ...
Preprint
Full-text available
Connectivity is a main driver for the ongoing megatrend of automated mobility: future Cooperative Intelligent Transport Systems (C-ITS) will connect road vehicles, traffic signals, roadside infrastructure, and even vulnerable road users, sharing data and compute for safer, more efficient, and more comfortable mobility. In terms of communication technology for realizing such vehicle-to-everything (V2X) communication, the WLAN-based peer-to-peer approach (IEEE 802.11p, ITS-G5 in Europe) competes with C-V2X based on cellular technologies (4G and beyond). Irrespective of the underlying communication standard, common message interfaces are crucial for a common understanding between vehicles, especially from different manufacturers. Targeting this issue, the European Telecommunications Standards Institute (ETSI) has been standardizing V2X message formats such as the Cooperative Awareness Message (CAM). In this work, we present V2AIX, a multi-modal real-world dataset of ETSI ITS messages gathered in public road traffic, the first of its kind. Collected in measurement drives and with stationary infrastructure, we have recorded more than 285 000 V2X messages from more than 2380 vehicles and roadside units in public road traffic. Alongside a first analysis of the dataset, we present a way of integrating ETSI ITS V2X messages into the Robot Operating System (ROS). This enables researchers to not only thoroughly analyze real-world V2X data, but to also study and implement standardized V2X messages in ROS-based automated driving applications. The full dataset is publicly available for non-commercial use at v2aix.ika.rwth-aachen.de.
... In recent years, with the increasingly widespread use of sensors such as LiDAR and multi-view cameras on intelligent systems such as autonomous vehicles, unmanned aerial vehicles (UAV) [18,19], unmanned surface vehicles (USV) [20,21], and industrial robots [22], data collection and enhancement of system perception capabilities have been greatly facilitated [23]. Notably, datasets from individual intelligent entities have significantly promoted perception performance improvement in various domains [24][25][26]. However, due to factors such as occlusion, truncation, distance, and blind spots in the field of view, data obtained from sensors carried by individual agents still suffer from the issue of incomplete information [27]. ...
Article
Full-text available
The rapid development of vehicle cooperative 3D object-detection technology has significantly improved the perception capabilities of autonomous driving systems. However, ship cooperative perception technology has received limited research attention compared to autonomous driving, primarily due to the lack of appropriate ship cooperative perception datasets. To address this gap, this paper proposes S2S-sim, a novel ship cooperative perception dataset. Ship navigation scenarios were constructed using Unity3D, and accurate ship models were incorporated while simulating sensor parameters of real LiDAR sensors to collect data. The dataset comprises three typical ship navigation scenarios, including ports, islands, and open waters, featuring common ship classes such as container ships, bulk carriers, and cruise ships. It consists of 7000 frames with 96,881 annotated ship bounding boxes. Leveraging this dataset, we assess the performance of mainstream vehicle cooperative perception models when transferred to ship cooperative perception scenes. Furthermore, considering the characteristics of ship navigation data, we propose a regional clustering fusion-based ship cooperative 3D object-detection method. Experimental results demonstrate that our approach achieves state-of-the-art performance in 3D ship object detection, indicating its suitability for ship cooperative perception.
Article
In this study, we address the challenge of constructing continuous 3D models that accurately represent uncertain surfaces, derived from noisy LiDAR data. Building upon our prior work, which utilized the Gaussian Process (GP) and Gaussian Mixture Model (GMM) for structured building models, we introduce a more generalized approach tailored for complex surfaces in urban scenes, where GMM Regression and GP with derivative observations are applied. A Hierarchical GMM (HGMM) is employed to optimize the number of GMM components and speed up the GMM training. With the prior map obtained from HGMM, GP inference is followed for the refinement of the final map. Our approach models the implicit surface of the geo-object and enables the inference of the regions that are not completely covered by measurements. The integration of GMM and GP yields well-calibrated uncertainties alongside the surface model, enhancing both accuracy and reliability. The proposed method is evaluated on real data collected by a mobile mapping system. Compared to the performance in mapping accuracy and uncertainty quantification of other state-of-the-art methods, the proposed method achieves lower RMSEs, higher log-likelihood and lower computational costs for the evaluated data.
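A minimal sketch of the general idea (my own 1-D illustration, not the paper's HGMM + GP pipeline): a GMM regression prior over noisy heights, refined by a Gaussian Process on the residuals, yielding a surface with per-point uncertainty. All data and hyperparameters are assumptions.

```python
# GMM regression prior over a 1-D height profile, refined by a Gaussian
# Process on the residuals; the GP provides the predictive uncertainty.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
z = np.where(x < 5, 0.2 * x, 2.0 - 0.1 * x) + rng.normal(0, 0.05, x.size)  # noisy "LiDAR" heights

# GMM prior: fit on the joint (x, z) samples and regress z on x.
gmm = GaussianMixture(n_components=4, random_state=0).fit(np.c_[x, z])

def gmm_regress(xq, gmm):
    mu, S, w = gmm.means_, gmm.covariances_, gmm.weights_
    # responsibilities from the marginal density of x
    px = np.array([w[k] * np.exp(-0.5 * (xq - mu[k, 0])**2 / S[k, 0, 0])
                   / np.sqrt(2 * np.pi * S[k, 0, 0]) for k in range(len(w))])
    r = px / px.sum(axis=0, keepdims=True)
    cond = np.array([mu[k, 1] + S[k, 1, 0] / S[k, 0, 0] * (xq - mu[k, 0])
                     for k in range(len(w))])
    return (r * cond).sum(axis=0)

prior = gmm_regress(x, gmm)

# GP refinement on the residuals with an explicit noise term.
gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.05**2))
gp.fit(x[:, None], z - prior)

xq = np.linspace(0, 10, 50)
mean, std = gp.predict(xq[:, None], return_std=True)
surface = gmm_regress(xq, gmm) + mean      # refined surface
print(surface[:5], std[:5])                # heights with per-point uncertainty
```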
Preprint
In the typical urban intersection scenario, both vehicles and infrastructures are equipped with visual and LiDAR sensors. By successfully integrating the data from vehicle-side and road monitoring devices, a more comprehensive and accurate environmental perception and information acquisition can be achieved. The Calibration of sensors, as an essential component of autonomous driving technology, has consistently drawn significant attention. Particularly in scenarios involving multiple sensors collaboratively perceiving and addressing localization challenges, the requirement for inter-sensor calibration becomes crucial. Recent years have witnessed the emergence of the concept of multi-end cooperation, where infrastructure captures and transmits surrounding environment information to vehicles, bolstering their perception capabilities while mitigating costs. However, this also poses technical complexities, underscoring the pressing need for diverse end calibration. Camera and LiDAR, the bedrock sensors in autonomous driving, exhibit expansive applicability. This paper comprehensively examines and analyzes the calibration of multi-end camera-LiDAR setups from vehicle, roadside, and vehicle-road cooperation perspectives, outlining their relevant applications and profound significance. Concluding with a summary, we present our future-oriented ideas and hypotheses.
Article
Full-text available
3D Mapping-Aided (3DMA) Global Navigation Satellite System (GNSS) is a widely used method to mitigate multipath errors. Various studies have been presented that utilize 3D building model data in conjunction with ray-tracing algorithms to compute and predict satellites’ visibility conditions and compute delays caused by signal reflection. To simulate, model and potentially correct multipath errors in highly dynamic applications, such as autonomous driving, the satellite–receiver–reflector geometry has to be known precisely in a common reference frame. Three-dimensional building models are often provided by regional public or private services and the coordinate information is usually given in a coordinate system of a map projection. Inconsistencies in the coordinate frames used to express the satellite and user coordinates, as well as the reflector surfaces, lead to falsely determined multipath errors and, thus, reduce the performance of 3DMA GNSS. This paper aims to provide the needed transformation steps to consider when integrating 3D building model data, user position, and GNSS orbit information. The impact of frame inconsistencies on the computed extra path delay is quantified based on a simulation study in a local 3D building model; they can easily amount to several meters. Differences between the extra path-delay computations in a metric system and a map projection are evaluated and corrections are proposed to both variants depending on the accuracy needs and the intended use.
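A minimal sketch of the frame chain the paper discusses, assuming the building model is given in ETRS89 / UTM zone 32N (EPSG:25832) with ellipsoidal heights and satellite orbits expressed in an Earth-centred frame; the EPSG codes, coordinates, and two-step chain are illustrative assumptions, with pyproj doing the actual transformations:

```python
from pyproj import Transformer

utm_to_geo = Transformer.from_crs("EPSG:25832", "EPSG:4258", always_xy=True)   # E, N -> lon, lat
geo_to_ecef = Transformer.from_crs("EPSG:4937", "EPSG:4936", always_xy=True)   # lon, lat, h -> X, Y, Z

e, n, h = 550_000.0, 5_802_000.0, 55.0      # hypothetical building corner (m)

lon, lat = utm_to_geo.transform(e, n)
x, y, z = geo_to_ecef.transform(lon, lat, h)  # now in the same frame as the satellite orbits
print((lon, lat), (x, y, z))

# If the model stores normal/orthometric heights instead of ellipsoidal ones,
# a geoid undulation has to be applied first; this is exactly the kind of
# frame inconsistency whose effect on the extra path delay the paper quantifies.
```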
Article
Full-text available
Sharing collective perception messages (CPM) between vehicles is investigated to decrease occlusions, so as to improve perception accuracy and safety of autonomous driving. However, highly accurate data sharing and low communication overhead are a major challenge for collective perception, especially when real-time communication is required among connected and automated vehicles. In this paper, we propose an efficient and effective keypoints-based deep feature fusion framework built on the 3D object detector PV-RCNN, called Fusion PV-RCNN (FPV-RCNN for short), for collective perception. We introduce a high-performance bounding box proposal matching module and a keypoints selection strategy to compress the CPM size and solve the multi-vehicle data fusion problem. Besides, we also propose an effective localization error correction module with maximum consensus to increase the robustness of the data fusion. Compared to a bird's-eye view (BEV) keypoints feature fusion, FPV-RCNN achieves improved detection accuracy by about 9% at a high evaluation criterion (IoU 0.7) on the synthetic dataset COMAP dedicated to collective perception. In addition, its performance is comparable to two raw data fusion baselines that have no data loss in sharing. Moreover, our method also significantly decreases the CPM size to less than 0.3 KB, which is about 50 times smaller than the BEV feature map sharing used in previous works. Even with further decreased CPM feature channels, i.e., from 128 to 32, the detection performance does not show obvious drops. The code of our method is available at https://github.com/YuanYunshuang/FPV_RCNN.
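The back-of-the-envelope sketch below (my own illustration, not the FPV-RCNN code) shows how the shared CPM payload scales with the number of selected keypoints and feature channels; all counts and byte sizes are assumptions:

```python
import numpy as np

def cpm_size_kb(n_keypoints, n_channels, bytes_per_value=2, coord_bytes=12):
    # per keypoint: 3 float32 coordinates + a quantised feature vector
    return n_keypoints * (coord_bytes + n_channels * bytes_per_value) / 1024.0

rng = np.random.default_rng(0)
scores = rng.random(2048)                       # per-keypoint confidences (illustrative)
features = rng.standard_normal((2048, 128)).astype(np.float16)

k = 16                                          # keep only the most confident keypoints
keep = np.argsort(scores)[-k:]
shared = features[keep]                         # this is what would be transmitted
print("selected feature block:", shared.nbytes / 1024.0, "KB (features only)")

for channels in (128, 32):                      # fewer channels -> smaller CPM
    print(channels, "channels ->", round(cpm_size_kb(k, channels), 3), "KB")
```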
Article
Full-text available
Collective perception of connected vehicles can sufficiently increase the safety and reliability of autonomous driving by sharing perception information. However, collecting real experimental data for such scenarios is extremely expensive. Therefore, we built a computationally efficient co-simulation synthetic data generator through CARLA and SUMO simulators. The simulated data contain image and point cloud data as well as ground truth for object detection and semantic segmentation tasks. To verify the superior performance gain of collective perception over single-vehicle perception, we conducted experiments of vehicle detection, which is one of the most important perception tasks for autonomous driving, on this data set. A 3D object detector and a Bird’s Eye View (BEV) detector are trained and then tested with different configurations of the number of cooperative vehicles and vehicle communication ranges. The experiment results showed that collective perception can not only dramatically increase the overall mean detection accuracy but also the localization accuracy of detected bounding boxes. Besides, a vehicle detection comparison experiment showed that the detection performance drop caused by sensor observation noise can be canceled out by redundant information collected by multiple vehicles.
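A skeleton of such a coupling (assumed server address, sensor attributes, and SUMO configuration file; not the authors' generator) could spawn a LiDAR in CARLA while stepping SUMO traffic via TraCI; a real co-simulation additionally mirrors SUMO vehicles into CARLA and keeps both clocks synchronized:

```python
import carla
import traci

client = carla.Client("localhost", 2000)
client.set_timeout(5.0)
world = client.get_world()

blueprints = world.get_blueprint_library()
vehicle_bp = blueprints.filter("vehicle.*")[0]
vehicle = world.spawn_actor(vehicle_bp, world.get_map().get_spawn_points()[0])

lidar_bp = blueprints.find("sensor.lidar.ray_cast")
lidar_bp.set_attribute("channels", "64")
lidar_bp.set_attribute("range", "100")
lidar = world.spawn_actor(lidar_bp, carla.Transform(carla.Location(z=2.4)),
                          attach_to=vehicle)
lidar.listen(lambda data: data.save_to_disk(f"lidar_{data.frame:06d}.ply"))

traci.start(["sumo", "-c", "scenario.sumocfg"])      # hypothetical SUMO config
for _ in range(100):
    traci.simulationStep()                           # advance SUMO traffic
    world.tick()                                     # advance CARLA (assumes synchronous mode)
traci.close()
```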
Conference Paper
Full-text available
Semantic segmentation of large-scale outdoor point clouds is essential for urban scene understanding in various applications, especially autonomous driving and urban high-definition (HD) mapping. With rapid developments of mobile laser scanning (MLS) systems, massive point clouds are available for scene understanding, but publicly accessible large-scale labeled datasets, which are essential for developing learning-based methods, are still limited. This paper introduces Toronto-3D, a large-scale urban outdoor point cloud dataset acquired by an MLS system in Toronto, Canada, for semantic segmentation. This dataset covers approximately 1 km of point clouds and consists of about 78.3 million points with 8 labeled object classes. Baseline experiments for semantic segmentation were conducted and the results confirmed the capability of this dataset to train deep learning models effectively. Toronto-3D is released to encourage new research, and the labels will be improved and updated with feedback from the research community.
Conference Paper
Full-text available
Compared to a single robot, a swarm system can conduct a given task in a shorter time, and it is more robust to system failures of each agent. To successfully execute cooperative missions with multiple agents, accurate relative positioning is important. If global positioning (e.g. with a GNSS-based positioning) is available, we can easily compute relative positions. In environments where a global positioning system is unreliable or unavailable, visual odometry can be applied for estimating each agent's egomotion, by exploiting onboard cameras. Using these self-localization results, relative positions between agents can be estimated, once the relative geometry between agents is initialized. However, since visual odometry is a dead-reckoning process, the estimation errors accumulate inherently without bounds. We propose a cooperative localization method using visual odometry and inter-agent range measurements. Using the proposed method, we can reduce the drifts in position estimates with very modest requirements on the communication channel between agents.
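A minimal 2-D sketch of the idea (not the paper's method, with made-up numbers): a single inter-agent range measurement pulls a drifting visual-odometry estimate back toward the geometrically consistent position via Gauss-Newton on the range residual. In practice, many ranges over time and the VO covariances would be fused in a filter:

```python
import numpy as np

p_a = np.array([0.0, 0.0])        # agent A, assumed well localised
p_b_vo = np.array([9.0, 1.5])     # agent B from drifting visual odometry
r_meas = 10.0                     # measured inter-agent range A<->B (illustrative)

for _ in range(5):                # Gauss-Newton on the range residual
    diff = p_b_vo - p_a
    r_pred = np.linalg.norm(diff)
    J = diff / r_pred             # d r / d p_b (unit line-of-sight vector)
    delta = J * (r_meas - r_pred) # step along the line of sight
    p_b_vo = p_b_vo + delta

print(p_b_vo, np.linalg.norm(p_b_vo - p_a))   # range residual driven to ~0
```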
Conference Paper
Full-text available
We have collected an extensive winter autonomous driving data set consisting of over 4TB of data collected between November 2019 and March 2020. Our base configuration features two 16 channel LiDARs, forward facing color camera, wide field of view NIR camera, and an ADAS LWIR camera. RTK corrected GNSS positioning and IMU data is also available. Portions include data from four different HD LiDARs operating in a variety of winter driving conditions. The set highlights some of the unique aspects of operating in northern climates including changing landmarks due to snow accumulation, wildlife, and snowmobiles operating on local roadways.
Conference Paper
Full-text available
Semantic scene understanding is important for various applications. In particular, self-driving cars need a fine-grained understanding of the surfaces and objects in their vicinity. Light detection and ranging (LiDAR) provides precise geometric information about the environment and is thus a part of the sensor suites of almost all self-driving cars. Despite the relevance of semantic scene understanding for this application, there is a lack of a large dataset for this task which is based on an automotive LiDAR. In this paper, we introduce a large dataset to propel research on laser-based semantic segmentation. We annotated all sequences of the KITTI Vision Odometry Benchmark and provide dense point-wise annotations for the complete 360° field of view of the employed automotive LiDAR. We propose three benchmark tasks based on this dataset: (i) semantic segmentation of point clouds using a single scan, (ii) semantic segmentation using multiple past scans, and (iii) semantic scene completion, which requires anticipating the semantic scene in the future. We provide baseline experiments and show that there is a need for more sophisticated models to efficiently tackle these tasks. Our dataset opens the door for the development of more advanced methods, but also provides plentiful data to investigate new research directions.
Article
Full-text available
In this paper, we address the challenge of robust indoor positioning using integrated UWB and Wi-Fi measurements. A key limitation of any fusion algorithm is whether the distribution that describes the random errors in the measurements has been correctly specified. Here, we describe the details of a set of practical experiments conducted on a purpose built calibration range, to evaluate the performance of commercial UWB sensors with Wi-Fi measurements as captured by an in-house smartphone application. In this paper, we present comparisons of ranges from the UWB sensors and the Wi-Fi built into the smartphone to true ranges obtained from a robotic total station. This approach is validated in both static and kinematic tests. The calibration range has been established as one component of an indoor laboratory to undertake a more diverse research agenda into robust indoor positioning systems. The experiments presented here have been conducted collaboratively under the joint FIG (WG5.5) and IAG (SC4.2) working groups on multi-sensor systems.
Article
Full-text available
Global Navigation Satellite Systems (GNSS) deliver absolute position and velocity, as well as time information (P, V, T). However, in urban areas, the GNSS navigation performance is restricted due to signal obstructions and multipath. This is especially true for applications dealing with highly automatic or even autonomous driving. Subsequently, multi-sensor platforms including laser scanners and cameras, as well as map data are used to enhance the navigation performance, namely in accuracy, integrity, continuity and availability. Although well-established procedures for integrity monitoring exist for aircraft navigation, for sensors and fusion algorithms used in automotive navigation, these concepts are still lacking. The research training group i.c.sens, integrity and collaboration in dynamic sensor networks, aims to fill this gap and to contribute to relevant topics. This includes the definition of alternative integrity concepts for space and time based on set theory and interval mathematics, establishing new types of maps that report on the trustworthiness of the represented information, as well as taking advantage of collaboration by improved filters incorporating person and object tracking. In this paper, we describe our approach and summarize the preliminary results.
Conference Paper
Full-text available
Scene parsing aims to assign a class (semantic) label for each pixel in an image. It is a comprehensive analysis of an image. Given the rise of autonomous driving, pixel-accurate environmental perception is expected to be a key enabling technical piece. However, providing a large scale dataset for the design and evaluation of scene parsing algorithms, in particular for outdoor scenes, has been difficult. The per-pixel labelling process is prohibitively expensive, limiting the scale of existing ones. In this paper, we present a large-scale open dataset, ApolloScape, that consists of RGB videos and corresponding dense 3D point clouds. Compared with existing datasets, our dataset has the following unique properties. The first is its scale: our initial release contains over 140K images, each with its per-pixel semantic mask, and up to 1M is scheduled. The second is its complexity. Captured in various traffic conditions, the number of moving objects averages from tens to over one hundred. And the third is the 3D attribute, each image is tagged with high-accuracy pose information at cm accuracy and the static background point cloud has mm relative accuracy. We are able to label these many images by an interactive and efficient labelling pipeline that utilizes the high-quality 3D point cloud. Moreover, our dataset also contains different lane markings based on the lane colors and styles. We expect our new dataset can deeply benefit various autonomous driving related applications including but not limited to 2D/3D scene understanding, localization, transfer learning, and driving simulation.
Article
Full-text available
Absolute positioning of vehicles is based on Global Navigation Satellite Systems (GNSS) combined with on-board sensors and high-resolution maps. In Cooperative Intelligent Transportation Systems (C-ITS), the positioning performance can be augmented by means of vehicular networks that enable vehicles to share location-related information. This paper presents an Implicit Cooperative Positioning (ICP) algorithm that exploits the Vehicle-to-Vehicle (V2V) connectivity in an innovative manner, avoiding the use of explicit V2V measurements such as ranging. In the ICP approach, vehicles jointly localize non-cooperative physical features (such as people, traffic lights or inactive cars) in the surrounding areas, and use them as common noisy reference points to refine their location estimates. Information on sensed features are fused through V2V links by a consensus procedure, nested within a message passing algorithm, to enhance the vehicle localization accuracy. As positioning does not rely on explicit ranging information between vehicles, the proposed ICP method is amenable to implementation with off-the-shelf vehicular communication hardware. The localization algorithm is validated in different traffic scenarios, including a crossroad area with heterogeneous conditions in terms of feature density and V2V connectivity, as well as a real urban area by using Simulation of Urban MObility (SUMO) for traffic data generation. Performance results show that the proposed ICP method can significantly improve the vehicle location accuracy compared to the stand-alone GNSS, especially in harsh environments, such as in urban canyons, where the GNSS signal is highly degraded or denied.
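A toy sketch of the implicit cooperative positioning idea (a strong simplification of the paper's consensus and message-passing scheme, with assumed noise-free feature detections): two vehicles estimate the same passive feature, average their estimates, and feed the disagreement back into their own positions:

```python
import numpy as np

# Biased GNSS position estimates of two vehicles (truth at (0, 0) and (20, 0)).
p1, p2 = np.array([1.0, 0.5]), np.array([19.2, -0.4])

# Each vehicle detects the same passive feature (truth at (10, 5)) relative to
# its own true position; the detections are assumed noise-free for clarity.
z1, z2 = np.array([10.0, 5.0]), np.array([-10.0, 5.0])

f1, f2 = p1 + z1, p2 + z2          # per-vehicle estimates of the feature
f_consensus = 0.5 * (f1 + f2)      # consensus (simple average over V2V links)

# Feed the disagreement back into the vehicle positions.
p1_new = p1 + (f_consensus - f1)   # moves from (1.0, 0.5) toward the truth (0, 0)
p2_new = p2 + (f_consensus - f2)   # moves from (19.2, -0.4) toward the truth (20, 0)
print(p1_new, p2_new)
```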
Article
Full-text available
Future driver assistance systems will rely on accurate, reliable and continuous knowledge on the position of other road participants, including pedestrians, bicycles and other vehicles. The usual approach to tackle this requirement is to use on-board ranging sensors inside the vehicle. Radar, laser scanners or vision based systems are able to detect objects in their line-of-sight. In contrast to these non-cooperative ranging sensors, cooperative approaches follow a strategy in which other road participants actively support the estimation of the relative position. The limitations of on-board ranging sensors regarding their detection range and angle of view and the facility of blockage can be approached by using a cooperative approach based on vehicle-to-vehicle communication. The fusion of both, cooperative and non-cooperative strategies, seems to offer the largest benefits regarding accuracy, availability and robustness. This survey gives the reader a comprehensive review about the performance and limitations of commercially available radar sensors and laser scanners and a summary on the current research in the area of vision-based systems. Moreover, the state-of-the-art in GNSS-based vehicle localization techniques and the latest findings in vehicular communication are reviewed and put into the context of vehicle relative positioning.
Code
Full-text available
GFZRNX is a software toolbox for Global Navigation Satellite System (GNSS) data provided in the REceiver Independent EXchange format (RINEX) of the major versions 2 and 3.

The following RINEX data types are supported:
- Observation data
- Navigation data
- Meteorological data

The following global and regional satellite systems are supported:
- GPS - Global Positioning System (USA)
- GLONASS - GLObal NAvigation Satellite System (RUS)
- BEIDOU - Chinese Global and Regional Navigation Satellite System (CHN)
- GALILEO - European Global Navigation Satellite System
- IRNSS - Indian Regional Navigation Satellite System (IND)
- QZSS - Quasi Zenith Satellite System (JAP)

The following operations/tasks are supported:
- RINEX data check and repair
- RINEX data format conversion (version 3 to 2 and vice versa)
- RINEX data splice
- RINEX data split
- RINEX data statistics generation
- RINEX data manipulations like: (1) data sampling, (2) observation types selection, (3) satellite systems selection, (4) elimination of overall empty or sparse observation types
- Automatic version dependent file naming on output data
- RINEX data header editing
- RINEX data meta data extraction
- RINEX data comparison

The following operating systems are supported:
- Microsoft Windows (64)
- Microsoft Windows (32)
- Apple macOS
- ORACLE Solaris (SPARC)
- ORACLE Solaris (i86)
- Linux (64)
- Linux (32)

Please find the executables and the documentation via: http://semisys.gfz-potsdam.de/semisys/scripts/download/index.php (GFZ Software -> gfzrnx)
Conference Paper
Full-text available
Visual understanding of complex urban street scenes is an enabling factor for a wide range of applications. Object detection has benefited enormously from large-scale datasets, especially in the context of deep learning. For semantic urban scene understanding, however, no current dataset adequately captures the complexity of real-world urban scenes. To address this, we introduce Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling. Cityscapes is comprised of a large, diverse set of stereo video sequences recorded in streets from 50 different cities. 5000 of these images have high quality pixel-level annotations; 20000 additional images have coarse annotations to enable methods that leverage large volumes of weakly-labeled data. Crucially, our effort exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity. Our accompanying empirical study provides an in-depth analysis of the dataset characteristics, as well as a performance evaluation of several state-of-the-art approaches based on our benchmark.
Article
Full-text available
We present a novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, we recorded 6 hours of traffic scenarios at 10–100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS/IMU inertial navigation system. The scenarios are diverse, capturing real-world traffic situations, and range from freeways over rural areas to inner-city scenes with many static and dynamic objects. Our data is calibrated, synchronized and timestamped, and we provide the rectified and raw image sequences. Our dataset also contains object labels in the form of 3D tracklets, and we provide online benchmarks for stereo, optical flow, object detection and other tasks. This paper describes our recording platform, the data format and the utilities that we provide.
Conference Paper
Autonomous vehicles may make wrong decisions due to inaccurate detection and recognition. Therefore, an intelligent vehicle can combine its own data with that of other vehicles to enhance perceptive ability, and thus improve detection accuracy and driving safety. However, multi-vehicle cooperative perception requires the integration of real-world scenes, and the traffic of raw sensor data exchange far exceeds the bandwidth of existing vehicular networks. To the best of our knowledge, we are the first to conduct a study on raw-data level cooperative perception for enhancing the detection ability of self-driving systems. In this work, relying on LiDAR 3D point clouds, we fuse the sensor data collected from different positions and angles of connected vehicles. A point cloud based 3D object detection method is proposed to work on a diversity of aligned point clouds. Experimental results on KITTI and our collected dataset show that the proposed system outperforms single-vehicle perception by extending the sensing area, improving detection accuracy, and producing augmented results. Most importantly, we demonstrate it is possible to transmit point cloud data for cooperative perception via existing vehicular network technologies.
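A minimal sketch of raw-data-level fusion (assumed poses and toy point clouds): the cooperating vehicle's LiDAR points are transformed into the ego frame with the relative pose and simply concatenated with the ego point cloud before detection:

```python
import numpy as np

def make_pose(yaw_deg, t):
    # 4x4 homogeneous pose from a yaw angle and a translation vector
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = t
    return T

ego_points = np.random.rand(1000, 3) * 50            # toy ego point cloud (m)
coop_points = np.random.rand(1000, 3) * 50            # toy cooperative point cloud

T_world_ego = make_pose(0.0, [0.0, 0.0, 0.0])         # assumed localisation results
T_world_coop = make_pose(90.0, [30.0, 10.0, 0.0])
T_ego_coop = np.linalg.inv(T_world_ego) @ T_world_coop

coop_h = np.c_[coop_points, np.ones(len(coop_points))]       # homogeneous coordinates
coop_in_ego = (T_ego_coop @ coop_h.T).T[:, :3]

fused = np.vstack([ego_points, coop_in_ego])          # input to the 3D detector
print(fused.shape)
```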
Article
We introduce CARLA, an open-source simulator for autonomous driving research. CARLA has been developed from the ground up to support development, training, and validation of autonomous urban driving systems. In addition to open-source code and protocols, CARLA provides open digital assets (urban layouts, buildings, vehicles) that were created for this purpose and can be used freely. The simulation platform supports flexible specification of sensor suites and environmental conditions. We use CARLA to study the performance of three approaches to autonomous driving: a classic modular pipeline, an end-to-end model trained via imitation learning, and an end-to-end model trained via reinforcement learning. The approaches are evaluated in controlled scenarios of increasing difficulty, and their performance is examined via metrics provided by CARLA, illustrating the platform's utility for autonomous driving research. The supplementary video can be viewed at https://youtu.be/Hp8Dz-Zek2E
Article
This paper presents a large-scale strip adjustment method for LiDAR mobile mapping data, yielding highly precise maps. It uses several concepts to achieve scalability. First, an efficient graph-based pre-segmentation is used, which directly operates on LiDAR scan strip data, rather than on point clouds. Second, observation equations are obtained from a dense matching, which is formulated in terms of an estimation of a latent map. As a result of this formulation, the number of observation equations is not quadratic, but rather linear in the number of scan strips. Third, the dynamic Bayes network, which results from all observation and condition equations, is partitioned into two sub-networks. Consequently, the estimation matrices for all position and orientation corrections are linear instead of quadratic in the number of unknowns and can be solved very efficiently using an alternating least squares approach. It is shown how this approach can be mapped to a standard key/value MapReduce implementation, where each of the processing nodes operates independently on small chunks of data, leading to essentially linear scalability. Results are demonstrated for a dataset of one billion measured LiDAR points and 278,000 unknowns, leading to maps with a precision of a few millimeters.
Simulation framework for collaborative navigation: Development-analysis-optimization
  • N G Fernandez
A2D2: Audi Autonomous Driving Dataset
  • J Geyer
  • Y Kassahun
  • M Mahmudi
  • X Ricou
  • R Durgesh
  • A S Chung
  • L Hauswald
  • V H Pham
  • M Mühlegg
  • S Dorn
  • T Fernandez
  • M Jänicke
  • S Mirashi
  • C Savani
  • M Sturm
  • O Vorobiov
  • M Oelker
  • S Garreis
  • P Schuberth
Cooperative Driving Dataset (CODD)
  • E Arnold
Pandar64 64-Channel Mechanical LiDAR User Manual
  • Hesai Technology Co., Ltd.
VEXXIS Antennas GNSS-850
  • NovAtel Inc.
TERRAPOS User’s Manual Version 2.5
  • Terratec AS
Leica AR20: Visionary 3D GNSS Antenna – Matchless Multipath Suppression
  • Leica Geosystems
Leica Absolute Tracker AT960
  • Hexagon Metrology
PandarXT-32 32-Channel Medium-Range Mechanical LiDAR User Manual
  • Hesai Technology Co., Ltd.
RINEX - The Receiver Independent Exchange Format - Version 3.04
  • International GNSS Service