Metadata label taxonomy for the DMD.

Source publication
Article
Tremendous advances in advanced driver assistance systems (ADAS) have been possible thanks to the emergence of deep neural networks (DNN) and Big Data (BD) technologies. Huge volumes of data can be managed and consumed as training material to create DNN models which feed functions such as lane keeping systems (LKS), automated emergency braking (AEB...

Contexts in source publication

Context 1
... groups allowed us to design the recording protocols and annotation guidelines, reducing ambiguity and boosting the value of the recorded material. The proposed taxonomy of features in the DMD is presented in Figure 3. This taxonomy is flexible and allows the inclusion of other labels within each category. ...
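
By way of illustration only, such a flexible taxonomy can be pictured in code as nested categories holding extensible label lists. In the Python sketch below, every category and label name is a placeholder rather than the DMD's actual schema:

    # Hypothetical sketch of an extensible label taxonomy; the category
    # and label names are illustrative, not the DMD's published schema.
    taxonomy = {
        "driver_actions": ["safe_driving", "texting", "drinking", "yawning"],
        "gaze_zone": [str(z) for z in range(10)],  # e.g. pre-defined zones 0-9
        "objects": ["cellphone", "bottle"],
        "scenario_metadata": ["subject", "context", "video", "camera"],
    }

    def add_label(category: str, label: str) -> None:
        """Extend a category with a new label, as the taxonomy permits."""
        taxonomy.setdefault(category, []).append(label)

    add_label("driver_actions", "reaching_side")  # example of extension
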
Context 2
... Metadata: The rosbag also contained information about the recordings, which was extracted and saved in the annotation file. This data appears in the annotations list as scenario metadata (subject, context, video and camera information), shown in Figure 3. ...
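
A minimal sketch of how one such scenario-metadata entry might look in an annotation file, assuming a JSON-style layout; only the subject, context, video and camera groupings come from the text above, and every field name inside each group and every value is invented:

    import json

    # Hypothetical scenario-metadata entry extracted from a rosbag recording.
    # The subject/context/video/camera grouping follows the text; the inner
    # field names and all values are assumptions.
    scenario_metadata = {
        "subject": {"id": "S01"},
        "context": {"location": "highway", "illumination": "daylight"},
        "video": {"frames": 18000, "fps": 30},
        "camera": {"name": "face_cam", "resolution": [1280, 720]},
    }

    print(json.dumps(scenario_metadata, indent=2))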

Similar publications

Preprint
The hippocampus is crucial for forming new episodic memories. While the encoding of spatial and temporal information (where and when) in the hippocampus is well understood, the encoding of objects (what) remains less clear due to the high dimensionality of object space. Rather than encoding each individual object separately, the hippocampus may instead...

Citations

... Furthermore, most of the experiments are not performed using VR headsets. Regarding user driving, Ortega et al. (2022) published a large dataset that provides, among other features, the location at which the user is looking (one of ten pre-defined zones, numbered zero to nine) as well as the bounding boxes of the face, head, eyes and other objects used as distractors while driving. The dataset also includes frames with users yawning, having microsleeps, texting or drinking. ...
... Rather than sharing a similar data scheme, widespread datasets offer different features, or similar ones captured under different lighting conditions, thus hampering the simultaneous use of multiple datasets. For instance, two of the previously reviewed driving datasets do not detect the gaze vector but a gaze zone; Ortega et al. (2022) labelled ten zones, whereas Naqvi et al. (2018) distinguished seventeen. Still, Kothari et al. (2022) proved that feeding networks with multiple datasets contributed to obtaining better results. ...
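
One hedged way to make such heterogeneous gaze-zone labellings usable together, given the benefit of multi-dataset training reported by Kothari et al. (2022), is to project the finer labelling onto the coarser one through an explicit mapping table. The Python sketch below is illustrative only; the zone correspondences are invented, not taken from either paper:

    # Hypothetical projection of a 17-zone labelling (as in Naqvi et al., 2018)
    # onto a 10-zone labelling (as in Ortega et al., 2022). The modulo rule is
    # a stand-in; a real mapping needs both papers' zone definitions.
    FINE_TO_COARSE = {fine: fine % 10 for fine in range(17)}

    def harmonize(sample: dict, source: str) -> dict:
        """Return a sample whose gaze label lives in the common 10-zone space."""
        if source == "17_zone_dataset":
            sample = {**sample, "gaze_zone": FINE_TO_COARSE[sample["gaze_zone"]]}
        return sample

    print(harmonize({"gaze_zone": 14}, "17_zone_dataset"))  # {'gaze_zone': 4}
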
Article
Virtual reality (VR) has evolved substantially beyond its initial remit of gaming and entertainment, catalyzed by advancements such as improved screen resolutions and more accessible devices. Among various interaction techniques introduced to VR, eye-tracking stands out as a pivotal development. It not only augments immersion but offers a nuanced insight into user behavior and attention. This precision in capturing gaze direction has made eye-tracking instrumental for applications far beyond mere interaction, influencing areas like medical diagnostics, neuroscientific research, educational interventions, and architectural design, to name a few. Though eye-tracking’s integration into VR has been acknowledged in prior reviews, its true depth, spanning the intricacies of its deployment to its broader ramifications across diverse sectors, has been sparsely explored. This survey undertakes that endeavor, offering a comprehensive overview of eye-tracking’s state of the art within the VR landscape. We delve into its technological nuances, its pivotal role in modern VR applications, and its transformative impact on domains ranging from medicine and neuroscience to marketing and education. Through this exploration, we aim to present a cohesive understanding of the current capabilities, challenges, and future potential of eye-tracking in VR, underscoring its significance and the novelty of our contribution.
... Since humans are poor supervisors of automation (Parasuraman & Riley, 1997), another technical safety layer has been suggested, where the human driver is continuously monitored to ensure a sufficient level of fitness (Hayley et al., 2021; Hecht et al., 2018). An issue here is that driver monitoring is difficult, and available systems face challenges that are not easily overcome (Doudou et al., 2019; Koay et al., 2022; Koesdwiady et al., 2017; Ortega et al., 2022; Perkins et al., 2022). Examples of why driver monitoring is difficult include inter- and intra-individual differences and context dependence, and, especially in an automated driving setting, the need to predict the driver's readiness to re-engage within the automated system's warning timeframe. ...
Technical Report
The objective of this work is to describe guidelines for measuring degraded human performance based on driver state and competences from a real-time driver monitoring perspective. The guidelines integrate state-of-the-art knowledge from the literature with know-how from industry and practical results from the Mediator project. The formulated guidelines are defined based on functionality, technological possibilities, safety relevance, and feasibility.
... This method requires a high computation time, as it applies multiple training samples with information about moving vehicles, pedestrians, and motorcyclists. In [13], the authors studied the challenges of building a multi-camera dataset for driver monitoring systems. That work also describes an approach for handling the technical issues that arise during the design and implementation of such monitoring systems. ...
Preprint
The main functions of automated systems rely on advanced sensors for detection and perception of the environment around the vehicle. Radars and cameras are commonly used to detect potential obstacles and vehicles ahead on the road. Nevertheless, cameras can generate spurious detections in extreme conditions such as fog, rain, dust, snow, darkness, and glaring sunlight. Due to the limited vertical field of view of radars, a single radar cannot reliably detect the height of targets. In this paper, a triple radar arrangement (long-range, medium-range, and short-range radars) based on a sensor fusion technique is proposed to detect objects of different sizes in a level 2 Advanced Driver-Assistance System (ADAS). Typical objects, including trucks, pedestrians, and animals, are detected in different scenarios. The developed model considers ISO 26262 and ISO/PAS 21448 to address insufficient robustness and the limitations of the sensors. The sensor and level 2 ADAS system models are developed using MATLAB toolboxes and Simulink. Sensor detection performance is determined by running simulations with the triple radar setup. The obtained results demonstrate that the proposed approach generates accurate detections of targets in all tested scenarios.
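
The paper builds its sensor and system models in MATLAB and Simulink; purely to illustrate the range-partitioned fusion idea (this is not the authors' implementation), a sketch might gate each radar's detections to an assumed operating band and merge the results:

    from dataclasses import dataclass

    @dataclass
    class Detection:
        range_m: float   # radial distance to the target, metres
        height_m: float  # estimated target height, metres

    # Hypothetical operating bands for the three radars; the true ranges
    # depend on the sensors modelled in the paper.
    BANDS = {"short": (0.0, 30.0), "medium": (30.0, 100.0), "long": (100.0, 250.0)}

    def fuse(detections_by_radar: dict) -> list:
        """Keep each radar's detections inside its design band, then merge."""
        fused = []
        for radar, dets in detections_by_radar.items():
            lo, hi = BANDS[radar]
            fused.extend(d for d in dets if lo <= d.range_m < hi)
        return sorted(fused, key=lambda d: d.range_m)

    print(fuse({"short": [Detection(12.0, 1.7)],
                "long": [Detection(180.0, 3.5)]}))
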
Article
Conditionally automated vehicles can be operated on most regular roads without a driver’s supervision. They show excellent potential for market adoption and are now being targeted by numerous auto manufacturers for mass production. The system of such a vehicle enables it to autonomously perform dynamic driving tasks within the operational design domain, but once this system fails or malfunctions, the vehicle will be unable to reliably complete a dynamic driving task. In such cases, the system will send a takeover request, following which the driver needs to immediately take control of the vehicle. The driver’s physical and mental state, as well as the non-driving-related tasks that they are engaged in, affects the time required for them to perform the takeover and the quality of the takeover. To manage driving risks and guarantee the safety of drivers during automated driving, an automated driving system should be able to monitor a driver’s state and behavior, assess their level of alertness, and perform the appropriate actions as required. In recent years, techniques for monitoring a driver’s state have been widely researched, and several practical methods have been proposed. In this review, we survey representative methods, aiming to introduce the concept of driver state monitoring to a broader audience. First, we identify a few typical driver states that are important in driver state monitoring from the perspective of application demands in conditionally automated driving. Then, we categorize and review existing studies on driver state monitoring according to the types of sensing data employed by the proposed methods. Additionally, we collect datasets corresponding to different data types for driver state monitoring. Finally, by analyzing existing issues in driver state monitoring in relation to conditionally automated driving, we provide several suggestions for future research directions in this area, and discuss potential challenges and possible solutions.
Article
Critical issues with current detection systems are their susceptibility to adverse weather conditions and the constrained vertical field of view of the radars, which limits the ability of such systems to accurately detect the height of targets. In this paper, a novel multi-range radar (MRR) arrangement (i.e. triple: long-range, medium-range, and short-range radars) based on the sensor fusion technique is investigated that can detect objects of different sizes in a level 2 advanced driver-assistance system. To improve the accuracy of the detection system, the resilience of the MRR approach is investigated using the Monte Carlo (MC) method for the first time. By adopting the MC framework, this study shows that only a handful of fine-scaled computations are required to accurately predict the statistics of radar detection failure, compared to many expensive trials. The results show substantial computational gains for such a complex problem. The MRR approach improved detection reliability with an increased mean detection distance (4.9% over the medium-range and 13% over the long-range radar) and a reduced standard deviation over existing methods (30% over the medium-range and 15% over the long-range radar). This will help establish a new path toward faster and cheaper development of modern vehicle detection systems.
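
As a hedged sketch of the Monte Carlo idea (not the paper's actual failure model), the probability of a detection failure can be estimated by repeatedly sampling a noisy range measurement against a maximum detection range; the Gaussian noise model and every parameter value below are assumptions:

    import random

    # Toy Monte Carlo estimate of a radar detection-failure probability.
    # Failure is modelled as the noisy perceived range falling beyond the
    # radar's maximum detection range; model and numbers are illustrative.
    def failure_probability(true_range_m: float, max_range_m: float,
                            sigma_m: float, trials: int = 10_000) -> float:
        failures = sum(
            random.gauss(true_range_m, sigma_m) > max_range_m
            for _ in range(trials)
        )
        return failures / trials

    random.seed(0)
    print(failure_probability(true_range_m=95.0, max_range_m=100.0, sigma_m=5.0))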