Article

Increasing system transparency through confidence information in cooperative, automated driving

Taylor & Francis
Behaviour & Information Technology
Conference Paper
Full-text available
Driving simulators are typically used to evaluate next-generation automotive user interfaces in user studies as they offer a replicable driving setting that also allows studying safety-critical and/or future systems. However, this AutomotiveUI experience research is often limited to university or company campuses and their students and staff. To combat that, we introduced a mobile driving simulator lab in a car trailer. We present features but also limitations of this lab, report on experiences after the first days of operation, and discuss further use cases beyond research. During 7 days of user studies with the trailer at a national garden festival, we conducted trials with more than 70 participants from diverse backgrounds. However, executing studies at public events also has its limitations, e.g., on accepted trial duration and potential for biased responses.
Conference Paper
Full-text available
In a fully autonomous driving situation, passengers hand over the steering control to a highly automated system. Autonomous driving behaviour may lead to confusion and negative user experience. When establishing such new technology, the user’s acceptance and understanding are crucial factors regarding success and failure. Using a driving simulator and a mobile application, we evaluated if system transparency during and after the interaction can increase the user experience and subjective feeling of safety and control. We contribute an initial guideline for autonomous driving experience design, bringing together the areas of user experience, explainable artificial intelligence and autonomous driving. The AVAM questionnaire, UEQ-S and interviews show that explanations during or after the ride help turn a negative user experience into a neutral one, which might be due to the increased feeling of control. However, we did not detect an effect for combining explanations during and after the ride.
Article
Full-text available
Context-adaptive functions are not new in the driving context, but even so, investigations into these functions concerning the automation human–machine interface (aHMI) have yet to be carried out. This study presents research into context-adaptive availability notifications for an SAE Level 3 automation in scenarios where participants were surprised by either availability or non-availability. For this purpose, participants (N = 30) took part in a driving simulator study, experiencing a baseline HMI concept as a comparison, and a context-adaptive HMI concept that provided context-adaptive availability notifications with the aim of improving acceptance and usability, while decreasing frustration (due to unexpected non-availability) and gaze deviation from the road when driving manually. Furthermore, it was hypothesized that participants, when experiencing the context-adaptive HMI, would activate the automated driving function more quickly when facing unexpected availability. None of the hypotheses could be statistically confirmed; indeed, where gaze behavior was concerned, the opposite effects were found, indicating increased distraction induced by the context-adaptive HMI. However, the trend with respect to activation time was towards shorter times with the context-adaptive notifications. These results led to the conclusion that context-adaptive availability notifications might not always be beneficial for users, while more salient availability notifications in the case of an unexpected availability could be advantageous.
Article
Full-text available
Automated vehicles (AVs) are on the edge of being available on the mass market. Research often focuses on technical aspects of automation, such as computer vision, sensing, or artificial intelligence. Nevertheless, researchers also identified several challenges from a human perspective that need to be considered for a successful introduction of these technologies. In this paper, we first analyze human needs and system acceptance in the context of AVs. Then, based on a literature review, we provide a summary of current research on in-car driver-vehicle interaction and related human factor issues. This work helps researchers, designers, and practitioners to get an overview of the current state of the art.
Conference Paper
Full-text available
With the development of automated driving technologies, human factors in the automated driving domain are receiving more and more attention for a balanced implementation in commercial vehicle models. One influential human factor is mental workload. The driver's mental workload is crucial for driving safety; if the driver's mental workload is too high, they cannot execute the proper action in a timely manner. In the proposed model, the mental workload is affected by different cognitive channels: visual, spatial, and verbal. In this paper, the relationships among different cognitive channels concerning the driver's mental workload are studied in a driving simulator experiment. Physiological data (pupil diameter), secondary task performance, and NASA-TLX are used for the mental workload measurement. The results show that during manual driving, the driver's mental workload in the visual cognitive channel is the highest, followed by the mental workload in the spatial cognitive channel, and the mental workload in the verbal cognitive channel is the lowest.
Conference Paper
Full-text available
Fully autonomous driving leaves drivers with little opportunity to intervene in the driving decision. Giving drivers more control can enhance their driving experience. We develop two collaborative interface concepts to increase the user experience of drivers in autonomous vehicles. Our aim is to increase the joy of driving and to give drivers competence and autonomy even when driving autonomously. In a driving simulator study (N = 24) we investigate how vehicles and drivers can collaborate to decide on driving actions together. We compare autonomous driving (AD), the option to take back driving control (TBC) and two collaborative driving interface concepts by evaluating usability, user experience, workload, psychological needs, performance criteria and interview statements. The collaborative interfaces significantly increase autonomy and competence compared to AD. Joy is highly represented in the qualitative data during TBC and collaboration. Collaboration proves to be good for situations in which quick decisions are called for.
Conference Paper
Full-text available
Recent research indicates that transparent information on the behavior of automated vehicles positively affects trust, but how such feedback should be composed and if user trust influences the amount of desired feedback is relatively unexplored. Consequently, we conducted an interview study with (N=56) participants, who were presented different videos of an automated vehicle from the ego-perspective. Subjects rated their trust in the vehicle in these situations and could arbitrarily select objects in the driving environment that should be included in augmented reality feedback systems, so that they are able to trust the vehicle and understand its actions. The results show an inverse correlation between situational trust and participants' desire for feedback and further reveal reasons why certain objects should be included in feedback systems. The study also highlights the need for more adaptive in-vehicle interfaces for trust calibration and outlines necessary steps for automatically generating feedback in the future.
Conference Paper
Full-text available
It seems that autonomous driving systems are taking over human responsibilities in the driving task. However, this does not mean that vehicles should not interact with their driver anymore, even in case of full automation. One reason is that the automation is not yet advanced enough to predict other road users' behavior in complex situations, which can lead to sub-optimal action choices and decreased comfort and user experience. In contrast, a human driver may have a more reliable understanding of other road users' intentions which could complement that of the automation. We propose a framework that distinguishes between four levels for interaction with automation. Based on the framework, we introduce a concept which allows drivers to provide prediction-level guidance to an automated driving system through gaze-speech interaction. Results of a pilot user study show that people hold a positive attitude towards prediction-level intervention as well as the gaze-based interaction method.
Article
Full-text available
Current research in human factors and automated driving is increasingly focusing on predictable transitions instead of urgent and critical take-overs. Predictive human–machine interface (HMI) elements displaying the remaining time until the next request to intervene were identified as a user need, especially when the user is engaging in non-driving related activities (NDRA). However, these estimations are prone to errors due to changing traffic conditions and updated map-based information. Thus, we investigated a confidence display for Level 3 automated driving time estimations. Based on a preliminary study, a confidence display resembling a mobile phone connectivity symbol was developed. In a mixed-design driving simulator study with 32 participants, we assessed the impact of the confidence display concept (within factor) on usability, frustration, trust and acceptance during city and highway automated driving (between factor). During automated driving sections, participants engaged in a naturalistic visual NDRA to create a realistic scenario. Significant effects were found for the scenario: participants in the city experienced higher levels of frustration. However, the confidence display had no significant impact on the subjective evaluation, and most participants preferred the baseline HMI without a confidence symbol.
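How the connectivity-style symbol was driven is not described in this abstract. As a rough sketch only, the snippet below (function name, bar count, and rounding rule are assumptions, not taken from the paper) shows how a confidence estimate for the remaining time until a request to intervene could be mapped onto a phone-signal-like bar display:

```python
def confidence_to_bars(confidence: float, max_bars: int = 4) -> str:
    """Map a confidence estimate in [0, 1] onto a connectivity-style bar
    symbol (illustrative only; the thresholds are assumed, not from the paper)."""
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must lie in [0, 1]")
    filled = round(confidence * max_bars)
    return "▮" * filled + "▯" * (max_bars - filled)

# Example: a fairly reliable time-to-takeover estimate on the highway
print(confidence_to_bars(0.8))   # ▮▮▮▯
# Example: an uncertain estimate in dense city traffic
print(confidence_to_bars(0.3))   # ▮▯▯▯
```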
Article
Full-text available
Within a workshop on evaluation methods for automated vehicles (AVs) at the Driving Assessment 2019 symposium in Santa Fe, New Mexico, a heuristic evaluation methodology that aims at supporting the development of human–machine interfaces (HMIs) for AVs was presented. The goal of the workshop was to bring together members of the human factors community to discuss the method and to further promote the development of HMI guidelines and assessment methods for the design of HMIs of automated driving systems (ADSs). The workshop included hands-on experience of rented series production partially automated vehicles, the application of the heuristic assessment method using a checklist, and intensive discussions about possible revisions of the checklist and the method itself. The aim of the paper is to summarize the results of the workshop, which will be used to further improve the checklist method and make the process available to the scientific community. The participants all had previous experience in HMI design of driver assistance systems, as well as in development and evaluation methods. They brought valuable ideas into the discussion with regard to the overall value of the tool against the background of the intended application, concrete improvements of the checklist (e.g., categorization of items; checklist items that are currently perceived as missing or redundant in the checklist), when in the design process the tool should be applied, and improvements for the usability of the checklist.
Article
Full-text available
Determining an appropriate time to execute a lane change is a critical issue for the development of Autonomous Vehicles (AVs). However, few studies have considered the rear and the front vehicle drivers' risk perception while developing a human-like lane-change decision model. This paper aims to develop a lane-change decision model for AVs and to identify a two-level threshold that conforms to a driver's perception of the ability to safely change lanes with a rear vehicle approaching fast. Based on the signal detection theory and extreme moment trials on a real highway, two thresholds of safe lane change were determined with consideration of the risk perception of the rear and the subject vehicle drivers, respectively. The rear vehicle's Minimum Safe Deceleration (MSD) during the lane change maneuver of the subject vehicle was selected as the lane change safety indicator, and was calculated using the proposed human-like lane-change decision model. The results showed that, compared with the driver in the front extreme moment trial, the driver in the rear extreme moment trial is more conservative during the lane change process. To meet the safety expectations of the subject and rear vehicle drivers, the primary and secondary safe thresholds were determined to be 0.85 m/s² and 1.76 m/s², respectively. The decision model can help make AVs safer and more polite during lane changes, as it not only improves acceptance of the intelligent driving system, but also further ensures the rear vehicle driver's safety.
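The abstract reports the two thresholds but not the paper's MSD formulation. The sketch below is therefore only illustrative: it uses a simple constant-speed kinematic approximation of the rear vehicle's required deceleration (an assumption, not the authors' model) and applies the reported 0.85 m/s² and 1.76 m/s² values as a two-level decision rule.

```python
def minimum_safe_deceleration(gap_m: float, v_rear: float, v_subject: float) -> float:
    """Rough kinematic approximation (an assumption, not the paper's model):
    deceleration the rear vehicle would need to avoid closing the gap,
    assuming the subject vehicle keeps a constant speed."""
    closing_speed = v_rear - v_subject          # m/s
    if closing_speed <= 0 or gap_m <= 0:
        return 0.0                              # no conflict if the gap is not closing
    return closing_speed ** 2 / (2.0 * gap_m)   # m/s^2

def lane_change_decision(msd: float,
                         primary_threshold: float = 0.85,
                         secondary_threshold: float = 1.76) -> str:
    """Two-level rule using the thresholds reported in the abstract (m/s^2)."""
    if msd <= primary_threshold:
        return "change lane"                    # comfortable for the rear driver
    if msd <= secondary_threshold:
        return "change lane only if necessary"  # acceptable but less polite
    return "abort lane change"                  # would force harsh braking behind

msd = minimum_safe_deceleration(gap_m=40.0, v_rear=33.3, v_subject=27.8)  # ~120 vs ~100 km/h
print(round(msd, 2), lane_change_decision(msd))
```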
Article
Full-text available
Technological developments in the domain of vehicle automation are targeted toward driver-less, or driver-out-of-the-loop, driving. The main societal motivation for this ambition is that the majority of (fatal) accidents with manually driven vehicles are due to human error. However, when interacting with technology, users often experience the need to customize the technology to their personal preferences. This paper considers how this might apply to vehicle automation, through a conceptual analysis of relevant use cases. The analysis proceeds by comparing how handling of relevant situations is likely to differ between manual driving and automated driving. The results of the analysis indicate that full out-of-the-loop automated driving may not be acceptable to users of the technology. It is concluded that a technology that allows shared control between the vehicle and the user should be pursued. Furthermore, implications of this view are explored for the concrete temporal dynamics of shared control, and general characteristics of human–machine interfaces that support shared control are proposed. Finally, implications of the proposed view and directions for further research are discussed.
Article
Full-text available
During automated driving, there is a need for interaction between the automated vehicle (AV) and the passengers inside the vehicle and between the AV and the surrounding road users outside of the car. For this purpose, different types of human machine interfaces (HMIs) are implemented. This paper introduces an HMI framework and describes the different HMI types and the factors influencing their selection and content. The relationship between these HMI types and their influencing factors is also presented in the framework. Moreover, the interrelations of the HMI types are analyzed. Furthermore, we describe how the framework can be used in academia and industry to coordinate research and development activities. With the help of the HMI framework, we identify research gaps in the field of HMI for automated driving to be explored in the future.
Conference Paper
Full-text available
The intentions of an automated vehicle are hard to spot in the absence of eye contact with a driver or other established means of communication. External car displays have been proposed as a solution, but what if they malfunction or display misleading information? How will this influence pedestrians' trust in the vehicle? To investigate these questions, we conducted a between-subjects study in Virtual Reality (N = 18) in which one group was exposed to erroneous displays. Our results show that participants already started with a very high degree of trust. Incorrectly communicated information led to a strong decline in trust and perceived safety, but both recovered very quickly. This was also reflected in participants' road crossing behavior. We found that malfunctions of an external car display motivate users to ignore it and thereby aggravate the effects of overtrust. Therefore, we argue that the design of external communication should avoid misleading information and at the same time prevent the development of overtrust by design.
Conference Paper
Full-text available
Highly automated driving evolves steadily and even gradually enters public roads. Nevertheless, there remain driving-related tasks that can be handled more efficiently by humans. Cooperation with the human user on a higher abstraction level of the dynamic driving task has been suggested to overcome operational boundaries. This cooperation includes, for example, deciding whether pedestrians want to cross the road ahead. We suggest that systems should monitor their users when they have to make such decisions. Moreover, these systems can adapt the interaction to support their users. In particular, they can match gaze direction and objects in their environmental model, like vulnerable road users, to guide the focus of users towards overlooked objects. We conducted a pilot study to investigate the need for and feasibility of this concept. Our preliminary analysis showed that some participants overlooked pedestrians that intended to cross the road, which could be prevented with such systems.
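A minimal sketch of the gaze-object matching idea described above, assuming a 2-D environmental model, a list of recorded gaze directions, and an angular tolerance; all names and thresholds are illustrative, not taken from the pilot study.

```python
import math

def bearing_deg(ego_xy, obj_xy):
    """Bearing from the ego vehicle to an object, in degrees."""
    dx, dy = obj_xy[0] - ego_xy[0], obj_xy[1] - ego_xy[1]
    return math.degrees(math.atan2(dy, dx))

def overlooked_objects(gaze_samples_deg, objects, ego_xy=(0.0, 0.0),
                       tolerance_deg=10.0):
    """Return objects whose bearing never fell within `tolerance_deg`
    of any recorded gaze direction (all thresholds are assumptions)."""
    missed = []
    for name, obj_xy in objects.items():
        b = bearing_deg(ego_xy, obj_xy)
        if not any(abs((g - b + 180) % 360 - 180) <= tolerance_deg
                   for g in gaze_samples_deg):
            missed.append(name)
    return missed

# The pedestrian on the right curb was never looked at -> highlight it in the HMI
objects = {"pedestrian_right": (20.0, -5.0), "cyclist_left": (15.0, 8.0)}
print(overlooked_objects(gaze_samples_deg=[25.0, 30.0, 28.0], objects=objects))
```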
Conference Paper
Full-text available
As a consequence of insufficient situation awareness and inappropriate trust, operators of highly automated driving systems may be unable to safely perform takeovers following system failures. The communication of system uncertainties has been shown to alleviate these issues by supporting trust calibration. However, the existing approaches rely on information presented in the instrument cluster and therefore require users to regularly shift their attention between road, uncertainty display, and non-driving related tasks. As a result, these displays have the potential to increase workload and the likelihood of missed signals. A driving simulator study was conducted to compare a digital uncertainty display located in the instrument cluster with a peripheral awareness display consisting of a light strip and vibro-tactile seat feedback. The results indicate that the latter display affords users flexibility to direct more attention towards the road prior to critical situations and leads to lower workload scores while improving takeover performance.
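The abstract does not give the concrete light-strip colours or vibration parameters. The following sketch only illustrates how discrete uncertainty levels might be mapped to a peripheral LED strip and a seat vibration amplitude; every value here is assumed.

```python
# Illustrative mapping only -- the study's actual colours, amplitudes and
# number of uncertainty levels are not given in the abstract.
PERIPHERAL_CUES = {
    "low":    {"strip_rgb": (0, 180, 0),   "seat_vibration_amp": 0.0},
    "medium": {"strip_rgb": (230, 160, 0), "seat_vibration_amp": 0.3},
    "high":   {"strip_rgb": (220, 0, 0),   "seat_vibration_amp": 0.8},
}

def update_peripheral_display(uncertainty: float):
    """Quantise a continuous uncertainty estimate in [0, 1] into a cue level."""
    level = "low" if uncertainty < 0.33 else "medium" if uncertainty < 0.66 else "high"
    cue = PERIPHERAL_CUES[level]
    # In a real vehicle these values would be sent to the LED strip and the
    # vibro-tactile seat actuators; here we simply return them.
    return level, cue

print(update_peripheral_display(0.7))  # ('high', {'strip_rgb': (220, 0, 0), ...})
```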
Conference Paper
Full-text available
Automated vehicles will eventually operate safely without the need for human supervision and fallback; nevertheless, scenarios will remain that are managed more efficiently by a human driver. A common approach to overcome such weaknesses is to shift control to the driver. Control transitions are challenging due to human factor issues like post-automation behavior changes. We thus investigated cooperative overtaking wherein driver and vehicle complement each other: drivers support the vehicle to perceive the traffic scene and decide when to execute a maneuver whereas the system steers. We explored two maneuver approval and cancel techniques on touchscreens, and show that cooperative overtaking is feasible; both interaction techniques provide good usability and were preferred over manual maneuver execution. However, participants disregarded rear traffic in more complex situations. Consequently, system weaknesses can be overcome with cooperation, but drivers should be assisted by an adaptive system.
Article
Full-text available
Despite an increasingly large body of research that focuses on the potential demand for autonomous vehicles (AVs), risk preference is an understudied factor. Given that AV technology and how it will interact with the evolving mobility system are highly risky, this lack of research on risk preference is a critical gap in current understanding. By using a stated preference survey of 1,142 individuals from Singapore, this study achieves three objectives. First, it develops one measure of psychometric risk preference and operationalizes prospect theory to create two economic risk preference parameters. Second, it examines how these psychometric and economic risk preferences are associated with socioeconomic variables. Third, it analyzes how risk preference influences the mode choice of AVs. The study finds that risk preference parameters are significantly associated with socioeconomic variables: the elderly, poor, females, and unemployed Singaporeans appear more risk-averse and tend to overestimate small probabilities of losses. Furthermore, all three risk preference parameters contribute to the prediction of AV adoption. These modeling results have policy implications at both the aggregate and disaggregate levels. At the aggregate level, people misperceive probabilities, are overall risk-averse, and hence under-consume AVs relative to the social optimum. At the disaggregate level, the elderly, poor, female, and unemployed are more risk-averse and thus are less likely to adopt AVs. These results suggest that it might be valuable for governments to implement policies to encourage technology adoption, particularly for disadvantaged social groups, although caution remains due to uncertainty in the long-term effects of AVs. Individualized risk preference parameters could also inform how to design regulations, safety standards, and liability allocations of AVs since many regulations are essentially mechanisms for risk allocation. One limitation of the paper is that risk preference is measured and modeled only as individual-specific but not alternative-specific variables. Future studies should examine the relationship between the multiple components of risk preference and the multiple risky aspects of AVs.
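As a worked illustration of "overestimating small probabilities of losses" (not the study's estimated model), the standard Tversky-Kahneman probability weighting function with a curvature parameter below one inflates small probabilities:

```python
def probability_weight(p: float, gamma: float = 0.69) -> float:
    """Tversky & Kahneman (1992) probability weighting function.
    gamma < 1 overweights small probabilities; 0.69 is their published
    estimate for losses, used here purely as an illustration."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# A 1% objective chance of a loss is weighted roughly like a 4% chance
print(round(probability_weight(0.01), 3))
```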
Article
Full-text available
The concept of automated driving changes the way humans interact with their cars. However, how humans should interact with automated driving systems remains an open question. Cooperation between a driver and an automated driving system—they exert control jointly to facilitate a common driving task for each other—is expected to be a promising interaction paradigm that can address human factors issues caused by driving automation. Nevertheless, the complex nature of automated driving functions makes it very challenging to apply the state-of-the-art frameworks of driver–vehicle cooperation to automated driving systems. To meet this challenge, we propose a hierarchical cooperative control architecture which is derived from the existing architectures of automated driving systems. Throughout this architecture, we discuss how to adapt system functions to realize different forms of cooperation in the framework of driver–vehicle cooperation. We also provide a case study to illustrate the use of this architecture in the design of a cooperative control system for automated driving. By examining the concepts behind this architecture, we highlight that the correspondence between several concepts of planning and control originated from the fields of robotics and automation and the ergonomic frameworks of human cognition and control offers a new opportunity for designing driver–vehicle cooperation.
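The abstract does not spell out the architecture's layers. The sketch below assumes the common strategic/tactical/operational split from the automated-driving literature and merely illustrates where driver input could enter each level of such a hierarchy; all class and field names are invented for illustration, not the paper's design.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DriverInput:
    """Possible cooperation points; which ones the paper's architecture
    actually exposes is an assumption here."""
    preferred_route: Optional[str] = None      # strategic level
    requested_maneuver: Optional[str] = None   # tactical level (e.g. "overtake")
    steering_bias: float = 0.0                 # operational level (shared control)

class CooperativeController:
    """Illustrative hierarchy: each layer can blend automation output
    with the corresponding driver input."""
    def strategic(self, driver: DriverInput) -> str:
        return driver.preferred_route or "route_from_navigation"

    def tactical(self, driver: DriverInput) -> str:
        return driver.requested_maneuver or "keep_lane"

    def operational(self, driver: DriverInput, automation_steering: float) -> float:
        # Simple weighted blending as one possible form of shared control
        return 0.8 * automation_steering + 0.2 * driver.steering_bias

controller = CooperativeController()
driver = DriverInput(requested_maneuver="overtake", steering_bias=0.1)
print(controller.strategic(driver), controller.tactical(driver),
      controller.operational(driver, automation_steering=0.05))
```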
Conference Paper
Full-text available
In the emerging field of automated vehicles (AVs), the many recent advancements coincide with different areas of system limitations. The recognition of objects like traffic signs or traffic lights is still challenging, especially under bad weather conditions or when traffic signs are partially occluded. A common approach to deal with system boundaries of AVs is to shift to manual driving, accepting human factor issues like post-automation effects. We present CooperationCaptcha, a system that asks drivers to label unrecognized objects on the fly, and consequently maintain automated driving mode. We implemented two different interaction variants to work with object recognition algorithms of varying sophistication. Our findings suggest that this concept of driver-vehicle cooperation is feasible, provides good usability, and causes little cognitive load. We present insights and considerations for future research and implementations.
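A minimal sketch of the labeling-on-the-fly idea, assuming a detection record with a confidence score and candidate labels plus a simple confidence threshold; none of these names or values are from the paper.

```python
def cooperation_captcha(detection, ask_driver, confidence_threshold=0.6):
    """If the recognizer is unsure about an object, ask the occupant to label
    it so the automated mode can be maintained; otherwise keep the model's label.
    `detection` is a dict like {"label": ..., "confidence": ..., "candidates": [...]}
    (an illustrative format, not the paper's data structure)."""
    if detection["confidence"] >= confidence_threshold:
        return detection["label"], "automation"
    answer = ask_driver(detection["candidates"])   # e.g. via a touchscreen prompt
    return answer, "driver"

# Simulated driver interaction for the example
label, source = cooperation_captcha(
    {"label": "speed_limit_80", "confidence": 0.41,
     "candidates": ["speed_limit_80", "speed_limit_60"]},
    ask_driver=lambda candidates: candidates[1],
)
print(label, source)   # speed_limit_60 driver
```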
Article
Full-text available
In most levels of vehicle automation, drivers will not be merely occupants or passengers of automated vehicles. Especially in lower levels of automation, where the driver is still required to serve as a fallback level (SAE L3) or even as a supervisor (SAE L2), there is a need to communicate relevant system states (e.g., that the automated driving system works reliably or that there is a need for manual intervention) via the Human-Machine Interface (HMI). However, there are currently no agreed-upon guidelines that apply specifically to HMIs for automated driving. In this paper, we summarize design recommendations for visual-auditory and visual-vibrotactile HMIs derived from empirical research, applicable standards and design guidelines pertaining to in-vehicle interfaces. On this basis, we derive an initial set of principles and criteria for guiding the development and design of automated vehicle HMIs. A heuristic evaluation methodology consisting of an itemized checklist evaluation that can be used to verify that basic HMI requirements formulated in the guidelines are met is also presented. The heuristic evaluation involves an inspection of the HMI during typical use cases, judging its compliance with the proposed guidelines and documenting identified instances of non-compliance. Taken together, the combination of the proposed guidelines and the heuristic evaluation methodology forms the basis for both design and validation recommendations for automated vehicle HMIs that can serve the industry in the important evolution of automation within vehicles.
Conference Paper
Full-text available
Automated vehicles could omit traditional steering controls to provide larger spaces for driver-passengers or prevent unnecessary interventions. However, manual control could still be necessary to provide manual driving fun or respond to Take-Over Requests (TORs). This paper investigates whether brought-in consumer devices (in this case a 10.2 inch tablet) can act as an input alternative to classical steering wheels in TOR situations. Results of a driving simulator study (n=14) confirm that responding to take-overs with nomadic devices can reduce response times in imminent transitions during engagement in Non-Driving Related Tasks (NDRTs), as a change of the 'device in hands' is omitted. Furthermore, subjective scales addressing user experience show that the approach is well accepted. We conclude that nomadic device integration is a crucial pre-requisite for the success of automated vehicles, but for steering input several pivotal issues still need to be solved.
Conference Paper
Full-text available
Designing safe and effective systems for control transitions between human and vehicle is a difficult task, due to increased reaction times and potentially inattentive drivers. In order to respond to these difficulties, this paper presents an overview of interaction solutions for control transitions between manual and autonomous driving modes. The paper examines technology patents as well as academic publications. The paper's first contribution is an examination of the current state of the art of control transition interfaces in automated vehicles. The paper's second contribution is the reusable categorization framework developed for this overview. The results are used to identify gaps and potentials regarding control transition design, including a strong focus on the system over the human, lacking fallback performance, and the potential of effective driving mode communication. These aspects point the way towards the challenges to be solved, together with how they might be solved, for safe and effective control transitions.
Conference Paper
Full-text available
Highly-automated vehicles will provide the freedom for drivers to engage in secondary activities while the vehicle is in control. However, little is known regarding the nature of activities that drivers will undertake, and how these may impact drivers’ ability to resume manual control. In a novel, long-term, qualitative simulator study, six experienced drivers completed the same 30-minute motorway journey (portrayed as their commute to work) at the same time on five consecutive weekdays in a highly-automated car; a system ‘health-bar’ indicated the overall status of the automated system during each drive. Participants were invited to bring with them any objects or devices that they would expect to use in their own (automated) vehicle during such a journey, and use these freely during the drives. Inclement weather (heavy fog) on the penultimate day of testing presented an unexpected, emergency 5.0-second take-over request (indicated by an urgent auditory alarm and a flashing visual icon replacing a system ‘health-bar’). Video analysis with thematic coding shows that participants were quickly absorbed by a variety of secondary activities/devices, which typically demanded high levels of visual, manual and cognitive attention, and postural adaptation (e.g. moving/reclining the driver’s seat). The steering wheel was routinely used as a support for secondary objects/devices. Drivers were required to rapidly discharge secondary devices/activities and re-establish driving position/posture following the unexpected, emergency hand-over request on day four. This resulted in notable changes in participants’ subjective ratings of trust on the final day of testing, with some participants apparently more sceptical of the system following the emergency hand-over event, whereas others were more trusting than before. Qualitative results are presented and discussed in the context of the re-design of vehicles to enable the safe and comfortable execution of secondary activities during high-automation, while enabling effective transfer of control.
Article
Full-text available
Since Alan Turing envisioned Artificial Intelligence (AI) [1], a major driving force behind technical progress has been competition with human cognition. Historical milestones have been frequently associated with computers matching or outperforming humans in difficult cognitive tasks (e.g. face recognition [2], personality classification [3], driving cars [4], or playing video games [5]), or defeating humans in strategic zero-sum encounters (e.g. Chess [6], Checkers [7], Jeopardy! [8], Poker [9], or Go [10]). In contrast, less attention has been given to developing autonomous machines that establish mutually cooperative relationships with people who may not share the machine's preferences. A main challenge has been that human cooperation does not require sheer computational power, but rather relies on intuition [11], cultural norms [12], emotions and signals [13, 14, 15, 16], and pre-evolved dispositions toward cooperation [17], common-sense mechanisms that are difficult to encode in machines for arbitrary contexts. Here, we combine a state-of-the-art machine-learning algorithm with novel mechanisms for generating and acting on signals to produce a new learning algorithm that cooperates with people and other machines at levels that rival human cooperation in a variety of two-player repeated stochastic games. This is the first general-purpose algorithm that is capable, given a description of a previously unseen game environment, of learning to cooperate with people within short timescales in scenarios previously unanticipated by algorithm designers. This is achieved without complex opponent modeling or higher-order theories of mind, thus showing that flexible, fast, and general human-machine cooperation is computationally achievable using a non-trivial, but ultimately simple, set of algorithmic mechanisms.
Chapter
Full-text available
Humans represent knowledge and learning experiences in the form of mental models. This concept from the field of cognitive psychology is one of the central theoretical paradigms for understanding and designing the interaction between humans and technical systems.
Conference Paper
First concepts for cooperative driving already illustrate the potential of cooperation between human drivers and automated vehicles. However, the main influencing factors that determine an efficient and effective cooperation still need to be investigated. We therefore examined the effects of displaying the vehicle's intended decision in a critical situation in combination with a confidence value. In a driving simulator study (N=49), the automated vehicle communicated uncertainty in predicting the behavior of a pedestrian, and the participants could support the automation by entering their own decision to stop or drive through. The results show that the time taken to enter one's own decision (system override) was longer when a pending system decision was indicated with a confidence value in the HMI. A low confidence value resulted in the longest interaction times. In addition, trust in automation and usability were lower compared to the baseline cooperative HMI without a confidence value.
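A hedged sketch of the interaction pattern described above: the HMI announces the pending decision together with its confidence and accepts a driver override within a time budget. The polling loop, timings, and action names are assumptions, not the study's implementation.

```python
import time

def cooperative_decision(intended_action: str, confidence: float,
                         get_driver_input, time_budget_s: float = 4.0):
    """Show the pending decision and its confidence, then poll for a driver
    override until the budget runs out (a sketch, not the study's software).
    `get_driver_input` returns None or one of "stop" / "drive"."""
    print(f"HMI: automation intends to {intended_action} "
          f"(confidence {confidence:.0%}) - you may override")
    deadline = time.monotonic() + time_budget_s
    while time.monotonic() < deadline:
        choice = get_driver_input()
        if choice is not None:
            return choice, "driver"
        time.sleep(0.05)
    return intended_action, "automation"

# Example: the driver confirms a stop after a few polling cycles
inputs = iter([None, None, None, "stop"])
action, source = cooperative_decision("stop", 0.55,
                                      get_driver_input=lambda: next(inputs, None))
print(action, source)   # stop driver
```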
Book
This book is dedicated to user experience design for automated driving and addresses humane aspects of automated driving, e.g., workload, safety, trust, ethics, and acceptance. Automated driving has experienced a major development boost in recent years. However, most of the research and implementation has been technology-driven, rather than human-centered. The levels of automated driving have been poorly defined and inconsistently used, and a variety of application scenarios and restrictions has remained ambiguous. The book also deals with human factors, design practices and methods, as well as applications such as multimodal infotainment, virtual reality, augmented reality, and interactions inside and outside the vehicle. It aims at 1) providing engineers, designers, and practitioners with a broad overview of the state-of-the-art user experience research in automated driving to speed up the implementation of automated vehicles and 2) helping researchers and students benefit from various perspectives and approaches to generate new research ideas and conduct more integrated research.
Article
Objective: Investigating takeover, driving, non-driving related task (NDRT) performance, and trust of conditionally automated vehicles (AVs) in critical transitions on a test track. Background: Most experimental results addressing driver takeover were obtained in simulators. The presented experiment aimed at validating relevant findings while uncovering potential effects of motion cues and real risk. Method: Twenty-two participants responded to four critical transitions on a test track. Non-driving related task modality (reading on a handheld device vs. auditory) and takeover timing (cognitive load) were varied on two levels. We evaluated takeover and NDRT performance as well as gaze behavior. Further, trust and workload were assessed with scales and interviews. Results: Reaction times were significantly faster than in simulator studies. Further, reaction times were only barely affected by varying visual, physical, or cognitive load. Post-takeover control was significantly degraded with the handheld device. Experiencing the system reduced participants' distrust, and distrusting participants monitored the system longer and more frequently. NDRTs on a handheld device resulted in more safety-critical situations. Conclusion: The results confirm that takeover performance is mainly influenced by visual-cognitive load, while physical load did not significantly affect responses. Future takeover request (TOR) studies may investigate situation awareness and post-takeover control rather than reaction times only. Trust and distrust can be considered as different dimensions in AV research. Application: Conditionally AVs should offer dedicated interfaces for NDRTs to provide an alternative to using nomadic devices. These interfaces should be designed in a way to maintain drivers' situation awareness. Précis: This paper presents a test track experiment addressing conditionally automated driving systems. Twenty-two participants responded to critical TORs, where we varied NDRT modality and takeover timing. In addition, we assessed trust and workload with standardized scales and interviews.
Article
The automotive industry has witnessed an increasing level of development in the past decades: from manufacturing manually operated vehicles to manufacturing vehicles with a high level of automation. With the recent developments in Artificial Intelligence (AI), automotive companies now employ black-box AI models to enable vehicles to perceive their environment and make driving decisions with little or no input from a human. With the hope of deploying autonomous vehicles (AVs) on a commercial scale, their acceptance by society becomes paramount and may largely depend on their degree of transparency, trustworthiness, and compliance with regulations. The assessment of the compliance of AVs with these acceptance requirements can be facilitated through the provision of explanations for AVs' behaviour. Explainability is therefore seen as an important requirement for AVs. AVs should be able to explain what they have 'seen', done, and might do in environments in which they operate. In this paper, we provide a comprehensive survey of the existing work in explainable autonomous driving. First, we open by providing a motivation for explanations by highlighting the importance of transparency, accountability, and trust in AVs, and examining existing regulations and standards related to AVs. Second, we identify and categorise the different stakeholders involved in the development, use, and regulation of AVs and elicit their AV explanation requirements. Third, we provide a rigorous review of previous work on explanations for the different AV operations (i.e., perception, localisation, planning, vehicle control, and system management). Finally, we discuss pertinent challenges and provide recommendations, including a conceptual framework for AV explainability. This survey aims to provide the fundamental knowledge required of researchers who are interested in explanation provision in autonomous driving.
Article
Humans are an ultrasocial species. This sociality, however, cannot be fully explained by the canonical approaches found in evolutionary biology, psychology, or economics. Understanding our unique social psychology requires accounting not only for the breadth and intensity of human cooperation but also for the variation found across societies, over history, and among behavioral domains. Here, we introduce an expanded evolutionary approach that considers how genetic and cultural evolution, and their interaction, may have shaped both the reliably developing features of our minds and the well-documented differences in cultural psychologies around the globe. We review the major evolutionary mechanisms that have been proposed to explain human cooperation, including kinship, reciprocity, reputation, signaling, and punishment; we discuss key culture–gene coevolutionary hypotheses, such as those surrounding self-domestication and norm psychology; and we consider the role of religions and marriage systems. Empirically, we synthesize experimental and observational evidence from studies of children and adults from diverse societies with research among nonhuman primates.
Thesis
https://mediatum.ub.tum.de/?id=1520460 To positively influence the system interaction between the highly automated system and the truck driver, HMI concepts are developed and evaluated in an iterative, user-centered approach. A concluding summative evaluation examines the final HMI concept with respect to changes in the human-machine interaction with increasing system experience in a long-term test. The focus lies on capturing changes in the mental model, trust, acceptance, and intention to use.
Article
Operators of highly automated driving systems may exhibit behaviour characteristic of overtrust issues due to an insufficient awareness of automation fallibility. Consequently, situation awareness in critical situations is reduced and safe driving performance following emergency takeovers is impeded. A driving simulator study was used to assess the impact of dynamically communicating system uncertainties on monitoring, trust, workload, takeovers, and physiological responses. The uncertainty information was conveyed visually using a stylised heartbeat combined with a numerical display, and users were engaged in a visual search task. Multilevel analysis results suggest that uncertainty communication helps operators calibrate their trust and gain situation awareness prior to critical situations, resulting in safer takeovers. In addition, eye tracking data indicate that operators can adjust their gaze behaviour in correspondence with the level of uncertainty. However, conveying uncertainties using a visual display significantly increases operator workload and impedes users in the execution of non-driving related tasks. Practitioner Summary: This article illustrates how the communication of system uncertainty information helps operators calibrate their trust in automation and, consequently, gain situation awareness. Multilevel analysis results of a driving simulator study affirm the benefits for trust calibration and highlight that operators adjust their behaviour according to multiple uncertainty levels.
Conference Paper
Safe manual driving performance following takeovers in conditionally automated driving systems is impeded by a lack in situation awareness, partly due to an inappropriate trust in the system's capabilities. Previous work has indicated that the communication of system uncertainties can aid the trust calibration process. However, it has yet to be investigated how the information is best conveyed to the human operator. The study outlined in this publication presents an interface layout to visualise function-specific uncertainty information in an augmented reality display and explores the suitability of 11 visual variables. 46 participants completed a sorting task and indicated their preference for each of these variables. The results demonstrate that particularly colour-based and animation-based variables, above all hue, convey a clear order in terms of urgency and are well-received by participants. The presented findings have implications for all augmented reality displays that are intended to show content varying in urgency.
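Since hue was found to convey the clearest urgency ordering, the snippet below illustrates one possible urgency-to-hue mapping (green to red in HSV); the endpoints and scaling are assumptions, as the abstract only reports the ranking of visual variables.

```python
import colorsys

def urgency_to_rgb(urgency: float) -> tuple:
    """Map urgency in [0, 1] to an RGB colour by sweeping the hue from
    green (120 deg) down to red (0 deg). The endpoints are assumed; the
    study only reports that hue ordered urgency well."""
    urgency = min(max(urgency, 0.0), 1.0)
    hue = (1.0 - urgency) * (120.0 / 360.0)     # 0.333 (green) -> 0.0 (red)
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return round(r * 255), round(g * 255), round(b * 255)

print(urgency_to_rgb(0.0))   # (0, 255, 0)  calm
print(urgency_to_rgb(1.0))   # (255, 0, 0)  urgent
```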
Conference Paper
Lack of trust can arise when people do not know what autonomous vehicles perceive in the environment. To convey this information without causing alarm or compelling people to act, we designed and evaluated a way to sonify an autonomous vehicle's perception of salient driving events using abstract auditory icons, or "earcons." These are localized in space using an in-car quadraphonic speaker array to correspond with the direction of events. We describe the interaction design for these awareness cues and a validation experiment (N=28) examining the effects of sonified events on drivers' sense of situation awareness, comfort, and trust. Overall, this work suggests that our designed earcons do improve people's awareness of in-simulation events. The effect of the increased situational awareness on trust and comfort is inconclusive. However, post-study design feedback suggests that sounds should have low levels of intensity and dissonance, and a sense of belonging to a common family.
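The abstract does not describe the spatialization method. As an illustration only, the sketch below distributes an earcon over an assumed four-speaker layout with simple cosine panning, so that speakers facing the event direction receive most of the gain.

```python
import math

# Assumed speaker layout (degrees; 0 = straight ahead, positive = left)
SPEAKERS = {"front_left": 45, "front_right": -45, "rear_left": 135, "rear_right": -135}

def quad_gains(event_azimuth_deg: float) -> dict:
    """Simple cosine panning: speakers facing the event direction get more
    gain; gains are normalised to sum to 1 (illustration, not the paper's method)."""
    raw = {}
    for name, angle in SPEAKERS.items():
        diff = math.radians(event_azimuth_deg - angle)
        raw[name] = max(0.0, math.cos(diff))
    total = sum(raw.values()) or 1.0
    return {name: round(g / total, 2) for name, g in raw.items()}

# A cyclist approaching from the front-left
print(quad_gains(40))
```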
Chapter
As long as automated vehicles are not able to handle driving in every possible situation, drivers will still have to take part in the driving task from time to time. Recent research focused on handing over control entirely when automated systems reach their boundaries. Our overview of research in this domain shows that handovers are feasible; however, they are not a satisfactory solution, since human factor issues such as reduced situation awareness arise in automated driving. In consequence, we suggest implementing cooperative interfaces to enable automated driving even with imperfect automation. We recommend considering four basic requirements for driver–vehicle cooperation: mutual predictability, directability, shared situation representation, and calibrated trust in automation. We present research that can be seen as a step towards cooperative interfaces in regard to these requirements. Nevertheless, these systems are only solutions for parts of future cooperative interfaces and interaction concepts. Future design of interaction concepts in automated driving should integrate the cooperative approach in total in order to achieve safe and comfortable automated mobility.
Article
As autonomous and semiautonomous systems are developed for automotive, aviation, cyber, robotics and other applications, the ability of human operators to effectively oversee and interact with them when needed poses a significant challenge. An automation conundrum exists in which as more autonomy is added to a system, and its reliability and robustness increase, the lower the situation awareness of human operators and the less likely that they will be able to take over manual control when needed. The human-autonomy systems oversight model integrates several decades of relevant autonomy research on operator situation awareness, out-of-the-loop performance problems, monitoring, and trust, which are all major challenges underlying the automation conundrum. Key design interventions for improving human performance in interacting with autonomous systems are integrated in the model, including human-automation interface features and central automation interaction paradigms comprising levels of automation, adaptive automation, and granularity of control approaches. Recommendations for the design of human-autonomy interfaces are presented and directions for future research discussed.
Article
Given the highly social nature of the human emotion system, it is likely that it subserved the evolution of ultrasociality. We review how the experience and functions of human emotions enable social processes that promote ultrasociality (e.g., cooperation). We also point out that emotion may represent one route to redress one of the negative consequences of ultrasociality: ecosystem domination.
Article
The task of car driving is automated to an ever greater extent. In the foreseeable future, drivers will no longer be required to touch the steering wheel and pedals and could engage in non-driving tasks such as working or resting. Vibrotactile displays have the potential to grab the attention of the driver when the automation reaches its functional limits and the driver has to take over control. The aim of the present literature survey is to outline the key physiological and psychophysical aspects of vibrotactile sensation and to provide recommendations and relevant research questions regarding the use of vibrotactile displays for taking over control from an automated vehicle. Results showed that a distinction can be made between four dimensions for coding vibrotactile information (amplitude, frequency, timing, and location), each of which can be static or dynamic. There is a consensus that frequency and amplitude are less suitable for coding information than location and timing. Vibrotactile stimuli have been shown to be effective as simple warnings. However, vibrations can evoke annoyance, and providing vibrations in close spatial–temporal proximity might cause a lack of comprehension of the signal. We describe the sequential stages of a take-over process and argue that vibrotactile displays are a promising candidate for redirecting the attention of a distracted driver. Furthermore, vibrotactile displays hold potential for supporting cognitive processing and action selection while resuming control of an automated vehicle. Finally, we argue that multimodal feedback should be used to assist the driver in the take-over process.
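The four coding dimensions and the preference for timing and location can be captured in a small data structure. The sketch below is illustrative; the concrete amplitudes, frequencies, and seat locations are assumptions, not recommendations from the survey.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class VibrotactileCue:
    """One vibration burst, described along the four coding dimensions from
    the survey. The concrete values used below are illustrative only."""
    amplitude: float          # normalised 0..1
    frequency_hz: float       # carrier frequency
    onset_s: float            # timing: when the burst starts
    duration_s: float         # timing: how long it lasts
    location: str             # e.g. "seat_rear", "seat_front", "backrest"

def takeover_pattern() -> List[VibrotactileCue]:
    """A dynamic pattern: pulses sweep from the backrest towards the seat
    front, relying on location and timing (the dimensions reported as better
    suited for coding information) to signal urgency."""
    locations = ["backrest", "seat_rear", "seat_front"]
    return [VibrotactileCue(amplitude=0.8, frequency_hz=250.0,
                            onset_s=i * 0.3, duration_s=0.2, location=loc)
            for i, loc in enumerate(locations)]

for cue in takeover_pattern():
    print(cue.onset_s, cue.location)
```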