Figure 2. Driving scenario no. 1 as displayed to participants of group A (left) and group B (right).

Source publication
Conference Paper
In a fully autonomous driving situation, passengers hand over steering control to a highly automated system. The vehicle's autonomous driving behaviour may cause confusion and a negative user experience. When establishing such new technology, the user's acceptance and understanding are crucial factors for its success or failure. Using a driving simulator...

Context in source publication

Context 1
... example of driving scenario number 1 is shown in Figure 2. Only the participants of group B received live explanations in multiple ways. ...

Similar publications

Article
In an environment of constant change and new forms of interaction, many of them mediated by technology, the term innovation is used with growing frequency in business, technological, social, and educational settings. As a result, the term has a polysemous character, acquiring different meanings according to the context in...

Citations

... Such work emphasizes the need for in-situ explanations [15-17, 19, 34, 48, 94] to foster user trust and collaboration, especially during unexpected AV behaviors [47,59]. Providing explanations during the ride, especially focusing on answering "why" questions, can enhance user experience, perceived safety, and trust while reducing negative emotions [21,49,64,77]. However, existing XAI approaches in AVs still face challenges in addressing the specific needs of various stakeholders, such as balancing intelligibility with technical complexity [65,66,68,99]. ...
... Situational awareness directly influences trust in AVs [81], as it affects their ability to avoid hazards, plan routes, and maintain safety. Our findings reiterate the need for providing adaptable and context-sensitive explanations [19,21,34,94] during or immediately after a ride [77]. Further, addressing passengers' concerns about an AV's situational awareness could foster appropriate trust, bridging the gap between user expectations and the vehicle's capabilities. ...
Preprint
Improving end-users' understanding of decisions made by autonomous vehicles (AVs) driven by artificial intelligence (AI) can improve utilization and acceptance of AVs. However, current explanation mechanisms primarily help AI researchers and engineers in debugging and monitoring their AI systems, and may not address the specific questions of end-users, such as passengers, about AVs in various scenarios. In this paper, we conducted two user studies to investigate questions that potential AV passengers might pose while riding in an AV and evaluate how well answers to those questions improve their understanding of AI-driven AV decisions. Our initial formative study identified a range of questions about AI in autonomous driving that existing explanation mechanisms do not readily address. Our second study demonstrated that interactive text-based explanations effectively improved participants' comprehension of AV decisions compared to simply observing AV decisions. These findings inform the design of interactions that motivate end-users to engage with and inquire about the reasoning behind AI-driven AV decisions.
... This subscale of trust is particularly important in the context of C-ITS, where decisions rely on data shared invisibly between AVs and with the infrastructure [19]. Without clear explanations, users may experience automation surprises, where decisions appear unjustified or unpredictable, reducing trust and acceptance [4,17,22]. Müller, Colley et al. [19] identified three key information types to visualize V2X information: action-related (e.g., showing planned maneuvers), location-specific (e.g., visualization of V2X coverage area), and dynamic environment (e.g., highlighting other road users). Their study found that visualizations incorporating all three types significantly increased understanding and, thus, trust in the AV. ...
... Recently, researchers have shown increasing interest in the field of Explainable Artificial Intelligence (XAI), which aims to enhance transparency in intelligent systems and inform users about the decision-making processes behind system behavior, predictions, or recommendations. XAI initiatives span diverse domains, including recommender systems in educational settings (Karga & Satratzemi, 2019; Zheng & Toribio, 2021), autonomous driving systems (Schneider et al., 2021), clinical decision-support systems (Bussone et al., 2015), and robotics (Kaptein et al., 2017). These applications highlight the critical need for transparency, as complex systems increasingly impact various facets of human life. ...
Chapter
In today's digitally driven world, human interaction has transformed significantly, particularly with the emergence of parasocial relationships and virtual influencers. This chapter applies Horton and Wohl's (1956) parasocial interaction (PSI) framework to critically analyze how audiences connect with virtual influencers. It examines their appeal to younger demographics like Generation Z, who are more receptive to AI-driven figures, and explores how brands employ these influencers as marketing agents. Ethical concerns are highlighted, including transparency, manipulation, and the promotion of unrealistic standards. The chapter also questions the authenticity of virtual influencers, given their entirely orchestrated personas. Further, the chapter provides a comprehensive exploration of these developments, emphasizing the need for ethical responsibility as technology continues to blur the lines between the virtual and real worlds.
... Tener and Liu highlight the indispensable need for human assistance in AVs, despite advancements in technology [65]. Schneider et al. explore the impact of system transparency on AV user experience and safety perceptions, providing guidelines for integrating user experiences with autonomous driving [60]. Moreover, Dillen et al. address the conflict between preset AV driving styles and passenger expectations, affecting comfort and anxiety [12]. ...
Article
In the evolution of software systems, especially in domains like autonomous vehicles, dynamic user preferences are critical yet challenging to accommodate. Existing methods often misrepresent these preferences, either by overlooking their dynamism or by overburdening users, as humans often find it challenging to express their objectives mathematically. Addressing this, we introduce a novel framework interpreting dynamic preferences as inherent uncertainty, anchored on a “human-on-the-loop” mechanism enabling users to give feedback when dissatisfied with system behaviors. Leveraging a designed fitness function, our system employs a genetic algorithm to adapt preference values, aligning preferences with user expectations through feedback-driven adaptation. We validated its effectiveness with an autonomous driving prototype and a user study involving 20 participants. The findings highlight our framework's capability to effectively merge algorithm-driven adjustments with user complaints, leading to improved subjective satisfaction among participants in autonomous systems.
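To make the feedback-driven adaptation concrete, here is a minimal Python sketch of the idea: a genetic algorithm re-estimates preference weights whenever a user complaint arrives. The preference dimensions, fitness function, and genetic operators below are illustrative assumptions, not the authors' implementation.

```python
import random

# Illustrative sketch (not the authors' implementation): a genetic algorithm
# realigns preference weights after a user complaint. The dimensions, fitness
# function, and operators are all assumptions.

N_PREFS = 3        # e.g., weights for [speed, comfort, safety]
POP_SIZE = 20
GENERATIONS = 30

def fitness(prefs, complaint_target):
    """Toy fitness: prefer weight vectors close to the direction the
    complaint suggests (higher is better)."""
    return -sum((p - t) ** 2 for p, t in zip(prefs, complaint_target))

def mutate(prefs, rate=0.1):
    # Perturb each weight and clamp to [0, 1].
    return [min(1.0, max(0.0, p + random.uniform(-rate, rate))) for p in prefs]

def crossover(a, b):
    # One-point crossover between two parent weight vectors.
    cut = random.randrange(1, N_PREFS)
    return a[:cut] + b[cut:]

def adapt_preferences(current_prefs, complaint_target):
    """Run a small GA to realign preference weights after user feedback."""
    population = [mutate(current_prefs, rate=0.3) for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=lambda p: fitness(p, complaint_target), reverse=True)
        parents = population[: POP_SIZE // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return max(population, key=lambda p: fitness(p, complaint_target))

# Usage: after a complaint, re-estimate the weights toward the inferred target.
prefs = adapt_preferences([0.8, 0.3, 0.9], [0.4, 0.7, 0.9])
print(prefs)
```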
... According to ISO 9241-210 [41], user experience incorporates all the users' emotions, beliefs, preferences, and perceptions before, during, and after use. In the AV context, user acceptance, i.e., the extent to which users are willing to use a new technology based on perceived ease of use and usefulness [23], contributes to a high user experience [14,20,72]. ...
... Both undertrust (e.g., leading to not using AVs) and overtrust (e.g., inadequately supervising the operation) present challenges for AV usage if the user's trust is inappropriate to the actual AV reliability [54]. Therefore, previous works have shown that visualizations of AV functionality are a way to enhance user experience [20,32,72]. ...
... Currano et al. [22] also tested an AR HUD and found situation awareness varied based on the driving scene's complexity and the participants' reported driving styles, suggesting that the HUD should be personalized and adapt to these factors. Schneider et al. [72] evaluated explanations delivered via an AR WSD and an LED strip concerning the future vehicle trajectory. User experience increased with explanations, but adding a post-explanation via a smartphone app did not enhance it. ...
Preprint
Automated vehicle (AV) acceptance relies on users understanding AVs via feedback. While visualizations aim to enhance user understanding of an AV's detection, prediction, and planning functionalities, establishing an optimal design is challenging, and traditional “one-size-fits-all” designs might be unsuitable given resource-intensive empirical evaluations. This paper introduces OptiCarVis, a set of Human-in-the-Loop (HITL) approaches using Multi-Objective Bayesian Optimization (MOBO) to optimize AV feedback visualizations. We compare conditions using eight expert and user-customized designs for a Warm-Start HITL MOBO. An online study (N=117) demonstrates OptiCarVis's efficacy in significantly improving trust, acceptance, perceived safety, and predictability without increasing cognitive load. OptiCarVis facilitates a comprehensive design space exploration, enhancing in-vehicle interfaces for optimal passenger experiences and broader applicability.
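As a rough illustration of the warm-start, human-in-the-loop, multi-objective idea, the following Python sketch keeps the Pareto front over user-rated visualization designs, seeded with pre-evaluated designs. A real MOBO pipeline would additionally fit surrogate models (e.g., Gaussian processes) to propose new candidates; the design parameters and objective names here are assumptions, not the OptiCarVis implementation.

```python
from typing import Dict, List, Tuple

# Illustrative sketch: candidate visualization designs are rated by users on
# several objectives; non-dominated (Pareto-optimal) designs form the current
# best set. Parameter and objective names are assumptions.

Design = Dict[str, float]  # e.g., {"opacity": 0.7, "highlight_size": 0.5}
Scores = Dict[str, float]  # all objectives framed so that higher is better

def dominates(a: Scores, b: Scores) -> bool:
    """a dominates b if it is at least as good on every objective and
    strictly better on at least one."""
    return all(a[k] >= b[k] for k in a) and any(a[k] > b[k] for k in a)

def pareto_front(evaluated: List[Tuple[Design, Scores]]) -> List[Tuple[Design, Scores]]:
    """Keep only the non-dominated (design, scores) pairs."""
    return [(d, s) for d, s in evaluated
            if not any(dominates(s2, s) for _, s2 in evaluated if s2 is not s)]

# Warm start: expert and user-customized designs rated in advance.
evaluated = [
    ({"opacity": 0.7, "highlight_size": 0.5},
     {"trust": 5.0, "acceptance": 4.5, "low_cognitive_load": 5.5}),
    ({"opacity": 0.3, "highlight_size": 0.9},
     {"trust": 4.2, "acceptance": 5.1, "low_cognitive_load": 6.1}),
    ({"opacity": 0.5, "highlight_size": 0.4},
     {"trust": 4.0, "acceptance": 4.0, "low_cognitive_load": 5.0}),  # dominated
]
for design, scores in pareto_front(evaluated):
    print(design, scores)
```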
... Similar to the trust rating, perceived safety was probably not improved by the visual concept due to participants not being able to perceive the changes in the pattern of the light strip. These effects point towards the importance of the automation's transparency and explainability for increasing trust in automation and perceived safety, as similar research on explainable AI has shown [5,19,25,34,37]. During periods in which the automation was able to execute the driving task independently, the participants felt reassured that the automation would inform them of critical situations it potentially could not handle on its own. ...
... Only one requires an absolute position [12]. Asynchronous interaction was also considered [19,65,137,149]. ...
... [Table: design-space classification mapping interaction partners (vehicle driver, vehicle passenger, pedestrian, cyclist, motorcyclist, e-scooter rider, wheelchair user, official non-road-user, smart vehicle, smart object) to research topics, e.g., eHMI, dHMI, takeover, in-vehicle interaction, in-vehicle multiplayer gaming, driver monitoring, cooperation and collaboration, driver-passenger interaction, adaptive driving, entertainment, motion sickness, accessibility, and teleoperation interfaces, with the associated citations.] ...
Article
Rising diversity through novel forms of mobility and increasing connectivity through intelligent systems and wireless connection is leading to a complex traffic environment. However, traditional automotive interface research often focuses on the interaction between vehicle and driver, passenger, or pedestrian, not capturing the interconnected relationships among various traffic participants. Therefore, we developed a design space for Cross-Traffic Interaction (CTI) based on a focus group with six HCI experts, encompassing the dimensions: (1) interaction partners, (2) their traffic situations, and (3) their interaction relationship. Through a systematic literature review, we classified 116 publications, showing less-studied interaction possibilities. Illustrating the practical application of our design space, we developed three interactive prototypical applications: Shooting Stars, Flow Rider, and Road Reels. A study (N=12) shows that the applications were received well and could improve traffic experience. Overall, our design space serves as a foundational tool for understanding and exploring the challenges and diverse opportunities within CTI, bridging the gap between traditional automotive interface research and the complex realities of modern traffic environments.
... Explanations have been found useful in enhancing user experience [56], trust [34,26], and situational awareness [49,40] in automated driving. Recent works have explored human factors in the application of explainable AI in AD. ...
... We focused on how this setup would impact passengers' perceived safety and related factors, such as the feeling of anxiety and the thought of taking over control. Our results not only corroborate but also extend previous findings in the field, among others, demonstrating that while intelligible explanations generally create positive experiences for AV users [50,26,56,42], this effect is predominantly observed when the AV's perception system errors are low. ...
... It comprises a 26-item questionnaire on a 7-point Likert scale, developed after a survey conducted to evaluate six different autonomy scenarios. Items 24-26 were used to assess the Perceived Safety factor, while items 19-21 were used to assess the Feeling of Anxiety factor. Similar to [56], we introduced a new item to assess participants' inclination to take over navigation control from the AV during the ride (Takeover Feeling). Specifically, participants were asked to rate the statement 'During the ride, I had the feeling to take over control from the vehicle' on a 7-point Likert scale. ...
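For readers unfamiliar with subscale scoring, a minimal Python sketch of how such ratings might be aggregated follows. Averaging items per subscale is an assumption; the excerpt does not state the aggregation method used.

```python
# Minimal sketch of scoring the subscales named above: items 24-26 for
# Perceived Safety, items 19-21 for Feeling of Anxiety, plus the added
# single-item Takeover Feeling, all on a 7-point scale. Averaging items per
# subscale is an assumption; the cited work may aggregate differently.

def subscale_mean(responses: dict, items: list) -> float:
    return sum(responses[i] for i in items) / len(items)

responses = {19: 2, 20: 3, 21: 2, 24: 6, 25: 5, 26: 6, "takeover": 2}  # example 1-7 ratings
print("Perceived Safety:", subscale_mean(responses, [24, 25, 26]))
print("Feeling of Anxiety:", subscale_mean(responses, [19, 20, 21]))
print("Takeover Feeling:", responses["takeover"])
```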
Preprint
Transparency in automated systems could be afforded through the provision of intelligible explanations. While transparency is desirable, might it lead to catastrophic outcomes (such as anxiety) that could outweigh its benefits? It is quite unclear how the specificity of explanations (level of transparency) influences recipients, especially in autonomous driving (AD). In this work, we examined the effects of transparency mediated through varying levels of explanation specificity in AD. We first extended a data-driven explainer model by adding a rule-based option for explanation generation in AD, and then conducted a within-subject lab study with 39 participants in an immersive driving simulator to study the effect of the resulting explanations. Specifically, our investigation focused on: (1) how different types of explanations (specific vs. abstract) affect passengers' perceived safety, anxiety, and willingness to take control of the vehicle when the vehicle perception system makes erroneous predictions; and (2) the relationship between passengers' behavioural cues and their feelings during the autonomous drives. Our findings showed that passengers felt safer with specific explanations when the vehicle's perception system had minimal errors, while abstract explanations that hid perception errors led to lower feelings of safety. Anxiety levels increased when specific explanations revealed perception system errors (high transparency). We found no significant link between passengers' visual patterns and their anxiety levels. Our study suggests that passengers prefer clear and specific explanations (high transparency) when they originate from autonomous vehicles (AVs) with optimal perceptual accuracy.
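The contrast between the two explanation levels can be pictured with a toy rule-based generator in Python; the detection format and phrasing are assumptions for illustration, not the extended explainer model from the study.

```python
# Toy rule-based generator contrasting the two explanation levels studied:
# a specific explanation names the detected object (and would expose a
# misdetection), while an abstract one hides that detail. The detection
# format and wording are illustrative assumptions.

def explain(action: str, detected_object: str, specificity: str) -> str:
    if specificity == "specific":
        return f"{action.capitalize()} because a {detected_object} is ahead."
    return f"{action.capitalize()} because of an obstacle ahead."  # abstract

print(explain("stopping", "pedestrian", "specific"))   # accurate perception
print(explain("stopping", "pedestrian", "abstract"))   # hides the detail
# If perception misclassifies, only the specific variant reveals the error:
print(explain("stopping", "traffic cone", "specific"))
```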
... However, no significant effects were found on the user perception of the AV functionalities. Similarly, Schneider et al. (2021) evaluated explanations of the future trajectory via an AR WSD and an LED strip. First-time users reported increased user experience with explanations. ...
Article
User acceptance is essential for successfully introducing automated vehicles (AVs). Understanding the technology is necessary to overcome skepticism and achieve acceptance. This could be achieved by visualizing (uncertainties of) AV's internal processes, including situation perception, prediction, and trajectory planning. At the same time, relevant scenarios for communicating the functionalities are unclear. Therefore, we developed EduLicit to concurrently elicit relevant scenarios and evaluate the effects of visualizing AV's internal processes. A website capable of showing annotated videos enabled this methodology. With it, we replicated the results of a previous online study (N=76) using pre-recorded real-world videos. Additionally, in a second online study (N=22), participants uploaded scenarios they deemed challenging for AVs using our website. Most scenarios included large intersections and/or multiple vulnerable road users. Our work helps assess scenarios perceived as challenging for AVs by the public and, simultaneously, can help educate the public about visualizations of the functionalities of current AVs.
... However, previous research has primarily concentrated on conveying traffic information among sighted individuals. Therefore, these efforts predominantly rely on the visual modality [18,41,50,66,76], which remains inaccessible to BVIPs. ...
... Additionally, it has been suggested that the presentation of future trajectories to passengers can enhance their trust levels [20,41,76]. Hence, Schneider et al. [66] used peripheral light bands inside the vehicle to visualize objects and situations on the road. Similar approaches using ambient light showed increased effects on participants' comprehension of the road situation [47,50-52]. ...
Article
Highly Automated Vehicles offer a new level of independence to people who are blind or visually impaired. However, due to their limited vision, gaining knowledge of the surrounding traffic can be challenging. To address this issue, we conducted an interactive, participatory workshop (N=4) to develop an auditory interface and OnBoard, a tactile interface with expandable elements, to convey traffic information to visually impaired people. In a user study with N=14 participants, we explored usability, situation awareness, predictability, and engagement with OnBoard and the auditory interface. Our qualitative and quantitative results show that tactile cues, similar to auditory cues, are able to convey traffic information to users. In particular, participants with reduced visual acuity tended to show increased engagement with both interfaces. However, the diversity of visual impairments and individual information needs underscores the importance of a highly tailored multimodal approach as the ideal solution.