Alischa Rosenstein’s research while affiliated with Mercedes-Benz (Germany) and other places

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping. If you are this author and want this page removed, please let us know.

Publications (4)


Kinesthetic learning style. The kinesthetic learning style consists of highlighted portions of the original full text.
Auditory learning style. The auditory learning style consists of a textual summary of the original content and an additional auditory reading of the summary.
Visual learning style. The visual learning style consists of a graphic rendering of the key points of the original full text.
Reading/writing learning style. The reading/writing learning style consists of a series of bullet points containing key chunks of the original full text, adapted from the summary.
Malfunction explanation.


Trust Development and Explainability: A Longitudinal Study with a Personalized Assistive System
  • Article
  • Full-text available

March 2024 · 66 Reads · 3 Citations

[...] · Sabine T. Koeszegi

This article reports on a longitudinal experiment in which the influence of an assistive system’s malfunctioning and transparency on trust was examined over a period of seven days. To this end, we simulated the system’s personalized recommendation features to support participants with the task of learning new texts and taking quizzes. Using a 2 × 2 mixed design, the system’s malfunctioning (correct vs. faulty) and transparency (with vs. without explanation) were manipulated as between-subjects variables, whereas exposure time was used as a repeated-measure variable. A combined qualitative and quantitative methodological approach was used to analyze the data from 171 participants. Our results show that participants perceived the system making a faulty recommendation as a trust violation. Additionally, a trend emerged from both the quantitative and qualitative analyses regarding how the availability of explanations (even when not accessed) increased the perception of a trustworthy system.


Don’t fail me! The Level 5 Autonomous Driving Information Dilemma regarding Transparency and User Experience

March 2023 · 247 Reads · 11 Citations

Autonomous vehicles can behave unexpectedly: automated systems that rely on data-driven machine learning have been shown to produce false predictions or misclassifications, e.g., due to stickers on traffic signs, and thus fail in some situations. In critical situations, system designs must guarantee safety and reliability. However, in non-critical situations, the possibility of failures resulting in unexpected behaviour should be considered, as they negatively impact the passenger’s user experience and acceptance. We analyse whether an interactive conversational user interface can mitigate negative experiences when interacting with imperfect artificial intelligence systems. In our quantitative interactive online survey (N=113) and comparative qualitative Wizard of Oz study (N=8), users were able to interact with an autonomous SAE level 5 driving simulation. Our findings demonstrate that increased transparency improves user experience and acceptance. Furthermore, we show that additional information in failure scenarios can lead to an information dilemma and should be implemented carefully.


Figure 1: Experiment sequence for individual participants.
Figure 2: Screenshot of moderate and racy speed modes.
Statistical tests on sets of paired questionnaire items to validate the hypotheses formulated in section 2.0.4. (1) Wilcoxon rank sum test, (2) Wilcoxon signed rank test.
Velocity Styles for Autonomous Vehicles affecting Control, Safety, and User Experience

November 2021 · 269 Reads · 2 Citations

We evaluate whether user acceptance increases when users are allowed to select their preferred driving velocity in the context of autonomous driving. While the actual driving style does not differ, adjustments are made to the visualisation and sound of the interior of an autonomous vehicle simulator. These adjustments mimic the faster or slower driving style selected by the user. The experimental results show (1) that the perception of control and safety can increase when customisation options and different feedback modalities are introduced, and (2) that users experience a change in the perceived driving style simply due to visual and auditory modifications, even though the vehicle’s actual driving does not change.


ExplAIn Yourself! Transparency for Positive UX in Autonomous Driving

May 2021 · 503 Reads · 78 Citations

In a fully autonomous driving situation, passengers hand over steering control to a highly automated system. Autonomous driving behaviour may lead to confusion and negative user experience. When establishing such new technology, the user’s acceptance and understanding are crucial factors for success or failure. Using a driving simulator and a mobile application, we evaluated whether system transparency during and after the interaction can increase the user experience and the subjective feeling of safety and control. We contribute an initial guideline for autonomous driving experience design, bringing together the areas of user experience, explainable artificial intelligence, and autonomous driving. The AVAM questionnaire, UEQ-S, and interviews show that explanations during or after the ride help turn a negative user experience into a neutral one, which might be due to an increased feeling of control. However, we did not detect an effect of combining explanations during and after the ride.

Citations (3)


... Similarly, objective understandability evaluations, as used by Bhattacharya et al. [9], might complement Hoffman et al.'s subjective questions [23]. Additionally, studies suggest that assessing trust requires longitudinal evaluations to account for its gradual development [70]. Future work should incorporate these measures for a more comprehensive evaluation. ...

Reference:

Show Me How: Benefits and Challenges of Agent-Augmented Counterfactual Explanations for Non-Expert Users
Trust Development and Explainability: A Longitudinal Study with a Personalized Assistive System

... In designing AI voice agents for autonomous vehicles, researchers have emphasized factors such as user preferences (Large et al., 2019; Lee et al., 2022; Wang et al., 2021), safety and comfort (Prasetio & Nurliyana, 2023; Yoo et al., 2022), diverse scenarios (Graefe et al., 2022; Schneider et al., 2023), and privacy and security (Prasetio & Nurliyana, 2023; Sadaf et al., 2023). However, a critical but often overlooked aspect is the design of these agents for multi-user environments within FAVs. ...

Don’t fail me! The Level 5 Autonomous Driving Information Dilemma regarding Transparency and User Experience

... Such work emphasizes the need for in-situ explanations [15-17, 19, 34, 48, 94] to foster user trust and collaboration, especially during unexpected AV behaviors [47,59]. Providing explanations during the ride, especially focusing on answering "why" questions, can enhance user experience, perceived safety, and trust while reducing negative emotions [21,49,64,77]. However, existing XAI approaches in AVs still face challenges in addressing the specific needs of various stakeholders, such as balancing intelligibility with technical complexity [65,66,68,99]. ...

ExplAIn Yourself! Transparency for Positive UX in Autonomous Driving