July 2023 · 588 Reads · 7 Citations
October 2022 · 486 Reads · 5 Citations
May 2021 · 503 Reads · 77 Citations
In fully autonomous driving, passengers hand over steering control to a highly automated system. The vehicle's driving behaviour may lead to confusion and a negative user experience. When establishing such new technology, the user's acceptance and understanding are crucial factors for success or failure. Using a driving simulator and a mobile application, we evaluated whether system transparency during and after the interaction can improve the user experience and the subjective feeling of safety and control. We contribute an initial guideline for autonomous driving experience design, bringing together the areas of user experience, explainable artificial intelligence and autonomous driving. The AVAM questionnaire, the UEQ-S and interviews show that explanations during or after the ride help turn a negative user experience into a neutral one, which might be due to an increased feeling of control. However, we did not detect an effect of combining explanations during and after the ride.
September 2019 · 245 Reads · 8 Citations
Intelligent assistants such as Amazon's Alexa are now widely available and, unsurprisingly, are finding their way into cars, in some cases as a primary way to interact with the vehicle. We conducted a user enactment exploring the impact of transparency on a possible future user experience with a digital AI assistant in the car. The focus is on whether tasks should be performed in an opaque way, involving the user only when necessary, or in a transparent way, always offering the user insight into what is being done and how. We present initial findings indicating a slight preference for more transparency.
July 2019 · 371 Reads · 30 Citations
Communications in Computer and Information Science
It is typically not transparent to end users how AI systems derive information or make decisions. This becomes crucial as AI systems pervade daily life, increasingly influence automated decision-making, and are increasingly relied upon. We present work in progress on explainability to support transparency in human-AI interaction. In this paper, we discuss methods and research findings on categorizations of user types, system scope and limits, situational context, and changes over time. Based on these dimensions and their ranges and combinations, we aim to identify the facets of transparency that best address a specific situation. The approach is human-centered, seeking explanations that are adequate in their depth of detail and level of information, and we outline the different dimensions of this complex task.
... Research. Building on the concepts of Design Fiction and Diegetic Prototypes from the late 2000s, HCI researchers have explored various media beyond film [31,100], including interactive theater [98], websites [79], design probes [61], storyboards [73,88], memos [103], and even fictional research papers [51]. Markussen and Knutz [55] have also redefined Design Fiction from a "poetics" perspective, proposing new prototyping methods. ...
July 2023
... An example is techno-mimesis [16], a method for the design of robots, where developers roleplay the robots to be designed to better understand their limitations and "superpowers" in relation to humans. Another approach is to think more about the roles the otherware should fulfill (see also [63]), similarly to the way Eva identified personal "social gaps" and designed personas to fill them. Note that when it comes to social interactions, it is almost inevitable to draw on or compare to experiences of human-human interactions and relationships. ...
October 2022
... Such work emphasizes the need for in-situ explanations [15-17, 19, 34, 48, 94] to foster user trust and collaboration, especially during unexpected AV behaviors [47,59]. Providing explanations during the ride, especially focusing on answering "why" questions, can enhance user experience, perceived safety, and trust while reducing negative emotions [21,49,64,77]. However, existing XAI approaches in AVs still face challenges in addressing the specific needs of various stakeholders, such as balancing intelligibility with technical complexity [65,66,68,99]. ...
May 2021
... Liu (2021), for instance, in the context of AI, found that transparency (i.e., when the details for the decision are outlined and the decision is sound) can reduce user uncertainty and enhance trust. Neuhaus et al. (2019), in examining whether tasks performed by an AI-based intelligent assistant should be executed in an opaque or transparent manner, acknowledged the preference for enhanced ...
September 2019
... In alignment with user needs, these systems amplify human expertise while ensuring transparency and explainability, helping users understand the decisions and limitations of AI [35,60]. Through effective communication, iterative feedback, and user control, these systems create dynamic collaborations to enhance workflows [45,91,105]. ...
July 2019
Communications in Computer and Information Science