Andreas Butz’s research while affiliated with Ludwig-Maximilians-Universität in Munich and other places


Publications (298)


Fig. 2. The four versions of the diversion assistance system, from top to bottom: Rec, Cont, Rec+Cont, Baseline. The corresponding views ①–⑥ that are applicable during the three operational phases (normal flight, emergency occurs, and select diversion option) are numbered corresponding to the detailed views in Figure 3.
Fig. 3. Screenshots of the diversion assistance system. ① Recommendations. ② Edit recommendation criteria. ③ Local hints during normal flight. ④ Adjusted local hints during emergency. ⑤ Select emergency type. ⑥ Baseline without AI. The numbers correspond to those in Figure 2.
Fig. 6. Single engine failure scenario. The flight is from Geneva to London. Figure based on SkyVector.
Fig. 7. Medical emergency scenario. The flight is from Tenerife to Munich. Figure based on SkyVector.
Fig. 8. Airport closure scenario. The flight is from Sharm El Sheikh to Hamburg. Figure based on SkyVector.


Beyond Recommendations: From Backward to Forward AI Support of Pilots' Decision-Making Process
  • Article
  • Full-text available

November 2024 · 86 Reads · 1 Citation

Proceedings of the ACM on Human-Computer Interaction

Lucas Dullenkopf · [...] · Andreas Butz

AI is anticipated to enhance human decision-making in high-stakes domains like aviation, but adoption is often hindered by challenges such as inappropriate reliance and poor alignment with users' decision-making. Recent research suggests that a core underlying issue is the recommendation-centric design of many AI systems, i.e., they give end-to-end recommendations and ignore the rest of the decision-making process. Alternative support paradigms are rare, and it remains unclear how the few that do exist compare to recommendation-centric support. In this work, we aimed to empirically compare recommendation-centric support to an alternative paradigm, continuous support, in the context of diversions in aviation. We conducted a mixed-methods study with 32 professional pilots in a realistic setting. To ensure the quality of our study scenarios, we conducted a focus group with four additional pilots prior to the study. We found that continuous support can support pilots' decision-making in a forward direction, allowing them to think more beyond the limits of the system and make faster decisions when combined with recommendations, though the forward support can be disrupted. Participants' statements further suggest a shift in design goal away from providing recommendations, to supporting quick information gathering. Our results show ways to design more helpful and effective AI decision support that goes beyond end-to-end recommendations.



You Can Only Verify When You Know the Answer: Feature-Based Explanations Reduce Overreliance on AI for Easy Decisions, but Not for Hard Ones

September 2024 · 50 Reads · 1 Citation

Explaining the mechanisms behind model predictions is a common strategy in AI-assisted decision-making to help users rely appropriately on AI. However, recent research shows that the effectiveness of explanations depends on numerous factors, leading to mixed results, with many studies finding no effect or even an increase in overreliance, while explanations do improve appropriate reliance in other studies. We consider the factor of decision difficulty to better understand when feature-based explanations can mitigate overreliance. To this end, we conducted an online experiment (N = 200) with carefully selected task instances that cover a wide range of difficulties. We found that explanations reduce overreliance for easy decisions, but that this effect vanishes with increasing decision difficulty. For the most difficult decisions, explanations might even increase overreliance. Our results imply that explanations of the model's inner workings are only helpful for a limited set of decision tasks where users easily know the answer themselves.


Social MediARverse: Investigating Users' Social Media Content Sharing and Consuming Intentions with Location-Based AR

August 2024 · 55 Reads

Augmented Reality (AR) is evolving to become the next frontier in social media, merging physical and virtual reality into a living metaverse, a Social MediARverse. With this transition, we must understand how different contexts (public, semi-public, and private) affect user engagement with AR content. We address this gap in current research by conducting an online survey with 110 participants, showcasing 36 AR videos, and polling them about the content's fit and appropriateness. Specifically, we manipulated these three spaces, two forms of dynamism (dynamic vs. static), and two dimensionalities (2D vs. 3D). Our findings reveal that dynamic AR content is generally more favorably received than static content. Additionally, users find sharing and engaging with AR content in private settings more comfortable than in others. By this, the study offers valuable insights for designing and implementing future Social MediARverses and guides industry and academia on content visualization and contextual considerations.


Beyond Recommendations: From Backward to Forward AI Support of Pilots' Decision-Making Process

June 2024 · 117 Reads

AI is anticipated to enhance human decision-making in high-stakes domains like aviation, but adoption is often hindered by challenges such as inappropriate reliance and poor alignment with users' decision-making. Recent research suggests that a core underlying issue is the recommendation-centric design of many AI systems, i.e., they give end-to-end recommendations and ignore the rest of the decision-making process. Alternative support paradigms are rare, and it remains unclear how the few that do exist compare to recommendation-centric support. In this work, we aimed to empirically compare recommendation-centric support to an alternative paradigm, continuous support, in the context of diversions in aviation. We conducted a mixed-methods study with 32 professional pilots in a realistic setting. To ensure the quality of our study scenarios, we conducted a focus group with four additional pilots prior to the study. We found that continuous support can support pilots' decision-making in a forward direction, allowing them to think more beyond the limits of the system and make faster decisions when combined with recommendations, though the forward support can be disrupted. Participants' statements further suggest a shift in design goal away from providing recommendations, to supporting quick information gathering. Our results show ways to design more helpful and effective AI decision support that goes beyond end-to-end recommendations.






Patients’ Trust in Artificial Intelligence–based Decision-making for Localized Prostate Cancer: Results from a Prospective Trial

November 2023 · 59 Reads · 25 Citations

European Urology Focus

Background: Artificial intelligence (AI) has the potential to enhance diagnostic accuracy and improve treatment outcomes. However, AI integration into clinical workflows and patient perspectives remain unclear.
Objective: To determine patients' trust in AI and their perception of urologists relying on AI, and future diagnostic and therapeutic AI applications for patients.
Design, setting, and participants: A prospective trial was conducted involving patients who received diagnostic or therapeutic interventions for prostate cancer (PC).
Intervention: Patients were asked to complete a survey before magnetic resonance imaging, prostate biopsy, or radical prostatectomy.
Outcome measurements and statistical analysis: The primary outcome was patient trust in AI. Secondary outcomes were the choice of AI in treatment settings and traits attributed to AI and urologists.
Results and limitations: Data for 466 patients were analyzed. The cumulative affinity for technology was positively correlated with trust in AI (correlation coefficient 0.094; p = 0.04), whereas patient age, level of education, and subjective perception of illness were not (p > 0.05). The mean score (± standard deviation) for trust in capability was higher for physicians than for AI for responding in an individualized way when communicating a diagnosis (4.51 ± 0.76 vs 3.38 ± 1.07; mean difference [MD] 1.130, 95% confidence interval [CI] 1.010–1.250; t(924) = 18.52, p < 0.001; Cohen's d = 1.040) and for explaining information in an understandable way (4.57 ± vs 3.18 ± 1.09; MD 1.392, 95% CI 1.275–1.509; t(921) = 27.27, p < 0.001; Cohen's d = 1.216). Patients stated that they had higher trust in a diagnosis made by AI controlled by a physician versus AI not controlled by a physician (4.31 ± 0.88 vs 1.75 ± 0.93; MD 2.561, 95% CI 2.444–2.678; t(925) = 42.89, p < 0.001; Cohen's d = 2.818). AI-assisted physicians (66.74%) were preferred over physicians alone (29.61%), physicians controlled by AI (2.36%), and AI alone (0.64%) for treatment in the current clinical scenario.
Conclusions: Trust in future diagnostic and therapeutic AI-based treatment relies on optimal integration with urologists as the human-machine interface to leverage human and AI capabilities.
Patient summary: Artificial intelligence (AI) will play a role in diagnostic decisions in prostate cancer in the future. At present, patients prefer AI-assisted urologists over urologists alone, AI alone, and AI-controlled urologists. Specific traits of AI and urologists could be used to optimize diagnosis and treatment for patients with prostate cancer.
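
The reported effect sizes can be roughly sanity-checked from the group means and standard deviations given above. Below is a minimal Python sketch (not from the paper) that computes a mean difference and a pooled-SD Cohen's d for the physician-controlled vs. uncontrolled AI comparison; the trial's exact formula (e.g., a paired-samples variant) may differ, so the output is illustrative only.

```python
import math

def cohens_d_pooled(mean_a: float, sd_a: float, mean_b: float, sd_b: float) -> float:
    """Cohen's d using the pooled standard deviation of two groups.

    Note: this is the classic two-group definition; the trial may have used a
    paired-samples variant, so results can deviate slightly from the published values.
    """
    pooled_sd = math.sqrt((sd_a ** 2 + sd_b ** 2) / 2)
    return (mean_a - mean_b) / pooled_sd

# Trust in a diagnosis by physician-controlled AI vs. AI not controlled by a physician,
# using the means and SDs reported in the abstract (4.31 ± 0.88 vs 1.75 ± 0.93).
d = cohens_d_pooled(4.31, 0.88, 1.75, 0.93)
print(f"Cohen's d ≈ {d:.2f}")  # ≈ 2.83, close to the reported 2.818
```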


Citations (81)


... Frontline negotiators need to keep abreast of the emotions, cultural backgrounds, and human irrationality involved in the negotiation process [14,16,18,19]. This focus on the human element aligns with the approaches in human-AI collaboration, particularly the "Process-Oriented Support" [65]. Process-oriented support is an approach to AI decision support systems that focuses on assisting users throughout the entire decision-making process rather than just providing end-to-end recommendations. ...

Reference:

"ChatGPT, Don't Tell Me What to Do": Designing AI for Context Analysis in Humanitarian Frontline Negotiations
Beyond Recommendations: From Backward to Forward AI Support of Pilots' Decision-Making Process

Proceedings of the ACM on Human-Computer Interaction

... Our volunteers associated more empathetic and preferable messages with human authorship, though the preferred messages were often authored by chatbots. This preference aligns with other studies that patients, especially at lower education levels, view AI messages negatively and express higher trust in physicians than AI-generated information [1,4,12,25]. Urologists are aware of these perceptions, and a 2024 survey of 456 urologists identified negative patient perception as a primary concern of AI-generated information [26]. ...

Patients’ Trust in Artificial Intelligence–based Decision-making for Localized Prostate Cancer: Results from a Prospective Trial
  • Citing Article
  • November 2023

European Urology Focus

... We also tried to balance professions and gender in this step. We kept only 14 task instances to keep the main study (see Section 3.4) short, as excessively long studies may inadvertently contribute to overreliance [42]. The decision difficulty scores in this step were preliminary since our volunteers did not necessarily match the demographics of our study participants. ...

Is Overreliance on AI Provoked by Study Design?
  • Citing Conference Paper
  • August 2023

Lecture Notes in Computer Science

... This highlights the importance of iterative design by evaluating and optimizing device features such as this technology's effectiveness, affordability, operability, perception, and acceptability. 29 Additional factors such as weight (both total and worn by the user), whether a device is tethered (portable vs wearable), and ease of configuration to patients should also be considered in the design and selection of devices. [30][31][32] However, there may also not always be a single best solution that fits the requirements of all end users. ...

Would You Hold My Hand? Exploring External Observers’ Perception of Artificial Hands

Multimodal Technologies and Interaction

... Regarding operational performance, the previous experience factor shows high validity in various areas for performance indicators (Grabner et al., 2006). According to Ou et al. (2023), there is a literature gap regarding the impact of the experience criteria on the institution's performance. Based on the above, the following hypotheses were proposed. ...

The Impact of Expertise in the Loop for Exploring Machine Rationality
  • Citing Conference Paper
  • March 2023

... Furthermore, the investigation delves into the factors influencing user trust, exploring elements such as transparency, explainability, and ethical considerations [7]. The dynamic nature of trust is analyzed over extended periods, considering the impact of continued interactions on the evolution of trust levels. ...

Designing AI for Appropriation Will Calibrate Trust

... Tulli and colleagues [12] define transparency as "an appropriate mutual understanding and trust that leads to effective collaboration between humans and agents." Transparent systems often focus on making the robot's behavior understandable to the user [13] and are used in various domains, such as teaching robots [14] and healthcare [15]. Explainability is used to make systems more understandable and communicate their capabilities, by providing explanations of the shown behavior. ...

A Literature Survey of How to Convey Transparency in Co-Located Human-Robot Interaction

Multimodal Technologies and Interaction

... A frequent problem is that AI decision support is usually designed to be recommendation-centric, where the primary functionality of the system is to give end-to-end decision recommendations, i.e., the system suggests a possible end result straight from its input data. By directly jumping to the end result, these systems only support the very end of the decision-making [7], ignoring the entire process leading up to the decision [71,73,76]. ...

Resilience Through Appropriation: Pilots' View on Complex Decision Support

... Although these devices offer convenience and opportunities to momentarily escape discomforting states, research has documented substantial drawbacks linked to excessive and absent-minded usage [14]. Empirical evidence associates overuse with impaired sleep quality, increased anxiety, social isolation, reduced life satisfaction, and diminished academic or professional performance [6,14,33,42]. Such detrimental effects do not stem solely from prolonged screen time but also arise from the nature of usage. ...

Short-Form Videos Degrade Our Capacity to Retain Intentions: Effect of Context Switching On Prospective Memory

... Virtual Reality (VR) technology has witnessed significant advancements in recent years, with its applications through various domains, including gaming [19], healthcare [45], and training [20]. As VR evolves, the focus has shifted towards developing adaptive systems intelligently adapting to users' states in real-time [3]. ...

Walk This Beam: Impact of Different Balance Assistance Strategies and Height Exposure on Performance and Physiological Arousal in VR