Paul L. Fiedler’s research while affiliated with University of Tübingen and other places

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping. If you are this author and want this page removed, please let us know.

Publications (2)


Screenshots of unbranded application mock-ups. Static text-based online encyclopedia (left), dynamic text-based LLM-powered chatbot typing out the answer, voice-based assistant; information content was identical for all branded and unbranded applications within each experiment.
Information credibility by presentation mode, information accuracy, and branding. Panel (a) shows the main results of Experiment 1: presentation mode, F(2, 550) = 32.10, p < 0.001, ηp² = 0.11; information accuracy, F(1, 550) = 152.41, p < 0.001, ηp² = 0.22; presentation mode × information accuracy, F(2, 550) = 9.36, p < 0.001, ηp² = 0.03. Panel (b) shows the main results of Experiment 2: presentation mode, F(2, 659) = 10.25, p < 0.001, ηp² = 0.03; information accuracy, F(1, 659) = 39.36, p < 0.001, ηp² = 0.06; branding, F(1, 659) = 0.001, p = 0.973, ηp² < 0.01; presentation mode × information accuracy, F(2, 659) = 5.89, p = 0.003, ηp² = 0.02; presentation mode × branding, F(1, 659) = 0.09, p = 0.911, ηp² < 0.01; presentation mode × information accuracy × branding, F(2, 659) = 0.16, p = 0.855, ηp² < 0.01. Bars represent estimated marginal means (exact values displayed within each bar); error bars show standard errors (exact values in parentheses). Post-hoc pairwise comparisons of the significant two-way interaction (presentation mode × information accuracy) used Bonferroni adjustment. ***p < 0.001, **p < 0.01, *p < 0.05 (two-tailed). Blue asterisks and brackets indicate significant differences in information credibility between low- and high-accuracy information within a given presentation mode; gray asterisks and brackets indicate significant differences in perceived information credibility between two presentation modes.
Parallel multiple mediator models predicting information credibility. Panels (a) and (b) show the results of Experiments 1 and 2, respectively. The upper part shows the comparison between static text-based encyclopedia vs. voice-based agent and dynamic text-based agent. The lower part shows the comparison between voice-based agent and dynamic text-based agent. Significant effects are displayed in bold. 95% confidence intervals of relative indirect and direct effects are displayed in square brackets.
Global trustworthiness. Results of Experiment 1 are displayed in Panel (a), results of Experiment 2 in Panel (b). Bars represent means (exact values displayed within each bar), error bars show standard deviations (exact values in parentheses). *** p < 0.001 (two-tailed).
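The partial eta-squared values reported in the captions above can be recovered directly from the F statistics and their degrees of freedom via the standard identity ηp² = (F · df_effect) / (F · df_effect + df_error). A minimal sketch (the formula is standard; the function name is ours, not from the paper):

```python
def partial_eta_squared(f_stat: float, df_effect: int, df_error: int) -> float:
    """Recover partial eta-squared from a reported F statistic.

    eta_p^2 = (F * df_effect) / (F * df_effect + df_error)
    """
    return (f_stat * df_effect) / (f_stat * df_effect + df_error)

# Experiment 1, main effect of information accuracy: F(1, 550) = 152.41
print(round(partial_eta_squared(152.41, 1, 550), 2))  # 0.22, matching the caption
```

Small discrepancies against the reported values (e.g., 0.10 vs. 0.11 for the presentation-mode main effect) can arise because the F statistics are themselves rounded to two decimals.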
Conversational presentation mode increases credibility judgements during information search with ChatGPT
  • Article
  • Full-text available

July 2024 · 64 Reads · 5 Citations

Büsra Sarigül · [...]

People increasingly use large language model (LLM)-based conversational agents to obtain information. However, the information these models provide is not always factually accurate. Thus, it is critical to understand what helps users adequately assess the credibility of the provided information. Here, we report the results of two preregistered experiments in which participants rated the credibility of accurate versus partially inaccurate information ostensibly provided by a dynamic text-based LLM-powered agent, a voice-based agent, or a static text-based online encyclopedia. We found that people were better at detecting inaccuracies when identical information was provided as static text compared to both types of conversational agents, regardless of whether information search applications were branded (ChatGPT, Alexa, and Wikipedia) or unbranded. Mediation analysis overall corroborated the interpretation that a conversational nature poses a threat to adequate credibility judgments. Our research highlights the importance of presentation mode when dealing with misinformation.

Information Search With ChatGPT: An Experimental Comparison of Credibility Judgments across Different Presentation Modes

February 2024 · 42 Reads
People increasingly use large language model (LLM)-based conversational agents to obtain information. However, the information these models provide is not always factually accurate. Thus, it is critical to understand what helps users adequately assess the credibility of the provided information. Here, we report the results of two preregistered experiments in which participants rated the credibility of accurate versus partially inaccurate information ostensibly provided by a dynamic text-based LLM-powered agent, a voice-based agent, or a static text-based online encyclopedia. We found that people were better at detecting inaccuracies when information was provided as static text compared to both types of conversational agents, regardless of whether information search applications were branded (Alexa, ChatGPT, and Wikipedia) or unbranded. Mediation analysis further corroborated the interpretation that a conversational nature poses a threat to adequate credibility judgments. Our research highlights the importance of presentation mode when dealing with misinformation.

Citations (1)


... Besides, conversational presentation of information in the form of chatbots might lead to further overreliance [2], as humanised or anthropomorphised chatbots can lead to an illusion of reciprocity and care [58]. Future work could nevertheless investigate how to stimulate continuous reflection by processing the answers of the operator and asking follow-up questions. ...

Reference:

Questions: A Taxonomy for Critical Reflection in Machine-Supported Decision-Making
Conversational presentation mode increases credibility judgements during information search with ChatGPT