Simon T. Perrault’s research while affiliated with Singapore University of Technology and Design and other places


Publications (50)


Enhancing Deliberativeness: Evaluating the Impact of Multimodal Reflection Nudges
  • Conference Paper

April 2025

ShunYi Yeo · Zhuoqun Jiang · Anthony Tang · Simon Tangi Perrault

Iffy-Or-Not: Extending the Web to Support the Critical Evaluation of Fallacious Texts

March 2025

Social platforms have expanded opportunities for deliberation, with comments being used to inform one's opinion. However, using such information to form opinions is challenged by unsubstantiated or false content. To enhance the quality of opinion formation and potentially confer resistance to misinformation, we developed Iffy-Or-Not (ION), a browser extension that seeks to invoke critical thinking when reading texts. With three features guided by argumentation theory, ION highlights fallacious content, suggests diverse queries to probe it with, and offers deeper questions to consider and chat with others about. From a user study (N=18), we found that ION encourages users to be more attentive to the content, suggests queries that align with or are preferable to their own, and poses thought-provoking questions that expand their perspectives. However, some participants expressed aversion to ION due to misalignments with their information goals and thinking predispositions. Potential backfiring effects of ION are discussed.
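As a purely illustrative aside, the highlighting feature described in this abstract maps naturally onto a browser-extension content script. The TypeScript sketch below is not ION's actual implementation: the detection endpoint, the FallacySpan response shape, and the highlightParagraph helper are all invented for illustration, with fallacy classification delegated to some hypothetical backend service.

// Illustrative sketch only: ION's implementation is not shown on this page.
// Assumes a hypothetical detection endpoint that returns character offsets
// of suspected fallacies in a passage; all names below are invented.

interface FallacySpan {
  start: number; // character offset into the passage
  end: number;
  label: string; // e.g. "ad hominem", "false dilemma"
}

const DETECT_URL = "https://example.org/api/detect-fallacies"; // placeholder

async function detectFallacies(passage: string): Promise<FallacySpan[]> {
  const res = await fetch(DETECT_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ passage }),
  });
  if (!res.ok) return []; // fail open: never block reading
  return (await res.json()) as FallacySpan[];
}

// Wrap each flagged (assumed non-overlapping) span in a <mark> whose tooltip
// names the suspected fallacy, mirroring the highlight feature described above.
async function highlightParagraph(p: HTMLParagraphElement): Promise<void> {
  const text = p.textContent ?? "";
  const spans = await detectFallacies(text);
  if (spans.length === 0) return;

  const frag = document.createDocumentFragment();
  let cursor = 0;
  for (const s of [...spans].sort((a, b) => a.start - b.start)) {
    frag.append(text.slice(cursor, s.start));
    const mark = document.createElement("mark");
    mark.textContent = text.slice(s.start, s.end);
    mark.title = `Possible ${s.label}`;
    frag.append(mark);
    cursor = s.end;
  }
  frag.append(text.slice(cursor));
  p.replaceChildren(frag);
}

// Content-script entry point: scan paragraphs (e.g. comments) on the page.
document.querySelectorAll<HTMLParagraphElement>("p").forEach((p) => {
  void highlightParagraph(p);
});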


Enhancing Deliberativeness: Evaluating the Impact of Multimodal Reflection Nudges
  • Preprint
  • File available

February 2025

Nudging participants with text-based reflective nudges enhances deliberation quality on online deliberation platforms. The effectiveness of multimodal reflective nudges, however, remains largely unexplored. Given the multi-sensory nature of human perception, incorporating diverse modalities into self-reflection mechanisms has the potential to better support various reflective styles. This paper explores how presenting reflective nudges of different types (direct: persona; indirect: storytelling) in different modalities (text, image, video, and audio) affects deliberation quality. We conducted two user studies with 20 and 200 participants respectively. The first study identifies the preferred modality for each type of reflective nudge, revealing that text is most preferred for persona and video is most preferred for storytelling. The second study assesses the impact of these modalities on deliberation quality. Our findings reveal distinct effects associated with each modality, providing valuable insights for developing more inclusive and effective online deliberation platforms.




Not Too Long, Not Too Short: Goldilocks Principle of 'Optimal' Reflection Time on Online Deliberation Platforms

August 2024

The deliberative potential of online platforms has been widely examined, but the impact of reflection time on the quality of deliberation remains under-explored. This paper presents two user studies, involving 100 and 72 participants respectively, to investigate the impact of reflection time on deliberation quality in minute-scale deliberations. In the first study, we identified an optimal reflection time for composing short opinion comments. In the second study, we introduced four distinct interface-based time nudges aimed at encouraging reflection near the optimal time. While these nudges did not improve the quality of deliberation, they effectively prolonged reflection periods. Additionally, we observed mixed effects on users' experience, influenced by the nature of the time nudges. Our findings suggest that reflection time is crucial, particularly for users who typically deliberate below the optimal reflection threshold.
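As a hedged illustration of what one such interface-based time nudge might look like (the paper's four actual designs are not described on this page), the TypeScript sketch below keeps a comment form's submit button disabled until a minimum reflection time has elapsed. The 60-second constant and the #submit selector are invented placeholders, not values from the study.

// Hypothetical time nudge: hold the submit button until a minimum
// reflection time has passed, with a visible countdown.

const MIN_REFLECTION_MS = 60_000; // placeholder, not the paper's measured optimum

function attachTimeNudge(submit: HTMLButtonElement, minMs: number): void {
  const start = Date.now();
  submit.disabled = true;

  const timer = window.setInterval(() => {
    const remaining = minMs - (Date.now() - start);
    if (remaining <= 0) {
      submit.disabled = false;
      submit.textContent = "Post comment";
      window.clearInterval(timer);
    } else {
      // Show the countdown so the delay reads as a deliberate nudge, not a bug.
      submit.textContent = `Reflect a little longer (${Math.ceil(remaining / 1000)}s)`;
    }
  }, 250);
}

// Wiring it up (the selector is a placeholder for a real comment form):
const btn = document.querySelector<HTMLButtonElement>("#submit");
if (btn) attachTimeNudge(btn, MIN_REFLECTION_MS);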






Citations (30)


... However, existing empirical evaluations of explainable fact-checking are almost exclusively directed at and executed with laypeople from a limited selection of Western countries, rather than expert fact-checkers with diverse and varied contexts and perspectives. One such study indicated that neither feature-attribution nor example-based explanations of automated veracity prediction had an effect on laypeople's perceptions of the veracity of a news story or their intent to share it, but increased their tendency to over-rely on the AI system when it provided incorrect predictions [75]. A separate study also found no effect of example-based explanations on people's accuracy in predicting the veracity of a claim [77]. ...

Reference:

Show Me the Work: Fact-Checkers' Requirements for Explainable Automated Fact-Checking
XAI in Automated Fact-Checking? The Benefits Are Modest and There's No One-Explanation-Fits-All
  • Citing Conference Paper
  • May 2024

... As ION hinges on the performance of the detection of fallacies, we describe a technical evaluation of the LLM used for our study following prior work [58]. We did not evaluate the other prompts as most of them were for creative purposes (producing queries and questions) rather than decision-making (classifying fallacies). ...

Evaluation of an LLM in Identifying Logical Fallacies: A Call for Rigor When Adopting LLMs in HCI Research
  • Citing Conference Paper
  • November 2024

... Design research primarily focuses on studying design processes and artefacts to propose generic methods (e.g., 6-3-5 sketch) and principles (e.g., function sharing) enabling different design approaches such as biomimicry [11], empathic design [12], etc. Research in innovation and management theorises the mechanisms that enable technological artefacts to create value and how organisations and innovation ecosystems co-ordinate to maximise such value [13][14][15]. Building upon the double-diamond process model of design innovation and leveraging recent developments in data science, deep learning, deep generative models, etc., Luo [16] proposes a model for Data-Driven Innovation (DDI), referred to as the "double-hump" model (Figure 3.1). ...

Empathy in Smartphone Security Design: A User-Centred Approach for Trustworthy Web Browsing Across Age Groups
  • Citing Preprint
  • January 2024

... Recent advances in LLMs have accelerated the development of new tools that assist researchers in tasks such as literature synthesis, idea generation, data analysis, and academic writing. These tools include open-source ones like ScholarQA [3], which helps researchers synthesize literature and identify research gaps, and CollabCoder [16], which supports inductive collaborative qualitative analysis. An increasing number of new tools are being introduced. ...

CollabCoder: A Lower-barrier, Rigorous Workflow for Inductive Collaborative Qualitative Analysis with Large Language Models
  • Citing Conference Paper
  • May 2024

... Given their strong capabilities in natural language generation, reasoning, and image understanding, LLMs have become an increasingly popular topic of research in HCI and beyond [2,6,34,89,93]. ...

LLMs as Research Tools: Applications and Evaluations in HCI Data Work
  • Citing Conference Paper
  • May 2024

... Generative AI continues to be integrated into work practices to support knowledge workers' tasks with the goal of improving the quality and delivery time of their work. Large language models (LLMs) can be effectively used to generate documents, code, or summaries, to analyze or transform data, to translate languages, to answer questions, to classify or categorize content, to brainstorm, or to plan and organize activities [12,15,17,29,73,75,79]. They can also be leveraged to evaluate the quality of both AI-generated and human-created content [20,26,42,80]. ...

A Taxonomy for Human-LLM Interaction Modes: An Initial Exploration
  • Citing Conference Paper
  • May 2024

... Beyond reading, online deliberation also calls for being critical when writing content. Lightweight interface nudges have been shown to facilitate deeper deliberation [70] and reflection [129], enhancing the introspection and quality of arguments in comments. Such nudges could work in tandem with ION when implemented in social platforms to provide a holistic, end-to-end experience of engaging critically with online content. ...

Help Me Reflect: Leveraging Self-Reflection Interface Nudges to Enhance Deliberativeness on Online Deliberation Platforms
  • Citing Conference Paper
  • May 2024

... keep pace with rapidly evolving generative models [35,65,92,118]. Parallel efforts have explored the use of content warnings, especially in misinformation contexts [47,77], but disclosure design for AI-generated content (AIG) remains underdeveloped. As Generative AI tools become widespread across users of varying abilities, AIG is now more prevalent, and disclosure has thus become both more necessary and more complicated. ...

Effects of Automated Misinformation Warning Labels on the Intents to Like, Comment and Share Posts
  • Citing Conference Paper
  • December 2023

... They also found that human coders took much longer than GPT [14]. Similarly, Gao et al. developed a web-based tool using GPT-3 to make code suggestions from excerpts and discovered that employing this tool could reduce the workload of individual coding and improve mutual comprehension [25]. In another study, Xiao et al. found that GPT-3, when provided with code definitions and examples (i.e., few-shot), has better agreement with human experts [68]. ...

CollabCoder: A GPT-Powered WorkFlow for Collaborative Qualitative Analysis
  • Citing Conference Paper
  • October 2023

... Trust involves a certain degree of risk and uncertainty, and people rely on credibility cues to validate their choice of trusted communication sources. These credibility cues include experience, reliability, fairness, bias, and accuracy of the source, among others [26]. ...

Fact Checking Chatbot: A Misinformation Intervention for Instant Messaging Apps and an Analysis of Trust in the Fact Checkers
  • Citing Chapter
  • June 2023