Anqing Chen’s research while affiliated with University of Texas at Austin and other places

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping. If you are this author and want this page removed, please let us know.

Publications (4)


Deliberating with AI: Improving Decision-Making for the Future through Participatory AI Design and Stakeholder Deliberation
  • Article

April 2023 · 54 Reads · 45 Citations

Proceedings of the ACM on Human-Computer Interaction

Angie Zhang · Olympia Walker · Kaci Nguyen · [...]

Research exploring how to support decision-making has often used machine learning to automate or assist human decisions. We take an alternative approach for improving decision-making, using machine learning to help stakeholders surface ways to improve and make fairer decision-making processes. We created "Deliberating with AI", a web tool that enables people to create and evaluate ML models in order to examine strengths and shortcomings of past decision-making and deliberate on how to improve future decisions. We apply this tool to a context of people selection, having stakeholders---decision makers (faculty) and decision subjects (students)---use the tool to improve graduate school admission decisions. Through our case study, we demonstrate how the stakeholders used the web tool to create ML models that they used as boundary objects to deliberate over organizational decision-making practices. We share insights from our study to inform future research on stakeholder-centered participatory AI design and technology for organizational decision-making.
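The "Deliberating with AI" tool itself is not reproduced here, but the core idea the abstract describes---training an interpretable model on past decisions so stakeholders can inspect and debate what drove them---can be sketched minimally. Everything below is hypothetical: the feature names, the simulated applicant data, and the decision rule are illustrative stand-ins, not the study's actual admissions data or tool.

```python
# A minimal sketch, assuming hypothetical admissions data: fit an
# interpretable model to past decisions, then surface its learned
# weights as discussion prompts for stakeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Hypothetical applicant features: GPA, test score, research experience (years)
X = np.column_stack([
    rng.uniform(2.0, 4.0, n),    # gpa
    rng.uniform(280, 340, n),    # test_score
    rng.integers(0, 5, n),       # research_years
])

# Hypothetical past admission decisions, loosely tied to the features
signal = 2.0 * (X[:, 0] - 3.0) + 0.05 * (X[:, 1] - 310) + 0.6 * X[:, 2]
y = (signal + rng.normal(0, 1, n) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# The coefficients act as a "boundary object": stakeholders can ask whether
# the weight the past process placed on each criterion is the weight they
# actually want future decisions to place on it.
for name, coef in zip(["gpa", "test_score", "research_years"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

The choice of a linear model is deliberate in this sketch: its per-feature weights are directly readable, which is what makes the model useful as a deliberation aid rather than only as a predictor.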


Deliberating with AI: Improving Decision-Making for the Future through Participatory AI Design and Stakeholder Deliberation
  • Preprint
  • File available

February 2023 · 146 Reads
Research exploring how to support decision-making has often used machine learning to automate or assist human decisions. We take an alternative approach for improving decision-making, using machine learning to help stakeholders surface ways to improve and make fairer decision-making processes. We created "Deliberating with AI", a web tool that enables people to create and evaluate ML models in order to examine strengths and shortcomings of past decision-making and deliberate on how to improve future decisions. We apply this tool to a context of people selection, having stakeholders -- decision makers (faculty) and decision subjects (students) -- use the tool to improve graduate school admission decisions. Through our case study, we demonstrate how the stakeholders used the web tool to create ML models that they used as boundary objects to deliberate over organizational decision-making practices. We share insights from our study to inform future research on stakeholder-centered participatory AI design and technology for organizational decision-making.


Understanding Effects of Algorithmic vs. Community Label on Perceived Accuracy of Hyper-partisan Misinformation

November 2022 · 28 Reads · 23 Citations

Proceedings of the ACM on Human-Computer Interaction

Hyper-partisan misinformation has become a major public concern. In order to examine what type of misinformation label can mitigate hyper-partisan misinformation sharing on social media, we conducted a 4 (label type: algorithm, community, third-party fact-checker, and no label) × 2 (post ideology: liberal vs. conservative) between-subjects online experiment (N = 1,677) in the context of COVID-19 health information. The results suggest that for liberal users, all labels reduced the perceived accuracy and believability of fake posts regardless of the posts' ideology. In contrast, for conservative users, the efficacy of the labels depended on whether the posts were ideologically consistent: algorithmic labels were more effective in reducing the perceived accuracy and believability of fake conservative posts compared to community labels, whereas all labels were effective in reducing their belief in liberal posts. Our results shed light on the differing effects of various misinformation labels depending on people's political ideology.
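The abstract describes a 4 × 2 between-subjects factorial design. The sketch below shows only how such a design is typically structured and summarized as cell means; the data are simulated placeholders (the cell sizes, the 1-7 perceived-accuracy scale, and the effect direction are assumptions for illustration, not the study's actual measures or results).

```python
# A minimal sketch of a 4 (label type) x 2 (post ideology) between-subjects
# design, with simulated placeholder data. Each participant sits in exactly
# one cell; the basic summary is the table of per-cell means.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
labels = ["algorithm", "community", "fact_checker", "no_label"]
ideologies = ["liberal", "conservative"]

rows = []
for label in labels:
    for ideology in ideologies:
        # Roughly 1,677 / 8 participants per cell in the reported design;
        # we use 210 per cell here for a balanced simulation.
        for _ in range(210):
            # Assumed effect for illustration: any label lowers perceived
            # accuracy relative to no label.
            base = 4.0 if label == "no_label" else 3.0
            rows.append({
                "label_type": label,
                "post_ideology": ideology,
                "perceived_accuracy": float(np.clip(rng.normal(base, 1.0), 1, 7)),
            })
df = pd.DataFrame(rows)

# Cell means: the standard first look at a between-subjects factorial design
cell_means = (
    df.groupby(["label_type", "post_ideology"])["perceived_accuracy"]
      .mean()
)
print(cell_means.round(2))
```

In a real analysis, these cell means would feed into a two-way ANOVA or regression with an interaction term, which is how an "efficacy depends on ideology" finding like the one in the abstract would be tested statistically.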


Figure: The Means (SE) of label perceptions

Understanding Effects of Algorithmic vs. Community Label on Perceived Accuracy of Hyper-partisan Misinformation

March 2022 · 91 Reads
Hyper-partisan misinformation has become a major public concern. In order to examine what type of misinformation label can mitigate hyper-partisan misinformation sharing on social media, we conducted a 4 (label type: algorithm, community, third-party fact-checker, and no label) × 2 (post ideology: liberal vs. conservative) between-subjects online experiment (N = 1,677) in the context of COVID-19 health information. The results suggest that for liberal users, all labels reduced the perceived accuracy and believability of fake posts regardless of the posts' ideology. In contrast, for conservative users, the efficacy of the labels depended on whether the posts were ideologically consistent: algorithmic labels were more effective in reducing the perceived accuracy and believability of fake conservative posts compared to community labels, whereas all labels were effective in reducing their belief in liberal posts. Our results shed light on the differing effects of various misinformation labels depending on people's political ideology.

Citations (2)


... Contributions in this area span diverse fields, including healthcare [15], judicial systems [4], civic engagement [1], philanthropy [28], and urban planning [36]. More technical applications include collective debiasing [9], collaborative debugging [32], ranking with partial preferences [8] and web-based tools for democratizing ML workflows [46] . Collectively, these contributions reflect what has been described as a "participatory turn" in AI design [13]. ...

Reference:

Beyond Predictions: A Participatory Framework for Multi-Stakeholder Decision-Making
Deliberating with AI: Improving Decision-Making for the Future through Participatory AI Design and Stakeholder Deliberation
  • Citing Article
  • April 2023

Proceedings of the ACM on Human-Computer Interaction

... Rijo and Waldzus [248] looked into how voting patterns and political beliefs influence the way people evaluate information credibility. Jia et al. [141] focused on liberal and conservative users. Other works (e.g., [130]) focus on social media users in general. ...

Understanding Effects of Algorithmic vs. Community Label on Perceived Accuracy of Hyper-partisan Misinformation
  • Citing Article
  • November 2022

Proceedings of the ACM on Human-Computer Interaction