David V. Pynadath’s research while affiliated with University of Southern California and other places

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping. If you are this author and want this page removed, please let us know.

Publications (114)


PsybORG+: Modeling and Simulation for Detecting Cognitive Biases in Advanced Persistent Threats
  • Conference Paper

October 2024 · 1 Read · 1 Citation

Shuo Huang · Fred Jones · Nikolos Gurney · [...] · Quanyan Zhu


The Design of Transparency Communication for Human-Multirobot Teams

July 2023 · 13 Reads

Lecture Notes in Computer Science

Successful human-machine teaming often hinges on the ability of eXplainable Artificial Intelligence (XAI) to make an agent’s reasoning transparent to human teammates. Doing so requires that the agent navigate a tradeoff: revealing enough of its reasoning to those teammates without overwhelming them with too much information. This challenge is amplified when a person is teamed with multiple agents. The amplification is not simply linear, due not only to the increase from 1 to N agents’ worth of reasoning content, but also to the interdependency among the agents’ reasoning that must be made transparent as well. In this work, we examine the challenges in conveying this interdependency to people teaming with multiple agents. We also propose alternate domain-independent strategies for a team of simulated robots to generate messages about their reasoning to be conveyed to a human teammate. We illustrate these strategies through their implementation in a search-and-rescue simulation testbed.

Keywords: Robots · Avatars and Virtual Humans · Human-Robot Interaction · Explainable Artificial Intelligence
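
The contrast between messaging strategies is easiest to see in code. The following is a toy sketch, not the paper's implementation; the class, robot names, and message formats are invented for illustration. It contrasts independent per-robot explanations, which convey no interdependency, with a single team-level summary that also surfaces dependencies among robots.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class RobotReport:
    name: str
    goal: str
    depends_on: Optional[str] = None  # teammate whose plan this robot relies on

def per_robot_messages(robots: List[RobotReport]) -> List[str]:
    # Strategy 1: each robot explains itself independently (N separate messages).
    return [f"{r.name}: heading to {r.goal}" for r in robots]

def team_summary_message(robots: List[RobotReport]) -> str:
    # Strategy 2: one team-level message that also exposes interdependencies.
    goals = ", ".join(f"{r.name} -> {r.goal}" for r in robots)
    deps = "; ".join(f"{r.name} is waiting on {r.depends_on}"
                     for r in robots if r.depends_on)
    return f"Team plan: {goals}." + (f" Dependencies: {deps}." if deps else "")

robots = [RobotReport("R1", "room A"), RobotReport("R2", "room B", depends_on="R1")]
print(per_robot_messages(robots))    # no interdependency information
print(team_summary_message(robots))  # exposes R2's dependency on R1
```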


My Actions Speak Louder Than Your Words: When User Behavior Predicts Their Beliefs About Agents’ Attributes

July 2023 · 11 Reads · 2 Citations

Lecture Notes in Computer Science

An implicit expectation of asking users to rate agents, such as an AI decision-aid, is that they will use only relevant information—ask them about an agent’s benevolence, and they should consider whether or not it was kind. Behavioral science, however, suggests that people sometimes use irrelevant information. We identify an instance of this phenomenon, where users who experienced better outcomes in a human-agent interaction systematically rated the agent as having better abilities, being more benevolent, and exhibiting greater integrity in a post hoc assessment than users who experienced worse outcomes—which were the result of their own behavior—with the same agent. Our analyses suggest the need for augmentation of models so that they account for such biased perceptions, as well as mechanisms so that agents can detect and even actively work to correct this and similar biases of users.

Keywords: Agent factors · Trait attribution · Cognitive bias · Agent-user interactions
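
As a rough illustration of the analysis pattern (entirely simulated numbers; the study's actual measures, scales, and models differ), the bias shows up as a correlation between a user's self-caused outcome and their post hoc rating of an agent that behaved identically for everyone:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Outcome driven by the user's own behavior, plus a rating that (biasedly) tracks it.
outcome_score = rng.normal(size=200)
benevolence_rating = 4 + 0.4 * outcome_score + rng.normal(scale=1.0, size=200)

r, p = stats.pearsonr(outcome_score, benevolence_rating)
print(f"outcome vs. rating correlation: r = {r:.2f}, p = {p:.3g}")
# A non-zero correlation here is the bias: the agent was identical for everyone,
# so its rated benevolence should not track the user's own outcome.
```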


The Role of Heuristics and Biases during Complex Choices with an AI Teammate

June 2023 · 26 Reads · 1 Citation

Proceedings of the AAAI Conference on Artificial Intelligence

Behavioral scientists have classically documented aversion to algorithmic decision aids, from simple linear models to AI. Sentiment, however, is changing and possibly accelerating AI helper usage. AI assistance is, arguably, most valuable when humans must make complex choices. We argue that classic experimental methods used to study heuristics and biases are insufficient for studying complex choices made with AI helpers. We adapted an experimental paradigm designed for studying complex choices in such contexts. We show that framing and anchoring effects impact how people work with an AI helper and are predictive of choice outcomes. The evidence suggests that some participants, particularly those in a loss frame, put too much faith in the AI helper and experienced worse choice outcomes by doing so. The paradigm also generates computational modeling-friendly data allowing future studies of human-AI decision making.


Comparing Psychometric and Behavioral Predictors of Compliance During Human-AI Interactions

April 2023 · 14 Reads

Lecture Notes in Computer Science

Optimization of human-AI teams hinges on the AI’s ability to tailor its interaction to individual human teammates. A common hypothesis in adaptive AI research is that minor differences in people’s predisposition to trust can significantly impact their likelihood of complying with recommendations from the AI. Predisposition to trust is often measured with self-report inventories that are administered before interactions. We benchmark a popular measure of this kind against behavioral predictors of compliance. We find that the inventory is a less effective predictor of compliance than the behavioral measures in datasets taken from three previous research projects. This suggests a general property that individual differences in initial behavior are more predictive than differences in self-reported trust attitudes. This result also shows a potential for easily accessible behavioral measures to provide an AI with more accurate models without the use of (often costly) survey instruments.

Keywords: human-robot interaction · human-computer interaction · compliance · trust · intervention · decision making
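
A minimal sketch of this kind of benchmark, using entirely simulated data and invented feature names rather than the three datasets in the paper: fit the same simple classifier on a pre-interaction inventory score and on an early-behavior feature, and compare out-of-sample predictive power.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
trust_inventory = rng.normal(size=n)             # pre-interaction self-report score
early_compliance_rate = rng.uniform(size=n)      # behavior in the first few trials
# Simulated outcome that depends mostly on early behavior (an assumption of this toy).
p = 1 / (1 + np.exp(-(2.0 * early_compliance_rate + 0.3 * trust_inventory - 1.0)))
complied = rng.binomial(1, p)

for name, feature in [("inventory", trust_inventory),
                      ("early behavior", early_compliance_rate)]:
    auc = cross_val_score(LogisticRegression(), feature.reshape(-1, 1),
                          complied, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean cross-validated AUC = {auc:.2f}")
```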


Multiagent Inverse Reinforcement Learning via Theory of Mind Reasoning

February 2023 · 44 Reads

To understand how people interact with each other in collaborative settings, especially in situations where individuals know little about their teammates, Multiagent Inverse Reinforcement Learning (MIRL) aims to infer the reward functions guiding the behavior of each individual given trajectories of a team's behavior during task performance. Unlike current MIRL approaches, team members are not assumed to know each other's goals a priori; rather, they collaborate by adapting to the goals of others, perceived by observing their behavior, all while jointly performing a task. To address this problem, we propose a novel approach to MIRL via Theory of Mind (MIRL-ToM). For each agent, we first use ToM reasoning to estimate a posterior distribution over baseline reward profiles given their demonstrated behavior. We then perform MIRL via decentralized equilibrium by employing single-agent Maximum Entropy IRL to infer a reward function for each agent, where we simulate the behavior of the other teammates according to the time-varying distribution over profiles. We evaluate our approach in a simulated 2-player search-and-rescue operation where the goal of the agents, playing different roles, is to search for and evacuate victims in the environment. Results show that the choice of baseline profiles is paramount to the recovery of ground-truth rewards, and that MIRL-ToM is able to recover the rewards used by agents interacting with either known or unknown teammates.
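
A minimal sketch of the ToM-style belief update at the heart of the approach, with hypothetical profile names and toy Q-values (not the authors' code): maintain a posterior over candidate reward profiles for a teammate and update it after each observed action, using a Boltzmann (maximum-entropy-style) action likelihood under each profile.

```python
import numpy as np

def softmax_policy(q_values, beta=2.0):
    """Boltzmann action distribution induced by one candidate reward profile."""
    z = beta * (q_values - q_values.max())
    p = np.exp(z)
    return p / p.sum()

def update_profile_belief(belief, q_by_profile, observed_action, beta=2.0):
    """One Bayesian update of P(profile | observed teammate behavior)."""
    likelihoods = np.array([
        softmax_policy(q, beta)[observed_action] for q in q_by_profile
    ])
    posterior = belief * likelihoods
    return posterior / posterior.sum()

# Toy example: two candidate profiles ("search-focused" vs. "evacuate-focused")
# over three actions; the teammate is observed taking action 2 twice.
q_by_profile = [np.array([1.0, 0.2, 0.1]), np.array([0.1, 0.3, 1.2])]
belief = np.array([0.5, 0.5])
for _ in range(2):
    belief = update_profile_belief(belief, q_by_profile, observed_action=2)
print(belief)  # probability mass shifts toward the "evacuate-focused" profile
```

The resulting time-varying distribution over profiles is what a single-agent MaxEnt IRL step could then condition on when simulating the other teammates, per the abstract's description.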


Figures: Study 1 FC model comparisons; Study 1 M1C model comparisons; Study 2 FC models; Study 2 FC model comparison; Study 2 M1C models (dependent variable: compliance percentage).

Comparing Psychometric and Behavioral Predictors of Compliance During Human-AI Interactions
  • Preprint
  • File available

February 2023 · 53 Reads

Optimization of human-AI teams hinges on the AI's ability to tailor its interaction to individual human teammates. A common hypothesis in adaptive AI research is that minor differences in people's predisposition to trust can significantly impact their likelihood of complying with recommendations from the AI. Predisposition to trust is often measured with self-report inventories that are administered before interactions. We benchmark a popular measure of this kind against behavioral predictors of compliance. We find that the inventory is a less effective predictor of compliance than the behavioral measures in datasets taken from three previous research projects. This suggests a general property that individual differences in initial behavior are more predictive than differences in self-reported trust attitudes. This result also shows a potential for easily accessible behavioral measures to provide an AI with more accurate models without the use of (often costly) survey instruments.


My Actions Speak Louder Than Your Words: When User Behavior Predicts Their Beliefs about Agents' Attributes

January 2023 · 33 Reads

An implicit expectation of asking users to rate agents, such as an AI decision-aid, is that they will use only relevant information -- ask them about an agent's benevolence, and they should consider whether or not it was kind. Behavioral science, however, suggests that people sometimes use irrelevant information. We identify an instance of this phenomenon, where users who experienced better outcomes in a human-agent interaction systematically rated the agent as having better abilities, being more benevolent, and exhibiting greater integrity in a post hoc assessment than users who experienced worse outcomes -- which were the result of their own behavior -- with the same agent. Our analyses suggest the need for augmentation of models so that they account for such biased perceptions as well as mechanisms so that agents can detect and even actively work to correct this and similar biases of users.


Figure 3: Search duration difference is the total number of submitted settings during a participant's solo effort minus the total number of submitted settings while working with the AI helper.
The Role of Heuristics and Biases During Complex Choices with an AI Teammate

January 2023 · 90 Reads

Behavioral scientists have classically documented aversion to algorithmic decision aids, from simple linear models to AI. Sentiment, however, is changing and possibly accelerating AI helper usage. AI assistance is, arguably, most valuable when humans must make complex choices. We argue that classic experimental methods used to study heuristics and biases are insufficient for studying complex choices made with AI helpers. We adapted an experimental paradigm designed for studying complex choices in such contexts. We show that framing and anchoring effects impact how people work with an AI helper and are predictive of choice outcomes. The evidence suggests that some participants, particularly those in a loss frame, put too much faith in the AI helper and experienced worse choice outcomes by doing so. The paradigm also generates computational modeling-friendly data allowing future studies of human-AI decision making.


Citations (81)


... Once inside, such malware could disrupt key operations, alter traffic signals, or corrupt routing algorithms, leading to widespread congestion and unsafe road conditions. These attacks often exploit cognitive vulnerabilities inherent in human decision-making [33]. One such vulnerability is base rate neglect, where individuals fail to consider the base rate, or statistical probability, of an event when making decisions. ...
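
For concreteness, base rate neglect is the failure to weight P(intrusion) when judging P(intrusion | alert). The toy numbers below are purely illustrative (not from the cited work) and show how a seemingly accurate alert is still dominated by false positives when intrusions are rare:

```python
# Bayes' rule with illustrative numbers: ignoring the base rate makes the alert
# look far more diagnostic than it actually is.
base_rate = 0.001          # P(intrusion)
hit_rate = 0.99            # P(alert | intrusion)
false_alarm = 0.05         # P(alert | no intrusion)

p_alert = hit_rate * base_rate + false_alarm * (1 - base_rate)
p_intrusion_given_alert = hit_rate * base_rate / p_alert
print(f"P(intrusion | alert) = {p_intrusion_given_alert:.3f}")  # ~0.019, not 0.99
```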

Reference:

Game-Theoretic Foundations for Cyber Resilience Against Deceptive Information Attacks in Intelligent Transportation Systems
PsybORG+: Modeling and Simulation for Detecting Cognitive Biases in Advanced Persistent Threats
  • Citing Conference Paper
  • October 2024

... It is possible that the ordering of robot advisors mattered; however, the data are insufficient to specify a hierarchical model that would uncover such a feature. Study 3 participants (n = 148, Amazon Mechanical Turk) completed one mission that covered 45 buildings with an RL-based (RL: reinforcement learning) robot in a fully between-subjects design [31,15,14]. The treatment conditions held the robot's ability constant but varied how it explained its recommendations: no explanation, explanation of decision, explanation of decision and learning. ...

My Actions Speak Louder Than Your Words: When User Behavior Predicts Their Beliefs About Agents’ Attributes
  • Citing Chapter
  • July 2023

Lecture Notes in Computer Science

... By implementing these measures, AI applications can make more reliable predictions and achieve better results in manufacturing [23]. In the context of Indonesia's manufacturing landscape, it is crucial to consider regional nuances that may affect data quality. ...

The Role of Heuristics and Biases during Complex Choices with an AI Teammate
  • Citing Article
  • June 2023

Proceedings of the AAAI Conference on Artificial Intelligence

... Subjects may be biased towards making more or less exploratory decisions by their internal state, properties of the environment they are in, and their personality (Addicott et al., 2017). For instance, the current aspiration level of a subject, as it might be manipulated by the presence of explicit anchors, has been shown to influence how much is explored (Gurney et al., 2023). Similarly, the scarcity or richness of the current environment, as well as of environments they experience regularly, impacts people's exploration-exploitation decisions (Chang et al., 2022). ...

The Role of Heuristics and Biases in Complex Choices

... Over the past several decades, the study of theory of mind (ToM) has garnered increasing interest from a diverse range of research fields, as evidenced by numerous studies (Fu et al., 2023; Quesque & Rossetti, 2020; Schaafsma et al., 2015), and its theoretical framework has been debated extensively by philosophers, psychologists, and neuroscientists. Moreover, ToM research has recently been applied to computational modeling and artificial intelligence (AI) to better understand and simulate the mental states and behaviors of others (Gurney et al., 2021; Ho et al., 2022; Langley et al., 2022; Ong et al., 2020; Rusch et al., 2020). Additionally, the economics of emotions (EoE) has rapidly advanced, with the aim of rendering the human mind programmable in tandem with AI progression (Kadokawa, 2023). ...

Operationalizing Theories of Theory of Mind: A Survey

Lecture Notes in Computer Science

... This is typically done by conducting user studies to evaluate the impact of different explanations on human factors. In these studies, different aspects such as the content, length, amount of information, completeness, and structure of explanations are typically analyzed by exposing participants to different types of explanations (e.g., [6], [30], [31]). This comparative approach makes it possible to assess how well explanations serve their intended purposes. ...

Explainable Reinforcement Learning in Human-Robot Teams: The Impact of Decision-Tree Explanations on Transparency
  • Citing Conference Paper
  • August 2022

... Humans are adept at integrating multiple modalities for this task, including contextual information. Given its importance in human-human interactions, computational ToM has recently emerged as a new frontier in developing intelligent computational agents that can understand and collaborate with humans [18]. Despite a surge of papers on this new task, deep learning methods for predicting other agents' mental states have mainly been studied in constrained artificial environments [3,38,32,15,40,39,30,31,8]. ...

Robots with Theory of Mind for Humans: A Survey *
  • Citing Conference Paper
  • August 2022

... Moreover, some research considers agents' chain of suspicion, that is, agents can prejudge other agents' actions to optimize their own action policies. Pynadath et al. (2023) used a recursive model to generate expectations of other agents' behavior and update beliefs about their actions. Applying this model to ABM, they simulated human behavior and social dynamics during hurricane disasters. ...

Disaster world: Decision-theoretic agents for simulating population responses to hurricanes

Computational and Mathematical Organization Theory

... One solution for mitigating the damage done by these mistakes, one that can also lead to better-calibrated trust, is for an agent to ensure that its reasoning is communicated in a way that meets the varying information needs and preferences of its human teammates. Sufficient transparency in communication will allow people to know when they should comply with an AI's recommendation and when they should not, i.e., develop better-calibrated trust, which is an important precursor to good compliance behavior in human-AI interactions [21,8,44,33,2]. ...

Human-agent bidirectional transparency
  • Citing Chapter
  • January 2021

... Trust calibration is timely for human-agent interaction given the adoption of agents within collaborative team environments in industries like education [37,51,67], healthcare [20,24,39], and defense [59,60]. Notably, trust calibration models have been employed in Human-Robot Interaction (HRI) contexts to improve team performance [14,15,59,60], as well as in agent-agent [53] and human-agent interaction [1,48]. ...

A Markovian Method for Predicting Trust Behavior in Human-Agent Interaction
  • Citing Conference Paper
  • September 2019