Conference Paper

Robot Theory of Mind with Reverse Psychology

Authors:

Abstract

Theory of mind (ToM) corresponds to the human ability to infer other people's desires, beliefs, and intentions. Acquiring ToM skills is crucial for natural interaction between robots and humans. A core component of ToM is the ability to attribute false beliefs. In this paper, a collaborative robot tries to assist a human partner who plays a trust-based card game against another human. The robot infers its partner's trust in the robot's decision system via reinforcement learning. Robot ToM here refers to the ability to implicitly anticipate the human collaborator's strategy and inject that prediction into the robot's optimal decision model for better team performance. In our experiments, the robot learns when its human partner does not trust the robot and adjusts the recommendations in its optimal policy accordingly to keep the team effective. The interesting finding is that the optimal robot policy attempts to use reverse psychology on its human collaborator when trust is low. This finding provides guidance for the study of trustworthy robot decision models with a human partner in the loop.
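The abstract leaves the decision model unspecified, but the setup it describes (a robot that learns a recommendation policy over a state that includes an estimate of the partner's trust) can be pictured roughly as below. This is a minimal, hypothetical sketch only: the discretised trust levels, game states, actions, and hyperparameters are assumptions, not the authors' implementation.

```python
# Illustrative sketch only: tabular Q-learning over a joint state that
# includes a coarse estimate of the human partner's trust. All names
# (TRUST_LEVELS, ACTIONS, etc.) are hypothetical, not from the paper.
import numpy as np

TRUST_LEVELS = ["low", "high"]          # discretised trust estimate
GAME_STATES = range(10)                 # abstract card-game states
ACTIONS = ["recommend_A", "recommend_B"]

Q = np.zeros((len(TRUST_LEVELS), len(GAME_STATES), len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

def choose_action(t, s):
    """Epsilon-greedy recommendation given trust level t and game state s."""
    if rng.random() < eps:
        return int(rng.integers(len(ACTIONS)))
    return int(np.argmax(Q[t, s]))

def update(t, s, a, reward, t_next, s_next):
    """Standard Q-learning update; the team reward already reflects whether
    the human followed or ignored the recommendation."""
    target = reward + gamma * np.max(Q[t_next, s_next])
    Q[t, s, a] += alpha * (target - Q[t, s, a])
```

Under a formulation like this, the "reverse psychology" behaviour would show up as the learned policy recommending the option it does not want chosen whenever the trust estimate is low.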


References
Article
Behavior modeling is an essential cognitive ability that underlies many aspects of human and animal social behavior (Watson in Psychol Rev 20:158, 1913), and an ability we would like to endow robots with. Most studies of machine behavior modelling, however, rely on symbolic or selected parametric sensory inputs and built-in knowledge relevant to a given task. Here, we propose that an observer can model the behavior of an actor through visual processing alone, without any prior symbolic information and assumptions about relevant inputs. To test this hypothesis, we designed a non-verbal non-symbolic robotic experiment in which an observer must visualize future plans of an actor robot, based only on an image depicting the initial scene of the actor robot. We found that an AI-observer is able to visualize the future plans of the actor with 98.5% success across four different activities, even when the activity is not known a priori. We hypothesize that such visual behavior modeling is an essential cognitive ability that will allow machines to understand and coordinate with surrounding agents, while sidestepping the notorious symbol grounding problem. Through a false-belief test, we suggest that this approach may be a precursor to Theory of Mind, one of the distinguishing hallmarks of primate social cognition.
Article
If we are to build human-like robots that can interact naturally with people, our robots must know not only about the properties of objects but also the properties of animate agents in the world. One of the fundamental social skills for humans is the attribution of beliefs, goals, and desires to other people. This set of skills has often been called a theory of mind. This paper presents the theories of Leslie (1994) and Baron-Cohen (1995) on the development of theory of mind in human children and discusses the potential application of both of these theories to building robots with similar capabilities. Initial implementation details and basic skills (such as finding faces and eyes and distinguishing animate from inanimate stimuli) are introduced. I further speculate on the usefulness of a robotic implementation in evaluating and comparing these two models.
Article
To facilitate effective human-robot interaction (HRI), trust-aware HRI has been proposed, wherein the robotic agent explicitly considers the human's trust during its planning and decision making. The success of trust-aware HRI depends on the specification of a trust dynamics model and a trust-behavior model. In this study, we proposed one novel trust-behavior model, namely the reverse psychology model, and compared it against the commonly used disuse model. We examined how the two models affect the robot's optimal policy and the human-robot team performance. Results indicate that the robot will deliberately 'manipulate' the human's trust under the reverse psychology model. To correct this 'manipulative' behavior, we proposed a trust-seeking reward function that facilitates trust establishment without significantly sacrificing the team performance.
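For intuition, the two trust-behavior models contrasted above can be caricatured as different mappings from trust to how the human responds to a recommendation. The functions below are an illustrative guess at those mappings, not the formulation used in the study.

```python
# Hand-wavy sketch of the disuse vs. reverse-psychology trust-behaviour
# models; the exact functional forms in the article may differ.
import numpy as np

def disuse_model(trust, recommendation, own_choice, rng):
    """With probability equal to trust the human follows the robot,
    otherwise the human ignores it and acts on their own judgement."""
    return recommendation if rng.random() < trust else own_choice

def reverse_psychology_model(trust, recommendation, alternatives, rng):
    """The human follows the advice when trust is high, but actively
    picks a different option (the opposite) when trust is low."""
    if rng.random() < trust:
        return recommendation
    others = [a for a in alternatives if a != recommendation]
    return rng.choice(others)

rng = np.random.default_rng(0)
choice = reverse_psychology_model(0.2, "card_A", ["card_A", "card_B"], rng)
```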
Article
Trust in autonomy is essential for effective human-robot collaboration and user adoption of autonomous systems such as robot assistants. This article introduces a computational model that integrates trust into robot decision making. Specifically, we learn from data a partially observable Markov decision process (POMDP) with human trust as a latent variable. The trust-POMDP model provides a principled approach for the robot to (i) infer the trust of a human teammate through interaction, (ii) reason about the effect of its own actions on human trust, and (iii) choose actions that maximize team performance over the long term. We validated the model through human subject experiments on a table clearing task in simulation (201 participants) and with a real robot (20 participants). In our studies, the robot builds human trust by manipulating low-risk objects first. Interestingly, the robot sometimes fails intentionally to modulate human trust and achieve the best team performance. These results show that the trust-POMDP calibrates trust to improve human-robot team performance over the long term. Further, they highlight that maximizing trust alone does not always lead to the best performance.
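The core mechanism described here, maintaining a belief over a latent trust variable and updating it from the human's observed behavior, amounts to a Bayesian filter. The snippet below is a toy sketch of such an update with a discretised trust state; the transition and observation probabilities are invented for illustration and are not the learned model from the article.

```python
# Minimal belief update over a discretised latent trust variable.
import numpy as np

trust_values = np.array([0.0, 0.5, 1.0])      # discretised latent trust
belief = np.full(3, 1.0 / 3.0)                # uniform prior over trust

# P(trust' | trust, robot_action): illustrative dynamics for one action
T = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])

# P(human follows recommendation | trust'): illustrative observation model
O_follow = np.array([0.2, 0.6, 0.9])

def belief_update(belief, followed):
    predicted = T.T @ belief                  # predict trust after the action
    likelihood = O_follow if followed else 1.0 - O_follow
    posterior = likelihood * predicted        # condition on observed behaviour
    return posterior / posterior.sum()

belief = belief_update(belief, followed=True)
```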
Article
We review the idea that Theory of Mind—our ability to reason about other people's mental states—can be formalized as inverse reinforcement learning. Under this framework, expectations about how mental states produce behavior are captured in a reinforcement learning (RL) model. Predicting other people’s actions is achieved by simulating a RL model with the hypothesized beliefs and desires, while mental-state inference is achieved by inverting this model. Although many advances in inverse reinforcement learning (IRL) did not have human Theory of Mind in mind, here we focus on what they reveal when conceptualized as cognitive theories. We discuss landmark successes of IRL, and key challenges in building human-like Theory of Mind.
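The inversion described here can be illustrated with a toy example: assume the observed person is Boltzmann-rational under one of several candidate reward functions (desires), then compute a posterior over those candidates from observed actions. Everything in the sketch (the hypotheses, Q-values, and temperature) is made up for illustration.

```python
# Toy "ToM as inverse RL": infer which hypothesised desire best explains
# the observed actions of a noisily rational actor.
import numpy as np

def boltzmann_policy(q_values, beta=2.0):
    """Action distribution of a Boltzmann-rational agent."""
    z = beta * q_values
    z -= z.max()
    p = np.exp(z)
    return p / p.sum()

# Q(s, a) under two hypothesised desires (e.g. "wants coffee" vs "wants tea")
Q_hypotheses = {
    "wants_coffee": np.array([2.0, 0.5]),
    "wants_tea":    np.array([0.3, 1.8]),
}

observed_actions = [0, 0, 1]          # indices of the actions the actor took
prior = {h: 0.5 for h in Q_hypotheses}

# Posterior over mental states: P(h | actions) proportional to
# P(h) times the product of P(a | h) over observed actions.
posterior = {}
for h, q in Q_hypotheses.items():
    lik = np.prod([boltzmann_policy(q)[a] for a in observed_actions])
    posterior[h] = prior[h] * lik
norm = sum(posterior.values())
posterior = {h: p / norm for h, p in posterior.items()}
print(posterior)
```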
Article
Theory of mind (ToM; Premack & Woodruff, 1978) broadly refers to humans' ability to represent the mental states of others, including their desires, beliefs, and intentions. We propose to train a machine to build such models too. We design a Theory of Mind neural network -- a ToMnet -- which uses meta-learning to build models of the agents it encounters, from observations of their behaviour alone. Through this process, it acquires a strong prior model for agents' behaviour, as well as the ability to bootstrap to richer predictions about agents' characteristics and mental states using only a small number of behavioural observations. We apply the ToMnet to agents behaving in simple gridworld environments, showing that it learns to model random, algorithmic, and deep reinforcement learning agents from varied populations, and that it passes classic ToM tasks such as the "Sally-Anne" test (Wimmer & Perner, 1983; Baron-Cohen et al., 1985) of recognising that others can hold false beliefs about the world. We argue that this system -- which autonomously learns how to model other agents in its world -- is an important step forward for developing multi-agent AI systems, for building intermediating technology for machine-human interaction, and for advancing the progress on interpretable AI.
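As a rough sketch of the idea (not the authors' architecture), a ToMnet-style model can be split into a network that embeds an agent's past behaviour into a "character" vector and a network that predicts the agent's next action from that vector plus the current observation. All layer sizes and names below are placeholders.

```python
# Compact, hypothetical ToMnet-style sketch in PyTorch.
import torch
import torch.nn as nn

class CharacterNet(nn.Module):
    """Summarise past (observation, action) steps into a character embedding."""
    def __init__(self, obs_dim, n_actions, embed_dim=8):
        super().__init__()
        self.rnn = nn.GRU(obs_dim + n_actions, embed_dim, batch_first=True)

    def forward(self, past_traj):             # (batch, time, obs_dim + n_actions)
        _, h = self.rnn(past_traj)
        return h.squeeze(0)                   # (batch, embed_dim)

class PredictionNet(nn.Module):
    """Predict the agent's next action from its character and current state."""
    def __init__(self, obs_dim, n_actions, embed_dim=8):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(obs_dim + embed_dim, 32), nn.ReLU(),
            nn.Linear(32, n_actions),
        )

    def forward(self, obs, char_embedding):
        return self.head(torch.cat([obs, char_embedding], dim=-1))  # action logits

# Example shapes: 5 past steps of a 4-d observation with 3 possible actions
char_net, pred_net = CharacterNet(4, 3), PredictionNet(4, 3)
e_char = char_net(torch.zeros(1, 5, 4 + 3))
logits = pred_net(torch.zeros(1, 4), e_char)
```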
A theory of the child's theory of mind
  • Jerry A Fodor
Jerry A Fodor. 1992. A theory of the child's theory of mind. Cognition (1992).
Conservative Q-learning for offline reinforcement learning
  • Aviral Kumar
  • Aurick Zhou
  • George Tucker
  • Sergey Levine
Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. 2020. Conservative Q-learning for offline reinforcement learning. Advances in Neural Information Processing Systems 33 (2020), 1179-1191.
d3rlpy: An Offline Deep Reinforcement Learning Library
  • Takuma Seno
  • Michita Imai
Takuma Seno and Michita Imai. 2022. d3rlpy: An Offline Deep Reinforcement Learning Library. Journal of Machine Learning Research 23, 315 (2022), 1-20. http://jmlr.org/papers/v23/22-0017.html