What’s on Your Virtual Mind?
Mind Perception in Human-Agent Negotiations
Minha Lee
m.lee@tue.nl
Human-Technology Interaction
Eindhoven University of Technology
Eindhoven, North Brabant, the Netherlands
Gale Lucas, Johnathan Mell, Emmanuel
Johnson, Jonathan Gratch
{lucas,mell,ejohnson,gratch}@ict.usc.edu
Institute for Creative Technologies
University of Southern California, Playa Vista, CA, USA
ABSTRACT
Recent research shows that how we respond to other social actors depends on what sort of mind we ascribe to them. In this article we examine how perceptions of a virtual agent's mind shape behavior in human-agent negotiations. We varied the descriptions and communicative behavior of virtual agents on two dimensions according to the mind perception theory: agency (cognitive aptitude) and patiency (affective aptitude). Participants then engaged in negotiations with the different agents. People scored more points and engaged in shorter negotiations with agents described to be cognitively intelligent, and got lower points and had longer negotiations with agents that were described to be cognitively unintelligent. Accordingly, agents described as having low agency ended up earning more points than those with high agency. Within the negotiations themselves, participants sent more happy and surprise emojis and emotionally valenced messages to agents described to be emotional. This high degree of described patiency also affected perceptions of the agent's moral standing and relatability. In short, manipulating the perceived mind of agents affects how people negotiate with them. We discuss these results, which show that agents are perceived not only as social actors, but as intentional actors, through negotiations.
CCS CONCEPTS
• Human-centered computing → HCI theory, concepts and models; Empirical studies in HCI.
KEYWORDS
Virtual agent, mind perception theory, theory of mind, human-
agent negotiation, IAGO negotiation platform
ACM Reference Format:
Minha Lee, Gale Lucas, Johnathan Mell, Emmanuel Johnson, and Jonathan Gratch. 2019. What's on Your Virtual Mind? Mind Perception in Human-Agent Negotiations. In ACM International Conference on Intelligent Virtual Agents (IVA '19), July 2–5, 2019, PARIS, France.
ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3308532.3329465
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
IVA ’19, July 2–5, 2019, PARIS, France
©2019 Association for Computing Machinery.
ACM ISBN 978-1-4503-6672-4/19/07. . . $15.00
https://doi.org/10.1145/3308532.3329465
1 INTRODUCTION
While philosophical explorations of what a mind is and how we perceive it have been an active area of inquiry, how to empirically test our perception of other minds, specifically those of technological entities, is a relatively new project that is becoming more and more relevant as a growing number of digital beings enter our everyday environments (see, e.g., Dennett [16]). The perception of another's mind is especially relevant to human-agent interactions since how we relate to an agent depends on how likely we are to attribute a mind to it—for instance, based on how we infer its social motivation [18]. According to the mind perception theory (MPT), the mind is assessed on two dimensions: agency, which encompasses cognition, and patiency, which encompasses emotions [23]. We designed different types of minds of virtual robot negotiators that varied along the dimensions described by MPT in order to see the resulting influence on human interactants' behavior in negotiations. Though whether an agent can realize its own theory of mind as well as others' minds is an important topic [31], far less attention has been paid to how agents designed to have minds affect the humans they interact with, which is the focus of our paper.
We motivate our research on two grounds: (1) negotiations presume higher-order theory of mind reasoning and are therefore fitting for empirically testing minds of various complexities [14], and (2) negotiators' ability to read and influence each other's minds depends on their mind perception, which can be observed through people's behavior towards agents that are systematically designed to have minds of various orders. Designing intelligent virtual negotiators can push the boundaries of AI via algorithmically-imbued agency [22]. To do so, these systems should be designed with the realization that what drives people's behavior is their perception of a machine's agency; human-agent negotiation research benefits from looking into how an agent's perceived agency impacts human negotiators. Negotiations are a robust context for exploring how the artificial minds of machines lead to divergent behaviors in humans.
Previous research has shown that the ascribed mind of an agent affects the outcome of a simple game. For example, in the ultimatum game, people gave more money to an agent based on perceived high agency and high patiency [11]. However, unlike [11], we attentively distinguished between the recognition of and control over emotions (agency) and the ability to feel emotion states (patiency), as per MPT [23], in our design; emotional expressions can reflect authentic feelings (patiency) or strategic motives (agency) in negotiations. We thus systematically manipulated artificial minds through agents' descriptions and dialog states to see the resulting influence in a more complex game than the ultimatum game: negotiation. Negotiations allow people to interactively perceive an agent's mind; how changes in perceived mind over time affect human-agent negotiations is the primary exploration underlying our study. Our results demonstrate that people earned higher points against an agent that appeared to have high agency, and participants did worse against and negotiated longer with an agent purported to have low agency. Agents with high patiency did not directly impact game points, but did affect people's behavior: people sent them more emojis and messages laced with emotional language. People's behavior thus suggests that the perceived mind is critical in seeing agents not just as social actors, but as intentional actors. To frame our study, we present relevant work on theory of mind, mind perception, and negotiation in relation to human-agent interactions, followed by our method and results. We then discuss the implications of our findings.
2 BACKGROUND
2.1 Theory of mind
The ability to attribute mental states to oneself and/or others is known as having a theory of mind [41]. The most commonly attributed mental state is intent [41]. This intentionality, or the directedness of mental processing towards some end, is presented as a hallmark of having a mind, yet a motley of mental states such as beliefs or desires adds more complexity to what a mind is [15, 16, 30]. In attributing intent to an agent, we attempt to predictively piece together what the agent wants or believes in order to make sense of who the agent is to ourselves [15]. One utilizes the theory of one's own mind as a requisite for recognizing other minds, even for non-human entities [18]. People thus have a tendency to be biased towards their own minds as a frame of reference when interacting with humans and agents [31].
Through the course of a shared activity, interactants can form a theory of each other's mind, which helps them find common ground [30]. At the same time, what one expresses to the other party does not need to accurately reflect one's actual intentions and is often conditional on environmental or situational demands [15]. This introduces different degrees of having a mind. The theory of mind at zero-order is to be self-aware (impute mental states to self), at first-order it is to be self- and other-aware (impute mental states to self and others), and at higher-order it is to use self- and other-awareness to modify behavioral outcomes (regulate mental states of self and others) [14]. Social actors can be ascribed minds of zero-order to higher-order, yet intentional actors often require higher-order minds, especially in cognitively challenging tasks like negotiation [14].
The mind perception theory (MPT) helps to systematically design minds of various orders and to empirically test the perception of artificial minds, which are key challenges in research. The mind is perceived on two continuous dimensions of agency and patiency [23]. Agency refers to the ability to plan, think, remember, to know right from wrong, etc.; these items assess how much control an agent has over its actions and feelings to behave intentionally [23]. Patiency is defined by the propensity to feel joy, pleasure, fear, etc. [23]. While we refer to patiency as affective capacity, it also includes biological states like hunger or pain as experiential factors [23]. To note, perceived agency and patiency are not independent of each other [23, 40]. People's assumptions about agency can drive perceptions of patiency, and vice versa; cognition and affect cannot be neatly separated [8]. More broadly, the MPT dimensions conceptually relate to the stereotype content model (SCM). SCM deals with interpersonal perceptions of social group members based on the two dimensions of competence (e.g., intelligent, competitive, confident) and warmth (e.g., friendly, good-natured, sincere) [19]. Competence items evoke agency and warmth items are reminiscent of patiency, though the aims of the two scales differ [28]. To generalize, items on agency and competence have more to do with cognitive reasoning while patiency and warmth relate to affective qualities.
MPT confers an entity with a perceived mind to be a moral agent, i.e., the doer of a moral/immoral deed, and a moral patient, i.e., the recipient of a moral/immoral deed [26]. Entities with minds can play either of the two roles to different degrees, although they are most likely to be typecast solely as a moral agent or a patient in a given scenario [26]. While moral agents and patients both can have moral standing (e.g., the standing to be protected from harm and to be treated with fairness and compassion), entities who act cruelly or cause harm are bestowed lowered moral standing as well as lowered agency [29]. Morally relevant acts can therefore influence the perceived intentionality of a moral agent, which makes these acts relevant to negotiations.
Between humans, our relations to others fulfill our "need to belong" [1]. And how we relate to non-human agents is informed by our human-human interactions [31]. Though people normally grant low intentionality and theory of mind to agents [23, 49], agents can be treated in a human-like social fashion [6, 39]. For example, people are willing to help out a computer that was previously helpful to them [20], punish agents that betray them [35], and grant personality traits to computers based on text-based chats [37]. Humans do not need to be ascribed higher-order minds to be treated socially, like when adults talk to newborns. Additionally, the belief that one is interacting with a mere machine can allow one to divulge more personally sensitive information to an agent than to a human, for a machine is not seen to be judgmental like a human [33, 36]. At the same time, when agents are made to look like humans, people apply certain stereotypes based on appearance, e.g., the perceived gender or race of virtual humans and robots affects people's behaviors toward them [3, 17, 44]. In sum, people may have preconceived beliefs that agents have lower-order minds compared to them, yet by treating agents as social actors, they apply certain social stereotypes such as gender- or race-related biases towards agents that have human-like appearances.
Machines may be treated differently when attributed with higher-order minds. When it comes to complex interactions that unfold over time, in which a machine's goals are unclear to human interactants, the focus shifts from machines as social actors to machines as intentional actors, incorporating the possibility that machines can be attributed with higher-order minds. Research suggests that agents can be perceived to have higher-order minds through various manipulations. For one, when an agent is given affective richness and portrayed as an emotional entity, it can be granted a human-like mind [25]. Besides emotions, the attribution of mind can arise from goal-directedness coupled with cognitive ability (a high degree of intentionality), which the agency dimension of MPT captures. In a study that asked participants to attribute intentionality to a robot, a computer, and a human, the task of object identification resulted in low intentionality attribution to both the robot and the computer compared to the human [32]. But higher intentionality was attributed to the robot, more so than the computer, when it practiced goal-driven gaze towards selective objects; when people were asked to observe an agent's gaze direction, perceived intentionality behind the agent's action increased, meaning that people's initial bias that agents do not have an intentional stance can be overridden based on manipulated context [32]. One context that is ripe for manipulating the perceived mind of an agent is negotiation.
2.2 Human-human vs. human-agent
negotiations
Negotiation is a process by which different parties come to an agreement when their interests and/or goals regarding mutually shared issues may not be initially aligned [7]. Negotiation may also involve joint decision-making with others when one cannot fulfill one's interests and/or goals without their involvement [47]. The concept of fairness as a component of morality [21] can be estimated in negotiations through measurable components, such as negotiation outcomes (e.g., points per player) or process measures (e.g., how many offers a player made to the opponent) [47]. Thus, self- and other-regard is inherent to negotiations, encompassing complex socio-psychological processes [46]. Negotiations therefore involve theory of mind reasoning; negotiators have to reason about each other's intentions, trade-offs, and outcomes in a cognitively taxing process [22]. Especially if negotiators have to cooperate and compete, such as during a mixed-motive negotiation, they often rely on a higher-order theory of mind [14]. Mixed-motive negotiations are pertinent scenarios for observing how players attempt to decipher and shape each other's intentions and beliefs, as players engage in higher-order mind perceiving and reasoning.
There are similarities and differences between human-human and human-agent negotiations, though more research is necessary for definitive comparisons. A similarity is that emotions expressed by players affect people's negotiation approach, be it with virtual negotiators [9] or human negotiators [4, 38]. An agent's expressed anger, regret, or joy (both facial and textual expressions) influences how human opponents play against it [9], extending the view that emotions in human-human negotiations reveal strategic intentions and influence outcomes [4, 38]. In addition, priming people's beliefs about the negotiation (emphasizing cooperation vs. exploitation at the start) impacts human-agent negotiations [13], echoing how the framing of a game in itself results in divergent outcomes in human-human negotiations [42]. Increasingly, agents are capable of using complex human-like strategies in negotiation, and the perceived gap between humans and agents may continue to shrink [2].
However, people still have preconceptions about agents' lack of a human-like mind in many negotiation scenarios. Specifically, a human opponent is granted agency by default, but a machine's agency can be independent of or dependent on a human actor; the belief about the agent (autonomous vs. human-controlled agent) can result in different tactics adopted by human players [9, 12]. In another study, when machines with higher-order minds negotiated with people, both parties ended up with higher scores (a larger joint outcome) when machines made the first bid, but not when humans made the first offer [14]. In simple games, people are likely to allocate more goods to agents that have a high degree of human-like mind, based on perceived agency and patiency [11]. Thus, an agent's mind and a human player's perception of an agent's mind are crucial to how their negotiation unfolds [11]. We focused on the latter: the perception of an agent's mind through negotiations as an interactive context.
2.3 Research question
The research question of our study is as follows: in what ways do agency and patiency, manipulated via the dialog states and descriptions of a virtual agent that negotiates with a human, influence the negotiation outcome and process? We expected that agency would drive participants to engage more with the agent, which would (1) increase the joint outcome of the negotiation (regardless of who wins) and (2) cause participants to seek more game-relevant information from the agent (send more messages on preferences and offers to the agent). A higher joint outcome implies greater cognitive effort, for it requires players to use higher-order theory of mind reasoning to increase the size of the "pie" for mutually beneficial ends. We hypothesized that patiency would increase other-regard; participants would (1) grant the agent fairer allocations and (2) send greater numbers of emotionally valenced messages. Agency and patiency were assumed to both contribute to negotiation outcomes and processes [11]. In addition, we were interested in whether or not the MPT dimensions relate to competence and warmth (the corresponding SCM dimensions), as well as to participants' judgments of the agent's moral standing and relatability. We looked at adjacent concepts such as SCM and moral standing to more holistically understand how the minds of artificial agents are viewed.
3 METHODS
3.1 Design
Figure 1: Negotiation interface
Our agent was a virtual robot that was simple in appearance (Figure 1), without any gender, race, or other highly anthropomorphic traits that may trigger people's biases [3, 17, 44], which helped to drive the perception of its mind based on its behavior rather than its looks. We used a configurable negotiation platform (IAGO) that allows for designing custom negotiation experiments. It features emotional communication (participants can click on different emojis to send to an agent; see Figure 1), as well as customizable agents (e.g., agents' pictures can have different emotional expressions as reactions to people's behavior) [34].

Low-Agency Low-Patiency
  Description: The robot does not have a complex disposition to think, feel, and reflect.
  Dialog: "Preparing offer." "Affirmative." "Does not compute."

Low-Agency High-Patiency
  Description: The robot has a complex disposition to feel, but cannot think or reflect.
  Dialog: "I like this!" "Yay! I'm happy." "Oh...I'm sad..."

High-Agency Low-Patiency
  Description: The robot has a complex disposition to think and reflect, but cannot feel.
  Dialog: "This is the most logical offer." "I inferred that you would accept this deal." "You seem to be upset."

High-Agency High-Patiency
  Description: The robot has a complex disposition to think, feel, and reflect.
  Dialog: "I'm going to make this offer." "I feel so good about negotiating with you!" "Oh...Your sadness makes me feel sad..."

Table 1: Agent types and excerpts from their descriptions and dialogs
We employed a 2x2 between-participants factorial design with High vs. Low Agency and High vs. Low Patiency dimensions. Agency and patiency were manipulated in two ways. Descriptions of the agent were presented before the negotiation, and shortened versions of the descriptions appeared next to the picture of the agent (Figure 1) during the experiment. We also modified the dialog states of the agent, i.e., how it "talked" (Table 1 lists excerpts). The sentence structure of our descriptions was modeled after previous research on moral standing [29, 40]. We used the items of the MPT scale [23] to construct the content of the descriptions and the dialogs. To illustrate, one agency item, "the robot appears to be capable of understanding how others are feeling", was translated into the agent having an awareness of the participant's emotion states during the negotiation; e.g., a "sad" emoji from the participant resulted in a "you seem to be upset" message from the high-agency low-patiency agent while the agent's expression remained neutral (Figure 1). This suggests high agency, but does not directly translate to a complete lack of emotional capacity (the agent is aware of the other player's emotion states), even though the description stated it "cannot feel". We attempted to imbue the high-agency low-patiency agent with an awareness of others' emotions (e.g., "you seem to be upset") whilst not being emotionally expressive itself, which are two different, but often conflated, design elements of affective virtual agents. In contrast, the low-agency low-patiency agent did not use emotional language or expressions (static neutral face) and always responded to participants' emojis with the statement "does not compute". Hence, unlike prior work [11], our agency and patiency manipulation separated an agent's awareness of displayed emotions (agency) from actually feeling emotions (patiency). We imbued agency and patiency features into agents' descriptions and dialogs that occur over time in a negotiation (Table 1), which is how we carefully manipulated the mind dimensions according to MPT (in contrast to [11]).
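To make the manipulation concrete, the sketch below shows how the four conditions could key an agent's canned reply to a participant's "sad" emoji. The dialog excerpts come from Table 1; the data structure and function names are hypothetical, and the actual agents were implemented on the IAGO platform rather than in Python.

    # Illustrative sketch (not the IAGO implementation): how the 2x2 condition
    # could select an agent's canned reply to a participant's "sad" emoji.
    # Dialog excerpts are taken from Table 1; names here are hypothetical.
    CONDITIONS = {
        ("low", "low"):   {"description": "The robot does not have a complex disposition to think, feel, and reflect.",
                           "sad_reply": "Does not compute."},
        ("low", "high"):  {"description": "The robot has a complex disposition to feel, but cannot think or reflect.",
                           "sad_reply": "Oh...I'm sad..."},
        ("high", "low"):  {"description": "The robot has a complex disposition to think and reflect, but cannot feel.",
                           "sad_reply": "You seem to be upset."},
        ("high", "high"): {"description": "The robot has a complex disposition to think, feel, and reflect.",
                           "sad_reply": "Oh...Your sadness makes me feel sad..."},
    }

    def reply_to_sad_emoji(agency: str, patiency: str) -> str:
        """Return the canned dialog line sent when the participant clicks the sad emoji."""
        return CONDITIONS[(agency, patiency)]["sad_reply"]

    if __name__ == "__main__":
        print(reply_to_sad_emoji("high", "low"))  # -> "You seem to be upset."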
We piloted our manipulation (descriptions and dialogs) before the main experiment. The agency dialogs significantly affected both perceived agency, F(1, 308) = 4.95, p = .027, and perceived patiency, F(1, 308) = 7.39, p = .007, with no interaction, F(1, 308) = .348, p = .556. The patiency dialogs significantly affected perceived patiency, F(1, 308) = 12.20, p = .001, but not perceived agency, F(1, 308) = .783, p = .377, with no interaction, F(1, 308) = .104, p = .748. The agency descriptions significantly affected perceived agency, F(1, 249) = 42.09, p < .001, and perceived patiency, F(1, 249) = 29.98, p < .001, with no interaction, F(1, 249) = 1.31, p = .254. The patiency descriptions significantly affected both perceived agency, F(1, 249) = 17.49, p < .001, and perceived patiency, F(1, 249) = 59.86, p < .001, with a significant interaction, F(1, 249) = 5.78, p = .017. We concluded that the descriptions and dialogs significantly affected the corresponding perceived dimensions, even if they were not perfectly orthogonal. In fact, Gray et al.'s original MPT data showed a high correlation between the two dimensions at r(11) = .90, p < .001 [23, 40]. We therefore proceeded to use the dialogs and descriptions in the main experiment based on our pilot study.
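The pilot and main analyses are standard 2x2 between-subjects ANOVAs. Below is a minimal sketch of how such an analysis could be run in Python with statsmodels; the CSV file and column names (agency, patiency, perceived_agency) are assumptions for illustration, not the authors' actual analysis script.

    # A minimal sketch of a 2x2 between-subjects ANOVA on a perceived-mind rating,
    # assuming one row per participant with (hypothetical) columns 'agency' and
    # 'patiency' coded "low"/"high" and a numeric 'perceived_agency' rating.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    df = pd.read_csv("pilot_ratings.csv")  # hypothetical file name

    # Two main effects and their interaction, Type II sums of squares.
    model = ols("perceived_agency ~ C(agency) * C(patiency)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))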
As a reminder, only the dialogs and descriptions differed between agents (Table 1). In all negotiations, there were 7 clocks, 5 crates of records, 5 paintings, and 5 lamps, with different values per item per player for records and lamps (Table 2). All agents began the negotiation by proposing the same starting offer (Table 3). The negotiation structure was partially integrative and partially distributive, meaning that half of the items were equally valuable to both players (distributive) while the other half had different values for the players (integrative). This allows players to potentially "grow the pie" in a cooperative fashion through in-game communication while still playing competitively. Before the negotiation, participants were informed only about their own preferences. They were told prior to the experiment that the one person who earned the highest points against the agent would get $10 as a bonus prize.
          Clocks   Records   Paintings   Lamps
Robot     4        1         2           3
Human     4        3         2           1

Table 2: Points per item
All agents' negotiation strategy was based on the minimax principle of minimizing the maximal potential loss [34]; the agent adjusted its offers if the participant communicated his/her preferences, and strove for fair offers while rejecting unfair deals. The agent did not know participants' preferences, but assumed an integrative structure. The agent made a very lopsided first offer (as a form of "anchoring"), as shown in Table 3: it took almost all clocks (equally the most valuable item for both players), allocated more lamps to itself (more valuable for itself), gave more records to the participant (more valuable for the participant), and distributed the paintings equally (equally valuable item).
            Clocks   Records   Paintings   Lamps   Points
Robot       6 * 4    0 * 1     2 * 2       4 * 3   40
Undecided   1        1         1           1       -
Human       0 * 4    4 * 3     2 * 2       0 * 1   16

Table 3: Starting offer, shown as items * points per item = total points
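For concreteness, the arithmetic behind Tables 2 and 3 can be checked with a few lines of code. This is only an illustration (not part of the IAGO platform); the offer_value helper and the dictionary layout are assumptions, while the point values and allocations come from the tables above.

    # Score an allocation of items for each player using the point values in Table 2.
    POINTS = {
        "robot": {"clocks": 4, "records": 1, "paintings": 2, "lamps": 3},
        "human": {"clocks": 4, "records": 3, "paintings": 2, "lamps": 1},
    }

    def offer_value(player: str, allocation: dict) -> int:
        """Sum of (items received * points per item) for one player."""
        return sum(POINTS[player][item] * count for item, count in allocation.items())

    # Starting offer (Table 3); one of each item is left undecided.
    robot_share = {"clocks": 6, "records": 0, "paintings": 2, "lamps": 4}
    human_share = {"clocks": 0, "records": 4, "paintings": 2, "lamps": 0}

    print(offer_value("robot", robot_share))  # 40
    print(offer_value("human", human_share))  # 16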
3.2 Participants
226 participants residing in the U.S. were recruited on Amazon Mechanical Turk. We had 135 men (59.7%), 90 women, and 1 participant of undisclosed gender. Participants were all over 18 years of age; 53.5% were between the ages of 25 and 34 (121 participants). Of the remaining participants, 17 were between 18-24 years of age, 47 between 35-44, 26 between 45-54, 13 between 55-64, and 2 between 65 and 74. 87.2% identified as White/Caucasian (197 participants); 10 identified as Black/African American, 6 as Native American/American Indian, 14 as Asian American, and 1 as Black/African and Asian American. 60.6% had some college education or above.
3.3 Procedure and measurements
Participants got a link to the survey on Amazon Mechanical Turk, which first contained the informed consent form, questions on participants' current emotion states, and demographic information. Participants then read the description of an agent based on their randomly assigned condition (Table 1 shows the four conditions) and answered attention check questions about the description. After that, they read the instructions for the negotiation task, followed by additional attention check questions about the task, which they had to pass to go to the actual negotiation interface. They had up to 6 minutes to engage in a negotiation over four different goods (Table 2), and a count-down of the remaining time was displayed on the interface (Figure 1). Upon completion of the negotiation, participants finished the second part of the survey containing our measurements.

We deployed the following measurements: MPT (agency and patiency) [23], SCM (competence and warmth) [19], the moral standing scale [29, 40], emotion states [10, 27, 45], the moral identity questionnaire [5], and the inclusion of other in self (IOS) scale [1] as a measure of relatability. We asked additional questions on whether or not participants made concessions to the agent and whether the agent did anything unexpected. We only report relevant measures in our results. Participants were compensated $3 for their time, based on an estimate of 30 minutes to finish the entire survey and negotiation. One participant was randomly selected and awarded the $10 bonus prize after the experiment was completed.
4 RESULTS
4.1 Manipulation check
Both of our experimental manipulations affected perceived agency; that is, there was both a significant main effect of agency (F(1, 222) = 35.68, p < .001) and a significant main effect of patiency (F(1, 222) = 53.42, p < .001) on perceived agency, whereas the interaction between agency and patiency did not approach significance (F(1, 222) = .60, p = .44). Serving as a manipulation check, participants perceived lower agency for the agent that could purportedly not reason (M = 2.89, SE = .14) than for the agent described as being able to reason (M = 4.01, SE = .13). However, participants also rated the agent as lower in agency when it could not feel (M = 2.77, SE = .13) than when the agent was described as being able to feel (M = 4.14, SE = .13). In contrast, only manipulated patiency significantly affected perceived patiency (F(1, 222) = 71.24, p < .001); the effect of agency on perceived patiency only approached significance (F(1, 222) = 2.57, p = .11), and the interaction did not approach significance (F(1, 222) = .001, p = .99). Participants rated the agent as lower in patiency when it could not feel (M = 1.88, SE = .13) than when the agent was described as being able to feel (M = 3.44, SE = .13). The manipulations thus showed the same trend as in our pilot study. As aforementioned, the agency and patiency dimensions were highly correlated in the original MPT study [23, 40].
4.2 Main analysis
We next looked into negotiation outcomes. For user points, there was a significant main effect of agency (F(1, 143) = 4.35, p = .04); participants got more in the negotiation when the agent was described as being able to reason (M = 28.825, SE = .67) than when the agent was described as not being able to reason (M = 26.69, SE = .77). No other effects approached significance (Fs < .50, ps > .48). For agent points, there was also a significant main effect of agency (F(1, 143) = 6.68, p = .01); agents got less in the negotiation when they were described as being able to reason (M = 34.06, SE = .76) than when the agent was described as not being able to reason (M = 37.05, SE = .87). No other effects approached significance (Fs < .23, ps > .63). The positive effect of agency on user points and the negative effect of agency on agent points cancelled out, such that the effect of agency on joint points was not significant (F(1, 143) = 1.66, p = .20); no other effects approached significance (Fs < .58, ps > .44). The effect of agency on the initial offer was also not significant (F(1, 143) = .49, p = .49); no other effects reached significance (Fs < 2.7, ps > .10).
Process measures capture how participants played against the agent and thus are also important elements of negotiations. There was a marginally significant effect of agency on game end time (F(1, 143) = 3.62, p = .059); participants took longer if the agent was described as not being able to reason (M = 296.88, SE = 13.36) than when the agent was described as being able to reason (M = 263.14, SE = 11.67). But this effect was driven entirely by the low-patiency condition, as per a significant interaction (F(1, 143) = 5.38, p = .02). The main effect of patiency did not approach significance (F < .01, p > .99). There was a parallel pattern for the number of rejected offers. We saw a significant effect of agency on the number of times users rejected offers (F(1, 143) = 9.50, p = .002); participants were more likely to reject an offer if the agent was described as not being able to reason (M = .72, SE = .11) than when the agent was described as being able to reason (M = .29, SE = .09). However, this effect was again driven entirely by the low-patiency condition, as per a significant interaction (F(1, 143) = 5.85, p = .02). The main effect of patiency did not reach significance (F < 2.32, p > .13).
Participants chose to display the happy emoji significantly more when the agent was described as being able to feel (M = 1.25, SE = .18; F(1, 143) = 8.14, p = .005) than when the agent was described as not being able to feel (M = .88, SE = .20). No other effects reached significance (Fs < 1.92, ps > .17). Likewise, participants also chose to display the surprise emoji significantly more when the agent was described as being able to feel (M = .47, SE = .07; F(1, 143) = 4.54, p = .04) than when the agent was described as not being able to feel (M = .25, SE = .08). No other effects reached significance (Fs < 1.60, ps > .21). No effects for any other emoji display reached significance (Fs < 1.95, ps > .17).

A few of the pre-set messages that participants could send to the agent through the UI showed significant effects. Participants chose to convey the message "it is important that we are both happy with an agreement" more when the agent was described as being able to feel (M = .36, SE = .06; F(1, 143) = 5.18, p = .02) than when the agent was described as not being able to feel (M = .16, SE = .07). No other effects approached significance (Fs < .03, ps > .85). The interaction between agency and patiency significantly affected how often participants chose to convey the message "I gave a little here; you give a little next time" (F(1, 143) = 5.18, p = .02). No other effects reached significance (Fs < 2.87, ps > .09). There was also a significant interaction between agency and patiency for the message "This is the last offer. Take it or leave it" (F(1, 143) = 3.88, p = .05). No other effects reached significance (Fs < .85, ps > .36). No effects for any other message options reached significance (Fs < 2.17, ps > .14). Other process-related measures were relevant but not significant: the effect of agency on the number of times users made offers, accepted offers, declared their preferred items, posed queries, and sent messages to the agent.
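As an illustration of how such process measures could be derived, the sketch below tallies emoji sends from a per-participant event log before they are submitted to ANOVAs like the one sketched in Section 3.1. The log format and field names here are hypothetical, not IAGO's.

    # Illustrative only: count per-participant emoji sends from a negotiation event log.
    from collections import Counter

    def tally_emojis(events):
        """Count emoji sends per emoji type for one participant's negotiation log."""
        return Counter(e["emoji"] for e in events if e["type"] == "emoji_send")

    log = [
        {"type": "emoji_send", "emoji": "happy"},
        {"type": "message", "text": "It is important that we are both happy with an agreement"},
        {"type": "emoji_send", "emoji": "surprise"},
    ]
    print(tally_emojis(log))  # Counter({'happy': 1, 'surprise': 1})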
4.3 Exploratory analysis
We examined the impact of the agency and patiency dimensions on competence, warmth, IOS (relatability), and moral standing. Both of our experimental manipulations affected perceived competence, meaning that there was both a significant main effect of agency (F(1, 222) = 1.30, p = .002) and a significant main effect of patiency (F(1, 222) = 19.20, p < .0001) on perceived competence, whereas the interaction between agency and patiency did not approach significance (F(1, 222) = .08, p = .77). Participants perceived the agent that purportedly could not reason as lower in competence (M = 3.13, SE = .09) than the agent described as being able to reason (M = 3.52, SE = .08). However, participants also rated the agent as lower in competence when it could not feel (M = 3.06, SE = .09) than when the agent was described as being able to feel (M = 3.59, SE = .09). Likewise, both of our experimental manipulations affected perceived warmth; there was a significant main effect of agency (F(1, 222) = 6.71, p = .01) and a significant main effect of patiency (F(1, 222) = 4.06, p < .001) on perceived warmth, whereas the interaction between agency and patiency did not approach significance (F(1, 222) = .03, p = .86). Participants rated the agent as lower in warmth when it could not feel (M = 2.33, SE = .10) than when the agent was described as being able to feel (M = 3.20, SE = .10). However, participants also perceived the agent that could purportedly not reason as lower in warmth (M = 2.59, SE = .10) than the agent described as being able to reason (M = 2.95, SE = .09).

Only manipulated patiency significantly affected psychological distance (IOS) from the agent (F(1, 222) = 29.1, p = .002); the effect of agency on IOS and the interaction did not reach significance (Fs < 1.16, ps > .28). Participants reported that they identified with the agent more when it was described as being able to feel (M = 2.86, SE = .16) and that the agent was more psychologically distant from them when it could not feel (M = 2.14, SE = .16). Only manipulated patiency significantly affected moral standing (F(1, 222) = 17.81, p < .00001); the effect of agency on moral standing and the interaction did not reach significance (Fs < 1.53, ps > .22). Participants rated the agent as lower in moral standing when it could not feel (M = 3.08, SE = .16) than when the agent was described as being able to feel (M = 4.03, SE = .16).
5 DISCUSSION
Our focus was on how people played against virtual agents that were designed to have divergent minds. The results on negotiation outcomes and processes, two paradigmatic measures in negotiation research [47], overall did not align with our hypotheses. Unlike previous findings that showed both agency and patiency to matter in game outcomes, e.g., the ultimatum game [11], we only noted a significant effect of agency, and in a different direction than anticipated. Agency did not contribute to greater joint outcomes. Rather, participants scored higher and played shorter games when the agent was portrayed as having high agency, and they scored lower via longer negotiations with low-agency agents. Joint outcomes hence did not differ significantly. Participants also did not seek more information from the high-agency agent. While game points did not hinge on patiency, people did use more emotive messages and emojis with a high-patiency agent.
Emotions matter in how people negotiate [4, 48]. Participants sent emojis and emotionally relevant messages to a high-patiency agent that was described as affective. Yet patiency in itself did not affect negotiation results; rather, it only affected a few process measures. Potentially there was more "noise" for people to interpret when they interacted with high-patiency agents—not only did they have to figure out the game mechanics in terms of item values, but people may have assumed that the agents' emotional capacity served a strategic purpose. Additional analyses demonstrated that patiency played a large part in the attribution of moral standing and relatability (IOS) to an agent. But how much participants identified with an agent and to what degree they thought the agent had moral standing did not align with how well they played against it.
When an agent that was described as less cognitively intelligent (low agency) interacted with participants in a cognitively taxing task (negotiation over goods), participants' assumed "winning" strategy could have drifted from point-based calculations as time passed, or they may have assumed from the start that the game was not just about item points. Human-like qualities such as an agent's emotions, moral standing, or relatability are, in essence, distractions when it comes to game mechanics, yet these distractors could have (wrongly) gained greater traction as part and parcel of the game, especially since harm salience regarding a moral patient increases with time pressure [24]. Agents demonstrating low-agency traits can seem inconsistent with the highly agentic tasks they partake in, like negotiation, which can mean that people do poorly when they cannot conclude to what degree an agent has a perceived mind.
Negotiations serve as a context for adjusting preconceptions about technological entities' minds. We buttress this on three premises. First, people have preconceived beliefs about virtual agents' minds; agents are seen to have low-order theory of mind [23, 49] (at least presently) even if people interact with agents socially [6, 39]. Second, the perceived mind of an agent can be adjusted, be it through patiency (affective richness [25]) or agency (behavioral intentionality [32]). Third, negotiations require cognitively effortful participation that involves theory of mind reasoning [14, 22], especially when it comes to mixed-motive negotiations [14, 43]. Through negotiations, an agent's behavioral intentionality can be called into question, providing people opportunities to reformulate an agent's degree of conferred mind via agency.
In our experiment, the main point was participants' perception of how agents negotiated, not how agents actually negotiated. All agents appeared to negotiate calculatively, but not with any sophisticated AI; their offer strategies were not affected by emotional communications from players. Agents did, however, adjust their offers if participants communicated their preferences [34]. Participants were incentivized to do well, i.e., extra monetary compensation for the best player, and were purposefully not provided information on the agent's preferences; communication was necessary to cooperate and compete as an interaction paradigm.
Our approach alluded to differing degrees of agency and patiency over time through descriptions (pre- and in-game manipulation), dialogs (in-game manipulation), and facial expressions (in-game manipulation), and the experimental context (negotiation) in itself was suggestive of agency. The common belief that technological agents have low agency and patiency [23, 49] can be solidified or left unquestioned in an interaction, unless people have reasons to adjust their beliefs, e.g., manipulated behavioral intentionality based on an agent's actions [32]. Thus, the disjointed nature of the low-agency agent's dialogs and descriptions vs. its negotiation style (mixed-motive games often require higher-order theory of mind) potentially called into question what the agent was "up to". The high-agency agent could have been more straightforward to "read" for participants: it negotiated, talked, and was described as if it could have a higher-order mind. But this manipulation in itself would not necessarily overturn people's belief that an agent has low agency and low patiency compared to them.
An agent that calculates offers or talks in an overly logical fashion ("Spock-like") is not granted high agency by default. An agent that smiles or frowns is not granted high patiency by default. These manipulations alone do not greatly challenge people's notion that they are interacting with a mere machine. Our high-agency agent did poorly against participants, who themselves have a higher degree of mind. Our low-agency agent did well against participants over time. When we cannot easily guess what an agent desires or intends to do, i.e., predict its intentional stance [15], we can exercise our higher-degree theory of mind, investigating and questioning the bias we hold as a fact—the inability of technology to have a human-like mind.
We note that human-agent negotiations can greatly aid research on mind perception. There are potential areas for broadening future research, such as processes that challenge people's steadfast beliefs about forms of technology. Specifically, we recommend a thorough look at how the mind is judged on continuous dimensions of agency and patiency, as there are tiered degrees of having a mind. Treating agency and patiency as discrete dimensions when designing virtual agents, e.g., agency as random vs. intentional actions and patiency as facial expressions vs. no expressions in a simple game [11], or testing these dimensions without interactive conversations, e.g., computational approaches to modelling agents' minds without conversational assessments [14], leaves much out; mind perception and human-agent negotiation as conjoined research areas can richly inform each other.
One novel implication is that mind perception may require theoretical revisions to account for interactive opinion formation about an agent's mind; negotiations provide a contextually different framework than a single-instance evaluation of an agent's mind. MPT focuses more on the latter case: minds of various beings were judged through a survey, which captures people's pre-existing beliefs at a single point in time [23]. The novelty of our study is that people seem to revise their opinion of the agent's perceived mind; the human attribution of seeing a mind in a machine may be misguided, but people can question their own beliefs through an interaction. Negotiations are potentially one of many interactive paradigms that can better enlighten us on how people assess agents that display different degrees of having a mind in different ways over time. More relevantly, exploring other types of negotiations, e.g., purely integrative or distributive negotiations, can reveal in what ways an agent's perceived mind impacts people as they attempt to understand whether or not a social agent is also an intentional agent.
6 CONCLUSION
We are far from having virtual agents that are truly intentional actors like humans. But the degree to which agents are perceived to have agency and patiency can be observed via human-agent interactions. Through negotiations, we caught a glimpse of how people react when they encounter agents that behave counter-intuitively, e.g., negotiating in an agentic manner without prescribed agentic traits, as manipulated via dialogs and descriptions. The results show that participants got more points against an agent with high agency. In contrast, they did worse, took longer to play, and rejected more offers from a low-agency agent, as influenced by patiency. Patiency resulted in more emotional expressions from participants to the agent; people engaged more with emotional signals (emojis, messages). People also granted higher moral standing and related more to the agent when it was described to have patiency. We conjecture that a virtual agent that sends unclear or mismatched signals that people have to interpret during a complex interaction like negotiation can lead them to reconsider manipulated mind perception dimensions. What we can conclude is that in attempting to comprehend a virtual negotiator's "mind", people react to its rational and emotional capacities in divergent ways, leading to noticeable differences in how they behave.
7 ACKNOWLEDGMENTS
This work is supported by the Air Force Office of Scientific Research, under Grant FA9550-18-1-0182, and the US Army. The content does not necessarily reflect the position or the policy of any Government, and no official endorsement should be inferred.
REFERENCES
[1] Arthur Aron, Elaine N Aron, and Danny Smollan. 1992. Inclusion of other in the self scale and the structure of interpersonal closeness. Journal of Personality and Social Psychology 63, 4 (1992), 596.
[2] Tim Baarslag, Michael Kaisers, Enrico Gerding, Catholijn M. Jonker, and Jonathan Gratch. 2017. When will negotiation agents be able to represent us? The challenges and opportunities for autonomous negotiators. In International Joint Conference on Artificial Intelligence. 4684–4690.
[3] Jeremy N Bailenson, Jim Blascovich, Andrew C Beall, and Jack M Loomis. 2003. Interpersonal distance in immersive virtual environments. Personality and Social Psychology Bulletin 29, 7 (2003), 819–833.
[4] Bruce Barry, Ingrid Smithey Fulmer, Gerben A Van Kleef, et al. 2004. I laughed, I cried, I settled: The role of emotion in negotiation. The Handbook of Negotiation and Culture (2004), 71–94.
[5] Jessica E Black and William M Reynolds. 2016. Development, reliability, and validity of the Moral Identity Questionnaire. Personality and Individual Differences 97 (2016), 120–129.
[6] Jim Blascovich, Jack Loomis, Andrew C Beall, Kimberly R Swinth, Crystal L Hoyt, and Jeremy N Bailenson. 2002. Immersive virtual environment technology as a methodological tool for social psychology. Psychological Inquiry 13, 2 (2002), 103–124.
[7] Peter J Carnevale and Dean G Pruitt. 1992. Negotiation and mediation. Annual Review of Psychology 43, 1 (1992), 531–582.
[8] Antonio R Damasio. 2006. Descartes' Error. Random House.
[9] Celso M de Melo, Peter J Carnevale, Stephen J Read, and Jonathan Gratch. 2014. Reading people's minds from emotion expressions in interdependent decision making. Journal of Personality and Social Psychology 106, 1 (2014), 73.
[10] Celso M de Melo and Jonathan Gratch. 2015. People show envy, not guilt, when making decisions with machines. In 2015 International Conference on Affective Computing and Intelligent Interaction (ACII). IEEE, 315–321.
[11] Celso M De Melo, Jonathan Gratch, and Peter J Carnevale. 2014. The importance of cognition and affect for artificially intelligent decision makers. In Twenty-Eighth AAAI Conference on Artificial Intelligence. 336–342.
[12] Celso M de Melo, Jonathan Gratch, and Peter J Carnevale. 2015. Humans versus computers: Impact of emotion expressions on people's decision making. IEEE Transactions on Affective Computing 6, 2 (2015), 127–136.
[13] Celso M de Melo, Peter Khooshabeh, Ori Amir, and Jonathan Gratch. 2018. Shaping cooperation between humans and agents with emotion expressions and framing. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems. International Foundation for Autonomous Agents and Multiagent Systems, 2224–2226.
[14] Harmen de Weerd, Rineke Verbrugge, and Bart Verheij. 2017. Negotiating with other minds: the role of recursive theory of mind in negotiation with incomplete information. Autonomous Agents and Multi-Agent Systems 31, 2 (2017), 250–287.
[15] Daniel Dennett. 1989. The Intentional Stance. MIT Press.
[16] Daniel Dennett. 2008. Kinds of Minds: Toward an Understanding of Consciousness. Basic Books.
[17] Ron Dotsch and Daniël HJ Wigboldus. 2008. Virtual prejudice. Journal of Experimental Social Psychology 44, 4 (2008), 1194–1198.
[18] Nicholas Epley, Adam Waytz, and John T Cacioppo. 2007. On seeing human: a three-factor theory of anthropomorphism. Psychological Review 114, 4 (2007), 864.
[19] Susan T Fiske, Amy JC Cuddy, Peter Glick, and Jun Xu. 2002. A model of (often mixed) stereotype content: competence and warmth respectively follow from perceived status and competition. Journal of Personality and Social Psychology 82, 6 (2002), 878.
[20] BJ Fogg and Clifford Nass. 1997. How users reciprocate to computers: an experiment that demonstrates behavior change. In CHI '97 Extended Abstracts on Human Factors in Computing Systems (CHI EA '97). ACM, New York, NY, USA, 331–332. https://doi.org/10.1145/1120212.1120419
[21] Jesse Graham, Jonathan Haidt, Sena Koleva, Matt Motyl, Ravi Iyer, Sean P Wojcik, and Peter H Ditto. 2013. Moral foundations theory: The pragmatic validity of moral pluralism. In Advances in Experimental Social Psychology. Vol. 47. Elsevier, 55–130.
[22] Jonathan Gratch, David DeVault, Gale M Lucas, and Stacy Marsella. 2015. Negotiation as a challenge problem for virtual humans. In International Conference on Intelligent Virtual Agents. Springer, 201–215.
[23] Heather M Gray, Kurt Gray, and Daniel M Wegner. 2007. Dimensions of mind perception. Science 315, 5812 (2007), 619–619.
[24] Kurt Gray, Chelsea Schein, and Adrian F Ward. 2014. The myth of harmless wrongs in moral cognition: Automatic dyadic completion from sin to suffering. Journal of Experimental Psychology: General 143, 4 (2014), 1600.
[25] Kurt Gray and Daniel M Wegner. 2012. Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition 125, 1 (2012), 125–130.
[26] Kurt Gray, Liane Young, and Adam Waytz. 2012. Mind perception is the essence of morality. Psychological Inquiry 23, 2 (2012), 101–124.
[27] Jonathan Haidt et al. 2003. The moral emotions. Handbook of Affective Sciences 11, 2003 (2003), 852–870.
[28] Nick Haslam. 2012. Morality, mind, and humanness. Psychological Inquiry 23, 2 (2012), 172–174.
[29] Mansur Khamitov, Jeff D Rotman, and Jared Piazza. 2016. Perceiving the agency of harmful agents: A test of dehumanization versus moral typecasting accounts. Cognition 146 (2016), 33–47.
[30] Nicole C Krämer. 2008. Theory of mind as a theoretical prerequisite to model communication with virtual humans. In Modeling Communication with Robots and Virtual Humans. Springer, 222–240.
[31] Nicole C Krämer, Astrid von der Pütten, and Sabrina Eimler. 2012. Human-agent and human-robot interaction theory: similarities to and differences from human-human interaction. In Human-Computer Interaction: The Agency Perspective. Springer, 215–240.
[32] Daniel T Levin, Stephen S Killingsworth, Megan M Saylor, Stephen M Gordon, and Kazuhiko Kawamura. 2013. Tests of concepts about different kinds of minds: Predictions about the behavior of computers, robots, and people. Human–Computer Interaction 28, 2 (2013), 161–191.
[33] Gale M Lucas, Jonathan Gratch, Aisha King, and Louis-Philippe Morency. 2014. It's only a computer: Virtual humans increase willingness to disclose. Computers in Human Behavior 37 (2014), 94–100.
[34] Johnathan Mell and Jonathan Gratch. 2017. Grumpy & Pinocchio: answering human-agent negotiation questions through realistic agent design. In Proceedings of the 16th Conference on Autonomous Agents and Multiagent Systems. International Foundation for Autonomous Agents and Multiagent Systems, 401–409.
[35] Johnathan Mell, Gale M. Lucas, and Jonathan Gratch. 2015. An effective conversation tactic for creating value over repeated negotiations. In International Conference on Autonomous Agents and Multiagent Systems. International Foundation for Autonomous Agents and Multiagent Systems, 1567–1576.
[36] Johnathan Mell, Gale M. Lucas, and Jonathan Gratch. 2017. Prestige questions, online agents, and gender-driven differences in disclosure. In International Conference on Intelligent Virtual Agents. Springer, 273–282.
[37] Youngme Moon and Clifford Nass. 1996. How "real" are computer personalities? Psychological responses to personality types in human-computer interaction. Communication Research 23, 6 (1996), 651–674. https://doi.org/10.1177/009365096023006002
[38] Michael W Morris and Dacher Keltner. 1999. How emotions work: An analysis of the social functions of emotional expression in negotiation. In B. M. Staw and R. I. Sutton (Eds.), Research in Organizational Behavior, Vol. 11. Amsterdam: JAI, 1–50.
[39] Clifford Nass, Jonathan Steuer, and Ellen R Tauber. 1994. Computers are social actors. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 72–78.
[40] Jared Piazza, Justin F Landy, and Geoffrey P Goodwin. 2014. Cruel nature: Harmfulness as an important, overlooked dimension in judgments of moral standing. Cognition 131, 1 (2014), 108–124.
[41] David Premack and Guy Woodruff. 1978. Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences 1, 4 (1978), 515–526.
[42] Dean G Pruitt. 1967. Reward structure and cooperation: The decomposed Prisoner's Dilemma game. Journal of Personality and Social Psychology 7, 1p1 (1967), 21.
[43] Dean G Pruitt and Melvin J Kimmel. 1977. Twenty years of experimental gaming: Critique, synthesis, and suggestions for the future. Annual Review of Psychology 28, 1 (1977), 363–392.
[44] Mikey Siegel, Cynthia Breazeal, and Michael I Norton. 2009. Persuasive robotics: The influence of robot gender on human behavior. In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2563–2568.
[45] Eva EA Skoe, Nancy Eisenberg, and Amanda Cumberland. 2002. The role of reported emotion in real-life and hypothetical moral dilemmas. Personality and Social Psychology Bulletin 28, 7 (2002), 962–973.
[46] Leigh Thompson. 1990. Negotiation behavior and outcomes: Empirical evidence and theoretical issues. Psychological Bulletin 108, 3 (1990), 515.
[47] Leigh L Thompson, Jiunwen Wang, and Brian C Gunia. 2010. Negotiation. Annual Review of Psychology 61 (2010), 491–515.
[48] Gerben A Van Kleef, Carsten KW De Dreu, and Antony SR Manstead. 2004. The interpersonal effects of emotions in negotiations: a motivated information processing approach. Journal of Personality and Social Psychology 87, 4 (2004), 510.
[49] Adam Waytz, Kurt Gray, Nicholas Epley, and Daniel M Wegner. 2010. Causes and consequences of mind perception. Trends in Cognitive Sciences 14, 8 (2010), 383–388.