The mind in the machine: Anthropomorphism increases trust in an
autonomous vehicle
Adam Waytz (Northwestern University, USA), Joy Heafner (University of Connecticut, USA), Nicholas Epley (University of Chicago, USA)
Highlights

• Anthropomorphism of a car predicts trust in that car.
• Trust is reflected in behavioral, physiological, and self-report measures.
• Anthropomorphism also affects attributions of responsibility/punishment.
• These findings shed light on human interaction with autonomous vehicles.
Article info

Article history:
Received 7 August 2013
Revised 11 January 2014
Available online 23 January 2014

Keywords:
Mind perception
Moral responsibility
Human-computer interaction
Abstract

Sophisticated technology is increasingly replacing human minds to perform complicated tasks in domains ranging from medicine to education to transportation. We investigated an important theoretical determinant of people's willingness to trust such technology to perform competently (the extent to which a nonhuman agent is anthropomorphized with a humanlike mind) in a domain of practical importance, autonomous driving. Participants using a driving simulator drove either a normal car, an autonomous vehicle able to control steering and speed, or a comparable autonomous vehicle augmented with additional anthropomorphic features: name, gender, and voice. Behavioral, physiological, and self-report measures revealed that participants trusted that the vehicle would perform more competently as it acquired more anthropomorphic features. Technology appears better able to perform its intended design when it seems to have a humanlike mind. These results suggest meaningful consequences of humanizing technology, and also offer insights into the inverse process of objectifying humans.

© 2014 Elsevier Inc. All rights reserved.
Technology is an increasingly common substitute for humanity. Sophisticated machines now perform tasks that once required a thoughtful human mind, from grading essays to diagnosing cancer to driving a car. As engineers overcome design barriers to creating such technology, important psychological barriers that users will face when using this technology emerge. Perhaps most important, will people be willing to trust competent technology to replace a human mind, such as a teacher's mind when grading essays, or a doctor's mind when diagnosing cancer, or their own mind when driving a car?
Our research tests one important theoretical determinant of trust in any nonhuman agent: anthropomorphism (Waytz, Cacioppo, & Epley, 2010). Anthropomorphism is a process of inductive inference whereby people attribute to nonhumans distinctively human characteristics, particularly the capacity for rational thought (agency) and conscious feeling (experience; Gray, Gray, & Wegner, 2007). Philosophical definitions of personhood focus on these mental capacities as essential to being human (Dennett, 1978; Locke, 1997). Furthermore, studies examining people's lay theories of humanness show that people define humanness in terms of emotions that implicate higher order mental processes such as self-awareness and memory (e.g., humiliation, nostalgia; Leyens et al., 2000) and traits that involve cognition and emotion (e.g., analytic, insecure; Haslam, 2006). Anthropomorphizing a nonhuman does not simply involve attributing superficial human characteristics (e.g., a humanlike face or body) to it, but rather attributing essential human characteristics to the agent (namely a humanlike mind, capable of thinking and feeling).
Trust is a multifaceted concept that can refer to belief that another will behave with benevolence, integrity, predictability, or competence (McKnight & Chervany, 2001). Our prediction that anthropomorphism will increase trust centers on this last component of trust in another's competence (akin to confidence) (Siegrist, Earle, & Gutscher, 2003; Twyman, Harvey, & Harries, 2008). Just as a patient would trust a
Journal of Experimental Social Psychology 52 (2014) 113–117

Corresponding author at: Northwestern University, 2001 Sheridan Rd, Evanston, IL 60208, USA.
E-mail address: (A. Waytz).
0022-1031/$ - see front matter © 2014 Elsevier Inc. All rights reserved.
thoughtful doctor to diagnose cancer more than a thoughtless one, or would rely on a mindful cab driver to navigate through rush hour traffic more than a mindless cab driver, this conceptualization of anthropomorphism predicts that people would trust easily anthropomorphized technology to perform its intended function more than seemingly mindless technology. An autonomous vehicle (one that drives itself), for instance, should seem better able to navigate through traffic when it seems able to think and sense its surroundings than when it seems to be simply mindless machinery. Or a warbot intended to kill should seem more lethal and sinister when it appears capable of thinking and planning than when it seems to be simply a computer mindlessly following an operator's instructions. The more technology seems to have humanlike mental capacities, the more people should trust it to perform its intended function competently, regardless of the valence of its intended function (Epley, Caruso, & Bazerman, 2006; Pierce, Kilduff, Galinsky, & Sivanathan, 2013).
This prediction builds on the common association between people's perceptions of others' mental states and of competent action. Because mindful agents appear capable of controlling their own actions, people judge others to be more responsible for successful actions they perform with conscious awareness, foresight, and planning (Cushman, 2008; Malle & Knobe, 1997) than for actions they perform mindlessly (see Alicke, 2000; Shaver, 1985; Weiner, 1995). Attributing a humanlike mind to a nonhuman agent should therefore make the agent seem better able to control its own actions, and therefore better able to perform its intended functions competently. Our prediction also advances existing research on the consequences of anthropomorphism by articulating the psychological processes by which anthropomorphism could affect trust in technology (Nass & Moon, 2000), and by both experimentally manipulating anthropomorphism as well as measuring it as a critical mediator. Some experiments have manipulated the humanlike appearance of robots and assessed measures indirectly related to trust. However, such studies have not measured whether such superficial manipulations actually increase the attribution of essential humanlike qualities to that agent (the attribution we predict is critical for trust in technology; Hancock et al., 2011), and therefore cannot explain factors found ad hoc to moderate the apparent effect of anthropomorphism on trust (Pak, Fink, Price, Bass, & Sturre, 2012). Another study found that individual differences in anthropomorphism predicted differences in willingness to trust technology in hypothetical scenarios (Waytz et al., 2010), but did not manipulate anthropomorphism experimentally. Our experiment is therefore the first to test our theoretical model of how anthropomorphism affects trust in technology.
We conducted our experiment in a domain of practical relevance: people's willingness to trust an autonomous vehicle. Autonomous vehicles (cars that control their own steering and speed) are expected to account for 75% of vehicles on the road by 2040 (Newcomb, 2012). Employing these autonomous features means surrendering personal control of the vehicle and trusting technology to drive safely. We manipulated the ease with which a vehicle, approximated by a driving simulator, could be anthropomorphized by merely giving it independent agency, or by also giving it a name, gender, and a human voice. We predicted that independent agency alone would make the car seem more mindful than a normal car, and that adding further anthropomorphic qualities would make the vehicle seem even more mindful. More important, we predicted that these relative increases in anthropomorphism would increase physiological, behavioral, and psychological measures of trust in the vehicle's ability to drive effectively.
Because anthropomorphism increases trust in the agent's ability to perform its job, we also predicted that increased anthropomorphism of an autonomous agent would mitigate blame for the agent's involvement in an undesirable outcome. To test this, we implemented a virtually unavoidable accident during the driving simulation in which participants were struck by an oncoming car, an accident clearly caused by the other driver. We implemented this to maintain experimental control over participants' experience, because everyone in the autonomous vehicle conditions would get into the same accident, one clearly caused by the other driver. Indeed, when two people are potentially responsible for an outcome, the agent seen to be more competent tends to be credited for a success whereas the agent seen to be less competent tends to be blamed for a failure (Beckman, 1970; Wetzel, 1982). Because we predicted that anthropomorphism would increase trust in the vehicle's competence, we also predicted that it would reduce blame for an accident clearly caused by another vehicle.
Method
One hundred participants (52 female, M = 26.39) completed this experiment using a National Advanced Driving Simulator. Once in the simulator, the experimenter attached physiological equipment to participants and randomly assigned them to condition: Normal, Agentic, or Anthropomorphic. Participants in the Normal condition drove the vehicle themselves, without autonomous features. Participants in the Agentic condition drove a vehicle capable of controlling its steering and speed (an autonomous vehicle). The experimenter followed a script describing the vehicle's features, suggesting when to use the autonomous features, and describing what was about to happen. Participants in the Anthropomorphic condition drove the same autonomous vehicle, but with additional anthropomorphic features beyond mere agency: the vehicle was referred to by name (Iris), was given a gender (female), and was given a voice through human audio files played at predetermined times throughout the course. The voice files followed the same script used by the experimenter in the Agentic condition, modified where necessary (see Supplemental Online Material [SOM]).

All participants first completed a driving history questionnaire and a measure of dispositional anthropomorphism (Waytz et al., 2010). Scores on this measure did not vary significantly by condition, so we do not discuss them further.
Participants in the Agentic and Anthropomorphic conditions then drove a short practice course to familiarize themselves with the car's autonomous features. Participants could engage these features by pressing buttons on the steering wheel. All participants then drove two courses, each lasting approximately 6 min. After the first course, participants completed a questionnaire (all on 0–10 scales; see SOM for all items) that assessed anthropomorphism, liking, and trust.
Perceived anthropomorphism
Four items measured anthropomorphism, defined as attributing humanlike mental capacities of agency and experience to it (Epley, Waytz, & Cacioppo, 2007; Gray et al., 2007; Waytz et al., 2010). These asked how smart the car was, how well it could feel what was happening around it, how well it could anticipate what was about to happen, and how well it could plan a route. These items were averaged into a composite (α = .89).
Liking

Four items measured liking: how enjoyable their driving was, how comfortable they felt driving the car, how much participants would like to own a car like this one, and what percentage of cars in 2020 they would like to be [autonomous] like this one. These items were standardized and averaged to form a single composite (α = .90).
Self-reported trust
Eight items measured trust in the vehicle: how safe participants felt they and others would be if they actually owned a car like this one, how much they trust the vehicle to drive in heavy and light traffic conditions, how confident they are about the car driving the next course safely, and their willingness to give up control to the car. These items were standardized and averaged to form a single composite (α = .91).
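The composite-building procedure described here (z-score each item across participants, average the items, and check internal consistency with Cronbach's alpha) can be sketched in a few lines. This is an illustrative reconstruction under standard formulas, not the authors' analysis code, and the function names are our own.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_participants, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1 - item_var_sum / total_var)

def standardized_composite(items):
    """z-score each item across participants, then average items per participant."""
    items = np.asarray(items, dtype=float)
    z = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)
    return z.mean(axis=1)
```

Perfectly redundant items yield α = 1, and the composite is centered at zero by construction, which is what makes composites from different scales comparable.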
After approximately 6 min of driving a second course along a rural highway, a vehicle pulled quickly in front of the car and struck its right side. We designed this accident to be unavoidable so that all participants would experience the same outcome (indeed, only one participant, in the Normal condition, avoided it). Ensuring that everyone got into this accident, however, meant that the accident was clearly the other vehicle's fault rather than that of participants' own vehicle. Throughout the experiment, we measured participants' heart rate using electrocardiography (ECG) and videotaped their behavior unobtrusively to assess responses to this accident.
Heart rate change
We reasoned that if participants trusted the vehicle, they should be more relaxed in an arousing situation (namely, the accident), showing an attenuated heart rate increase and startle response. We measured heart rate change in response to the accident as a percentage change of beats per minute for the 20 s immediately following the collision (or until they concluded their simulation), in comparison to a forty-five second baseline period immediately following the earlier practice course.
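The percentage-change measure reduces to simple arithmetic on mean beats per minute across the two windows. The sketch below is an illustration with hypothetical variable names, not the authors' code.

```python
import numpy as np

def pct_hr_change(baseline_bpm, event_bpm):
    """Percent change of mean heart rate in the post-collision window
    relative to the mean of the baseline window."""
    base = float(np.mean(baseline_bpm))
    event = float(np.mean(event_bpm))
    return 100.0 * (event - base) / base
```

A smaller (more attenuated) increase on this measure is taken as a sign of greater relaxation, and hence greater trust.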
To assess startle response, we first divided our participants into two random samples. We then recruited 42 independent raters from an undergraduate population to watch all videos from one or the other sample and rate how startled each participant appeared during the video (0 = not at all startled to 10 = extremely startled). We then averaged startle ratings for each participant across all of these raters to obtain a startle response measure. Percentage heart rate change and startle were standardized and reverse-scored (multiplied by −1) and then averaged to form a behavioral measure of trust (r(90) = .28, p < .01). To assess overall trust, we averaged all standardized measures of trust (the eight self-report measures and the two behavioral measures) into a single composite (α = .87).
Blame for vehicle
After the accident, all participants also assessed how responsible they, the car, the people who designed the car, and the company that developed the car were for the accident (all 0–10 scales; see SOM for exact questions). To assess punishment for the accident, participants were asked to imagine that this accident occurred in the real world, with a different driver behind the wheel of their car. Participants reported how strongly they felt that the driver should be sent to jail, how strongly they felt that the car should be destroyed, how strongly they felt that the car's engineer should be punished, and how strongly they felt that the company that designed the car should be punished. The six items measuring the vehicle's responsibility and resulting punishment for a similar accident were standardized and averaged to form a single composite (α = .90).
Finally, we used the videotape mentioned above to measure participants' distraction while driving during the second course, measured as the time spent looking away from the simulator rather than paying attention while driving. Results showed a floor effect with very little distraction across conditions (less than 3% of the overall time in the two autonomous vehicle conditions). See Table 1 for these means as well as means from all analyses below.
Results

All primary analyses involved planned orthogonal contrasts examining differences between the Normal, Agentic, and Anthropomorphic conditions.
Perceived anthropomorphism
As predicted, participants in the Anthropomorphic condition anthropomorphized the vehicle more than those in the Agentic condition, t(97) = 3.21, p = .002, d = .65, who in turn anthropomorphized the vehicle more than those in the Normal condition, t(97) = 7.11, p < .0001, d = 1.44.
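A planned contrast of this kind (weights summing to zero, tested against the pooled within-condition error term, with df = N − k) can be sketched generically as follows. This is an illustrative implementation with assumed variable names, not the authors' analysis code.

```python
import numpy as np
from scipy import stats

def planned_contrast(groups, weights):
    """t test for a planned contrast among independent groups,
    using the pooled within-group error term (df = N - k)."""
    means = np.array([np.mean(g) for g in groups])
    ns = np.array([len(g) for g in groups])
    df = int(ns.sum()) - len(groups)
    # pooled within-group mean square (the omnibus ANOVA error term)
    mse = sum(((np.asarray(g) - m) ** 2).sum() for g, m in zip(groups, means)) / df
    w = np.asarray(weights, dtype=float)
    estimate = float(np.dot(w, means))
    se = float(np.sqrt(mse * np.sum(w ** 2 / ns)))
    t = estimate / se
    p = 2 * stats.t.sf(abs(t), df)   # two-tailed p
    return t, df, p
```

For the Anthropomorphic-versus-Agentic comparison, for example, one would pass weights such as [0, −1, 1] over the three conditions.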
Liking

Participants in the Anthropomorphic and Agentic conditions liked the vehicle more than did participants in the Normal condition, t(97) = 3.92, p < .0001, d = .80 and t(97) = 3.29, p = .001, d = .67, but the autonomous vehicle conditions did not differ significantly from each other (p = .55).
Trust

As predicted, on the measure of overall trust, those in the Anthropomorphic condition trusted their vehicle more than did those in the Agentic condition, t(97) = 2.34, p = .02, d = .48, who in turn trusted their vehicle more than those in the Normal condition, t(97) = 4.56, p < .0001, d = .93. For behavioral trust, participants in the Anthropomorphic condition trusted their vehicle more than did those in the Agentic condition, t(97) = 3.36, p = .001, d = .68, and the Normal condition, t(97) = 2.78, p < .01, d = .56, although the Agentic and Normal conditions did not differ significantly (p = .56). For self-reported trust, participants in the Anthropomorphic condition and the Agentic condition did not differ significantly (p = .14), but participants in both the Agentic and Anthropomorphic conditions reported greater trust than participants in the Normal condition, ts(97) = 4.83 and 6.35, respectively, ps < .01, ds = .98 and 1.29. Table 1 reports the self-report measures and the behavioral measures of trust separately.
To assess whether the vehicle's effect on overall trust was statistically mediated by perceived anthropomorphism, we used Preacher and Hayes' (2008) bootstrapping method and coded condition as Normal = 0, Agentic = 1, and Anthropomorphic = 2 (see Hahn-Holbrook, Holt-Lunstad, Holbrook, Coyne, & Lawson, 2011; Legault, Gutsell, & Inzlicht, 2011 for similar analyses). This analysis confirmed that anthropomorphism statistically mediated the relationship between vehicle condition and overall trust in the vehicle (95% CI = .31 to .55; 20,000 resamples; see Fig. 1).
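The Preacher and Hayes (2008) procedure resamples participants with replacement, re-estimates the a path (condition to mediator) and b path (mediator to outcome, controlling for condition) in each resample, and takes percentiles of the a×b products as the confidence interval. A minimal sketch under ordinary-least-squares paths and hypothetical variable names (not the authors' code):

```python
import numpy as np

def bootstrap_indirect_effect(x, m, y, n_boot=20000, seed=0):
    """Percentile-bootstrap 95% CI for the indirect effect a*b
    in a simple x -> m -> y mediation model."""
    rng = np.random.default_rng(seed)
    x, m, y = (np.asarray(v, dtype=float) for v in (x, m, y))
    n = len(x)
    products = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)           # resample participants with replacement
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]          # a path: m regressed on x
        X = np.column_stack([np.ones(n), xb, mb])
        b = np.linalg.lstsq(X, yb, rcond=None)[0][2]  # b path: y on m, controlling x
        products.append(a * b)
    return np.percentile(products, [2.5, 97.5])
```

Mediation is inferred when the resulting interval excludes zero, as the .31 to .55 interval reported above does.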
Blame for vehicle
As noted, we programmed the driving simulation so that all participants would experience the same virtually unavoidable accident clearly caused by the other driver, but it is important to keep the nature of the
Table 1
Means by condition for main dependent measures.

Measure (by condition: Normal, Agentic, Anthropomorphic)
Anthropomorphism 2.63
Overall 0.52
Self-reported 0.60
Behavioral (ECG & startle) 0.23
Liking 0.49
Blame for the vehicle 0.60
Distraction (instances) 0.16
Distraction (percent of total time) 0.1

Means that do not share a subscript differ significantly at p < .05.
Degrees of freedom across analyses vary slightly because of missing responses. We measured skin conductance throughout the experiment through electrodes on participants' wrists, but unanticipated artifacts from movement while driving rendered these results impossible to interpret, and so we do not report them.
accident in mind. If a trusted, competent driver were hit by another vehicle, one would hold the competent driver less responsible for the accident because it would clearly appear to be the other driver's fault. Thus, we predicted that anthropomorphism would mitigate blame for an accident clearly caused by the other vehicle. It is important to note, however, that our prediction would be different if the vehicle were able to avoid this accident, in which case we would predict that anthropomorphism would increase the tendency to credit the vehicle for this outcome.
Participants in the Agentic and Anthropomorphic conditions blamed their car more for the accident than did those in the Normal condition, ts(96) = 6.30 and 4.18, respectively, ps < .01, ds = 1.29 and .85. This is consistent with the relationship between agency and perceived responsibility. An object with no agency cannot be held responsible for any actions, and so this comparison is not particularly interesting. More interesting is that participants blamed the vehicle significantly less in the Anthropomorphic condition than in the Agentic condition, t(96) = 2.18, p = .03, d = .44, in which the perceived thoughtfulness of the fully anthropomorphic vehicle mitigated the responsibility that comes from independent agency (given that the accident was clearly caused by the other vehicle). This shows a clear relationship between anthropomorphism and perceptions of responsibility, but the exact nature of that relationship cannot be tested in this particular paradigm because we are unable to create a uniform accident across conditions clearly caused by participants themselves.
General discussion
Technological advances blur the line between human and nonhuman, and this experiment suggests that blurring this line even further could increase users' willingness to trust technology in place of humans. Amongst those who drove an autonomous vehicle, those who drove a vehicle that was named, gendered, and voiced rated their vehicle as having more humanlike mental capacities than those who drove a vehicle with the same autonomous features but without anthropomorphic cues. In turn, those who drove the anthropomorphized vehicle with enhanced humanlike features (name, gender, voice) reported trusting their vehicle even more, were more relaxed in an accident, and blamed their vehicle and related entities less for an accident caused by another driver. These findings provide further support for the theoretical connection between perceptions of mental capacities in others and assessments of competence, trust, and responsibility. Attributing a mind to a machine matters because it could create a machine to which users might entrust their lives.
This finding is also of clear practical relevance given the rapidly changing interface between the technological world and the social world. No longer merely mindless tools, modern technology now taps human social skills directly. People ask their phones for driving directions, restaurant recommendations, and baseball scores. Automated customer service agents help people purchase flights, pay credit card bills, and obtain prescription medicine. Robotic pets even provide social support and companionship, sometimes in the place of actual human companionship (Melson et al., 2009). Our research identifies one important consequence of considering the psychological dimensions of technological design. Even the greatest technology, such as vehicles that drive themselves, is of little benefit if consumers are unwilling to use it.
Finally, our research at this human-technology frontier also informs the inverse effect in which people are treated more like technology: as objects or relatively mindless machines (Cikara, Eberhardt, & Fiske, 2011; Loughnan & Haslam, 2007). Adding a human voice to technology, for instance, makes people treat it as a more humanlike agent (Takayama & Nass, 2008), which suggests that removing a human voice like one's own from interpersonal communication may make another person seem relatively mindless. Indeed, in one series of recent experiments, participants rated another person as being less mindful (e.g., less thoughtful, less rational) when they read a transcript of an interview than when they heard the audio of the same interview (Schroeder & Epley, 2014). Similarly, verbal accents that differ from one's own trigger prejudice and distrust compared to accents similar to one's own (Anisfeld, Bogo, & Lambert, 1962; Dixon, Mahoney, & Cocks, 2002; Giles & Powesland, 1975; Kinzler, Corriveau, & Harris, 2011; Kinzler, Dupoux, & Spelke, 2007; Lev-Ari & Keysar, 2010), an effect that may be partially mediated by differences in the attribution of humanlike mental states.
Few divides in social life are more important than the one between us and them, between human and nonhuman. Perceptions of this divide are not fixed but flexible. Understanding when technology crosses that divide to become more humanlike matters not only for how people treat increasingly humanlike technology, but also for understanding why people treat other humans as mindless objects.
Acknowledgments

This research was funded by the University of Chicago's Booth School of Business and a grant from the General Motors Company. We thank Julia Hur for assistance with data coding.
Appendix A. Supplementary materials

Supplementary materials to this article can be found online at http://dx.doi.
References

Alicke, M. D. (2000). Culpable control and the psychology of blame. Psychological Bulletin, 126, 556.
Anisfeld, M., Bogo, N., & Lambert, W. E. (1962). Evaluational reactions to accented English speech. Journal of Abnormal and Social Psychology, 65, 223–231.
Beckman, L. (1970). Effects of students' performance on teachers' and observers' attributions of causality. Journal of Educational Psychology, 61, 76–82.
Cikara, M., Eberhardt, J. L., & Fiske, S. T. (2011). From agents to objects: Sexist attitudes and neural responses to sexualized targets. Journal of Cognitive Neuroscience, 23.
Cushman, F. (2008). Crime and punishment: Distinguishing the roles of causal and intentional analyses in moral judgment. Cognition, 108, 353–380.
Dennett, D. C. (1978). Brainstorms: Philosophical essays on mind and psychology. Cambridge: Bradford Books/MIT Press.
Dixon, J. A., Mahoney, B., & Cocks, R. (2002). Accents of guilt? Effects of regional accent, race, and crime type on attributions of guilt. Journal of Language and Social Psychology.
Epley, N., Caruso, E., & Bazerman, M. H. (2006). When perspective taking increases taking: Reactive egoism in social interaction. Journal of Personality and Social Psychology, 91.
Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114, 864.
Giles, H., & Powesland, P. F. (1975). Speech style and social evaluation, Vol. 7. London: Academic Press.
Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315.
Hahn-Holbrook, J., Holt-Lunstad, J., Holbrook, C., Coyne, S. M., & Lawson, E. T. (2011). Maternal defense: Breast feeding increases aggression by reducing stress. Psychological Science, 22, 1288–1295.
Fig. 1. The results of a mediation analysis of condition on overall trust.
Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y., De Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors: The Journal of the Human Factors and Ergonomics Society, 53, 517–527.
Haslam, N. (2006). Dehumanization: An integrative review. Personality and Social Psychology Review, 10, 252–264.
Kinzler, K. D., Corriveau, K. H., & Harris, P. L. (2011). Children's selective trust in native-accented speakers. Developmental Science, 14, 106–111.
Kinzler, K. D., Dupoux, E., & Spelke, E. S. (2007). The native language of social cognition. Proceedings of the National Academy of Sciences, 104, 12577–12580.
Legault, L., Gutsell, J. N., & Inzlicht, M. (2011). Ironic effects of antiprejudice messages: How motivational interventions can reduce (but also increase) prejudice. Psychological Science, 22, 1472–1477.
Lev-Ari, S., & Keysar, B. (2010). Why don't we believe non-native speakers? The influence of accent on credibility. Journal of Experimental Social Psychology, 46, 1093–1096.
Leyens, J. P., Paladino, P. M., Rodriguez-Torres, R., Vaes, J., Demoulin, S., Rodriguez-Perez, A., et al. (2000). The emotional side of prejudice: The attribution of secondary emotions to ingroups and outgroups. Personality and Social Psychology Review, 4, 186–197.
Locke, J. (1997). An essay concerning human understanding. Harmondsworth, England: Penguin Books (Original work published 1690).
Loughnan, S., & Haslam, N. (2007). Animals and androids: Implicit associations between social categories and nonhumans. Psychological Science, 18, 116–121.
Malle, B. F., & Knobe, J. (1997). The folk concept of intentionality. Journal of Experimental Social Psychology, 33, 101–121.
McKnight, D. H., & Chervany, N. L. (2001). Trust and distrust definitions: One bite at a time. In Trust in cyber-societies (pp. 27–54). Berlin: Springer.
Melson, G. F., Kahn, P. H., Beck, A., Friedman, B., Roberts, T., Garrett, E., et al. (2009). Children's behavior toward and understanding of robotic and living dogs. Journal of Applied Developmental Psychology, 30, 92–102.
Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56, 81–103.
Newcomb, D. (2012). Retrieved from 9/18/tech/
Pak, R., Fink, N., Price, M., Bass, B., & Sturre, L. (2012). Decision support aids with anthropomorphic characteristics influence trust and performance in younger and older adults. Ergonomics, 55, 1059–1072.
Pierce, J. R., Kilduff, G. J., Galinsky, A. D., & Sivanathan, N. (2013). From glue to gasoline: How competition turns perspective takers unethical. Psychological Science, 24.
Preacher, K. J., & Hayes, A. F. (2008). Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models. Behavior Research Methods.
Schroeder, J., & Epley, N. (2014). Speaking louder than words: Voice reveals the presence of a humanlike mind. Unpublished manuscript, University of Chicago.
Shaver, K. G. (1985). The attribution of blame: Causality, responsibility, and blameworthiness. New York: Springer-Verlag.
Siegrist, M., Earle, T., & Gutscher, H. (2003). Test of a trust and confidence model in the applied context of electromagnetic field (EMF) risks. Risk Analysis, 23, 705–716.
Takayama, L., & Nass, C. (2008). Driver safety and information from afar: An experimental driving simulator study of wireless vs. in-car information services. International Journal of Human Computer Studies, 66, 173–184.
Twyman, M., Harvey, N., & Harries, C. (2008). Trust in motives, trust in competence: Separate factors determining the effectiveness of risk communication. Judgment and Decision Making, 3, 111–120.
Waytz, A., Cacioppo, J., & Epley, N. (2010). Who sees human? The stability and importance of individual differences in anthropomorphism. Perspectives on Psychological Science.
Weiner, B. (1995). Judgments of responsibility: A foundation for a theory of social conduct. New York, NY: Guilford Press.
Wetzel, C. G. (1982). Self-serving biases in attribution: A Bayesian analysis. Journal of Personality and Social Psychology, 43, 197–209.
... En effet, la confiance étant un construit mental qui ne peut qu'être inféré, elle est impossible à observer ou à (Antifakos et al., 2005) ou de changer sa propre décision au profit de celle de la machine Gaudiello et al., 2016). Les comportements de surprise sont également utilisés comme indicateur de confiance (Waytz et al., 2014). Même si les mesures comportementales ont l'avantage de fournir une méthode plus objective et plus stable de mesure de la confiance et peuvent plus facilement être utilisées comme base pour la modélisation et la prédiction , il est souvent difficile d'isoler les effets de la confiance sur le comportement des effets d'autres facteurs tels que la charge de travail, le stress ou la fatigue. ...
... 1. Dispositional anthropomorphism (Waytz et al., 2010), assessed at the time of recruitment, and perceived anthropomorphism (Waytz et al., 2014), assessed at the end of the experiment. ...
... anthropomorphism, most of the work comes from the field of social robotics (Kim & Sundar, 2012; Festerling & Siraj, 2021). This work suggests that human characteristics such as a visual representation, a name, a personality, a voice, a communication style, or humanlike behaviors are elements that can be used alone or in combination to obtain an anthropomorphic interface (Araujo, 2018; Fink, 2012; Diederich et al., 2020; Hegel et al., 2011; Seeger et al., 2021; Waytz et al., 2014) ...
This thesis explored the use of two technologies that will ultimately transform our daily lives: virtual assistants and the autonomous car. Virtual assistants already occupy an important place in our lives and are revolutionizing the way we interact with systems by offering voice interaction. With a simple sentence, we can now get the weather forecast or play music. Autonomous cars, for their part, although not yet available to the public, promise to improve driving comfort, reduce accidents, and make road traffic flow more smoothly. However, adopting such a technology requires users' trust. It appears that virtual assistants, by the very nature of their anthropomorphic interface, can play a role in this context. We therefore explore the potential of virtual assistants to increase trust in autonomous driving. The main questions addressed in this work concern, on the one hand, the design choices suited to making a virtual assistant perceived as anthropomorphic and trustworthy, and on the other hand, the impact that such an interface in an autonomous-vehicle cockpit can have on users' perception of anthropomorphism and on their trust. To answer these questions, we first chose the assistant's visual appearance by evaluating the impact of different visual representations on perceived anthropomorphism and trust. We selected an Automaton-Human representation. We then implemented this representation in three dimensions and integrated the result into a driving simulator in the form of a hologram.
To evaluate the virtual assistant, we conducted an experiment comparing a baseline interface without a virtual assistant to two interfaces integrating two versions of the assistant. The results show that perceived anthropomorphism does not increase with the level of anthropomorphism. A significant correlation confirms the impact of perceived anthropomorphism on trust. Other, more surprising results concerning the virtual assistant's impact on performance and the impact of acquired experience on trust are discussed.
... Secondly, anthropomorphic design features (such as gender-based appearances and generally humanlike features) and gender role stereotypes can influence how competent AI-based systems are perceived. Anthropomorphism can elicit perceptions of competence and liking [36]. Research has shown that male-looking robots are perceived as more agentic than female-looking robots, and that robots are perceived more positively when gender appearance (male versus female) is matched to stereotypical gender-based roles (such as security and construction for male; service and caring for female) [37]. ...
Ethical considerations are the fabric of society, and they foster cooperation, help, and sacrifice for the greater good. Advances in AI create a greater need to examine ethical considerations involving the development and implementation of such systems. Integrating ethics into artificial intelligence-based programs is crucial for preventing negative outcomes, such as privacy breaches and biased decision making. Human–AI teaming (HAIT) presents additional challenges, as the ethical principles and moral theories that provide justification for them are not yet computable by machines. To that effect, models of human judgments and decision making, such as the agent-deed-consequence (ADC) model, will be crucial to inform the ethical guidance functions in AI teammates and to clarify how and why humans (dis)trust machines. The current paper will examine the ADC model as it is applied to the context of HAIT, and the challenges associated with the use of human-centric ethical considerations when applied to an AI context.
... Furthermore, it is found that ascribing human traits to AVs (e.g. name, gender, voice) increases people's trust [56]. This is indeed a common approach to design likable and socially accepted robots in human-robot interaction (HRI) research [4,55]. ...
Conference Paper
Shared space reduces segregation between vehicles and pedestrians and encourages them to share roads without imposed traffic rules. The behaviour of road users (RUs) is then controlled by social norms, and interactions are more versatile than on traditional roads. Autonomous vehicles (AVs) will need to adapt to these norms to become socially acceptable RUs in shared spaces. However, to date, there is not much research into pedestrian-vehicle interaction in shared-space environments, and prior efforts have predominantly focused on traditional roads and crossing scenarios. We present a video observation investigating pedestrian reactions to a small, automation-capable vehicle driven manually in shared spaces based on a long-term naturalistic driving dataset. We report various pedestrian reactions (from movement adjustment to prosocial behaviour) and situations pertinent to shared spaces at this early stage. Insights drawn can serve as a foundation to support future AVs navigating shared spaces, especially those with a high pedestrian focus.
... The most thorough and highly-referenced conceptualization of anthropomorphism to date is perhaps the explication offered by Epley et al. (2007), which is reflected in many nearly identical conceptualizations in the context of HCI (e.g., Banks, 2017; Castro-González et al., 2016; Lee et al., 2015; Niu et al., 2018; Pak et al., 2012; Ruijten et al., 2019; Spatola et al., 2019; Waytz et al., 2014; Zhang et al., 2010). Defining anthropomorphism as "the attribution of human characteristics or traits to nonhuman agents" (p. ...
Anthropomorphism of computerized agents, avatars, and technologies has been the focus of a large body of research in human-computer interaction (HCI). Yet, operational definitions of anthropomorphism vary greatly, creating the potential for error when broad theoretical conclusions are drawn from operationalizations lacking in content validity. This scoping review aimed to identify and categorize the range of operationalizations of anthropomorphism in experimental studies of computerized agents, avatars, and technologies, adding needed clarity to a diverse area of inquiry. Using five selection criteria, this review categorized the operationalization(s) of anthropomorphism in 31 experiment-based articles published in academic research journals. Results showed a heavy dominance of manipulations of physical appearance as operationalizations of anthropomorphism, which threatens content validity and raises questions about the understanding of anthropomorphism in HCI.
... A review of the international literature outlines the factors researchers have identified as shaping attitudes toward robots, including demographic factors (Katz & Halpern, 2014), personal experience (Nomura, Kanda & Suzuki, 2006), and cultural context (Nomura, Kanda, Suzuki & Kato, 2005; Dang & Liu, 2020). In addition, characteristics of the evaluated entity also play a role in attitude formation: the robot's human-likeness (Katz & Halpern, 2014), the level of its cognitive abilities (Bergmann, Eyssel, & Kopp, 2012; Demeure, 2011; Demeure, Niewiadomski, & Pelachaud, 2010; Fraune et al., 2017), and the sense of safety associated with the robot (Waytz, Cacioppo, & Epley, 2010; Waytz, Heafner & Epley, 2014). Researchers use a variety of instruments to measure attitudes toward robots, the most cited in the field being the NARS (negative attitudes toward robots scale) of Nomura, Kanda & Suzuki (2006) and the RAS (robot anxiety scale) of Nomura, Kanda, Suzuki & Kato (2008). ...
Technological advances are introducing human-robot collaboration to many areas, including the corporate environment. The nature of work and occupations will thus change fundamentally in the future. One of the prerequisites for successfully meeting these challenges is the acceptance of robots, which is influenced by, among other things, the attitudes of individuals. The literature review synthesises the results of theoretical articles focusing on the factors influencing the development of attitudes towards robots. The aim of this paper is to investigate the evolution of attitudes towards robotization in Hungary, using the latest European Value Survey (EVS) data collection, cluster analysis and ANOVA methods. The relevance of attitudinal research on the robot workforce is unquestionable and, to date, no similar research on a Hungarian sample has been conducted.
Research advances in artificial intelligence (AI) capabilities have resulted in intelligent and humanlike AI-enabled technology (AIET). The concept of anthropomorphism—the attribution of human characteristics to nonhuman beings or entities—has received increasing attention from academia and industries. However, research on anthropomorphism in the AIET context is relatively new and fragmented, with limited efforts to evaluate current research or consolidate existing knowledge. To bridge this gap, this descriptive literature review of 55 studies seeks to identify research trends, AIET types, theoretical foundations, and methods. The study also analyzes how anthropomorphism has been conceptualized and operationalized in the AIET context, and the thematic analysis identifies research gaps and suggests future explorations. The proposed conceptual framework for exploring the interplay of anthropomorphism with its antecedents and consequences provides a nomological network for future research.
As one of the most popular AI applications, chatbots are creating new ways and new value for businesses to interact with their customers, and their adoption and continued use will depend on users' trust. However, due to the non-transparency of AI-related technology and the ambiguity of its application boundaries, it is difficult to determine which aspects enhance the adoption of chatbots and how they interactively affect human trust. Based on the theory of task-technology fit, we developed a research model to investigate how two conversational cues of chatbots, human-like cues and tailored responses, influence human trust toward chatbots and to explore appropriate boundary conditions (individual characteristics and task characteristics) in interacting with chatbots. One survey and two experiments were performed to test the research model, and the results indicated that (1) perceived task solving and social presence mediate the pathway from conversational cues to human trust, which was validated in the contexts of e-commerce and education; (2) the extent of users' ambiguity tolerance moderates the effects of the two AI technologies on social presence; and (3) when performing high-creativity tasks, the human-like chatbot induces higher perceived task-solving competence. Our findings not only contribute to the AI trust-related literature but also provide practical implications for the development of chatbots and their assignment to individuals and tasks.
Debate abounds regarding the role that various technologies play in the reification of gender stereotypes and norms. We demonstrate that although assigning technology a male or female gender (i.e., gendering technology) increases gender stereotyping, it also increases attachment to anthropomorphized technologies. Across five studies, using archival (Amazon Reviews), correlational, and experimental methods (N = 10,781), we show people feel more attached to gendered technology. We further show these benefits are rooted in the tendency to ascribe greater humanness to technology that has stereotypically male and female traits. These results illustrate a paradox: gendering technology reinforces problematic stereotypes, but it also facilitates anthropomorphism, with beneficial consequences for the marketing of various technologies.
Over the course of digitization, many innovative marketing technologies have emerged that—theoretically speaking—promise firms gains in efficiency and/or effectiveness. However, a central task for marketing is not to allow the use of these technologies to become an end in itself, but to preserve the guiding principle of marketing, namely customer orientation. This means that the new technologies only offer added value for firms if they also offer (perceived) added value for consumers. Using three specific application areas as examples (chatbots, voice assistants, and data privacy management), we show how firms can combine innovative marketing technologies and consumer interests in a purposeful manner.
Group members often reason egocentrically, believing that they deserve more than their fair share of group resources. Leading people to consider other members’ thoughts and perspectives can reduce these egocentric (self-centered) judgments such that people claim that it is fair for them to take less; however, the consideration of others’ thoughts and perspectives actually increases egoistic (selfish) behavior such that people actually take more of available resources. A series of experiments demonstrates this pattern in competitive contexts in which considering others’ perspectives activates egoistic theories of their likely behavior, leading people to counter by behaving more egoistically themselves. This reactive egoism is attenuated in cooperative contexts. Discussion focuses on the implications of reactive egoism in social interaction and on strategies for alleviating its potentially deleterious effects.
When perceiving, explaining, or criticizing human behavior, people distinguish between intentional and unintentional actions. To do so, they rely on a shared folk concept of intentionality. In contrast to past speculative models, this article provides an empirically based model of this concept. Study 1 demonstrates that people agree substantially in their judgments of intentionality, suggesting a shared underlying concept. Study 2 reveals that when asked to define directly the term "intentional," people mention four components of intentionality: desire, belief, intention, and awareness. Study 3 confirms the importance of a fifth component, namely skill. In light of these findings, the authors propose a model of the folk concept of intentionality and provide a further test in Study 4. The discussion compares the proposed model to past ones and examines its implications for social perception, attribution, and cognitive development.
Perspective taking is often the glue that binds people together. However, we propose that in competitive contexts, perspective taking is akin to adding gasoline to a fire: It inflames already-aroused competitive impulses and leads people to protect themselves from the potentially insidious actions of their competitors. Overall, we suggest that perspective taking functions as a relational amplifier. In cooperative contexts, it creates the foundation for prosocial impulses, but in competitive contexts, it triggers hypercompetition, leading people to prophylactically engage in unethical behavior to prevent themselves from being exploited. The experiments reported here establish that perspective taking interacts with the relational context-cooperative or competitive-to predict unethical behavior, from using insidious negotiation tactics to materially deceiving one's partner to cheating on an anagram task. In the context of competition, perspective taking can pervert the age-old axiom "do unto others as you would have them do unto you" into "do unto others as you think they will try to do unto you."
Anthropomorphism is a far-reaching phenomenon that incorporates ideas from social psychology, cognitive psychology, developmental psychology, and the neurosciences. Although commonly considered to be a relatively universal phenomenon with only limited importance in modern industrialized societies—more cute than critical—our research suggests precisely the opposite. In particular, we provide a measure of stable individual differences in anthropomorphism that predicts three important consequences for everyday life. This research demonstrates that individual differences in anthropomorphism predict the degree of moral care and concern afforded to an agent, the amount of responsibility and trust placed on an agent, and the extent to which an agent serves as a source of social influence on the self. These consequences have implications for disciplines outside of psychology including human–computer interaction, business (marketing and finance), and law. Concluding discussion addresses how understanding anthropomorphism not only informs the burgeoning study of nonpersons, but how it informs classic issues underlying person perception as well.
This study examined the effect of regional accent on the attribution of guilt. One hundred and nineteen participants listened to a recorded exchange between a British male criminal suspect and a male policeman. Employing the "matched-guise" technique, this exchange was varied to produce a 2 (accent type: Birmingham/standard) × 2 (race of suspect: Black/White) × 2 (crime type: blue collar/white collar) independent-groups design. The results suggested that the suspect was rated as significantly more guilty when he employed a Birmingham rather than a standard accent and that attributions of guilt were significantly associated with the suspect's perceived superiority and social attractiveness.
Criticizes past research on self-serving biases in light of a Bayesian model of trait-attribution processes. The Bayesian model is a prescriptive standard against which bias can be measured, and a number of results interpreted as self-serving are shown to be perfectly logical from a Bayesian perspective. The Bayesian model suggests that biases in trait attribution may occur at different stages in the inference process, thereby obscuring or negating each other.