
The Mind in the Machine: Anthropomorphism Increases Trust in an Autonomous Vehicle


Adam Waytz, Northwestern University, USA
Joy Heafner, University of Connecticut, USA
Nicholas Epley, University of Chicago, USA
Highlights

• Anthropomorphism of a car predicts trust in that car.
• Trust is reflected in behavioral, physiological, and self-report measures.
• Anthropomorphism also affects attributions of responsibility/punishment.
• These findings shed light on human interaction with autonomous vehicles.
Article info

Journal of Experimental Social Psychology 52 (2014) 113–117

Article history:
Received 7 August 2013
Revised 11 January 2014
Available online 23 January 2014

Keywords:
Mind perception
Moral responsibility
Human-computer interaction
Abstract

Sophisticated technology is increasingly replacing human minds to perform complicated tasks in domains ranging from medicine to education to transportation. We investigated an important theoretical determinant of people's willingness to trust such technology to perform competently (the extent to which a nonhuman agent is anthropomorphized with a humanlike mind) in a domain of practical importance: autonomous driving. Participants using a driving simulator drove either a normal car, an autonomous vehicle able to control steering and speed, or a comparable autonomous vehicle augmented with additional anthropomorphic features: name, gender, and voice. Behavioral, physiological, and self-report measures revealed that participants trusted that the vehicle would perform more competently as it acquired more anthropomorphic features. Technology appears better able to perform its intended design when it seems to have a humanlike mind. These results suggest meaningful consequences of humanizing technology, and also offer insights into the inverse process of objectifying humans.

© 2014 Elsevier Inc. All rights reserved.
Technology is an increasingly common substitute for humanity. Sophisticated machines now perform tasks that once required a thoughtful human mind, from grading essays to diagnosing cancer to driving a car. As engineers overcome design barriers to creating such technology, important psychological barriers that users will face when using this technology emerge. Perhaps most important, will people be willing to trust competent technology to replace a human mind, such as a teacher's mind when grading essays, or a doctor's mind when diagnosing cancer, or their own mind when driving a car?
Our research tests one important theoretical determinant of trust in any nonhuman agent: anthropomorphism (Waytz, Cacioppo, & Epley, 2010). Anthropomorphism is a process of inductive inference whereby people attribute to nonhumans distinctively human characteristics, particularly the capacity for rational thought (agency) and conscious feeling (experience; Gray, Gray, & Wegner, 2007). Philosophical definitions of personhood focus on these mental capacities as essential to being human (Dennett, 1978; Locke, 1997). Furthermore, studies examining people's lay theories of humanness show that people define humanness in terms of emotions that implicate higher-order mental processes such as self-awareness and memory (e.g., humiliation, nostalgia; Leyens et al., 2000) and traits that involve cognition and emotion (e.g., analytic, insecure; Haslam, 2006). Anthropomorphizing a nonhuman does not simply involve attributing superficial human characteristics (e.g., a humanlike face or body) to it, but rather attributing essential human characteristics to the agent (namely a humanlike mind, capable of thinking and feeling).
Trust is a multifaceted concept that can refer to belief that another will behave with benevolence, integrity, predictability, or competence (McKnight & Chervany, 2001). Our prediction that anthropomorphism will increase trust centers on this last component of trust in another's competence (akin to confidence) (Siegrist, Earle, & Gutscher, 2003; Twyman, Harvey, & Harries, 2008). Just as a patient would trust a
thoughtful doctor to diagnose cancer more than a thoughtless one, or would rely on a mindful cab driver to navigate through rush hour traffic more than a mindless cab driver, this conceptualization of anthropomorphism predicts that people would trust easily anthropomorphized technology to perform its intended function more than seemingly mindless technology. An autonomous vehicle (one that drives itself), for instance, should seem better able to navigate through traffic when it seems able to think and sense its surroundings than when it seems to be simply mindless machinery. Or a "warbot" intended to kill should seem more lethal and sinister when it appears capable of thinking and planning than when it seems to be simply a computer mindlessly following an operator's instructions. The more technology seems to have humanlike mental capacities, the more people should trust it to perform its intended function competently, regardless of the valence of its intended function (Epley, Caruso, & Bazerman, 2006; Pierce, Kilduff, Galinsky, & Sivanathan, 2013).
This prediction builds on the common association between people's perceptions of others' mental states and of competent action. Because mindful agents appear capable of controlling their own actions, people judge others to be more responsible for successful actions they perform with conscious awareness, foresight, and planning (Cushman, 2008; Malle & Knobe, 1997) than for actions they perform mindlessly (see Alicke, 2000; Shaver, 1985; Weiner, 1995). Attributing a humanlike mind to a nonhuman agent should therefore make the agent seem better able to control its own actions, and therefore better able to perform its intended functions competently. Our prediction also advances existing research on the consequences of anthropomorphism by articulating the psychological processes by which anthropomorphism could affect trust in technology (Nass & Moon, 2000), and by both experimentally manipulating anthropomorphism as well as measuring it as a critical mediator. Some experiments have manipulated the humanlike appearance of robots and assessed measures indirectly related to trust. However, such studies have not measured whether such superficial manipulations actually increase the attribution of essential humanlike qualities to that agent (the attribution we predict is critical for trust in technology; Hancock et al., 2011), and therefore cannot explain factors found ad hoc to moderate the apparent effect of anthropomorphism on trust (Pak, Fink, Price, Bass, & Sturre, 2012). Another study found that individual differences in anthropomorphism predicted differences in willingness to trust technology in hypothetical scenarios (Waytz et al., 2010), but did not manipulate anthropomorphism experimentally. Our experiment is therefore the first to test our theoretical model of how anthropomorphism affects trust in technology.
We conducted our experiment in a domain of practical relevance: people's willingness to trust an autonomous vehicle. Autonomous vehicles (cars that control their own steering and speed) are expected to account for 75% of vehicles on the road by 2040 (Newcomb, 2012). Employing these autonomous features means surrendering personal control of the vehicle and trusting technology to drive safely. We manipulated the ease with which a vehicle, approximated by a driving simulator, could be anthropomorphized by merely giving it independent agency, or by also giving it a name, gender, and a human voice. We predicted that independent agency alone would make the car seem more mindful than a normal car, and that adding further anthropomorphic qualities would make the vehicle seem even more mindful. More important, we predicted that these relative increases in anthropomorphism would increase physiological, behavioral, and psychological measures of trust in the vehicle's ability to drive effectively.
Because anthropomorphism increases trust in the agent's ability to perform its job, we also predicted that increased anthropomorphism of an autonomous agent would mitigate blame for an agent's involvement in an undesirable outcome. To test this, we implemented a virtually unavoidable accident during the driving simulation in which participants were struck by an oncoming car, an accident clearly caused by the other driver. We implemented this to maintain experimental control over participants' experience, because everyone in the autonomous vehicle conditions would get into the same accident, one clearly caused by the other driver. Indeed, when two people are potentially responsible for an outcome, the agent seen to be more competent tends to be credited for a success whereas the agent seen to be less competent tends to be blamed for a failure (Beckman, 1970; Wetzel, 1982). Because we predicted that anthropomorphism would increase trust in the vehicle's competence, we also predicted that it would reduce blame for an accident clearly caused by another vehicle.
Method

One hundred participants (52 female; mean age = 26.39 years) completed this experiment using a National Advanced Driving Simulator. Once in the simulator, the experimenter attached physiological equipment to participants and randomly assigned them to condition: Normal, Agentic, or Anthropomorphic. Participants in the Normal condition drove the vehicle themselves, without autonomous features. Participants in the Agentic condition drove a vehicle capable of controlling its steering and speed (an autonomous vehicle). The experimenter followed a script describing the vehicle's features, suggesting when to use the autonomous features, and describing what was about to happen. Participants in the Anthropomorphic condition drove the same autonomous vehicle, but with additional anthropomorphic features beyond mere agency: the vehicle was referred to by name (Iris), was given a gender (female), and was given a voice through human audio files played at predetermined times throughout the course. The voice files followed the same script used by the experimenter in the Agentic condition, modified where necessary (see Supplemental Online Material [SOM]).

All participants first completed a driving history questionnaire and a measure of dispositional anthropomorphism (Waytz et al., 2010). Scores on this measure did not vary significantly by condition, so we do not discuss them further.
Participants in the Agentic and Anthropomorphic conditions then drove a short practice course to familiarize themselves with the car's autonomous features. Participants could engage these features by pressing buttons on the steering wheel. All participants then drove two courses, each lasting approximately 6 min. After the first course, participants completed a questionnaire (all on 0–10 scales; see SOM for all items) that assessed anthropomorphism, liking, and trust.
Perceived anthropomorphism
Four items measured anthropomorphism, defined as attributing humanlike mental capacities of agency and experience to it (Epley, Waytz, & Cacioppo, 2007; Gray et al., 2007; Waytz et al., 2010). These asked how smart the car was, how well it could feel what was happening around it, how well it could anticipate what was about to happen, and how well it could plan a route. These items were averaged into a composite (α = .89).
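The scoring step described above (averaging items into a composite and checking internal consistency with Cronbach's α) can be sketched in a few lines. This is an illustrative reconstruction using NumPy with made-up ratings, not the authors' analysis code:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_participants, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of item sums
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical 0-10 ratings on the four anthropomorphism items
# (smart, feel, anticipate, plan) for five participants.
ratings = np.array([
    [8, 7, 8, 9],
    [3, 2, 4, 3],
    [6, 6, 5, 7],
    [9, 8, 9, 8],
    [2, 3, 2, 2],
])
composite = ratings.mean(axis=1)  # one anthropomorphism score per participant
alpha = cronbach_alpha(ratings)   # internal consistency of the four items
```

With strongly correlated items like these, α comes out high, mirroring the .89 reported for the actual scale.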
Liking

Four items measured liking: how enjoyable their driving was, how comfortable they felt driving the car, how much participants would like to own a car like this one, and what percentage of cars in 2020 they would like to be [autonomous] like this one. These items were standardized and averaged to form a single composite (α = .90).
Self-reported trust
Eight items measured trust in the vehicle: how safe participants felt they and others would be if they actually owned a car like this one, how much they trust the vehicle to drive in heavy and light traffic conditions, how confident they are about the car driving the next course safely, and their willingness to give up control to the car. These items were standardized and averaged to form a single composite (α = .91).
After approximately 6 min of driving a second course along a rural highway, a vehicle pulled quickly in front of the car and struck their right side. We designed this accident to be unavoidable so that all participants would experience the same outcome (indeed, only one participant, in the Normal condition, avoided it). Ensuring that everyone got into this accident, however, meant that the accident was clearly the other vehicle's fault rather than participants' own vehicle's. Throughout the experiment, we measured participants' heart rate using electrocardiography (ECG) and videotaped their behavior unobtrusively to assess responses to this accident.
Heart rate change
We reasoned that if participants trusted the vehicle, they should be more relaxed in an arousing situation (namely, the accident), showing an attenuated heart rate increase and startle response. We measured heart rate change to the accident as a percentage change of beats per minute for the 20 s immediately following the collision (or until they concluded their simulation), in comparison to a 45-s baseline period immediately following the earlier practice course.
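The percentage-change computation described above amounts to comparing mean beats per minute across the two windows. A minimal sketch with hypothetical numbers (not the authors' ECG pipeline):

```python
import numpy as np

def pct_heart_rate_change(baseline_bpm, event_bpm):
    """Percentage change in mean heart rate from a baseline window to an
    event window (positive values = heart rate increased)."""
    baseline = float(np.mean(baseline_bpm))
    event = float(np.mean(event_bpm))
    return 100.0 * (event - baseline) / baseline

# Hypothetical beats-per-minute samples: a 45-s baseline after the
# practice course vs. the 20 s immediately following the collision.
baseline_window = [72, 74, 71, 73, 75]
collision_window = [88, 92, 90, 86]
change = pct_heart_rate_change(baseline_window, collision_window)
```

On the logic above, a more trusting (more relaxed) participant would show a smaller `change` value after the collision.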
To assess startle response, we first divided our participants into two random samples. We then recruited 42 independent raters from an undergraduate population to watch all videos from one or the other sample and rate how startled each participant appeared during the video (0 = not at all startled to 10 = extremely startled). We then averaged startle ratings for each participant across all of these raters to obtain a startle response measure. Percentage heart rate change and startle were standardized and reverse-scored (multiplied by −1) and then averaged to form a behavioral measure of trust (r(90) = .28, p < .01). To assess overall trust, we averaged all standardized measures of trust (the eight self-report measures and the two behavioral measures) into a single composite (α = .87).
Blame for vehicle
After the accident, all participants also assessed how responsible they, the car, the people who designed the car, and the company that developed the car were for the accident (all on 0–10 scales; see SOM for exact questions). To assess punishment for the accident, participants were asked to imagine that this accident occurred in the real world, with a different driver behind the wheel of their car. Participants reported how strongly they felt that the driver should be sent to jail, how strongly they felt that the car should be destroyed, how strongly they felt that the car's engineer should be punished, and how strongly they felt that the company that designed the car should be punished. The six items measuring the vehicle's responsibility and resulting punishment for a similar accident were standardized and averaged to form a single composite (α = .90).
Finally, we used the videotape mentioned above to measure participants' distraction while driving during the second course, measured as the time spent looking away from the simulator rather than paying attention while driving. Results showed a floor effect with very little distraction across conditions (less than 3% of the overall time in the two autonomous vehicle conditions). See Table 1 for these means as well as means from all analyses below.
Results

All primary analyses involved planned orthogonal contrasts examining differences between the Normal, Agentic, and Anthropomorphic conditions.

Perceived anthropomorphism
As predicted, participants in the Anthropomorphic condition anthropomorphized the vehicle more than those in the Agentic condition, t(97) = 3.21, p = .002, d = .65, who in turn anthropomorphized the vehicle more than those in the Normal condition, t(97) = 7.11, p < .0001, d = 1.44.
Liking

Participants in the Anthropomorphic and Agentic conditions liked the vehicle more than did participants in the Normal condition, t(97) = 3.92, p < .0001, d = .80 and t(97) = 3.29, p = .001, d = .67, but the autonomous vehicle conditions did not differ significantly from each other (p = .55).
Trust

As predicted, on the measure of overall trust, those in the Anthropomorphic condition trusted their vehicle more than did those in the Agentic condition, t(97) = 2.34, p = .02, d = .48, who in turn trusted their vehicle more than those in the Normal condition, t(97) = 4.56, p < .0001, d = .93. For behavioral trust, participants in the Anthropomorphic condition trusted their vehicle more than did those in the Agentic condition, t(97) = 3.36, p = .001, d = .68, and the Normal condition, t(97) = 2.78, p < .01, d = .56, although the Agentic and Normal conditions did not differ significantly (p = .56). For self-reported trust, participants in the Anthropomorphic condition and the Agentic condition did not differ significantly (p = .14), but participants in both the Agentic and Anthropomorphic conditions reported greater trust than participants in the Normal condition, ts(97) = 4.83 and 6.35, respectively, ps < .01, ds = .98 and 1.29. Table 1 reports the self-report measures and the behavioral measures of trust separately.
To assess whether the vehicle's effect on overall trust was statistically mediated by perceived anthropomorphism, we used Preacher and Hayes' (2008) bootstrapping method and coded condition as Normal = 0, Agentic = 1, and Anthropomorphic = 2 (see Hahn-Holbrook, Holt-Lunstad, Holbrook, Coyne, & Lawson, 2011; Legault, Gutsell, & Inzlicht, 2011 for similar analyses). This analysis confirmed that anthropomorphism statistically mediated the relationship between vehicle condition and overall trust in the vehicle (95% CI = .31 to .55; 20,000 resamples; see Fig. 1).
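The logic of this percentile-bootstrap mediation test can be sketched as follows. The paper used the Preacher and Hayes (2008) macro with 20,000 resamples; the version below is only an illustrative reimplementation with simulated data and fewer resamples, and every variable and coefficient in it is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_indirect_effect(x, m, y, n_boot=2_000):
    """Percentile bootstrap CI for the indirect effect a*b, where a is the
    slope of mediator m on x, and b is the coefficient of m predicting
    outcome y while controlling for x."""
    x, m, y = (np.asarray(v, dtype=float) for v in (x, m, y))
    n = len(x)
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                    # resample with replacement
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]                   # path a: x -> m
        design = np.column_stack([np.ones(n), xs, ms]) # y ~ 1 + x + m
        b = np.linalg.lstsq(design, ys, rcond=None)[0][2]  # path b
        estimates[i] = a * b
    return np.percentile(estimates, [2.5, 97.5])

# Simulated data mimicking the design: condition coded 0/1/2,
# anthropomorphism as mediator, overall trust as outcome.
n = 60
condition = rng.integers(0, 3, n).astype(float)
anthro = 2.0 + 1.5 * condition + rng.normal(0, 1, n)
trust = 0.5 * anthro + rng.normal(0, 1, n)
ci_low, ci_high = bootstrap_indirect_effect(condition, anthro, trust)
# A 95% CI that excludes zero indicates a reliable indirect effect,
# as in the reported CI of .31 to .55.
```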
Blame for vehicle
As noted, we programmed the driving simulation so that all participants would experience the same virtually unavoidable accident clearly caused by the other driver, but it is important to keep the nature of the accident in mind. If a trusted, competent driver were hit by another vehicle, one would hold the competent driver less responsible for the accident because it would clearly appear to be the other driver's fault. Thus, we predicted that anthropomorphism would mitigate blame for an accident clearly caused by the other vehicle. It is important to note, however, that our prediction would be different if the vehicle were able to avoid this accident, in which case we would predict that anthropomorphism would increase the tendency to credit the vehicle for this success.

Table 1
Means by condition (Normal, Agentic, Anthropomorphic) for the main dependent measures: anthropomorphism; overall, self-reported, and behavioral (ECG and startle) trust; liking; blame for the vehicle; and distraction (instances and percent of total time). Means that do not share a subscript differ significantly at p < .05.

Degrees of freedom across analyses vary slightly because of missing responses. We measured skin conductance throughout the experiment through electrodes on participants' wrists, but unanticipated artifacts from movement while driving rendered these results impossible to interpret, so we do not report them.
Participants in the Agentic and Anthropomorphic conditions blamed their car more for the accident than did those in the Normal condition, ts(96) = 6.30 and 4.18, respectively, ps < .01, ds = 1.29 and .85. This is consistent with the relationship between agency and perceived responsibility. An object with no agency cannot be held responsible for any actions, and so this comparison is not particularly interesting. More interesting is that participants blamed the vehicle significantly less in the Anthropomorphic condition than in the Agentic condition, t(96) = 2.18, p = .03, d = .44, in which the perceived thoughtfulness of the fully anthropomorphic vehicle mitigated the responsibility that comes from independent agency (given that the accident was clearly caused by the other vehicle). This shows a clear relationship between anthropomorphism and perceptions of responsibility, but the exact nature of that relationship cannot be tested in this particular paradigm because we are unable to create a uniform accident across conditions clearly caused by participants themselves.
General discussion
Technological advances blur the line between human and nonhuman, and this experiment suggests that blurring this line even further could increase users' willingness to trust technology in place of humans. Amongst those who drove an autonomous vehicle, those who drove a vehicle that was named, gendered, and voiced rated their vehicle as having more humanlike mental capacities than those who drove a vehicle with the same autonomous features but without anthropomorphic cues. In turn, those who drove the anthropomorphized vehicle with enhanced humanlike features (name, gender, voice) reported trusting their vehicle even more, were more relaxed in an accident, and blamed their vehicle and related entities less for an accident caused by another driver. These findings provide further support for the theoretical connection between perceptions of mental capacities in others and assessments of competence, trust, and responsibility. Attributing a mind to a machine matters because it could create a machine to which users might entrust their lives.
This finding is also of clear practical relevance given the rapidly changing interface between the technological world and the social world. No longer merely mindless tools, modern technology now taps human social skills directly. People ask their phones for driving directions, restaurant recommendations, and baseball scores. Automated customer service agents help people purchase flights, pay credit card bills, and obtain prescription medicine. Robotic pets even provide social support and companionship, sometimes in the place of actual human companionship (Melson et al., 2009). Our research identifies one important consequence of considering the psychological dimensions of technological design. Even the greatest technology, such as vehicles that drive themselves, is of little benefit if consumers are unwilling to use it.
Finally, our research at this human-technology frontier also informs the inverse effect, in which people are treated more like technology: as objects or relatively mindless machines (Cikara, Eberhardt, & Fiske, 2011; Loughnan & Haslam, 2007). Adding a human voice to technology, for instance, makes people treat it as a more humanlike agent (Takayama & Nass, 2008), which suggests that removing a human voice like one's own from interpersonal communication may make another person seem relatively mindless. Indeed, in one series of recent experiments, participants rated another person as being less mindful (e.g., less thoughtful, less rational) when they read a transcript of an interview than when they heard the audio of the same interview (Schroeder & Epley, 2014). Similarly, verbal accents that differ from one's own trigger prejudice and distrust compared to accents similar to one's own (Anisfeld, Bogo, & Lambert, 1962; Dixon, Mahoney, & Cocks, 2002; Giles & Powesland, 1975; Kinzler, Corriveau, & Harris, 2011; Kinzler, Dupoux, & Spelke, 2007; Lev-Ari & Keysar, 2010), an effect that may be partially mediated by differences in the attribution of humanlike mental states.
Few divides in social life are more important than the one between us and them, between human and nonhuman. Perceptions of this divide are not fixed but flexible. Understanding when technology crosses that divide to become more humanlike matters not only for how people treat increasingly humanlike technology, but also for understanding why people treat other humans as mindless objects.
Acknowledgments

This research was funded by the University of Chicago's Booth School of Business and a grant from the General Motors Company. We thank Julia Hur for assistance with data coding.
Appendix A. Supplementary materials
Supplementary materials to this article can be found online at http://dx.doi.
Fig. 1. The results of a mediation analysis of condition on overall trust.

References

Alicke, M. D. (2000). Culpable control and the psychology of blame. Psychological Bulletin, 126, 556.
Anisfeld, M., Bogo, N., & Lambert, W. E. (1962). Evaluational reactions to accented English speech. Journal of Abnormal and Social Psychology, 65, 223–231.
Beckman, L. (1970). Effects of students' performance on teachers' and observers' attributions of causality. Journal of Educational Psychology, 61, 76–82.
Cikara, M., Eberhardt, J. L., & Fiske, S. T. (2011). From agents to objects: Sexist attitudes and neural responses to sexualized targets. Journal of Cognitive Neuroscience, 23.
Cushman, F. (2008). Crime and punishment: Distinguishing the roles of causal and intentional analyses in moral judgment. Cognition, 108, 353–380.
Dennett, D. C. (1978). Brainstorms: Philosophical essays on mind and psychology. Cambridge: Bradford Books/MIT Press.
Dixon, J. A., Mahoney, B., & Cocks, R. (2002). Accents of guilt? Effects of regional accent, race, and crime type on attributions of guilt. Journal of Language and Social Psychology.
Epley, N., Caruso, E., & Bazerman, M. H. (2006). When perspective taking increases taking: Reactive egoism in social interaction. Journal of Personality and Social Psychology, 91.
Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114, 864.
Giles, H., & Powesland, P. F. (1975). Speech style and social evaluation (Vol. 7). London: Academic Press.
Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315.
Hahn-Holbrook, J., Holt-Lunstad, J., Holbrook, C., Coyne, S. M., & Lawson, E. T. (2011). Maternal defense: Breast feeding increases aggression by reducing stress. Psychological Science, 22, 1288–1295.
Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y., De Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors: The Journal of the Human Factors and Ergonomics Society, 53, 517–527.
Haslam, N. (2006). Dehumanization: An integrative review. Personality and Social Psychology Review, 10, 252–264.
Kinzler, K. D., Corriveau, K. H., & Harris, P. L. (2011). Children's selective trust in native-accented speakers. Developmental Science, 14, 106–111.
Kinzler, K. D., Dupoux, E., & Spelke, E. S. (2007). The native language of social cognition. Proceedings of the National Academy of Sciences, 104, 12577–12580.
Legault, L., Gutsell, J. N., & Inzlicht, M. (2011). Ironic effects of antiprejudice messages: How motivational interventions can reduce (but also increase) prejudice. Psychological Science, 22, 1472–1477.
Lev-Ari, S., & Keysar, B. (2010). Why don't we believe non-native speakers? The influence of accent on credibility. Journal of Experimental Social Psychology, 46, 1093–1096.
Leyens, J. P., Paladino, P. M., Rodriguez-Torres, R., Vaes, J., Demoulin, S., Rodriguez-Perez, A., et al. (2000). The emotional side of prejudice: The attribution of secondary emotions to ingroups and outgroups. Personality and Social Psychology Review, 4, 186–197.
Locke, J. (1997). An essay concerning human understanding. Harmondsworth, England: Penguin Books (Original work published 1841).
Loughnan, S., & Haslam, N. (2007). Animals and androids: Implicit associations between social categories and nonhumans. Psychological Science, 18, 116–121.
Malle, B. F., & Knobe, J. (1997). The folk concept of intentionality. Journal of Experimental Social Psychology, 33, 101–121.
McKnight, D. H., & Chervany, N. L. (2001). Trust and distrust definitions: One bite at a time. In Trust in cyber-societies (pp. 27–54). Springer Berlin Heidelberg.
Melson, G. F., Kahn, P. H., Beck, A., Friedman, B., Roberts, T., Garrett, E., et al. (2009). Children's behavior toward and understanding of robotic and living dogs. Journal of Applied Developmental Psychology, 30, 92–102.
Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56, 81–103.
Newcomb, D. (2012). Retrieved from 9/18/tech/
Pak, R., Fink, N., Price, M., Bass, B., & Sturre, L. (2012). Decision support aids with anthropomorphic characteristics influence trust and performance in younger and older adults. Ergonomics, 55, 1059–1072.
Pierce, J. R., Kilduff, G. J., Galinsky, A. D., & Sivanathan, N. (2013). From glue to gasoline: How competition turns perspective takers unethical. Psychological Science, 24.
Preacher, K. J., & Hayes, A. F. (2008). Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models. Behavior Research Methods.
Schroeder, J., & Epley, N. (2014). Speaking louder than words: Voice reveals the presence of a humanlike mind. Unpublished manuscript, University of Chicago.
Shaver, K. G. (1985). The attribution of blame: Causality, responsibility, and blameworthiness. New York: Springer-Verlag.
Siegrist, M., Earle, T., & Gutscher, H. (2003). Test of a trust and confidence model in the applied context of electromagnetic field (EMF) risks. Risk Analysis, 23, 705–716.
Takayama, L., & Nass, C. (2008). Driver safety and information from afar: An experimental driving simulator study of wireless vs. in-car information services. International Journal of Human Computer Studies, 66, 173–184.
Twyman, M., Harvey, N., & Harries, C. (2008). Trust in motives, trust in competence: Separate factors determining the effectiveness of risk communication. Judgment and Decision Making, 3, 111–120.
Waytz, A., Cacioppo, J., & Epley, N. (2010). Who sees human? The stability and importance of individual differences in anthropomorphism. Perspectives on Psychological Science.
Weiner, B. (1995). Judgments of responsibility: A foundation for a theory of social conduct. New York, NY: Guilford Press.
Wetzel, C. G. (1982). Self-serving biases in attribution: A Bayesian analysis. Journal of Personality and Social Psychology, 43, 197–209.
... Previous research showed that providing a visual appearance, specifically an anthropomorphic one, might be a way of ensuring trust toward these agents (Meng et al., 2021;Ekman et al., 2018;Waytz et al., 2014). Similar to human-to-human relationships, in which people tend to judge others' social warmth, honesty, trustworthiness, and intellectual competence based on facial appearance or attire (Zebrowitz & Montepare, 2008;Willis & Todorov, 2006;Smith et al., 2018), human -virtualagent interactions are greatly influenced by visual appearance. ...
... Some researchers have used a single-question measurement (Haesler et al., 2018; Rossi et al., 2018; Shu et al., 2018), or created multiple-item scales specifically for their studies (Nordheim et al., 2019; Byrne & Marín, 2018; Linnemann et al., 2018; Waytz et al., 2014; Garcia et al., 2015; Holthausen et al., 2020). Others re-used scales from psychological research (Martelaro et al., 2016; Sebo et al., 2019; Herse et al., 2018). ...
... Finally, the categories of basic models were rated in terms of trust, likability, and anthropomorphism. The scale from Waytz et al. (2014) was used as it was specifically developed for the context of highly automated driving and included measurement for both trust and anthropomorphism. ...
This article considers the visual appearance of a virtual agent designed to take over the driving task in a highly automated car, to answer the question of which visual appearance is appropriate for a virtual agent in a driving role. The authors first selected five models of visual appearance thanks to a picture sorting procedure (N = 19). Then, they conducted a survey-based study (N = 146) using scales of trust, anthropomorphism, and likability to assess the appropriateness of those five models from an early-prototyping perspective. They found that human and mechanical-human models were more trusted than other selected models in the context of highly automated cars. Instead, animal and mechanical-animal ones appeared to be less suited to the role of a driving assistant. Learnings from the methodology are discussed, and suggestions for further research are proposed.
... For example, making the brand features look like human faces or bodies can lead to a stronger tendency of anthropomorphism, making these brands more easily perceived as human beings [4][5]. Secondly, language-based marketing strategies like giving products human names [6][7], genders [8], or describing the products in the first person can increase the consumers' tendency to anthropomorphize a brand. Thirdly, such tendency can also be reinforced by expressing specific meanings about brands through rhetorical devices, such as visual or linguistic metaphors or similes [1]. ...
... On the one hand, power perception also affects consumers' perception activities such as loneliness and risk perception. Individuals with high power perception have less need for social relationships and are therefore less likely to feel lonely [6]. They believe that they have the ability to control anthropomorphized products and lower risk perception, therefore they have a stronger intention to use anthropomorphized products. ...
... Furthermore, existing studies on the relationship between power perception and brand anthropomorphism are not clear and specific. For example, power perception boosts people's self-confidence, improves their risk-taking behaviors [6], and reduces risk aversion. Power perception affects consumers' shift in choice of brands [22]. ...
This paper explores the mechanism and boundary conditions for the effect of matching anthropomorphized brand image and individual power perception on consumers’ purchasing intention. Using a Stereotype Content Model, this paper divides brand anthropomorphism into warmth-related and competence-related anthropomorphized images and adopts different methods to activate consumers’ power perception for discussion and verification. The results of the three experiments show that consumers with low power perception prefer warmth-related anthropomorphized brands while those with higher power perception lean towards competence-related ones. Matching high (low) power perception and types of anthropomorphism is mediated by an exchange relationship (communal relationship). The above effects exist only in the context of low perceived risk. When perceived risk is high, regardless of power perception, consumers all prefer competence-related anthropomorphized brands. This paper is of theoretical and practical significance as it not only enriches the research into brand anthropomorphism, but also provides guidance for tailoring strategies of brand anthropomorphism.
... Future work should consider incorporating both individuals' traits and additional robot perceptions such as anthropomorphism and mind perception to build out a fuller predictive model for robot humanization. There is likely a significant relationship between one's tendency to anthropomorphize and perceive mind in robots and their inclination to humanize them, as research has found that anthropomorphism is linked to trust in automated vehicles [99] and that there is individual variation in the tendency to anthropomorphize [100]. Further, the influence of individuals' feelings of autonomy and locus of control should be explored in tandem with other ...
This study examines facets of robot humanization, defined as how people think of robots as social and human-like entities through perceptions of liking, human-likeness, and rights entitlement. The current study investigates how different trait differences in robots (gender, physical humanness, and relational status) and participants (trait differences in past robot experience, efficacy, and personality) together influence humanization perceptions. Findings show that the robots’ features were less influential than participants’ individual traits. Specifically, participants’ prior real-life exposure to robots and perceived technology competence were positively related to robot humanization, while individuals with higher internal loci of control and negative evaluations of robots in media were less inclined to humanize robots. The implications of these findings for understanding the unfolding “relational turn” in human-machine communication are then considered: specifically, at present, it appears that technological features matter less than people’s ontological understanding of social robots in shaping their humanization perceptions.
... Another AI characteristic we did not consider in our design is the extent of AI's anthropomorphism. Whether artificial agents assume a humanoid appearance is likely to impact people's reactions to them and to their productions (Chamberlain et al., 2018; Glikson & Woolley, 2020; Waytz et al., 2014). This is particularly relevant when considering that people tend to relate to artificial agents in a very similar way as to other humans, as suggested by folk psychology (de Graaf & Malle, 2017; Thellman et al., 2017). ...
With artificial intelligence (AI) increasingly involved in the creation of organizational and commercial artifacts, human evaluators’ role as creativity gatekeepers of AI-produced artifacts will become critical for innovation processes. However, when humans evaluate creativity, their judgment is clouded by biases triggered by the characteristics of the creator. Drawing from folk psychology and algorithm aversion research, we examine whether the identity of the producer of a given artifact as artificial intelligence (AI) or human is a source of bias affecting people’s creativity evaluation of such artifact and what drives this effect. With four experimental studies (N = 2,039), of which two were pre-registered, using different experimental designs and evaluation targets, we found that people sometimes—but not always—ascribe lower creativity to a product when they are told that the producer is an AI rather than a human. In addition, we found that people consistently perceive generative AI to exert less effort than humans in the creation of a given artifact, which drives the lower creativity ratings ascribed to generative AI producers. We discuss implications of these findings for organizational creativity and innovation in the context of human-AI interaction.
Group members often reason egocentrically, believing that they deserve more than their fair share of group resources. Leading people to consider other members’ thoughts and perspectives can reduce these egocentric (self-centered) judgments such that people claim that it is fair for them to take less; however, the consideration of others’ thoughts and perspectives actually increases egoistic (selfish) behavior such that people actually take more of available resources. A series of experiments demonstrates this pattern in competitive contexts in which considering others’ perspectives activates egoistic theories of their likely behavior, leading people to counter by behaving more egoistically themselves. This reactive egoism is attenuated in cooperative contexts. Discussion focuses on the implications of reactive egoism in social interaction and on strategies for alleviating its potentially deleterious effects.
When perceiving, explaining, or criticizing human behavior, people distinguish between intentional and unintentional actions. To do so, they rely on a shared folk concept of intentionality. In contrast to past speculative models, this article provides an empirically based model of this concept. Study 1 demonstrates that people agree substantially in their judgments of intentionality, suggesting a shared underlying concept. Study 2 reveals that when asked to define directly the term "intentional," people mention four components of intentionality: desire, belief, intention, and awareness. Study 3 confirms the importance of a fifth component, namely skill. In light of these findings, the authors propose a model of the folk concept of intentionality and provide a further test in Study 4. The discussion compares the proposed model to past ones and examines its implications for social perception, attribution, and cognitive development.
Perspective taking is often the glue that binds people together. However, we propose that in competitive contexts, perspective taking is akin to adding gasoline to a fire: It inflames already-aroused competitive impulses and leads people to protect themselves from the potentially insidious actions of their competitors. Overall, we suggest that perspective taking functions as a relational amplifier. In cooperative contexts, it creates the foundation for prosocial impulses, but in competitive contexts, it triggers hypercompetition, leading people to prophylactically engage in unethical behavior to prevent themselves from being exploited. The experiments reported here establish that perspective taking interacts with the relational context-cooperative or competitive-to predict unethical behavior, from using insidious negotiation tactics to materially deceiving one's partner to cheating on an anagram task. In the context of competition, perspective taking can pervert the age-old axiom "do unto others as you would have them do unto you" into "do unto others as you think they will try to do unto you."
Anthropomorphism is a far-reaching phenomenon that incorporates ideas from social psychology, cognitive psychology, developmental psychology, and the neurosciences. Although commonly considered to be a relatively universal phenomenon with only limited importance in modern industrialized societies—more cute than critical—our research suggests precisely the opposite. In particular, we provide a measure of stable individual differences in anthropomorphism that predicts three important consequences for everyday life. This research demonstrates that individual differences in anthropomorphism predict the degree of moral care and concern afforded to an agent, the amount of responsibility and trust placed on an agent, and the extent to which an agent serves as a source of social influence on the self. These consequences have implications for disciplines outside of psychology including human–computer interaction, business (marketing and finance), and law. Concluding discussion addresses how understanding anthropomorphism not only informs the burgeoning study of nonpersons, but how it informs classic issues underlying person perception as well.
Criticizes past research on self-serving biases in light of a Bayesian model of trait-attribution processes. The Bayesian model is a prescriptive standard against which bias can be measured, and a number of results interpreted as self-serving are shown to be perfectly logical from a Bayesian perspective. The Bayesian model suggests that biases in trait attribution may occur at different stages in the inference process, thereby obscuring or negating each other.