Human Security Robot Interaction and Anthropomorphism: An
Examination of Pepper, RAMSEE, and Knightscope Robots
Xin Ye1 and Lionel P. Robert Jr.2
Abstract— The rapid growth in the use of security robots
makes it critical to better understand their interactions with
humans. The impacts of anthropomorphism and interaction sce-
narios were examined via a 3 × 2 between-subjects experiment.
Sixty participants were randomly assigned to interact with one
of three security robots (Knightscope, RAMSEE, or Pepper) in
either an indoor hallway or an outdoor parking lot scenario in
a virtual reality cave. Significant differences emerged only between Pepper and Knightscope: Pepper was rated higher in anthropomorphism, ability, integrity, and desire to use. The interaction scenario had no effect.
I. INTRODUCTION
Security robots are being increasingly employed across
various sectors, including public law enforcement and private
security agencies, to safeguard individuals and property. In
this paper, we define security robots as robots specifically
designed to protect people and property by deterring illicit activities through security tasks such as monitoring, notifying security agents of emergencies, and maintaining order
within a designated area. Security robots provide a unique
solution to contemporary security challenges like patrolling
and surveillance [1], [2], [3]. Additionally, they offer a cost-
effective approach to security assignments involving physical
danger, thereby reducing the need for human personnel to be
exposed to hazardous situations [3], [4], [5], [6], [7].
Current security robots exhibit a wide range of morpholo-
gies with varying degrees of anthropomorphism and are
utilized in both indoor and outdoor environments [8]. For
instance, robots like RoboGuard [9] and Knightscope [8]
lack human-like morphological features, while others such
as RAMSEE [10] and Captain C [11] possess some human-
like characteristics. Additionally, there are humanoid security
robots like RobotMan [7] and NCCU Security Warrior [5].
Although anthropomorphism has been shown to promote
the acceptance of robots [12], [13], [14], it is not clear if simi-
lar effects will carry over to the acceptance of security robots.
More specifically, past research has shown that the impact
of anthropomorphism on the acceptance of robots can vary
greatly based on the robot’s primary purpose and interaction
context [15], [16], [17], [18]. Consequently, determining the
most appropriate anthropomorphic morphological attributes
for security robots remains a challenge.
1Xin Ye is a graduate student at the School of Information, University of Michigan, 105 S State St, Ann Arbor, MI 48109, United States of America xinye@umich.edu
2Lionel P. Robert Jr. is a professor at the School of Information and a core faculty member at the Michigan Robotics Institute, University of Michigan, 105 S State St, Ann Arbor, MI 48109, United States of America lprobert@umich.edu
This paper contributes to the literature by offering in-
sights into whether anthropomorphism can be used
to promote security robot acceptance. To accomplish this,
we conduct a between-subjects experiment employing a 3
(robot type: human-like robot, character-like robot, mechan-
ical robot) × 2 (scenario: indoor hallway, outdoor parking
lot) design. This study provides new insights into the impact
of the anthropomorphic design of security robots commonly
used in different interaction scenarios.
II. BACKGROUND
A. Anthropomorphism and Security Robots
Robot anthropomorphism can be defined as “the representation of robots as humans and/or to attribute human-like qualities to robots” [19, p. 247]. A common approach
to humanizing robots involves manipulating their overall
physical appearance [19], [20], [21]. Previous studies have
demonstrated that anthropomorphic design has a positive
effect on human-related outcomes [17], particularly in social
application domains [22], [23], [24]. For example, Barco et
al. [25] manipulated three types of robots (anthropomorphic,
zoomorphic, and caricatured) and found that people felt
higher psychological closeness towards the anthropomorphic
robot. Zanatto et al. [26] used robots NAO and Baxter to
manipulate human-likeness in robot appearance and discov-
ered significant effects on robot perceptions (such as likeabil-
ity, perceived safety, perceived intelligence) and acceptance
(trust and compliance). Natarajan and Gombolay [27] also
found that participants’ perceived anthropomorphism had a
significant positive relationship with trust.
Despite the potential importance of anthropomorphism,
we know very little regarding its influence on interactions
with security robots. For example, based on our review, only
one study, Li et al. [28], looked at the interaction between
a robot’s appearance and its security task. Researchers de-
signed three types of robot appearances (anthropomorphic,
zoomorphic, and machine-like) to perform various tasks.
However, the study did not find any significant results in
participants’ performance (active response and engagement),
robot acceptance (trust), or perceptions of robots (perceived
likeability and satisfaction). Therefore, to the best of our knowledge, no direct connection has been established between anthropomorphism and the acceptance of security robots. Nevertheless, given its positive effects in other domains, we expect anthropomorphism to promote the acceptance of security robots as well.
Hypothesis 1: Anthropomorphism will increase the ac-
ceptance of security robots.
B. Interaction Scenarios
When investigating interactions between humans and se-
curity robots, another important factor that may be easily
overlooked by researchers is the interaction scenario. The
preference for anthropomorphism in robots is highly context-
sensitive, as different application domains and task types may
elicit different expectations towards robots [15], [17], [18].
For instance, a study by Roesler et al. [29] investigated the
impact of anthropomorphic design on industrial robots and
found that highly anthropomorphic robots were perceived as
less reliable. Similarly, Lohse et al. [30] discovered that a
machine-like robot was preferred over a human-like robot for
tasks with low sociability. In another study, Lin et al. [31]
investigated the contextual factors influencing participants’
trust in security robots and found that trust was higher when
the robot’s decision matched the contextual danger cues.
As future interactions will occur in various dynamic envi-
ronments, such as campuses, lobbies, office buildings, secure
doors, and market checkpoints [32], [33], scenario-based
analysis is crucial. For instance, Lyons et al. [33] conducted
a questionnaire study to examine participants’ desired use
of security robots in different contexts. They found greater agreement between men and women on the use of security robots in indoor “opt-in” locations that people choose to enter, such as homes, than in open public places where people have no such choice. Therefore, anthropomorphism should have a stronger influence in outdoor public settings than in indoor settings.
Hypothesis 2: Interaction scenario will moderate the im-
pact of anthropomorphism; the impact of anthropomorphism
will be stronger in an outdoor rather than indoor setting.
III. METHOD
To address our research questions, we conducted a lab-
oratory experiment exploring the effect of robot type and
interaction scenario on human-security robot interaction. For
the purpose of this experiment, we chose three security
robots with distinct morphological features to create varying
levels of anthropomorphism. A 3 (robot type: human-like
robot, character-like robot, mechanical robot) × 2 (scenario:
indoor hallway, outdoor parking lot) between-subjects design
was employed. This study was approved by the University
of Michigan Institutional Review Board.
A. Participants
Sixty-nine participants from the University of Michigan
were recruited. The study lasted 30 to 40 minutes, and participants received $20 for their participation.
All participants met the inclusion criteria: at least 18 years
old, fluent English speakers, and no history of Virtual Reality
(VR) motion sickness. Nine participants were excluded from
the analysis due to the failure of the Wizard-of-Oz method or
because their overall questionnaire scores were beyond 2.5
standard deviations from the mean. The remaining 60 valid participants (30 female, 30 male) ranged in age from 19 to 46 years (M = 27, SD = 7.08). Participants were randomly assigned to conditions, with gender balanced across conditions.
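As an illustration, the outlier-exclusion rule can be expressed as a simple screening step. The following is a minimal sketch assuming pandas and a hypothetical overall_score column holding each participant's mean questionnaire score; it is not the authors' actual analysis script.

```python
import pandas as pd

# Hypothetical data frame: one row per participant, with a precomputed
# overall questionnaire score (the column name is an assumption).
df = pd.read_csv("responses.csv")

# Standardize the overall score and keep participants within
# 2.5 standard deviations of the sample mean, mirroring the
# exclusion criterion described above.
z = (df["overall_score"] - df["overall_score"].mean()) / df["overall_score"].std()
valid = df[z.abs() <= 2.5]

print(f"Retained {len(valid)} of {len(df)} participants")
```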
B. Apparatus
The experiment was conducted in the Michigan Immersive
Digital Experience Nexus (M.I.D.E.N), a 10 x 10 x 10-foot
(3.048 x 3.048 x 3.048-meter) immersive audio-visual “cave”
environment featuring 3D stereoscopic projection on the left,
front, and right surfaces. This setup allowed participants to
walk freely within the physical boundaries of the space. The
VR environments were modeled and programmed using Epic
Games Unreal Engine version 4.27, simulating three security
robots (Pepper, RAMSEE, and Knightscope) in two different
scenarios (an indoor hallway and an outdoor parking lot),
as shown in Figure 1. The robots’ voices were generated
using text-to-speech synthesis with the Microsoft “David” voice. Volfoni active-stereo shutter glasses paired with a Vicon motion-capture system were used in this experiment. Participants wore the VR glasses in one of the two VR scenarios and interacted with one of the three security robots.
Fig. 1. Panoramic Pictures of the Outdoor Parking Lot Scenario (top) and
Indoor Hallway Scenario (bottom)
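To illustrate how such voice lines can be produced, the sketch below pre-renders a dialogue line with an offline text-to-speech engine. It assumes the pyttsx3 library on Windows, where "Microsoft David" is a stock SAPI5 voice; this is a plausible reconstruction, not the project's actual tooling, and the dialogue line is hypothetical.

```python
import pyttsx3

# Initialize the platform text-to-speech engine (SAPI5 on Windows).
engine = pyttsx3.init()

# Select the Microsoft "David" voice if it is installed.
for voice in engine.getProperty("voices"):
    if "David" in voice.name:
        engine.setProperty("voice", voice.id)
        break

# Render one robot dialogue line to a WAV file for playback in the
# VR environment (the line text is a hypothetical example).
engine.save_to_file("Hello, I am the security robot patrolling this area.",
                    "robot_line_01.wav")
engine.runAndWait()
```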
C. Experimental Design
This study examines the potential impact of two factors: robot type and interaction scenario. To test our hypotheses, we selected three distinct robot types, each with
different anthropomorphic morphological features. The Pep-
per robot, developed by SoftBank Robotics, is a human-like
robot characterized by its highly anthropomorphic design.
As shown in Fig. 2, Pepper features a human-like body,
comprising a torso, a head, and two arms mounted on a
mobile omnidirectional base. The RAMSEE robot, developed
by Gamma 2 Robotics, is a character-like robot, possessing
a moderately anthropomorphic design. This security robot
is composed of a torso with an LCD screen displaying
the virtual face. Unlike Pepper, RAMSEE lacks a human-
shaped head and arms. Lastly, the Knightscope K5 robot is a mechanical robot designed by Knightscope. This autonomous machine exhibits a streamlined, conical, canister-like body that lacks any discernible anthropomorphic features. In selecting suitable robots, we chose a popular
human-like robot and two commonly used security robots,
each with varying levels of anthropomorphism based on the
anthropomorphic robot database [34]. Pepper has the highest
anthropomorphism with an overall human likeness score of
42.17. The scores for RAMSEE and Knightscope were not
listed in the database and had to be calculated using the site’s
Robot Human-Likeness Predictor. RAMSEE received a score
of 13.77, indicating a middle anthropomorphism level, while
Knightscope scored lowest in anthropomorphism with 3.96.
Two interaction scenarios were used: an indoor hallway
scenario and an outdoor parking lot scenario. These scenarios
were chosen because they are common deployment locations
for security robots and allowed us to evaluate participant
reactions in realistic settings. To ensure consistency in the
experiment, in the initial patrolling task, all robots were
programmed to move along the same patrol trajectory with
the same movements. Additionally, the height of each robot
was controlled to rule out height as a confound. A Wizard-
of-Oz setup [35] was employed to control the robot’s inter-
action dialogues with participants, with the same researcher
controlling the security robot from an unseen location.
Fig. 2. Pepper (Left), RAMSEE (Middle), and Knightscope K5 (Right)
D. Task and Procedure
Participants were guided to an interview room and pro-
vided with a brief introduction to the experiment and the
security robot. Upon signing a consent form, participants
completed preliminary questionnaires and were guided to the
experimental room to interact with the security robot.
Throughout the experiment, researchers remotely operated
the robots using the Wizard-of-Oz approach. Participants
were instructed to complete a series of tasks during their
interaction with the robot. In the initial phase, the security
robot patrolled a predetermined route, detected the partici-
pant, became active, and approached the participant. It then
briefly introduced itself and engaged in a short conversation.
In the second phase, the security robot executed an access
control task by inquiring about the participants’ identities,
such as whether they were students or employees at the
University of Michigan. It subsequently requested to see their
identification, which determined their access authorization.
Once authorized, the security robot initiated the third phase.
It first reminded participants that masks were recommended in the area and then provided information on the benefits of wearing masks and where masks were available. During the fourth phase, the security robot asked
participants if they had witnessed any suspicious activity
in the vicinity. Finally, it conducted an emotion detection
task by posing questions such as, “You seem a little anxious or worried; is everything okay?” Throughout the experiment,
participants were encouraged to freely communicate with the
security robot, which responded accordingly based on their
diverse reactions. After the interaction phase, participants
returned to the interview room and completed a set of
post-questionnaires on an iPad using the Qualtrics survey
platform. Subsequently, they were invited to participate in a
semi-structured interview and could withdraw from the study
at any point.
Fig. 3. Participant wearing VR glasses interacting with a security robot.
E. Measures
Demographic information from the participants was col-
lected. Trust was measured using a 4-item questionnaire
adapted from [36], [37]. Trustworthiness was evaluated with an adapted scale based on [38] comprising three dimensions:
ability, integrity, and benevolence. Perception of robots was
assessed by the Godspeed questionnaire [39], which mea-
sures anthropomorphism, perceived intelligence, likability,
and perceived safety. Desire to use was measured using a
modified 5-point Likert item based on [33].
IV. RESULTS
In this section, we present the quantitative results of our
study. We utilized ANOVAs to examine the main effects of
robot type and scenario on robot acceptance and perceptions,
as well as their interaction. A significance threshold of
0.05 was applied. For all significant main effects, post hoc
comparisons were conducted using the Tukey correction.
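For readers who wish to reproduce this style of analysis, the sketch below runs a two-way between-subjects ANOVA with partial eta squared and Tukey-corrected post hoc comparisons. It assumes the pingouin library and hypothetical column names (robot, scenario, trust); it illustrates the procedure rather than reproducing the authors' actual script.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per participant with the two
# factor levels and a dependent measure (column names are assumptions).
df = pd.read_csv("study_data.csv")

# 3 (robot) x 2 (scenario) between-subjects ANOVA;
# "np2" reports partial eta squared as the effect size.
aov = pg.anova(data=df, dv="trust", between=["robot", "scenario"], effsize="np2")
print(aov[["Source", "F", "p-unc", "np2"]])

# Tukey-corrected post hoc comparisons for a significant main effect.
posthoc = pg.pairwise_tukey(data=df, dv="trust", between="robot")
print(posthoc[["A", "B", "diff", "p-tukey"]])
```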
A. Measurement Check
The reliability of the questionnaires was checked: trust (α = 0.853), ability (α = 0.830), integrity (α = 0.851), benevolence (α = 0.787), likability (α = 0.904), and perceived intelligence (α = 0.791) all exceeded the recommended 0.7 threshold [40], [41]. The reliability of perceived safety was α = 0.640 after we deleted the first item. The reliability of perceived anthropomorphism was α = 0.561.
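As a sketch of how these reliabilities can be computed, the snippet below applies Cronbach's alpha to one scale using pingouin; the item column names are assumptions for illustration.

```python
import pandas as pd
import pingouin as pg

# Hypothetical per-participant item responses (column names assumed).
df = pd.read_csv("study_data.csv")

# Cronbach's alpha for the 4-item trust scale.
trust_items = df[["trust_1", "trust_2", "trust_3", "trust_4"]]
alpha, ci = pg.cronbach_alpha(data=trust_items)
print(f"Trust: alpha = {alpha:.3f}, 95% CI = {ci}")
```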
B. Manipulation Check
To confirm that participants perceived the robots as differing in anthropomorphism, we examined participants' perceived anthropomorphism using the Godspeed questionnaire. Significant differences were observed among the three robot types (F = 3.12, p = 0.05, ηₚ² = 0.099), with mean scores of 2.07 (SD = 0.44) for the Knightscope robot, 2.34 (SD = 0.56) for the RAMSEE robot, and 2.50 (SD = 0.67) for the Pepper robot. A post hoc test revealed a significant difference between the Knightscope and Pepper robots (p = 0.04), indicating that the perceived anthropomorphism of the Pepper robot was significantly higher than that of the Knightscope robot. However, no significant difference was found between the RAMSEE robot and either the Pepper robot (p = 0.64) or the Knightscope robot (p = 0.29).
C. Trust
Robot type was not significant, F(2,54) = 1.75, p = 0.18, ηₚ² = 0.061, indicating no difference in trust among the Pepper robot (M = 5.11, SD = 1.10), the RAMSEE robot (M = 5.12, SD = 0.86), and the Knightscope robot (M = 4.54, SD = 1.36). The main effect of scenario was also not significant, F(1,54) = 0.07, p = 0.79, ηₚ² = 0.001, nor was the robot type × scenario interaction, F(2,54) = 0.42, p = 0.66, ηₚ² = 0.015.
D. Trustworthiness
1) Ability: As depicted in Figure 4, robot type exerted a significant impact on the perceived ability of the security robots, F(2,54) = 3.50, p = 0.04, ηₚ² = 0.115. Post hoc analysis revealed that participants perceived the Pepper robot to have higher ability than the Knightscope robot (p = 0.05), suggesting that a human-like robot elicits a higher perception of ability than a mechanical robot. We also found a marginally significant difference between the RAMSEE robot and the Knightscope robot (p = 0.09), indicating a trend toward people perceiving the character-like robot as having higher ability than the mechanical robot. The difference between the RAMSEE robot and the Pepper robot (p = 0.98) was not significant. The main effect of scenario was not significant, F(1,54) = 0.35, p = 0.56, ηₚ² = 0.006, nor was the robot type × scenario interaction, F(2,54) = 0.21, p = 0.81, ηₚ² = 0.008.
Fig. 4. Effects of anthropomorphism on ability, integrity, and desire to
use. Error bars denote 1 standard error.
2) Integrity: A main effect of robot type on the perceived integrity of the security robots was observed, as shown in Figure 4, F(2,54) = 3.60, p = 0.03, ηₚ² = 0.118. Post hoc comparisons found marginally significant differences between the Knightscope robot and the Pepper robot (p = 0.06), as well as between the Knightscope robot and the RAMSEE robot (p = 0.07). This suggests a trend in which the integrity of the Knightscope robot (M = 4.16, SD = 1.28) was lower than that of the Pepper robot (M = 4.97, SD = 0.97) and the RAMSEE robot (M = 4.97, SD = 0.98). The main effect of scenario (F(1,54) = 0.45, p = 0.50, ηₚ² = 0.008) and the robot type × scenario interaction (F(2,54) = 0.07, p = 0.94, ηₚ² = 0.002) were not significant.
3) Benevolence: No significant differences in participants' perceived benevolence were found among the Knightscope robot (M = 4.86, SD = 1.03), the RAMSEE robot (M = 5.07, SD = 1.09), and the Pepper robot (M = 5.00, SD = 1.24), F(2,54) = 0.20, p = 0.82, ηₚ² = 0.007. The scenario did not influence benevolence, F(1,54) = 0.73, p = 0.40, ηₚ² = 0.013. The robot type × scenario interaction was not significant, F(2,54) = 0.10, p = 0.90, ηₚ² = 0.004.
E. Perceptions of the Robots
1) Likeability: The main effect of robot type (F(2,54) = 2.22, p = 0.12, ηₚ² = 0.076), the main effect of scenario (F(1,54) = 0.71, p = 0.40, ηₚ² = 0.013), and their interaction (F(2,54) = 0.65, p = 0.52, ηₚ² = 0.024) on likeability were all not significant.
2) Perceived Intelligence: Participants' perceived intelligence of the three robots showed no significant differences (F(2,54) = 1.33, p = 0.27, ηₚ² = 0.047), with neither scenario (F(1,54) = 0.71, p = 0.40, ηₚ² = 0.013) nor the scenario × robot type interaction (F(2,54) = 0.70, p = 0.50, ηₚ² = 0.025) having a significant impact.
3) Perceived Safety: Robot type was not significant (F(2,54) = 2.86, p = 0.07, ηₚ² = 0.096). Additionally, the scenario did not exert a significant influence on perceived safety (F(1,54) = 0.21, p = 0.65, ηₚ² = 0.004). The robot type × scenario interaction was also not significant (F(2,54) = 0.80, p = 0.45, ηₚ² = 0.029).
F. Desire to Use
As shown in Figure 4, robot type had a statistically significant impact on participants' desire to use the security robot (F(2,54) = 4.08, p = 0.02, ηₚ² = 0.131). Post hoc analysis revealed that participants significantly preferred the Pepper robot over the Knightscope robot (p = 0.04). There was also a trend toward preferring the RAMSEE robot over the Knightscope robot (p = 0.06). However, the comparison between the RAMSEE robot and the Pepper robot (p = 0.99) was not statistically significant. Additionally, the desire to use the security robot showed no difference between the outdoor parking lot and indoor hallway scenarios (F(1,54) = 0.22, p = 0.64, ηₚ² = 0.004). The robot type × scenario interaction was also not significant, F(2,54) = 0.72, p = 0.50, ηₚ² = 0.026.
V. DISCUSSION AND CONCLUSION
In this study, we investigated the effects of robot type and
interaction scenario on participants’ perceptions and accep-
tance of security robots. Our results demonstrated that robot
type significantly influences robot acceptance, particularly
in terms of ability, integrity, and desire to use. However, neither the interaction scenario nor the robot type × scenario interaction had any observed effect. We proceed to discuss the study's
contributions, limitations, and potential future opportunities.
This research offers several contributions. First, it high-
lights the importance of anthropomorphism, which not only
promotes the acceptance of social robots [25], [42], [43],
[44], but also of security robots. Our study revealed that anthropomorphism affects human acceptance of security robots. More specifically, individuals rated Pepper higher than Knightscope in anthropomorphism, ability, integrity (marginally), and desire to use. Notably, the Knightscope robot is an actual security robot while Pepper is not. This may indicate that the primary purpose of Knightscope's design is not to promote acceptance by those who engage with the robot. The design may be driven by purely functional requirements, or perhaps even intended to discourage direct human interaction with the robot. Nonetheless, our findings suggest that future security robots should incorporate more anthropomorphic designs if they are to promote acceptance.
Second, our study discovered that the character-like robot
RAMSEE displayed no significant difference in perceived
anthropomorphism when compared to the other two robots.
This finding was surprising, as we had hypothesized that
a robot with more morphological anthropomorphic features
would result in higher perceived anthropomorphism. Al-
though unexpected, we did obtain a weak trend that RAM-
SEE had higher ability, integrity, and desire to use than
Knightscope. One explanation could be that RAMSEE combines salient mechanical and anthropomorphic features, which may have blurred overall perceived anthropomorphism while still retaining some essential anthropomorphic cues. Barco et al. [25] also found that participants' perceptions of the caricatured robot Cozmo tended to be similar to those of the human-like robot NAO.
Another explanation is the influence of complex dynamics among anthropomorphic morphological features that are not yet well understood [45], [46]. It is possible that specific
anthropomorphic features have an impact on security robot
acceptance [21]. Therefore, future researchers could compare
specific or combinations of anthropomorphic features.
Third, this study did not observe any difference in trust or
perceptions of robots among different robot types. This result
is inconsistent with previous findings in social domains,
which suggest that anthropomorphic robots always engender
better perceptions and higher trust [27], [47], [43], [48]. It is
intriguing to observe that robot type influenced acceptance
but not perceptions of robots. It also impacted trustworthiness
but not trust. One possible explanation for this discrepancy is
the unique context of the security domain, emphasizing the
importance of conducting more HRI research within specific
domains. More specifically, we found that the significant
impacts of anthropomorphism were linked to differences in
ability, marginal differences in integrity, and no differences in
benevolence. It is possible that trust may be driven primarily by integrity and benevolence, with which anthropomorphism apparently has a weaker relationship in the context of security robots. In either case, the results of this study
suggest that the exact interplay between anthropomorphism
and trust requires further detailed analysis in future research.
Fourth, our study demonstrated that the impact of an-
thropomorphism on humans’ perceptions and acceptance of
security robots did not differ between indoor hallways and
outdoor parking lots. In comparison to previous research
[33], [31], our study expanded the literature beyond static questionnaire contexts by incorporating simulated robots and realistic scenarios involving human interaction. The majority of previous security robot studies adopted only the access control task [49], [50], [28], [51], [33]. To better simulate real-world security robots, our study deployed multiple security tasks. However, further studies are needed
multiple security tasks. However, further studies are needed
to examine various scenarios such as airports, hospitals, and
hotels to verify whether this trend is generalizable.
One limitation of this study is the low reliability of the anthropomorphism subscale of the Godspeed questionnaire. We recommend that future research employ multiple questionnaires to better assess perceived anthropomorphism [52], [53]. In addition, our reliance on VR simulations may limit the study's external validity. Another limitation is that
we only examined acceptance after the initial interaction.
Future longitudinal research could deploy security robots
in working scenarios to observe people’s long-term, more
realistic, and stable reactions to these robots.
ACKNOWLEDGMENT
We thank the Emerging Technologies Group at the Univer-
sity of Michigan Duderstadt Center, specifically Stephanie
O’Malley, Theodore W. Hall, and Sean Petty, for their
valuable help in the development of the VR experiment.
REFERENCES
[1] K. Sandeep, K. Srinath, and R. Koduri, “Surveillance security robot
with automatic patrolling vehicle,” Inter J Engin Sci Advance Technol,
vol. 2, no. 3, pp. 546–549, 2012.
[2] G. Randelli, L. Iocchi, and D. Nardi, “User-friendly security robots,”
in 2011 IEEE International Symposium on Safety, Security, and Rescue
Robotics. IEEE, 2011, pp. 308–313.
[3] D. Avola, G. L. Foresti, L. Cinque, C. Massaroni, G. Vitale, and
L. Lombardi, “A multipurpose autonomous robot for target recognition
in unknown environments,” in 2016 IEEE 14th International Confer-
ence on Industrial Informatics (INDIN). IEEE, 2016, pp. 766–771.
[4] T. Theodoridis and H. Hu, “Toward intelligent security robots: A
survey,” IEEE Transactions on Systems, Man, and Cybernetics, Part
C (Applications and Reviews), vol. 42, no. 6, pp. 1219–1230, 2012.
[5] R. C. Luo, Y. T. Chou, C. T. Liao, C. C. Lai, and A. C. Tsai, “Nccu
security warrior: An intelligent security robot system,” in IECON
2007-33rd Annual Conference of the IEEE Industrial Electronics
Society. IEEE, 2007, pp. 2960–2965.
[6] G. Trovato, A. Lopez, R. Paredes, D. Quiroz, and F. Cuellar, “Design
and development of a security and guidance robot for employment in
a mall,” International Journal of Humanoid Robotics, vol. 16, no. 05,
p. 1950027, 2019.
[7] G. Trovato, A. Lopez, R. Paredes, and F. Cuellar, “Security and guid-
ance: Two roles for a humanoid robot in an interaction experiment,”
in 2017 26th IEEE International Symposium on Robot and Human
Interactive Communication (RO-MAN). IEEE, 2017, pp. 230–235.
[8] Knightscope Inc., “Knightscope,” Retrieved March 10, 2023 from
https://www.knightscope.com, 2016.
[9] H. R. Everett and D. W. Gage, “From laboratory to warehouse:
Security robots meet the real world,” The International Journal of
Robotics Research, vol. 18, no. 7, pp. 760–768, 1999.
[10] K. D. Atherton, “RAMSEE Is A Security Guard Robot With In-
frared Vision,” https://www.popsci.com/ramsee-is-security-robot-with-
infrared-vision/, 2016, [Accessed: Mar. 10, 2023].
[11] K. Wünsch, “Captain C robot enhances security,” Retrieved March
15, 2023 from https://www.tw-media.com/international/hong-kong-
convention-and-exhibition-centre-captain-c-robot-enhances-security-
and-operational-efficiency-130823, 2021.
[12] J.-g. Choi and M. Kim, “The usage and evaluation of anthropomorphic
form in robot design,” in Undisciplined! Design Research Society
Conference 2008. Sheffield, UK: Sheffield Hallam University, July
2009, pp. 16–19.
[13] M. Schmitz, “Concepts for life-like interactive objects,” in Proceedings
of the fifth international conference on Tangible, embedded, and
embodied interaction, 2010, pp. 157–164.
[14] S. You and L. P. Robert Jr, “Human-robot similarity and willingness to
work with a robotic co-worker,” in Proceedings of the 2018 ACM/IEEE
International Conference on Human-Robot Interaction, 2018, pp. 251–
260.
[15] S. S. Kwak, J. S. Kim, and J. J. Choi, “The effects of organism-versus
object-based robot design approaches on the consumer acceptance of
domestic robots,” International Journal of Social Robotics, vol. 9, pp.
359–377, 2017.
[16] S. Forgas-Coll, R. Huertas-Garcia, A. Andriella, and G. Alenyà, “How
do consumers’ gender and rational thinking affect the acceptance of
entertainment social robots?” International Journal of Social Robotics,
vol. 14, no. 4, pp. 973–994, 2022.
[17] E. Roesler, D. Manzey, and L. Onnasch, “A meta-analysis on the ef-
fectiveness of anthropomorphism in human-robot interaction,” Science
Robotics, vol. 6, no. 58, p. eabj5425, 2021.
[18] E. Roesler, L. Naendrup-Poell, D. Manzey, and L. Onnasch, “Why
context matters: the influence of application domain on preferred
degree of anthropomorphism and gender attribution in human–robot
interaction,” International Journal of Social Robotics, vol. 14, no. 5,
pp. 1155–1166, 2022.
[19] L. Robert, “The growing problem of humanizing robots,” International
Robotics & Automation Journal, vol. 3, no. 1, p. 247–248, 2017.
[20] H. Kamide, M. Yasumoto, Y. Mae, T. Takubo, K. Ohara, and T. Arai,
“Comparative evaluation of virtual and real humanoid with robot-
oriented psychology scale,” in 2011 IEEE International Conference
on Robotics and Automation. IEEE, 2011, pp. 599–604.
[21] S. C. Bhatti and L. P. Robert, “What does it mean to anthropomorphize
robots? food for thought for hri research,” in Proceedings of the 2023
ACM/IEEE International Conference on Human-Robot Interaction,
2023, pp. 422–425.
[22] T. Fong, I. Nourbakhsh, and K. Dautenhahn, “A survey of socially
interactive robots,” Robotics and autonomous systems, vol. 42, no. 3-
4, pp. 143–166, 2003.
[23] C. Breazeal, “Toward sociable robots,” Robotics and autonomous
systems, vol. 42, no. 3-4, pp. 167–175, 2003.
[24] B. R. Duffy, “Anthropomorphism and the social robot,” Robotics and
autonomous systems, vol. 42, no. 3-4, pp. 177–190, 2003.
[25] A. Barco, C. de Jong, J. Peter, R. Kühne, and C. L. van Straten, “Robot
morphology and children’s perception of social robots: An exploratory
study,” in Companion of the 2020 ACM/IEEE International Conference
on Human-Robot Interaction, 2020, pp. 125–127.
[26] D. Zanatto, M. Patacchiola, A. Cangelosi, and J. Goslin, “Generalisa-
tion of anthropomorphic stereotype,” International Journal of Social
Robotics, vol. 12, pp. 163–172, 2020.
[27] M. Natarajan and M. Gombolay, “Effects of anthropomorphism and
accountability on trust in human robot interaction,” in Proceedings
of the 2020 ACM/IEEE international conference on human-robot
interaction, 2020, pp. 33–42.
[28] D. Li, P. P. Rau, and Y. Li, “A cross-cultural study: Effect of robot
appearance and task,” International Journal of Social Robotics, vol. 2,
pp. 175–186, 2010.
[29] E. Roesler, L. Onnasch, and J. I. Majer, “The effect of anthropo-
morphism and failure comprehensibility on human-robot trust,” in
Proceedings of the human factors and ergonomics society annual
meeting, vol. 64, no. 1. SAGE Publications Sage CA: Los Angeles,
CA, 2020, pp. 107–111.
[30] M. Lohse, F. Hegel, A. Swadzba, K. Rohlfing, S. Wachsmuth, and
B. Wrede, “What can i do for you? appearance and application of
robots,” in Proceedings of AISB, vol. 7, 2007, pp. 121–126.
[31] J. Lin, A. R. Panganiban, G. Matthews, K. Gibbins, E. Ankeney,
C. See, R. Bailey, and M. Long, “Trust in the danger zone: individual
differences in confidence in robot threat assessments,” Frontiers in
psychology, p. 1426, 2022.
[32] S. A. Khan, T. S. Bhatia, S. Parker, and L. Bölöni, “Modeling the
interaction between mixed teams of humans and robots and local
population for a market patrol task.” in FLAIRS Conference. Citeseer,
2012.
[33] J. B. Lyons, C. S. Nam, S. A. Jessup, T. Q. Vo, and K. T. Wynne,
“The role of individual differences as predictors of trust in autonomous
security robots,” in 2020 IEEE International Conference on Human-
Machine Systems (ICHMS). IEEE, 2020, pp. 1–5.
[34] E. Phillips, X. Zhao, D. Ullman, and B. F. Malle, “What is human-
like?: Decomposing robots’ human-like appearance using the anthro-
pomorphic robot (abot) database,” in 2018 13th ACM/IEEE Interna-
tional Conference on Human-Robot Interaction (HRI), 2018, pp. 105–
113.
[35] L. D. Riek, “Wizard of oz studies in hri: a systematic review and
new reporting guidelines,” Journal of Human-Robot Interaction, vol. 1,
no. 1, pp. 119–136, 2012.
[36] L. P. Robert, A. R. Denis, and Y.-T. C. Hung, “Individual swift trust
and knowledge-based trust in face-to-face and virtual team members,”
Journal of management information systems, vol. 26, no. 2, pp. 241–
279, 2009.
[37] F. D. Schoorman and G. A. Ballinger, “Leadership, trust and client
service in veterinary hospitals,” 2006, unpublished Working Paper.
[38] R. C. Mayer and J. H. Davis, “The effect of the performance appraisal
system on trust for management: A field quasi-experiment.” Journal
of applied psychology, vol. 84, no. 1, p. 123, 1999.
[39] C. Bartneck, D. Kulić, E. Croft, and S. Zoghbi, “Measurement
instruments for the anthropomorphism, animacy, likeability, perceived
intelligence, and perceived safety of robots,” International Journal of
Social Robotics, vol. 1, pp. 71–81, 2009.
[40] E. G. Carmines and R. A. Zeller, Reliability and Validity Assessment.
Sage Publications, 1979.
[41] C. Fornell and D. F. Larcker, “Structural equation models with
unobservable variables and measurement error: Algebra and statistics,”
Journal of Marketing Research, vol. 18, no. 3, 1981.
[42] C. Bartneck, “Who like androids more: Japanese or us americans?”
in RO-MAN 2008-The 17th IEEE International Symposium on Robot
and Human Interactive Communication. IEEE, 2008, pp. 553–557.
[43] F. Hegel, S. Krach, T. Kircher, B. Wrede, and G. Sagerer, “Understand-
ing social robots: A user study on anthropomorphism,” in RO-MAN
2008-The 17th IEEE International Symposium on Robot and Human
Interactive Communication. IEEE, 2008, pp. 574–579.
[44] D. Kuchenbrandt, F. Eyssel, S. Bobinger, and M. Neufeld, “Minimal
group-maximal effect? evaluation and anthropomorphization of the
humanoid robot nao,” in Social Robotics: Third International Con-
ference, ICSR 2011, Amsterdam, The Netherlands, November 24-25,
2011. Proceedings 3. Springer, 2011, pp. 104–113.
[45] C. Moro, S. Lin, G. Nejat, and A. Mihailidis, “Social robots and
seniors: A comparative study on the influence of dynamic social
features on human–robot interaction,” International Journal of Social
Robotics, vol. 11, pp. 5–24, 2019.
[46] T. Zhang, D. B. Kaber, B. Zhu, M. Swangnetr, P. Mosaly, and
L. Hodge, “Service robot feature design effects on user perceptions
and emotional responses,” Intelligent service robotics, vol. 3, pp. 73–
88, 2010.
[47] M. M. Van Pinxteren, R. W. Wetzels, J. Rüger, M. Pluymaekers,
and M. Wetzels, “Trust in humanoid robots: implications for services
marketing,” Journal of Services Marketing, 2019.
[48] M. B. Mathur and D. B. Reichling, “An uncanny game of trust: social
trustworthiness of robots inferred from subtle anthropomorphic facial
cues,” in Proceedings of the 4th ACM/IEEE international conference
on Human robot interaction, 2009, pp. 313–314.
[49] D. Gallimore, J. B. Lyons, T. Vo, S. Mahoney, and K. T. Wynne,
“Trusting robocop: Gender-based effects on trust of an autonomous
robot,” Frontiers in Psychology, vol. 10, p. 482, 2019.
[50] O. Inbar and J. Meyer, “Politeness counts: perceptions of peacekeeping
robots,” IEEE Transactions on Human-Machine Systems, vol. 49,
no. 3, pp. 232–240, 2019.
[51] A. Lopez, R. Paredes, D. Quiroz, G. Trovato, and F. Cuellar, “Robot-
man: A security robot for human-robot interaction,” in 2017 18th
International Conference on Advanced Robotics (ICAR). IEEE, 2017,
pp. 7–12.
[52] C. M. Carpinella, A. B. Wyman, M. A. Perez, and S. J. Stroessner,
“The robotic social attributes scale (rosas) development and valida-
tion,” in Proceedings of the 2017 ACM/IEEE International Conference
on human-robot interaction, 2017, pp. 254–262.
[53] N. Spatola, B. Kühnlenz, and G. Cheng, “Perception and evaluation
in human–robot interaction: The human–robot interaction evaluation
scale (hries)—a multicomponent approach of anthropomorphism,”
International Journal of Social Robotics, vol. 13, no. 7, pp. 1517–
1539, 2021.