Fig 1 - uploaded by Julie Carpenter
Tron-X (L), PKD (Center) and AIBO (R). 


Source publication
Conference Paper
Full-text available
Robots, specifically androids, are becoming increasingly important in the consumer market, where they are marketed as toys or companions, as well as in industry, where they will increasingly often play the role of a co-worker. The developers in various robotics communities are divided about design issues in these companion-worker androids. While some...

Contexts in source publication

Context 1
... interaction (HCI) literature recognize the growing importance of social interaction between humans and computers (interfaces, autonomous agents or robots), and the idea that people treat computers as social actors [7], preferring to interact with agents that are expressive, at least in the entertainment domain [5]. As androids are deployed across domestic, military and commercial fields, there is an acute need for further consideration of human factors. If computers are perceived as social actors, android interfaces which clearly emulate human facial expression, social interaction, voice and overall appearance will generate empathetic inclinations from humans. Indeed, development goals for many androids’ interface designs are publicly revealed to be intentionally anthropomorphic for human social interaction. Jeffrey Smith, ASIMO’s (Honda) North American project leader, said, "ASIMO's good looks are deliberate. A humanoid appearance is key to ASIMO's acceptance in society" [10]. In other words, engineers are designing interfaces based on the theory that a realistic human interface is essential to an immersive human-robot interaction experience, to create a situation that mimics a natural human-human interaction. Sparrow (2002) identifies robots that are designed to engage in and replicate significant social and emotional relationships as “ersatz companions” [8]. Designing androids with anthropomorphized appearance for more natural communication encourages a fantasy that interactions with the robot are thoroughly humanlike and promote emotional or sentimental attachment. Therefore, although androids may never truly experience human emotions themselves, even a modestly humanlike appearance that elicits emotional attachment from humans would change the robot’s role from machine into persuasive actor in human society [4]. For that reason, there should be further exploration of the roles of robot companions in society and the value placed on relationships with them. 
According to Mori's Uncanny Valley theory [6], the degree of empathy that people feel towards robots heightens as the robots become increasingly human-looking. However, there is a point on Mori's anthropomorphic scale, just before robots become indistinguishable from humans, where people suddenly find the robot's appearance disconcerting. The "Uncanny Valley" is the point at which robots appear almost human, but are imperfect enough to produce a negative reaction from people. Therefore, as maintained by Mori, until fully human robots are a possibility, humans will have an easier time accepting humanoid machines that are not particularly realistic-looking. The differing schools of thought on the design and applicability of android interfaces, coupled with the forthcoming availability of humanoid robots on the market, lead to a need for greater understanding of the complexities and ramifications of android-human interaction. Although there have been great strides in android technology and development, some individual situations and contexts have yet to be thoroughly tested. Because of the nature of academic and professional development and the prohibitive aspects of research previously discussed here, experiments have predominantly centered on one specific android per test. In the experiment described in this paper, simulation of presence via computer-based representations of robots offered a preliminary understanding of human interactions with different robot interfaces. Robots may have more social presence than screen-based characters, which might justify the additional expense and effort of creating and maintaining their physical embodiment in specific situations, such as collaborative activities. For practical reasons, this study on multiple androids was only possible using screen-based representations of them.
This disembodiment of the robots might have consequences, but since only screen representations were used, they should be evenly spread across all conditions. In actions and situations where people interact with robots as co-workers, it is necessary to distinguish human-robot collaboration from human-robot interaction: collaboration is working with others, while interaction involves action on someone or something else [1]. The focus of our research is the exploration of human relationships with anthropomorphized robots in collaborative situations. If computers and robots are treated as social actors, we would expect them to be punished for benefiting from a team's performance while contributing little or nothing themselves. It has already been demonstrated that subjects get angry at and punish not only humans but also computers when they feel the computer has treated them unfairly in a bargaining game [3]. In order not to lead the participants in one direction, we also offered the possibility of praise in the experiment reported here. The research questions that follow from this line of thought relate to the use of praise and punishment: 1) Are robots punished for benefiting from a team's performance without contributing themselves? 2) Are robots praised for good performance? 3) Are robots punished and praised a) as often and b) as intensely as humans? 4) Does the extent of taking advantage without contributing (low vs. high error rate) have an effect on punishment behavior? We are also interested in how the participants perceive their own praise and punishment behavior afterwards, and how they evaluate their own and their partner's performance: 5) Do subjects misjudge their praise and punishment behavior when asked after the game? 6) Do subjects judge their praise and punishment behavior differently for humans and robots? 7) Is the robot's performance estimated correctly?
We are curious whether the “computers as social actors” (CASA) theory holds true for robots, or if other effects come into play when humans interact with robots. We used three robots: the humanoids Tron-X (Festo AG) and PKD (Hanson Robotics) which represent different levels of anthropomorphism, and AIBO (Sony) as a zoomorphic robot. In addition to the robots, we used a human and a computer as partners in the experiment to see if computers are treated like humans in a praise and punishment scenario. Our interest in attempting to observe the Uncanny Valley led to the choice of robots used. Tron-X and PKD represent different levels of anthropomorphism (see Figure 1, under 3.4 Materials). PKD is extremely humanlike in countenance, especially so in static pictures, while Tron-X has blue “skin” and visible mechanical works. From an exterior design standpoint, AIBO represents a dog through general shape alone and does not attempt a realistic canine representation (through “fur” or other ...
Context 2
... participants took part in this preliminary experiment; 6 of them were male and 6 were female. The mean age of the participants was 29.9 years, ranging from 21 to 54. The subjects were Master's and Ph.D. students in Psychology or Engineering. Participants received course credit or candy for their participation. We conducted a 5 (partner) x 2 (error rate) within-subject experiment, manipulating interaction partner (human, computer, robot1: PKD, robot2: Tron-X, robot3: AIBO) and error rate (high: 40%, low: 20%). The experiment software automatically recorded the following measurements:
• Frequency of praises and punishments: Number of incidences in which the participant gave plus points or minus points.
• Intensity of praises and punishments: Average number of plus points or minus points given by the participant, ranging from 1 to 5.
• Subject and partner errors: Number of errors made by the participant and the partner.
During the experiment, questionnaires recorded the following measurements:
• Self-evaluation of praise and punishment behavior: Self-reported frequency and intensity of the praises and punishments given by the participant.
• Self-evaluation of own and partner's performance: Self-reported number of errors made by the participant and the partner.
• Satisfaction: Participant's satisfaction with his/her own and the partner's performance after task completion, rated on a 5-point rating scale.
A post-test questionnaire and interview measured the following:
• Sympathy for each robot, rated on a 6-point rating scale.
• Human likeness of the PKD and Tron-X robots, rated on a 6-point rating scale.
• Believability of the task: whether participants believed that the robots were able to do the task (yes/no).
• Believability of the robot: whether participants believed that they had interacted with a real robot (yes/no).
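The 5 (partner) x 2 (error rate) within-subject design above can be sketched in a few lines of code. This is an illustrative reconstruction only, not the original experiment software; the names (make_session, PARTNERS) and the exact counterbalancing scheme are our assumptions.

```python
import random

# Illustrative sketch of the 5 (partner) x 2 (error rate) within-subject
# design described above; all identifiers are ours, not from the original study.
PARTNERS = ["human", "computer", "PKD", "Tron-X", "AIBO"]
ERROR_RATES = [0.20, 0.40]  # low vs. high partner error rate

def make_session(seed=None):
    """Return one participant's schedule: partners in random order, each
    round holding both error-rate trials in counterbalanced order."""
    rng = random.Random(seed)
    partners = PARTNERS[:]
    rng.shuffle(partners)  # each participant meets the partners in random order
    schedule = []
    for i, partner in enumerate(partners):
        # alternate trial order across rounds as a simple counterbalance
        rates = ERROR_RATES if i % 2 == 0 else ERROR_RATES[::-1]
        for rate in rates:
            schedule.append({"partner": partner, "error_rate": rate, "tasks": 20})
    return schedule

session = make_session(seed=1)
assert len(session) == 10  # 5 partners x 2 trials, 20 tasks each
```

Each dictionary in the schedule corresponds to one trial of 20 tasks, matching the round/trial structure described in the procedure.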
The experiment was set up as a tournament in which humans, robots and computers played together in 2-member teams. The participants were teamed up with a human, a computer, and each robot in random order. The subject played together with one partner per round. One round consisted of two trials in which the partner would make either 20% or 40% errors. The order of the trials was counterbalanced. Each trial consisted of 20 tasks. The performance of both players equally influenced the team score; to win the competition, both players had to perform well. The participants were told that the tournament was held simultaneously in three different cities and that, due to the geographical distance, the team partners could not be met in person; subjects would use a computer to play and communicate with their partners. Every time the participant played together with a robot, a picture of the robot was shown on the screen as an introduction. No picture was shown if the participant played together with a human or a computer: participants are obviously familiar with humans, and since they were already sitting in front of a computer, adding another picture of a computer to the screen appeared superfluous. Since robots are much less familiar to the general public, pictures were shown in those conditions. After the instructions, the participants completed a brief demographic survey and conducted an exercise trial with the software. Following the survey, subjects had the opportunity to ask questions before the tournament started. The participants' task was to name or count objects that were shown on the computer display. The participants were told that these tasks might be easy for them but would be much more difficult for computers and robots.
To guarantee equal chances for all players and teams, the task had to be at a level that the computers and robots could perform. After the participant entered an answer on the computer, the result was shown, indicating whether the participant's and his/her partner's answers were correct. If the partner's answer was wrong, the participant could give minus points; if the participant decided to do so, he/she had to decide how many minus points to give. If the partner's answer was correct, the participant could choose whether, and how many, plus points to give to the partner. Subjects were told that correct answers of both the participant and the partner counted toward the team score. A separate score was kept for each individual for the number of plus and minus points. At the end, there would be a winning team and a winning individual. After each trial, the participant had to estimate how many errors the partner had made, how often the participant had punished the partner with minus points, and how often the participant had praised the partner with plus points. In addition, the participants had to judge how many plus and minus points they had given to the partner. After each round, the participant was asked about his/her satisfaction with the partner's performance and his/her own performance. Then, the participant started a new round with a new partner. After the tournament, a questionnaire was administered, using 6-point rating scales to ask about the subject's sympathy toward each robot and about the humanlike or machinelike aspects of each robot. In an interview, the participant was asked whether he/she believed that he/she had played with real robots and whether he/she thought the task was solvable for robots. Finally, participants were debriefed. The experiment took approximately 40 minutes. For the experiment, we used pictures of the robots PKD (Hanson Robotics), Tron-X (Festo AG), and ERS-7 AIBO (Sony); Figure 1 shows the photographs used.
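The per-task scoring rules described above (team score counting both players' correct answers, with separate individual tallies of optional plus and minus points) can be summarized in a short sketch. This is a hedged reconstruction for illustration only; score_task and the dictionary keys are invented names, not part of the original software.

```python
# Hedged sketch of the per-task scoring described in the procedure above.
# Our reconstruction, not the original experiment software.
def score_task(partner_correct, subject_correct, points_given, scores):
    """Update running scores for one task.

    points_given: 0, or 1..5 plus points if the partner was correct,
    1..5 minus points if the partner was wrong (the participant's choice).
    """
    # The team score counts correct answers from both players equally.
    scores["team"] += int(subject_correct) + int(partner_correct)
    if partner_correct:
        scores["partner_plus"] += points_given   # optional praise
    else:
        scores["partner_minus"] += points_given  # optional punishment
    return scores

scores = {"team": 0, "partner_plus": 0, "partner_minus": 0}
score_task(True, True, 3, scores)   # partner correct: praise with 3 plus points
score_task(False, True, 5, scores)  # partner wrong: punish with 5 minus points
assert scores == {"team": 3, "partner_plus": 3, "partner_minus": 5}
```

Keeping the team score separate from the individual plus/minus tallies mirrors the setup in which there was both a winning team and a winning individual.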
The pictures were displayed on the computer screen each round so the participant knew what the current partner looked like. No picture was shown when the participant was teamed up with a human or a computer. For the task, 120 pictures with one or several objects on them were used (examples shown in Figure 2). The objects had to be named or ...

Citations

... (Bartneck et al., 2007), among others. However, a higher degree of robot anthropomorphism also leads humans to perceive a greater threat, especially when the robot's abilities exceed those of humans (Yogeeswaran et al., 2016). ...
... Similarly, children judged the helpfulness of a robot using gaze movements [23]. Agents' life-likeness has been shown to enhance their perceived social presence [24]; help people understand and predict their behavior [25], by triggering the attribution of intentions [26]; enhance communication [11]; and facilitate collaboration [27]. One main motivation for using life-likeness in robotic design is to facilitate more accepting [5], [25] and meaningful [28] interactions. ...
Article
Full-text available
Assigning lifelike qualities to robotic agents (Anthropomorphism) is associated with complex affective interpretations of their behavior. These anthropomorphized perceptions are traditionally elicited through robots' designs. Yet, aerial robots (or drones) present a special case due to their – traditionally – non-anthropomorphic design, and prior research shows conflicting evidence on their perception as either person-like, animal-like, or machine-like. In this work, we explore how people perceive drones in a cross-dimensional space between these three dimensions by varying the affective state presented on the drone. To capture these perceptions, we developed a novel measurement instrument, AnZoMa. We describe the design, use, and deployment of the instrument in an online study (N=98). The study results suggest that different drone emotions triggered people to attribute various characteristics to the drone (e.g., interaction metaphors, traits, and features) and variations in acceptability of drone affective states. These results demonstrate the interdependencies between affective perceptions and anthropomorphism of drones. We conclude by discussing the necessity to integrate cross-dimensional perception of anthropomorphism in human-drone interaction and affective computing. This work contributes a novel tool to measure the dimensions and gravity of anthropomorphism and insights into interdependencies between different affective states displayed on drones and their anthropomorphized perception.
... For example, the Japanese tend to regard robots as companions, despite not being living creatures. Western cultural history originates in Greek philosophy, as represented by Judeo-Christian monotheism, which maintains the linear worldview that connects the beginning and the end (Bartneck et al., 2006;Coeckelbergh, 2022). In this view, God is omnipotent and only God can infuse inanimate objects with life (Bartneck et al., 2006;Coeckelbergh, 2022). ...
... Western cultural history originates in Greek philosophy, as represented by Judeo-Christian monotheism, which maintains the linear worldview that connects the beginning and the end (Bartneck et al., 2006;Coeckelbergh, 2022). In this view, God is omnipotent and only God can infuse inanimate objects with life (Bartneck et al., 2006;Coeckelbergh, 2022). Therefore, attempting to create a life by inventing autonomous robots can be interpreted as a sacrilegious invasion of God's realm. ...
Article
Service robots are humanoid and non-humanoid machines that communicate and deliver services to customers of an organisation. They are Artificial Intelligence (AI) enabled and display human intelligence (Wirtz et al., 2018; Blut et al., 2021). Service robots may undertake cognitive-analytical activities and emotional-social duties. Artificial Intelligence is built in to service robots, allowing them to interact with the customer, as regular hospitality services thrive on providing interpersonal interactions to create customer value. As substitutes for human employees, service robots may pose a psychological and emotional challenge to the traditional view of hospitality services, such as human frontline employees. Professional Service Robots (PSR) have proven to have the potential to drastically change the service industry. The use of PSR is lagging in an African context, necessitating more research on factors that may influence acceptance. This study aims to explore the cultural factors that influence consumers' acceptance of PSR. The Service Robot Acceptance Model (sRAM) is adopted as a guiding framework for this study. Using an exploratory qualitative research approach, data were collected from three focus groups, with 16 participants in total, using the simulation method. Interviews were also conducted with seven participants who were purposively selected based on age, gender, and race. Sexual orientation was found to have a positive influence on acceptance, while beliefs and norms were barriers to acceptance, with the Ubuntu philosophy being one of the main reasons for rejection. Language appeared to play a large role, as forwarded by the sRAM. The results suggest that acceptance of PSR also depends on cultural factors; however, their influence is lesser in certain types of service sectors. The research recommends that practitioners, service robot developers, and implementers consider the culture of the consumers when implementing service robots.
... According to Mori's Uncanny Valley theory, the degree of expectation towards the robot increases as it becomes more human-looking. However, in his theory there is a point on the anthropomorphic scale where the robot's appearance becomes confusing and it is difficult to distinguish between humans and robots [42]. This was borne out in the research carried out by Bartneck et al., in which the robots were not all treated the same by the participants, because their anthropomorphic or zoomorphic appearance differentiated them from each other [43]. ...
... Participants, including older adults, expressed a significant social acceptance of humanoid robots [22,23], considering them more conscientious and intelligent [8]. Furthermore, human-like robots tend to be perceived as less threatening by adults, and they are treated more politely compared to machine-like robots [24][25][26]. Comparable results have been found also for virtual avatars, as human-realistic avatars elicited greater affinity and preference than cartoon-like avatars [27]. On the other hand, quite a few studies support the opposite position. ...
Article
Full-text available
Unlabelled: Historically, there has been a great deal of confusion in the literature regarding cross-cultural differences in attitudes towards artificial agents and preferences for their physical appearance. Previous studies have almost exclusively assessed attitudes using self-report measures (i.e., questionnaires). In the present study, we sought to expand our knowledge on the influence of cultural background on explicit and implicit attitudes towards robots and avatars. Using the Negative Attitudes Towards Robots Scale and the Implicit Association Test in a Japanese and Dutch sample, we investigated the effect of culture and robots' body types on explicit and implicit attitudes across two experiments (total n = 669). Partly overlapping with our hypothesis, we found that Japanese individuals had a more positive explicit attitude towards robots compared to Dutch individuals, but no evidence of such a difference was found at the implicit level. As predicted, the implicit preference towards humans was moderate in both cultural groups, but in contrast to what we expected, neither culture nor robot embodiment influenced this preference. These results suggest that only at the explicit but not implicit level, cultural differences appear in attitudes towards robots. Supplementary information: The online version contains supplementary material available at 10.1007/s12369-022-00917-7.
... Research on praise in HRI has been conducted from different perspectives. Some work has focused on: investigating how humans attribute praise and punishment to robots (Bartneck et al., 2006, 2008), robot-delivered praise for increasing user motivation (Fasola and Matarić, 2013; Schneider and Kummert, 2016), self-efficacy (Zafari et al., 2019), as well as personalized robot praise toward children (Serholt and Barendregt, 2016) and older adults (Thompson et al., 2017); other work has evaluated the role of praise in game-based interactions (Ali et al., 2021) and nurturing praise in a therapy context (Tapus et al., 2008), and still other research has examined the relationship between robot-delivered praise, trust, and compliance (Ghazali et al., 2019). ...
Article
Full-text available
Recent research suggests that implicit self-theories—a theory predicated on the idea that people’s underlying beliefs about whether self-attributes, such as intelligence, are malleable (incremental theory) or unchangeable (entity theory), can influence people’s perceptions of emerging social robots developed for everyday use. Other avenues of research have identified a close link between ability and effort-focused praise and the promotion of individual implicit self-theories. In line with these findings, we posit that implicit self-theories and robot-delivered praise can interactively influence the way people evaluate a social robot, after a challenging task. Specifically, we show empirically that those endorsing more of an entity theory, indicate more favorable responses to a robot that delivers ability praise than to one that delivers effort praise. In addition, we show that those endorsing more of an incremental theory, remain largely unaffected by either praise type, and instead evaluate a robot favorably regardless of the praise it delivers. Together, these findings expand the state-of-the-art, by providing evidence of an interactive match between implicit self-theories and ability, and effort-focused praise in the context of a human-robot interaction.
... People behave differently when interacting with a pet robot and with a humanoid robot [4]. Robots that are human-like in both appearance and behaviour are treated less harshly than machine-like robots [7]. In the field of social robots, there is an increased tendency to build robots that resemble humans in their appearance. ...
Article
Full-text available
As social robots become more prominent in our lives, their interaction with humans takes an increasing role, and new collaborative scenarios emerge. This development brings the need to realize robust test methods enabling the design and evaluation of HRI teaming tasks to prove functionality and promote adoption. In this paper, we present a general purpose and repeatable methodology for conducting studies in collaborative HRI in the range of robotic competitions. The methodology includes a step-by-step approach to design HRI teaming tasks tailored to be enacted in a robotic competition and to evaluate the performance of social robots to execute the designed tasks, exploring the relationship between robots’ performance and user perceptions based on the feedback of the users participating to such tasks. We assess the feasibility of the methodology to design and evaluate a HRI teaming task in the context of SciRoc (“Smart CIties RObotics Challenges”) competition, which targets at investigating the impact of social of robots in smart cities.
... Beck (2016) also argued that giving co-bots the status of a legal resident would enable them to be held responsible for their actions and decisions. Furthermore, as legal residents, it may be possible to implement what Bartneck, Reichenbach and Carpenter (2006) suggested earlier, that co-bots should be paid wages for work done. Mputhia (2019) argued that one of the issues being discussed is whether cobots, as legal residents, can also enjoy intellectual property rights such as patents. ...
... The only difference between them and their human counterparts is the fact that they are intelligent machines. Thus, as Bartneck, Reichenbach and Carpenter (2006) argue, cobots, just like their human counterparts, should be praised or punished depending on their performance. Working independently of each other mirrors the normal human working associations where employees consider each other as co-workers or colleagues. ...
Book
Full-text available
This ten-chapter book focuses on the relationship between information, knowledge and technology with regard to the Fourth Industrial Revolution. We need to transform and be more creative and innovative to develop and sustain existing and emerging capabilities in the ‘new normal’, which has been accidentally accelerated by COVID-19 requirements. Technologies for digital transformation create the opportunity for Africa to bypass traditional phases of industrial development, as the continent has done successfully in the past. Challenges addressed in the book are artificial intelligence, inequality, social justice, ethics, access and success, LISE, policy, infrastructure and cybersecurity. The book is written by eminent professors in Africa who have contributed substantially to research in at least one of the following areas: library and information science, records management, computer science, information systems, information and knowledge management, information ethics, data science, e-learning and digital scholarship.
... This was done to create an environment in which something was at stake, which is necessary to establish an ideal trust environment [23]. A graphical overview of the game flow can be found in Figure 2. The first round of the game was based on Bartneck et al. [1]. The purpose of this round was to create a competitive and collaborative setting on which the results displayed in the first ranking could be based. ...
... Out of different technological devices, robots in particular benefit from both anthropomorphism and zoomorphism, and hence the increasing tendency in the field of robotics to build human-like or animal-like machines (Bartneck et al., 2006;Złotowski et al., 2015). Their physical presence in the real world broadens the scope of possible social reactions toward them, for example, when they are regarded to be more accountable for their behavior than only virtually present computer agents (Kahn et al., 2012). ...
Article
Full-text available
Robots have the potential to transform our existing categorical distinctions between "property" and "persons." Previous research has demonstrated that humans naturally anthropomorphize them, and this tendency may be amplified when a robot is subject to abuse. Simultaneously, robots give rise to hopes and fears about the future and our place in it. However, most available evidence on these mechanisms is either anecdotal, or based on a small number of laboratory studies with limited ecological validity. The present work aims to bridge this gap through examining responses of participants (N = 160) to four popular online videos of a leading robotics company (Boston Dynamics) and one more familiar vacuum cleaning robot (Roomba). Our results suggest that unexpectedly human-like abilities might provide more potent cues to mind perception than appearance, whereas appearance may attract more compassion and protection. Exposure to advanced robots significantly influences attitudes toward future artificial intelligence. We discuss the need for more research examining groundbreaking robotics outside the laboratory.