Article

The benefit of being physically present: A survey of experimental works comparing copresent robots, telepresent robots and virtual agents


Abstract

The effects of physical embodiment and physical presence were explored through a survey of 33 experimental works comparing how people interacted with physical robots and virtual agents. A qualitative assessment of the direction of quantitative effects demonstrated that robots were more persuasive and perceived more positively when physically present in a user's environment than when digitally displayed on a screen either as a video feed of the same robot or as a virtual character analogue; robots also led to better user performance when they were collocated as opposed to shown via video on a screen. However, participants did not respond differently to physical robots and virtual agents when both were displayed digitally on a screen – suggesting that physical presence, rather than physical embodiment, characterizes people's responses to social robots. Implications for understanding psychological response to physical and virtual agents and for methodological design are discussed.


... Conducting HRI research in such a language- and culturally diverse environment, with the added complications of the pandemic, has also meant that HRI user studies must be conducted online. While there is general agreement that in-person (co-present) user studies are preferred to online (tele-present) studies [4], there is also consensus that not all in-person user studies yield better results [5]. Although online HRI studies are a viable alternative to in-person studies, they may cause higher levels of frustration in participants because of their accent or dialect and the inability of the robot to fully understand either [6]. ...
... HRI user studies on interaction modalities explored whether online (virtual or tele-present) versus in-person (co-present) interactions influenced a participant's emotion, behavior, attitude, or perception [4]. ...
... According to the meta-analysis by Li et al., co-present (in-person/physical) robot interactions are superior to telepresent (online/video) robot and virtual agent interactions [4]. The truth, however, is more complicated. ...
... A meta-analysis of HRI user studies by Li focused on determining whether online (virtual or tele-present) versus in-person (co-present) interaction modalities affected the participant's attitude, behavior, perception, emotion, etc. [14]. This work considered interaction modality to be the combination of presence and embodiment. ...
... As we have noted, investigating the interaction modality is critical because it gives important insight into how it affects a user's psychosocial reactions. From the meta-analysis done by Li, the conclusion was that co-present (in-person/physical) robot interactions are better than both tele-present (online/video) robot interactions and virtual agent interactions [14]. The reality, however, is not that straightforward because within the Li study itself, there are cases where tele-present interactions had better outcomes than co-present interactions, and in some cases, cross-effects were observed [14]. ...
... Agents can be created to resemble humans, animals, objects, robots, or mystical creatures (Straßmann and Krämer, 2017). They can also be virtual or physical, i.e., created as a virtual character that is only presented on digital screens, or physically with tangible materials and structure (Li, 2015). ...
... When encountering physical agents, users reported a stronger tendency of considering the agents as real persons (Lee et al., 2019) and form higher trust levels (Kraus et al., 2016). In terms of the influence of physicality on user performance, Li (2015) suggested that the presence of a physical robot will draw more user attention, which could lead to lower driving performance. However, in a mathematical puzzle-solving task, no significant influences of physicality were found in terms of users' performance (Hoffmann and Krämer, 2013). ...
... But in further spotlight analysis, no significant results were found. These findings contrast with the current findings related to physicality (Li, 2015; Mann et al., 2015). The size of the physical agents might offer a possible explanation for this. ...
Article
Full-text available
As technological development is driven by artificial intelligence, many automotive manufacturers have integrated intelligent agents into in-vehicle information systems (IVIS) to create more meaningful interactions. One of the most important decisions in developing agents is how to embody them, because the different ways of embodying agents will significantly affect user perception and performance. This study addressed the issue by investigating the influences of agent embodiments on users in driving contexts. Through a factorial experiment (N = 116), the effects of anthropomorphism level (low vs. high) and physicality (virtual vs. physical presence) on users' trust, perceived control, and driving performance were examined. Results revealed an interaction effect between anthropomorphism level and physicality on both users' perceived control and cognitive trust. Specifically, when encountering high-level anthropomorphized agents, consumers reported lower ratings of trust toward the physically present agent than toward the virtually present one, and this interaction effect was mediated by perceived control. Although no main effects of anthropomorphism level or physicality were found, additional analyses showed that anthropomorphism level significantly improved users' cognitive trust for those unfamiliar with IVIS. No significant differences were found in terms of driving performances. These results indicate the influences of in-vehicle agents' embodiments on drivers' experience.
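The 2 × 2 factorial design described above (anthropomorphism level × physicality) is the kind of design typically analysed with an ANOVA that includes the interaction term, followed by a mediation test for perceived control (e.g., bootstrapped indirect effects). A minimal sketch of such an analysis is given below; the file trust.csv and its column names are hypothetical placeholders, and this is not the authors' analysis script.
```python
# Illustrative sketch of a 2x2 factorial analysis (anthropomorphism x physicality).
# "trust.csv" and its column names are hypothetical, not the cited study's data.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("trust.csv")  # columns: anthropomorphism, physicality, cognitive_trust

# OLS model with both main effects and their interaction term.
model = smf.ols("cognitive_trust ~ C(anthropomorphism) * C(physicality)", data=df).fit()

# Type II ANOVA table: tests the main effects and the interaction reported above.
print(sm.stats.anova_lm(model, typ=2))
```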
... Virtual agents are promising tools to investigate emotional interactions with high ecological validity and control (Parsons, 2015;Pan and Hamilton, 2018). Androids may be comparably useful in this respect, and also have the unique advantage of being physically present (Li, 2015). If androids' facial expressions can be developed and validated based on psychological evidence, they will constitute an important research tool for investigating emotional interactions. ...
... Interactions with virtual agents may promote both ecological validity and control (Parsons, 2015;Pan and Hamilton, 2018); however, virtual agents are obviously not physically present, which may limit ecological validity to some degree. Several studies have reported that physically present robots elicited greater emotional responses than virtual agents (e.g., Bartneck, 2003;Fasola and Mataric, 2013;Li et al., 2019; for a review, see Li, 2015). Taken together, our data suggest that androids like Nikola, which are human-like in appearance and facial expressions, and can physically coexist with humans, are valuable research tools for ecologically valid and controlled research on facial emotional interaction. ...
Article
Full-text available
Android robots capable of emotional interactions with humans have considerable potential for application to research. While several studies developed androids that can exhibit human-like emotional facial expressions, few have empirically validated androids’ facial expressions. To investigate this issue, we developed an android head called Nikola based on human psychology and conducted three studies to test the validity of its facial expressions. In Study 1, Nikola produced single facial actions, which were evaluated in accordance with the Facial Action Coding System. The results showed that 17 action units were appropriately produced. In Study 2, Nikola produced the prototypical facial expressions for six basic emotions (anger, disgust, fear, happiness, sadness, and surprise), and naïve participants labeled photographs of the expressions. The recognition accuracy of all emotions was higher than chance level. In Study 3, Nikola produced dynamic facial expressions for six basic emotions at four different speeds, and naïve participants evaluated the naturalness of the speed of each expression. The effect of speed differed across emotions, as in previous studies of human expressions. These data validate the spatial and temporal patterns of Nikola’s emotional facial expressions, and suggest that it may be useful for future psychological studies and real-life applications.
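The claim that recognition accuracy exceeded chance for all six emotions is usually checked with a binomial test against a chance level of 1/6 (assuming a six-alternative forced choice). The snippet below is a sketch with invented counts, not the study's data.
```python
# Sketch: is emotion-recognition accuracy above chance (1/6 in a six-alternative choice)?
# The counts below are invented for illustration; they are not the study's data.
from scipy.stats import binomtest

n_raters = 30  # hypothetical number of participants labelling each expression
correct = {"anger": 21, "disgust": 12, "fear": 14,
           "happiness": 28, "sadness": 22, "surprise": 25}

for emotion, k in correct.items():
    result = binomtest(k, n_raters, p=1 / 6, alternative="greater")
    print(f"{emotion}: {k}/{n_raters} correct, p = {result.pvalue:.4f}")
```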
... It has also been shown that the physical presence of robots improves task compliance and increases positive understandings of interactions [3]. In fact, a review of 33 experimental studies has shown that physically present robots have more authority, are more persuasive, and are perceived more positively compared to when the same robot or a virtual agent is digitally displayed on a screen [22], which makes them very suitable for interventions. ...
... Table rows excerpted from the cited article: Receiving social skills training (e.g., how to communicate and engage with other people) (24, 8.60); Monitoring and tracking mood changes and providing feedback on my progress (22, 7.89); Practicing social skills via role plays (e.g., practicing public speaking) (21, 7.53). ...
... Still, since a virtual agent is not physically present, it induces less psychological response than a fully physically embodied robot does. This issue was studied in a survey of 33 experimental studies, which compared people's interaction with physical robots and virtual agents [156]. Results from the survey discovered that physically present robots were perceived as more persuasive and positive than virtual agents, and they induced better user performance and more salient behavioural and attitudinal responses [156]. ...
... Such findings hint at the potential social robots have in facilitating therapeutic outcomes and, consequently, mitigating the current challenges experienced in accessing mental health services. ...
Article
Full-text available
Social anxiety disorder or social phobia is a condition characterized by debilitating fear and avoidance of different social situations. We provide an overview of social anxiety and evidence-based behavioural and cognitive treatment approaches for this condition. However, treatment avoidance and attrition are high in this clinical population, which calls for innovative approaches, including computer-based interventions, that could minimize barriers to treatment and enhance treatment effectiveness. After reviewing existing assistive technologies for mental health interventions, we provide an overview of how social robots have been used in many clinical interventions. We then propose to integrate social robots in conventional behavioural and cognitive therapies for both children and adults who struggle with social anxiety. We categorize the different therapeutic roles that social robots can potentially play in activities rooted in conventional therapies for social anxiety and oriented towards symptom reduction, social skills development, and improvement in overall quality of life. We discuss possible applications of robots in this context through four scenarios. These scenarios are meant as ‘food for thought’ for the research community which we hope will inspire future research. We discuss risks and concerns for using social robots in clinical practice. This article concludes by highlighting the potential advantages as well as limitations of integrating social robots in conventional interventions to improve accessibility and standard of care as well as outlining future steps in relation to this research direction. Clearly recognizing the need for future empirical work in this area, we propose that social robots may be an effective component in robot-assisted interventions for social anxiety, not replacing, but complementing the work of clinicians. We hope that this article will spark new research, and research collaborations in the highly interdisciplinary field of robot-assisted interventions for social anxiety.
... Video call compared to face-to-face. Since talking to a physical robot in the same room might give a different experience than talking with a robot in front of a camera [13], 15 participants were asked how they think the video-mediated interaction compares to a face-to-face interaction with the robot. Note that this was a hypothetical question for most children, since they had never interacted with a robot face-to-face. ...
... The work we describe comes with several limitations. 1) The children that participated (ages 11-13) were older than our target audience (ages 10-12), where Druin [6] found children of ages 7-10 to be the best design partners. 2) Researchers and teachers had limited time to spend with each design group, while past studies suggest that inter-generational collaboration works best [6]. ...
Preprint
Full-text available
Our research project (CHATTERS) is about designing a conversational robot for children's digital information search. We want to design a robot with a suitable conversation, one that fosters a responsible trust relationship between child and robot. In this paper we give: (1) a preliminary view of an empirical study on children's trust in robots that provide information, which was conducted via video call due to the COVID-19 pandemic; (2) a preliminary analysis of a co-design workshop we conducted, where the pandemic may have impacted children's design choices; and (3) a description of the upcoming research activities we are developing.
... Finally, the persuasive role of thick AI will likely be extremely complex as there are many technological variables (e.g., voice only versus embodied; humanoid versus machine-like) that will likely interact with contextual and individual difference factors to influence persuasion. In particular, an AA's embodiment is important to humans' immersion in, and reactions to, an interaction: A review of over 30 experimental studies found that physically present and embodied robots (when compared with entities digitally displayed on a screen) were more persuasive and generally perceived more positively (Li, 2015). Thus, physical presence may provide thicker social cues (Li, 2015), particularly primary cues (Lombard & Xu, 2021), highlighting the importance of affordances in the MAIN model (Sundar, 2008). ...
... Therefore, future research should investigate how the persuasive influence of a technology's modality, agency, interactivity, and navigability interacts with social cues to influence persuasive effectiveness and perceptions of presence. ...
Article
Artificial intelligence (AI) has profound implications for both communication and persuasion. We consider how AI complicates and promotes rethinking of persuasion theory and research. We define AI-based persuasion as a symbolic process in which a communicative-AI entity generates, augments, or modifies a message—designed to convince people to shape, reinforce, or change their responses—that is transmitted to human receivers. We review theoretical perspectives useful for studying AI-based persuasion—the Computers Are Social Actors (CASA) paradigm, the Modality, Agency, Interactivity, and Navigability (MAIN) model, and the heuristic-systematic model of persuasion—to explicate how differences in AI complicate persuasion in two ways. First, thin AI exhibits few (if any) machinic (i.e., AI) cues, social cues might be available, and communication is limited and indirect. Second, thick AI exhibits ample machinic and social cues, AI presence is obvious, and communication is direct and interactive. We suggest avenues for future research in each case.
... Robots provide an important physical presence, with many open questions relating to studying the effective responses to robots in human-agent interaction [13]. Physically present agents could prove to be an assistive technology in education [14], home environments [15], as well as to promote wellbeing in older populations [16]. ...
... We utilize cues from our facial expressions, gaze orientation, and body language to augment the speech, such that if a word was missed, or the attention of the receiver was momentarily fixed on something else, the meaning of the communicated message can still be recovered from the context as a whole. Robots provide an important physical presence in human factors research, which can potentially be used to mimic these biological processes of communication [13]. ...
Conference Paper
Full-text available
Joint action problems remain a difficult endeavor when social robots are involved. The level of patience that a person has when interacting with a robot has been shown to vary based on their expectations of the robot's capabilities and the difficulty of the task. Social robots, especially humanoid ones, require an additional level of consideration when designing behaviors and interactions to meet such expectations. We aim to explore an approach rooted in the perceived embodiment of a humanoid robot to improve the interaction experience. This manuscript outlines previous work and inspirations for our development thus far. By utilizing a definition of embodiment, we have identified four mutual perturbatory channels that humanoid robots share with people. Through an experimental joint action scenario, we aim to quantify the bandwidth of these perturbatory channels, which we can use to further maximize the embodiment of the robot and improve the user-experience of interaction with the robot.
... To address more general factual questions from the public, this strategy can be extended to seek answers through domain-specific ontologies, web crawling or open-domain question-answering systems, such as IBM Watson. However, when a QA system is to be designed for a (physical) social robot, it adds another layer of difficulty for question answering due to the embodiment factor [9][10][11][12] and physical presence [13,14], which can affect users' decision-making [15], thus potentially biasing users' question-asking behaviour. Moreover, the novelty effect can play a crucial role [16][17][18][19]. ...
... For example, Cantrell et al. [33] used human-human interaction in a team search instructional task to create their dialogue corpus for training. However, for work with humanoid social robots, this may not yield the best dialogue results due to the difference the physical presence of the robot may bring to the conversation [13]. Cruz-Sandoval et al. [34] recognised this issue and proposed the creation of an HRI corpus for conversational dialogue for training machine learning systems. ...
Article
Full-text available
The role of a human assistant, such as receptionist, is to provide specific information to the public. Questions asked by the public are often context dependent and related to the environment where the assistant is situated. Should similar behaviour and questions be expected when a social robot offers the same assistant service to visitors? Would it be sufficient for the robot to answer only service-specific questions, or is it necessary to design the robot to answer more general questions? This paper aims to answer these research questions by investigating the question-asking behaviour of the public when interacting with a question-answering social robot. We conducted the study at a university event that was open to the public. Results demonstrate that almost no participants asked context-specific questions to the robot. Rather, unrelated questions were common and included queries about the robot’s personal preferences, opinions, thoughts and emotional state. This finding contradicts popular belief and common sense expectations from what is otherwise observed during similar human–human interactions. In addition, we found that incorporating non-context-specific questions in a robot’s database increases the success rate of its question-answering system.
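One way to read the design implication above, namely that adding non-context-specific entries to the robot's database raises the success rate of its question-answering system, is as a lookup with a general-knowledge fallback. The following is a deliberately simplified, hypothetical keyword matcher and not the system evaluated in the paper.
```python
# Deliberately simplified QA lookup with service-specific entries plus a general
# (non-context-specific) fallback set. Hypothetical sketch, not the paper's system.
SERVICE_QA = {
    "where is the registration desk": "The registration desk is in the main lobby.",
    "when does the event end": "The event ends at 5 pm.",
}
GENERAL_QA = {
    "what is your favourite colour": "I like blue, like my LEDs.",
    "how do you feel": "I feel great, thanks for asking!",
}

def answer(question: str) -> str:
    q = question.lower().strip("?! .")
    # Try service-specific entries first, then fall back to general small talk.
    for database in (SERVICE_QA, GENERAL_QA):
        for key, reply in database.items():
            if key in q:
                return reply
    return "Sorry, I don't know the answer to that yet."

print(answer("How do you feel today?"))  # matched by the general fallback set
```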
... Much of this work has compared physically embodied robots to virtual agents depicted on screens. This research demonstrates that embodied physical presence leads to greater influence [17], learning outcomes [18], task performance [19], [20], gaze following from infants [21], proximity [17], exercise [22], persuasiveness [23], positive perception [23], social facilitation [24], forgiveness [24], enjoyableness [1], [22], [25], helpfulness [22], [26], and social attractiveness [22]. However, these works have not considered morphologies that blend the physical and the virtual, as is enabled by AR technologies. ...
Conference Paper
Full-text available
Augmented Reality (AR) or Mixed Reality (MR) enables innovative interactions by overlaying virtual imagery over the physical world. For roboticists, this creates new opportunities to apply proven non-verbal interaction patterns, like gesture, to physically-limited robots. However, a wealth of HRI research has demonstrated that there are real benefits to physical embodiment (compared, e.g., to virtual robots displayed on screens). This suggests that AR augmentation of virtual robot parts could lead to similar challenges. In this work, we present the design of an experiment to objectively and subjectively compare the use of AR and physical arms for deictic gesture, in AR and physical task environments. Our future results will inform robot designers choosing between the use of physical and virtual arms, and provide new nuanced understanding of the use of mixed-reality technologies in HRI contexts.
... As an augmentation of human caregivers with respect to the substantial health care labor shortage and the high burden of caregiving, robots may provide care with high repeatability and without any complaints or fatigue [9]. Moreover, a meta-analysis by Li [10] comparing how people interacted with physical robots and virtual agents showed that physically present robots were found to be more persuasive, perceived more positively, and resulted in better user performance compared to virtual agents. Furthermore, robots can facilitate social interaction, communication, engagement, and positive mood to improve the performance of the system [11]. ...
Article
In this work, an online survey was used to understand the acceptability of humanoid robots and users’ needs in using these robots to assist with care among people with Alzheimer’s disease and related dementias (ADRD), their family caregivers, health care professionals, and the general public. From November 1, 2020 to March 13, 2021, a total of 631 complete responses were collected, including 80 responses from people with mild cognitive impairment or ADRD, 245 responses from caregivers and health care professionals, and 306 responses from the general public. We carried out detailed comparisons between people with ADRD, caregivers, and the general public for their opinions about robot acceptance, robotic functionality, usability, and ethical issues. Overall, people with ADRD, caregivers, and the general public showed positive attitudes towards using the robot to assist with care for people with ADRD. The top three functions of robots required by the group of people with ADRD were reminders to take medicine, emergency call service, and helping contact medical services. Additionally, we included a discussion of the comments, suggestions, and concerns from the caregivers and the general public. We recognized common concerns raised by the participants, including the cost of the robot, the machine-like voice of the robot, and reduced acceptability of the robot by people with ADRD due to cognitive deficit. The results of this article are of significant relevance for the applications of social robotics in dementia care and for biomedical interventions related to AI and robotics in healthcare. Moreover, the discussions on potentials and limitations identified in this article will shed light on the future design, development, and evaluation of socially assistive robots for people living with dementia.
... Assistive robots can also aid patients through social interaction, rather than offering physical support (Feil-Seifer and Mataric, 2005): these are known as Socially Assistive Robots (SAR). The robot's embodiment positively affects the users' motivation and performance, through noncontact feedback, encouragement and constant monitoring (Brooks et al., 2012;Li, 2015;Vasco et al., 2019). The Kaspar robot is a child-sized humanoid designed to assist autistic children in learning new social communication skills, while improving their engagement abilities and attention (Wood et al., 2021). ...
Article
Full-text available
According to the World Health Organization, the percentage of the healthcare-dependent population, such as the elderly and people with disabilities, among others, will increase over the coming years. This trend will put a strain on the health and social systems of most countries. The adoption of robots could assist these health systems in responding to this increased demand, particularly in high-intensity and repetitive tasks. In a previous work, we compared a Socially Assistive Robot (SAR) with a Virtual Agent (VA) during the execution of a rehabilitation task. The SAR consisted of a humanoid R1 robot, while the Virtual Agent represented its simulated counterpart. In both cases, the agents evaluated the participants’ motions and provided verbal feedback. Participants reported higher levels of engagement when training with the SAR. Given that the architecture has been proven to be successful for a rehabilitation task, other sets of repetitive tasks could also take advantage of the platform, such as clinical tests. A commonly performed clinical test is the Timed Up and Go (TUG), where the patient has to stand up, walk 3 m to a goal line and back, and sit down. To handle this test, we extended the architecture to evaluate lower limbs’ motions, follow the participants while continuously interacting with them, and verify that the test is completed successfully. We implemented the scenario in Gazebo by simulating both participants and the interaction with the robot. A full interactive report is created when the test is over, providing the extracted information to the specialist. We validate the architecture in three different experiments, each with 1,000 trials, using the Gazebo simulation. These experiments evaluate the ability of this architecture to analyse the patient and verify whether they are able to complete the TUG test, as well as the accuracy of the measurements obtained during the test. This work provides the foundations towards more thorough clinical experiments with a large number of participants with a physical platform in the future. The software is publicly available in the assistive-rehab repository and fully documented.
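The Timed Up and Go protocol itself reduces to timing a fixed sequence of phases (stand up, walk 3 m to the goal line, return, sit down). As a rough illustration of how completion could be verified and the total time extracted from tracked body data, here is a hedged sketch over hypothetical (timestamp, distance-from-chair, standing-flag) samples; it is not the assistive-rehab implementation.
```python
# Hedged sketch: time a Timed Up and Go (TUG) trial from hypothetical tracking samples.
# Each sample is (t_seconds, distance_from_chair_m, is_standing). Not the paper's code.
GOAL_DISTANCE_M = 3.0

def time_tug(samples):
    t_start = t_end = None
    reached_goal = False
    for t, dist, standing in samples:
        if t_start is None and standing:
            t_start = t                      # patient stood up
        if t_start is not None and dist >= GOAL_DISTANCE_M:
            reached_goal = True              # reached the 3 m line
        if reached_goal and not standing and dist < 0.5:
            t_end = t                        # sat back down near the chair
            break
    if t_start is None or t_end is None:
        return None                          # test not completed successfully
    return t_end - t_start

samples = [(0.0, 0.0, False), (1.0, 0.1, True), (5.0, 3.1, True),
           (9.0, 0.3, True), (10.0, 0.2, False)]
print(time_tug(samples))  # -> 9.0 seconds for this made-up trial
```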
... It seems that the physical embodiment of a SR outperforms a virtual one, both in task performance and in the perception of the users. However, results are more inconclusive if the concepts of physical presence and embodiment are separated, either by comparing physically present SIAs to virtually present SIAs, or by comparing physical SIAs with virtual SIAs both presented on a screen [Li 2015]. ...
... Social robots are designed to interact with humans in a natural, interpersonal way (Breazeal et al., 2016) and to support their users through social interaction, often in educational contexts. Research suggests that the physical presence of a robot has a positive impact on learning outcomes relative to virtual representations or no learning support (Leyzberg et al., 2012; Kennedy et al., 2015; Li, 2015). In the context of learning, robots can take on two different roles: either as a passive educational tool (e.g., using a robot to teach students how to program a robot) or as an active participant in the learning situation. ...
Article
Full-text available
Learning in higher education scenarios requires self-directed learning and the challenging task of self-motivation while individual support is rare. The integration of social robots to support learners has already shown promise to benefit the learning process in this area. In this paper, we focus on the applicability of an adaptive robotic tutor in a university setting. To this end, we conducted a long-term field study implementing an adaptive robotic tutor to support students with exam preparation over three sessions during one semester. In a mixed design, we compared the effect of an adaptive tutor to a control condition across all learning sessions. With the aim to benefit not only motivation but also academic success and the learning experience in general, we draw from research in adaptive tutoring, social robots in education, as well as our own prior work in this field. Our results show that opting in for the robotic tutoring is beneficial for students. We found significant subjective knowledge gain and increases in intrinsic motivation regarding the content of the course in general. Finally, participation resulted in a significantly better exam grade compared to students not participating. However, the extended adaptivity of the robotic tutor in the experimental condition did not seem to enhance learning, as we found no significant differences compared to a non-adaptive version of the robot.
... The reason is that embodied agents have a physical body and are physically present in a job interview situation. These characteristics are expected to make a robot more engaging and to elicit more favorable psychological responses, e.g., empathy and trust, and a greater sense of social presence compared to communication via a screen or a telephone (Li, 2015;Seo et al., 2015). If, in addition to these advantages, a teleoperated robot as a fair proxy is able to reduce or eliminate biases from the job interview, it is plausible that this type of interview could yield higher perceptions of fairness than a face-to-face job interview. ...
Article
Full-text available
This research examines the perceived fairness of two types of job interviews: robot-mediated and face-to-face interviews. The robot-mediated interview tests the concept of a fair proxy in the shape of a teleoperated social robot. In Study 1, a mini-public (n=53) revealed four factors that influence fairness perceptions of the robot-mediated interview and showed how HR professionals' perception of fair personnel selection is influenced by moral pragmatism despite clear moral awareness of discriminative biases in interviews. In Study 2, an experimental survey (n=242) conducted at an unemployment center showed that the respondents perceived the robot-mediated interview as fairer than the face-to-face interview. Overall, the studies suggest that HR professionals and jobseekers exhibit diverging fairness perceptions and that the business case for the robot-mediated interview undermines its social case (i.e., reducing discrimination). The paper concludes by addressing key implications and avenues for future research.
... In fact, endowing AI systems with humanoid features has been shown to be effective for encouraging health-related behavior. A series of studies have shown that humanoid AI systems, or robots, were more effective for healthcare treatment delivery [3,4], weight management [5,6], and persuasion in health promotion [7][8][9] than non-humanoid media such as smartphones, computers, or screen-based avatars. ...
Article
Full-text available
Humans tend to interact socially with humanoid devices, such as robots. Therefore, a possible application of robotics technology is the promotion of pro-social behavior, namely recycling. To test the effectiveness of this application, an experimental setting in which participants were required to dispose of waste was created. Two types of electronic instructors, a robot and a tablet computer, were located close to the disposal area to provide instructions on appropriate waste disposal. The comparison of the effectiveness of the two types of electronic instructors found that participants exposed to the robot sorted the waste more accurately than participants exposed to the tablet computer. Scores for perceived anthropomorphism and induced empathy were higher for the robot than the tablet computer. We conclude that robots, because of their anthropomorphic features, are more likely to evoke empathy than tablet computers, and thus robots can be more effective in promoting pro-social behavior.
... Future research could therefore assess how increasing the level of embodiment alters communication processes, such as when robots allow for more nuanced movements and gestures. In addition, some work has begun to examine the effects of embodiment when users interact through virtual avatars, or through augmented and virtual reality systems that capture natural body movements (e.g., through the use of sensors), but more work is needed to clarify the precise relationship between these different forms of embodiment, the subjective experience of presence, and interpersonal processes (Li, 2015;Mutlu, 2020). For example, future work may ask what types of embodiment produce the strongest experience of presence, and whether activating a sense of presence represents a necessary condition for building common ground through an embodied system. ...
Article
Technology can facilitate communication across large distances. Although today’s technologies enable partners to convey rich verbal and non-verbal information, past research suggests that geographic distance can still hamper remote collaboration. In this study, we investigate whether a telepresence robot, by offering an embodiment of the user, allows communicators to experience their remote partners as being “really there,” overcoming distance effects. We conducted a two-by-two (distance: on-campus vs. across-the-country; embodiment: video-mediated vs. robot-mediated) between-subjects experiment, assessing collaboration in self-disclosure, persuasion, and negotiation tasks. Results showed that, while local participants viewed their remote partners as more present when communicating via telepresence robot, they also exhibited greater impression management in a self-disclosure task than did participants in video-mediated interactions. Consistent with embodiment helping to overcome geographic distance effects, we found that greater geographic distance had a negative impact on collaboration outcomes when a negotiation task was conducted via video-mediated communication, but not when conducted via robot-mediated communication. We did not observe effects of geographic distance, or interaction effects between embodiment and geographic distance, in the self-presentation and persuasion tasks. These findings suggest that a partner’s embodiment may change how individuals present themselves, and how geographic distance is experienced in remote collaboration, although these effects may vary across types of tasks being conducted remotely.
... embodiment, HCI research has determined that users (most of the time) prefer interactions with physically situated robots and with human-like appearance and behaviour. Physical embodiment has been shown to have an impact in interactions [137,230,299,309,401], with elderly users [151], with children [225], in interactions through tele-presence [270,420], or in comparisons to virtual embodiments [44,154,215,229,262,288,333,421]. Even though not all computing needs to be embodied, there is significant evidence that users find interactions with embodied technologies more satisfying. ...
Thesis
Full-text available
This dissertation presents advances in HCI through a series of studies focusing on task-oriented interactions between humans and between humans and machines. The notion of mutual understanding is central, also known as grounding in psycholinguistics, in particular how people establish understanding in conversations and what interactional phenomena are present in that process. Addressing the gap in computational models of understanding, interactions in this dissertation are observed through multisensory input and evaluated with statistical and machine-learning models. As it becomes apparent, miscommunication is ordinary in human conversations and therefore embodied computer interfaces interacting with humans are subject to a large number of conversational failures. Investigating how these interfaces can evaluate human responses to distinguish whether spoken utterances are understood is one of the central contributions of this thesis. The first papers (Papers A and B) included in this dissertation describe studies on how humans establish understanding incrementally and how they co-produce utterances to resolve misunderstandings in joint-construction tasks. Utilising the same interaction paradigm from such human-human settings, the remaining papers describe collaborative interactions between humans and machines with two central manipulations: embodiment (Papers C, D, E, and F) and conversational failures (Papers D, E, F, and G). The methods used investigate whether embodiment affects grounding behaviours among speakers and what verbal and non-verbal channels are utilised in response and recovery to miscommunication. For application to robotics and conversational user interfaces, failure detection systems are developed predicting in real-time user uncertainty, paving the way for new multimodal computer interfaces that are aware of dialogue breakdown and system failures. Through the lens of Theory, Studies, and Computation, a comprehensive overview is presented on how mutual understanding has been observed in interactions with humans and between humans and machines. A summary of literature in mutual understanding from psycholinguistics and human-computer interaction perspectives is reported. An overview is also presented on how prior knowledge in mutual understanding has and can be observed through experimentation and empirical studies, along with perspectives of how knowledge acquired through observation is put into practice through the analysis and development of computational models. Derived from literature and empirical observations, the central thesis of this dissertation is that embodiment and mutual understanding are intertwined in task-oriented interactions, both in successful communication but also in situations of miscommunication.
... The studies in the modality category investigated the persuasive effect of the presence of social robots, compared to other forms of persuasive agents (i.e., non-social robots, pamphlets, desktop PC, kiosk, virtual agents and human) (see Table 6). In line with media equation theory [55] and a previous survey study on different types of agent presence [43], the studies in the current survey have provided ample evidence that the presence of SR can lead to positive compliance with persuasive communications. Generally, when compared to other means of persuasion, SR have been shown to have a stronger influence [28,38,49,84,86]. ...
Article
Full-text available
There is a growing body of work reporting on experimental work on social robotics (SR) used for persuasive purposes. We report a comprehensive review of persuasive social robotics research with the aim of better informing their design, by summarizing literature on factors impacting their persuasiveness. From 54 papers, we extracted the SR design features evaluated in the studies and the evidence of their efficacy. We identified five main categories in the factors that were evaluated: modality, interaction, social character, context and persuasive strategies. Our literature review finds generally consistent effects for factors in modality, interaction and context, whereas more mixed results were shown for social character and persuasive strategies. This review further summarizes findings on interaction effects of multiple factors for the persuasiveness of social robots. Finally, based on the analysis of the papers reviewed, suggestions for factor expression design and evaluation, and the potential for using qualitative methods and longer-term studies are discussed.
... Recognizability of Ekman's basic expressions is a common test used to gauge the abilities of an expressive robot face [190]. The recorded accuracies are seen as a good sign, especially since only video clips of the robot were shown: physically present robots are perceived as more persuasive, and result in better user performance, than their visually presented counterparts [191]; physical presence often seems crucial for good perception of emotional information conveyed by a robotic agent [192]. It was interesting to observe that the emotion of disgust was inconsistently identified in this study, since this emotion is frequently omitted from these kinds of experiments, due to its specific expression that also includes a nose movement [189,192]. ...
Article
Full-text available
This paper shows the structure of a mechanical system with 9 DOFs for driving robot eyes, as well as the system’s ability to produce facial expressions. It consists of three subsystems which enable the motion of the eyeballs, eyelids, and eyebrows independently to the rest of the face. Due to its structure, the mechanical system of the eyeballs is able to reproduce all of the motions human eyes are capable of, which is an important condition for the realization of binocular function of the artificial robot eyes, as well as stereovision. From a kinematic standpoint, the mechanical systems of the eyeballs, eyelids, and eyebrows are highly capable of generating the movements of the human eye. The structure of a control system is proposed with the goal of realizing the desired motion of the output links of the mechanical systems. The success of the mechanical system is also rated on how well it enables the robot to generate non-verbal emotional content, which is why an experiment was conducted. Due to this, the face of the human-like robot MARKO was used, covered with a face mask to aid in focusing the participants on the eye region. The participants evaluated the efficiency of the robot’s non-verbal communication, with certain emotions achieving a high rate of recognition.
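Because the eyeball subsystem is intended to support binocular fixation, a useful back-of-the-envelope computation is the pan and tilt angle each eye needs to fixate a 3D point, with vergence following from the interocular baseline. The snippet below is a purely geometric illustration under simplified assumptions (ideal pan-tilt eyes, head frame at the midpoint between the eyes) and is not the control system proposed in the paper.
```python
# Geometric sketch: pan (yaw) and tilt (pitch) angles for two ideal eyes fixating a 3D point.
# Head frame: x right, y up, z forward; origin midway between the eyes. Illustrative only.
import math

EYE_BASELINE_M = 0.065   # assumed interocular distance

def eye_angles(target, eye_x_offset):
    x, y, z = target
    # Shift the target into this eye's frame (eye displaced along x).
    xe = x - eye_x_offset
    pan = math.atan2(xe, z)                       # yaw around the vertical axis
    tilt = math.atan2(y, math.hypot(xe, z))       # pitch toward the target
    return math.degrees(pan), math.degrees(tilt)

target = (0.10, 0.05, 0.50)                       # a point 0.5 m ahead, slightly up and right
left = eye_angles(target, -EYE_BASELINE_M / 2)
right = eye_angles(target, +EYE_BASELINE_M / 2)
print("left eye (pan, tilt):", left)
print("right eye (pan, tilt):", right)
print("vergence (deg):", left[0] - right[0])
```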
... Tangibility is associated with people having a shape in mind when they think about a term and whether a term is associated with an entity humans can touch [25]. People may interact differently with agents having a physical appearance compared to disembodied agents, perceive them as more socially present [38,41], and may have different expectations regarding relationship building with more tangible entities [25]. With respect to the terms we use, we imagine that terms such as "computer" or "robot" are more likely perceived as tangible compared to terms such as "algorithm" or "artificial intelligence" since the former have a shape while the latter reflect disembodied manifestations of ADM systems. ...
... Wainer et al. (2007) postulated that "a robot's physical presence augments its ability to generate rich communication" (Wainer et al., 2007; Deng et al., 2019), emphasizing that physical embodiment provides more natural and social cues that can be utilized to communicate intentions and internal states (Lohan et al., 2010; Deng et al., 2019). Similar results, in which people find physically co-present robots to be more engaging than digitally embodied forms, have been replicated in other labs (Li, 2015). These results suggest a VUI agent's embodiment can affect users' engagement and perception, making it a critical feature to consider in VUI agent development. ...
Article
Full-text available
As voice-user interfaces (VUIs), such as smart speakers like Amazon Alexa or social robots like Jibo, enter multi-user environments like our homes, it is critical to understand how group members perceive and interact with these devices. VUIs engage socially with users, leveraging multi-modal cues including speech, graphics, expressive sounds, and movement. The combination of these cues can affect how users perceive and interact with these devices. Through a set of three elicitation studies, we explore family interactions (N = 34 families, 92 participants, ages 4–69) with three commercially available VUIs with varying levels of social embodiment. The motivation for these three studies began when researchers noticed that families interacted differently with three agents when familiarizing themselves with the agents and, therefore, we sought to further investigate this trend in three subsequent studies designed as a conceptual replication study. Each study included three activities to examine participants’ interactions with and perceptions of the three VUIs: an agent exploration activity, a perceived personality activity, and a user experience ranking activity. Consistently across studies, participants interacted significantly more with an agent with a higher degree of social embodiment, i.e., a social robot such as Jibo, and perceived the agent as more trustworthy, having higher emotional engagement, and having higher companionship. There were some nuances in interaction and perception with different brands and types of smart speakers, i.e., Google Home versus Amazon Echo, or Amazon Show versus Amazon Echo Spot between the studies. In the last study, a behavioral analysis was conducted to investigate interactions between family members and with the VUIs, revealing that participants interacted more with the social robot and interacted more with their family members around the interactions with the social robot. This paper explores these findings and elaborates upon how these findings can direct future VUI development for group settings, especially in familial settings.
... Robotic systems have been effectively employed in educational applications with the aim of increasing engagement and social interaction among youngsters, supporting rehabilitation or therapy, and enhancing the overall learning experience [1]. In particular, there are many examples in the existing literature where the use of robots has made the educational experience more engaging and enjoyable, thus supporting knowledge retention, and leading to an overall positive perception of the experience, e.g., [2], [3], [4]. ...
Preprint
This paper describes the methodology and outcomes of a series of educational events conducted in 2021 which leveraged robot swarms to educate high-school and university students about epidemiological models and how they can inform societal and governmental policies. With a specific focus on the COVID-19 pandemic, the events consisted of 4 online and 3 in-person workshops where students had the chance to interact with a swarm of 20 custom-built brushbots -- small-scale vibration-driven robots optimized for portability and robustness. Through the analysis of data collected during a post-event survey, this paper shows how the events positively impacted the students' views on the scientific method to guide real-world decision making, as well as their interest in robotics.
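The epidemiological models that such workshops centre on are typically variants of the compartmental SIR model; a minimal discrete-time version is sketched below for context. The parameter values are arbitrary illustrations and are not taken from the events described.
```python
# Minimal discrete-time SIR model, illustrating the kind of epidemiological model
# the workshops used robot swarms to teach. Parameters here are arbitrary examples.
def simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=160, dt=1.0):
    s, i, r = s0, i0, 1.0 - s0 - i0
    history = [(0.0, s, i, r)]
    for step in range(1, int(days / dt) + 1):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((step * dt, s, i, r))
    return history

history = simulate_sir()
peak_day, _, peak_i, _ = max(history, key=lambda row: row[2])
print(f"Peak infected fraction {peak_i:.2f} around day {peak_day:.0f} (R0 = {0.3 / 0.1:.1f})")
```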
... Furthermore, the physical embodiment may also influence human interaction. Human participants may express their emotions more intensely when interacting with a humanoid robot than with a virtual agent [11]. Therefore, this study focuses on human-robot negotiations where a humanoid robot interacts with human negotiators by following a speech-based negotiation protocol [2]. ...
Conference Paper
Full-text available
Negotiation is one of the crucial processes for resolving conflicts between parties. In automated negotiation, agent designers mostly take the opponent's offers and the remaining time into account while designing their strategies. When designing a negotiating agent that interacts with a human directly, additional information, such as the opponent's emotional changes during the negotiation, can help establish a better interaction and reach a settlement that is admissible for joint interests. Accordingly, this paper proposes a bidding strategy for humanoid robots, incorporating their opponents' emotional states and awareness of the agent's changing behavior.
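As context for how an opponent's emotional state could enter a bidding strategy, the sketch below adds a sensed valence signal to a standard time-dependent concession curve by nudging the concession exponent. This is an illustrative construction under assumed parameters, not the strategy proposed in the paper; the choice to concede faster when the opponent appears upset is itself an assumption, and the opposite mapping is equally possible.
```python
# Hedged sketch: time-dependent concession target adjusted by the opponent's sensed emotion.
# valence in [-1, 1]: negative = opponent seems upset, positive = opponent seems pleased.
# Illustrative only, not the bidding strategy proposed in the paper.
def target_utility(t, valence, u_min=0.4, u_max=1.0, base_e=1.0, emotion_gain=0.5):
    """t in [0, 1] is normalized negotiation time; returns the utility to demand next."""
    # Larger e concedes faster (conceder); smaller e concedes slower (boulware).
    e = max(0.1, base_e + emotion_gain * (-valence))  # assumed: concede faster if upset
    return u_max - (u_max - u_min) * (t ** (1.0 / e))

for t in (0.0, 0.5, 1.0):
    print(t,
          round(target_utility(t, valence=-0.8), 3),   # opponent upset -> faster concession
          round(target_utility(t, valence=0.8), 3))    # opponent pleased -> slower concession
```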
... Current work on human-AI interaction suggests that in every mode of interaction, the interface plays a vital role, as it facilitates mutual communication (Li, 2015). The communication features of an AI system can influence how humans approach it (Mou & Xu, 2017). ...
Article
Full-text available
In this paper, the authors focus on Artificial Intelligence as a tangible technology that is designed to sense, comprehend, act, and learn. There are two manifestations of AI in the medical service: an algorithm that analyzes and interprets the test result and a virtual assistant that communicates the result to the patient. The aim of this paper is to consider how AI can substitute for a doctor in measuring human health and how the interaction with a virtual assistant impacts one’s visual attention processes. Theoretically, the article refers to the following research strands: Human-Computer Interaction, technology in services, implementation of AI in the medical sector, and behavioral economics. By conducting an eye-tracking experimental study, it is demonstrated that the perception of medical diagnosis does not differ across experimental groups (human vs. AI). However, it is observed that participants exposed to the AI-based assistant focused more on the button allowing them to contact a real doctor.
... For instance, Lee and colleagues suggested that the primary function of social robots is to interact with humans (Lee et al., 2006). While scholars may diverge on whether social robots should be embodied, humanlike, or fully autonomous (Breazeal, 2003; Li, 2015; Pfeifer and Scheier, 1999; Zhao, 2006), social robots should at least feature a certain degree of automation and be partly used for social interaction with humans. ...
Article
The Computers are Social Actors (CASA) paradigm was proposed more than two decades ago to understand humans' interaction with computer technologies. Today, as emerging technologies like social robots become more personal and persuasive, questions of how users respond to them socially, what individual factors leverage the relationship, and what constitutes the social influence of these technologies need to be addressed. A lab experiment was conducted to examine the interactions between individual differences and social robots' vocal and kinetic cues. Results suggested that users developed more trust in a social robot with a human voice than with a synthetic voice. Users also developed more intimacy and interest in the social robot when it was paired with humanlike gestures. Moreover, individual differences including users' gender, attitudes toward robots, and robot exposure affected their psychological responses. The theoretical, practical, and ethical value of the findings was further discussed in the study.
... Individuals may experience social presence when they feel the pressure from group members. Presence could relate to the way that a computer agent is displayed to others (Li, 2015; Li, Kizilcec, Bailenson, & Ju, 2016). Lee (2004b) conceptualized social presence as "a psychological state in which virtual social actors (para-authentic or artificial) are experienced as actual social actors in either sensory or non-sensory ways" (p. ...
Article
Past work on the human-computer relationship has investigated a) how computers can mediate the communication between people and b) how computer users perceive computers as social entities. However, little research has investigated how the two fields of research inform, challenge, and integrate with each other. By combining the Computers are Social Actors paradigm and the Social Identity Model of Deindividuation Effects, the present work provides an entry point into the conversation between these two fields. Specifically, this study examines how individuals may form group relations with computer agents. An experiment using a between-subject factorial design was conducted to explore the relationship between the two theoretical frameworks. The findings suggested that sharing the same color cues with multiple computer agents would lead to users' group identification with computer agents. Group identification with computer agents would further influence group conformity, conformity intention, group attraction, and group trustworthiness. However, the degree of compliance with and trust in computer agents was contingent on how much users felt as if these agents had been real humans.
... [19] The complexity of this challenge is increased by the changes in behavioral and attitudinal response when comparing those who directly engage in HRI with physically embodied robots to those who are asked their opinions on images or videos of SARs [20]. Researchers must determine which features are important based on user abilities and interaction context, while ensuring SAR accessibility to a broad and diverse userbase of older adults. ...
Article
Full-text available
As the global population ages, there is an increase in demand for assistive technologies that can alleviate the stresses on healthcare systems. The growing field of socially assistive robotics (SARs) offers unique solutions that are interactive, engaging, and adaptable to different users’ needs. Crucial to having positive human-robot interaction (HRI) experiences in senior care settings is the overall design of the robot, considering the unique challenges and opportunities that come with novice users. This paper presents a novel study that explores the effect of SAR design on HRI in senior care through a results-oriented analysis of the literature. We provide key design recommendations to ensure inclusion for a diverse set of users. Open challenges of considering user preferences during design, creating adaptive behaviors, and developing intelligent autonomy are discussed in detail. SAR features of appearance and interaction mode along with SAR frameworks for perception and intelligence are explored to evaluate individual developments using metrics such as trust, acceptance, and intent to use. Drawing from a diverse set of features, SAR frameworks, and HRI studies, the discussion highlights robot characteristics of greatest influence in promoting wellbeing and aging-in-place of older adults and generates design recommendations that are important for future development.
... Similarly, interactions with social robots have been reported to be more motivating and engaging than a virtual reality counterpart [35]. Furthermore, through their survey on people's perception of physically present social robots versus virtual agents, ref. [36] found that physically present social robots were perceived more positively and considered more persuasive. -Social robots compared to other AI-driven technologies: Several studies have shown that people prefer social robots over other technologies such as tablets or smartphones. ...
Article
Full-text available
The inclusion of technologies such as telepractice and virtual reality in the field of communication disorders has transformed the approach to providing healthcare. This research article proposes the employment of a similar advanced technology – social robots – by providing a context and scenarios for the potential implementation of social robots as supplements to stuttering intervention. The use of social robots has shown potential benefits for all age groups in the field of healthcare. However, such robots have not yet been leveraged to aid people who stutter. We offer eight scenarios involving social robots that can be adapted for stuttering intervention with children and adults. The scenarios in this article were designed by human–robot interaction (HRI) and stuttering researchers and revised according to feedback from speech-language pathologists (SLPs). The scenarios specify extensive details that are amenable to clinical research. A general overview of stuttering, technologies used in stuttering therapy, and social robots in health care is provided as context for treatment scenarios supported by social robots. We propose that existing stuttering interventions can be enhanced by placing state-of-the-art social robots as tools in the hands of practitioners, caregivers, and clinical scientists.
Article
Socially Assistive Robots (SARs) and immersive Virtual Reality (iVR) are interactive platforms that promote user engagement, which can motivate users to adhere to therapeutic frameworks. SARs use social presence to create affective relationships with users, leveraging the human tendency to be driven by social interactions. iVR uses spatial presence to provide an intense multisensory experience that submerges users in a virtual world. We adapted two such platforms – a SAR and an iVR – to deliver cognitive training (CT), by integrating established cognitive tasks in gamified environments that convey a strong sense of presence. Sixty-four participants underwent CT with both platforms. We tested: (1) their perception of both platforms; (2) whether they preferred one over the other in the short term; (3) their projected preferences for long-term training; and (4) whether their preferences correlated with personal characteristics. Participants preferred the virtual experience in the short term, across age and gender. For long-term CT, there was equal projected preference for both platforms. A combination of social and spatial presence may yield engagement in long-term training.
Chapter
Social robots are increasingly being used in education. They can take over various roles including teaching assistant, tutor, and novice. This chapter aims to provide a conceptual overview of the phenomenon. A classification of social robots is outlined; the criteria are visual appearance, social capabilities, and autonomy and intelligence. The majority of robots used in education are humanoid; Nao from SoftBank Robotics is a quasi-standard type. An important social capability is empathy; a model illustrating how a robot can show empathy is discussed. A taxonomy is presented in order to capture the various degrees of robot autonomy. To achieve autonomy, artificial intelligence is necessary. This chapter advocates for a symbiotic design approach where tasks are collaboratively carried out by the teacher and the social robot, utilizing the complementary strength of both parties. This may be in line with the concept of hybrid intelligence. The ethical aspects of social robot use are explored, including privacy, control, responsibility, and the role of teachers. Moreover, the acceptance of social robots is discussed. Overall, attitudes toward social robots seem to be positive; however, there are also contrary findings. Finally, results are presented from a technology acceptance study with a sample of N = 462 university students from the social sciences. The chapter closes with suggestions for further research.
Conference Paper
Full-text available
The research on physically and socially situated artificial agents could complement and enrich computational models of creativity. This paper discusses six prospective lines of inquiry at the intersection of creativity and social robotics. It describes ways in which the field of social robotics may influence (and be influenced by) creativity research in psychology and speculates on how human-machine co-creation will affect the notions of both human and artificial creativity. By discussing potential research areas, the authors hope to outline an agenda for future collaboration between creativity scholars in psychology, social robotics, and computer science.
Chapter
Smart technologies are being adopted rapidly and successfully across various industries. The tourism industry is one of these evolving industries, benefiting from smart technological developments such as virtual and augmented reality, robotics, and the internet of things. Anticipatory, experiential, and reflective are the three main phases of the consumer behavior process in tourism, corresponding to the pre-travel, during-travel, and post-travel phases of a trip. Smart tourism technologies are being implemented to enhance the tourist experience in each of these phases of the journey. This chapter aims to highlight smart tourism technology applications in every phase of the consumer experience by presenting examples from the tourism industry.
Article
Virtual robots, including virtual animals, are expected to play a major role within affective and aesthetic interfaces, serious games, video instruction, and the personalization of educational instruction. Their actual impact, however, will very much depend on user perception of virtual characters as the uncanny valley hypothesis has shown that the design of virtual characters determines user experiences. In this article, we investigated whether the uncanny valley effect, which has already been found for the human-like appearance of virtual characters, can also be found for animal-like appearances. We conducted an online study (N = 163) in which six different animal designs were evaluated in terms of the following properties: familiarity, commonality, naturalness, attractiveness, interestingness, and animateness. The study participants differed in age (under 10–60 years) and origin (Europe, Asia, North America, and South America). For the evaluation of the results, we ranked the animal-likeness of the character using both expert opinion and participant judgments. Next to that, we investigated the effect of movement and morbidity. The results confirm the existence of the uncanny valley effect for virtual animals, especially with respect to familiarity and commonality, for both still and moving images. The effect was particularly pronounced for morbid images. For naturalness and attractiveness, the effect was only present in the expert-based ranking, but not in the participant-based ranking. No uncanny valley effect was detected for interestingness and animateness. This investigation revealed that the appearance of virtual animals directly affects user perception and thus, presumably, impacts user experience when used in applied settings.
Article
Despite the growing interest in artificial intelligence (AI) technology in the retail and service industry, consumer research on AI, especially virtual agents (VAs), has been underexplored. To fill the void, this study investigates how consumers build relationships with VAs through the lens of trust. Due to its unique characteristics (e.g., disembodied representation, interactive capabilities), VAs differ from other technologies in how trust is developed. Drawing from the “computers as social actors” (CASA) paradigm and the extended Technology Acceptance Model (TAM), we proposed and empirically tested the consumer-VA trust model, in which trust serves as a second-order construct with three first-order dimensions (i.e., competence, integrity, and self-efficacy). In addition, the relationships among consumer-VA trust, consumer perceptions, and behavioral intention were examined. Using a survey with 192 usable responses, our research indicated that consumer-VA trust positively impacts perceived usefulness and perceived enjoyment, which in turn increase consumers’ intention to continue use of VAs. This research provides theoretical implications on consumer adoption of VAs and practical implications for marketing strategies for this new technology.
Article
The topic of mental state attribution to robots has been approached by researchers from a variety of disciplines, including psychology, neuroscience, computer science, and philosophy. As a consequence, the empirical studies that have been conducted so far exhibit considerable diversity in terms of how the phenomenon is described and how it is approached from a theoretical and methodological standpoint. This literature review addresses the need for a shared scientific understanding of mental state attribution to robots by systematically and comprehensively collating conceptions, methods, and findings from 155 empirical studies across multiple disciplines. The findings of the review include that: (1) the terminology used to describe mental state attribution to robots is diverse but largely homogenous in usage; (2) the tendency to attribute mental states to robots is determined by factors such as the age and motivation of the human as well as the behavior, appearance, and identity of the robot; (3) there is a computer < robot < human pattern in the tendency to attribute mental states that appears to be moderated by the presence of socially interactive behavior; (4) there are conflicting findings in the empirical literature that stem from different sources of evidence, including self-report and non-verbal behavioral or neurological data. The review contributes toward more cumulative research on the topic and opens up for a transdisciplinary discussion about the nature of the phenomenon and what types of research methods are appropriate for investigation.
Article
In this paper we present a Multimodal Echoborg interface to explore the effect of different embodiments of an Embodied Conversational Agent (ECA) in an interaction. We compared an interaction where the ECA was embodied as a virtual human (VH) with one where it was embodied as an Echoborg, i.e., a person whose actions are covertly controlled by a dialogue system. The Echoborg in our study not only shadowed the speech output of the dialogue system but also its non-verbal actions. The interactions were structured as a debate between three participants on an ethical dilemma. First, we collected a corpus of debate sessions with three human debaters. This we used as a baseline to design and implement our ECAs. For the experiment, we designed two debate conditions. In one, the participant interacted with two ECAs (both embodied by virtual humans). In the other, the participant interacted with one ECA embodied by a VH and the other by an Echoborg. Our results show that a human embodiment of the ECA overall scores better on perceived social attributes of the ECA. In many other respects the Echoborg scores as poorly as the VH, except for copresence.
Article
Intelligent virtual assistants (IVAs) can help older adults with information queries. Examining older adults’ preferences for IVAs’ information presentation can help improve user experience and older adults’ acceptance of IVAs. This study investigated the effects of information modality and feedback on older adults’ social presence, attitudinal outcomes (i.e., perceived enjoyment and satisfaction), and acceptance (i.e., perceived ease of use, perceived usefulness, and behavioral intention to use). A total of 102 subjects were recruited to participate in two experiments. Results show that visual-auditory bimodality is superior to single visual modality and single auditory modality for older adults in terms of perceived social presence, attitudinal outcomes, and acceptance. Older adults perceived greater social presence, enjoyment, satisfaction, and acceptance with text feedback than without it. Social-oriented voice feedback improved older adults’ perceptions of social presence, enjoyment, satisfaction, and acceptance more than task-oriented voice feedback did. This study provides practical implications for the design of IVAs’ information presentation targeted at older adults.
Article
Full-text available
Telepresence robots are becoming popular in social interactions involving health care, elderly assistance, guidance, or office meetings. There are two types of human psychological experience to consider in robot-mediated interactions: (1) telepresence, in which a user develops a sense of being present near the remote interlocutor, and (2) co-presence, in which a user perceives the other person as being present locally with him or her. This work presents a literature review of developments supporting robotic social interactions, contributing to improving the sense of presence and co-presence via robot mediation. The survey aims to define social presence and co-presence, identify autonomous “user-adaptive systems” for social robots, and propose a taxonomy of “co-presence” mechanisms. It presents an overview of social robotics systems, application areas, and technical methods, and provides directions for telepresence and co-presence robot design given current and future challenges. Finally, we suggest evaluation guidelines for these systems, with face-to-face interaction as the reference.
Chapter
In lectures, lecturers need to control the attention of learners to make them interested and to maintain their attention while monitoring their situation. Our previous work suggests that a robot lecturer that properly performs non-verbal lecture behavior achieves better learner engagement and has attention-control advantages over human lecturers. However, it is difficult to maintain learners’ interest in a longer lecture, even though attention control is possible in shorter lectures, and learners may consequently lose track of the lecture content. In this work, we have developed an interactive robot lecture system in which the robot interacts with learners by means of non-verbal behavior to regain their attention when they lose track of the lecture. To attract their attention, the system first estimates their states of understanding and attention from their posture based on a presentation scenario, which represents the lecture sequence. Next, the system attempts to keep the learners’ attention with interactive lecture behavior by re-constructing the presentation scenario according to their estimated states. The reconstruction is done with the lecture behavior model designed in this work. The interactive behavior is implemented on NAO, a humanoid robot, by combining pause, repeat, and skip behavior with paralanguage. We conducted a case study with 10 participants to evaluate impressions of the interactive behavior generated by our system. The results suggest that it is effective in keeping attention.
Chapter
The subjective experiences and satisfaction of using technology to collaborate remotely may differ due to individual differences in personal characteristics. The present study aims to investigate the influence of empathy tendency on user experience. Twelve groups of three participants completed a decision-making task in a virtual environment. The results revealed a significant correlation between personal traits (i.e., empathy and the big five personality traits), user experience (i.e., social presence), and satisfaction. The level of cognitive empathy has a positive effect on the feeling of social presence, social immersion, and outcome satisfaction in the virtual environment, but is not associated with media satisfaction. The findings of this study suggest that the cognitive ability of empathy, namely the ability to identify with and understand the views of others, may increase one’s experience and satisfaction in remote collaboration. This study provides an empirical exploration of team interactions in virtual environments and advances user research by identifying the relationship between user traits (empathy), user experience, and satisfaction.
Chapter
More and more companies have started to use nonhuman agents for employment interviews, making the selection process easier, faster, and unbiased. To assess the effectiveness of this approach, in this paper we systematically analyzed, reviewed, and compared human interaction with a social robot, a digital human, and another human under the same scenario simulating the first phase of a job interview. Our purpose is to understand human reactions and, from them, the human needs that arise in human–nonhuman interaction. We also explored how the appearance and the physical presence of an agent can affect human perception, expectations, and emotions. To support our research, we used time-related and acoustic features of audio data, as well as psychometric data. Statistically significant differences were found for almost all extracted features, especially for intensity, speech rate, frequency, and response time. We also developed a machine learning model that can recognize the type of interlocutor a human is interacting with. Although the human interviewer was generally preferred, the interest level was higher and the shyness level was lower during human-robot interaction. Thus, we believe that, following some improvements, social robots, compared to digital humans, have the potential to act effectively as job interviewers.
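As a rough illustration of the classification step described in the abstract above, the sketch below trains a generic classifier on synthetic time-related and acoustic features (intensity, speech rate, mean pitch, response time). The feature set, model choice, and data are assumptions for illustration only and are not taken from the paper.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data: with random labels, accuracy hovers near chance;
# in the actual study, features would be extracted from recorded interview audio.
rng = np.random.default_rng(0)
# Columns: intensity (dB), speech rate (syll/s), mean F0 (Hz), response time (s)
X = rng.normal(loc=[60.0, 4.0, 180.0, 1.2], scale=[5.0, 0.8, 30.0, 0.4], size=(90, 4))
y = rng.integers(0, 3, size=90)  # 0 = human, 1 = social robot, 2 = digital human

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")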
Chapter
This paper proposes an evaluation of the local and remote interaction of a Pepper robot and a human presenter answering questions from high-school students at the Universidad de Costa Rica’s vocational fair. The interactions were presented in two conditions: 1) a group interacted locally in the same room; 2) a group interacted remotely via an online meeting. Within a sample of 18 Costa Rican high-school students, this study assessed criteria such as perceived enjoyment, intention to use, perceived sociability, trust, intelligence, animacy, anthropomorphism, and sympathy, utilizing instruments such as the Unified Theory of Acceptance and Use of Technology (UTAUT) and the Godspeed Questionnaire (GSQ). These instruments identified significant differences between the two scenarios in perceived sociability and anthropomorphism, suggesting relevant differences in how both the interaction with the robot and the robot itself were perceived in each case.
Article
In this paper, we examine the process of designing robot-performed iconic hand gestures in the context of a long-term study into second language tutoring with children of approximately 5 years old. We explore four factors that may relate to their efficacy in supporting second language tutoring: the age of participating children; differences between gestures for various semantic categories, e.g. measurement words, such as small, versus counting words, such as five; the quality (comprehensibility) of the robot’s gestures; and spontaneous reenactment or imitation of the gestures. Age was found to relate to children’s learning outcomes, with older children benefiting more from the robot’s iconic gestures than younger children, particularly for measurement words. We found no conclusive evidence that the quality of the gestures or spontaneous reenactment of said gestures related to learning outcomes. We further propose several improvements to the process of designing and implementing a robot’s iconic gesture repertoire.
Article
Purpose The rapid progress of information and communication technologies enables business creators to access a wide variety of tools. These tools facilitate electronic exchanges and interactions with customers and companies. The purpose of this study is to test and compare the effectiveness of two virtual reality technologies, the avatar and the anthropomorphic virtual agent, on consumers’ psychological states and perceived realism. Design/methodology/approach An experimental survey was conducted to measure the potential superiority of the anthropomorphic virtual agent over the avatar and to identify the determining characteristics of the anthropomorphic virtual agent’s effectiveness. An experimental website was designed for the purpose of the study. A total of 1,262 internet users participated in the experiment. Findings Results confirm the superiority of the anthropomorphic virtual agent over the avatar in affecting consumers’ flow state, telepresence experience and perceived realism. These findings can be explained by the humanized characteristics of this type of agent (i.e. verbal and nonverbal language). Originality/value The originality of this research lies in the study of different forms of social interactivity. The latter has been little studied and has essentially been treated from a dichotomous perspective (presence/absence of a virtual agent). New trends in digital marketing challenge entrepreneurs to be proactive and to anticipate customers’ behavior in their online stores. That is why virtual reality technologies, namely anthropomorphic agents, can be considered a relevant tool for engaging in efficient inbound marketing strategies. Today, the development of intelligent technologies encourages entrepreneurs operating online to design more interactive, realistic and humanized virtual merchant environments that are better adapted to the realities of new consumption trends and environments.
Article
Full-text available
This paper presents a study that compared elder users’ enjoyment of a game of trivia in three conditions: participants playing the game with a laptop PC vs. a robot vs. a virtual agent. Statistical analysis did not show any significant difference among the three devices in user enjoyment, while qualitative analysis revealed a preference for the laptop PC condition, followed by the robot and the virtual agent. The elderly participants concentrated on task performance rather than on the interaction with the systems. They preferred the laptop PC condition mainly because there were fewer interface elements distracting them from performing the task proposed by the game. Further, the robot was preferred to the virtual agent because of its physical presence. Some issues of the experiment design are raised and directions for future research are suggested to gain more insight into the effects of agent embodiment on human-agent interaction.
Conference Paper
Full-text available
Nowadays, robots and virtual agents are becoming companions for humans, and they seem to have distinct roles in Human-Machine Interaction. Thus, when developing a new application, it is worth asking which of the two is better suited. In the Robadom project, a homecare robot has to assist elderly people at home; the robot provides a cognitive stimulation game. We developed StimCards, a cognitive card-based game. The principal question is: is the robot the best interlocutor in this context? This paper presents an evaluation of StimCards. Participants were children, because French elderly people are reluctant to interact with robots and because children are the hypothetical future users.
Conference Paper
Full-text available
In this paper we present StimCards: an interactive game for cognitive training exercises. To increase the impact of this game we experimented with four kinds of interfaces: a basic computer, an embodied conversational agent and a robot with two different appearances. The report of these experiments shows that the robot is the most effective source of positive feedback for the cognitive game.
Article
Full-text available
People's physical embodiment and presence increase their salience and importance. We predicted people would anthropomorphize an embodied humanoid robot more than a robot-like agent, and a collocated more than a remote robot. A robot or robot-like agent interviewed participants about their health. Participants were either present with the robot/agent, or interacted remotely with the robot/agent projected life-size on a screen. Participants were more engaged, disclosed less undesirable behavior, and forgot more with the robot versus the agent. They ate less and anthropomorphized most with the collocated robot. Participants interacted socially and attempted conversational grounding with the robot/agent though aware it was a machine. Basic questions remain about how people resolve the ambiguity of interacting with a humanlike nonhuman.
Article
Full-text available
Androids have the potential to reinvigorate the social and cognitive sciences — both by serving as an experimental apparatus for evaluating hypotheses about human interaction and as a testing ground for cognitive models. Unlike other robotics techniques, androids can illuminate how interaction draws on human appearance and behavior. When cognitive models are implemented in androids, feelings associated with the uncanny valley provide heightened feedback for diagnosing flaws in the models during human–android interaction. This enables a detailed examination of real-time factors in human social interaction. Not only can android science inform us about human beings, but it can also contribute to a methodology for creating interactive robots and a set of principles for their design. By doing this, android science can help us devise a new kind of interface. Since our expressive bodies and perceptual and motor systems have co-evolved to work together, it seems natural for robot engineers to exploit this by building androids, rather than hoping for people to gradually adapt themselves to mechanical-looking robots. In the longer term, androids may prove to be a useful tool for understanding social learning, interpersonal relationships, and how human brains and bodies turn themselves into persons (MacDorman & Cowley, 2006). Of course, there are many ways to investigate human perception and interaction and to explore the potential for interactive robotics. Android science is only one of them. Although the uncanny valley plays a special role in android science, the nature of the phenomenon should rightly be investigated by other approaches too.
Article
Full-text available
There are a number of psychological phenomena in which dramatic emotional responses are evoked by seemingly innocuous perceptual stimuli. A well known example is the 'uncanny valley' effect whereby a near human-looking artifact can trigger feelings of eeriness and repulsion. Although such phenomena are reasonably well documented, there is no quantitative explanation for the findings and no mathematical model that is capable of predicting such behavior. Here I show (using a Bayesian model of categorical perception) that differential perceptual distortion arising from stimuli containing conflicting cues can give rise to a perceptual tension at category boundaries that could account for these phenomena. The model is not only the first quantitative explanation of the uncanny valley effect, but it may also provide a mathematical explanation for a range of social situations in which conflicting cues give rise to negative, fearful or even violent reactions.
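To make the shape of this account concrete, the minimal sketch below (my own illustration, not the model published in the paper) treats "robot" and "human" as two Gaussian categories over a one-dimensional human-likeness cue and takes posterior uncertainty as a stand-in for the perceptual tension described; all distributions and parameters are assumptions.

import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

human_likeness = np.linspace(0.0, 1.0, 101)             # stimulus continuum
likelihood_robot = gaussian(human_likeness, 0.2, 0.15)  # "robot" category
likelihood_human = gaussian(human_likeness, 0.8, 0.15)  # "human" category

prior_robot, prior_human = 0.5, 0.5
evidence = likelihood_robot * prior_robot + likelihood_human * prior_human
post_human = likelihood_human * prior_human / evidence
post_robot = 1.0 - post_human

# Tension proxy: entropy of the categorical posterior, which peaks where the
# conflicting cues leave the stimulus most ambiguous between the two categories.
tension = -(post_human * np.log2(post_human + 1e-12)
            + post_robot * np.log2(post_robot + 1e-12))

boundary = human_likeness[np.argmax(tension)]
print(f"Tension peaks at human-likeness ≈ {boundary:.2f} (the 'valley' region)")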
Article
Full-text available
The development of robots that closely resemble human beings can contribute to cognitive research. An android provides an experimental apparatus that has the potential to be controlled more precisely than any human actor. However, preliminary results indicate that only very humanlike devices can elicit the broad range of responses that people typically direct toward each other. Conversely, to build androids capable of emulating human behavior, it is necessary to investigate social activity in detail and to develop models of the cognitive mechanisms that support this activity. Because of the reciprocal relationship between android development and the exploration of social mechanisms, it is necessary to establish the field of android science. Androids could be a key testing ground for social, cognitive, and neuroscientific theories as well as a platform for their eventual unification. Nevertheless, subtle flaws in appearance and movement can be more apparent and eerie in very humanlike robots. This uncanny phenomenon may be symptomatic of entities that elicit our model of human other but do not measure up to it. If so, very humanlike robots may provide the best means of pinpointing what kinds of behavior are perceived as human, since deviations from human norms are more obvious in them than in more mechanical-looking robots. In pursuing this line of inquiry, it is essential to identify the mechanisms involved in evaluations of human likeness. One hypothesis is that, by playing on an innate fear of death, an uncanny robot elicits culturally-supported defense responses for coping with death’s inevitability. An experiment, which borrows from methods used in terror management research, was performed to test this hypothesis.
Chapter
Full-text available
The functions of social dialogue between people in the context of performing a task are discussed, as well as approaches to modelling such dialogue in embodied conversational agents. A study of an agent’s use of social dialogue is presented, comparing embodied interactions with similar interactions conducted over the phone, assessing the impact these media have on a wide range of behavioural, task and subjective measures. Results indicate that subjects’ perceptions of the agent are sensitive to both interaction style (social vs. task-only dialogue) and medium.
Article
Full-text available
Two experiments were conducted to investigate the relative effectiveness of physical embodiment on the social presence of social robots. The results of Experiment 1 show positive effects of the physical embodiment of social robots (PESR) on the feeling of social presence, the general evaluation of social robots, the assessment of public opinion of social robots, and the evaluation of interaction with social robots. The result of a path analysis also provides evidence of a mediating effect of social presence on people's general evaluation of social robots. However, the results of Experiment 2 show that PESR without touch-input capability causes negative effects. Implications for the relative effectiveness of PESR, the importance of tactile communication in human-robot interaction, and the market potential of social robots in relation to loneliness are discussed.
Article
Full-text available
In this paper, we describe an evaluation of the impact of embodiment, the effect of different kinds of embodiment, and the benefits of different aspects of embodiment, on direction-giving systems. We compared a robot, embodied conversational agent (ECA), and GPS giving directions, when these systems used speaker-perspective gestures, listener-perspective gestures and no gestures. Results demonstrated that, while there was no difference in direction-giving performance between the robot and the ECA, and little difference in participants' perceptions, there was a considerable effect of the type of gesture employed, and several interesting interactions between type of embodiment and aspects of embodiment.
Article
Full-text available
In this paper we discuss Augmented Reality (AR) displays in a general sense, within the context of a Reality-Virtuality (RV) continuum, encompassing a large class of "Mixed Reality" (MR) displays, which also includes Augmented Virtuality (AV). MR displays are defined by means of seven examples of existing display concepts in which real objects and virtual objects are juxtaposed. Essential factors which distinguish different Mixed Reality display systems from each other are presented, first by means of a table in which the nature of the underlying scene, how it is viewed, and the observer's reference to it are compared, and then by means of a three-dimensional taxonomic framework, comprising: Extent of World Knowledge (EWK), Reproduction Fidelity (RF) and Extent of Presence Metaphor (EPM). A principal objective of the taxonomy is to clarify terminology issues and to provide a framework for classifying research across different disciplines.
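As a loose illustration (my own encoding, not part of the taxonomy itself), the sketch below shows how a given display could be recorded as a point along the three dimensions named above; the example displays and the 0-1 scores are assumptions chosen only to show the structure.

from dataclasses import dataclass

@dataclass
class MRDisplay:
    # Hypothetical 0-1 scores along the three taxonomy dimensions.
    name: str
    ewk: float  # Extent of World Knowledge
    rf: float   # Reproduction Fidelity
    epm: float  # Extent of Presence Metaphor

displays = [
    MRDisplay("monitor-based AR overlay", ewk=0.3, rf=0.5, epm=0.2),
    MRDisplay("optical see-through HMD", ewk=0.5, rf=0.6, epm=0.7),
    MRDisplay("largely virtual AV environment", ewk=0.8, rf=0.8, epm=0.9),
]

for d in displays:
    print(f"{d.name}: EWK={d.ewk}, RF={d.rf}, EPM={d.epm}")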
Conference Paper
Full-text available
To investigate differences in impressions of and behaviors toward anthropomorphized artifacts between different generations, a psychological experiment was conducted with a between-subjects design of the elderly vs. university students and a small-sized real humanoid robot vs. a virtual CG robot on a computer display. The results showed that 1) more elderly subjects complied with the real robot than the student subjects, 2) the elderly subjects felt more positive impressions of both robots than the student subjects, 3) the student subjects felt less attachment to the virtual robot than to the real robot, and 4) the student subjects felt less attachment to the virtual robot in comparison with the elderly subjects. The paper then discusses implications for assistive robots for the elderly in domestic settings.
Conference Paper
Full-text available
In this work, we further test the hypothesis that physical embodiment has a measurable effect on performance and impression of social interactions. Support for this hypothesis would suggest fundamental differences between virtual agents and robots from a social standpoint and would have significant implications for human-robot interaction. We have refined our task-based metrics to give a measurement, not only of the participant's immediate impressions of a coach for a task, but also of the participant's performance in a given task. We measure task performance and participants' impression of a robot's social abilities in a structured task based on the Towers of Hanoi puzzle. Our experiment compares aspects of embodiment by evaluating: (1) the difference between a physical robot and a simulated one; and (2) the effect of physical presence through a co-located robot versus a remote, tele-present robot. With a participant pool (n=21) of roboticists and non-roboticists, we were able to show that participants felt that an embodied robot was more appealing and perceptive of the world than non-embodied robots. A larger pool of participants (n=32) also demonstrated that the embodied robot was seen as most helpful, watchful, and enjoyable when compared to a remote tele-present robot and a simulated robot.
Conference Paper
Full-text available
Autonomous robots are agents with physical bodies that share our environment. In this work, we test the hypothesis that physical embodiment has a measurable effect on performance and perception of social interactions. Support of this hypothesis would suggest fundamental differences between virtual agents and robots from a social standpoint and have significant implications for human-robot interaction. We measure task performance and perception of a robot's social abilities in a structured but open-ended task based on the Towers of Hanoi puzzle. Our experiment compares aspects of embodiment by evaluating: (1) the difference between a physical robot and a simulated one; and (2) the effect of physical presence through a co-located robot versus a remote tele-present robot. We present data from a pilot study with 12 subjects showing interesting differences in the perceived attention to the task of the remote physical robot and the simulated agent, as well as in task enjoyment.
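For readers unfamiliar with the task used in the two studies above, the sketch below shows the structure of the Towers of Hanoi puzzle and the 2^n - 1 optimal-move benchmark against which a participant's move count can be scored; the efficiency metric shown is a hypothetical illustration, not the papers' published measure.

def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Return the optimal move sequence for n disks (length 2**n - 1)."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((source, target))
        return moves
    hanoi(n - 1, source, spare, target, moves)   # shift n-1 disks out of the way
    moves.append((source, target))               # move the largest disk
    hanoi(n - 1, spare, target, source, moves)   # stack the n-1 disks on top
    return moves

optimal = hanoi(4)
print(f"Optimal solution for 4 disks: {len(optimal)} moves (2**4 - 1 = {2**4 - 1})")

# One simple task-performance metric: efficiency relative to the optimal solution.
participant_moves = 21  # hypothetical count observed during a coached session
print(f"Efficiency: {len(optimal) / participant_moves:.2f}")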
Article
Full-text available
This paper discusses the issues pertinent to the development of a meaningful social interaction between robots and people through employing degrees of anthropomorphism in a robot’s physical design and behaviour. As robots enter our social space, we will inherently project/impose our interpretation on their actions similar to the techniques we employ in rationalising, for example, a pet’s behaviour. This propensity to anthropomorphise is not seen as a hindrance to social robot development, but rather a useful mechanism that requires judicious examination and employment in social robot research.
Conference Paper
Full-text available
We have already confirmed that artificial subtle expressions (ASEs) from a robot can accurately and intuitively convey its internal states to participants [10]. In this paper, we experimentally investigated whether ASEs from an on-screen artifact could also convey the artifact's internal states to participants, in order to confirm whether ASEs can be interpreted consistently regardless of the type of artifact. The results clearly showed that the ASEs expressed by an on-screen artifact succeeded in accurately and intuitively conveying the artifact's internal states to the participants. We therefore confirmed that the interpretation of ASEs is consistent regardless of the type of artifact.
Conference Paper
Full-text available
This study examines the roles of gender and visual realism in the persuasiveness of speakers. Participants were presented with a persuasive passage delivered by a male or female person, virtual human, or virtual character. They were then assessed on attitude change and their ratings of the argument, message, and speaker. The results indicated that the virtual speakers were as effective at changing attitudes as real people. Male participants were more persuaded when the speaker was female than when the speaker was male, whereas female participants were more persuaded when the speaker was male than when the speaker was female. Cross gender interactions occurred across all conditions, suggesting that some of the gender stereotypes that occur with people may carry over to interaction with virtual characters. Ratings of the perceptions of the speaker were more favorable for virtual speakers than for human speakers. We discuss the application of these findings in the design of persuasive human computer interfaces.
Conference Paper
Full-text available
Physical proximity and appearance guide people to interact with each other in different ways [1,6]. However, in Video-Mediated Communications (VMC), these are distorted in various ways. Monitors and camera zooms make people look close or far, monitors and camera angles can be high or low, making people look tall or short, and volume can be loud or soft, making people sound assertive or submissive, all independent of the true physical characteristics or intentions of the participants. Here we test the effect of a person's apparent height on how dominant they are in a group decision-making task. We found that the artificially tall people had more influence on the group decision than the artificially short people.
Article
We present a socially assistive robot (SAR) system designed to engage elderly users in physical exercise. We discuss the system approach, design methodology, and implementation details, which incorporate insights from psychology research on intrinsic motivation, and we present five clear design principles for SAR-based therapeutic interventions. We then describe the system evaluation, consisting of a multi-session user study with older adults (n = 33), to evaluate the effectiveness of our SAR exercise system and to investigate the role of embodiment by comparing user evaluations of similar physically and virtually embodied coaches. The results validate the system approach and effectiveness at motivating physical exercise in older adults according to a variety of user performance and outcomes measures. The results also show a clear preference by older adults for the physically embodied robot coach over the virtual coach in terms of enjoyableness, helpfulness, and social attraction, among other factors.
Article
Many researchers use Wizard of Oz (WoZ) as an experimental technique, but there are methodological concerns over its use, and no comprehensive criteria on how to best employ it. We systematically review 54 WoZ experiments published in the primary HRI publication venues from 2001 to 2011. Using criteria proposed by Fraser and Gilbert (1991), Green et al. (2004), Steinfeld et al. (2009), and Kelley (1984), we analyzed how researchers conducted HRI WoZ experiments. Researchers mainly used WoZ for verbal (72.2%) and non-verbal (48.1%) processing. Most constrained wizard production (90.7%), but few constrained wizard recognition (11%). Few reported measuring wizard error (3.7%), and few reported pre-experiment wizard training (5.4%). Few reported using WoZ in an iterative manner (24.1%). Based on these results we propose new reporting guidelines to aid future research.
Conference Paper
We discuss current approaches to the development of natural language dialogue systems, and claim that they do not sufficiently consider the unique qualities of man-machine interaction as distinct from general human discourse. We conclude that empirical studies of this unique communication situation are required for the development of user-friendly interactive systems. One way of achieving this is through the use of so-called Wizard of Oz studies. We describe our work in this area. The focus is on the practical execution of the studies and the methodological conclusions that we have drawn on the basis of our experience. While the focus is on natural language interfaces, the methods used and the conclusions drawn from the results obtained are of relevance also to other kinds of intelligent interfaces.
Conference Paper
This research investigates proper movement correlation as well as the overall perception of human subjects' interaction with a simulated agent and an embodied agent in a physical therapeutic scenario. Using computer vision techniques coupled with the Microsoft Kinect to quantify reaching kinematics, correlation was assessed by aligning movements with a Vicon motion capture system and determining how well the specific exercises were mimicked. The results indicate that this approach is a viable alternative to motion capture systems for assessing certain movements during therapy. The results also indicate that there is some dependence on the use of an embodied agent as opposed to a simulated agent when assessing adherence.
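The core comparison described above amounts to correlating a joint trajectory estimated by the Kinect against the same trajectory recorded by the Vicon system. The sketch below illustrates that comparison on synthetic data; the signal, sampling rate, and metrics are assumptions for illustration, not the paper's actual pipeline.

import numpy as np

t = np.linspace(0.0, 2.0, 200)                  # a 2-second reach sampled at 100 Hz
vicon_wrist_x = 0.3 * np.sin(np.pi * t / 2.0)   # "ground-truth" wrist displacement (m)
kinect_wrist_x = vicon_wrist_x + np.random.default_rng(1).normal(0.0, 0.01, t.size)

# Agreement between the low-cost sensor and the motion-capture reference.
r = np.corrcoef(vicon_wrist_x, kinect_wrist_x)[0, 1]
rmse = np.sqrt(np.mean((vicon_wrist_x - kinect_wrist_x) ** 2))
print(f"Pearson r = {r:.3f}, RMSE = {rmse * 100:.1f} cm")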
Conference Paper
Children with a chronic disease like diabetes need to learn how to self-manage their disease, and knowledge about their condition is indispensable to reach this goal. Within the European project ALIZ-E, a robot companion is being developed that should, among other attributes, have the capability to educate children. In this paper, a virtual agent on a screen is compared with a physical robot on the aspects of performance (learning), attention and motivation. The experiment consisted of two sessions, one week apart, in which children played a quiz consisting of health-related questions with both the robot and the virtual agent. It was found that performance and motivation were not affected by the embodiment, but the robot did attract more attention and, when forced to choose, the children had a preference for the robot.
Article
Cyberspace technology often grants us (or others) control over our self-representations. At the click of a button, one can alter an avatar's appearance and behavior. Indeed, in virtual reality we can often appear to others as ideal in stature and weight, as whatever we want in terms of age and gender, and exhibit perfect form while surfing a forty-foot wave. Centuries of philosophical discussion and decades of social science research have explored the concept of “the self”, but in the digital age we are encountering identity-bending only imagined by science fiction authors. In this talk, I explore a research program on what William Gibson referred to as “the infinite plasticity” of digital identity. In particular, I address two research areas. The first, called The Proteus Effect, explores the consequences of choosing avatars whose appearance differs from our own. Over forty years ago, social psychologists demonstrated self-perception effects, for example that wearing a black uniform causes more aggressive behavior. Similarly, as we choose our avatars online, do our avatars change us in turn? A series of studies explore how putting people in avatars of different attractiveness, height, and age alters not only behavior online but also subsequent actions in the physical world. The second area examines the consequences of choosing avatars whose behavior differs from our own, specifically the phenomenon of seeing oneself in the third person performing an action one has never physically performed. Once a three-dimensional model resembling a specific person has been constructed, that model can be animated to perform any action fathomable to programmers. A series of studies examine how watching one's own self behave in novel manners affects memory, health behavior, and persuasion. I discuss related communication and psychological theories, as well as implications for citizens living in the digital age.
Article
Studies comparing physically embodied robots with virtually embodied screen characters (e.g., Powers et al., 2007; Jung and Lee, 2004) have produced dissimilar findings with respect to subjective (users' evaluations) as well as objective (e.g., users' task performance) measurements. The comparability of these results is mainly impeded by the use of different robots, a variety of virtual embodiments (video recording, computer simulation, animated characters, etc.) and different interaction scenarios. To overcome this problem, an experimental study was conducted in which the embodiment of an artificial entity as well as the type of interaction were varied systematically, using a 2 × 2 between-subjects design (N = 83). Participants interacted with either a robot or a virtual representation of this robot (on a screen) in a task-oriented or a persuasive–conversational scenario. The results revealed that participants perceived the robot as more competent than the virtual character in the task-oriented scenario, but the opposite was true for the persuasive–conversational scenario. Furthermore, participants in the task-oriented scenario felt better after the interaction than participants who had a persuasive–conversational interaction with the artificial entity, regardless of its embodiment. No statistically significant differences between the experimental conditions emerged with respect to objective measures (persuasion and task performance). Various explanations for these findings are discussed and implications for the application of robots and virtual characters are derived.
Article
In this paper, we investigate the role of the physical embodiment of a robot and its degrees of freedom in HRI. Both factors have been suggested to be relevant in definitions of embodiment, and so far we do not understand their effects on the way people interact with robots very well. Linguistic analyses of verbal interactions with robots differing with respect to physical embodiment and degrees of freedom provide a useful methodology for investigating factors conditioning human-robot interaction. Results show that both physical embodiment and degrees of freedom influence interaction, and that the effect of physical embodiment is located in the interpersonal domain, concerning the extent to which the robot is perceived as an interaction partner, whereas degrees of freedom influence the way users project the suitability of the robot for the current task.
Article
Embodiment has become an important concept in many areas of cognitive science. There are, however, very different notions of exactly what embodiment is and what kind of body is required for what type of embodied cognition. Hence, while many nowadays would agree that humans are embodied cognizers, there is much less agreement on what kind of artifact could be considered embodied. This paper identifies and contrasts six different notions of embodiment which can roughly be characterized as (1) structural coupling between agent and environment, (2) historical embodiment as the result of a history of structural coupling, (3) physical embodiment, (4) organismoid embodiment, i.e. organism-like bodily form (e.g., humanoid robots), (5) organismic embodiment of autopoietic, living systems, and (6) social embodiment.
Article
Neural mechanisms of higher-order cognitive processes are hard to study using nonhuman primates. Among these mechanisms, the mental rotation task is one of the best studied. When subjects decide whether two shapes presented at various orientations are identical or mirror images, their reaction time increases with the angle of rotation between the shapes. Recent magnetic resonance imaging (MRI) and magnetoencephalographic (MEG) studies revealed that activity in the premotor area and/or the parietal association area is related to the angular difference between the two objects. However, there are two kinds of difficulty in the mental rotation of three-dimensional objects: one based on the angular difference and the other based on the rotation method itself. Keeping both in mind, this paper evaluates brain activity during a mental rotation task. The performance of the subjects was sufficient, as judged by response times measured prior to the MEG experiment. Results reveal activity in the right occipital area contralateral to the left visual stimulus field in eight out of 12 cases in the range of 150-200 ms, whereas no such activity was found on the left. These results are consistent with the contralateral dominance of the anatomical connections. fMRI research has shown activity in both parietal association areas during mental rotation of three-dimensional objects. In this study, activity in these areas was estimated for both 2-D and 3-D rotation. Moreover, the number of subjects in whom activity of the posterior part of both parietal association areas was estimated increased in 3-D rotation compared with 2-D rotation, implying that these activities are related to the difficulty of the rotation method itself. In addition, activity was observed in the posterior part of the parietal association area ipsilateral to the premotor area in two out of four cases in 2-D rotation and in three out of three cases in 3-D rotation. 3-D rotation requires subjects to imagine the invisible parts of the visual stimuli in order to judge whether the pair is identical, and this requirement is believed to activate the fronto-parietal circuit used in visuo-motor tasks.
Article
This paper reviews “socially interactive robots”: robots for which social human–robot interaction is important. We begin by discussing the context for socially interactive robots, emphasizing the relationship to other research fields and the different forms of “social robots”. We then present a taxonomy of design methods and system components used to build socially interactive robots. Finally, we describe the impact of these robots on humans and discuss open issues. An expanded version of this paper, which contains a survey and taxonomy of current applications, is available as a technical report [T. Fong, I. Nourbakhsh, K. Dautenhahn, A survey of socially interactive robots: concepts, design and applications, Technical Report No. CMU-RI-TR-02-29, Robotics Institute, Carnegie Mellon University, 2002].
Article
Understanding how people perceive robot gestures will aid the design of robots capable of social interaction with humans. We examined the generation and perception of a restricted form of gesture in a robot capable of simple head and arm movement, referring to point-light animation and video experiments in human motion to derive our hypotheses. Four studies were conducted to look at the effects of situational context, gesture complexity, emotional valence and author expertise. In Study 1, four participants created gestures with corresponding emotions based on 12 scenarios provided. The resulting gestures were judged by 12 participants in a second study. Participants’ recognition of emotion was better than chance and improved when situational context was provided. Ratings of lifelikeness were found to be related to the number of arm movements (but not head movements) in a gesture. In Study 3, five novices and five puppeteers created gestures conveying Ekman’s six basic emotions which were shown to 12 Study 4 participants. Puppetry experience improved identification rates only for the emotions of fear and disgust, possibly because of limitations with the robot’s movement. The results demonstrate the communication of emotion by a social robot capable of only simple head and arm movement.
Article
Mental rotation is one of the most widely used tasks for the study of higher human cognitive processes. However, the detailed mechanisms in the brain related to mental rotation are still unknown. In the present study, magnetoencephalographic (MEG) activity related to the mental rotation of three-dimensional objects was measured by a whole-head 306-channel SQUID system. Differences in functional localization in the brain between two- and three-dimensional (2-D and 3-D) rotation tasks of 3-D objects were compared in order to investigate the cognitive processes involved in visualizing the hidden parts of 3-D objects, because only 3-D rotation requires visualization of those hidden parts. Activity in the parietal association area was increased during the 3-D rotation task compared with the 2-D rotation task. This result indicates that the parietal association area plays an important role in the visualization of hidden parts of visual stimuli.
Article
Daily health self-management, such as the harmonization of food, exercise and medication, is a major problem for a large group of older adults with obesity or diabetes. Computer-based personal assistance can help older adults behave healthily by persuading and guiding them. For effective persuasion, the assistant should express social behaviors (e.g., turn taking, emotional expressions) to be trustworthy and to show empathy. From the motivational interviewing method and the literature on synthetic assistants, we derived a set of social behaviors and implemented a subset in a physical character, a virtual character and a text interface. The first behavior type concerns conversing with high-level dialogue (semantics, intentions), which could be implemented in all three assistants. The other behavior types could only be implemented in the characters: showing natural cues (e.g., gaze, posture), expressing emotions (e.g., a compassionate face), and accommodating social conversations (e.g., turn taking). In an experiment, 24 older adults (45–65) interacted with the text interface and one of the characters, following a “one-week diabetes scenario”. They experienced the virtual and physical character as more empathic and trustworthy than the text-based assistant, and expressed more conversational behavior with the characters. However, it seems that the preference for interacting with the character or the text interface was influenced by the conscientiousness of the participant; more conscientious people liked the text interface better. Older adults responded more negatively to characters that lacked the social behaviors than to the text interface. Some differences between the virtual and physical character probably occurred due to the specific constraints of the physical character.