Chapter

Emotions in (Human-Robot) Relation. Structuring Hybrid Social Ecologies


Abstract

This essay tackles the core question of machine emotion research—“Can machines have emotions?”—with regard to “social robots”, the new class of machines designed to function as “social partners” for humans. Our aim, however, is not to provide an answer to that question. Rather, we argue that the “robotics of emotion” moves us to ask a different question—“Can robots establish meaningful affective coordination with human partners?” Developing a series of arguments relevant to the theory of emotion, the philosophy of AI and the epistemology of synthetic models, we argue that the answer is positive, and that it lays the grounds for an innovative ethical approach to emotional robots. This ethical project, which elsewhere we introduced as “synthetic ethics”, rejects the widespread ethical condemnation of emotional robots as “cheating” technology. Synthetic ethics focuses on the sustainability of emerging mixed human–robot social ecologies. It therefore rejects a purely negative ethical approach to social robotics, and promotes a critical case-by-case inquiry into the type of human flourishing that can result from human–robot affective coordination.

Keywords: Affective coordination, (Machine) emotion, Philosophy of AI, Social robotics, Synthetic method


Article
Full-text available
In these remarks, we highlight the links connecting the concepts of reciprocity and reaction and, most importantly, how this comparison might influence renewed theorizations of social interaction. As befits such an attempt, Marcel Mauss’ framework is our starting point: in particular, his emphasis on mutuality’s indeterminacy aptly illustrates the inherent openness toward others’ responses (i.e., toward their topicality, potential disloyalty or misunderstanding, emotionality, etc.), envisioned here as the actual initiator of “the socius”. We then rely on Simondon’s concept of transindividuality to further dismantle any linear, reciprocal image of symmetry, by analysing concrete manifestations of reciprocity that exist in and through the practices of coexistence — we posit that reactions quintessentially represent these operational vectors. The conclusion looks into human-AI interaction sequences, to exemplify how reciprocity and reaction can show all their essential uncertainties and potentials.
Article
Linguistic competence is, among other things, the cognitive ability to use words appropriately. On this notion, being a competent language user cannot be merely simulated, because pretending to use words appropriately already is using words appropriately. Since current AI can already use words appropriately, it may be that AI already has linguistic competence instead of merely simulating it. However, in this article I argue that there is at least one domain of linguistic competence that can be merely simulated: grasping lexical effects. Since grasping lexical effects presupposes an ability which AI lacks, namely feeling emotions, I argue that AI can grasp only the cognitive part of words but not their lexical effects. AI can only simulate grasping lexical effects. Hence, lexical effects are a domain of linguistic competence that AI cannot attain.
Article
Full-text available
Social robotics entertains a particular relationship with anthropomorphism, which it sees neither as a cognitive error nor as a sign of immaturity. Rather, it considers that this common human tendency, which is hypothesized to have evolved because it favored cooperation among early humans, can be used today to facilitate social interactions between humans and a new type of cooperative and interactive agents – social robots. This approach leads social robotics to focus research on the engineering of robots that activate anthropomorphic projections in users. The objective is to give robots “social presence” and “social behaviors” that are sufficiently credible for human users to engage in comfortable and potentially long-lasting relations with these machines. This choice of ‘applied anthropomorphism’ as a research methodology exposes the artifacts produced by social robotics to ethical condemnation: social robots are judged to be a “cheating” technology, as they generate in users the illusion of reciprocal social and affective relations. This article takes a position in this debate, not only developing a series of arguments relevant to the philosophy of mind, the cognitive sciences, and robotic AI, but also asking what social robotics can teach us about anthropomorphism. On this basis, we propose a theoretical perspective that characterizes anthropomorphism as a basic mechanism of interaction, and rebuts the ethical reflections that a priori condemn “anthropomorphism-based” social robots. To address the relevant ethical issues, we promote a critical, experimentally based ethical approach to social robotics, “synthetic ethics,” which aims at allowing humans to use social robots for two main goals: self-knowledge and moral growth.
Article
Full-text available
Rhetorical moves that construct humanoid robots as social agents disclose tensions at the intersection of science and technology studies (STS) and social robotics. The discourse of robotics often constructs robots that are like us (and therefore unlike dumb artefacts). In the discourse of STS, descriptions of how people assimilate robots into their activities are presented directly or indirectly against the backdrop of actor-network theory, which prompts attributing agency to mundane artefacts. In contradistinction to both social robotics and STS, it is suggested here that a capacity to partake in dialogical action (to have a 'voice') is necessary for regarding an artefact as authentically social. The theme is explored partly through a critical reinterpretation of an episode that Morana Alač reported and analysed towards demonstrating her bodies-in-interaction concept. This paper turns to 'body' with particular reference to Gibsonian affordances theory so as to identify the level of analysis at which dialogicality enters social interactions.
Article
Full-text available
Can the sciences of the artificial positively contribute to the scientific exploration of life and cognition? Can they actually improve the scientific knowledge of natural living and cognitive processes, from biological metabolism to reproduction, from conceptual mapping of the environment to logical reasoning, language, or even emotional expression? This article aims to answer these kinds of questions in the affirmative. Its main object is the emergent scientific methodology often called the "synthetic approach", which promotes the programmatic production of embodied and situated models of living and cognitive systems in order to explore aspects of life and cognition not accessible in natural systems and scenarios. The first part of this article presents and discusses the synthetic approach, and proposes an epistemological framework which promises to warrant genuine transmission of knowledge from the sciences of the artificial to the sciences of the natural. The second part of this article looks at research applying the synthetic approach to the psychological study of emotional development. It shows how robotics, through the synthetic methodology, can develop a particular perspective on emotions, coherent with current psychological theories of emotional development and fitting well with the recent "cognitive extension" approach proposed by the cognitive sciences and philosophy of mind.
Article
Full-text available
One central issue in social robotics research is the question of the affective involvement of users. The problem of creating a robot able to establish and to participate competently in dynamic affective exchanges with human partners has been recognized as fundamental, especially for the success of projects involving assistive or educational robotics. This locates social robotics at the crossroads of many interconnected issues related to various disciplines, such as epistemology, cognitive science, sociology and ethics. Among these issues are, for example, the epistemological and theoretical problems of defining how emotions can be represented in a robot and under which conditions robots are able to participate effectively in emotional and empathic dynamics with human beings. Can robots experience emotions, or can they only express them? If we identify robotic ‘emotions’ as ‘pure simulations’, to which no actual inner experience corresponds, what are the conditions under which we can con ...
Article
Full-text available
This article deals with contemporary research aimed at building emotional and empathic robots, and gives an overview of the field focusing on its main characteristics and ongoing transformations. It interprets the latter as precursors to a paradigmatic transition that could significantly change our social ecologies. This shift consists in abandoning the classical view of emotions as essentially individual states, and developing a relational view of emotions, which, as we argue, can create genuinely new emotional and empathic processes—dynamics of “human-robot” affective coordination supporting the development of mixed (human-robot) ecologies.
Article
Full-text available
Social human-robot interaction (HRI) has been widely recognised as one of the major challenges for robotics. This social aspect includes designing robots that embrace various humanlike characteristics of appearance and behaviour. Few HRI studies, however, address the core goal of human social interaction, i.e. the development of the self. This paper argues that social robots can constitute an illusion or an extension of the human self, and should be developed as such. The concept of the self and its meanings for HRI research are discussed here from the symbolic interactionism perspective.
Conference Paper
Full-text available
This paper discusses people's willingness to suspend their beliefs about what is living in physical social robotics. Our attention focuses on users' perception of the machines they interact with, in particular on how much fiction can persist in human-robot interaction. We expand on the concept of suspension of disbelief (SoD) in order to understand the role and nature of user engagement in robotics, and to discuss a balance between fantasy and realism, and between user expectation and the design challenges faced by robotics.
Article
Full-text available
The development of robots that closely resemble human beings can contribute to cognitive research. An android provides an experimental apparatus that has the potential to be controlled more precisely than any human actor. However, preliminary results indicate that only very humanlike devices can elicit the broad range of responses that people typically direct toward each other. Conversely, to build androids capable of emulating human behavior, it is necessary to investigate social activity in detail and to develop models of the cognitive mechanisms that support this activity. Because of the reciprocal relationship between android development and the exploration of social mechanisms, it is necessary to establish the field of android science. Androids could be a key testing ground for social, cognitive, and neuroscientific theories, as well as a platform for their eventual unification. Nevertheless, subtle flaws in appearance and movement can be more apparent and eerie in very humanlike robots. This uncanny phenomenon may be symptomatic of entities that elicit our model of a human other but do not measure up to it. If so, very humanlike robots may provide the best means of pinpointing what kinds of behavior are perceived as human, since deviations from human norms are more obvious in them than in more mechanical-looking robots. In pursuing this line of inquiry, it is essential to identify the mechanisms involved in evaluations of human likeness. One hypothesis is that, by playing on an innate fear of death, an uncanny robot elicits culturally supported defense responses for coping with death’s inevitability. An experiment, which borrows from methods used in terror management research, was performed to test this hypothesis. [Thomson Reuters Essential Science Indicators: Fast Breaking Paper in Social Sciences, May 2008]
Conference Paper
Full-text available
The human body constructs itself into a person by becoming attuned to the affective consequences of its actions in social relationships. Norms develop that ground perception and action, providing standards for appraising conduct. The body finds itself motivated to enact itself as a character in the drama of life, carving from its beliefs, intentions, and experiences a unique identity and perspective. If a biological body can construct itself into a person by exploiting social mechanisms, could an electromechanical body, a robot, do the same? To qualify for personhood, a robot body must be able to construct its own identity, to assume different roles, and to discriminate in forming friendships. Though all these conditions could be considered benchmarks of personhood, the most compelling benchmark, for which those mentioned above are prerequisites, is the ability to sustain long-term relationships. Long-term relationships demand that a robot continually recreate itself as it scripts its own future. This benchmark may be contrasted with those of previous research, which tend to define personhood in terms that are trivial, subjective, or based on assumptions about moral universals. Although personhood should not in principle be limited to one species, the most humanlike of robots are best equipped for reciprocal relationships with human beings.
Article
Full-text available
At a time of increased social usage of net and collaborative applications, a robust and detailed theory of social presence could contribute to our understanding of social behavior in mediated environments, allow researchers to predict and measure differences among media interfaces, and guide the design of new social environments and interfaces. A broader theory of social presence can guide more valid and reliable measures. The article reviews, classifies, and critiques existing theories and measures of social presence. A set of criteria and scope conditions is proposed to help remedy limitations in past theories and measures and to provide a contribution to a more robust theory and measure of social presence.
Article
Full-text available
As a social species, humans rely on a safe, secure social surround to survive and thrive. Perceptions of social isolation, or loneliness, increase vigilance for threat and heighten feelings of vulnerability while also raising the desire to reconnect. Implicit hypervigilance for social threat alters psychological processes that influence physiological functioning, diminish sleep quality, and increase morbidity and mortality. The purpose of this paper is to review the features and consequences of loneliness within a comprehensive theoretical framework that informs interventions to reduce loneliness. We review physical and mental health consequences of loneliness, mechanisms for its effects, and effectiveness of extant interventions. Features of a loneliness regulatory loop are employed to explain cognitive, behavioral, and physiological consequences of loneliness and to discuss interventions to reduce loneliness. Loneliness is not simply being alone. Interventions to reduce loneliness and its health consequences may need to take into account its attentional, confirmatory, and memorial biases as well as its social and behavioral effects.
Article
Full-text available
If robotic companions are to be used in the near future by aging adults, they will have to be accepted by them. In the process of developing a methodology to measure, predict and explain the acceptance of robotic companions, we researched the influence of social abilities, social presence and perceived enjoyment. After an experiment (n=30) that included collecting usage data, and a second experiment (n=40) with a robot in a more sociable and a less sociable condition, we were able to confirm the relevance of these concepts. Results suggest that social abilities contribute to the sense of social presence when interacting with a robotic companion, and that this leads, through higher enjoyment, to a higher acceptance score.
Article
Full-text available
Involving our corporeal bodies in interaction can create strong affective experiences. Systems that can both be influenced by and influence users corporeally exhibit a use quality we name an affective loop experience. In an affective loop experience, (i) emotions are seen as processes, constructed in the interaction, starting from everyday bodily, cognitive or social experiences; (ii) the system responds in ways that pull the user into the interaction, touching upon end users' physical experiences; and (iii) throughout the interaction the user is an active, meaning-making individual choosing how to express themselves—the interpretation responsibility does not lie with the system. We have built several systems that attempt to create affective loop experiences, with more or less successful results. For example, eMoto lets users send text messages between mobile phones, but in addition to text the messages also have colourful and animated shapes in the background, chosen through emotion-gestures with a sensor-enabled stylus pen. Affective Diary is a digital diary with which users can scribble their notes, but it also allows bodily memorabilia to be recorded from body sensors, mapped to users' movement and arousal and placed along a timeline. Users can see patterns in their bodily reactions and relate them to various events going on in their lives. The experiences of building and deploying these systems gave us insights into design requirements for addressing affective loop experiences, such as how to design for turn-taking between user and system, how to create ‘open’ surfaces in the design that can carry users' own meaning-making processes, how to combine modalities to create a ‘unity’ of expression, and the importance of mirroring user experience in familiar ways that touch upon their everyday social and corporeal experiences. But a more important lesson gained from deploying the systems is how emotion processes are co-constructed and experienced as inseparable from all other aspects of everyday life. Emotion processes are part of our social ways of being in the world; they dye our dreams, hopes and bodily experiences of the world. If we aim to design for affective interaction experiences, we need to place them in this larger picture.
Article
Man and machine differ in fundamental ways. Formal research in artificial intelligence and robotics has for half a century aimed to cross this divide, whether from the perspective of understanding man by building models, or of building machines as intelligent and versatile as humans. Inevitably, our sources of inspiration come from what exists around us, but to what extent should a machine’s conception be sourced from such biological references as ourselves? Designing machines capable of explicit social interaction with people necessitates employing the human frame of reference to a certain extent. However, there is also a fear that, once this man-machine boundary is crossed, machines will cause the extinction of mankind. The following paper briefly discusses a number of fundamental distinctions between humans and machines in the field of social robotics, and situates these issues with a view to understanding how to address them.
Book
This book is for both robot builders and scientists who study human behaviour and human societies. Scientists do not only collect empirical data; they also formulate theories to explain the data. Theories of human behaviour and human societies are traditionally expressed in words but, today, with the advent of the computer, they can also be expressed by constructing computer-based artefacts. If the artefacts do what human beings do, the theory/blueprint that has been used to construct the artefacts explains human behaviour and human societies. Since human beings are primarily bodies, the artefacts must be robots, and human robots must progressively reproduce all we know about human beings and their societies. And, although they are purely scientific tools, they can have one very important practical application: helping human beings to better understand the many difficult problems they face today and will face in the future – and, perhaps, to find solutions for these problems.
Book
According to Rosalind Picard, if we want computers to be genuinely intelligent and to interact naturally with us, we must give computers the ability to recognize, understand, even to have and express emotions. The latest scientific findings indicate that emotions play an essential role in decision making, perception, learning, and more—that is, they influence the very mechanisms of rational thinking. Not only too much, but too little emotion can impair decision making. Part 1 of this book provides the intellectual framework for affective computing. It includes background on human emotions, requirements for emotionally intelligent computers, applications of affective computing, and moral and social questions raised by the technology. Part 2 discusses the design and construction of affective computers. Although this material is more technical than that in Part 1, the author has kept it less technical than typical scientific publications in order to make it accessible to newcomers. Topics in Part 2 include signal-based representations of emotions, human affect recognition as a pattern recognition and learning problem, recent and ongoing efforts to build models of emotion for synthesizing emotions in computers, and the new application area of affective wearable computers.
Chapter
Affective computing relates to, arises from, or intentionally influences emotion. It is a continuously growing multidisciplinary field that explores how technology can inform an understanding of human affect, how interactions between humans and technologies can be impacted by affect, how systems can be designed to utilize affect to enhance capabilities, and how sensing and affective strategies can transform human-computer interaction. Encompassing disciplines such as engineering, psychology, education, cognitive science, sociology, and others, this chapter explores some of the historical foundations and general goals of affective computing, approaches to affect detection and generation, applications, ethical considerations, and future directions.
Chapter
This chapter discusses android science as an interdisciplinary framework bridging robotics and cognitive science. Android science is expected to be a fundamental research area in which the principles of human-human communications and human-robot communications are studied. In the framework of android science, androids enable us to directly exchange bodies of knowledge gained by the development of androids in engineering and the understanding of humans in cognitive science. As an example of practice in android science, this chapter introduces geminoids, very humanlike robots modeled on real persons, and explains how body ownership transfer occurs for the operator of a geminoid. The phenomenon of body ownership transfer is studied with a geminoid and a brain-machine interface system.
Chapter
What will it be like to admit Artificial Companions into our society? How will they change our relations with each other? How important will they be in the emotional and practical lives of their owners, given that we know people became emotionally dependent even on simple devices like the Tamagotchi? How much social life might they have in contacting each other? The contributors to this book discuss the possibility and desirability of some form of long-term computer Companions now being a certainty in the coming years. It is a good moment to consider, from a set of wide interdisciplinary perspectives, both how we shall construct them technically and their personal, philosophical and social consequences. By Companions we mean conversationalists or confidants – not robots – but rather computer software agents whose function will be to get to know their owners over a long period. Those owners may well be elderly or lonely, and the contributions in the book focus not only on assistance via the internet (contacts, travel, doctors, etc.) but also on providing company and Companionship, by offering aspects of real personalization.
Article
In this article, we ask how much, if anything, of Robert Frank's (1988, 2004) theory of emotions as evolved strategic commitment devices can survive rejection of its underlying game-theoretic model. Frank's thesis is that emotions serve to prevent people from reneging on threats and promises with enough reliability to support cooperative equilibria in prisoner's dilemmas and similar games with inefficient dominant equilibria. We begin by showing that Frank, especially in light of recent revisions to the theory, must be interpreted as endorsing a version of so-called 'constrained maximization' as proposed by Gauthier (1986). This concept has been subjected to devastating criticism by Binmore (1994), which we endorse: no consistent mathematical sense can be made of games in which constrained maximization is allowed. However, this leaves open the question of whether Frank has identified a genuine empirical phenomenon by means of his confused theoretical model. We argue that he in fact has, but that seeing this depends on rejecting a muddled folk-psychological model of emotions, which Frank himself follows, according to which emotions are inner states of people. Instead, following Dennett (1987, 1991) and other so-called 'externalist' philosophers of cognitive science, we argue that emotions, properly speaking, are social signals coded in culturally evolved intentional conventions that find their identity conditions outside of individuals, in the social environment. As such, their evolutionary proper functions lie in their capacity to enable individuals to solve what we call 'game determination' problems, that is, coordination on multiple-equilibrium meta-games over which base-games to play. This allows emotions to indeed serve as commitment devices in assurance games (though not in prisoner's dilemmas). Thus the empirical core of Frank's thesis is recovered, though only by way of drastic revisions to both the game theory and the psychology incorporated in his model.
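To make the abstract's closing distinction concrete, the contrast can be sketched with two illustrative payoff matrices (a minimal example of our own, not taken from the article):

\[
\text{Prisoner's dilemma:}\quad
\begin{array}{c|cc}
 & C & D \\ \hline
C & 3,3 & 0,4 \\
D & 4,0 & 1,1
\end{array}
\qquad
\text{Assurance game:}\quad
\begin{array}{c|cc}
 & C & D \\ \hline
C & 4,4 & 0,2 \\
D & 2,0 & 1,1
\end{array}
\]

In the prisoner's dilemma, D strictly dominates C for both players, so no signal of intent alone can make (C, C) an equilibrium; in the assurance game, both (C, C) and (D, D) are Nash equilibria, so a reliable emotional signal of commitment can coordinate the players on the efficient (C, C) outcome, which is exactly the role the authors recover for emotions.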
Article
This paper reviews “socially interactive robots”: robots for which social human–robot interaction is important. We begin by discussing the context for socially interactive robots, emphasizing the relationship to other research fields and the different forms of “social robots”. We then present a taxonomy of design methods and system components used to build socially interactive robots. Finally, we describe the impact of these robots on humans and discuss open issues. An expanded version of this paper, which contains a survey and taxonomy of current applications, is available as a technical report [T. Fong, I. Nourbakhsh, K. Dautenhahn, A survey of socially interactive robots: concepts, design and applications, Technical Report No. CMU-RI-TR-02-29, Robotics Institute, Carnegie Mellon University, 2002].
Book
By the mid-1980s researchers from artificial intelligence, computer science, brain and cognitive science, and psychology realized that the idea of computers as intelligent machines was inappropriate. The brain does not run "programs"; it does something entirely different. But what? Evolutionary theory says that the brain has evolved not to do mathematical proofs but to control our behavior, to ensure our survival. Researchers now agree that intelligence always manifests itself in behavior—thus it is behavior that we must understand. An exciting new field has grown around the study of behavior-based intelligence, also known as embodied cognitive science, "new AI," and "behavior-based AI." This book provides a systematic introduction to this new way of thinking. After discussing concepts and approaches such as subsumption architecture, Braitenberg vehicles, evolutionary robotics, artificial life, self-organization, and learning, the authors derive a set of principles and a coherent framework for the study of naturally and artificially intelligent systems, or autonomous agents. This framework is based on a synthetic methodology whose goal is understanding by designing and building. The book includes all the background material required to understand the principles underlying intelligence, as well as enough detailed information on intelligent robotics and simulated agents so readers can begin experiments and projects on their own. The reader is guided through a series of case studies that illustrate the design principles of embodied cognitive science.
Article
Prototypes of interactive computer systems have been built that can begin to detect and label aspects of human emotional expression, and that respond to users experiencing frustration and other negative emotions with emotionally supportive interactions, demonstrating components of human skills such as active listening, empathy, and sympathy. These working systems support the prediction that a computer can begin to undo some of the negative feelings it causes by helping a user manage his or her emotional state. This paper clarifies the philosophy of this new approach to human–computer interaction: deliberately recognising and responding to an individual user's emotions in ways that help users meet their needs. We define user needs in a broader perspective than has hitherto been discussed in the HCI community, to include emotional and social needs, and examine technology's emerging capability to address and support such needs. We raise and discuss potential concerns and objections regarding this technology, and describe several opportunities for future work.
Article
This paper explores the topic of social robots—the class of robots that people anthropomorphize in order to interact with them. From the diverse and growing number of applications for such robots, a few distinct modes of interaction are beginning to emerge. We distinguish four such classes: socially evocative, social interface, socially receptive, and sociable. For the remainder of the paper, we explore a few key features of sociable robots that distinguish them from the others. We use the vocal turn-taking behavior of our robot, Kismet, as a case study to highlight these points.
Article
This article can be viewed as an attempt to explore the consequences of two propositions. (1) Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality. (2) Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim. The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality. These two propositions have the following consequences: (3) The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of 1 and 2. (4) Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of 1. (5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from 2 and 4. “Could a machine think?” On the argument advanced here only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains. And that is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking.
Article
Science provides the best way of understanding the world. Public understanding of science is limited: science goes against common sense (the earth moves round the sun). Paranormal beliefs are all too common, and they go completely against science; there is a mystical element in our brains. Unlike religion, science is universal and is almost entirely independent of the particular culture in which it is performed. It had its origin in ancient Greece. Whenever a new technology is introduced, it is not for the scientists to take an ethical decision about how it should be used, but they must make public its implications.
Toward a more robust theory and measure of social presence
  • F Biocca
  • C Harms
  • J K Burgoon
Acting together in dis-harmony. Cooperating to conflict and cooperation in conflict
  • P Dumouchel
Emotion modeling for social robots
  • A Paiva
  • I Leite
  • T Ribeiro
What is a human? (Interaction Studies)
  • P H Kahn