Conference Paper

Recognition in Human-Robot Interaction: The Gateway to Engagement

... This shift in focus on methods, as well as the focus on action and intention recognition between humans and robots, is the major topic of this paper. Possessing basic social interaction skills [24,26,28,30] is necessary for engaging in more advanced forms of social interaction such as joint actions and mutual collaboration [38–55]. In other words, it requires that robots are able to perceive similar emotional and behavioral patterns and environmental cues as humans do (e.g., [1,5,53–55]). ...

... In other words, it requires that robots are able to perceive similar emotional and behavioral patterns and environmental cues as humans do (e.g., [1,5,53–55]). Accordingly, it has been acknowledged that the interaction quality between humans and robots has to progress to a degree that is comparable to the fluent, trustworthy, and smooth interaction currently accomplished between humans [52], which implies that the robot has to act autonomously to some extent. Although autonomous action is a core aspect of many kinds of robots, the autonomy concept is considered rather problematic, because it has received substantially different interpretations within different research communities [53–55]. ...

... In the fields of HRI and robotics, there has been a lot of research on robots identifying, understanding, and predicting human intentions and actions (e.g., [1,17–20,24,33,39–41,45,50–52]). It should be noted, however, that there is little existing work in which robots are able to fully satisfy the requirements for having recognition capacities, although they display some aspects of recognition. ...
Article
Full-text available
The coexistence of robots and humans in shared physical and social spaces is expected to increase. A key enabler of high-quality interaction is a mutual understanding of each other’s actions and intentions. In this paper, we motivate and present a systematic user experience (UX) evaluation framework for action and intention recognition between humans and robots, because there is an identified lack of this kind of evaluation methodology. The evaluation framework is packaged into a methodological approach called ANEMONE (action and intention recognition in human robot interaction). ANEMONE has its foundation in cultural-historical activity theory (AT) as the theoretical lens, the seven stages of action model, and UX evaluation methodology, which together motivate and frame the work presented in this paper. The proposed methodological approach of ANEMONE provides investigators of UX evaluation with guidance on how to measure, assess, and evaluate the mutual recognition of actions and intentions between humans and robots. The paper ends with a discussion, directions for future work, and some concluding remarks.
... Brinck and Balkenius point to three steps towards achieving mutual recognition of actions: identification, confirmation, and turn-taking [7,8]. The first, identification, is arguably also the most important one, since it deals with the recognition of another individual based on available attributes such as movement, actions, direction of gaze, language, gestures, etc. ...
... The second step, confirmation, is about assuring the other party that identification has occurred. The third and final step, turn-taking, is the establishment of an immediate and dynamic coupling between a human and a robot, unfolding smoothly where the actions and behaviours of each party are dependent on the actions and behaviours of the other [7,8]. ...
Conference Paper
Full-text available
The Operator 4.0 typology depicts the collaborative operator as one of eight working scenarios for operators in Industry 4.0. It signifies collaborative robot applications and the interaction between humans and robots working collaboratively or cooperatively towards a common goal. For this collaboration to run seamlessly and effortlessly, human-robot communication is essential. We briefly discuss what trust, predictability, and intentions are, before investigating the communicative features of both self-driving cars and collaborative robots. We found that although communicative external HMIs could arguably provide some benefits in both domains, an abundance of cues about what an autonomous car or a robot is about to do is easily accessible through the environment or could be created simply by understanding and designing legible motions.
... In this paper, we revealed that, although many taxonomies and levels of collaboration exist in human-robot interaction, knowledge and insights on successful interaction strategies for achieving mutual action and intention recognition between humans and robots in manufacturing contexts are currently scarce. We suggest that future work in HRI and HRC should, to a larger extent than is currently the case, take inspiration from socio-cognitive theories of human interaction and collaboration, as researchers are doing in the social robotics field [10,23]. Besides disentangling the scattered use of inconsistent terminology, a deeper theoretical basis may provide faster progress and more efficient outcomes for identifying and implementing appropriate human-robot interaction strategies in industrial contexts, which should be beneficial in many aspects. ...
Chapter
Full-text available
Ongoing industrial evolution requires robots to be able to share physical and social space with humans in such a way that interaction and coexistence are positively experienced by human workers. A prerequisite is the possibility for the human and the robot to mutually perceive, interpret and act on each other’s actions and intentions. To achieve this, strategies for human-robot interaction are needed that are adapted to operators’ needs and characteristics in industrial contexts. In this paper, we present various taxonomies of levels of automation, human-robot interaction, and human-robot collaboration suggested for the envisioned factories of the future. Based on this foundation, we propose a compass direction for continued research efforts, zooming both in and out on how to develop applicable, worker-centric human-robot interaction strategies in order to obtain effective, efficient, safe, sustainable, and pleasant human-robot collaboration and coexistence.
... This information is intended to form expectations "here and now" of the forthcoming communication, and it is meant to be used by the operator as a cue to infer the communication intent of the swarm. This ostensive signal is thus meant to be the first perceptible signal that allows the operator to perceive the swarm as an agent (Brinck & Balkenius, 2019). From these characteristics, we created two motions whose goal is to provide a clear visual perceptive signal to the operator to initiate the human-swarm communication. ...
Article
Full-text available
Many people are fascinated by biological swarms, but understanding the behavior and inherent task objectives of a bird flock or ant colony requires training. Whereas several swarm intelligence works focus on mimicking natural swarm behaviors, we argue that this may not be the most intuitive approach to facilitate communication with the operators. Instead, we focus on the legibility of swarm expressive motions to communicate mission-specific messages to the operator. To do so, we leverage swarm intelligence algorithms on chain formation for resilient exploration and mapping combined with acyclic graph formation (AGF) into a novel swarm-oriented programming strategy. We then explore how expressive motions of robot swarms could be designed and test the legibility of nine different expressive motions in an online user study with 98 participants. We found several differences between the motions in communicating messages to the users. These findings represent a promising starting point for the design of legible expressive motions for implementation in decentralized robot swarms.
Chapter
The growing importance of technology in daily life has led to a focus on making robots think like humans to enhance the integration of humans and robots in Cyber-Physical Systems (CPS). Cognitive science and psychology offer important knowledge and tools for integrating human-like learning processes into robots. The challenge is to equip robots with prior knowledge and information, rather than starting the learning process from scratch. The goal of this research is to enable efficient interaction and co-existence of humans, robots, and other agents in CPS. This paper presents a review of the current academic literature on identifying human intentions and feeding that information to robots so that they can interact effectively with humans. As a new contribution, this paper also proposes a state-of-the-art solution for human intent recognition studies and focuses our research roadmap on emotion recognition using vital signs, including electroencephalography (EEG) signals, to understand the intent behind human actions using deep learning techniques. The research also compares the prediction performance of recurrent neural networks (RNN) with other algorithms. The aim is to understand human intent from vital signs for effective co-existence of humans in the cyber-physical system, and to identify the intent of the agent and ensure that it aligns with the context of the given task or goal, based on immediately perceptible visual attributes and dynamic properties (the perception of movement, gaze, vocalization, and emotional state).
Keywords: Intent Recognition, Vital Signs, EEG Signal, Context Aware
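As a rough sketch of the kind of recurrent model the roadmap points to, the following minimal GRU classifier maps windowed multi-channel EEG features to intent/emotion classes; the channel count, window length, and class labels are hypothetical placeholders, not values from the paper.

# Illustrative sketch only: a minimal recurrent classifier for windowed EEG
# signals, in the spirit of the RNN-based intent/emotion recognition the
# chapter describes. Channel count, window length, and number of classes are
# assumed for illustration.
import torch
import torch.nn as nn

class EEGIntentRNN(nn.Module):
    def __init__(self, n_channels=14, hidden=64, n_classes=4):
        super().__init__()
        self.rnn = nn.GRU(input_size=n_channels, hidden_size=hidden,
                          batch_first=True)          # one GRU layer over time
        self.head = nn.Linear(hidden, n_classes)     # map final state to class logits

    def forward(self, x):                 # x: (batch, time, channels)
        _, h = self.rnn(x)                # h: (1, batch, hidden), final hidden state
        return self.head(h[-1])           # logits per intent/emotion class

# usage: a batch of 2-second windows sampled at 128 Hz (assumed values)
model = EEGIntentRNN()
window = torch.randn(8, 256, 14)          # 8 windows, 256 samples, 14 channels
logits = model(window)                    # shape (8, 4)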
Chapter
Full-text available
The aim of this position paper is to propose a reflection on how to account for and investigate the many ways in which interaction between robots and humans requires co-ordination, negotiation or reformulation of meanings emerging in the ongoing interaction. Towards this aim, we argue, a perspective shift may be needed: to frame interactions as social engagements whose meaning can only be understood by the standpoint of those participating in it. We first present research on psychological benchmarks and design patterns for sociality in the HRI field. Then we provide arguments for including a new interactional element currently missing in the literature: participatory sense-making processes. As we will argue, such elements can be conceived and operationalised both as a relational benchmark as well as an interactional pattern, therefore proving useful for HRI research.
Article
Full-text available
The Collingridge dilemma or ‘dilemma of control’ presents a problem at the intersection of law, society and technology. New technologies can still be influenced, whether by regulation or policy, in their early stage of development, but their impact on society remains unpredictable. In contrast, once new technologies have become embedded in society, their implications and consequences are clear, but their development can no longer be affected. This results in the great challenge of the pacing problem: technological development increasingly outpaces the creation of appropriate laws and regulations. My paper examines the problematic entanglement and relationship of Artificial Intelligence (AI) and a key aspect of the rule of law, legal certainty. AI is our modern age’s fastest developing and most important technological advancement, a key driver for global socio-economic development, encompassing a broad spectrum of technologies between simple automation and autonomous decision-making. It has the potential to improve healthcare, transportation, communication and to contribute to climate change mitigation. However, its development carries an equal amount of risk, including opaque decision-making, gender-based or other kinds of discrimination, intrusion into private lives and misuse for criminal purposes. The transformative nature of AI technology impacts and challenges law and policymaking. The paper considers the impact of AI through legal certainty on the rule of law, and how it may undermine its various elements, among others foreseeability, comprehensibility and clarity of norms. It does so by elaborating on AI’s potential threat brought on by its opacity (‘black box effect’), complexity, unpredictability and partially autonomous behaviour, which can all impede the effective verification of compliance with and the enforcement of new as well as already existing legal rules in international, European and national systems. My paper offers insight into a human-centric and risk-based approach towards AI, based on consideration of legal and ethical questions surrounding the topic, to help ensure transparency and legal certainty in regulatory interventions for the benefit of optimising the efficiency of new technologies as well as protecting the existing safeguards of legal certainty.
Article
Full-text available
Consistent evidence suggests that the way we reach for and grasp an object is modulated not only by object properties (e.g., size, shape, texture, fragility and weight), but also by the type of intention driving the action, among which is the intention to interact with another agent (i.e., social intention). Action observation studies ascribe the neural substrate of this ‘intentional’ component to the putative mirror neuron system (pMNS) and the mentalizing system (MS). How social intentions are translated into executed actions, however, has yet to be addressed. We conducted a kinematic and a functional Magnetic Resonance Imaging (fMRI) study considering a reach-to-grasp movement performed towards the same object positioned at the same location but with different intentions: passing it to another person (social condition) or putting it on a concave base (individual condition). Kinematics showed that individual and social intentions are characterized by different profiles, with a slower movement at the level of both the reaching (i.e., arm movement) and the grasping (i.e., hand aperture) components. fMRI results showed that: (i) distinct voxel pattern activity for the social and the individual condition is present within the pMNS and the MS during action execution; (ii) decoding accuracies of regions belonging to the pMNS and the MS are correlated, suggesting that these two systems could interact in the generation of appropriate motor commands. Results are discussed in terms of motor simulation and inferential processes as part of a hierarchical generative model for action intention understanding and generation of appropriate motor commands.
Article
Full-text available
Cooperative dynamic manipulation enlarges the manipulation repertoire of human–robot teams. By means of synchronized swinging motion, a human and a robot can continuously inject energy into a bulky and flexible object in order to place it onto an elevated location and outside the partners’ workspace. Here, we design leader and follower controllers based on the fundamental dynamics of simple pendulums and show that these controllers can regulate the swing energy contained in unknown objects. We consider a complex pendulum-like object controlled via acceleration, and an “arm—flexible object—arm” system controlled via shoulder torque. The derived fundamental dynamics of the desired closed-loop simple pendulum behavior are similar for both systems. We limit the information available to the robotic agent about the state of the object and the partner’s intention to the forces measured at its interaction point. In contrast to a leader, a follower does not know the desired energy level and imitates the leader’s energy flow to actively contribute to the task. Experiments with a robotic manipulator and real objects show the efficacy of our approach for human–robot dynamic cooperative object manipulation.
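For orientation, the textbook-style relations below illustrate the general idea of regulating a simple pendulum's swing energy; the symbols and the energy-pumping control law are standard illustrations under assumed notation, not necessarily the controllers derived in the paper (where the follower, lacking the desired energy level, instead imitates the leader's measured energy flow).

% Illustrative, textbook-style energy regulation for a simple pendulum
% (angle \theta, mass m, length l), assumed notation, not the paper's derivation.
E(\theta,\dot\theta) = \tfrac{1}{2}\, m l^{2} \dot\theta^{2} + m g l\,\bigl(1 - \cos\theta\bigr)

% A leader that knows the desired energy E_d can inject or remove energy with an
% input proportional to the energy error, gated by velocity so the input does
% work in the right direction:
u = k\,\bigl(E_d - E(\theta,\dot\theta)\bigr)\,\dot\theta\,\cos\theta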
Chapter
Full-text available
The question of the relation between the collective and the individual has had a long but patchy history within both philosophy and psychology. In this chapter we consider some arguments that could be adopted for the primacy of the we, and examine their conceptual and empirical implications. We argue that the we needs to be seen as a developing and dynamic identity, not as something that exists fully fledged from the start. The concept of we thus needs more nuanced and differentiated treatment than currently exists, distinguishing it from the idea of a ‘common ground’ and discerning multiple senses of ‘we-ness’. At an empirical level, beginning from the shared history of human evolution and prenatal existence, a simple sense of pre-reflective we-ness, we argue, emerges from second-person I-you engagement in earliest infancy. Developmentally, experientially and conceptually, engagement remains fundamental to the we throughout its many forms, characterized by reciprocal interaction and conditioned by the normative aspects of mutual addressing.
Chapter
Full-text available
We argue that social robots should be designed to behave similarly to humans, and furthermore that social norms constitute the core of human interaction. Whether robots can be designed to behave in human-like ways turns on whether they can be designed to organize and coordinate their behavior with others' social expectations. We suggest that social norms regulate interaction in real time, and agents rely on dynamic information about their own and others' attention, intention and emotion to perform social tasks.
Article
Full-text available
According to situated, embodied, and distributed approaches to cognition, language is a crucial means for structuring social interactions. Recent approaches that emphasize this coordinative function treat language as a system of replicable constraints on individual and interactive dynamics. In this paper, we argue that the integration of the replicable-constraints approach to language with the ecological view on values allows for a deeper insight into processes of meaning creation in interaction. Such a synthesis of these frameworks draws attention to important sources of structuring interactions beyond the sheer efficiency of a collective system in its current task situation. Most importantly, the workings of linguistic constraints will be shown as embedded in more general fields of values, which are realized on multiple timescales. Because the ontogenetic timescale offers a convenient window into the emergence of linguistic constraints, we present illustrations of concrete mechanisms through which values may become embodied in language use in development.
Article
Full-text available
As social animals, it is crucial that we understand others’ intentions. But is it possible to detect social intention in two actions that have the exact same motor goal? In the present study, we presented participants with video clips of an individual reaching for and grasping an object either to use it (personal trial) or to give his partner the opportunity to use it (social trial). In Experiment 1, the ability of naïve participants to correctly classify social trials through simple observation of short video clips was tested. In addition, detection levels were analyzed as a function of individual scores on psychological questionnaires of motor imagery, visual imagery, and social cognition. Results revealed that the between-participant heterogeneity in the ability to distinguish social from personal actions was predicted by social skill abilities. A second experiment was then conducted to assess what predictive mechanism could contribute to the detection of social intention. Video clips were sliced and normalized to control for the reaction times (RTs) and/or the movement times (MTs) of the grasping action. Tested in a second group of participants, results showed that the detection of social intention relies on variations of both RT and MT that are implicitly perceived in the grasping action. The ability to implicitly use these motor deviants for action-outcome understanding would be the key to intuitive social interaction.
Chapter
Full-text available
The present account explains (i) which elements of nonverbal reference are intersubjective, (ii) what major effects intersubjectivity has on the general development of intentional communication and at what stages, and (iii) how intersubjectivity contributes to triggering the general capacity for nonverbal reference in the second year of life. First, intersubjectivity is analysed in terms of a sharing of experiences that is either mutual or individual, and either dyadic or triadic. Then it is shown that nonverbal reference presupposes intersubjectivity in the communicative intent indicators and referential behaviour, and indirectly in modifications of previous behaviour in response to communication failure. It is argued that different forms of intersubjectivity entail different types of communicative skills. A comprehensive analysis of data on gaze-related intersubjective behaviour in young infants shows that interaffectivity and interattentionality enable referential skills early in development and together allow for complex behaviour. Early referential skills, it is proposed, arise by other mechanisms than in nonverbal reference. Reliable and consistent use of nonverbal reference occurs when interaffectivity and interattentionality coalesce with interintentionality, which affords general cognitive skills that together permit a decontextualisation of communicative behaviour.
Article
Full-text available
The first part of the article examines some recent studies on the early development of social norms that examine young children’s understanding of codified rule games. It is argued that the constitutive rules that define the games cannot be identified with social norms and therefore the studies provide limited evidence about socio-normative development. The second part reviews data on children’s play in natural settings that show that children do not understand norms as codified or rules of obligation, and that the norms that guide social interaction are dynamic, situated, and heterogeneous. It is argued that normativity is intersubjective and negotiable and starts to develop in the first year, emerging as a practical skill that depends on participatory engagement. Three sources of compliance are discussed: emotional engagement, nonverbal agreement, and conversation.
Article
Full-text available
Complementary colors are color pairs which, when combined in the right proportions, produce white or black. Complementary actions refer here to forms of social interaction wherein individuals adapt their joint actions according to a common aim. Notably, complementary actions are incongruent actions. But being incongruent is not sufficient to be complementary (i.e., to complete the action of another person). Successful complementary interactions are founded on the abilities: (i) to simulate another person’s movements, (ii) to predict another person’s future action/s, (iii) to produce an appropriate incongruent response which differs, while interacting, from the observed one, and (iv) to complete the social interaction by integrating the predicted effects of one’s own action with those of another person. This definition clearly alludes to the functional importance of complementary actions in the perception–action cycle and prompts us to scrutinize what is taking place behind the scenes. Preliminary data on this topic have been provided by recent cutting-edge studies utilizing different research methods. This mini-review aims to provide an up-to-date overview of the processes and the specific activations underlying complementary actions.
Article
Full-text available
In cognitive psychology, studies concerning the face tend to focus on questions about face recognition, theory of mind (ToM) and empathy. Questions about the face, however, also fit into a very different set of issues that are central to ethics. Based especially on the work of Levinas, philosophers have come to see that reference to the face of another person can anchor conceptions of moral responsibility and ethical demand. Levinas points to a certain irreducibility and transcendence implicit in the face of the other. In this paper I argue that the notion of transcendence involved in this kind of analysis can be given a naturalistic interpretation by drawing on recent interactive approaches to social cognition found in developmental psychology, phenomenology, and the study of autism.
Article
Full-text available
Social and developmental psychologists have stressed the pervasiveness and strength of humans’ tendencies to conform and to imitate, and social anthropologists have argued that these tendencies are crucial to the formation of cultures. Research from four domains is reviewed and elaborated to show that divergence is also pervasive and potent, and it is interwoven with convergence in a complex set of dynamics that is often unnoticed or minimized. First, classic research in social conformity is reinterpreted in terms of truth, trust, and social solidarity, revealing that dissent is its most salient feature. Second, recent studies of children’s use of testimony to guide action reveal a surprisingly sophisticated balance of trust and prudence, and a concern for truth and charity. Third, new experiments indicate that people diverge from others even under conditions where conformity seems assured. Fourth, current studies of imitation provide strong evidence that children are both selective and faithful in who, what, and why they follow others. All of the evidence reviewed points toward children and adults as being engaged, embodied partners with others, motivated to learn and understand the world, others, and themselves in ways that go beyond goals and rules, prediction and control. Even young children act as if they are in a dialogical relationship with others and the world, rather than acting as if they are solo explorers or blind followers. Overall, the evidence supports the hypothesis that social understanding cannot be reduced to convergence or divergence, but includes ongoing activities that seek greater comprehensiveness and complexity in the ability to act and interact effectively, appropriately, and with integrity.
Article
Full-text available
The past years have seen an increasing debate on cooperation and its unique human character. Philosophers and psychologists have proposed that cooperative activities are characterized by shared goals to which participants are committed through the ability to understand each other’s intentions. Despite its popularity, some serious issues arise with this approach to cooperation. First, one may challenge the assumption that high-level mental processes are necessary for engaging in acting cooperatively. If they are, then how do agents that do not possess such ability (preverbal children, or children with autism who are often claimed to be mind-blind) engage in cooperative exchanges, as the evidence suggests? Secondly, to define cooperation as the result of two de-contextualized minds reading each other’s intentions may fail to fully acknowledge the complexity of situated, interactional dynamics and the interplay of variables such as the participants’ relational and personal history and experience. In this paper we challenge such accounts of cooperation, calling for an embodied approach that sees cooperation not only as an individual attitude toward the other, but also as a property of interaction processes. Taking an enactive perspective, we argue that cooperation is an intrinsic part of any interaction, and that there can be cooperative interaction before complex communicative abilities are achieved. The issue then is not whether one is able or not to read the other’s intentions, but what it takes to participate in joint action. From this basic account, it should be possible to build up more complex forms of cooperation as needed. Addressing the study of cooperation in these terms may enhance our understanding of human social development, and foster our knowledge of different ways of engaging with others, as in the case of autism.
Conference Paper
Full-text available
In this literature review we explain anthropomorphism and its role in the design of socially interactive robots and human-robot interaction. We illustrate the social phenomenon of anthropomorphism, which describes people's tendency to attribute lifelike qualities to objects and other non-lifelike artifacts. We present theoretical backgrounds from the social sciences and integrate related work from robotics research, including results from experiments with social robots. We present different approaches for anthropomorphic and humanlike form in a robot's design related to its physical shape, its behavior, and its interaction with humans. This review provides a comprehensive understanding of anthropomorphism in robotics, collects and reports relevant references, and gives an outlook on anthropomorphic human-robot interaction.
Article
Full-text available
The article explores the possibilities of formalizing and explaining the mechanisms that support spatial and social perspective alignment sustained over the duration of a social interaction. The basic proposed principle is that in social contexts the mechanisms for sensorimotor transformations and multisensory integration (learn to) incorporate information relative to the other actor(s), similar to the “re-calibration” of visual receptive fields in response to repeated tool use. This process aligns or merges the co-actors’ spatial representations and creates a “Shared Action Space” (SAS) supporting key computations of social interactions and joint actions; for example, the remapping between the coordinate systems and frames of reference of the co-actors, including perspective taking, the sensorimotor transformations required for lifting jointly an object, and the predictions of the sensory effects of such joint action. The social re-calibration is proposed to be based on common basis function maps (BFMs) and could constitute an optimal solution to sensorimotor transformation and multisensory integration in joint action or more in general social interaction contexts. However, certain situations such as discrepant postural and viewpoint alignment and associated differences in perspectives between the co-actors could constrain the process quite differently. We discuss how alignment is achieved in the first place, and how it is maintained over time, providing a taxonomy of various forms and mechanisms of space alignment and overlap based, for instance, on automaticity vs. control of the transformations between the two agents. Finally, we discuss the link between low-level mechanisms for the sharing of space and high-level mechanisms for the sharing of cognitive representations.
Article
Full-text available
We propose an approach to efficiently teach robots how to perform dynamic manipulation tasks in cooperation with a human partner. The approach utilises human sensorimotor learning ability where the human tutor controls the robot through a multi-modal interface to make it perform the desired task. During the tutoring, the robot simultaneously learns the action policy of the tutor and through time gains full autonomy. We demonstrate our approach by an experiment where we taught a robot how to perform a wood sawing task with a human partner using a two-person cross-cut saw. The challenge of this experiment is that it requires precise coordination of the robot's motion and compliance according to the partner's actions. To transfer the sawing skill from the tutor to the robot we used Locally Weighted Regression for trajectory generalisation, and adaptive oscillators for adaptation of the robot to the partner's motion.
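As a rough illustration of the oscillator component mentioned above, the sketch below shows a basic adaptive frequency (phase) oscillator locking onto a periodic partner motion; the gains, the input signal, and the function name are assumptions for illustration, not the implementation used in the paper.

# Minimal sketch of an adaptive frequency (phase) oscillator that tracks the
# phase and frequency of a roughly periodic partner motion (e.g., sawing).
# Gains and the input signal are illustrative assumptions.
import numpy as np

def adapt_oscillator(signal, dt=0.01, k=20.0, omega0=2.0 * np.pi):
    phi, omega = 0.0, omega0
    phases = np.empty_like(signal)
    for i, y in enumerate(signal):
        e = y - np.cos(phi)                          # error against oscillator output
        phi += dt * (omega - k * e * np.sin(phi))    # phase dynamics
        omega += dt * (-k * e * np.sin(phi))         # frequency adaptation
        phases[i] = phi
    return phases, omega

# usage: lock onto a 1.2 Hz "sawing" motion measured from the partner
t = np.arange(0.0, 10.0, 0.01)
partner = np.cos(2.0 * np.pi * 1.2 * t)
phases, omega_hat = adapt_oscillator(partner)
print(omega_hat / (2.0 * np.pi))                     # approaches 1.2 Hz as it converges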
Article
Full-text available
The theory-theory is not supported by evidence in the everyday actions of infants and toddlers whose lives a Theory of Mind is meant radically to transform. This paper reviews some of these challenges to the theory-theory, particularly from communication and deception. We argue that the theory’s disconnection from action is both inevitable and paradoxical. The mind–behaviour dualism upon which it is premised requires a conceptual route to knowing minds and disallows a real test of the theory through the study of action. Taking engagement seriously avoids these problems and requires that both lay people and psychologists be participants rather than observers in order to know, and indeed to create, minds.
Article
Full-text available
Our objective is to improve legibility of robot navigation behavior in the presence of moving humans. We examine a human-aware global navigation planner in a path crossing situation and assess the legibility of the resulting navigation behavior. We observe planning based on fixed social costs and static search spaces to perform badly in situations where robot and human move towards the same point. To find an improved cost model, we experimentally examine how humans deal with path crossing. Based on the results we provide a new way of calculating social costs with context dependent costs without increasing the search space. Our evaluation shows that a simulated robot using our new cost model moves more similar to humans. This shows how comparison of human and robot behavior can help with assessing and improving legibility.
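To make the idea of context-dependent social costs concrete, here is a minimal sketch in which the penalty around a person grows in the direction the person is moving; the Gaussian form, weights, and function name are illustrative assumptions rather than the cost model proposed in the paper.

# Illustrative context-dependent social cost: a static proximity penalty plus an
# extra penalty for cells lying ahead of a moving human, scaled by their speed.
# All weights and the Gaussian form are assumed for illustration.
import numpy as np

def social_cost(cell, human_pos, human_vel, w_static=1.0, w_motion=2.0, sigma=0.8):
    d = np.asarray(cell, float) - np.asarray(human_pos, float)
    dist = np.linalg.norm(d)
    base = w_static * np.exp(-dist**2 / (2 * sigma**2))     # proximity penalty
    speed = np.linalg.norm(human_vel)
    if speed < 1e-6 or dist < 1e-6:
        return base
    ahead = max(0.0, np.dot(d / dist, np.asarray(human_vel, float) / speed))
    # extra cost in the human's direction of travel
    return base + w_motion * speed * ahead * np.exp(-dist**2 / (2 * (2 * sigma)**2))

# usage: a cell one metre ahead of a walking person costs more than one behind
print(social_cost((1.0, 0.0), (0.0, 0.0), (1.0, 0.0)))
print(social_cost((-1.0, 0.0), (0.0, 0.0), (1.0, 0.0)))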
Conference Paper
Full-text available
This paper describes our approach on the development of the expression of emotions on a robot with constrained facial expressions. We adapted principles and practices of animation from Disney™ and other animators for robots, and applied them on the development of emotional expressions for the EMYS robot. Our work shows that applying animation principles to robots is beneficial for human understanding of the robots' emotions.
Article
Full-text available
This article reconstructs Hegel’s notion of experience and self-consciousness. It is argued that at the center of Hegel’s phenomenology of consciousness is the notion that experience is shaped by identification and sacrifice. Experience is the process of self-constitution and self-transformation of a self-conscious being that risks its own being. The transition from desire to recognition is explicated as a transition from the tripartite structure of want and fulfillment of biological desire to a socially structured recognition that is achieved only in reciprocal recognition, or reflexive recognition. At the center of the Hegelian notion of selfhood is thus the realization that selves are the locus of accountability. To be a self, it is concluded, is to be the subject of normative statuses that refer to commitments; it means to be able to take a normative stand on things, to commit oneself and undertake responsibilities.
Article
Full-text available
This memory can be seen as a generalization of the working memory for novel foods introduced in chapter 5. In that simple working memory, only the sickness sensor could recall the stored memory. For a more general working memory, any sufficiently specific subschema from any modality should recall the entire sensory schema with which it was stored. For example, the activation of a place representation will read out the expectations of the stimulus that will be perceived at that location. In...
Conference Paper
Full-text available
The development of robots that closely resemble human beings enables us to investigate many phenomena related to human interactions that could not otherwise be investigated with mechanical-looking robots. This is because more humanlike devices are in a better position to elicit the kinds of responses people direct at each other. In particular, we cannot ignore the role of appearance in giving us a subjective impression of social presence or intelligence. However, this impression is influenced by behavior and the complex relationship between it and appearance. As Masahiro Mori observed, a humanlike appearance does not necessarily give a positive impression. We propose a hypothesis as to how appearance and behavior are related and map out a plan for android research to investigate this hypothesis. We then examine a study that evaluates the behavior of androids according to the patterns of gaze fixations they elicit in human subjects. Studies such as these, which integrate the development of androids with the investigation of human behavior, constitute a new research area fusing engineering and science.
Article
Full-text available
Intentional communication is perceptually based and about attentional objects. Three attention mechanisms are distinguished: scanning, attention attraction, and attention-focusing. Attention-focusing directs the subject towards attentional objects. Attention-focusing is goal-governed (controlled by stimulus) or goal-intended (under the control of the subject). Attentional objects are perceptually categorised functional entities that emerge in the interaction between subjects and environment. Joint attention allows for focusing on the same attentional object simultaneously (mutual object-focused attention), provided that the subjects have focused on each other beforehand (subject-subject attention). It results in intentional communication if the subjects attend to each other as subjects (i) capable of attending, and (ii) attending in a goal-intended way. Intentional communication is fundamentally imperative and adapted to action.
Conference Paper
Full-text available
Previous research has found that people transfer behaviors from social interaction among humans to interactions with computers or robots. These findings suggest that people will talk to a robot which looks like a child in a similar way to people talking to a child. However, in a previous study in which we compared speech to a simulated robot with speech to preverbal, 10-month-old infants, we did not find the expected similarities. One possibility is that people were targeting an older child than a 10-month-old. In the current study, we address the similarities and differences between speech to four different age groups of children and to a simulated robot. The results shed light on how people talk to robots in general.
Conference Paper
Full-text available
As robots enter the everyday physical world of people, it is important that they abide by society's unspoken social rules such as respecting people's personal spaces. In this paper, we explore issues related to human personal space around robots, beginning with a review of the existing literature in human-robot interaction regarding the dimensions of people, robots, and contexts that influence human-robot interactions. We then present several research hypotheses which we tested in a controlled experiment (N = 30). Using a 2 (robotics experience vs. none: between-participants) × 2 (robot head oriented toward a participant's face vs. legs: within-participants) mixed design experiment, we explored the factors that influence proxemic behavior around robots in several situations: (1) people approaching a robot, (2) people being approached by an autonomously moving robot, and (3) people being approached by a teleoperated robot. We found that personal experience with pets and robots decreases a person's personal space around robots. In addition, when the robot's head is oriented toward the person's face, it increases the minimum comfortable distance for women, but decreases the minimum comfortable distance for men. We also found that the personality trait of agreeableness decreases personal spaces when people approach robots, while the personality trait of neuroticism and having negative attitudes toward robots increase personal spaces when robots approach people. These results have implications for both human-robot interaction theory and design.
Article
Full-text available
Humans spend most of their time interacting with other people. It is the motor organization subtending these social interactions that forms the main theme of this article. We review recent experimental studies testing whether it is possible to differentiate the kinematics of an action performed by an agent acting in isolation from the kinematics of the very same action performed within a social context. The results indicate that social context shapes action planning and that in the context of a social interaction, flexible online adjustments take place between partners. These observations provide novel insights on the social dimension of motor planning and control.
Article
Full-text available
Previous research investigated the contributions of target objects, situational context and movement kinematics to action prediction separately. The current study addresses how these three factors combine in the prediction of observed actions. Participants observed an actor whose movements were constrained by the situational context or not, and object-directed or not. After several steps, participants had to indicate how the action would continue. Experiment 1 shows that predictions were most accurate when the action was constrained and object-directed. Experiments 2A and 2B investigated whether these predictions relied more on the presence of a target object or cues in the actor's movement kinematics. The target object was artificially moved to another location or occluded. Results suggest a crucial role for kinematics. In sum, observers predict actions based on target objects and situational constraints, and they exploit subtle movement cues of the observed actor rather than the direct visual information about target objects and context.
Article
In conjunction with what is often called Industry 4.0, the new machine age, or the rise of the robots, the authors of this paper have each experienced the following phenomenon. At public events and roundtable discussions, among our circles of friends, or during interviews with the media, we are asked on a surprisingly regular basis: "How must humankind adapt to the imminent process of technological change? What do we have to learn in order to keep pace with the smart new machines? What new skills do we need to understand the robots?"
Chapter
In Persons in Relation John Macmurray draws a powerful distinction (one not usually recognized in modern studies of the development of this process in children) between spectators and participants in the process of knowing other minds: In reflection we isolate ourselves from dynamic relations with the Other; we withdraw into ourselves, adopting the attitude of spectators, not of participants. We are then out of touch with the world, and for touch we must substitute vision; for a real contact with the Other an imagined contact; and for real activity an activity of imagination. (Macmurray, 1991, p. 16)
Chapter
This chapter introduces a new model of infant-mother interaction - the 'Dyadic States of Consciousness' (DSC). In an infant-mother interaction, each individual communicates their affective evaluation of the state of what is going on in the interaction. In response to the induction of meaning in the other, infants and mothers adjust their behaviour to maintain a coordinated dyadic state. When the mutual induction is successful, a DSC is formed, meanings from the other's state of consciousness (SOC) are incorporated, and their SOCs gain coherence and complexity.
Article
During the 1920s and the 1930s, the notion of reification brought about recurring themes of social and cultural critique. The term was used to describe the rising unemployment, the economic crisis, and other such historical events that characterized the Weimar Republic. By combining concepts adapted from prominent philosophers such as Karl Marx, Max Weber, and Georg Simmel, Georg Lukács produced the three-part essay "Reification and the Consciousness of the Proletariat", which proposed that the forms of life in such circumstances be examined as a consequence of social reification. This chapter illustrates four indicators that demonstrate how the term reification has veered away from the meaning it acquired in the Weimar Republic and has moved toward a more theoretical discourse. © 2008 by The Regents of the University of California. All rights reserved.
Chapter
In this contribution we give an overview and discussion of the basic steps of System Identification. The four main ingredients of the process that takes us from observed data to a validated model are: (1) The data itself, (2) The set of candidate models, (3) The criterion of fit and (4) The validation procedure. We discuss how these ingredients can be blended to a useful mix for model-building in practice.
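As a toy illustration of how these four ingredients combine, the following sketch fits a first-order ARX model to simulated data by least squares and validates it on held-out samples; the system, noise level, and model order are invented for illustration and are not taken from the chapter.

# (1) data, (2) a candidate model set (first-order ARX), (3) a criterion of fit
# (least squares), (4) validation on held-out data. Toy example with an
# invented "true" system.
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=400)                               # (1) input data
y = np.zeros(400)
for k in range(1, 400):                                # true system: y_k = 0.8 y_{k-1} + 0.5 u_{k-1} + noise
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + 0.05 * rng.normal()

# (2) candidate model: y_k = a*y_{k-1} + b*u_{k-1}; (3) fit on the first half
Phi = np.column_stack([y[:199], u[:199]])
theta, *_ = np.linalg.lstsq(Phi, y[1:200], rcond=None)

# (4) validate: one-step-ahead prediction error on the second half
y_hat = theta[0] * y[200:399] + theta[1] * u[200:399]
rmse = np.sqrt(np.mean((y[201:400] - y_hat) ** 2))
print(theta, rmse)                                     # estimates close to (0.8, 0.5), small residual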
Book
(from the foreword) This book offers a selection of findings from intersensory perception research. The evidence reported in this book shows intersensory interactions to be developmentally primitive and pervasive. Intersensory interactions thus may not be the product of development but a cause that propels development forward. (from the book) The theme of this book involves the development of intersensory perception and how it occurs across different species.
Article
This article describes an emotional adaptation approach to proactively trigger increased helpfulness towards a robot in task-related human-robot interaction (HRI). Based on social-psychological predictions of human behavior, the approach aims at inducing empathy, paired with a feeling of similarity, in human users towards the robot. This is achieved by two differently expressed emotional control variables: an explicit statement of similarity before task-related interaction, and implicit adaptation of the emotional state of the robot to the mood of the human user, such that the current values of the human mood in the dimensions of pleasure, arousal, and dominance (PAD) are matched. The thereby shifted emotional state of the robot serves as a basis for the generation of task-driven emotional facial and verbal expressions, employed to induce and sustain high empathy towards the robot throughout the interaction. The approach is evaluated in a user study utilizing an expressive robot head. The effectiveness of the approach is confirmed by significant experimental results. An analysis of the individual components of the approach reveals significant effects of explicit emotional adaptation on helpfulness, as well as on the key HRI concepts of anthropomorphism and animacy. [FULL TEXT available at: https://mediatum.ub.tum.de/doc/1216553/1216553.pdf]
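A minimal sketch of the implicit adaptation idea, assuming a simple linear blend in pleasure-arousal-dominance (PAD) space and invented expression thresholds (not the paper's actual control variables or mapping), might look as follows.

# Nudge the robot's PAD state toward the user's mood, then map the shifted
# state to a facial-expression label. Blend rate and thresholds are assumed.
import numpy as np

def adapt_pad(robot_pad, human_pad, rate=0.3):
    robot = np.asarray(robot_pad, float)
    human = np.asarray(human_pad, float)
    return np.clip(robot + rate * (human - robot), -1.0, 1.0)

def expression(pad):
    p, a, _ = pad                          # pleasure, arousal (dominance unused here)
    if p > 0.3:
        return "smile" if a > 0.3 else "content"
    if p < -0.3:
        return "concerned" if a > 0.3 else "sad"
    return "neutral"

robot = (0.0, 0.0, 0.0)
human = (-0.6, 0.5, -0.2)                  # user in a slightly negative, aroused mood
robot = adapt_pad(robot, human)
print(robot, expression(robot))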
Article
This paper presents a unique real-time obstacle avoidance approach for manipulators and mobile robots based on the artificial potential field concept. Collision avoidance, traditionally considered a high-level planning problem, can be effectively distributed between different levels of control, allowing real-time robot operations in a complex environment. This method has been extended to moving obstacles by using a time-varying artificial potential field. We have applied this obstacle avoidance scheme to robot arm mechanisms and have used a new approach to the general problem of real-time manipulator control. We reformulated the manipulator control problem as direct control of manipulator motion in operational space—the space in which the task is originally described—rather than as control of the task's corresponding joint space motion obtained only after geometric and kinematic transformation. Outside the obstacles' regions of influence, we caused the end effector to move in a straight line with an upper speed limit. The artificial potential field approach has been extended to collision avoidance for all manipulator links. In addition, a joint space artificial potential field is used to satisfy the manipulator internal joint constraints. This method has been implemented in the COSMOS system for a PUMA 560 robot. Real-time collision avoidance demonstrations on moving obstacles have been performed by using visual sensing.
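For orientation, a minimal point-mass sketch of the artificial potential field idea is given below: an attractive term pulls toward the goal and a repulsive term acts inside each obstacle's region of influence. The gains, influence radius, and function name are illustrative assumptions; the paper's operational-space formulation for full manipulators is considerably richer.

# Point-mass potential field step: attraction to the goal plus repulsion from
# obstacles within a region of influence rho0. Gains are assumed for illustration.
import numpy as np

def potential_step(x, goal, obstacles, k_att=1.0, k_rep=0.5, rho0=1.0, step=0.05):
    x, goal = np.asarray(x, float), np.asarray(goal, float)
    force = -k_att * (x - goal)                          # attractive force toward goal
    for obs in obstacles:
        d = x - np.asarray(obs, float)
        rho = np.linalg.norm(d)
        if 1e-9 < rho < rho0:                            # inside region of influence
            force += k_rep * (1.0 / rho - 1.0 / rho0) / rho**2 * (d / rho)
    return x + step * force                              # move along the net force

# usage: steer a point "end effector" past one obstacle toward the goal
x = np.array([0.0, 0.0])
for _ in range(200):
    x = potential_step(x, goal=[4.0, 0.0], obstacles=[[2.0, 0.2]])
print(x)                                                 # ends near the goal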
Article
Recent developments strive for realizing robotic systems that not only interact, but closely collaborate with humans in performing everyday manipulation tasks. Successful collaboration requires the integration of the individual partner's intentions into a shared action plan, which may involve continuous negotiation of intentions. We focus on collaboration in a kinesthetic task, i.e., joint object manipulation. Here, ways must be found to integrate individual motion and force inputs from the members of the human-robot team, in order to achieve the joint task goal. Before guidelines on how robots should act in this process can be formulated, clarification on whether humans use the haptic channel for communicating their intentions is needed. This paper investigates this question in an experimental setup involving two collaborating humans. We consider physical effort as well as performance as indicators of successful intention integration. Our results strongly suggest that intention integration is enhanced via the haptic channel, i.e., that haptic communication takes place, especially in the case of shared decision situations. This provides a motivation for future investigations to model the process of intention integration itself in order to realize successful haptic human-robot collaboration.
Article
Cristina Bicchieri examines social norms, such as fairness, cooperation, and reciprocity, in an effort to understand their nature and dynamics, the expectations they generate, and how they evolve and change. Drawing on several intellectual traditions and methods, including those of social psychology, experimental economics and evolutionary game theory, Bicchieri provides an integrated account of how social norms emerge and why and when we follow them. Examining the existence and survival of inefficient norms, she demonstrates how norms evolve in ways that depend upon the psychological dispositions of the individual and how such dispositions may impair social efficiency. © Cristina Bicchieri 2006 and Cambridge University Press, 2010.
Article
In this paper I consider how thinking emerges out of human infants' relatedness towards the personal and non-personal world. I highlight the contrast between cognitive aspects and cognitive components of psychological functioning, and propose that even when thinking has become a partly separable component of the mind, affective and conative aspects inhere in its nature. I provide illustrative evidence from recent research on the developmental psychopathology of autism. In failing to adopt a developmental perspective, contemporary theorizing has displaced thinking from where it is properly situated - intimately woven with feeling as well as action, and infused with qualities of interpersonal relatedness from which its structure is derived.
Chapter
Imitation and mirroring processes are necessary but not sufficient conditions for children to develop human sociality. Human sociality entails more than the equivalence and connectedness of perceptual experiences. It corresponds to the sense of a shared world made of shared values. It originates from complex ‘open’ systems of reciprocation and negotiation, not just imitation and mirroring processes that are by definition ‘closed’ systems. From this premise, we argue that if imitation and mirror processes are important foundations for sociality, human inter-subjectivity develops primarily in reciprocation, not just imitation. Imitation provides a basic sense of social connectedness and mutual acknowledgment of existing with others that are ‘like me.’ However, it does not allow for the co-construction of meanings with others. For human sociality to develop, imitation and mirroring processes need to be supplemented by an open system of reciprocation. Developmental research shows that from the second month, mirroring, imitative, and other contagious emotional responses are by-passed. Imitation gives way to first signs of reciprocation (primary intersubjectivity), joint attention to objects (secondary intersubjectivity), the emergence of values that are jointly represented and negotiated with others (tertiary intersubjectivity), and eventually the development of an ethical stance accompanying theories of mind by 4 years of age. We review this development and propose that if mirroring processes enable individuals to bridge their subjective experiences, human inter-subjectivity proper develops from reciprocal social exchanges that lead to value negotiation and mutual recognition, both cardinal trademarks of human sociality. Keywords: Sociality, Reciprocation, Inter-subjectivity, Co-construction
Article
Collaborative control is a teleoperation system model based on human–robot dialogue. With this model, the robot asks questions to the human in order to obtain assistance with cognition and perception. This enables the human to function as a resource for the robot and help to compensate for limitations of autonomy. To understand how collaborative control influences human–robot interaction, we performed a user study based on contextual inquiry (CI). The study revealed that: (1) dialogue helps users understand problems encountered by the robot and (2) human assistance is a limited resource that must be carefully managed.
Conference Paper
In face-to-face conversations, speakers are continuously checking whether the listener is engaged in the conversation and change the conversational strategy if the listener is not fully engaged. With the goal of building a conversational agent that can adaptively control conversations with the user, this study analyzes the user's gaze behaviors and proposes a method for estimating whether the user is engaged in the conversation based on gaze transition 3-gram patterns. First, we conduct a Wizard-of-Oz experiment to collect the user's gaze behaviors. Based on the analysis of the gaze data, we propose an engagement estimation method that detects the user's disengagement gaze patterns. The algorithm is implemented as a real-time engagement-judgment mechanism and is incorporated into a multimodal dialogue manager in a conversational agent. The agent estimates the user's conversational engagement and generates probing questions when the user is distracted from the conversation. Finally, we conduct an evaluation experiment using the proposed engagement-sensitive agent and demonstrate that the engagement estimation function improves the user's impression of the agent and the interaction with the agent. In addition, probing performed with proper timing was also found to have a positive effect on the user's verbal/nonverbal behaviors in communication with the conversational agent.
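To illustrate the flavour of the 3-gram analysis, the sketch below encodes gaze targets as symbols, counts gaze-transition 3-grams, and flags disengagement when most 3-grams never touch the agent or the shared task; the symbol names and the threshold are hypothetical and do not reproduce the paper's estimation model.

# Count gaze-transition 3-grams over a sequence of gaze-target symbols and use
# the share of "looking elsewhere" 3-grams as a crude disengagement signal.
# Target labels and threshold are assumed for illustration.
from collections import Counter

def gaze_3grams(sequence):
    return Counter(tuple(sequence[i:i + 3]) for i in range(len(sequence) - 2))

def disengaged(sequence, threshold=0.5):
    grams = gaze_3grams(sequence)
    total = sum(grams.values())
    away = sum(c for g, c in grams.items() if all(t == "elsewhere" for t in g))
    return total > 0 and away / total > threshold

log = ["agent", "elsewhere", "elsewhere", "elsewhere", "elsewhere", "elsewhere", "screen"]
print(gaze_3grams(log))
print(disengaged(log))        # True: most 3-grams are entirely "elsewhere"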