Article

Joint action with artificial agents: Human-likeness in behaviour and morphology affects sensorimotor signaling and social inclusion


Abstract

Sensorimotor signaling is a key mechanism underlying coordination in humans. As artificial agents, including robots, become increasingly present in everyday contexts, joint action with them will become as common as joint action with other humans. The present study investigates the conditions under which sensorimotor signaling emerges when interacting with such agents. Human participants were asked to play a musical duet either with a humanoid robot or with an algorithm run on a computer. The artificial agent was programmed to commit errors that were either human-like (simulating a memory error) or machine-like (a repetitive loop of back-and-forth taps). At the end of the task, we tested social inclusion of the artificial partner using a ball-tossing game. Our results showed that, when interacting with the robot, participants displayed lower variability in their performance when the error was human-like rather than a mechanical failure; when the partner was an algorithm, the pattern was reversed. Social inclusion was affected by human-likeness only when the partner was a robot. Taken together, our findings show that both coordination with artificial agents and their social inclusion are influenced by how human-like the agent appears, in terms of morphological traits as well as behaviour.


... Recent work at the intersection of cognitive science and social robotics has shown that humans can extend their prediction skills to the perception of robots and form expectations about robots based on their prior experience [5,6,16,17]. These studies manipulated expectations towards robots using stimuli with mismatches along several visual dimensions, including appearance (form), motion, and interaction. ...
... Using a similar paradigm, [5] showed differential activity in the parietal cortex for an agent that moved in an unexpected way compared to others that moved in an expected way, which they interpreted as a prediction error within the framework of predictive coding [13,14]. Furthermore, in a study investigating sensorimotor signaling in human-robot interaction, [17] showed that people exhibit lower variability in their performance when a human-like robot commits a human-like error rather than a mechanical one, and that this pattern reverses when the agent is morphologically non-human-like. ...
... An important contribution of our study to the previous work on prediction in HRI is its multimodal nature. Although there are studies that examined the effect of expectations on the perception of robots, most of these studies were done in the visual modality [5,6,17]. Given the recent work that highlights the importance of voice in HRI [18] and the developments in text-to-speech technology, it has become essential to go beyond the visual modality and incorporate the effects of voice on communication and interaction with artificial agents. ...
Article
Full-text available
Recent work in cognitive science suggests that our expectations affect visual perception. With the rise of artificial agents in human life in the last few decades, one important question is whether our expectations about non-human agents such as humanoid robots affect how we perceive them. In the present study, we addressed this question in an audio–visual context. Participants reported whether a voice embedded in a noise belonged to a human or a robot. Prior to this judgment, they were presented with a human or a robot image that served as a cue and allowed them to form an expectation about the category of the voice that would follow. This cue was either congruent or incongruent with the category of the voice. Our results show that participants were faster and more accurate when the auditory target was preceded by a congruent cue than an incongruent cue. This was true regardless of the human-likeness of the robot. Overall, these results suggest that our expectations affect how we perceive non-human agents and shed light on future work in robot design.
... Interactive human-robot studies have shown that human-like errors performed by a humanoid robot have an impact on people's sensorimotor signaling, reflected in lower variability in their performance [43]. Furthermore, human-like range of behavioral variability in joint action seems to be critical for ascription of humanness to a humanoid robot [44]. ...
... For example, joint sense of agency does not emerge with disembodied partners (e.g., a desktop machine), presumably due to the absence of sensorimotor matching abilities [49]. Moreover, while human-like errors performed by a humanoid robot have an impact on humans' sensorimotor signaling, the reverse pattern arises for a disembodied partner, i.e., mechanical errors lead to lower variability in human performance compared to human-like errors [43]. Additionally, social signals exhibited by disembodied artificial agents do not fully elicit socio-cognitive mechanisms [50]. ...
Preprint
Full-text available
Verga, Kotz, and Ravignani’s paper on “The evolution of social timing” [1] offers an elegant proposal on how sociality and timing are interconnected and ubiquitous in human and nonhuman animal interactions, and proposes a framework for investigating the evolution of social timing and why it is so fundamental to social animals. The authors outline some of the key mechanisms that support social timing, such as interpersonal synchronization and mutual adaptation, but do not go into depth with respect to the fine-grained temporal properties and mechanisms underlying individual and interpersonal signals, which may lead to different social outcomes. Here we propose that these fine-grained temporal mechanisms are key to understanding the social timing, and consequently the sociality, of embodied social living species. We first summarize the fine-grained temporal mechanisms in human interactions that we argue are at the very core of social interactions, and show how these can predict important social consequences. Next, we examine whether such mechanisms can lead to varying perceptions of sociality, attribution of agency, and interaction parameters when embedded in artificial embodied agents (e.g., social robots). Finally, we argue that these fine-grained individual and interpersonal temporal dynamics are the key social mechanisms lacking in disembodied artificial intelligence (AI), such as ChatGPT, which ultimately reveals its lack of sociality.
... In the AI and robotics literature, the view that HRI can involve different forms of joint action is fairly widespread (e.g., Belhassein et al., 2022; Ciardo et al., 2022; Clodic et al., 2017; Grynszpan et al., 2019; Iqbal & Riek, 2017). Often, research on joint action in HRI is based on the view that two (or more) agents act together when they can be roughly said to (a) share a goal, (b) intend to engage in actions towards that goal, and (c) have a common knowledge of the other agent's (sub)goals, intentions, and actions (Bratman, 1992, 1993, 1997; Tomasello & Carpenter, 2017; Vesper et al., 2010; Sebanz et al., 2006; counter-arguments can be found in Clodic et al., 2017; Iqbal & Riek, 2017; Pacherie, 2011). ...
Article
Full-text available
In the past decade, the fields of machine learning and artificial intelligence (AI) have seen unprecedented developments that raise human-machine interactions (HMI) to the next level. Smart machines, i.e., machines endowed with artificially intelligent systems, have lost their character as mere instruments. This, at least, seems to be the case if one considers how humans experience their interactions with them. Smart machines are construed to serve complex functions involving increasing degrees of freedom, and they generate solutions not fully anticipated by humans. Consequently, their performances show a touch of action and even autonomy. HMI is therefore often described as a sort of “cooperation” rather than as a mere application of a tool. Some authors even go as far as subsuming cooperation with smart machines under the label of partnership, akin to cooperation between human agents sharing a common goal. In this paper, we explore how far the notion of shared agency and partnership can take us in our understanding of human interaction with smart machines. Discussing different topoi related to partnerships in general, we suggest that different kinds of “partnership”, depending on the form of interaction between agents, need to be kept apart. Building upon these discussions, we propose a tentative taxonomy of different kinds of HMI distinguishing coordination, collaboration, cooperation, and social partnership.
... There are approaches that process spatial, temporal, and usage data to establish patterns of psychological change in people who inhabit urban environments (Ojha et al. 2019). Recent studies analyse the influence of built environments and human behaviour with artificial agents and data management (Ciardo et al., 2022). ...
Article
Full-text available
Public space is the ideal setting for analysing and evaluating the correlation between behaviour and the morphological conditions of the urban structure. This relationship between inhabitants' behaviour and spatial configuration is experienced as a social outcome. From this perspective, the success of urban design is perceived in its capacity to reconcile spatial conditions with the alternatives it affords for collective relations, in a close, two-way relationship. According to urban studies on morphological approaches, from the second half of the twentieth century onwards two formal configurations are recognised: first, the so-called traditional city scheme, which understands cities as interconnected structures of buildings, so that the voids left between them configure the blocks; second, the so-called functionalist scheme, in which buildings are arranged freely and in isolation in space, generating indefinite layouts (Carmona, 2010, p. 77). This research presents a comparative analytical study of public spaces, combining studies of the typo-morphology of the city with key treatises on behaviour in public spaces. Spatial and behavioural patterns are evaluated in two areas of the city of Quito, Ecuador: on the one hand, a square in the Historic Centre with a traditional scheme, Plaza “La Merced”; on the other, an urban space with a functionalist scheme, Plaza “La República”. The study focuses on generating diagnostic information derived directly from observation of social behaviour and from analysis of the morphological elements of the space under study. Finally, the study contrasts the review of site-specific spatial conditions with the behavioural dynamics of their inhabitants.
Article
Humans may deprive each other of human qualities if the social context encourages it. But what about the opposite: do people attribute human traits to non-human entities without a mind, such as Artificial Intelligence (AI)? Perceived humanness is based on the assumption that the other can act (has agency) and has experiences (thoughts and feelings). This review shows that AI fails to fully elicit these two dimensions of mind perception. Embodied AI may trigger agency attribution, but only humans trigger the attribution of experience. Importantly, people are more likely to attribute mind in general, and agency specifically, to AI that resembles the human form. Lastly, people’s predispositions and the social context affect their tendency to attribute human traits to AI.
Article
Full-text available
Artificial agents are on their way to interacting with us daily. Thus, the design of embodied artificial agents that can easily cooperate with humans is crucial for their deployment in social scenarios. Endowing artificial agents with human-like behavior may boost individuals’ engagement during the interaction. We tested this hypothesis in two screen-based experiments. In the first, we compared the attentional engagement displayed by participants while they observed the same set of behaviors performed by an avatar of a humanoid robot and by a human. In the second experiment, we assessed individuals’ tendency to attribute anthropomorphic traits to the same agents displaying the same behaviors. The results of both experiments suggest that individuals need less effort to process and interpret an artificial agent’s behavior when it closely resembles that of a human being. Our results support the idea that including subtle hints of human-likeness in artificial agents’ behaviors would ease communication between them and their human counterparts during interactive scenarios.
Article
Full-text available
What is the key to successful interaction? Is it sufficient to represent a common goal, or does the way our partner achieves that goal count as well? How do we react when our partner misbehaves? We used a turn-taking music-like task requiring participants to play sequences of notes together with a partner, and we investigated how people adapt to a partner’s error that violates their expectations. Errors consisted of either playing a wrong note of a sequence that the agents were playing together (thus preventing the achievement of the joint goal) or playing the expected note with an unexpected action. In both cases, we found post-error slowing and inaccuracy suggesting the participants’ implicit tendency to correct the partner’s error and produce the action that the partner should have done. We argue that these “joint” monitoring processes depend on the motor predictions made within a (dyadic) motor plan and may represent a basic mechanism for mutual support in motor interactions.
Article
Full-text available
In the present study, we examine how person categorization conveyed by the combination of multiple cues modulates joint attention. In three experiments, we tested the combined effect of age, sex, and social status on gaze-following behaviour and pro-social attitudes. In Experiments 1 and 2, young adults were required to perform an instructed saccade towards left or right targets while viewing a to-be-ignored distracting face (female or male) gazing left or right, which could belong to a young, middle-aged, or elderly adult of high or low social status. Social status was manipulated by semantic knowledge (Experiment 1) or through visual appearance (Experiment 2). Results showed a clear combined effect of person perception cues on joint attention (JA). Specifically, age and sex cues interacted with social status information depending on the modality through which it was conveyed. In Experiment 3, we further investigated our results by testing whether the identities used in Experiments 1 and 2 triggered different pro-social behaviour. The results of Experiment 3 showed that the identities found to be more distracting in Experiments 1 and 2 were also perceived as more in need and prompted helping behaviour. Taken together, our evidence shows a combinatorial effect of age, sex, and social status in modulating gaze-following behaviour, highlighting a complex and dynamic interplay between person categorization and joint attention.
Article
Full-text available
The present study aimed to examine event-related potentials (ERPs) of action planning and outcome monitoring in human-robot interaction. To this end, participants were instructed to perform costly actions (i.e. losing points) to stop a balloon from inflating and to prevent its explosion. They performed the task alone (individual condition) or with a robot (joint condition). Similar to findings from human-human interactions, results showed that action planning was affected by the presence of another agent, a robot in this case. Specifically, the early readiness potential (eRP) amplitude was larger in the joint than in the individual condition. The presence of the robot also affected outcome perception and monitoring. Our results showed that the P1/N1 complex was suppressed in the joint, compared to the individual, condition when the worst outcome was expected, suggesting that the presence of the robot affects attention allocation to negative outcomes of one's own actions. Similarly, results showed that larger losses elicited smaller feedback-related negativity (FRN) in the joint than in the individual condition. Taken together, our results indicate that the social presence of a robot may influence the way we plan our actions and the way we monitor their consequences. Implications of the study for the human-robot interaction field are discussed.
Article
Full-text available
Social signals, such as changes in gaze direction, are essential cues to predict others’ mental states and behaviors (i.e., mentalizing). Studies show that humans can mentalize with nonhuman agents when they perceive a mind in them (i.e., mind perception). Robots that physically and/or behaviorally resemble humans likely trigger mind perception, which enhances the relevance of social cues and improves social-cognitive performance. The current experiments examine whether the effect of physical and behavioral influencers of mind perception on social-cognitive processing is modulated by the lifelikeness of a social interaction. Participants interacted with robots of varying degrees of physical (humanlike vs. robot-like) and behavioral (reliable vs. random) human-likeness while the lifelikeness of a social attention task was manipulated across five experiments. The first four experiments manipulated lifelikeness via the physical realism of the robot images (Study 1 and 2), the biological plausibility of the social signals (Study 3), and the plausibility of the social context (Study 4). They showed that humanlike behavior affected social attention whereas appearance affected mind perception ratings. However, when the lifelikeness of the interaction was increased by using videos of a human and a robot sending the social cues in a realistic environment (Study 5), social attention mechanisms were affected both by physical appearance and behavioral features, while mind perception ratings were mainly affected by physical appearance. This indicates that in order to understand the effect of physical and behavioral features on social cognition, paradigms should be used that adequately simulate the lifelikeness of social interactions.
Article
Full-text available
As the field of social robotics has been dynamically growing and expanding over various areas of research and application, in which robots can be of assistance and companionship for humans, this paper offers a different perspective on a role that social robots can also play, namely the role of informing us about flexibility of human mechanisms of social cognition. The paper focuses on studies in which robots have been used as a new type of “stimuli” in psychological experiments to examine whether similar mechanisms of social cognition would be activated in interaction with a robot, as would be elicited in interaction with another human. Analysing studies in which a direct comparison has been made between a robot and a human agent, the paper examines whether for robot agents, the brain re-uses the same mechanisms that have been developed for interaction with other humans in terms of perception, action representation, attention and higher-order social cognition. Based on this analysis, the paper concludes that the human socio-cognitive mechanisms, in adult brains, are sufficiently flexible to be re-used for robotic agents, at least for those that have some level of resemblance to humans.
Article
Full-text available
Sensorimotor communication is a form of communication instantiated through body movements that are guided by both instrumental, goal-directed intentions and communicative, social intentions. Depending on the social interaction context, sensorimotor communication can serve different functions. This article aims to disentangle three of these functions: (a) an informing function of body movements, to highlight action intentions for an observer; (b) a coordinating function of body movements, to facilitate real-time action prediction in joint action; and (c) a performing function of body movements, to elicit emotional or aesthetic experiences in an audience. We provide examples of research addressing these different functions as well as some influencing factors, relating to individual differences, task characteristics, and situational demands. The article concludes by discussing the benefits of a closer dialog between separate lines of research on sensorimotor communication across different social contexts.
Article
Full-text available
A wealth of research in recent decades has investigated the effects of various forms of coordination upon prosocial attitudes and behavior. To structure and constrain this research, we provide a framework within which to distinguish and interrelate different hypotheses about the psychological mechanisms underpinning various prosocial effects of various forms of coordination. To this end, we introduce a set of definitions and distinctions that can be used to tease apart various forms of prosociality and coordination. We then identify a range of psychological mechanisms that may underpin the effects of coordination upon prosociality. We show that different hypotheses about the underlying psychological mechanisms motivate different predictions about the effects of various forms of coordination in different circumstances.
Article
Full-text available
In daily social interactions, we need to be able to navigate efficiently through our social environment. According to Dennett (1971), explaining and predicting others’ behavior with reference to mental states (adopting the intentional stance) allows efficient social interaction. Today we also routinely interact with artificial agents: from Apple’s Siri to GPS navigation systems. In the near future, we might start casually interacting with robots. This paper addresses the question of whether adopting the intentional stance can also occur with respect to artificial agents. We propose a new tool to explore if people adopt the intentional stance toward an artificial agent (humanoid robot). The tool consists in a questionnaire that probes participants’ stance by requiring them to choose the likelihood of an explanation (mentalistic vs. mechanistic) of a behavior of a robot iCub depicted in a naturalistic scenario (a sequence of photographs). The results of the first study conducted with this questionnaire showed that although the explanations were somewhat biased toward the mechanistic stance, a substantial number of mentalistic explanations were also given. This suggests that it is possible to induce adoption of the intentional stance toward artificial agents, at least in some contexts.
Article
Full-text available
What mechanisms distinguish interactive from non-interactive actions? To answer this question we tested participants while they took turns playing music with a virtual partner: in the interactive joint action condition, the participants played a melody together with their partner by grasping (C note) or pressing (G note) a cube-shaped instrument, alternating in playing one note each. In the non-interactive control condition, players' behavior was not guided by a shared melody, so that the partner's actions and notes were irrelevant to the participant. In both conditions, the participant's and partner's actions were physically congruent (e.g., grasp-grasp) or incongruent (e.g., grasp-point), and the partner's association between actions and notes was coherent with the participant's or reversed. Performance in the non-interactive condition was only affected by physical incongruence, whereas joint action was only affected when the partner's action-note associations were reversed. This shows that task interactivity shapes the sensorimotor coding of others' behaviors, and that joint action is based on active prediction of the partner's action effects rather than on passive action imitation. We suggest that such predictions are based on Dyadic Motor Plans that represent both the agent's and the partner's contributions to the interaction goal, like playing a melody together.
Article
Full-text available
The world surrounding us has become increasingly technological. Nowadays, the influence of automation is perceived in every aspect of everyday life. While automation makes some aspects of life easier, faster and safer, empirical data also suggest that it can have negative performance and safety consequences for human operators, a set of difficulties called the "out-of-the-loop" (OOTL) performance problem. However, after decades of research, this phenomenon remains difficult to grasp and counter. In this paper, we propose a neuroergonomics approach to address this phenomenon. We first describe how automation impacts human operators. Then, we present the current knowledge relative to the OOTL phenomenon. Finally, we describe how recent insights from neuroscience can help characterize, quantify and compensate for this phenomenon.
Article
Full-text available
Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user’s needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate these skills, humans would eventually accept them as social companions. We argue that the best way to achieve this is using a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye-tracking, electroencephalography, or functional near-infrared spectroscopy embedded in interactive human–robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them in a way that they are perceived as intentional agents that activate areas in the human brain involved in social-cognitive processing. We first review literature related to social-cognitive processes and mechanisms involved in human–human interactions, and highlight the importance of perceiving others as intentional agents to activate these social brain areas. We then discuss how attribution of intentionality can positively affect human–robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and by (b) enhancing performance on joint human–robot tasks. 
Lastly, we describe circumstances under which attribution of intentionality to robot agents might be disadvantageous, and discuss challenges associated with designing social robots that are inspired by neuroscientific principles.
Article
Full-text available
In joint action, multiple people coordinate their actions to perform a task together. This often requires precise temporal and spatial coordination. How do co-actors achieve this? How do they coordinate their actions toward a shared task goal? Here, we provide an overview of the mental representations involved in joint action, discuss how co-actors share sensorimotor information and what general mechanisms support coordination with others. By deliberately extending the review to aspects such as the cultural context in which a joint action takes place, we pay tribute to the complex and variable nature of this social phenomenon.
Conference Paper
Full-text available
Gaze following occurs automatically in social interactions, but the degree to which we follow gaze strongly depends on whether an agent is believed to have a mind and is therefore socially relevant for the interaction. The current paper investigates whether the social relevance of a robot can be manipulated via its physical appearance and whether there is a linear relationship between appearance and gaze following in a counter-predictive gaze cueing paradigm (i.e., target appears with a high likelihood opposite of the gazed-at location). Results show that while robots are capable of inducing gaze following, the degree to which gaze is passively followed does not linearly decrease with physical human-likeness. Rather, the relationship between appearance and gaze following is best described by an inverted u-shaped pattern, with automatic cueing effects (i.e., attending to the cued location) for agents of mixed human-likeness and reversed cueing effects (i.e., attending to the predicted location) for agents of either full human-likeness (100% human) or full robot-likeness (100% robot). The results are interpreted with regard to cognitive resource theory and design implications are discussed.
Article
Full-text available
Gaze-following behaviour is considered crucial for social interactions which are influenced by social similarity. We investigated whether the degree of similarity, as indicated by the perceived age of another person, can modulate gaze following. Participants of three different age-groups (18-25; 35-45; over 65) performed an eye movement (a saccade) towards an instructed target while ignoring the gaze-shift of distracters of different age-ranges (6-10; 18-25; 35-45; over 70). The results show that gaze following was modulated by the distracter face age only for young adults. Particularly, the over 70 year-old distracters exerted the least interference effect. The distracters of a similar age-range as the young adults (18-25; 35-45) had the most effect, indicating a blurred own-age bias (OAB) only for the young age group. These findings suggest that face age can modulate gaze following, but this modulation could be due to factors other than just OAB (e.g., familiarity).
Article
Full-text available
Although the importance of communication is recognized in several disciplines, it is rarely studied in the context of online social interactions and joint actions. During online joint actions, language and gesture are often insufficient, and humans typically use non-verbal, sensorimotor forms of communication to send coordination signals. For example, when playing volleyball, an athlete can exaggerate her movements to signal her intentions to her teammates (say, a pass to the right) or to feint an adversary. Similarly, a person transporting a table together with a co-actor can push the table in a certain direction to signal where and when he intends to place it. Other examples of "signaling" are over-articulating in noisy environments and over-emphasizing vowels in child-directed speech. In all these examples, humans intentionally modify their action kinematics to make their goals easier to disambiguate. At present, no formal theory of these forms of sensorimotor communication and signaling exists. We present one such theory, which describes signaling as a combination of a pragmatic and a communicative action and explains how it simplifies coordination in online social interactions. We cast signaling within a "joint action optimization" framework in which co-actors optimize the success of their interaction and joint goals rather than only their part of the joint action. The decision of whether and how much to signal requires solving a trade-off between the costs of modifying one's behavior and the benefits in terms of interaction success. Signaling is thus an intentional strategy that supports social interactions; it acts in concert with automatic mechanisms of resonance, prediction, and imitation, especially when the context makes actions and intentions ambiguous and difficult to read. Our theory suggests that communication dynamics should be studied within theories of coordination and interaction rather than only in terms of the maximization of information transmission.
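The cost-benefit trade-off at the core of this signaling account can be illustrated with a toy decision rule. This is only a sketch of the general idea, not the authors' formal model; all function names and quantities below are illustrative assumptions.

```python
def signaling_gain(p_plain, p_signaled, joint_reward, signaling_cost):
    """Expected gain from signaling (toy model): the improvement in the
    odds of joint success, minus the kinematic cost of exaggerating
    the movement. All parameters are hypothetical."""
    u_plain = p_plain * joint_reward
    u_signaled = p_signaled * joint_reward - signaling_cost
    return u_signaled - u_plain

def should_signal(p_plain, p_signaled, joint_reward, signaling_cost):
    """Signal only when the expected gain outweighs the cost."""
    return signaling_gain(p_plain, p_signaled, joint_reward, signaling_cost) > 0
```

On this sketch, signaling pays off mainly when the context is ambiguous (low `p_plain`); when the partner can already predict the action, the extra kinematic cost is not worth incurring.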
Article
Full-text available
Performing online complementary motor adjustments is quintessential to joint actions since it allows interacting people to coordinate efficiently and achieve a common goal. We sought to determine whether, during dyadic interactions, signaling strategies and simulative processes are differentially implemented on the basis of the interactional role played by each partner. To this aim, we recorded the kinematics of the right hand of pairs of individuals who were asked to grasp as synchronously as possible a bottle-shaped object according to an imitative or complementary action schedule. Task requirements implied an asymmetric role assignment so that participants performed the task acting either as (1) Leader (i.e., receiving auditory information regarding the goal of the task with indications about where to grasp the object) or (2) Follower (i.e., receiving instructions to coordinate their movements with their partner's by performing imitative or complementary actions). Results showed that, when acting as Leader, participants used signaling strategies to enhance the predictability of their movements. In particular, they selectively emphasized kinematic parameters and reduced movement variability to provide the partner with implicit cues regarding the action to be jointly performed. Thus, Leaders make their movements more "communicative" even when not explicitly instructed to do so. Moreover, only when acting in the role of Follower did participants tend to imitate the Leader, even in complementary actions where imitation is detrimental to joint performance. Our results show that mimicking and signaling are implemented in joint actions according to the interactional role of the agent, which in turn is reflected in the kinematics of each partner.
Article
Full-text available
Studies demonstrate that elite athletes are able to extract kinematic information from observed domain-specific actions to predict their future course. Little is known, however, about the perceptuo-motor processes and neural correlates of athletes' ability to predict fooling actions. Combining psychophysics and transcranial magnetic stimulation, we explored the impact of motor and perceptual expertise on the ability to predict the fate of observed actual or fake soccer penalty kicks. We manipulated the congruence between the model's body kinematics and the subsequent ball trajectory and investigated the prediction performance and cortico-spinal reactivity of expert kickers, goalkeepers, and novices. Kickers and goalkeepers outperformed novices by anticipating the actual kick direction from the model's initial body movements. However, kickers were more often fooled than goalkeepers and novices in cases of incongruent actions. Congruent and incongruent actions engendered a comparable facilitation of kickers' lower limb motor representation, but their neurophysiological response correlated with their greater susceptibility to being fooled. Moreover, when compared with actual actions, motor facilitation for incongruent actions was lower among goalkeepers and higher among novices. Thus, responding to fooling actions requires updating simulative motor representations of others' actions and is facilitated by visual rather than motor expertise.
Article
Full-text available
We investigated the possibility that mothers modify their infant-directed actions in ways that might assist infants’ processing of human action. In a between-subjects design, 51 mothers demonstrated the properties of five novel objects either to their infant (age 6–8 months or 11–13 months) or to an adult partner. As predicted, demonstrations to infants were higher in interactiveness, enthusiasm, proximity to partner, range of motion, repetitiveness and simplicity, indicating that mothers indeed modify their infant-directed actions in ways that likely maintain infants’ attention and highlight the structure and meaning of action. The findings demonstrate that ‘motherese’ is broader in scope than previously recognized, including modifications to action as well as language.
Article
Full-text available
We report on the iCub, a humanoid robot for research in embodied cognition. At 104 cm tall, the iCub is the size of a three-and-a-half-year-old child. It will be able to crawl on all fours and sit up to manipulate objects. Its hands have been designed to support sophisticated manipulation skills. The iCub is distributed as Open Source under the GPL/FDL licenses. The entire design is available for download from the project homepage and repository (http://www.robotcub.org). In the following, we concentrate on the description of the hardware and software systems. The scientific objectives of the project and its philosophical underpinning are described extensively elsewhere [1].
Article
Full-text available
Why does chanting, drumming, or dancing together make people feel united? Here we investigate the neural mechanisms underlying interpersonal synchrony and its subsequent effects on prosocial behavior among synchronized individuals. We hypothesized that areas of the brain associated with the processing of reward would be active when individuals experience synchrony during drumming, and that these reward signals would increase prosocial behavior toward the synchronous drum partner. Eighteen female non-musicians were scanned with functional magnetic resonance imaging while they drummed a rhythm, in alternating blocks, with two different experimenters: one drumming in synchrony and the other out of synchrony relative to the participant. In the last scanning run, which served as the experimental manipulation for the subsequent prosocial behavioral test, one of the experimenters drummed in synchrony with half of the participants and out of synchrony with the other half. After scanning, this experimenter "accidentally" dropped eight pencils, and the number of pencils collected by the participants was used as a measure of prosocial commitment. Results revealed that participants who mastered the novel rhythm easily before scanning showed increased activity in the caudate during synchronous drumming. The same area also responded to monetary reward in a localizer task with the same participants. Caudate activity during synchronous drumming also predicted the number of pencils participants later collected to help the synchronous experimenter of the manipulation run. In addition, participants collected more pencils to help the experimenter when she had drummed in synchrony than out of synchrony during the manipulation run. By showing an overlap between the areas activated by synchronized drumming and by monetary reward, our findings suggest that interpersonal synchrony is related to the brain's reward system.
Article
Full-text available
Performing joint actions often requires precise temporal coordination of individual actions. The present study investigated how people coordinate their actions at discrete points in time when continuous or rhythmic information about others' actions is not available. In particular, we tested the hypothesis that making oneself predictable is used as a coordination strategy. Pairs of participants were instructed to coordinate key presses in a two-choice reaction time task, either responding in synchrony (Experiments 1 and 2) or in close temporal succession (Experiment 3). Across all experiments, we found that coactors reduced the variability of their actions in the joint context compared with the same task performed individually. Correlation analyses indicated that the less variable the actions were, the better was interpersonal coordination. The relation between reduced variability and improved coordination performance was not observed when pairs of participants performed independent tasks next to each other without intending to coordinate. These findings support the claim that reducing variability is used as a coordination strategy to achieve predictability. Identifying coordination strategies contributes to the understanding of the mechanisms involved in real-time coordination.
Article
Full-text available
The tendency to mimic and synchronize with others is well established. Although mimicry has been shown to lead to affiliation between co-actors, the effect of interpersonal synchrony on affiliation remains an open question. The authors investigated the relationship by having participants match finger movements with a visual moving metronome. In Experiment 1, affiliation ratings were examined based on the extent to which participants tapped in synchrony with the experimenter. In Experiment 2, synchrony was manipulated. Affiliation ratings were compared for an experimenter who either (a) tapped to a metronome that was synchronous to the participant's metronome, (b) tapped to a metronome that was asynchronous, or (c) did not tap. As hypothesized, in both studies, the degree of synchrony predicted subsequent affiliation ratings. Experiment 3 found that the affiliative effects were unique to interpersonal synchrony.
Article
Full-text available
Video recordings of naturally occurring interactions in England, France, the Netherlands, Italy, Greece, Scotland, and Ireland were coded and analyzed to examine the effects of culture, gender, and age on interpersonal distance, body orientation, and touch. Results partially supported the expected differences between contact cultures of southern Europe and noncontact cultures of northern Europe with respect to touch: more touch was observed among Italian and Greek dyads than among English, French, and Dutch dyads. In addition, an interaction between age and gender for body orientation suggested opposite developmental trends for mixed-sex dyads and male dyads. Whereas mixed dyads tended to maintain less direct orientations as they aged, male dyads maintained more direct orientations.
Article
Full-text available
Perceiving other people's behaviors activates imitative motor plans in the perceiver, but there is disagreement as to the function of this activation. In contrast to other recent proposals (e.g., that it subserves overt imitation, identification and understanding of actions, or working memory), here it is argued that imitative motor activation feeds back into the perceptual processing of conspecifics' behaviors, generating top-down expectations and predictions of the unfolding action. Furthermore, this account incorporates recent ideas about emulators in the brain-mental simulations that run in parallel to the external events they simulate-to provide a mechanism by which motoric involvement could contribute to perception. Evidence from a variety of literatures is brought to bear to support this account of perceiving human body movement.
Article
Full-text available
Grounded cognition rejects traditional views that cognition is computation on amodal symbols in a modular system, independent of the brain's modal systems for perception, action, and introspection. Instead, grounded cognition proposes that modal simulations, bodily states, and situated action underlie cognition. Accumulating behavioral and neural evidence supporting this view is reviewed from research on perception, memory, knowledge, language, thought, social cognition, and development. Theories of grounded cognition are also reviewed, as are origins of the area and common misperceptions of it. Theoretical, empirical, and methodological issues are raised whose future treatment is likely to affect the growth and impact of grounded cognition.
Article
Humans have a striking ability to coordinate their actions with each other to achieve joint goals. The tight interpersonal coordination that characterizes joint actions is achieved through processes that help with preparing for joint action as well as processes that are active while joint actions are being performed. To prepare for joint action, partners form representations of each other’s actions and tasks and the relation between them. This enables them to predict each other’s upcoming actions, which, in turn, facilitates coordination. While performing joint actions, partners’ coordination is maintained by (a) monitoring whether individual and joint outcomes correspond to what was planned, (b) predicting partners’ action parameters on the basis of familiarity with their individual actions, (c) communicating task-relevant information unknown to partners in an action-based fashion, and (d) relying on coupling of predictions through dense perceptual-information flow between coactors. The next challenge for the field of joint action is to generate an integrated perspective that links coordination mechanisms to normative, evolutionary, and communicative frameworks.
Chapter
The exploitation of Social Assistive Robotics (SAR) will bring about the emergence of a new category of users, namely experts in clinical rehabilitation who do not have a background in robotics. The first aim of the present study was to address individuals' attitudes towards robots within this new category of users. The secondary aim was to investigate whether repeated interactions with the robot affect such attitudes. We therefore evaluated both explicit and implicit attitudes towards robots in a group of therapists rehabilitating children with neurodevelopmental disorders. The evaluation took place before they started a SAR intervention (T0), while it was ongoing (T1), and at its end (T2). Explicit attitudes were evaluated using self-report questionnaires, whereas implicit attitudes were operationalized as the perception of the robot as a social partner and as implicit associations regarding the concept of "robot". Results showed that older age and previous experience with robots were associated with negative attitudes towards robots and less willingness to perceive the robot as a social agent. Explicit measures did not vary across time, whereas implicit measures were modulated by increased exposure to robots: the more clinicians were exposed to the robot, the more the robot was treated as a social partner. Moreover, users' memory association between the concept of a robot and mechanical attributes weakened across evaluations. Our results suggest that increased exposure to robots modulates implicit but not explicit attitudes.
Article
In our daily lives, we need to predict and understand others' behavior in order to navigate through our social environment. Predictions concerning other humans' behavior usually refer to their mental states, such as beliefs or intentions. Such a predictive strategy is called 'adoption of the intentional stance.' In this paper, we review literature related to the concept of the intentional stance from the perspectives of philosophy, psychology, human development, culture, and human-robot interaction. We propose that adopting the intentional stance might be a pivotal factor in facilitating social attunement with artificial agents. The paper first reviews the theoretical considerations regarding the intentional stance and examines literature related to its development across the life span. Subsequently, we discuss cultural norms as grounded in the intentional stance and, finally, focus on the issue of adopting the intentional stance toward artificial agents, such as humanoid robots. At the dawn of the artificial intelligence era, the question of how – and also when – we predict and explain robots' behavior by referring to mental states is of high interest. The paper concludes with a discussion of the ethical consequences of adopting the intentional stance toward robots and sketches future directions for research on this topic.
Article
In the presence of others, the sense of agency (SoA), i.e., the perceived relationship between our own actions and external events, is reduced. The present study investigated whether this reduction of SoA is observed in human-robot interaction, as it is in human-human interaction. To this end, we tested SoA when people interacted with a robot (Experiment 1), with a passive, non-agentic air pump (Experiment 2), or with both a robot and a human being (Experiment 3). Participants were asked to rate the perceived control they felt over the outcome of their action while performing a diffusion-of-responsibility task. Results showed that the intentional agency attributed to the artificial entity differentially affected performance and the perceived SoA over the outcome of the task. Experiment 1 showed that, when participants successfully performed an action, they rated SoA over the outcome as lower in trials in which the robot was also able to act (but did not), compared with trials in which they performed the task alone. This did not occur in Experiment 2, where the artificial entity was an air pump that had the same influence on the task as the robot, but in a passive manner, and thus lacked intentional agency. Results of Experiment 3 showed that SoA was reduced similarly for the human and robot agents, thereby indicating that the attribution of intentional agency plays a crucial role in the reduction of SoA. Together, our results suggest that interacting with robotic agents affects SoA in much the same way as interacting with other humans, but differently from interacting with non-agentic mechanical devices. This has important implications for applied social robotics, where a subjective decrease in SoA could have negative consequences, for example in robot-assisted care in hospitals.
Article
Previous research has demonstrated that people can reliably distinguish between actions with different instrumental intentions on the basis of the kinematic signatures of these actions (Cavallo, Koul, Ansuini, Capozzi, & Becchio, 2016). It has also been demonstrated that different informative intentions result in distinct action kinematics (McEllin, Knoblich, & Sebanz, 2017). However, it is unknown whether people can discriminate between instrumental actions and actions performed with an informative intention, and between actions performed with different informative intentions, on the basis of kinematic cues produced in these actions. We addressed these questions using a visual discrimination paradigm in which participants were presented with point light animations of an actor playing a virtual xylophone. We systematically manipulated and amplified kinematic parameters that have been shown to reflect different informative intentions. We found that participants reliably used both spatial and temporal cues in order to discriminate between instrumental actions and actions performed with an informative intention, and between actions performed with different informative intentions. Our findings indicate that the informative cues produced in joint action and teaching go beyond serving a general informative purpose and can be used to infer specific informative intentions.
Article
The present study investigated whether the gaze-cuing effect (i.e., the tendency for observers to respond faster to targets in locations cued by others' gaze direction than to non-cued targets) is modulated by the type of relationship (i.e., cooperative or competitive) established during a previous interaction with the cuing face. In two experiments, participants played a series of single-shot games of a modified two-choice Prisoner's Dilemma against eight simulated contenders. They were shown fictitious feedback indicating whether each opponent chose to cooperate or compete with them. The opponents' faces were then used as stimuli in a standard gaze-cuing task. In Experiment 1, females classified as average in competitiveness were tested; in Experiment 2, females classified as high or low in competitiveness were tested. We found that only in females classified as low or average in competitiveness was the gaze-cuing effect greater for competitive contenders than for cooperative contenders. These findings suggest that competitive opponents represent a relevant source of information within the social environment and that female observers with low and average levels of competitiveness cannot help but keep their eyes on them.
Article
This chapter is concerned with some of the issues involved in understanding how perception contributes to the control of actions. Roughly speaking, the term action refers to any meaningful segment of an organism's intercourse with its environment. Two important features of this preliminary definition can be brought out more clearly when "actions" are contrasted with "responses" and "movements". Unlike response-centered approaches to psychology, which consider the organism's activity more or less determined by the actual stimulus information, the action approach emphasizes intentional control as being simultaneous with (or even prior to) informational control of activity, assuming that intentional processes fix the rules for the selection and use of stimulus information (Heuer & Prinz, 1987; Neumann & Prinz, 1987). Unlike movement-centered approaches, which describe the organism's activity in terms of the dynamics of muscular contraction patterns and the kinematics of the resulting body movements, the action approach stresses the environmental consequences that go along with these bodily events, contending that meaningful interactions with the environment, rather than movements per se, should be considered the effective functional units of activity (Fowler & Turvey, 1982; Neisser, 1985).
Article
We investigated whether people monitor the outcomes of their own and their partners' individual actions as well as the outcome of their combined actions when performing joint actions together. Pairs of pianists memorized both parts of a piano duet. Each pianist then performed one part while their partner performed the other; EEG was recorded from both. Auditory outcomes (pitches) associated with keystrokes produced by the pianists were occasionally altered in a way that either did or did not affect the joint auditory outcome (i.e., the harmony of a chord produced by the two pianists' combined pitches). Altered auditory outcomes elicited a feedback-related negativity whether they occurred in the pianist's own part or the partner's part, and whether they affected individual or joint action outcomes. Altered auditory outcomes also elicited a P300 whose amplitude was larger when the alteration affected the joint outcome compared with individual outcomes and when the alteration affected the pianist's own part compared with the partner's part. Thus, musicians engaged in joint actions monitor their own and their partner's actions as well as their combined action outcomes, while at the same time maintaining a distinction between their own and others' actions and between individual and joint outcomes.
Article
Adaptive action is the function of cognition. It is constrained by the properties of evolved brains and bodies. An embodied perspective on social psychology examines how biological constraints give expression to human function in socially situated contexts. Key contributions in social psychology have highlighted the interface between the body and cognition, but theoretical development in social psychology and embodiment research remains largely disconnected. The current special issue reflects on recent developments in embodiment research. Commentaries from complementary perspectives connect them to social psychological theorizing. The contributions focus on the situatedness of social cognition in concrete interactions, and on the implementation of cognitive processes in modal instead of amodal representations. The proposed perspectives are highly compatible, suggesting that embodiment can serve as a unifying perspective for psychology.
Chapter
Cooperative and competitive processes result primarily from two basic types of goal interdependence: positive (where the goals are linked in such a way that the amount or probability of a person's goal attainment is positively correlated with the amount or probability of another obtaining his goal) and negative (where the goals are linked in such a way that the amount or probability of goal attainment is negatively correlated with the amount or probability of the other's goal attainment). To put it colloquially, if you're positively linked with another, then you sink and swim together; with negative linkage, if the other sinks, you swim, and if the other swims, you sink.
Article
What kinds of processes and representations make joint action possible? In this paper, we suggest a minimal architecture for joint action that focuses on representations, action monitoring and action prediction processes, as well as ways of simplifying coordination. The architecture spells out minimal requirements for an individual agent to engage in a joint action. We discuss existing evidence in support of the architecture as well as open questions that remain to be empirically addressed. In addition, we suggest possible interfaces between the minimal architecture and other approaches to joint action. The minimal architecture has implications for theorising about the emergence of joint action, for human-machine interaction, and for understanding how coordination can be facilitated by exploiting relations between multiple agents' actions and between actions and the environment.
Article
Adults modify their communication when interacting with infants, and these modifications have been tied to infant attention. However, the effect of infant-directed action on infant behavior is understudied. This study examined whether infant-directed action affects infants, specifically their attention to and exploratory behaviors with objects. Forty-eight 8- to 10-month-old infants and their caregivers participated in a laboratory session during which caregivers demonstrated objects to infants using infant-directed action. Results indicated that variation in amplitude and repetition was tied to differences in infant attention, and that varying levels of repetition were tied to differences in object exploration.
Article
The aim of the present study was to investigate the effects of communicative intention on action. In Experiment 1, participants were requested to reach towards an object, grasp it, and either simply lift it (individual condition) or lift it with the intent to communicate a meaning to a partner (communicative condition). Movement kinematics were recorded using a three-dimensional motion analysis system. The results indicate that kinematics were sensitive to communicative intention. Although the to-be-grasped object remained the same, movements in the communicative condition were characterized by a kinematic pattern that differed from the one obtained in the individual condition. These findings were confirmed in a subsequent experiment in which the communicative condition was compared to a control condition in which communicative exchange was prevented. Results are discussed in terms of cognitive pragmatics and current knowledge of how social behavior shapes action kinematics.
Article
Understanding the goals or intentions of other people requires a broad range of evaluative processes, including the decoding of biological motion, knowledge about object properties, and abilities for recognizing task space requirements and social contexts. It is becoming increasingly evident that some of this decoding is based in part on the simulation of other people's behavior within our own nervous system. This review focuses on aspects of action understanding that rely on embodied cognition, that is, knowledge of the body and how it interacts with the world. This form of cognition provides an essential knowledge base from which action simulation can be used to decode at least some actions performed by others. Recent functional imaging studies of action understanding are interpreted with the goal of defining the conditions under which simulation operations occur and how this relates to other constructs, including top-down versus bottom-up processing and the functional distinctions between action observation and social networks. From this it is argued that action understanding emerges from the engagement of highly flexible computational hierarchies driven by simulation, object properties, social context, and kinematic constraints, where the hierarchy is driven by task structure rather than functional or strictly anatomical rules.
Article
Since the mid-1990s, research on interpersonal acceptance and exclusion has proliferated, and several paradigms have evolved that vary in their efficiency, context specificity, and strength. This article describes one such paradigm, Cyberball, which is an ostensibly online ball-tossing game that participants believe they are playing with two or three others. In fact, the "others" are controlled by the programmer. The course and speed of the game, the frequency of inclusion, player information, and iconic representation are all options the researcher can regulate. The game was designed to manipulate independent variables (e.g., ostracism) but can also be used as a dependent measure of prejudice and discrimination. The game works on both PC and Macintosh (OS X) platforms and is freely available.
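To give a rough feel for how the paradigm's inclusion parameter operates, the core manipulation can be sketched in a few lines. This is a toy simulation, not the actual Cyberball software; the function name, defaults, and return value are assumptions made for illustration.

```python
import random

def simulate_cyberball(n_throws=30, p_include=1/3, seed=None):
    """Toy Cyberball run: on each toss, the programmer-controlled players
    throw the ball to the participant with probability p_include.
    Returns the proportion of tosses the participant received."""
    rng = random.Random(seed)
    received = sum(rng.random() < p_include for _ in range(n_throws))
    return received / n_throws
```

Setting `p_include` near 1/3 would model fair inclusion among three players, whereas values near zero would model ostracism, mirroring how the real program lets the researcher regulate the frequency of inclusion.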
Socially assistive robotics as a tool to promote socio-cognitive development: Advantages, limitations, and future perspectives
  • Ciardo