"Qiming Shen et al. studied motor interference and motor coordination in human-humanoid interactions for different types of visual stimuli (robot, pendulum, and moving dot). The authors concluded that participants tended to synchronize with agents having a better appearance, suggesting that a robot perceived as close as possible to a social entity may facilitate human-robot interaction."
ABSTRACT: Future robots must co-exist and directly interact with human beings. Designing these agents implies solving hard problems linked to human-robot interaction tasks: for instance, how a robot can choose an interacting partner among various agents, and how a robot locates regions of interest in its visual field. Studies in neurobiology and psychology collectively identify synchrony as an indispensable parameter for social interaction. We assumed that human-robot interaction could be initiated by synchrony detection. In this paper, we present a developmental approach for analyzing unintentional synchronization in human-robot interaction. Using our neural network model, the robot learns its inner dynamics during a babbling step by associating its own motor activities (oscillators) with the visual stimulus induced by its own motion. After learning, the robot is capable of choosing an interacting agent and of localizing the spatial position of its preferred partner by synchrony detection.
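The partner-selection idea in this abstract can be illustrated with a minimal sketch (not the authors' neural network model): each candidate visual signal is scored by how phase-locked it is to the robot's own motor oscillator, and the most synchronized candidate is chosen as the interaction partner. The function names and the toy signals below are hypothetical.

```python
# Hypothetical sketch of partner selection by synchrony detection:
# score each candidate signal's phase-locking with the robot's own
# oscillator and pick the best-locked one.
import math

def phase_locking(phases_a, phases_b):
    """Mean resultant length of the phase difference between two
    equal-length sequences of instantaneous phases (radians).
    Returns ~1.0 for a constant phase relation, ~0.0 for unrelated signals."""
    re = sum(math.cos(a - b) for a, b in zip(phases_a, phases_b))
    im = sum(math.sin(a - b) for a, b in zip(phases_a, phases_b))
    return math.hypot(re, im) / len(phases_a)

def choose_partner(own_phases, candidates):
    """Index of the candidate most synchronized with the robot's oscillator."""
    scores = [phase_locking(own_phases, c) for c in candidates]
    return max(range(len(scores)), key=scores.__getitem__)

# Toy demo: robot oscillates at 1 Hz; one candidate keeps a constant
# phase lag (synchronized), the other runs at a different frequency.
t = [i * 0.01 for i in range(500)]
own = [2 * math.pi * 1.0 * ti for ti in t]
locked = [2 * math.pi * 1.0 * ti + 0.3 for ti in t]   # constant lag -> high locking
drifting = [2 * math.pi * 1.7 * ti for ti in t]       # drifting phase -> low locking
print(choose_partner(own, [locked, drifting]))        # -> 0
```

In practice the instantaneous phase of a real visual signal would have to be estimated first (e.g. from an analytic signal), but the selection step reduces to this comparison of phase-locking scores.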
"None of these previous studies was able to disentangle whether biological motion is the only requirement for MI or whether other morphological similarities between agent and observer must also be present. A recent study investigating motor coordination proposed that, rather than any single feature, the overall perception of the agent as a “social entity”, e.g., elicited by top-down information, is the critical factor."
ABSTRACT: Recent findings in neuroscience suggest an overlap between brain regions involved in the execution of movement and perception of another's movement. This so-called "action-perception coupling" is supposed to serve our ability to automatically infer the goals and intentions of others by internal simulation of their actions. A consequence of this coupling is motor interference (MI), the effect of movement observation on the trajectory of one's own movement. Previous studies emphasized that various features of the observed agent determine the degree of MI, but could not clarify how human-like an agent has to be for its movements to elicit MI and, more importantly, what 'human-like' means in the context of MI. Thus, we investigated in several experiments how different aspects of appearance and motility of the observed agent influence MI. Participants performed arm movements in horizontal and vertical directions while observing videos of a human, a humanoid robot, or an industrial robot arm with either artificial (industrial) or human-like joint configurations. Our results show that, given a human-like joint configuration, MI was elicited by observing arm movements of both humanoid and industrial robots. However, if the joint configuration of the robot did not resemble that of the human arm, MI could no longer be demonstrated. Our findings present evidence for the importance of human-like joint configuration rather than other human-like features for perception-action coupling when observing inanimate agents.
PLoS ONE 06/2012; 7(6):e39637. DOI:10.1371/journal.pone.0039637
ABSTRACT: Physical human-robot interaction has the potential to be useful in a number of domains, but this will depend on how people respond to the robot’s actions. For some domains, such as healthcare, a robot is likely to initiate physical contact with a person’s body. In order to investigate how people respond to this type of interaction, we conducted an experiment with 56 people in which a robotic nurse autonomously touched and wiped each participant’s forearm. On average, participants had a favorable response to the first time the robot touched them. However, we found that the perceived intent of the robot significantly influenced people’s responses. If people believed that the robot intended to clean their arms, the participants tended to respond more favorably than if they believed the robot intended to comfort them, even though the robot’s manipulation behavior was the same. Our results suggest that roboticists should consider this social factor in addition to the mechanics of physical interaction. Surprisingly, we found that participants in our study responded less favorably when given a verbal warning prior to the robot’s actions. In addition to these main results, we present post-hoc analyses of participants’ galvanic skin responses (GSR), open-ended responses, attitudes towards robots, and responses to a second trial.
International Journal of Social Robotics 01/2014; 6(1). DOI:10.1007/s12369-013-0215-x