Interaction Studies

Published by John Benjamins Publishing
Online ISSN: 1572-0381
Print ISSN: 1572-0373
Publications
We introduce a novel paradigm for studying the cognitive processes used by listeners within interactive settings. This paradigm places the talker and the listener in the same physical space, creating opportunities for investigations of attention and comprehension processes taking place during interactive discourse situations. An experiment was conducted to compare results from previous research using videotaped stimuli to those obtained within the live face-to-face task paradigm. A head-worn apparatus briefly displays LEDs at four locations on the talker's face as the talker communicates with the participant. In addition to the primary task of comprehending the speeches, participants perform a secondary light-detection task. In the present experiment, the talker gave non-emotionally-expressive speeches that had been used in past research with videotaped stimuli. Signal detection analysis was employed to determine which areas of the face received the greatest focus of attention. The results replicate previous findings obtained with videotaped stimuli.
 
It is possible for a language to emerge with no direct linguistic history or outside linguistic influence. Al-Sayyid Bedouin Sign Language (ABSL) arose about 70 years ago in a small, insular community with a high incidence of profound prelingual neurosensory deafness. In ABSL, we have been able to identify the beginnings of phonology, morphology, syntax, and prosody. The linguistic elements we find in ABSL are not exclusively holistic, nor are they all compositional, but a combination of both. We do not, however, find in ABSL certain features that have been posited as essential even for a proto-language. ABSL has a highly regular syntax as well as word-internal compounding, also highly regular but quite distinct from syntax in its patterns. ABSL, however, has no discernable word-internal structure of the kind observed in more mature sign languages: no spatially organized morphology and no evident duality of phonological patterning.
 
When perceiving human actions, a robotic assistant needs to direct its computational and sensor resources to the relevant parts of the human action. In previous work (Demiris & Khadhouri, 2006) we introduced HAMMER (Hierarchical Attentive Multiple Models for Execution and Recognition), a computational architecture that forms multiple hypotheses about what the demonstrated task is, and multiple predictions about the forthcoming states of the human action. To confirm their predictions, the hypotheses request information from an attentional mechanism, which allocates the robot's resources as a function of the saliency of the hypotheses. In this paper we augment the attention mechanism with a component that considers the content of the hypotheses' requests with respect to reliability, utility and cost. This content-based attention component further optimises the utilisation of the resources while remaining robust to noise. Such computational mechanisms are important for the development of robotic devices that will rapidly respond to human actions, whether for imitation or collaboration purposes.
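As a rough illustration of this kind of saliency- and content-weighted allocation, the sketch below serves a fixed sensor budget by scoring each hypothesis's request according to its saliency, reliability, utility and cost. The scoring formula, field names and numbers are illustrative assumptions, not the HAMMER implementation.

```python
from dataclasses import dataclass

@dataclass
class Request:
    """A hypothesis's request for sensor resources (illustrative fields)."""
    hypothesis: str
    saliency: float     # current saliency/confidence of the hypothesis
    reliability: float  # expected reliability of the requested information
    utility: float      # how much the information would disambiguate hypotheses
    cost: float         # processing/sensor cost of serving the request

def allocate(requests, budget):
    """Serve the highest-scoring requests until the sensor budget is spent."""
    # Content-based score: weight saliency by the value of the requested
    # information and penalise expensive requests (illustrative formula).
    scored = sorted(requests,
                    key=lambda r: r.saliency * r.reliability * r.utility / r.cost,
                    reverse=True)
    served, spent = [], 0.0
    for r in scored:
        if spent + r.cost <= budget:
            served.append(r.hypothesis)
            spent += r.cost
    return served

if __name__ == "__main__":
    reqs = [Request("reach-for-cup", 0.6, 0.9, 0.8, 1.0),
            Request("wave",          0.3, 0.7, 0.5, 0.5),
            Request("point",         0.5, 0.4, 0.9, 2.0)]
    print(allocate(reqs, budget=1.5))   # -> ['reach-for-cup', 'wave']
```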
 
Scenarios for the emergence or bootstrap of a lexicon involve the repeated interaction between at least two agents who must reach a consensus on how to name N objects using H words. Here we consider minimal models of two types of learning algorithms: cross-situational learning, in which the individuals determine the meaning of a word by looking for something in common across all observed uses of that word, and supervised operant conditioning learning, in which there is strong feedback between individuals about the intended meaning of the words. Despite the stark differences between these learning schemes, we show that they yield the same communication accuracy in the realistic limits of large N and H, which coincides with the result of the classical occupancy problem of randomly assigning N objects to H words.
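The random-assignment baseline mentioned above can be illustrated with a short Monte Carlo sketch: each of the N objects is assigned to one of the H words uniformly at random, and a hearing episode is scored as successful with probability 1/k when the uttered word is shared by k objects. The scoring rule and parameter values are illustrative assumptions, not the authors' code.

```python
import random

def occupancy_accuracy(n_objects, n_words, trials=20000, rng=random.Random(0)):
    """Monte Carlo estimate of communication accuracy when each of the
    n_objects is named by a word chosen uniformly at random from n_words."""
    hits = 0.0
    for _ in range(trials):
        lexicon = [rng.randrange(n_words) for _ in range(n_objects)]
        target = rng.randrange(n_objects)
        word = lexicon[target]
        competitors = lexicon.count(word)   # objects sharing the uttered word
        hits += 1.0 / competitors           # hearer guesses among them at random
    return hits / trials

if __name__ == "__main__":
    for n, h in [(10, 10), (50, 100), (100, 100)]:
        print(f"N={n}, H={h}: accuracy ~ {occupancy_accuracy(n, h):.3f}")
```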
 
This special issue focuses on a relatively new line of research on human communication which investigates the generalities of human semiosis rather than the specifics of spoken dialogue. In spite of its brief history, experimental semiotics has already grown in a few directions. The contributions in this issue reflect the different methodologies used in previous work. Experimental semioticians have begun to explore a number of factors that affect the emergence and evolution of human communication. Here we identify three main themes that emerge from such explorations which we believe can become major directions for future research in the field. The first theme concerns the social interactions that support the emergence and evolution of communication systems. The second theme concerns the linguistic structures that emerge in semiotic games. The third theme concerns the very emergence of communication. Previous studies demonstrated that humans seem to possess a strong talent for communicating in fairly challenging conditions.
 
The first Young Researchers in Human-Robot Interaction Workshop, held on March 1, 2006 in Salt Lake City, Utah, provides insight into how to facilitate the establishment of the HRI community. Organized in conjunction with the first annual ACM/IEEE Human Robot Interaction Conference, the NSF-sponsored workshop assembled 15 graduate students from 5 different countries in computer science, psychology, engineering, and the arts to build the HRI community. This report highlights recommendations from discussion sessions, a synopsis of the plenary address, and representative examples of the participants' presentations. Participants emphasized that HRI is a unique field, requiring knowledge in computing, psychology, and communications despite the differences in the courses, methods, and philosophies across disciplines. The following are needed for future growth in HRI: (i) stable, canonical robotics platforms for research purposes, (ii) a multidisciplinary community infrastructure to connect researchers, and (iii) a "Berlitz phrasebook" and collected reference materials for helping understand the "other" disciplines.
 
To what extent do keas, Nestor notabilis, learn from each other? We tested eighteen captive keas, New Zealand parrots, in a tool use task involving visual feature discrimination and social learning. The keas were presented with two adjacent tubes, each containing a physically distinct baited platform. One platform could be collapsed by insertion of a block into the tube to release the bait; the other platform could not be collapsed. In contrast to birds that acted on their own (“individual learners”), birds that could observe a demonstrator bird operated the collapsible platform first. However, they soon changed their behaviour to inserting blocks indiscriminately in either tube. When we reversed the collapsibility of the platforms, only the adult observers immediately altered their former preference; neither their demonstrators, which had learnt individually, nor the juveniles did so. Observers, however, did not simply reverse their initial preference but rather moved to, and then stayed at, chance performance as to where to insert a block first. In conclusion, the keas' overt exploration soon overrode the effect of social learning. We argue that such behaviour might help keas to find more efficient extractive foraging techniques in their native variable, low-risk environment. Keywords: tool use behaviour; social learning; reversal learning; physical cognition
 
The goal of this workshop is to address the darker side of HCI by examining how computers sometimes bring about the expression of negative emotions. In particular, we are interested in the phenomenon of human beings abusing computers. Such behavior can take many forms, ranging from the verbal abuse of conversational agents to physical attacks on the hardware. In some cases, particularly in the case of embodied conversational agents, there are questions about how the machine should respond to verbal assaults. This workshop is also interested in understanding the psychological underpinnings of negative behavior involving computers. In this regard, we are interested in exploring how HCI factors influence human-to-human abuse in computer-mediated communication. The overarching objective of this workshop is to sketch a research agenda on the topic of the misuse and abuse of interactive technologies that will lead to design solutions capable of protecting users and restraining disinhibited behaviors.
 
Use of the pen to engage the facial muscles involved in smiling
Summary of emotional effects in speech (relative to neutral speech)
Avatars in There.com automatically adjust their posture during a conversation 
Numerous research groups around the world are attempting to build realistic and believable autonomous embodied agents capable of natural interactions with users. Research into these entities has primarily focused on their potential to enhance human-computer interaction. As a result, there is little understanding of the potential for embodied entities to abuse and manipulate users for questionable purposes. We highlight the potential opportunities for abuse when interacting with embodied agents in virtual worlds and discuss how our social interactions with such entities can contribute to abusive behaviour. Suggestions for reducing such risks are also provided, along with important directions for future research.
 
Based on an integrated theoretical framework, this study analyzes user acceptance behavior toward socially interactive robots, focusing on the variables that influence users' attitudes and intentions to adopt robots. Individuals' responses to questions about attitude and intention to use robots were collected and analyzed according to different factors adapted from a variety of theories. The results of the proposed model show that social presence is key to the behavioral intention to accept social robots. The model also shows the significant roles of perceived adaptivity and sociability, both of which affect attitude and which influence perceived usefulness and perceived enjoyment, respectively. These factors can be key features of users' expectations of social robots, with practical implications for designing and developing meaningful social interaction between robots and humans. The new set of variables is specific to social robots, acting as factors that enhance attitudes and behavioral intentions in human-robot interactions. Keywords: Robot acceptance model; Socially interactive robots; Social robots; Social presence
 
Teacher expectations: Self-fulfilling prophecies and accuracy in naturalistic studies 
This paper contests social psychology's emphasis on the biased, erroneous, and constructed nature of social cognition by: (1) showing how the extent of bias and error in classic research is overstated; (2) summarizing research regarding the accuracy of social beliefs; and (3) describing how social stereotypes sometimes improve person perception accuracy. A Goodness of Judgment Index is also presented to extract evidence regarding accuracy from research focusing on bias. We conclude that accuracy is necessary for understanding social cognition.
 
Potential speech abilities constitute a key component in the description of the Neandertals and their relations with modern Homo sapiens. Since Lieberman & Crelin postulated in 1971 that “Neanderthal man did not have the anatomical prerequisites for producing the full range of human speech”, their speech capability has been a subject of hot debate for over 30 years, and remains a controversial question. In this study, we first question the methodology adopted by Lieberman and Crelin, and we point out articulatory and acoustic flaws in the data and the modeling. Then we propose a general articulatory-acoustic framework for testing the acoustic consequences of the trade-off between oral and pharyngeal cavities. Specifically, following Honda & Tiede (1998), we characterize this trade-off by a Laryngeal Height Index (LHI) corresponding to the length ratio of the pharyngeal cavity to the oral cavity. Using an anthropomorphic articulatory model controlled by lips, jaw, tongue and larynx parameters, we can generate the Maximal Vowel Space (MVS), which is a triangle in the F1/F2 plane, the three point vowels /a/, /i/, and /u/ being located at its three extremities. We sample the evolution of the position of the larynx from birth to adulthood with four different LHI values, and we show that the associated MVS are very similar. Therefore, the MVS of a given vocal tract does not depend on the LHI: gestures of the tongue body, lips and jaw allow compensations for differences in the ratio between the dimensions of the oral cavity and pharynx. We then infer that the vowel space of Neandertals (with a high or low larynx) was potentially no smaller than that of a modern human and that Neandertals could produce all the vowels of the world's languages. Neandertals were no more vocally handicapped than children at birth are. Therefore, there is no reason to believe that the lowering of the larynx and a concomitant increase in pharynx size are necessary evolutionary pre-adaptations for speech. However, since our study is strictly limited to the morphological and acoustic aspects of the vocal tract, we cannot offer any definitive answer to the question of whether Neandertals could produce human speech or not.
 
I present the symbol grounding problem in the larger context of a materialist theory of content and then present two problems for causal, teleo-functional accounts of content. This leads to a distinction between two kinds of mental representations: presentations and symbols; only the latter are cognitive. Based on Milner's and Goodale's dual route model of vision, I posit the existence of precise interfaces between cognitive systems that are activated during object recognition. Interfaces are constructed as a child learns, and is taught, how to interact with her environment; hence, interface structure has a social determinant essential for symbol grounding. Symbols are encoded in the brain to exploit these interfaces, by having projections to the interfaces that are activated by what the symbols stand for. I conclude by situating my proposal in the context of Harnad's (1990) solution to the symbol grounding problem and responding to three standard objections.
 
Human-robot tutoring scenario
Action demonstration in the blockworld task
Linguistic characteristics: Complexity
Linguistic characteristics: Grammatical mood
It has been proposed that the design of robots might benefit from interactions that are similar to caregiver-child interactions, which are tailored to children's respective capacities to a high degree. However, so far little is known about how people adapt their tutoring behaviour to robots and whether robots can evoke input that is similar to child-directed interaction. The paper presents detailed analyses of speakers' linguistic and non-linguistic behaviour, such as action demonstration, in two comparable situations: in one experiment, parents described and explained to their nonverbal infants the use of certain everyday objects; in the other experiment, participants tutored a simulated robot on the same objects. The results, which show considerable differences between the two situations on almost all measures, are discussed in the light of the computers-as-social-actors paradigm and the register hypothesis. Keywords: child-directed speech (CDS); motherese; robotese; motionese; register theory; social communication; human-robot interaction (HRI); computers-as-social-actors; mindless transfer
 
Drawing of the experimental situation. A wire mesh separates the experimenter from the parrot. A transparent plastic sheet with a hole in the middle is fixed on the wire mesh. The experimenter gives seeds through the hole (motivational trials), or touches the plastic sheet with a seed without transferring it to the parrot (Unable and Unwilling trials). At both sides of the table, bottle caps are placed in containers and are accessible to the parrot at all times 
Data from the six trials of each condition ('Unable', 'Unwilling', 'Distracted') for each individual. Time (in seconds): total duration spent looking away from the experimenter. Total number of behaviours recorded during the different experimental conditions.
Intentionality plays a fundamental part in human social interactions, and we know that the interpretation of conspecifics' behaviours depends on the intentions underlying them. Most studies on intention attribution have been undertaken with primates; however, very little is known about this topic in animals more distantly related to humans, such as birds. Three hand-reared African grey parrots (Psittacus erithacus) were tested on their ability to understand human intentional actions. The subjects' attention was not equally distributed across the conditions, and their behavioural pattern also changed depending on the condition: the parrots showed more requesting behaviours (opening of the beak and request calls) when the experimenter was unwilling to give them seeds, and bit the wire mesh that represented the obstacle more when the experimenter was trying to give them food. For the first time we showed that a bird species, like primates, may be sensitive to the behavioural cues of a human according to his intentions. Keywords: Grey parrots; intention attribution; theory of mind
 
In dealing with the nature of protolanguage, an important formative factor in its development, and one that would surely have influenced that nature, has too often been neglected: the precise circumstances under which protolanguage arose. Three factors are involved in this neglect: a failure to appreciate radical differences between the functions of language and animal communication, a failure to relate developments to the overall course of human evolution, and the supposition that protolanguage represents a package, rather than a series of separate developments that sequentially impacted the communication of pre-humans. An approach that takes these factors into account is very briefly suggested.
 
It is widely assumed that language in some form or other originated by piggybacking on pre-existing learning mechanisms not dedicated to language. Using evolutionary connectionist simulations, we explore the implications of such assumptions by determining the effect of constraints derived from an earlier-evolved mechanism for sequential learning on the interaction between biological and linguistic adaptation across generations of language learners. Artificial neural networks were initially allowed to evolve "biologically" to improve their sequential learning abilities, after which language was introduced into the population. We compared the relative contribution of biological and linguistic adaptation by allowing both networks and language to change over time. The simulation results support two main conclusions: First, over generations, a consistent head-ordering emerged due to linguistic adaptation. This is consistent with previous studies suggesting that some apparently arbitrary aspects of linguistic structure may arise from cognitive constraints on sequential learning. Second, when networks were selected to maintain a good level of performance on the sequential learning task, language learnability was significantly improved by linguistic adaptation but not by biological adaptation. Indeed, the pressure toward maintaining a high level of sequential learning performance prevented biological assimilation of language-specific knowledge from occurring.
 
System Architecture
Agent Model
Showing the outcome when the user is of the truth-seeker type (character choice: truth seeker)
Showing dynamic change of tactic within the interactive story (character choice: coward)
Interactive narratives have been used in a variety of applications, including video games, educational games, and training simulations. Maintaining engagement within such environments is an important problem, because it affects entertainment, motivation, and presence. Performance arts theorists have discussed and formalized many techniques that increase engagement and enhance the dramatic content of art productions. While constructing a narrative manually, using these techniques, is acceptable for linear media, using this approach for interactive environments results in inflexible experiences due to the unpredictability of users' actions. A few researchers have attempted to develop adaptive interactive narrative experiences. However, developing a quality interactive experience is largely an artistic process, and many of these adaptive techniques do not encode artistic principles. This paper presents a new interactive narrative architecture designed using a set of dramatic techniques that have been formulated based on several years of training in film and theatre.
 
The ability to appropriately sequence a list of discrete items is an important facet of performing routine cognitive tasks and may play a significant role in the acquisition of early communication skills. Though the serial learning abilities of some species, such as chimpanzees and rhesus macaques, are well documented, there is virtually no information on the extent of these skills in gorillas. In this study, a young female western lowland gorilla, Rollie, demonstrated the ability to learn a list of seven Arabic numerals by selecting them on a monitor behind an infrared touchframe. As list length increased (from three to seven), Rollie required fewer trials to reach the criterion, suggesting an acquisition of skills associated with serial learning. Rollie was most likely to correctly select the first item in the sequence and least likely to select the penultimate item in the sequence. Her performance suggests serial learning abilities in the range of other non-verbal species and provides preliminary evidence supporting the assertion that gorillas can succeed at this form of serial learning task.
 
The study of animal behavior, and particularly avian behavior, has advanced significantly in the past 50 years. In the early 1960s, both ethologists and psychologists were likely to see birds as simple automatons, incapable of complex cognitive processing. Indeed, the term “avian cognition” was considered an oxymoron. Avian social interaction was also seen as based on rigid, if sometimes complicated, patterns. The possible effect of social interaction on cognition, or vice versa, was therefore something almost never discussed. Two paradigm shifts—one concerning animal cognition and one concerning social interaction—began to change perceptions in, respectively, the early 1970s and 1980s, but only more recently have researchers actively investigated how these two areas intersect in the study of avian behavior. The fruits of such intersection can be seen in the various papers for this special issue. I provide some brief background material before addressing the striking findings of current projects. In some cases, researchers have adapted early classic methods and in other cases have devised new paradigms, but in all instances have demonstrated avian capacities that were once thought to be the exclusive domain of humans or at least nonhuman primates. Keywords: avian cognition; avian social learning; avian observational learning; avian communication
 
Wilderness Search and Rescue (WiSAR) is the process of finding and assisting persons who are lost in remote wilderness areas. Because such areas are often rugged or relatively inaccessible, searching for missing persons can take huge amounts of time and resources. Camera-equipped mini-Unmanned Aerial Vehicles (UAVs) have the potential for speeding up the search process by enabling searchers to view aerial video of an area of interest while closely coordinating with nearby ground searchers. In this paper, we report on lessons learned by trying to use UAVs to support WiSAR. Our research methodology has relied heavily on field trials involving searches conducted under the direction of practicing search and rescue personnel but using simulated missing persons. Lessons from these field trials include the immediate importance of seeing things well in the video, the field need for defining and supporting various roles in the search team, role-specific needs like supporting systematic search by providing a visualization tool to represent the quality of the search, and the on-going need to better support interactions between ground and video searchers. Surprisingly to us, sophisticated autonomous search patterns were less critical than we anticipated, though advances in video enhancement and visualizing search progress, as well as ongoing work to model the likely location of a missing person, open up the possibility of closing the loop between UAV path-planning, search quality, and the likely location of a moving missing person. Keywords: Unmanned Aerial Vehicles, Wilderness Search and Rescue, Human–Robot Interaction, Human Factors, Field Robotics, Graphical User Interfaces
 
This paper presents a human-robot interaction framework in which a robot can infer implicit affective cues of a human and respond to them appropriately. Affective cues are inferred by the robot in real-time from physiological signals. A robot-based basketball game is designed in which a robotic "coach" monitors the human participant's anxiety to dynamically reconfigure game parameters, allowing skill improvement while maintaining desired anxiety levels. The results of these anxiety-based sessions are compared with performance-based sessions, in which the game is adapted only according to the player's performance. It was observed that 79% of the participants showed lower anxiety during the anxiety-based session than in the performance-based session, 64% showed a greater improvement in performance after the anxiety-based session, and 71% of the participants reported greater overall satisfaction during the anxiety-based sessions. This is the first time, to our knowledge, that the impact of real-time affective communication between a robot and a human has been demonstrated experimentally.
 
This essay reviews theory and research regarding the “Michelangelo phenomenon,” which describes the manner in which close partners shape one another’s dispositions, values, and behavioral tendencies. Individuals are more likely to exhibit movement toward their ideal selves to the degree that their partners exhibit affirming perception and behavior; exhibiting confidence in the self’s capacity and enacting behaviors that elicit key features of the self’s ideal. In turn, movement towards the ideal self yields enhanced personal well-being and couple well-being. We review empirical evidence regarding this phenomenon and discuss self and partner variables that contribute to the process.
 
A map of the field study site Bossou in the Republic of Guinea, and the neighbouring Nimba Mountain range (map provided by N. Granier).
Human–wildlife interactions have existed for thousands of years; however, as human populations increase and human impact on natural ecosystems becomes more intensive, both parties are increasingly being forced to compete for resources vital to both. Humans can value wildlife in many contexts, promoting coexistence, while in other situations, such as crop-raiding, wildlife conflicts with the interests of people. As our closest phylogenetic relatives, chimpanzees (Pan troglodytes) in particular occupy a special importance in terms of their complex social and cultural relationship with humans. A case study is presented that focuses on the Bossou chimpanzees' (Pan troglodytes verus) perspective of their habitat in the Republic of Guinea, West Africa, by highlighting the risks and opportunities presented by a human-dominated landscape and detailing their day-to-day coexistence with humans. Understanding how rural people perceive chimpanzees and how chimpanzees adapt to living in anthropogenic environments will enhance our understanding of how people-wildlife interactions develop into situations of conflict and can therefore generate sustainable solutions to prevent or mitigate such situations.
 
Precursors of inferential capacities concerning self- and other-understanding may be found in the basic experience of social contingency and emotional sharing. The emergence of a sense of self- and other-agency receives special attention here, as a foundation for self-understanding. We propose that synchrony, an amodal parameter of contingent self-other relationships, should be especially involved in the development of a sense of agency. To explore this framework, we have manipulated synchrony in various ways: by delaying the mother's response to the infant's behaviour, disorganizing the mother's internal synchrony between face and voice, freezing the partner in a still attitude, or, on the contrary, maximizing synchrony through imitation. We report results obtained with healthy and clinical populations that are assumed to be at the beginning of basic experiences concerning the ownership of their actions: infants of 2 and 6 months, low-functioning children with autism, and MA-matched young children with Down syndrome. Our results support the idea of a two-step process linking understanding of self to understanding of other and leading on to form the concept of human beings as universally contingent entities.
 
Average frequency of interaction (2a) and average distance (2b) among all pairs of males and among all pairs of females. Above each figure, the corrected Mann-Whitney U test indicates the significance of the effects of intensity of aggression at the species level. In the diagram, the point indicates the median; vertical bars range from the minimum to the lower quartile and from the upper quartile to the maximum value. Grey lines connect female-male data points obtained from a single run. Below each figure, the significance of the Wilcoxon test of sex differences is shown. F = female, M = male. Initial dominance values for females and males are 16 and 32, respectively. 'High' and 'low' indicate 'species' with high and low intensity of aggression. Two-tailed statistical significance: *** < 0.001, ** < 0.01, * < 0.05, (*) < 0.10. For further explanations see text.
In recent studies of primates, the question has been raised whether competitive regimes (egalitarian versus despotic) are species-specific or should rather be considered sex-specific. To study this problem we use an individual-oriented model called DomWorld, in which artificial agents are equipped merely to group and compete. In former studies of this model, dominance style appeared to be strongly influenced by the intensity of aggression: by increasing only this intensity of aggression, a great number of the characteristics of an egalitarian society switched to those of a despotic one. Here, we use DomWorld to investigate a competitive regime of artificial males and females that differ exclusively in their fighting capacity, males having a higher intensity of aggression and a higher initial capacity of winning, such as may be due to a male-biased sexual dimorphism. Unexpectedly, it appears that in the model, even if the intensity of aggression of males is greater than that of females, their hierarchy is still significantly weaker and thus their society less differentiated and more egalitarian than that of the females. The explanation is that, due to the higher initial dominance of males (cf. their larger body size), single events of victory and defeat lead to less differentiation than among females. The greater the difference in initial dominance between the sexes, the more egalitarian the males behave among themselves compared with the females among themselves. These effects are already visible during the initial phases of hierarchical development. Since these results resemble findings among primates, the degree of sexual dimorphism in real primates may likewise influence the competitive regime of each sex.
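As an illustration of the winner-loser dynamics that DomWorld-style models rely on, the sketch below runs dyadic contests in which the chance of winning is given by relative dominance and dominance values are updated in proportion to how unexpected the outcome was, scaled by an intensity-of-aggression parameter. The parameter names and values, the clamping floor, and the within-sex-only pairing are illustrative assumptions, not the published model configuration.

```python
import random, statistics

def fight(dom_i, dom_j, step_dom, rng=random):
    """One dyadic contest: relative-dominance win rule plus a winner-loser
    update scaled by the intensity of aggression (step_dom)."""
    p_win = dom_i / (dom_i + dom_j)               # chance that i wins
    outcome = 1.0 if rng.random() < p_win else 0.0
    delta = step_dom * (outcome - p_win)          # unexpected outcomes shift values most
    floor = 0.01                                  # keep dominance values positive
    return max(dom_i + delta, floor), max(dom_j - delta, floor)

if __name__ == "__main__":
    random.seed(1)
    females, males = [16.0] * 4, [32.0] * 4       # initial dominance as in the figure
    for _ in range(300):                          # within-sex fights only, for brevity
        for group in (females, males):
            a, b = random.sample(range(len(group)), 2)
            group[a], group[b] = fight(group[a], group[b], step_dom=1.0)
    # With equal intensity of aggression, the higher starting values of the
    # 'males' damp the winner-loser feedback, so their hierarchy typically
    # differentiates less than that of the 'females'.
    print("female spread:", round(statistics.pstdev(females), 2))
    print("male spread:  ", round(statistics.pstdev(males), 2))
```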
 
We empirically tested Hemelrijk's agent-based model (Hemelrijk 1998), in which dyadic agonistic interaction between primate-group subjects determines their spatial distribution and whether or not the dominant subject has a central position with respect to the other subjects. We studied a group of captive red-capped mangabeys (Cercocebus torquatus torquatus) that met the optimal conditions for testing this model (e.g. a linear dominance hierarchy). We analyzed the spatial distribution of the subjects in relation to their rank in the dominance hierarchy, and the results confirmed the validity of this model. In accordance with Hemelrijk's model (Hemelrijk 1998), the group studied showed an ambiguity-reducing strategy that led to non-central spatial positioning on the part of the dominant subject, thus confirming the model indirectly. Nevertheless, for the model to be confirmed directly, the group has to adopt a risk-sensitive strategy so that observers can study whether dominant subjects develop spatial centrality. Our study also demonstrated that agent-based models are a good tool for the study of certain complex behaviors observed in primates, because these explanatory models can help formulate suggestive hypotheses for exploring new lines of research in primatology. Keywords: Dominance-hierarchy rank; spatial distribution; Cercocebus torquatus; agent-based models
 
What is the hallmark of success in human-agent interaction? In animation and robotics, many have concentrated on the looks of the agent — whether the appearance is realistic or lifelike. We present an alternative benchmark that lies in the dyad and not the agent alone: Does the agent's behavior evoke intersubjectivity from the user? That is, in both conscious and unconscious communication, do users react to behaviorally realistic agents in the same way they react to other humans? Do users appear to attribute similar thoughts and actions? We discuss why we distinguish between appearance and behavior, why we use the benchmark of intersubjectivity, our methodology for applying this benchmark to embodied conversational agents (ECAs), and why we believe this benchmark should be applied to human-robot interaction.
 
Aggregation is one of the most widespread phenomena in animal groups and often represents a collective dynamic response to environmental conditions. In social species the underlying mechanisms mostly obey self-organized principles. This phenomenon constitutes a powerful model to decouple purely social components from ecological factors. Here we used a model of cockroach aggregation to address the problems of sensitivity of collective patterns and control of aggregation dynamics. The individual behavioural rules (as a function of neighbour density) and the emergent collective patterns were previously quantified and modelled by Jeanson et al. (2003, 2004). We first present the diverse spatio-temporal patterns of a derived model in response to parameter changes, either involving social or non-social interactions. This sensitivity analysis is then extended to evaluate the evolution of these patterns in mixed societies of sub-populations with different behavioural parameters. Simple linear or highly non-linear collective responses emerge. We discuss their potential application to control animal populations by infiltration of biomimetic autonomous robots that mimic cockroach behaviour. We suggest that detailed behavioural models are a prerequisite to do so.
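A toy sketch of this kind of density-dependent individual rule is given below: walkers stop with a probability that increases with the number of resting neighbours nearby and leave a cluster with a probability that decreases with it. The functional forms and rates are assumptions for illustration, not the behavioural parameters quantified by Jeanson et al. (2003, 2004).

```python
import random, math

# Illustrative density-dependent rules for self-organized aggregation.
ARENA, RADIUS, N, STEPS = 10.0, 1.0, 30, 2000

def neighbours(i, agents):
    """Number of resting agents within RADIUS of agent i."""
    xi, yi, _ = agents[i]
    return sum(1 for j, (x, y, stopped) in enumerate(agents)
               if j != i and stopped and math.hypot(x - xi, y - yi) < RADIUS)

def p_stop(k):   return min(1.0, 0.02 + 0.15 * k)   # more resting neighbours -> stop more
def p_leave(k):  return 0.05 / (1 + k)              # more resting neighbours -> leave less

random.seed(0)
agents = [[random.uniform(0, ARENA), random.uniform(0, ARENA), False] for _ in range(N)]
for _ in range(STEPS):
    for i, (x, y, stopped) in enumerate(agents):
        k = neighbours(i, agents)
        if stopped:
            if random.random() < p_leave(k):
                agents[i][2] = False
        elif random.random() < p_stop(k):
            agents[i][2] = True
        else:                                        # random walk inside the arena
            agents[i][0] = min(ARENA, max(0.0, x + random.uniform(-0.3, 0.3)))
            agents[i][1] = min(ARENA, max(0.0, y + random.uniform(-0.3, 0.3)))

print(sum(a[2] for a in agents), "of", N, "agents resting in clusters")
```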
 
Some examples of LSAs classified into subtypes, taken from an analysis of 78 LSAs (Code, 1982a)
Current work on the evolution of language and communication is emphasising a close relationship between the evolution of speech, language and gestural action. This paper briefly explores some evolutionary implications of a range of closely related impairments of speech, language and gesture arising from left frontal brain lesions. I discuss aphasic lexical speech automatisms (LSAs) and their resolution, with some recovery, into agrammatism with apraxia of speech, an impairment of speech planning and programming. I focus attention on the most common forms of LSAs, expletives and the pronoun+modal/aux subtype, and propose that further research into these phenomena can contribute to the debate. I briefly discuss recent studies of progressively degenerating neurological conditions resulting in progressive patterns of cognitive impairment, which promise to provide insight into the evolutionary relationships between speech, language and gesture.
 
The Frame/Content theory deals with how and why the first language evolved the present-day speech mode of programming syllable “Frame” structures with segmental (consonant and vowel) “Content” elements. The first words are considered, for biomechanical reasons, to have had the simple syllable frame structures of pre-speech babbling (e.g., “bababa”), and were perhaps parental terms, generated within the parent-infant dyad. Although all gestural origins theories (including Arbib's theory reviewed here) have iconicity as a plausible alternative hypothesis for the origin of the meaning-signal link for words, they all share the problems of how and why a fully fledged sign language, necessarily involving a structured phonology, changed to a spoken language.
 
We develop a new theory of the cognitive changes around 4 years of age by trying to explain why understanding of false belief and of alternative naming emerges at this age (Doherty & Perner, 1998). We make use of the notion of discourse referents (DRs: Karttunen, 1976) as it is used in File Change Semantics (Heim, 2002), one of the early forms of the more widely known Discourse Representation Theory (Kamp & Reyle, 1993). The assumed cognitive change lies in how children can link DRs in their mind to external referents. The younger children check whether the conditions for a DR match the conditions of an external entity (an implicit/procedural understanding of reference). The older children, in addition, have an explicit understanding of reference by virtue of making explicit identity assertions. This involves the metarepresentational ability of representing that different DRs represent the same external referent, which — we argue — is required for alternative naming and for the false belief task.
 
This study examines the effects of sex and familiarity on Americans' talk to dogs during play, using categories derived from research comparing mothers' and fathers' talk to infants. Eight men and fifteen women were videotaped whilst playing with their own dog and with another person's dog, and their utterances were coded for features common to infant-directed talk. Women used the baby-talk speech register more than men, and both men and women used this register more when interacting with the unfamiliar dog than with the familiar dog. When playing with the familiar dog, women talked more than men, and their talk was more suggestive of friendliness and having a conversation. When playing with the unfamiliar dog, people used more praise, more conversational gambits, a more diverse vocabulary, and longer utterances than when playing with the familiar dog, suggesting that when playing with the unfamiliar dog, people pretended to have more of a conversation, were more attentive to appearing friendly, and were less attentive to the dog's limited understanding. Overall, however, men and women used similar forms of talk when interacting with a dog, whether familiar or not.
 
Mind is seen as a collection of abilities to take decisions in biologically relevant situations. Mind shaping means forming habits and decision rules for how to proceed in a given situation. Problem-specific decision rules constitute a modular mind; adaptive mind-shaping is likely to be module-specific. We present examples from different behaviour 'faculties' throughout the animal kingdom, grouped according to important mind-shaping factors, to illustrate three basically different mind-shaping processes: (I) external stimuli guide the differentiation of a nervous structure that controls a given behaviour; (II) information comes in to direct a fixed behaviour pattern to its biological goal, or to complete an inherited behaviour program; (III) specific stimuli activate or inactivate a pre-programmed behaviour. Mind-shaping phenomena found in the animal kingdom are suggested as 'null hypotheses' when looking at how human minds might be shaped.
 
This paper presents arguments for considering the anterior cingulate cortex (ACC) as a critical structure in intentional communication. Different facets of intentionality are discussed in relationship to this neural structure. The macrostructural and microstructural characteristics of ACC are proposed to sustain the uniqueness of its architecture, as an overlap region of cognitive, affective and motor components. At the functional level, roles played by this region in communication include social bonding in mammals, control of vocalization in humans, semantic and syntactic processing, and initiation of speech. The involvement of the anterior cingulate cortex in social cognition is suggested where, for infants, joint attention skills are considered both prerequisites of social cognition and prelinguistic communication acts. Since the intentional dimension of gestural communication seems to be connected to a region previously equipped for vocalization, ACC might well be a starting point for linguistic communication.
 
Pointing in apes and humans with respect to three environmental variables 
Pointing by apes is near-ubiquitous in captivity, yet rare in their natural habitats. This has implications for understanding both the ontogeny and heritability of pointing, conceived as a behavioral phenotype. The data suggest that the cognitive capacity for manual deixis was possessed by the last common ancestor of humans and the great apes. In this review, nonverbal reference is distinguished from symbolic reference. An operational definition of intentional communication is delineated, citing published or forthcoming examples for each of the defining criteria from studies of manual gestures in apes. Claims that chimpanzees do not point amongst themselves or do not gesture declaratively are refuted with published examples. Links between pointing and cognitive milestones in other domains relating means to ends are discussed. Finally, an evolutionary scenario of pointing as an adaptation to changes in hominid development is briefly sketched.
 
Recent work on the emergence and evolution of human communication has focused on getting novel systems to evolve from scratch in the laboratory. Many of these studies have adopted an interactive construction approach, whereby pairs of participants repeatedly interact with one another to gradually develop their own communication system whilst engaged in some shared task. This paper describes four recent studies that take a different approach, showing how adaptive structure can emerge purely as a result of cultural transmission through single chains of learners. By removing elements of interactive communication and focusing only on the way in which language is repeatedly acquired by learners, we hope to gain a better understanding of how useful structural properties of language could have emerged without being intentionally designed or innovated.
 
Social robots are designed to interact with humans. That is why they need interaction models that take social behaviors into account. These usually influence many of a robot’s abilities simultaneously. Hence, when designing robots that users will want to interact with, all components need to be tested in the system context, with real users and real tasks in real interactions. This requires methods that link the analysis of the robot’s internal computations within and between components (system level) with the interplay between robot and user (interaction level). This article presents Systemic Interaction Analysis (SInA) as an integrated method to (a) derive prototypical courses of interaction based on system and interaction level, (b) identify deviations from these, (c) infer the causes of deviations by analyzing the system’s operational sequences, and (d) improve the robot iteratively by adjusting models and implementations. Keywords: analysis tools, user studies, autonomous robots
 
Arbitrariness and systematicity are two of language's most fascinating properties. Although both are characterizations of the mappings between signals and meanings, their emergence and evolution in communication systems has generally been explored independently. We present an experiment in which both arbitrariness and systematicity are probed. Participants invent signs from scratch to refer to a set of items that share salient semantic features. Through interaction, the systematic re-use of arbitrary signal elements emerges.
 
Special purpose service robots have already entered the market and their users' homes. Also the idea of the general purpose service robot or personal robot companion is increasingly discussed and investigated. To probe human–robot interaction with a mobile robot in arbitrary domestic settings, we conducted a study in eight different homes. Based on previous results from laboratory studies we identified particular interaction situations which should be studied thoroughly in real home settings. Based upon the collected sensory data from the robot we found that the different environments influenced the spatial management observable during our subjects' interaction with the robot. We also validated empirically that the concept of spatial prompting can aid spatial management and communication, and assume this concept to be helpful for Human–Robot Interaction (HRI) design. In this article we report on our exploratory field study and our findings regarding, in particular, the spatial management observed during show episodes and movement through narrow passages. Keywords: COGNIRON, Domestic Service Robotics, Robot Field Trial, Human Augmented Mapping (HAM), Human–Robot Interaction (HRI), Spatial Management, Spatial Prompting
 
Our exploratory research aims at suggesting design principles for educational software dedicated to people with high functioning autism. In order to explore the efficiency of educational games, we developed an experimental protocol to study the influence of the specific constraints of the learning areas (spatial planning versus dialogue understanding) as well as Human Computer Interface modalities. We designed computer games that were tested with 10 teenagers diagnosed with high functioning autism, during 13 sessions, at the rate of one session per week. Participants' skills were assessed before and after a training period. A group of 10 typical children matched on academic level also took part in the experiment. A software platform was developed to manage interface modalities and log users' actions. Moreover, we annotated video recordings of two sessions. Results underline the influence of the task and interface modalities on executive functions.
 
A list of proposed benchmarks 
Socially assistive robotics (SAR) is a growing area of research. Evaluating SAR systems presents novel challenges. Using a robot for a socially assistive task can have various benefits and ethical implications. Many questions are important to understanding whether a robot is effective for a given application domain. This paper describes several benchmarks for evaluating SAR systems. There exist numerous methods for evaluating the many factors involved in a robot's design. Benchmarks from psychology, anthropology, medicine, and human-robot interaction are proposed as measures of success in evaluating a given SAR system and its impact on the user and broader population.
 
The robot has two different appearances used in the trials (the centre figure shows the 'undressed' version, revealing the robotic parts that control its movement).
Interaction between non-autistic children (reproduced transcript 7); drawings based on photos of real events.
Interactive robots are used increasingly not only in entertainment and service robotics, but also in rehabilitation, therapy and education. The work presented in this paper is part of the Aurora project, rooted in assistive technology and robot-human interaction research. Our primary aim is to study whether robots can potentially be used as therapeutically or educationally useful 'toys'. In this paper we outline the aims of the project that this study belongs to, as well as the specific qualitative contextual perspective that is being used. We then provide an in-depth evaluation, in part using Conversation Analysis (CA), of segments of trials where three children with autism interacted with a robot as well as an adult. We focus our analysis primarily on joint attention, which plays a fundamental role in human development and social understanding. Joint attention skills of children with autism have been studied extensively in autism research and therefore this behaviour provides a relevant focus for our study. In the setting used, joint attention emerges from natural and spontaneous interactions between a child and an adult. We present the data in the form of transcripts and photo stills. The examples were selected from extensive video footage for illustrative purposes, i.e. demonstrating how children with autism can respond to the changing behaviour of their co-participant, i.e. the experimenter. Furthermore, our data shows that the robot provides a salient object, or mediator, for joint attention. The paper concludes with a discussion of implications of this work in the context of further studies with robots and children with autism within the Aurora project, as well as the potential contribution of robots to research into the nature of autism.
 
Although chimpanzees have been reported to understand to some extent others' visual perception, previous studies using food requesting tasks are divided on whether or not chimpanzees understand the role of eye gaze. One plausible reason for this discrepancy may be the familiarity of the testing situation. Previous food requesting tasks with negative results used an unfamiliar situation that may be difficult for some chimpanzees to recognize as a requesting situation, whereas those with positive results used a familiar situation. The present study tested chimpanzees' understanding of others' attentional states by comparing two requesting situations: an unfamiliar situation in which food was put on a table, and a familiar situation in which chimpanzees requested food held by an experimenter. Chimpanzees showed evidence of understanding the experimenter's attentional variations and the role of eye gaze only in the latter task. This suggests that an unfamiliar requesting situation may keep subjects from expressing their understanding of others' attentional states even though they are sensitive to them. Keywords: Understanding attention; Social cognition; Chimpanzees
 
A great deal of research has recently been performed on robots that feature functions for communicating with humans in daily life, i.e., communication robots. We consider it important to develop methods to measure the attitudes and emotions that may prevent humans from interacting with communication robots, as indices for studying short-term and long-term interaction between humans and communication robots. This study is aimed at exploring the influence of negative attitudes toward robots, focusing on applications of communication robots to daily-life services. First, a scale of negative attitudes toward robots consisting of three subordinate scales, “negative attitudes toward situations of interaction with robots,” “negative attitudes toward the social influence of robots,” and “negative attitudes toward emotions in interaction with robots,” was developed based on a data sample of 263 Japanese university students. This scale was then administered to 240 Japanese university students to confirm its validity and reliability. In this paper, we report on the results of analyses of these data samples. Moreover, we discuss some future problems, including a comparison of attitudes toward robots between nations.
 
Attribution theory is used as a conceptual framework for examining how causal beliefs about peer harassment influence how victims think and feel about themselves. Evidence is presented that victims who make characterological self-blaming attributions (“it must be me ”) are particularly at risk of negative self-views. Also examined is the influence of social context, particularly the ethnic composition of schools and classrooms. It was found that students who were both victims of harassment and members of the majority ethnic group were more vulnerable to self-blaming attributions. In contrast, greater ethnic diversity, that is, classrooms where no one group was in the majority, tended to ward off self-blaming tendencies. Studies of peer harassment are a good context for examining one of the main themes of the special issue, which is how the social context (e.g., peer groups, ethnic groups) influences the way individuals think and feel about themselves.
 
This paper presents a novel methodological approach to designing, conducting and analysing robot-assisted play. The approach is inspired by non-directive play therapy: the experimenter participates in the experiments, but the child remains the main leader for play. Moreover, beyond its inspiration from non-directive play therapy, the approach enables the experimenter to regulate the interaction under specific conditions in order to guide the child or ask her questions about reasoning or affect related to the robot. The approach has been tested in a long-term study with six children with autism in a school setting. An autonomous robot with a zoomorphic, dog-like appearance was used in the studies. The children's progress was analysed according to three dimensions, namely Play, Reasoning and Affect. Results from the case-study evaluations have shown the capability of the method to meet each child's needs and abilities. Children who mainly played solitarily progressively experienced basic imitation games with the experimenter. Children who proactively played socially progressively experienced higher levels of play and constructed more reasoning related to the robot. They also expressed some interest in the robot, including, on occasion, affect. Keywords: Human–Robot Interaction, Robot-Mediated Therapy, Robot-Assisted Play, Non-Directive Play Therapy, Assistive Technology, Autism, Children
 
With the aim of studying foundations for self-other relations and understanding, we conducted an experimental investigation of a specific aspect of imitation in children with autism: the propensity to copy self-other orientation. We hypothesised that children with autism would show limitations in identifying with the stance of another person. We tested 16 children with autism and 16 non-autistic children with learning difficulties, matched on both chronological and verbal mental age, for their propensity to imitate the self- or other-orientated aspects of another person's actions. All participants were attentive to the demonstrator and copied her actions, but the children with autism were significantly less likely to imitate those aspects of her actions that involved movement in relation to her own body vis-à-vis the child's body. There were a number of children with autism who copied the identical geometric orientation of the objects acted upon. These results suggest that children with autism have a diminished propensity to identify with other people, and point to the importance of this mechanism for shaping self-other relations and flexible thinking.
 
Imitation of mother by a 2-month-old. This infant's imitation of the mother was preceded by an imitation of the infant by the mother.  
a. Imitation scores of children with autism at different developmental ages. The histogram combines two pieces of information: the colour indicates the highest developmental level of imitation achieved by each child, and the score indicates the percentage of imitation performed, whatever the level.
b. Levels of imitation recognition in children with autism. The colour indicates the highest developmental level of imitation recognition achieved by each child.
Adopting a functionalist perspective, we emphasize the value of considering imitation as a single capacity with two functions: communication and learning. These two functions both imply such capacities as detection of novelty, attraction toward moving stimuli and perception-action coupling. We propose that the main difference between the processes involved in the two functions is that, in the case of learning, the dynamics is internal to the system constituted by an individual, whereas in the case of communication, the dynamics concerns the system composed of the perception of one individual coupled with the action of the other. In this paper, we compare the first developmental steps of imitation in three populations: typically developing children, children with autism, and robots. We show evidence of strong correlations between imitating and being imitated in typical infants and in low-functioning children with autism. Relying on this evidence, the robotic perspective is to provide a generic architecture able not only to learn via imitation but also to interact, as an emerging property of the system constituted by two similar architectures with different histories.
 
IW coordinated two-handed iconic gesture without vision.  
Early humans formed language units consisting of global and discrete dimensions of semiosis in dynamic opposition, or 'growth points.' At some point, gestures gained the power to orchestrate actions, manual and vocal, with significances other than those of the actions themselves, giving rise to cognition framed in dual terms. However, our proposal emphasizes natural selection of joint gesture-speech, not 'gesture-first' in language origin.
 
Top-cited authors
Karl F. MacDorman
  • Indiana University School of Informatics and Computing
Steven O. Entezari
Mark Aronoff
  • Stony Brook University
Carol Padden
  • University of California, San Diego
Wendy Sandler
  • University of Haifa