Generating empathic responses from a social robot: An integrative multimodal communication framework using Sima Robot
Carmina Rodríguez-Hidalgo
School of Communications
and Journalism
Universidad Adolfo Ibáñez
carmina.rodriguez@uai.cl
Alejandro Pantoja
Sima robot
Fablab University of Chile
Santiago, Chile
apantojas@gmail.com
Felipe Araya
Sima robot
Fablab University of Chile
Santiago, Chile
felipearaya@simarobot.com
Hugo Araya
Sima robot
Fablab University of Chile
Santiago, Chile
hugo@simarobot.com
Virginia Dias
Sima robot
Fablab University of Chile
Santiago, Chile
virginia@simarobot.com
ABSTRACT
Despite their rapid development, social robots still face difficulties in achieving smooth communication with humans, particularly in interactions within an emotional context. To contribute to the search for meaningful human-robot interaction (HRI), we present an interaction framework focused on children that introduces a multimodal interaction model to parametrize a social robot's empathic responses according to emotional valence and intensity. This work in progress presents a five-step interaction sequence between a child and a user-programmable social robot (SIMA), covering five basic emotions and four mood/affective states. The goal of the model is to parametrize the robot's empathic emotional responses by considering multimodal communication aspects simultaneously, such as communication chronemics (e.g., time of day, speech reaction time) and kinesics (e.g., amplitude of micro arm movements to mirror emotions). Ultimately, through completion of the five stages, the model aims at boosting emotional self-awareness and learning in children.
CCS CONCEPTS
• Information systems • Human factors • User interfaces • Interaction styles
KEYWORDS
Social robots, empathy, multimodality, interaction, non-verbal
language, emotion.
Workshop on Behavioral Patterns and Interaction Modelling for Personalized Human-Robot Interaction 2020, March 23-26, 2020, Cambridge, United Kingdom. This paper is published under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. To view a copy of this license, visit https://creativecommons.org/licenses/by/4.0/.
1 Introduction
Social robots have emerged as new communication partners [1] and, as such, face the challenge of achieving new, meaningful ways to communicate with humans. Although social robots present new communicative affordances that allow social interaction [2], there is still room for improvement in aspects such as synchronous communication between a robot and a human, and vice versa. For instance, social robots can still improve the empathy and social quality of their simultaneous responses. Empathic communication involves vicariously responding to the perceived emotional state of others [3]. This work in progress posits that tailoring the verbal and non-verbal emotional responses of a social robot to users' self-reported emotional states can increase perceived empathy. Through this enhanced empathy, a number of positive communicational and psychological outcomes can be reached, such as increased rapport, liking, and persuasiveness. Also of importance, this empathic communication model is ultimately intended to reach prosocial objectives in the human user, such as enhanced emotional awareness.
2 Theoretical framework
The literature and theories informing the framework pertain to human-machine communication and social psychology. First, media equation theory [4] states that people tend to treat machines as they would treat people. This theory gives grounds to assume that a meaningful interaction between a humanoid social robot and a child can be emotionally effective, since humans, and particularly children, tend to respond to robots as social actors [5].
Secondly, emotional mimicry in context theory [6] states that emotional mimicry occurs in highly contextual communicational situations and that this mimicry can act as a social regulator. This means that emotional mimicry in a one-on-one communicative
situation can help people better deal with their emotions. Therefore, by having the social robot mimic the child's facial and bodily expression according to the emotion, we have grounds to assume that it can help children better recognize their emotions, offer a way to deal with them, and thus become more emotionally self-aware. Moreover, emotional mimicry according to the user's state may convey a sense of empathy. Although variously defined, for the purposes of this study we understand empathy as an emotional response to the perceived emotions of others [7]. In this case, the social robot emotionally reacts to the perceived emotion of the child, which functions as a triggering input for the robot to display an emotionally tailored response.
Third, emotional self-awareness theory posits that higher attention to ourselves leads to judging our own behavior, fostering recognition of felt emotions [8]. Emotions are caused by a set of complex and synchronized component responses, which often result from adjustments or disappointments in achieving one's goals [9]. Learning to identify emotions is an essential skill for functioning well in life [9]. As young children are still learning to recognize and accept their emotions [10], we posit that social robots can help children reach higher emotional self-awareness, helping the child better manage his or her negative emotions and learn to savor positive emotions even more.
This work in progress presents an integrated framework that aims to parametrize the social robot's verbal and non-verbal communication according to the type of emotion, with the goal of establishing a smooth interactive communication sequence with a child. Preliminarily, our focus is on 8- to 11-year-old children, based on their developmental characteristics [11]. In sum, this research seeks to advance the field of Human-Machine Communication, since the proposed framework can aid in developing desirable traits in social robots, such as empathic responses. [12] has outlined a list of 'desiderata', or basic social functions that a social robot should have, such as: (a) performing multiple speech acts; (b) interacting affectively; (c) responding to more than one command; and (d) speaking purposefully, among others. We believe that the proposed framework represents a step forward in this direction. Moreover, the framework could also help illuminate and improve the social and emotional responses of other social robots, particularly educational ones.
3 Explaining the framework
The framework is applicable to an anthropomorphic social robot, Sima robot (www.simarobot.com). The robot functions through a cell phone embedded in a 3D anthropomorphic body and is able to perform a variety of emotional facial expressions and body movements. The robot's original purpose is to teach preschoolers through educational games; it displays an array of facial emotional expressions and can react emotionally to whether the child has replied correctly or incorrectly to a question. For a small example of the facial emotional expressions of the social robot, see Figure 2.
Figure 1. Sima robot (image source: simarobot.com)
Figure 2. Small example of SIMA’s facial expressions.
The framework has two main components: the verbal and the nonverbal. These have been programmed in a mobile phone application (SimaRobot). The interactions are powered by IBM Watson, a question-answering computer system based on DeepQA software, which is capable of answering questions in natural language [14]. To establish a dialog, the robot uses the smartphone's Automatic Voice Recognition (AVR) feature through the app, similar to a system such as Google Assistant. When the child speaks, Watson activates its dialog features through a language algorithm, based on different keywords and sentence fragments, and is able to respond with the most 'sensible' solution. Therefore, the robot can recognize words via the app and, when "hearing" certain keywords and intents in the child's reply to one of Sima's questions (e.g., "How are you feeling?"), it can trigger a spoken verbal response from an array of (or combination of) replies.
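As an illustrative sketch only (the emotion labels, keyword lists, and reply arrays below are our assumptions, not the actual SimaRobot/Watson implementation), the verbal component can be approximated as a keyword-to-reply lookup:

```python
import random

# Hypothetical keyword lists per emotion; the deployed system uses
# IBM Watson intents trained on keywords and sentence fragments.
EMOTION_KEYWORDS = {
    "happiness": ["happy", "great", "glad"],
    "sadness": ["sad", "down", "unhappy"],
    "anger": ["angry", "mad"],
    "fear": ["scared", "afraid"],
    "surprise": ["surprised", "wow"],
}

# Hypothetical arrays of empathic replies per emotion.
VERBAL_REPLIES = {
    "sadness": ["I'm sorry you feel sad. Why do you think you feel this way?"],
    "happiness": ["That makes me happy too! Tell me more."],
}

def detect_emotion(utterance: str) -> str | None:
    """Return the first emotion whose keyword appears in the child's speech."""
    text = utterance.lower()
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return emotion
    return None

def verbal_response(utterance: str) -> str:
    """Pick a spoken reply from the array for the detected emotion."""
    emotion = detect_emotion(utterance)
    if emotion in VERBAL_REPLIES:
        return random.choice(VERBAL_REPLIES[emotion])
    return "Can you tell me more about how you feel?"  # re-prompt fallback
```

In the deployed system this matching is handled by Watson's intent classification rather than literal substring search.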
The nonverbal component consists of a series of bodily and facial expressions used to respond contingently, with meaningful expressions and gestures, to the child's self-reported emotion. The robot's body movements include, for instance, raising one or both arms, walking, or standing on one leg. In particular, the present framework utilizes the robot's arm movements as a nonverbal means to represent the valence and intensity of the recognized emotion [15], similar to how humans would physically react and express emotions. For instance, the emotion of surprise when, e.g., hearing good news, implies that the robot energetically waves its arms towards the upper body (similar to an excited human gesture).
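Following the parameterized body-language approach of [15], which controls the spatial extent and motion dynamics of a behavior, here is a minimal sketch of how valence and intensity could drive arm-movement amplitude. All numeric values and the mapping itself are illustrative assumptions, not SIMA's actual motor parameters:

```python
from dataclasses import dataclass

@dataclass
class ArmGesture:
    amplitude: float   # fraction of maximum arm travel (0.0-1.0)
    speed: float       # fraction of maximum servo speed (0.0-1.0)
    both_arms: bool    # whether to mirror the gesture on both arms

# Hypothetical (valence, intensity) coordinates per emotion.
EMOTION_SPACE = {
    "surprise": (0.5, 0.9),
    "happiness": (0.8, 0.7),
    "sadness": (-0.7, 0.3),
    "anger": (-0.8, 0.8),
    "fear": (-0.6, 0.7),
}

def gesture_for(emotion: str) -> ArmGesture:
    """Scale movement with intensity: intense emotions (surprise, anger)
    produce wide, fast movements; low-intensity ones (sadness) produce
    small, slow movements. Positive valence engages both arms."""
    valence, intensity = EMOTION_SPACE[emotion]
    return ArmGesture(
        amplitude=intensity,          # wider movement for stronger emotions
        speed=0.3 + 0.7 * intensity,  # never fully still, faster when intense
        both_arms=valence > 0,
    )
```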

Figure 3. Examples of SIMA’s emotional facial and bodily
movements in reaction to the keywords (or related phrases),
including the words “surprise” (left) and “sad” (right).
As for the robot's facial expressions, this nonverbal aspect uses the robot's "face," shown on the telephone screen via the SimaRobot app. Although Sima already uses various facial expressions in the context of its educational games, the framework includes expressions specially developed for empathic communication, for instance those displayed in Figure 3. Because the telephone screen can show video images, Sima's "eyes" can rapidly change form (e.g., from happy to sad eyes) and wink. The robot's facial expressions (see Figure 2) have been developed based on the universal expressions of the five basic affective states identified by several studies, particularly [16]. So far, the framework supports interactive sequences tailored to five basic emotions (happiness, surprise, fear, anger, sadness) and four affective/dispositional states (excited, relaxed, sleepy, and bored). Both responses, verbal and nonverbal, occur simultaneously in response to a verbal input, as programmed via Watson and the phone app. Next, we detail the main interaction stages of the framework.
4 Five interaction stages
The framework considers five levels of an interaction sequence between the child and the robot. To make the conversation run more smoothly, the idea is to first make the child acquainted with the robot at the class level, by having the robot introduce itself and start a casual interaction (e.g., "Hi, my name is Sima and I'm here to help you learn through play. What is your name?"). To increase personalization, two main features have been implemented. First, the robot talks and responds to the child using the child's first name. Several studies account for positive interaction effects when using a person's name, such as increased ratings of a robot's friendliness [17]. Second, each of the five levels contains three main "conversation tracks" throughout, tailored to different times of the day (morning, afternoon, evening; e.g., "Good morning! How are you feeling today?" or "It is late, how about we play a little before you go to bed?"). Each of these responses is given according to the real time of day, which is straightforward to implement since the app can read the phone's clock.
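Because the robot runs on a phone, the time-tailored conversation tracks reduce to a clock check; a minimal sketch, with hour boundaries assumed rather than taken from the framework:

```python
from datetime import datetime

def conversation_track(now: datetime | None = None) -> str:
    """Pick one of the three daily conversation tracks (chronemics).
    Hour boundaries are assumptions, not values from the framework."""
    hour = (now or datetime.now()).hour
    if 5 <= hour < 12:
        return "Good morning! How are you feeling today?"
    if 12 <= hour < 19:
        return "Good afternoon! How are you feeling?"
    return "It is late, how about we play a little before you go to bed?"
```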
We describe hereunder the five levels of the interaction sequence between the child and the robot (a minimal schematic sketch follows the list):
1. Communication initiation. After greeting the child according to the time of day (good morning/afternoon/evening) and having a brief "filler" conversation, the SIMA robot initiates a conversation by asking probing questions about how the child is feeling.
2. Social interaction. When the child responds, the social robot's speech recognition is activated by the occurrence of particular emotion word(s). These words trigger the social robot to respond simultaneously, both verbally and non-verbally, depending on the emotion, initiating a social interaction. In this phase, emotional "mirroring" occurs because the social robot displays both verbal and non-verbal signs of the child's self-reported emotion.
3. Emotional self-awareness. After the child reports a certain emotion, the child is prompted to reflect on their emotional state, as the social robot tries to instigate emotional self-awareness in the child. An example question would be: "Why do you think that you feel this way?"
4. Transformation. In this stage the goal is to transform the emotion, either into a more positive emotional state (in the case of negative emotions) or to savour positive emotions even more (in the case of positive emotions). What is key in this stage is that the social robot attempts this transformation through what we term "positive action," that is, proposing to play one of Sima's educational games.
5. Action. In this stage, provided that the child agrees, the social robot and the child play a game or educational activity. In future versions, we will study creating educational or gaming activities tailored to the particular emotion.
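A minimal schematic of the sequence as a linear state machine (stage names from the list above; the advance/stop logic is our assumption):

```python
from enum import Enum, auto

class Stage(Enum):
    INITIATION = auto()          # 1: greet by time of day, ask about feelings
    SOCIAL_INTERACTION = auto()  # 2: mirror the self-reported emotion
    SELF_AWARENESS = auto()      # 3: "Why do you think you feel this way?"
    TRANSFORMATION = auto()      # 4: propose a 'positive action' (a game)
    ACTION = auto()              # 5: play the game if the child agrees

def next_stage(stage: Stage, child_agreed: bool = True) -> Stage | None:
    """Advance linearly through the sequence; end early if the child
    declines the proposed game at the transformation stage."""
    if stage is Stage.TRANSFORMATION and not child_agreed:
        return None
    order = list(Stage)  # Enum preserves definition order
    i = order.index(stage)
    return order[i + 1] if i + 1 < len(order) else None
```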
In this way, the framework aims to: (1) identify the child's emotional state; (2) make the social robot respond in a socially and emotionally congruent (empathic) way, depending on the emotion the child expressed in the previous stage; (3) parametrize both the verbal and non-verbal emotional responses of the social robot towards the child; and (4) initiate an action with the child (e.g., playing an educational game) that would modify the child's negative emotional state or increase his or her positive emotions. Ultimately, a broader goal is to achieve more meaningful communication between a social robot and a human, in this case the child, in a situation that creates the best possible mood for learning, contributing to an effective learning environment [19].
5 Preliminary discussion
The present framework intends to make empathic responses feasible between a child and a social robot. Its goal is to parametrize the robot's responses by considering the child's emotional state. The framework is integrative because it comprises both verbal and nonverbal expressions of emotion, including the robot's kinesics and other non-verbal communication elements such as facial expressions and body movements. These attempt to be congruent with the emotion expressed by the child and thus have the potential to create an illusion of emotional mirroring, a response congruent with the child's self-reported emotion, in this way enabling meaningful HRI.
Though the framework has been initially tested, our future research will determine (1) whether children can actually recognize the robot's mood, and (2) to what extent the robot's emotional expression affects the children's feelings. For (1), there are empirical grounds to assume that children would be able to recognize the emotions in the social robot, as an experimental study found that participants could distinguish between negative and positive robot mood from the robot's physical movements and speech [18]. For (2), there is both theoretical and empirical evidence to presuppose that Sima's reactions will affect a child's feelings and behaviors, based on mood transfer findings [19].
In initial pre-tests, the phone's Automatic Voice Recognition (AVR) behaved well for the first stage of communication initiation, and we observed that the interaction can flow rather smoothly, particularly in chronemic aspects such as the robot's reaction time. Because it is powered through an app, the robot responds in a timely manner compared to other interactions in which a robot takes a rather long time processing the verbal input. However, in moments when the robot does not recognize the child's speech, there is a communication breakdown. A possible solution is to continue expanding the robot's array of recognized responses, a challenging task in itself (see the fallback sketch after this paragraph). Further, a shortcoming of the current framework is that emotion recognition occurs only through verbal input from the child; the robot is not yet able to identify individuals' emotions from aspects such as facial expressions or tone of voice. It must be noted that adding this possibility comes with an added set of complications (e.g., the child's privacy when activating facial recognition on the phone).
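A minimal sketch of such a fallback, assuming hypothetical `listen`/`speak` wrappers around the app's AVR and text-to-speech calls, and reusing the `detect_emotion` helper from the earlier sketch:

```python
MAX_RETRIES = 2  # assumed value, not a documented SIMA parameter

def ask_with_fallback(listen, speak, prompt: str) -> str | None:
    """Re-prompt on unrecognized speech instead of letting the
    interaction break down; degrade gracefully after MAX_RETRIES."""
    for attempt in range(MAX_RETRIES + 1):
        speak(prompt if attempt == 0 else "Sorry, I didn't catch that. " + prompt)
        utterance = listen()  # hypothetical AVR wrapper returning text or None
        if utterance and detect_emotion(utterance):
            return utterance
    speak("That's okay! How about we play a game instead?")
    return None
```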
Further, it is equally challenging to provide an effective response at every step, because the robot should be able to capture and decode a vast array of phrases. Whereas an adult may be perfectly capable of stating "I feel sad," a child may be more spontaneous in his or her emotional expressions. A relevant improvement would be to extend the AVR to interpret non-verbal utterances such as "grrr," howls, or other spontaneous noises as representative of emotional states, as sketched below.
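A speculative sketch of what such interjection handling could look like; the patterns and their emotion mappings are assumptions, and robust recognition of children's vocalizations remains an open problem:

```python
import re

# Hypothetical interjection patterns mapped to emotion guesses.
INTERJECTION_PATTERNS = [
    (re.compile(r"\bgr+r*\b", re.IGNORECASE), "anger"),            # "grrr"
    (re.compile(r"\b(yay|woo+hoo+)\b", re.IGNORECASE), "happiness"),
    (re.compile(r"\b(ow+|waa+h*)\b", re.IGNORECASE), "sadness"),
]

def emotion_from_interjection(utterance: str) -> str | None:
    """Fall back to interjection matching when no emotion keyword is found."""
    for pattern, emotion in INTERJECTION_PATTERNS:
        if pattern.search(utterance):
            return emotion
    return None
```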
Even though Sima is powered through a telephone app, and the functionality of its voice recognition feature may be similar to Google Assistant's, we believe that one main advantage of the present framework is that it provides situated, embodied interaction with anthropomorphic nonverbal and verbal language, features which have been shown to lead to improved learning outcomes [20]. According to [20], "interactions between a child and robot should be contingent and multimodal [...] during the tutoring sessions and adapt to these". Although the framework considers interactions in only five levels, we believe it represents a first step towards making an interaction with a social robot more personable and empathic, in a manner that can contribute towards better learning outcomes and higher emotional awareness.
6 Conflict of interest statement
The first author declares that this study has been conducted in the
absence of any commercial or financial relationship with Sima
robot that could be construed as a potential conflict of interest.
REFERENCES
[1] Shanyang Zhao. 2006. Humanoid social robots as a medium of communication. New Media & Society 8(3), 401-419. doi: 10.1177/1461444806061951.
[2] Carmina Rodriguez-Hidalgo. 2020. Me and my robot smiled at one another: The process of socially enacted communicative affordance in human-machine communication. Human-Machine Communication 1(1), 4.
[3] Albert Mehrabian and Norman Epstein. 1972. A measure of emotional empathy. Journal of Personality 40(4), 525-543.
[4] Byron Reeves and Clifford Nass. 1996. The Media Equation: How People Treat Computers, Television and New Media Like Real People and Places. Cambridge University Press: UK.
[5] Marco Nalin, Ilaria Baroni, Ivana Kruijff-Korbayová, Lola Cañamero, Matthew Lewis, Aryel Beck, Heriberto Cuayáhuitl, and Alberto Sanna. 2012. Children's adaptation in a multi-session interaction with a humanoid robot. In 21st IEEE International Symposium on Robot and Human Interactive Communication, 351-357.
[6] Ursula Hess and Agneta Fischer. 2013. Emotional mimicry as social regulation. Personality and Social Psychology Review 17, 142-157.
[7] Albert Mehrabian and Norman Epstein. 1972. A measure of emotional empathy. Journal of Personality 40(4), 525-543.
[8] George Herbert Mead. 1934. Mind, Self and Society. University of Chicago Press: Chicago, IL.
[9] Klaus Scherer. 2005. What are emotions? And how can they be measured? Social Science Information 44, 695-729. https://doi.org/10.1177/0539018405058216.
[10] Daniel Goleman. 1996. Emotional Intelligence: Why It Can Matter More Than IQ. Bloomsbury Publishing.
[11] Herbert Ginsburg. Piaget's Theory of Intellectual Development. Prentice-Hall, Inc.
[12] Nikolaos Mavridis. 2015. A review of verbal and non-verbal human-robot interactive communication. Robotics and Autonomous Systems, 22-35. doi: 10.1016/j.robot.2014.09.031.
[13] Masahiro Mori. 1970. The uncanny valley. Energy, 33-35.
[14] IBM Corporation. DeepQA Project Frequently Asked Questions. Retrieved from: https://researcher.watson.ibm.com/researcher/
[15] Junchao Xu, Joost Broekens, Koen Hindriks, and Mark Neerincx. 2015. Mood contagion of robot body language in human robot interaction. Autonomous Agents and Multi-Agent Systems 29, 1216-1248.
[16] Klaus R. Scherer. 2000. Emotion. In M. Hewstone and W. Stroebe (Eds.), Introduction to Social Psychology: A European Perspective (3rd ed., pp. 151-191). Oxford: Blackwell.
[17] Deborah Johanson et al. 2020. Smiling and use of first-name by a healthcare receptionist robot: Effects on user perceptions, attitudes and behaviours. Paladyn, Journal of Behavioral Robotics 11(1), 40-51. doi: 10.1515/pjbr-2020-0008.
[18] Brant Burleson. 2010. The nature of interpersonal communication. In The Handbook of Communication Science. Sage: London.
[19] Lev Vygotsky. 1978. Mind in Society. Harvard University Press.
[20] Nikolaos Mavridis. 2015. A review of verbal and non-verbal human-robot interactive communication. Robotics and Autonomous Systems, 22-35. doi: 10.1016/j.robot.2014.09.031.
[21] Roland Neumann and Fritz Strack. 2000. Mood contagion: The automatic transfer of mood between persons. Journal of Personality and Social Psychology 79, 211-223. doi: 10.1037//0022-3514.79.2.211.
[22] Paul Vogt et al. 2017. Child-robot interactions for second language tutoring to preschool children. Frontiers in Human Neuroscience 11. doi: 10.3389/fnhum.2017.00073.