January 2020
Despite their rapid development, social robots still face difficulties in achieving smooth communication with humans, particularly in interactions within an emotional context. To contribute to this search for meaningful human-robot interaction (HRI), we present an interaction framework focused on children that introduces a multimodal interaction model to parametrize the social robot's empathic responses according to emotional valence type and intensity. This work in progress presents a five-step interaction sequence between a child and a user-programmable social robot (SIMA), covering five basic emotions and four mood/affective states. The goal of the model is to parametrize the robot's empathic emotional responses by considering multiple multimodal communication aspects simultaneously, such as communication chronemics (e.g. time of day, speech reaction time) and kinesics (e.g. amplitude of micro arm movements to mirror emotions). Ultimately, through completion of the five stages, the model aims to boost emotional self-awareness and learning in children.
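To illustrate the kind of parametrization the abstract describes, the following is a minimal sketch in Python. The paper does not publish SIMA's actual model, so every name, value, and mapping here is a hypothetical assumption: it simply maps an emotion's valence and intensity, plus time of day, to a speech reaction delay (chronemics) and an arm-movement amplitude (kinesics).

```python
from dataclasses import dataclass

# Hypothetical valence signs for five basic emotions (illustrative, not from the paper).
BASIC_EMOTIONS = {"joy": +1, "surprise": 0, "sadness": -1, "anger": -1, "fear": -1}

@dataclass
class EmpathicResponse:
    reaction_delay_s: float  # chronemics: how long the robot waits before speaking
    arm_amplitude: float     # kinesics: amplitude of mirroring arm micro-movements (0..1)

def parametrize(emotion: str, intensity: float, hour: int) -> EmpathicResponse:
    """Map a detected emotion, its intensity (0..1), and the hour of day
    to empathic response parameters. All constants are illustrative."""
    valence = BASIC_EMOTIONS[emotion]
    # Negative-valence emotions get a slower, calmer reply; positive ones a quicker one.
    delay = 0.8 - 0.3 * valence * intensity
    # Mirror the emotion's intensity with arm amplitude, damped in the evening (chronemics).
    damping = 0.5 if hour >= 20 else 1.0
    amplitude = min(1.0, intensity * damping)
    return EmpathicResponse(reaction_delay_s=round(delay, 2), arm_amplitude=amplitude)

print(parametrize("joy", 0.8, 10))      # quick response, full mirroring
print(parametrize("sadness", 0.8, 21))  # slower response, damped mirroring
```

The point of the sketch is only that both temporal and gestural channels are adjusted from the same (valence, intensity) input at once, which is what the abstract means by parametrizing multimodal aspects simultaneously.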