Catherine Pelachaud

French National Centre for Scientific Research | CNRS · Laboratoire Traitement et Communication de l’Information (LTCI)

About

489 Publications
127,145 Reads
11,028 Citations
Since 2017: 105 research items, 3,895 citations
[Chart: citations per year, 2017–2023]

Publications (489)
Preprint
Full-text available
This paper addresses the challenge of transferring the behavior expressivity style of one virtual agent to another while preserving the shape of behaviors, as it carries communicative meaning. Behavior expressivity style is viewed here as the qualitative properties of behaviors. We propose TranSTYLer, a multimodal transformer-based model that synthesizes...
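As a purely illustrative companion to this abstract, here is a minimal PyTorch-style sketch of a transformer that re-styles a behavior sequence conditioned on a style embedding; the class name BehaviorStyleTransfer and all dimensions are assumptions for illustration, not the published TranSTYLer architecture.

```python
# Minimal sketch: a transformer that re-styles a behavior sequence while the
# decoder maps back to the original feature space; names/sizes are illustrative.
import torch
import torch.nn as nn

class BehaviorStyleTransfer(nn.Module):
    def __init__(self, feat_dim=64, style_dim=16, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.content_proj = nn.Linear(feat_dim, d_model)   # source behavior features
        self.style_proj = nn.Linear(style_dim, d_model)    # target-style embedding
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers)
        self.decoder = nn.Linear(d_model, feat_dim)        # back to behavior space

    def forward(self, behavior, style):
        # behavior: (batch, time, feat_dim); style: (batch, style_dim)
        x = self.content_proj(behavior) + self.style_proj(style).unsqueeze(1)
        return self.decoder(self.encoder(x))

# Usage: transfer target style `s` onto behavior sequences `b`.
model = BehaviorStyleTransfer()
b = torch.randn(2, 100, 64)   # two behavior sequences of 100 frames
s = torch.randn(2, 16)        # target style embeddings
restyled = model(b, s)        # (2, 100, 64)
```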
Chapter
The human face is a key channel of communication in human-human interaction. When communicating, humans spontaneously and continuously display various facial gestures, which convey a wide range of information to their interlocutors. Likewise, appropriate and coherent co-speech facial gestures are essential to render human-like and smooth interaction...
Chapter
During an interaction, interlocutors emit multimodal social signals to communicate their intent by exchanging speaking turns smoothly or through interruptions, and by adapting to their interacting partners, which is referred to as interpersonal synchrony. We are interested in understanding whether the synchrony of multimodal signals could help to disti...
Article
Full-text available
Modeling virtual agents with behavior style is one factor for personalizing human-agent interaction. We propose an efficient yet effective machine learning approach to synthesize gestures driven by prosodic features and text in the style of different speakers including those unseen during training. Our model performs zero-shot multimodal style tran...
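The zero-shot conditioning idea can be illustrated with the hedged sketch below: a style encoder summarizes a reference clip of an arbitrary (possibly unseen) speaker into a style vector that conditions a prosody- and text-driven gesture generator. The class names, feature dimensions, and layer choices are assumptions, not the paper's implementation.

```python
# Sketch of zero-shot style conditioning: summarize a reference clip of any
# speaker into a style embedding, then use it to condition gesture generation.
import torch
import torch.nn as nn

class StyleEncoder(nn.Module):
    def __init__(self, in_dim=80, style_dim=16):
        super().__init__()
        self.gru = nn.GRU(in_dim, 64, batch_first=True)
        self.out = nn.Linear(64, style_dim)

    def forward(self, reference):
        # reference: (batch, time, in_dim) multimodal features of the target speaker
        _, h = self.gru(reference)
        return self.out(h[-1])             # one style vector per reference clip

class GestureGenerator(nn.Module):
    def __init__(self, prosody_dim=26, text_dim=300, style_dim=16, pose_dim=45):
        super().__init__()
        self.gru = nn.GRU(prosody_dim + text_dim + style_dim, 128, batch_first=True)
        self.out = nn.Linear(128, pose_dim)

    def forward(self, prosody, text, style):
        # broadcast the style vector over time and generate a pose sequence
        style_seq = style.unsqueeze(1).expand(-1, prosody.size(1), -1)
        h, _ = self.gru(torch.cat([prosody, text, style_seq], dim=-1))
        return self.out(h)
```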
Article
Modeling virtual agents with behavior style is one factor for personalizing human-agent interaction. We propose an efficient yet effective machine learning approach to synthesize gestures driven by prosodic features and text in the style of different speakers including those unseen during training. Our model performs zero-shot multimodal style tran...
Preprint
Full-text available
In this study, we address the importance of modeling behavior style in virtual agents for personalized human-agent interaction. We propose a machine learning approach to synthesize gestures, driven by prosodic features and text, in the style of different speakers, even those unseen during training. Our model incorporates zero-shot multimodal style...
Preprint
Full-text available
Socially Interactive Agents (SIAs) are physical or virtual embodied agents that display behavior similar to human multimodal behavior. Modeling SIAs' non-verbal behavior, such as speech and facial gestures, has always been a challenging task, given that an SIA can take the role of a speaker or a listener. An SIA must emit appropriate behavior adapted...
Book
SIVA’23 - Socially Interactive Human-like Virtual Agents: from expressive and context-aware multimodal generation of digital humans to understanding the social cognition of real humans. Due to the rapid growth of virtual, augmented, and hybrid reality, together with spectacular advances in artificial intelligence, the ultra-realistic generation and an...
Article
Full-text available
In this work, we focus on human-agent interaction where the role of the socially interactive agent is to optimize the amount of information to give to a user. In particular, we developed a dialog manager able to adapt the agent's conversational strategies to the preferences of the user it is interacting with to maximize the user's engagement during...
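The abstract describes the adaptation goal only at a high level; as a toy illustration of adapting strategy choice to an engagement signal (not the paper's dialog manager), an epsilon-greedy selector over conversational strategies could look like this:

```python
# Toy illustration: adapt conversational strategy choice to the user's
# engagement feedback with an epsilon-greedy bandit. Strategy names are
# hypothetical placeholders, not the system's actual strategy set.
import random

class StrategySelector:
    def __init__(self, strategies, epsilon=0.1):
        self.strategies = strategies
        self.epsilon = epsilon
        self.value = {s: 0.0 for s in strategies}   # running mean engagement
        self.count = {s: 0 for s in strategies}

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(self.strategies)                      # explore
        return max(self.strategies, key=lambda s: self.value[s])       # exploit

    def update(self, strategy, engagement):
        # engagement: scalar feedback estimated from the user's behavior
        self.count[strategy] += 1
        n = self.count[strategy]
        self.value[strategy] += (engagement - self.value[strategy]) / n

selector = StrategySelector(["give_detail", "summarize", "ask_question"])
s = selector.choose()
selector.update(s, engagement=0.7)
```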
Book
The Handbook on Socially Interactive Agents provides a comprehensive overview of the research fields of Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics. Socially Interactive Agents (SIAs), whether virtually or physically embodied, are autonomous agents that are able to perceive an environment including people or othe...
Chapter
Full-text available
This chapter contains a collection of interviews on current challenges and future directions that researchers are faced with when working with Socially Interactive Agents (SIAs).
Preprint
Full-text available
Modeling virtual agents with behavior style is one factor for personalizing human-agent interaction. We propose an efficient yet effective machine learning approach to synthesize gestures driven by prosodic features and text in the style of different speakers including those unseen during training. Our model performs zero-shot multimodal style tran...
Preprint
An image schema is a recurrent pattern of reasoning in which one entity is mapped onto another. Image schemas are similar to conceptual metaphors and are also related to metaphoric gestures. Our main goal is to generate metaphoric gestures for an Embodied Conversational Agent. We propose a technique to learn the vector representation of image schemas. As far...
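One simple, hypothetical way to obtain vector representations of image schemas (not necessarily the technique proposed in the paper) is to average embeddings of words annotated with each schema; the toy word vectors and schema-to-word mapping below are placeholders.

```python
# Sketch only: represent each image schema by the mean embedding of words
# annotated with it, then map new words to their nearest schema by cosine.
import numpy as np

rng = np.random.default_rng(0)
word_vec = {w: rng.normal(size=50) for w in
            ["up", "rise", "grow", "inside", "enter", "box", "path", "journey", "road"]}

schema_words = {
    "VERTICALITY": ["up", "rise", "grow"],
    "CONTAINER": ["inside", "enter", "box"],
    "SOURCE_PATH_GOAL": ["path", "journey", "road"],
}

schema_vec = {s: np.mean([word_vec[w] for w in ws], axis=0)
              for s, ws in schema_words.items()}

def closest_schema(word):
    """Return the image schema whose vector is most cosine-similar to the word."""
    v = word_vec[word]
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(schema_vec, key=lambda s: cos(v, schema_vec[s]))

print(closest_schema("rise"))
```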
Chapter
Full-text available
During an interaction, interactants exchange speaking turns. Exchanges can be done smoothly or through interruptions. Listeners can display backchannels, send signals to grab the speaking turn, wait for the speaker to yield the turn, or even interrupt and grab the speaking turn. Interruptions are very frequent in natural interactions. To create bel...
Conference Paper
Full-text available
We propose a semantically-aware speech driven model to generate expressive and natural upper-facial and head motion for Embodied Conversational Agents (ECA). In this work, we aim to produce natural and continuous head motion and upper-facial gestures synchronized with speech. We propose a model that generates these gestures based on multimodal inpu...
Chapter
Studies in human-human interaction have introduced the concept of F-formation to describe the spatial organization of participants during social interaction. This paper aims at detecting such F-formations in images of video sequences. The proposed approach combines a voting scheme in the visual field of each participant and a memory process to make...
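A hedged sketch of the voting idea: each participant casts votes into a 2-D grid along their orientation, and cells that accumulate votes from several participants suggest an o-space centre of an F-formation. Grid resolution and vote distances are illustrative assumptions, not the paper's parameters.

```python
# Sketch of a visual-field voting scheme for F-formation detection.
import numpy as np

def vote_map(positions, orientations, grid=(100, 100), cell=0.1, dist=1.0):
    """positions: (N, 2) in metres; orientations: (N,) in radians. Returns a vote grid."""
    votes = np.zeros(grid)
    for (x, y), theta in zip(positions, orientations):
        # sample candidate o-space centres in front of the participant
        for d in np.linspace(0.3, dist, 8):
            cx, cy = x + d * np.cos(theta), y + d * np.sin(theta)
            i, j = int(cx / cell), int(cy / cell)
            if 0 <= i < grid[0] and 0 <= j < grid[1]:
                votes[i, j] += 1
    return votes

positions = np.array([[1.0, 1.0], [2.0, 1.0], [1.5, 2.0]])
orientations = np.array([0.0, np.pi, -np.pi / 2])     # roughly facing each other
votes = vote_map(positions, orientations)
print(np.unravel_index(votes.argmax(), votes.shape))  # cell with the most votes
```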
Article
Full-text available
This report documents the program and the outcomes of Dagstuhl Seminar 21381 "Conversational Agent as Trustworthy Autonomous System (Trust-CA)". First, we present the abstracts of the talks delivered by the Seminar’s attendees. Then we report on the origin and process of our six breakout (working) groups. For each group, we describe its contributor...
Conference Paper
This paper presents ongoing research that aims to improve the interaction between a human and an Embodied Conversational Agent (ECA). The main idea is to model the interactive loop between human and agent such that the virtual agent can continuously adapt its behavior according to its partner. This work, based on recurrent neural networ...
Preprint
Full-text available
We propose a semantically-aware speech driven method to generate expressive and natural upper-facial and head motion for Embodied Conversational Agents (ECA). In this work, we tackle two key challenges: produce natural and continuous head motion and upper-facial gestures. We propose a model that generates gestures based on multimodal input features...
Conference Paper
Full-text available
To communicate with human interlocutors, embodied conversational agents use multimodal signals. The goal of our project was to implement social eye-gaze in an agent and to evaluate its effect on interlocutors. Specifically, we focused on the being-seen-feeling (BSF) during a socially interactive context. This feeling is labeled as the inference we have...
Conference Paper
Full-text available
The development of applications with intelligent virtual agents (IVA) often comes with integration of multiple complex components. In this article we present the Agents United Platform: an open source platform that researchers and developers can use as a starting point to setup their own multi-IVA applications. The new platform provides developers...
Book
The Handbook on Socially Interactive Agents provides a comprehensive overview of the research fields of Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics. Socially Interactive Agents (SIAs), whether virtually or physically embodied, are autonomous agents that are able to perceive an environment including people or othe...
Article
Full-text available
Adaptation is a key mechanism in human–human interaction. In our work, we aim at endowing embodied conversational agents with the ability to adapt their behavior when interacting with a human interlocutor. With the goal to better understand what the main challenges concerning adaptive agents are, we investigated the effects on the user’s experience...
Chapter
Full-text available
Communicative gestures and speech acoustics are tightly linked. Our objective is to predict the timing of gestures according to the acoustics. That is, we want to predict when a certain gesture occurs. We develop a model based on a recurrent neural network with an attention mechanism. The model is trained on a corpus of natural dyadic interaction where
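For illustration, a recurrent network with additive attention that classifies a speech segment as containing a gesture or not could be sketched as follows; feature sizes and layer widths are assumptions, not the trained model from the paper.

```python
# Sketch: bidirectional GRU with attention pooling that predicts, for a speech
# segment, whether a gesture occurs (NoGesture vs. HasGesture).
import torch
import torch.nn as nn

class GestureTiming(nn.Module):
    def __init__(self, acoustic_dim=26, hidden=64):
        super().__init__()
        self.gru = nn.GRU(acoustic_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)     # one attention score per frame
        self.cls = nn.Linear(2 * hidden, 2)      # NoGesture / HasGesture logits

    def forward(self, acoustics):
        # acoustics: (batch, frames, acoustic_dim), e.g. F0, energy, MFCCs
        h, _ = self.gru(acoustics)
        w = torch.softmax(self.attn(h), dim=1)   # attention weights over frames
        context = (w * h).sum(dim=1)             # attention-pooled segment summary
        return self.cls(context)

logits = GestureTiming()(torch.randn(4, 200, 26))  # 4 segments of 200 frames
```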
Article
Full-text available
Research over the past decades has demonstrated the explanatory power of emotions, feelings, motivations, moods, and other affective processes when trying to understand and predict how we think and behave. In this consensus article, we ask: has the increasingly recognized impact of affective phenomena ushered in a new era, the era of affectivism?
Article
Full-text available
Language resources for studying doctor–patient interaction are rare, primarily due to the ethical issues related to recording real medical consultations. Rarer still are resources that involve more than one healthcare professional in consultation with a patient, despite many chronic conditions requiring multiple areas of expertise for effective tre...
Conference Paper
Full-text available
Politeness behaviors could affect individuals' decisions heavily in their daily lives and may therefore also play an important role in human-agent interactions. This study considers the impact of politeness behaviors made by a virtual agent, already in a small face-to-face conversational group with another agent, on a human participant as they appr...
Preprint
Full-text available
Communicative gestures and speech prosody are tightly linked. Our objective is to predict the timing of gestures according to the prosody. That is, we want to predict when a certain gesture occurs. We develop a model based on a recurrent neural network with an attention mechanism. The model is trained on a corpus of natural dyadic interaction where th...
Article
Full-text available
Engagement is a concept of the utmost importance in human-computer interaction, not only for informing the design and implementation of interfaces, but also for enabling more sophisticated interfaces capable of adapting to users. While the notion of engagement is actively being studied in a diverse set of domains, the term has been used to refer to...
Article
Full-text available
Humans have the ability to convey an array of emotions through complex and rich touch gestures. However, it is not clear how these touch gestures can be reproduced through interactive systems and devices in a remote mediated communication context. In this paper, we explore the design space of device-initiated touch for conveying emotions with an in...
Chapter
Full-text available
Embodied Conversational Agents (ECAs) are a promising medium for human-computer interaction, since they are capable of engaging users in real-time face-to-face interaction [1, 2]. Users’ formed impressions of an ECA (e.g. favour or dislike) could be reflected behaviourally [3, 4]. These impressions may affect the interaction and could even remain a...
Conference Paper
Full-text available
A myriad of applications involve the interaction of humans with machines, such as reception agents, home assistants, chatbots or autonomous vehicles' agents. Humans can control the virtual agents by means of various modalities including sound, vision, and touch. As the number of these applications increases, a key problem is the requirement of in...
Article
Full-text available
An Embodied Conversational Agent (ECA) is a virtual character designed to interact with humans in the most natural way. In recent years, ECAs have been deployed in various contexts, such as commercial consulting and social training. In the context of social training, the virtual agent should be able to express different social attitudes in orde...
Article
The papers in this special section were presented at the 14th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2019) that was held in Lille, France, 14–18 May 2019.
Conference Paper
Full-text available
We propose a paradigm called Skin-On interfaces, in which interactive devices have their own (artificial) skin, thus enabling new forms of input gestures for end-users (e.g. twist, scratch). Our work explores the design space of Skin-On interfaces by following a bio-driven approach: (1) From a sensory point of view, we study how to reproduce the lo...
Article
Full-text available
In social interactions between humans and Embodied Conversational Agents (ECAs) conversational interruptions may occur. ECAs should be prepared to detect, manage and react to such interruptions in order to keep the interaction smooth, natural and believable. In this paper, we examined nonverbal reactions exhibited by an interruptee during conversat...
Conference Paper
Full-text available
A social interaction implies a social exchange between two or more persons, where they adapt and adjust their behaviors in response to their interaction partners. With the growing interest in human-agent interactions, it is desirable to make these interactions more natural and human-like. In this context, we aim at enhancing the quality of the inte...
Conference Paper
Full-text available
In recent years, engagement modeling has gained increasing attention due to the important role it plays in human-agent interaction. The agent should be able to detect, in real time, the engagement level of the user in order to react accordingly. In this context, our goal is to develop a computational model to predict the engagement level of the user i...
Article
Full-text available
In this paper we present a computational model for managing the impressions of warmth and competence (the two fundamental dimensions of social cognition) of an Embodied Conversational Agent (ECA) while interacting with a human. The ECA can choose among four different self-presentational strategies eliciting different impressions of warmth and/or co...
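As a toy illustration only (the paper's computational model is richer), strategy selection driven by estimated warmth and competence could be sketched as picking the strategy whose expected effect moves the current impression closest to a target; the strategy names and effect values below are made-up placeholders.

```python
# Toy sketch: choose the self-presentational strategy whose assumed effect on
# (warmth, competence) brings the estimated impression closest to a target.
STRATEGIES = {                      # (delta_warmth, delta_competence), placeholders
    "self_promotion":  (-0.05, 0.20),
    "ingratiation":    (0.20, -0.05),
    "exemplification": (0.10, 0.10),
    "modesty":         (0.15, -0.10),
}

def choose_strategy(current, target):
    """current/target: (warmth, competence) estimates in [0, 1]."""
    def distance_after(effect):
        w = current[0] + effect[0] - target[0]
        c = current[1] + effect[1] - target[1]
        return w * w + c * c
    return min(STRATEGIES, key=lambda s: distance_after(STRATEGIES[s]))

print(choose_strategy(current=(0.4, 0.6), target=(0.8, 0.7)))  # -> "ingratiation"
```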
Conference Paper
Full-text available
This paper presents progress and challenges in developing a platform for multi-character, argumentation-based interaction with a group of virtual coaches for healthcare advice and promotion of healthy behaviours. Several challenges arise in the development of such a platform, e.g., choosing the most effective way of utilising argumentation between...
Conference Paper
Full-text available
When interacting with others, we form an impression that can be characterized along the two psychological dimensions of warmth and competence. By managing these impressions, a high level of engagement in an interaction can be maintained and reinforced. Our aim is to develop a virtual agent that can make and maintain a positive impression on the user, which can help in i...
Conference Paper
Full-text available
Our objective is to develop a machine-learning model that allows a virtual agent to automatically perform appropriate communicative gestures. Our first step is to compute when a gesture should be performed. We express this as a classification problem: we initially split the data into a NoGesture class and a HasGesture class. We develop a model based on r...
Conference Paper
Full-text available
Agents (virtual/physical) in a learning environment can be introduced in different roles, such as a tutor, mentor, motivator, expert, or peer student. Each agent type brings its own expertise, creating a unique social relationship with students. Depending on their role, agents have specific goals and beliefs, as well as attitudes towards the learners,