Elisabetta Bevacqua

  • Brest National Engineering School

About

86 Publications
22,732 Reads
1,477 Citations
Introduction
Current institution
Brest National Engineering School

Publications (86)
Conference Paper
Full-text available
In emergency medical procedures, positive and trusting interactions between followers and leaders are imperative. That interaction is even more important when a virtual agent assumes the leader role and a human assumes the follower role. In order to manage the human-computer interaction, situational leadership is employed to match the human followe...
Conference Paper
Full-text available
We propose a virtual medical assistant to guide both novice and expert caregivers through a procedure without the direct help of medical professionals. Our medical assistant uses situational leadership to handle all interaction with a caregiver, which works by identifying the readiness level of the caregiver in order to match them with an appropria...
Conference Paper
Full-text available
In a medical emergency in which an amateur caregiver is separated from medical experts by space and/or time, a virtual assistant could be useful in order to guide the caregiver through the required procedure successfully. A successful procedure is one that preserves and improves the health of the patient and maintains a positive interaction between...
Conference Paper
Full-text available
In emergency medical procedures, positive and trusting interactions between followers and leaders are imperative. That relationship is even more important when a virtual agent assumes the leader role and a human assumes the follower role. In order to manage the human-computer interaction, situational leadership is employed to match the human to an a...
Conference Paper
Full-text available
This paper presents a new taxonomy of non-technical skills, communicative intentions, and behavior for an individual acting as a medical coordinator. In medical emergency situations, a leader among the group is imperative to both patient health and team emotional and mental health. Situational Leadership is used to make clear and easy-to-follow gui...
Conference Paper
Full-text available
In a medical environment, coordination between medical staff is imperative. In cases in which a human doctor or medical coordinator is not present, patient care, particularly from non-experts, becomes more difficult. The difficulty increases when care is completed at a remote site, for example, on a manned mission to Mars. Communication capability...
Conference Paper
Full-text available
This research work introduces virtual embodied tutors in Virtual Environments for learning, dedicated to learning procedures for industrial systems. We present a communicative behavior which, integrated into a pedagogical scenario, makes it possible on the one hand to realize the pedagogical communicative actions at a semantic level (e.g., the tutor explains the...
Article
Full-text available
Our research work proposes an adaptive and embodied virtual tutor based on intelligent tutoring systems. The domain model is represented in our work by a virtual environment meta-model and the interface by an embodied conversational agent. Our main contribution concerns the tutor model, which is able to adapt the execution of a pedagogical scenario...
Article
Subtle phenomena rooted in our body dynamics affect the reactive and evolutive parts of every human interaction. The authors’ decision model allows for adaptive physical interactions between a human and a virtual agent. This article presents an evaluation of that model in terms of agent believability, the user’s feeling of co-presence, and overall...
Conference Paper
Full-text available
Virtual Reality and immersive experiences, which allow players to share the same virtual environment as the characters of a virtual world, have gained more and more interest recently. In order to conceive these immersive virtual worlds, one of the challenges is to give to the characters that populate them the ability to express behaviors that can s...
Conference Paper
Full-text available
This paper presents research work that aims to improve embodied conversational agents acting as tutors by endowing them with the ability to generate feedback during pedagogical interactions with learners. The virtual agent's feedback and the interpretation of the user's feedback...
Conference Paper
Full-text available
This paper introduces a new research work that aims to improve embodied conversational agents with tutor behavior by endowing them with the capability to generate feedback in pedagogical interactions with learners. The virtual agent feedback and the interpretation of the user’s feedback are based on the knowledge of the environment (informed virtua...
Conference Paper
This paper presents a platform dedicated to full-body interaction between a virtual agent and a human or between two virtual agents. It is based on the notion of coupling and the metaphor of alive communication, which come from studies in psychology. The platform, based on a modular architecture, is composed of modules that communicate through me...
Poster
Full-text available
This research work aims to define a model for real-time simulation of multimodal and social behavior of virtual agents when interacting with a learner in an informed virtual environment. We present the main ideas of a cognitive feedback model based on the knowledge of the environment.
Conference Paper
This paper presents a model that provides adaptive and evolutive interaction between a human and a virtual agent. After introducing the theoretical justifications, the aliveness metaphor and the notion of coupling are presented. Then, we propose a formalization of the model that relies on the temporal evolution of the coupling between participants...
Conference Paper
Full-text available
We present a human-agent interaction based on a theatrical mirroring game. The user and the agent imitate each other's body movements and introduce, from time to time, changes by proposing a new movement. The agent responses are linked to the game scenario but also to the user's behavior, which makes each game session unique. This demonstration has...
Article
Full-text available
This paper presents a robust and anticipative real-time gesture recognition system and its motion quality analysis module. By utilizing a motion capture device, the system recognizes gestures performed by a human, where the recognition process is based on skeleton analysis and motion feature computation. Gestures are collected from a single person. Skele...
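The abstract describes the approach only at a high level; as a rough illustration of what skeleton-based motion-feature computation can look like (not the paper's actual pipeline; the joint indices, thresholds, and gesture labels below are hypothetical), here is a minimal Python sketch:

```python
# Illustrative sketch only (assumed, not the paper's system): computing simple
# motion features from motion-capture skeleton frames and recognizing a
# raised-hand gesture from them, in the spirit of "skeleton analysis and
# motion feature computation" mentioned in the abstract.
import numpy as np

def motion_features(frames: np.ndarray, fps: float = 30.0) -> dict:
    """frames: (T, J, 3) array of J joint positions over T frames."""
    velocity = np.diff(frames, axis=0) * fps          # per-joint velocity
    speed = np.linalg.norm(velocity, axis=2).mean()   # mean speed over joints/frames
    # Hypothetical joint indices: 0 = head, 5 = right hand; axis 1 = height.
    hand_above_head = (frames[:, 5, 1] > frames[:, 0, 1]).mean()
    return {"mean_speed": float(speed),
            "hand_above_head_ratio": float(hand_above_head)}

def recognize(frames: np.ndarray) -> str:
    """Toy rule-based classifier over the computed motion features."""
    f = motion_features(frames)
    if f["hand_above_head_ratio"] > 0.5:
        return "hand_raised"
    if f["mean_speed"] < 0.01:
        return "idle"
    return "unknown"
```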
Conference Paper
Full-text available
This paper presents a study of the dynamic coupling between a user and a virtual character during body interaction. Coupling is directly linked with other dimensions, such as co-presence, engagement, and believability, and was measured in an experiment that allowed users to describe their subjective feelings about those dimensions of interest. The...
Article
Full-text available
Convincing conversational agents require a coherent set of behavioral responses that can be interpreted by a human observer as indicative of a personality. This paper discusses the continued development and subsequent evaluation of virtual agents based on sound psychological principles. We use Eysenck's theoretical basis to explain aspects of the c...
Article
Full-text available
We present a computational model that generates listening behaviour for a virtual agent. It triggers backchannel signals according to the user's visual and acoustic behaviour. The appropriateness of the backchannel algorithm in a user-agent storytelling situation has been evaluated by naïve participants, who judged the algorithm-ruled timing of...
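As a rough illustration of the kind of cue-driven backchannel triggering the abstract describes (this is not the authors' algorithm; the cue names, thresholds, and probabilities are assumptions for the sketch), a minimal Python example:

```python
# Minimal illustrative sketch: a rule-based backchannel trigger that fires on
# simple acoustic/visual cues such as a speech pause, a head nod, or a pitch
# drop, roughly in the spirit of the model summarized in the abstract.
import random
from dataclasses import dataclass

@dataclass
class UserState:
    pause_ms: float      # silence since the user's last word
    head_nod: bool       # whether a head nod was detected
    pitch_drop: bool     # falling pitch at the end of a phrase

# Hypothetical cue probabilities; a real system would tune or learn these.
CUE_PROBABILITIES = {"pause": 0.4, "head_nod": 0.6, "pitch_drop": 0.5}

def should_backchannel(state: UserState) -> bool:
    """Return True when the agent should emit a backchannel signal."""
    if state.pause_ms > 300 and random.random() < CUE_PROBABILITIES["pause"]:
        return True
    if state.head_nod and random.random() < CUE_PROBABILITIES["head_nod"]:
        return True
    if state.pitch_drop and random.random() < CUE_PROBABILITIES["pitch_drop"]:
        return True
    return False

if __name__ == "__main__":
    print(should_backchannel(UserState(pause_ms=450, head_nod=False, pitch_drop=True)))
```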
Conference Paper
Full-text available
Sensitive Artificial Listener (SAL) is a multimodal dialogue system which allows users to interact with virtual agents. Four characters with different emotional traits engage users in emotionally coloured interactions. They not only encourage the users into talking but also try to drag them towards specific emotional states. Despite the agents' very...
Chapter
Full-text available
This chapter deals with the communication of persuasion. Only a small percentage of communication involves words: as the old saying goes, “it’s not what you say, it’s how you say it”. While this likely underestimates the importance of good verbal persuasion techniques, it is accurate in underlining the critical role of non-verbal behaviour during f...
Chapter
Full-text available
In face-to-face conversations listeners provide feedback and comments at the same time as speakers are uttering their words and sentences. This ‘talk’ in the backchannel provides speakers with information about reception and acceptance – or lack thereof – of their speech. Listeners, through short verbalisations and non-verbal signals, show how they...
Article
Full-text available
This paper describes a substantial effort to build a real-time interactive multimodal dialogue system with a focus on emotional and non-verbal interaction capabilities. The work is motivated by the aim to provide technology with competences in interpreting and producing the emotional and non-verbal behaviours required to sustain a conversational di...
Article
Full-text available
Convincing conversational agents require a coherent set of behavioral responses that can be interpreted by a human observer as indicative of a personality. This paper discusses the continued development and subsequent evaluation of virtual agents based on sound psychological principles. We use Eysenck's theoretical basis to explain aspects of the c...
Conference Paper
Full-text available
We have developed a general-purpose, modular architecture for an embodied conversational agent (ECA). Our agent is able to communicate using verbal and nonverbal channels like gaze, facial expressions, and gestures. Our architecture follows the SAIBA framework, which defines a three-step process and communication protocols. In our implementation of SAIB...
Conference Paper
Full-text available
This demonstration aims to showcase the recently completed SEMAINE system. The SEMAINE system is a publicly available, fully autonomous Sensitive Artificial Listeners (SAL) system that consists of virtual dialog partners based on audiovisual analysis and synthesis (see http://semaine.opendfki.de/wiki). The system runs in real-time, and combines inc...
Conference Paper
Full-text available
Smile is one of the most often used nonverbal signals. Depending on when, how and where it is displayed, it may convey various meanings. We believe that introducing a variety of smiles may improve the communicative skills of embodied conversational agents. In this paper we present ongoing research on the role of smile in embodied conversationa...
Article
Full-text available
This paper presents a generic, modular and interactive architecture for an embodied conversational agent called Greta. It is a 3D agent able to communicate with users using verbal and nonverbal channels like gaze, head and torso movements, facial expressions and gestures. Our architecture follows the SAIBA framework that defines a modular structure, func...
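For readers unfamiliar with SAIBA, the framework referenced here separates intent planning, behaviour planning, and behaviour realization, with FML and BML as the intermediate representations. A purely illustrative Python sketch of that three-stage pipeline follows (the data structures and markup are simplified stand-ins, not Greta's actual FML/BML):

```python
# Purely illustrative sketch of the SAIBA three-stage pipeline:
# intent planning -> behavior planning -> behavior realization,
# with heavily simplified FML- and BML-like structures in between.
from typing import Dict, List

def plan_intent(text: str) -> Dict[str, str]:
    """Intent planning: decide *what* to communicate (simplified FML content)."""
    return {"performative": "inform", "text": text, "emotion": "neutral"}

def plan_behavior(fml: Dict[str, str]) -> List[str]:
    """Behavior planning: map the intention to multimodal signals (BML-like)."""
    signals = [f'<speech>{fml["text"]}</speech>']
    if fml["performative"] == "inform":
        signals.append('<gaze target="user"/>')
        signals.append('<gesture lexeme="beat"/>')
    return signals

def realize(signals: List[str]) -> None:
    """Behavior realization: in Greta this drives the animation player;
    here we simply print the scheduled signals."""
    for s in signals:
        print("realizing:", s)

if __name__ == "__main__":
    realize(plan_behavior(plan_intent("Hello, nice to meet you.")))
```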
Conference Paper
Full-text available
One of the most desirable characteristics of an Embodied Conversational Agent (ECA) is the capability of interacting with users in a human-like manner. While listening to a user, an ECA should be able to provide backchannel signals through visual and acoustic modalities. In this work we propose an improvement of our previous system to generate mult...
Conference Paper
Full-text available
This paper presents the large audiovisual laughter database recorded as part of the AVLaughterCycle project held during the eNTERFACE'09 Workshop in Genova. 24 subjects participated. The freely available database includes audio signal and video recordings as well as facial motion tracking, thanks to markers placed on the subjects' face. Annotatio...
Chapter
Full-text available
What will it be like to admit Artificial Companions into our society? How will they change our relations with each other? How important will they be in the emotional and practical lives of their owners – since we know that people became emotionally dependent even on simple devices like the Tamagotchi? How much social life might they have in contact...
Article
Full-text available
The AVLaughterCycle project aims at developing an audiovisual laughing machine, able to detect and respond to user’s laughs. Laughter is an important cue to reinforce the engagement in human-computer interactions. As a first step toward this goal, we have implemented a system capable of recording the laugh of a user and responding to it with a simi...
Conference Paper
Full-text available
Sensitive Artificial Listeners (SAL) are virtual dialogue partners based on audiovisual analysis and synthesis. Despite their very limited verbal understanding, they intend to engage the user in a conversation by paying attention to the user's emotions and nonverbal expressions. The SAL characters have their own emotionally defined personality, and...
Technical Report
Full-text available
The AVLaughterCycle project aims at developing an audiovisual laughing machine, capable of recording the laughter of a user and responding to it with a machine-generated laugh linked to the input laughter. During the project, an audiovisual laughter database was recorded, including facial point tracking, thanks to the Smart Sensor Integration...
Conference Paper
Full-text available
In this paper, the AVLaughterCycle database is presented. The database consists of the audio and facial motion capture recordings of 24 subjects watching a funny video. It is the first database of laughter combining these two modalities. There are around 1000 laughter episodes in the database, covering a large variety of shapes. The database annot...
Conference Paper
Full-text available
Sensitive artificial listeners (SAL) are virtual dialogue partners who, despite their very limited verbal understanding, intend to engage the user in a conversation by paying attention to the user's emotions and non-verbal expressions. The SAL characters have their own emotionally defined personality, and attempt to drag the user towards their domi...
Conference Paper
Full-text available
How do we construct credible personalities? The current SAL (Sensitive Artificial Listeners) characters were constructed intuitively and can be unconvincing. In addressing these issues, this paper considers a theory of personality and associated emotional traits, and discusses how behaviours associated with personality types in people may be adapte...
Conference Paper
Full-text available
How do we construct credible personalities? The current SAL (Sensitive Artificial Listeners) characters were constructed intuitively and can be unconvincing. In addressing these issues, this paper considers a theory of personality and associated emotional traits, and discusses how behaviours associated with personality types in people may be adapte...
Conference Paper
This paper describes an approach used to influence the conversational agent Greta's mental state. The beginning of the paper introduces the problem of conversational agents, especially in the listener role. The listener's backchannels also influence its mental state. A simple agent state manager was developed to impact Greta's internal...
Conference Paper
Full-text available
We have developed a general-purpose, modular architecture for an Embodied Conversational Agent (ECA) called Greta. Our 3D agent is able to communicate using verbal and nonverbal channels like gaze, head and torso movements, facial expressions and gestures. It follows the SAIBA framework (10) and the MPEG-4 (6) standards. Our system is optimize...
Conference Paper
Full-text available
Within the Sensitive Artificial Listening Agent project, we propose a system that computes the behaviour of a listening agent. Such an agent must exhibit behaviour variations depending not only on its mental state towards the interaction (e.g., whether or not it agrees with the speaker) but also on the agent's characteristics such as its emotional trait...
Article
Full-text available
In this chapter we aim to describe the work required to create multimodal virtual agents endowed with significant communicative and social capabilities. We build on our extensive knowledge of Conversational Agents to implement autonomous, human-looking characters capable of exhibiting credible social behaviours...
Article
Full-text available
ABSTRACT To ensure a user's engagement in a human-machine interaction mediated by a virtual agent, we must endow these agents with communication and interaction capabilities, and allow them to communicate socially and emotionally with a user. To this end, we present various models ranging from the capac...
Conference Paper
Full-text available
Embodied conversational agents should be able to provide feedback on what a human interlocutor is saying. We are compiling a list of facial feedback expressions that signal attention and interest, grounding and attitude. As expressions need to serve many functions at the same time and most of the component signals are ambiguous, it is important t...
Article
Full-text available
One of the most desirable characteristics of an intelligent interactive system is its capability of interacting with users in a natural way. An example of such a system is the embodied conversational agent (ECA) that has a humanoid aspect and the capability of communicating with users through multiple modalities such as voice, gesture, facial expr...
Conference Paper
Full-text available
In this paper we present an agent that can analyse certain human full-body movements in order to respond in an expressive manner with copying behaviour. Our work focuses on the analysis of human full-body movement for animating a virtual agent, called Greta, able to perceive and interpret users' expressivity and to respond properly. Our system ta...
Article
Full-text available
This work is about multimodal and expressive synthesis for virtual agents, based on the analysis of actions performed by human users. As input we consider the image sequence of the recorded human behavior. Computer vision and image processing techniques are incorporated in order to detect cues needed for expressivity feature extraction. The multimo...
Conference Paper
Full-text available
Embodied Conversational Agents (ECAs) are a new paradigm of computer interface with a human-like aspect that allow users to interact with the machine through natural speech, gestures, facial expressions, and gaze. In this paper we present a head animation system for our ECA Greta and we focus on two of its aspects: the expressivity of movement an...
Conference Paper
Full-text available
We present a scenario whereby an agent senses, interprets and copies a range of facial and gesture expressions from a person in the real world. Input is obtained via a video camera and processed initially using computer vision techniques. It is then processed further in a framework for agent perception, planning and behaviour generation in order t...
Conference Paper
Full-text available
One of the major problems of users' interaction with Embodied Conversational Agents (ECAs) is having the conversation last more than a few seconds: after being amused and intrigued by the ECAs, users may rapidly find the restrictions and limitations of the dialog systems; they may perceive the repetition of the ECAs' animation; they may find the b...
Article
Full-text available
We aim at the realization of an Embodied Conversational Agent able to interact naturally and emotionally with the user. In particular, the agent should behave expressively. Specifying, for a given emotion, its corresponding facial expression will not produce the sensation of expressivity. To do so, one needs to specify parameters such as intensity, tens...
Article
Full-text available
In this paper we present an evaluation study on mimicry performed by an Embodied Conversational Agent while being a listener during an interaction with a human user. Previous research has shown the importance of mimicry in human-human interaction, highlighting its relation with the level of engagement between interactants. In the present work we a...
Article
Full-text available
We aim at the realization of an Embodied Conversational Agent able to interact naturally and emotionally with user(s). In previous work [23], we have elaborated a model that computes the nonverbal behaviors associated with a given set of communicative functions. Specifying, for a given emotion, its corresponding facial expression will not produce the...
Article
Full-text available
Abstract. This document describes the prototype of the animated agent for application 1. In particular, it describes the different phases involved in the computation of the final animation of the agents. This document discusses the method we are using to resolve conflicts arising when combining several...
Article
Full-text available
The ability for AI-controlled humanoid agents to interact and resonate with the user and with each other in a social and emotional manner is of the utmost importance for creating a sense of plausibility, immersion and ultimately enhancing user experience. In order to express themselves effectively, agents will need to be adept in a broad range of c...
Article
Full-text available
Within the Sensitive Artificial Listening Agent project, we propose a system that computes the behaviour of a listening agent by encompassing the notion of personality. In our system the agent's behaviour tendencies are defined by a set of parameters. The system selects the signals to be displayed by the agent depending on its behaviour tendency and it...
Article
Full-text available
This paper presents the large audiovisual laughter database recorded as part of the AVLaughterCycle project held during the eNTERFACE'09 Workshop in Genova. 24 subjects participated. The freely available database includes audio signal and video recordings as well as facial motion tracking, thanks to markers placed on the subjects' face. Annotatio...
Article
Full-text available
In this paper we describe our work toward the creation of affective multimodal virtual characters endowed with communicative and other socially significant capabilities. While much work in modern game AI has focused on issues such as path-finding and squad-level AI, more highly detailed behaviour for small groups of interacting game characters ha...
Article
Full-text available
Abstract. This document describes the prototype of the animated agent for prototype 2. In this report, we first present the enhancements that were made to the Greta software. We also pay particular attention to the simulation of gaze behaviour in agents. We propose an algorithm that is based on the previous computational model as presented in deliver...
Article
Full-text available
In this paper we present an evaluation study on mimicry performed by an Embodied Conversational Agent while being a listener during an interaction with a human user. Through an experimental setting, we analyze humans' reactions to the agent's mimicry, in particular in relation to smiles. Results show that the agent's behavior influences the user's...
