Article

Employing Virtual Lecturers' Facial Expressions in Virtual Educational Environments.


Abstract and Figures

This research began with a preliminary exploratory study that observed the relationship between the facial expressions of three human lecturers in a real academic lecture theatre and the reactions of the students to those expressions. Results informed the design of the first experiment, which aimed to investigate the effectiveness of a virtual lecturer's expressions on the students' learning outcomes in a virtual pedagogical environment. A second follow-up experiment then focussed on the effectiveness of a single facial expression (the smile) on student performance. Both experiments involved virtual lectures, with virtual lecturers teaching real students. Results indicated that students performed better (by 86%) in the lectures where the virtual lecturer performed facial expressions than in the lectures that did not use facial expressions. However, this applied only to reasonably complex instructional material; when simple or basic instructional material was used, the facial expressions of the virtual lecturer had no substantial effect on the students' learning outcomes. Finally, it was demonstrated that the appropriate use of smiling increased the interest of the students and consequently their performance.
... In their study, Theonas et al. 18 created pre-recorded video lectures featuring varying degrees of expressivity exhibited by a teacher. The lectures were evaluated by students via desktop computers. ...
... The findings of the investigation indicated a notable enhancement in student performance and attentiveness, particularly during challenging language lectures when the teacher displayed a heightened level of expressivity. 18 In the context of a web-conference platform, Dharmawansa et al. 19 employed a web camera to transmit eye blinking and head posture signals from real individuals to their avatars during a lecture. The evaluation results, as reported by actual students, revealed that both cues contributed to the formation of a robust communication channel and improved the overall quality of the learning experience. ...
... The studies presented above provide valuable insights into non-verbal communication in virtual settings. Some of the studies emphasize the importance of NVCs and their correlation to affective parameters and interpersonal components of communication, such as empathy, 20,21 message interpretation, 11,18,19 and user acceptance. 22 However, other studies reach opposing conclusions, stating that NVCs did not significantly affect user experience. ...
Article
Full-text available
Face‐to‐face communication relies extensively on non‐verbal cues (NVCs) which complement, or at times dominate, the communicative process as they convey emotions with intense salience, thus definitively affecting interpersonal communication. The capture, transference, and subsequent interpretation of NVCs becomes complicated in computer‐mediated communicative processes, particularly in shared virtual worlds, for which there is growing interest both in regard to NVCs technological integration and their affective impact. This paper presents a between‐groups experimental setup which is facilitated in immersive virtual reality (IVR), and examines NVCs effects on user experience, with special emphasis on degree of attention toward each NVC as an isolated controlled variable of a scripted performance by a virtual character (VC). This study aims to evaluate NVCs fidelity based on the capabilities of the motion‐capture technologies utilized to address cue integration developmental challenges and examines NVCs impact on users' perceived realism of the VC, their empathy toward him, and the degree of social presence experienced. To meet the objectives set the affective impact of low‐fidelity automated NVCs and high‐fidelity real‐time captured NVCs were compared. The findings of the evaluation suggest that although NVCs do impact user experience to an extent, their effects are notably more subtle compared to previous studies.
... In another study, conducted by Theonas, Hobbs, and Rigas (2007), the relationship between the facial expressions (e.g., smiles, laughter, opening of the eyes, raising and lowering of the eyebrows) of teachers and students' reactions to these facial expressions was examined. Theonas, Hobbs and Rigas (2007) suggested that the same situation can be reflected in the computer-designed facial expressions of virtual teachers, and that this may positively affect students' performance. In order to support their thesis, Theonas, Hobbs and Rigas (2007) designed a 3D animated head, as illustrated in Figure 2, and gave it facial expressions such as happy, sad, scared, surprised, angry and disgusted. ...
Book
As the usage of alternative energy sources grows more widespread in developing nations, it will be crucial to monitor their development and to control the utilization of these renewable energy sources. Rapid advances in virtual reality (VR) and augmented reality (AR) have paved the way for immersive visualization to expedite the study of particular types of complex scientific and technical data. The rising demand for energy consumption in Malaysia continues to be influenced by development and economic growth. The main challenge facing Malaysia's energy sector at present is the question of sustainability. This report addresses the current energy scenario, examines problems in the energy management of Malaysia, and offers an initial assessment of the Malaysian energy industry. The review will cover in depth the potential of energy management using immersive collaboration, challenges, and future policy opportunities in this sector. In addition, the aim of this review is to describe the various energy policies adopted in Malaysia to ensure long-term reliability and security of energy supply.
... Positive facial expressions from instructors, in this context, can nurture a supportive relationship with learners, fostering a conducive emotional state and deeper engagement with the material. Theonas et al. (2007) highlighted that expressive facial cues, such as smiling, can boost learners' interest, motivation, and performance in asynchronous learning environments that use pre-recorded videos. Further, research by Wang (2022) and Wang et al. (2019) emphasized the significance of an instructor's facial expressions in conveying emotions, which can stimulate learners' motivation and satisfaction while reducing cognitive load in asynchronous video-based online learning. ...
Article
Full-text available
In the rapidly evolving landscape of higher education and adult learning, asynchronous video-based online learning has not only become the new norm but has also emerged as the cornerstone of instructional delivery for Massive Open Online Courses (MOOCs). Despite its widespread adoption, this learning mode confronts a critical challenge: the inherent lack of social presence, posing a significant risk of diminishing learner affective engagement and, consequently, jeopardizing the efficacy of learning outcomes. Addressing this pressing issue, our study conducted a comprehensive analysis of 240 instructional videos from 240 distinct instructors on a MOOC platform, supplemented by 845 post-course learner feedback surveys from a diverse cohort of college students and adult learners. Using deep learning and statistical analysis, the research revealed that the on-screen presence of instructors does not inherently affect students’ affective engagement. The study revealed that learners’ affective engagement is affected by distinct combinations of the instructor’s facial and paraverbal expressions, including happiness, surprise, and anger, which vary depending on whether the instructor is visible. The discovery that vocal attractiveness is a pivotal element in enhancing learners’ affective engagement with instructional videos marks a paradigm shift in our understanding of digital andragogy and heutagogy. This study propels academic discourse by not only illuminating the critical role of instructor non-verbal cues in establishing social presence and facilitating emotional contagion within asynchronous video-based online learning, but also providing educators and content creators with empirically backed techniques to revolutionize video instruction and amplify affective engagement.
... During the last few years, the development and large-scale deployment of remote education around the globe has been accelerated [1]. Notably, there has been an increase in the number of virtual schools [2], and the use of virtual tutoring, online learning software, and video conferencing tools [3]. This type of learning is now part of the education system, used by several schools and universities, either to reach students in remote areas or to reduce large gatherings at schools for healthcare reasons. ...
Article
Full-text available
Since the outbreak of the COVID-19 crisis, transition to remote education presented several challenges to educational institutions. Unlike face-to-face classes where educators can modify and keep track of the lessons and content according to the students’ observed emotions and participation, such activities are difficult to complete in online learning environments. To address this issue, we propose here a novel and comprehensive framework that leverages advanced computer vision and analysis techniques to detect students’ emotions during online learning and assess their state of mind regarding the taught content. Our framework is composed of three modules. The first module uses a novel lightweight machine learning method, called convolutional neural network-random forest (CNN-RF), to efficiently detect the students’ basic emotions, e.g., sad, happy, etc., during the online course. Our approach surpasses existing benchmarks in terms of accuracy (over 71%) on the FER-2013 dataset, while being less complex (i.e., using a smaller number of parameters). The second module consists of mapping the basic emotions to an education-aware state of mind, e.g., interest, boredom, distraction, etc. Unlike the few works that proposed simplistic mapping, we propose here a Plutchik wheel’s inspired mapping system, which is more precise and reflects better the relationship between combinations of basic emotions and the resulting education-aware state of mind. Thus, our understanding of the students’ cognitive and affective experiences during online learning can be enhanced. The third module is a visualization dashboard that offers clear and intuitive real-time representations of basic emotions and states of mind. This tool provides educators with invaluable insights into students’ emotional dynamics, enabling them to identify learning difficulties with high precision and make informed recommendations for improvements in course content and online teaching methods. 
In summary, the proposed framework presents a novel and powerful tool that addresses the challenges related to online learning. By accurately detecting the students’ emotions, assessing their states of mind, and providing real-time visualization, our approach represents a significant advancement toward the optimization of online education, which is critically needed in rural and remote areas of the globe.
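The second module's idea, combining detected basic emotions into an education-aware state of mind, can be sketched as follows. This is a minimal illustration under stated assumptions: the window-based majority vote, the specific emotion pairs, and the state names are illustrative stand-ins, not the authors' actual Plutchik-inspired mapping.

```python
# Illustrative sketch: map basic emotions detected over a time window to an
# education-aware state of mind. The pairs below are hypothetical examples
# of Plutchik-style combinations, not the framework's published mapping.
from collections import Counter

STATE_MAP = {
    frozenset(["happy", "surprise"]): "interest",
    frozenset(["sad", "neutral"]): "boredom",
    frozenset(["surprise", "fear"]): "confusion",
    frozenset(["angry", "disgust"]): "frustration",
}

def state_of_mind(emotion_window):
    """Combine the two most frequent basic emotions in a window into a state."""
    top_two = frozenset(e for e, _ in Counter(emotion_window).most_common(2))
    # Fall back to "neutral" when the combination is not in the map.
    return STATE_MAP.get(top_two, "neutral")

print(state_of_mind(["happy", "surprise", "happy"]))  # interest
```

A real mapping would likely weight emotion intensities and cover all combinations; the dictionary lookup with a neutral fallback simply shows the structure such a module could take.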
... Virtual reality (VR) worlds can also be used to simulate face-to-face interactions by facilitating real-time lectures and collaborative learning activities between online students and their instructors [12], [13]. Avatars, whether cartoon-based or photorealistic, can be used within the immersive learning environment to increase the sense of social presence [14], [15]. In rolling out VR learning environments featuring a high-fidelity professor avatar to MSc students, [13] reported that they provided similar qualities of education and communication and successfully simulated face-to-face learning. ...
... For example, Neill (1989) found that the teacher's emotions facilitated students' learning interest and performance. Similarly, Theonas et al. (2008) supported the belief that teachers' emotions expressed through facial expressions would improve students' learning. analyzed the role of teachers' facial expressions in students' learning and found that teachers' expressions improved students' learning. ...
Book
Full-text available
The research studies in this Research Topic make significant contributions to the area of teachers’ emotions. Furthermore, these studies also have both theoretical and practical implications. It is suggested that teachers’ emotions cannot be regarded as isolated from social, cultural, and political environments, but they are intertwined, a process called emotional transmission in the teaching context (Frenzel et al., 2018). Therefore, teachers’ emotions are dynamic rather than static. However, most previous studies regarded teachers’ emotions as a static variable by measuring their emotions at one time and testing their relationships with other variables (e.g., students’ emotional responses; Wang et al.). A transmissive and dynamic perspective on the possible roles of teachers’ emotions is still lacking. Further work should understand the role of teachers’ emotions in educational contexts by dynamic measures (e.g., experience sampling).
... In this respect, conversation is perceived as a "collaborative production" (Sacks, Schegloff, & Jefferson, 1974) during which the interlocutors mobilize several "multimodal resources" (Birdwhistell, 1968). (Barrier, 2013); it has also been shown that the speaker's smile helps maintain the interlocutor's attention (Theonas, Hobbs, & Rigas, 2008). We therefore approach the smile as a "facial gesture" (Bavelas et al., 2014) contributing to the interlocutors' collaboration during a conversation. ...
Article
Full-text available
In our contribution, we will discuss the notion of multimodality in human interactions. First, we will present the different definitions of the concept of multimodality currently used in linguistics. Then, we will focus on some of the current studies on multimodality conducted at the LPL, showing the variety of theoretical and methodological frameworks used as well as the interaction situations considered, ranging from face-to-face to online interactions, from conversations between speakers of the same language to exolingual communication. Finally, in the last part, we will show that despite the variety of approaches, our studies are complementary. We will conclude this chapter by proposing some avenues for future research, particularly based on the phenomenon of interactional failure.

1. The concept of multimodality
Interpersonal communication (Goffman, 1974) is "multimodal" in its broadest sense, if we take into account the research work carried out in linguistics on oral speech (Colletta, 2004), on gestures (Cosnier et al., 1982; McNeill, 1992), in conversational analysis (Mondada, 2005) as well as in social semiotics (Kress & Van Leeuwen, 2001), in computer-mediated communication (Develotte et al., 2011) and in digital discourse analysis (Paveau, 2017). In this first part, we define our main theoretical anchors, one not excluding the other.

1.1 Insights from linguistics
When we analyze speech in interaction as a multimodal phenomenon, we generally consider three main modalities: the verbal, the vocal (including prosody) and the posturo-mimo-gestural elements (Colletta, 2004). A multimodal linguistic perspective thus takes into account the different modalities and analyses them by putting them in relation. Studying prosody or analysing gesture (or other kinesic aspects such as postures, gazes, and facial mimicry) without the verbal would isolate the modalities and would not show their articulation (Ferré, 2011).

1.2 Insights from social semiotics
A conception of multimodality through the input of sensory modalities can be complexified if we consider that a single modality can be the channel for several semiotic modes. Thus, through the visual modality, it is possible to perceive several semiotic modes: facial expressions, gestures and proxemics, among others. Therefore, multimodality can be defined as a characterization of interaction not only in terms of the modalities at work, but also in terms of semiotic modes. In the 1990s, the development of audio-visual and digital technologies led to the emergence of another vision of the concept of multimodality, rooted in the field of social semiotics. The works of Kress and van Leeuwen (2001) helped define new contours of this notion by integrating the different modalities of communication allowed by different artifacts. Thus, multimodality is defined as the massive and joint use of various modes of expression (verbal, visual, audio, tactile, etc.) in communication.

1.3 Emerging insights of "digital multimodality"
The concept of "digital multimodality" (Wachs & Weber, 2021) is rooted in several fields of research, close to each other but each with its own particularities. The first, founding and precursor field is Computer Mediated Communication (CMC), which has focused on online exchanges, mainly in their textual form (Herring, 1996; Anis, 1998), since the beginning of the Internet (and even of the minitel in France). Then come two emerging fields of linguistics in the lineage of CMC: the field of Screen-Based Multimodal Interactions, which has developed in particular around videoconference interactions (Develotte, Kern & Lamy, 2011; Develotte & Paveau, 2017; Guichon & Tellier, 2017), and that of Digital Discourse Analysis (Paveau, 2017). Finally, in the field of language didactics, we will highlight the field of Technology-Enhanced Language Learning (Guichon, 2012), which studies the integration of technologies in communication situations specifically related to language teaching and learning.

2. Research on multimodality at LPL
Research on multimodality at the LPL is divided into three main theoretical frameworks: 1) multimodal speech (verbal, vocal and posturo-mimo-gestural), 2) social semiotics, and 3) digital multimodality, and sometimes even a combination of the three. The LPL researchers who are interested in the question of multimodality each study different contexts, such as face-to-face interactions, online interactions and hybrid interactions.

2.1 Face-to-face interactions
Face-to-face interactions can be of different natures (e.g. conversation, doctor-patient interaction, work meeting…). In this section, we will present some research on face-to-face conversation. Our work focuses on prosody during disfluencies (Pallaud et al., 2019) and on smiling in two phases of conversation: thematic transitions and humor phases (Amoyal et al., 2020). In the study of face-to-face interactions, there is also interest in exolingual communication (between participants who do not have the same first language) (Porquier, 1994). These are particularly interesting to study from a multimodal perspective, especially from the point of view of the adaptation of modalities to facilitate access to meaning (Tellier et al., 2021).

2.2 Online interactions
With the spread of computers and increasingly powerful Internet connections, several computer-mediated communication (CMC) tools have developed and spread over the years, from the first e-mail exchanges to the recent widespread use of videoconferencing and videophone calls. On the one hand, these new tools have been the object of a projection of interactional processes coming from face-to-face interactions, while on the other hand, new communicative phenomena linked to the multimodality of different forms of CMC have emerged (for example, emoticons in chat rooms). Some research at the LPL has thus turned to the emergence of discursive techno-genres and their characterization, for example in terms of ethos. Other research is interested in the pedagogical potential for language didactics, both in terms of second language acquisition and teacher training.

2.3 Hybrid interactions
Hybrid interactions mix face-to-face and distance learning and can take place in different configurations. An interdisciplinary and collective research project between several laboratories, including the LPL (see the Présences numériques corpus), studied a poly-artifact doctoral seminar (Develotte et al., 2021), i.e. a seminar where one part of the participants is physically present and the other part is present via different artifacts (a tablet articulated on a base, a human-sized mobile robot on wheels, and an interactive multimodal platform) which featured, among other modules, a videoconference space, a chat space, a collective note-taking space and a document-sharing space. Each artifact was itself operated through multiple screens (of computers, tablets, smartphones, etc.). Such a multimodal poly-artifact communication context is therefore eminently complex to use for the participants, as well as to study and transcribe for the researchers.

3. Perspectives
At the end of this chapter, which highlights the main directions of research carried out at LPL on multimodality, several perspectives are identified. For example, studying the phenomenon of "interactional failure" seems particularly interesting, especially to show how speakers draw on their multimodal resources to overcome these failures. Different phenomena can be observed: disfluencies, lexical searches, misunderstandings and repairs, explanations, and finally anticipation of the failure through a multimodal didactic discourse. A first avenue of research that could be explored is the multimodal nature of disfluencies in interaction: is there only a synchronization of the suspension of gesture and speech, or are there specific gestures linked to lexical search? A research perspective at LPL could be to compare the occurrence of disfluencies in L1 vs. foreign-language speech to see if language proficiency affects how this phenomenon is expressed multimodally. Among the perspectives to be explored in research on multimodality, we can mention the study of conversational alignments. In the model proposed by Pickering and Garrod (2004, 2021), alignment is a psychological phenomenon concerning the mental representations of interlocutors, which is reflected in the articulation and often the repetition of communicative behaviours. While alignment was initially studied through the phenomenon of priming at different language levels (lexicon, morphosyntax, etc.), recently there has been an interest in multimodal communication, including gestures and facial mimicry (Cappellini, Holt and Hsu, 2022). By including the study of facial mimicry, a study of the smiles of interlocutors in the repair phase could highlight the importance of this facial expression in this interactional process. Another avenue of research would be to question the place of the smile in lexical search. Finally, it seems that we could also study the complexity of the interactional scheme of the YouTube platform (a video discourse to which written comments respond in asynchrony) and how vloggers anticipate possible interactional failures by resorting to a didactic discourse (Moirand, 1993) which calls on different multimodal elements.
Article
Full-text available
Over the last decade, automatic facial expression analysis has become an active research area that finds potential applications in areas such as more engaging human-computer interfaces, talking heads, image retrieval and human emotion analysis. Facial expressions reflect not only emotions, but other mental activities, social interaction and physiological signals. In this survey we introduce the most prominent automatic facial expression analysis methods and systems presented in the literature. Facial motion and deformation extraction approaches as well as classification methods are discussed with respect to issues such as face normalization, facial expression dynamics and facial expression intensity, but also with regard to their robustness towards environmental changes.
Article
Full-text available
This paper presents some results from our research using human-computer interactions to study the dynamics and interactive nature of emotional episodes. For this purpose, we developed the Geneva Appraisal Manipulation Environment (GAME; Wehrle, 1996), a tool for generating experimental computer games that translate psychological theories into specific micro-world scenarios (for details about the theoretical and technical embedding of GAME see Kaiser & Wehrle, 1996). GAME allows automatic data recording and automatic questionnaires. While playing the experimental game, subjects are videotaped, and these tape recordings enable an automatic analysis of the subject's facial behavior with the Facial Expression Analysis Tool (FEAT; Kaiser & Wehrle, 1992; Wehrle, 1997). With FEAT, facial actions are categorized in terms of FACS (Ekman & Friesen, 1978). These facial data can be automatically matched to the corresponding game data (using vertical time code as a reference for both kinds of data).
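The timecode-based matching described in the last sentence can be sketched in a few lines. This is a hypothetical illustration, not the actual FEAT interface: the event tuples, the FACS action-unit labels, the game-event names, and the tolerance parameter are all assumptions made for the example.

```python
# Hedged sketch: facial-action events (FACS action units) and game events
# each carry a timecode in seconds, so they can be paired on nearest time.
def align(facial_events, game_events, tolerance=0.5):
    """Pair each facial event with the closest-in-time game event."""
    pairs = []
    for t_face, action_unit in facial_events:
        nearest = min(game_events, key=lambda ev: abs(ev[0] - t_face))
        # Only keep pairs whose timecodes fall within the tolerance window.
        if abs(nearest[0] - t_face) <= tolerance:
            pairs.append((action_unit, nearest[1]))
    return pairs

facial = [(1.0, "AU12"), (3.2, "AU4")]          # AU12: lip-corner pull (smile)
game = [(0.9, "goal_reached"), (3.0, "obstacle")]
print(align(facial, game))  # [('AU12', 'goal_reached'), ('AU4', 'obstacle')]
```

Nearest-neighbour matching with a tolerance is one simple design choice; a production system would more likely merge the two streams on the shared vertical time code directly.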
Article
Full-text available
Depictions, such as maps, that portray visible things are ancient whereas graphics, such as charts and diagrams, that portray things that are inherently not visible, are relatively modern inventions. An analysis of historical and developmental graphic inventions suggests that they convey meaning by using elements and space naturally. Elements are based on likenesses, "figures of depiction" and analogs to physical devices. Spatial relations are used metaphorically to convey other relations, based on proximity, at nominal, ordinal, and interval levels. Graphics serve a variety of functions, among them, attracting attention, supporting memory, providing models, and facilitating inference and discovery.
Chapter
Virtual reality (VR) is a new way to use computers. VR eliminates the traditional separation between user and machine, providing more direct and intuitive interaction with information. By wearing a head-mounted audio/visual display, position and orientation sensors, and tactile interface devices, one can actively inhabit an immersive computer-generated environment. One can create virtual worlds and step inside to see, hear, touch, and modify them. Major corporations and companies worldwide are actively exploring the use of the VR technology for a variety of application areas, including telecommunications, arcade and home entertainment, production and assembly management, health care, digital design, and product sales and marketing. VR has been developed during the past 20 years to facilitate learning and performance in high-workload environments in the U.S. Air Force. Flight simulators, which combine physical and computer-generated elements to create task-specific learning environments, have been highly effective in pilot training. Current VR systems provide new capabilities for perceptual expansion, for creative construction, and for unique social interactivity.
Article
Agents have become a predominant area of research and development in human interfaces. A major issue in the development of these agents is how to represent them and their activities to the user. Anthropomorphic forms have been suggested, since they provide a great degree of subtlety and afford social interaction. However, these forms may be problematic, since they may be inherently interpreted as having a high degree of agency and intelligence. An experiment is presented which supports these contentions.
Article
One source reports that there are over 300 VLEs in operation. With few exceptions they all seek to harness the potential power of GroupWare/electronic delivery with the opportunity to develop shared collaborative learning, whether synchronous or asynchronous, in a virtual classroom situation. But both supporters and critics do agree that if there is a potential weakness in using a Virtual Learning Environment, it will be in the quality and quantity of discussion and debate that takes place in this virtual classroom. This paper draws upon experiences of using LearningSpace to deliver an off-campus programme, Certificate in Marketing Practice, validated by the Chartered Institute of Marketing for Sourcerer Ltd., and the SMILE Project currently being developed using a European Social Fund award. The paper concludes with a view that learners are more likely to engage in virtual discussions, in both quality and quantity, when they perceive a 'worth' in doing so. In determining this 'worth', the role of the tutor as well as the choice and role of a guest in the virtual classroom is critical. A guest should be more than a novelty factor. Some learners will see taking part in virtual classroom activities as worthwhile where such activity is linked to either summative and/or formative assessment. Linking activities can guarantee a higher participation rate, but measuring that involvement (for example in terms of quantifying individual or group contributions to threaded discussions for assessment purposes) in itself raises some major methodological issues. Moreover, this carrot-and-stick approach can't be relied upon to work in every situation. There will be modules where this linking is neither possible (there may be no formal credit-bearing assessment as such) nor practical in learning terms. Ideally the learner should see the worth in the effort of taking part in virtual classroom activities in terms of these activities enriching their own learning experiences.
It is the facility to engage in active collaborative learning in the virtual classroom which has moved VLEs and distributed learning beyond the (now) more traditional distance learning methods. It is up to module designers to release this opportunity. The well-conceived use of guests in the virtual classroom may be one such way.
Article
Virtual Reality (VR) has been shown to be an effective way of teaching difficult concepts to students. However, a number of important questions related to immersion, collaboration and realism remain to be answered before truly efficient virtual learning environments can be designed. We present CyberMath, an avatar-based shared virtual environment for mathematics education that allows further study of these issues. In addition, CyberMath is easily integrated into school environments and can be used to teach a wide range of mathematical subjects.