Fig 1 - uploaded by Wijdane Kaiss
Different Units of Action for the Upper and Lower Part of the Face [11].


Source publication
Article
Full-text available
In teaching environments, student facial expressions give the traditional classroom teacher a clue for gauging students' level of concentration in the course. With the rapid development of information technology, e-learning is taking off because students can learn anytime and anywhere they feel comfortable. This gives the possibility...

Contexts in source publication

Context 1
... types of emotions: seven basic emotions and twelve compound emotions. Forty-six fundamental facial muscles produce the action units (AUs) of the face, including those that generate facial expressions. The system classifies a facial category by combining the individual AUs it detects (Fig. 1). For example, if the system recognizes AU12 and AU25 in an image, it classifies the image as showing a "happy" emotion (Table I). Bidwell and Fuchs [13] used an automated gaze system to measure student engagement. Classifiers were created based on video recordings of classrooms. A face tracking ...
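The AU-combination rule described in this context (e.g. AU12 together with AU25 indicating "happy") can be sketched as a simple subset lookup. The AU table below is illustrative, not the paper's exact Table I:

```python
# Hypothetical FACS-style mapping from action-unit combinations to basic
# emotions. The specific AU sets here are illustrative assumptions, not
# the source article's Table I.
AU_TO_EMOTION = {
    frozenset({6, 12}): "happy",
    frozenset({12, 25}): "happy",
    frozenset({1, 4, 15}): "sad",
    frozenset({1, 2, 5, 26}): "surprise",
    frozenset({4, 5, 7, 23}): "angry",
}

def classify_emotion(active_aus):
    """Return the emotion whose required AU set is contained in the detected AUs."""
    detected = frozenset(active_aus)
    for aus, emotion in AU_TO_EMOTION.items():
        if aus <= detected:          # all required AUs are active
            return emotion
    return "neutral"                 # no known combination matched

print(classify_emotion([12, 25]))    # -> happy
```

A real system would obtain `active_aus` from an AU detector run on each face image; the lookup itself is just set containment.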
Context 2
... The VGG19 model performed excellently, with a training accuracy of 90% (see Fig. 8 and Fig. 9) reached in the 70th epoch, while the VGG16 and Xception models (see Fig. 10, Fig. 11, Fig. 12, and Fig. 13) reached the same 90% accuracy in the 50th epoch. The AlexNet model (see Fig. 6 and Fig. 7), however, took longer, reaching 91% accuracy in the 90th epoch. The final training accuracy of AlexNet was 99.8%, but it could have been ...

Similar publications

Article
Full-text available
Background Negative bias in facial emotion recognition is a well-established concept in mental disorders such as depression. However, existing face sets of emotion recognition tests may be of limited use in international research, which could benefit from more contemporary and diverse alternatives. Here, we developed and provide initial validation...
Article
Full-text available
An Emoji is a small image representing a facial expression, entity, or concept, and can be either static or animated. In this paper, Emojis are used to study both cross-language and language-based sentiment patterns. Not all languages come with a fair amount of labels, and Emojis are useful signals for sentiment analysis in cross-lingual tweets. In th...
Article
Full-text available
In the wild, dynamic facial emotion recognition is a highly challenging task. Traditional approaches often focus on extracting discriminative features or preprocessing data to remove noisy frames. The former overlooks differences between keyframes and noise frames, while the latter can be complex and less robust. To address this issue, we propose a...
Article
Full-text available
Various emotion elicitation methods are used in many studies assessing the interaction of emotion and cognition. However, most of the experimental and meta-analysis studies comparing emotion elicitation methods have examined the success of emotion evoking in terms of valence and arousal. There are few experimental studies dealing with comparisons w...
Article
Full-text available
In recent years, speech emotion recognition (SER) increasingly attracts attention since it is a key component of intelligent human-computer interaction and sophisticated dialog systems. To obtain more abundant emotional information, a great number of studies in SER pay attention to the multimodal systems which utilize other modalities such as text...

Citations

... This approach achieves a performance of 95.39%. Meriem et al. [16] found a strong relationship between emotions and student concentration. To study student emotions, four types of datasets were pooled, and four pre-trained models were used to create an emotion detection system. ...
Article
Full-text available
Distance education has been prevalent since the late 1800s, but its rapid expansion began in the late 1990s with the advent of the online technological revolution. Distance learning encompasses all forms of training conducted without the physical presence of learners or teachers. While this mode of education offers great flexibility and numerous advantages for both students and teachers, it also presents challenges such as reduced concentration and commitment from students, and difficulties in course supervision for teachers. This article aims to study student engagement on distance learning platforms by focusing on emotion detection. Leveraging various existing datasets, including the Facial Expression Recognition 2013 (FER2013), the Karolinska Directed Emotional Faces (KDEF), the extended Cohn-Kanade (CK+), and the Kyung Hee University Multimodal Facial Expression Database (KMU-FED), the proposed approach utilizes transfer learning. Specifically, it exploits the large number and diversity of images from datasets like FER2013, and the high-quality images from datasets like KDEF, CK+, and KMU-FED. The model can effectively learn and generalize emotional cues from varied sources by combining these datasets. This comprehensive method achieved a performance accuracy of 96.06%, demonstrating its potential to enhance understanding of student engagement in online learning environments.
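Pooling datasets such as FER2013 and KDEF requires mapping each dataset's label scheme onto a common emotion set before training. A minimal sketch, assuming the standard FER2013 integer labels and KDEF two-letter emotion codes (check each dataset's own documentation):

```python
# Minimal sketch of harmonizing emotion labels when pooling several
# facial-expression datasets. The per-dataset encodings below are
# assumptions for illustration, not taken from the cited article.
COMMON_LABELS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

LABEL_MAPS = {
    "fer2013": {0: "angry", 1: "disgust", 2: "fear", 3: "happy",
                4: "sad", 5: "surprise", 6: "neutral"},
    "kdef":    {"AN": "angry", "DI": "disgust", "AF": "fear", "HA": "happy",
                "SA": "sad", "SU": "surprise", "NE": "neutral"},
}

def to_common_label(dataset, raw_label):
    """Map a dataset-specific label onto the shared 7-emotion scheme."""
    return LABEL_MAPS[dataset][raw_label]

print(to_common_label("kdef", "HA"))   # -> happy
```

With all samples carrying labels from `COMMON_LABELS`, the pooled data can feed a single transfer-learning pipeline regardless of origin.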
... Camera-based technologies have emerged as a popular method for detecting user concentration levels due to their relative convenience and non-invasive nature. These systems typically analyze limb movements, eye behavior, pupil dilation, or facial expressions to infer a user's level of concentration [14,19,25]. Meriem et al. [19] find that students' emotions, inferred through facial expressions, are related to their attention levels. ...
... These systems typically analyze limb movements, eye behavior, pupil dilation, or facial expressions to infer a user's level of concentration [14,19,25]. Meriem et al. [19] find that students' emotions, inferred through facial expressions, are related to their attention levels. They develop a computer vision-based method to classify attention into three levels by correlating these emotions with students' concentration during class. ...
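The three-level attention classification described above, which correlates detected emotions with concentration, can be sketched as a grouping of emotions into levels. The exact emotion-to-level assignment below is an assumption for illustration, not the cited authors' mapping:

```python
# Hypothetical grouping of detected emotions into three attention levels,
# in the spirit of the three-level classification described in the text.
ATTENTION_LEVELS = {
    "high":   {"neutral", "happy", "surprise"},
    "medium": {"sad", "fear"},
    "low":    {"angry", "disgust"},
}

def attention_level(emotion):
    """Return the attention level associated with a detected emotion."""
    for level, emotions in ATTENTION_LEVELS.items():
        if emotion in emotions:
            return level
    raise ValueError(f"unknown emotion: {emotion}")

print(attention_level("happy"))   # -> high
```

In practice the emotion label would come from a facial-expression classifier run on webcam frames; this step only aggregates it into a coarser attention signal.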
... Emotions can be detected by behaviour, voice, facial expression or physiological cues, although facial expression, voice and behaviour can be subjective [23]. Individuals may mask or show emotions that are contrary to their true feelings. ...
Article
Full-text available
The purpose of this article is to understand the current state of emotion recognition technology and its practical implementation in educational environments. To this end, it examines publications from the last 10 years on the advancement of emotion recognition technology in education. A total of 1,347 studies were retrieved, and 43 were included in the review for analysis and discussion. The article shows that the number of studies has increased in recent years, with a higher frequency in online learning. Furthermore, according to the Technology Readiness Level, despite growing interest in emotion recognition in educational environments, its implementation is still far from becoming a reality. Most of the research has been conducted from a theoretical perspective, and none of it has been fully developed and implemented in the classroom. In addition, many of the studies analysed have not tested the validity of their findings.
... In light of the COVID-19 pandemic, the transition to online learning has been necessary but not without flaws [4], [5]. Many students struggle to stay engaged in virtual classes due to the relaxed environment, distractions, and the absence of constant monitoring by teachers [6]. This lack of attention is particularly problematic for students with low concentration, as it hinders their ability to retain knowledge [7]. ...
Article
Full-text available
Online learning has gained immense popularity, especially since the COVID-19 pandemic. However, it has also brought its own set of challenges. One critical challenge in online learning is evaluating students' concentration levels during virtual classes. Unlike traditional brick-and-mortar classrooms, teachers do not have the advantage of observing students' body language and facial expressions to determine whether they are paying attention. To address this challenge, this study proposes utilizing facial and body gestures to evaluate students' concentration levels. Common gestures such as yawning, playing with fingers or objects, and looking away from the screen indicate a lack of focus. A dataset containing images of students performing various actions and gestures representing different concentration levels is collected. We propose an enhanced model based on a vision transformer (RViT) to classify the concentration levels. This model incorporates a majority-voting feature to maintain real-time prediction accuracy: multiple frames are classified, and the final prediction is the majority class. The proposed method yields a promising 92% accuracy while maintaining efficient computational performance. The system provides an unbiased measure for assessing students' concentration levels, which can be useful in educational settings to improve learning outcomes. It enables educators to foster a more engaging and productive virtual classroom environment.
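The majority-voting step in this abstract, where per-frame predictions are buffered and the final label is the most common class over the window, can be sketched as follows. The window size and label names are assumptions for illustration:

```python
from collections import Counter, deque

# Sketch of majority voting over per-frame concentration predictions:
# buffer the last `window` frame labels and report the most common one.
# Window size and label vocabulary are illustrative assumptions.
class MajorityVoter:
    def __init__(self, window=15):
        self.frames = deque(maxlen=window)   # keeps only the last `window` labels

    def add(self, label):
        self.frames.append(label)

    def predict(self):
        """Return the majority class among the buffered frame labels."""
        return Counter(self.frames).most_common(1)[0][0]

voter = MajorityVoter(window=5)
for label in ["focused", "distracted", "focused", "focused", "yawning"]:
    voter.add(label)
print(voter.predict())   # -> focused
```

Voting over a window smooths out single-frame misclassifications, which is why it helps maintain prediction accuracy on live video.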
... Various studies have discussed student learning concentration, especially how to improve it, such as the influence of the environment on student concentration and learning outcomes with mobile learning (Yang et al., 2020), visual multimedia to increase learning concentration (Ikechukwu-Ilomuanya et al., 2021), coloring classroom walls and their effect on concentration (Pourbagher et al., 2020), and brain gym to improve student concentration (Anggraini & Dewi, 2022). Research on concentration analysis is also widely carried out, such as measuring students' concentration levels based on facial expressions (Meriem et al., 2022), monitoring student concentration (Su et al., 2021), and analyzing student concentration with webcam feeds (Le et al., 2021). Although many studies have explained how to increase student concentration, no specific study has observed how teachers try to improve it. ...
Article
Full-text available
This study analyzes teachers' efforts to increase student concentration in online learning and the role of parents in assisting their children's learning. The method used in this research is a survey. Data were collected through interviews with teachers and parents of students at MIN 2 Serang and MIN 4 Serang; the interview data were analyzed and then summarized in narrative form. The results reveal that teachers can gauge their students' concentration when lessons are delivered via video call, and that parents play a role in assisting their children's learning. These findings indicate that teachers have successfully increased students' learning concentration, as evidenced by improved learning outcomes, while parents provide learning assistance to their children at home. The study recommends that teachers use video calls to address students' learning concentration, supported by learning assistance at home from parents.
... The use of Artificial Intelligence (AI) in general, and Machine Learning in particular, in almost all fields now makes it possible to predict from available data a number of elements useful for decision-making [6] [7]. It is therefore easy to understand the importance of using Machine Learning to improve the quality of academic performance [8] [9]. This study contributes to research in Educational Data Mining (EDM) by developing a prediction model for academic orientations [10] [11] in high schools in Benin. ...