Fig 2: Taxonomy of human facial expression
Source publication
Conference Paper
Full-text available
Emotions are fundamental to human lives and help in decision making. Understanding the expression of emotional feelings between people forms an intricate web. Systems have been developed that attempt to recognize aspects of emotion-related behaviours and to respond to these behaviours. The example systems designed to improve the user experie...

Citations

... By leveraging IoT, the mirror offers a fully interactive and intelligent experience, making it a promising innovation for both personal use and broader applications in smart homes, healthcare, and customer engagement. In [12], R. S. Deshmukh and V. Jagtap present a smart device featuring facial emotion recognition that continuously monitors emotions such as happiness and sadness. Their system is built to display emotions in real time and track them over a period, helping users recognize patterns associated with stress or depressive states. ...
Article
In recent years, the integration of artificial intelligence has revolutionized human-computer interaction by enabling devices to provide personalized user experiences. This study focuses on the development of a Smart Mirror with Emotion Monitoring, which utilizes advanced facial recognition and deep learning algorithms to detect and classify user emotions in real time. By analyzing facial expressions, the system identifies emotional states such as happiness, sadness, surprise, anger, and neutrality. Based on these detected emotions, the mirror responds dynamically by displaying personalized content, including motivational messages, with the aim of enhancing mental well-being. Beyond emotional responsiveness, the smart mirror also functions as a daily assistant, offering real-time information such as weather updates, calendar entries, and emotion-monitoring insights. Its applications extend beyond residential use to sectors such as healthcare, retail, and corporate environments, where emotion-based interactions can improve user engagement and experience. In healthcare, for instance, the mirror can assist individuals dealing with stress or mental health conditions by providing therapeutic interactions. Similarly, in commercial settings, the mirror can be integrated into customer service environments to enhance user satisfaction. The proposed system is built using state-of-the-art computer vision techniques, deep learning models, and IoT integration to ensure accurate emotion detection and seamless functionality. A robust dataset is employed to train and optimize the facial recognition model, ensuring high accuracy in emotion classification. To validate the system's performance, extensive experiments are conducted, evaluating parameters such as detection accuracy, response time, and user satisfaction. The results demonstrate the system's effectiveness in providing real-time emotional insights and personalized interactions, contributing to the advancement of intelligent assistive technologies. By incorporating emotional intelligence into smart mirror technology, this study bridges the gap between functionality and user-centric design, paving the way for next-generation AI-driven smart devices.
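As a rough illustration of the real-time pipeline the abstract above describes, the sketch below pairs OpenCV face detection with a pre-trained classifier. Everything specific here is assumed for illustration: the model file "emotion_model.h5", its 48x48 grayscale input, and the label order are hypothetical, not the paper's actual implementation.

```python
# Minimal sketch of a smart-mirror emotion loop (illustrative, not the paper's code).
# Assumes a webcam, OpenCV, and a pre-trained Keras classifier "emotion_model.h5"
# that takes 48x48 grayscale face crops and outputs 5 class probabilities.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

LABELS = ["happiness", "sadness", "surprise", "anger", "neutral"]  # assumed order
model = load_model("emotion_model.h5")  # hypothetical model file
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        # Crop, normalize, and classify each detected face.
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
        label = LABELS[int(np.argmax(probs))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow("mirror", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

On detection, the mirror could route the predicted label to whatever content module (motivational message, weather panel) matches the emotion.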
... Their system is designed to visualize real-time emotions and analyze them over time, enabling users to identify patterns linked to stress or depression. This feature is particularly beneficial for mental health management, as it offers insights into the user's emotional trends and helps track the effectiveness of interventions or lifestyle changes [15]. Hollen et al. propose a facial-recognition-based smart mirror that detects the user's mood in real time. ...
Article
This paper presents a survey of a Smart Mirror with Emotion Monitoring, which integrates advanced technologies like facial recognition and deep learning to enhance user interaction and mental well-being. The system detects facial features and classifies emotions such as happiness and sadness in real time. Based on the detected emotions, the mirror provides personalized feedback, such as motivational quotes or cheerful messages, and also displays daily essentials like time, weather, and calendar updates. By combining emotional intelligence with traditional functionalities, the smart mirror offers a versatile solution for home, healthcare, and commercial applications, making it an innovative tool for promoting emotional and functional support in everyday life. Key Words: Smart Mirror, Emotion Monitoring, Artificial Intelligence, IoT.
... Currently, systems for emotion recognition that utilize deep machine learning and neural networks are being developed. Emotion detection is used for monitoring purposes [9], aiming to eliminate barriers between humans and machines [10], [11]. Can we similarly use these systems to bridge the gap between space users and living machines, referring back to historical foundations? ...
Article
Full-text available
Spatial perceptions have always played a crucial role in determining the quality of architectural form. The scholarly literature recognizing the inseparable connection between architecture and psychology dates back to the early twentieth century, with environmental psychology emerging as a distinct branch of this scientific discipline in the 1960s. Contemporary environmental and climate change issues have notably drawn attention to topics within the realms of psychology and the environment. Past sociological and spatial analyses of residential areas predominantly relied on survey and interview methods. Today's technological advancements enable us to analyze collected data through modern applications and devices. A significant innovative moment arises with the introduction of artificial intelligence onto the scene. Based on deep machine learning technology, applications for identifying human emotions have been developed, among others. This paper problematizes the use of these modern tools in interpreting spatial sensations, aiming to offer new perspectives on existing theoretical frameworks and concepts through innovative methodological approaches.
... An application programming interface (API) is a broad term describing any means of communication between two or more computer programs. In particular, emotion recognition APIs allow the synthesis of computer vision, machine learning algorithms, deep learning neural networks, and other components in order to accurately detect and label human emotions (Deshmukh & Jagtap, 2017). Emotion API performance is further enhanced by cloud-based support, which continuously supplies learning algorithms with servers full of facial and emotional data (Khanal, Barroso, Lopes, Sampaio, & Filipe, 2018). ...
... Naturally, technology giants with the largest cloud infrastructure (e.g., Amazon, Microsoft, Google) are the best equipped to construct accurate emotion recognition programs. While the specific expressions that can be detected vary from program to program, most algorithms are minimally equipped to identify the six basic human emotions: disgust, contempt, anger, fear, surprise, and sadness (Deshmukh & Jagtap, 2017). ...
Article
Emotion recognition application programming interface (API) is a recent advancement in computing technology that synthesizes computer vision, machine-learning algorithms, deep-learning neural networks, and other information to detect and label human emotions. The strongest iterations of this technology are produced by technology giants with large cloud infrastructure (e.g., Google and Microsoft), bolstering high true-positive rates. We review the current status of applications of emotion recognition API in psychological research and find that, despite evidence of spatial, age, and race bias effects, API is improving the accessibility of clinical and educational research. Specifically, emotion detection software can assist individuals with emotion-related deficits (e.g., Autism Spectrum Disorder, Attention Deficit-Hyperactivity Disorder, Alexithymia). API has been incorporated in various computer-assisted interventions for Autism, where it has been used to diagnose, train, and monitor emotional responses to one's environment. We identify API's potential to enhance interventions in other emotional-dysfunction populations and to address various professional needs. Future work should aim to address the bias limitations of API software and expand its utility in subfields of clinical, educational, neurocognitive, and industrial-organizational psychology.
... R. Deshmukh and V. Jagtap [13] have analysed the advantages and drawbacks of existing tools and APIs available for emotion detection. The highest accuracy has been observed with the API developed by Microsoft Cognitive Services, named "Face", which returns the emotion with the highest confidence for a particular image [13]. Another study, by Zhang et al., elaborated on mouse-movement tracking as an accurate technique to detect online learning engagement by tracking the usage and frequency of use of the mouse device [14]. ...
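To make the "highest confidence" behaviour concrete, here is a minimal hedged sketch of selecting the top-scoring emotion from an API-style response. The field names and scores below mimic typical emotion-API JSON and are assumptions, not Microsoft's documented schema.

```python
# Picking the top-confidence emotion from an emotion-API style response.
# The dict below mimics per-emotion confidence scores such APIs return;
# the exact keys and values are assumptions for illustration only.
def top_emotion(scores: dict) -> tuple:
    """Return the (emotion, confidence) pair with the highest confidence."""
    return max(scores.items(), key=lambda kv: kv[1])

response = {  # hypothetical scores for one face image
    "anger": 0.01, "contempt": 0.02, "disgust": 0.01, "fear": 0.03,
    "happiness": 0.85, "neutral": 0.05, "sadness": 0.02, "surprise": 0.01,
}
print(top_emotion(response))  # -> ('happiness', 0.85)
```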
Conference Paper
There is a growing concern to find an effective teaching and learning methodology during a social-distancing situation, as well as to address the drawbacks in the current educational system of Sri Lanka for students in Key Stage 1. Gamification has proved to make a positive impact on the concentration level, motivation and educational capabilities of students. Although previous research has been successful in introducing various gamified tools, very few are available in the local language. In this study, a gamified learning tool named "Punchi Nanasala" was introduced targeting grade 1 and 2 students, focusing on the subjects of Mathematics, Sinhala language, and Environmental Studies. The tool was developed in three prototypes using a User-Centered Design approach. The experimental and control groups were given a pre-test to measure their current knowledge capacity and a post-test to evaluate their knowledge capacity after the gamified treatment was given. The positive results obtained from the multiple evaluation techniques (emotion detection, mouse-click monitoring, performance analysis, interviews, and surveys) suggested that the tool was successful as a learning approach.
... Since gaze direction is a subpart of detecting emotions through facial expressions [34], [24], considering it individually will not have a significant impact. ...
Article
Full-text available
Improving the User Experience (UX) of mobile devices is of vital importance due to the advent of emerging technologies and the prevalence of using mobile devices. This research aims to develop a model for a mobile device that can suggest adaptive functionalities, based on the current user emotion and the context, by changing the user's negative emotions (sadness and anger) into positive ones. As a proof of concept, a keyboard named "Emotional Keyboard" was developed iteratively through five prototypes using Evolutionary Prototyping. Action Research was adopted as the methodology along with User-Centered Design (UCD), which further included two user surveys. The first three prototypes were implemented to decide the most optimal perception from facial expressions and text analytics. Subsequent prototypes provided affective functions to the user, such as listening to music, playing a game, or chatting with friends, based on the detected negative emotion and the context. The evaluation of each of the prototypes was performed iteratively with user participation. The final (fifth) prototype evaluation was done in two phases: an individual analysis (to measure the performance of each user separately) and an overall analysis (a general analysis that averaged all the results from the individual analysis and measured the performance of the overall model). Results of both analyses showed that eventually the Emotional Keyboard was able to correctly predict the adaptive functions for the user, and it did not terminate its learning process, as users' feedback was continuously used to improve its performance. In conclusion, an "Adaptive System with User Control" was developed, thus improving the acceptability and usability of a mobile device, which aligns with the research aim.
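One simple way the "most optimal perception" could be derived from facial expressions and text analytics is a weighted combination of the two probability vectors, sketched below. The 0.6/0.4 weights and the emotion set are assumptions for illustration; the work itself trains an ANN for this fusion step (see the thesis abstract further down).

```python
# Illustrative fusion of facial and text emotion scores into one prediction.
# A simple weighted average is shown; the cited work trains an ANN instead,
# so the w_face/w_text weights here are purely an assumption for the sketch.
EMOTIONS = ["happiness", "sadness", "anger", "neutral"]

def fuse(face_probs, text_probs, w_face=0.6, w_text=0.4):
    """Combine two per-emotion probability vectors and return the winner."""
    combined = [w_face * f + w_text * t for f, t in zip(face_probs, text_probs)]
    return EMOTIONS[combined.index(max(combined))], combined

face = [0.1, 0.6, 0.2, 0.1]   # e.g. the facial classifier leans "sadness"
text = [0.2, 0.5, 0.1, 0.2]   # text analytics agrees
print(fuse(face, text))        # -> ('sadness', [...])
```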
... For this purpose, a geometric model of the facial shape was used, which was then superimposed on the face and the detected emotions were assigned based on the movement patterns of the grid. In addition to the two systems mentioned above, there are a number of other solutions (see, e.g., [2,9]). ...
Chapter
Full-text available
Facial expressions convey the vast majority of the emotional information contained in social utterances. From the point of view of affective intelligent systems, it is therefore important to develop appropriate emotion recognition models based on facial images. As a result of the high interest of the research and industrial community in this problem, many ready-to-use tools are being developed, which can be used via suitable web APIs. In this paper, two of the most popular APIs were tested: Microsoft Face API and Kairos Emotion Analysis API. The evaluation was performed on images representing 8 emotions—anger, contempt, disgust, fear, joy, sadness, surprise and neutral—distributed in 4 benchmark datasets: Cohn-Kanade (CK), Extended Cohn-Kanade (CK+), Amsterdam Dynamic Facial Expression Set (ADFES) and Radboud Faces Database (RaFD). The results indicated a significant advantage of the Microsoft API in the accuracy of emotion recognition both in photos taken en face and at a 45° angle. Microsoft's API also has an advantage in the larger number of recognised emotions: contempt and neutral are also included.
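A minimal sketch of the per-emotion accuracy computation such an evaluation relies on; the true and predicted labels below are placeholders, not results from CK, CK+, ADFES or RaFD.

```python
# Per-emotion recognition accuracy over a labelled benchmark (sketch only;
# the labels here are placeholders, not the paper's benchmark results).
from collections import defaultdict

def per_emotion_accuracy(y_true, y_pred):
    """Fraction of correctly recognised images for each ground-truth emotion."""
    hits, totals = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        totals[t] += 1
        hits[t] += (t == p)
    return {e: hits[e] / totals[e] for e in totals}

y_true = ["joy", "anger", "joy", "fear", "neutral", "joy"]
y_pred = ["joy", "anger", "sadness", "fear", "neutral", "joy"]
print(per_emotion_accuracy(y_true, y_pred))
# -> {'joy': 0.666..., 'anger': 1.0, 'fear': 1.0, 'neutral': 1.0}
```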
... Facial detection consists of locating faces in images. Many different methods have been proposed for automatic facial detection in the literature over the years, and the current state of the art provides accuracies which match or even surpass human capabilities [7]. In addition, the latest proposals are less and less influenced by the position of the face (preferably frontal and upright) as well as by the lighting conditions [8]. ...
... One of the most common added dimensions is dominance, which ranges from submissive to dominant and is related to the user's control while displaying an emotion (see Figure 1). Nowadays, there exist many different available tools able to automatically perform each of the abovementioned tasks [7]. However, rather than trying to combine the data provided by these tools, we have focused on available tools capable of performing the three stages altogether, and which provide as an output the interpretation of the emotion contained in the displayed facial expressions. ...
... Laboratory databases mainly cover the seven basic emotions: Neutral, Anger, Fear, Sadness, Disgust, Surprise and Happiness. But other, finer emotions also exist [4], [5], and new methods like FACS [6] were developed in order to cover this larger range of emotional expression. Most facial emotion recognition systems use Artificial Neural Networks (ANNs) trained "offline" with such databases. ...
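As a hedged sketch of the kind of ANN trained "offline" on such databases, the small Keras CNN below classifies 48x48 grayscale face crops into the seven basic emotions listed above. The architecture, input size and layer widths are illustrative assumptions, not any cited system's design.

```python
# Sketch of a small CNN for the seven basic emotions, trained "offline"
# on a laboratory database. Architecture and input size are assumptions.
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # neutral, anger, fear, sadness, disgust, surprise, happiness

model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),          # grayscale face crop (assumed)
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, ...) would then run offline on the database.
```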
... Further, Section 2.2.4 explains why other methods such as typing pattern analysis [14], voice [13], body gestures [11], gaze direction [12] and brain signals [15] are inconvenient for detecting emotions when using mobile devices, thus suggesting that facial expressions and text analytics might be the only ways to detect emotions in this scenario. The limitations when practically detecting emotions through these two techniques should be tested using state-of-the-art APIs and SDKs [25], [26], and a method to obtain the most meaningful emotion by combining these two techniques must be investigated, which will lead to multimodal emotion detection. ...
... APIs and SDKs such as Microsoft Cognitive Services, Affectiva, Emotient, Kairos, Sky biometric application, Eyris EmoVu, OpenFace, FaceReader 6.1, Nviso and Vision API [26], [25] have been introduced. ...
... Although the above two methods have such limitations, both these methods have accurate APIs and SDKs which can be easily used when detecting emotions [25], [26]. ...
Thesis
In the field of Human-Computer Interaction (HCI), improving the User Experience (UX) of mobile devices has become a necessity due to the emergence of smart technologies and the popularity of using mobile devices in day-to-day life rather than traditional desktop systems. The main aim of this research is to develop a model for a mobile device that can suggest adaptive functionalities based on the current user emotion and the context. To the best of our knowledge, there is no system that provides not only adaptive interfaces but also adaptive functionalities within a mobile device, which would enhance the acceptability and usability of that particular system. As a proof of concept, a keyboard named "Emotional Keyboard" was developed iteratively through five prototypes using Evolutionary Prototyping. As the methodology, Action Research was followed together with User-Centered Design (UCD), which also included two user surveys. Initial decisions were taken after conducting the first survey; Prototypes 1, 2 and 3 were then developed and evaluated with the participation of 40 users. Prototype 3 incorporated an Artificial Neural Network (ANN), trained on the data collected during the evaluations of Prototypes 1 and 2, which can decide the most optimal emotion by combining the emotions from facial expressions and text. Consequently, Prototypes 4 and 5 were developed, which can suggest the most affective function based on the emotion and the context (location, time and user activity), incorporating the data collected in the second survey to build a "Preference Tree" that contains the probabilities of choosing functions and also considers frequently used functions. The evaluation of Prototypes 4 and 5 was carried out with the participation of 18 users, where individual and general analyses were performed and proved that, with time, the model was able to correctly suggest adaptive functions. This evaluation yields the conclusion of the research, thus paving the way to an "Adaptive System with User Control" to improve the acceptability and usability of a mobile device, which aligns with the research aim.
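To make the "Preference Tree" idea concrete, here is a hedged sketch of a lookup structure mapping an (emotion, context) pair to function probabilities. The nesting, context names and probability values are illustrative assumptions, not the thesis's actual data.

```python
# Illustrative "Preference Tree": a nested dict from emotion -> context -> the
# probability of each adaptive function. All values here are assumptions.
PREFERENCE_TREE = {
    "sadness": {
        "home_evening": {"listen_to_music": 0.5, "chat_with_friends": 0.3,
                         "play_a_game": 0.2},
        "work_daytime": {"listen_to_music": 0.7, "play_a_game": 0.3},
    },
    "anger": {
        "home_evening": {"play_a_game": 0.6, "listen_to_music": 0.4},
    },
}

def suggest(emotion: str, context: str) -> str:
    """Return the most probable adaptive function for (emotion, context)."""
    options = PREFERENCE_TREE.get(emotion, {}).get(context, {})
    return max(options, key=options.get) if options else "no_suggestion"

print(suggest("sadness", "home_evening"))  # -> 'listen_to_music'
```

In the thesis, the probabilities in such a structure would be updated from user feedback over time, which is what keeps the model's learning process from terminating.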