Article
PDF available

Image Representations for Facial Expression Coding

Abstract

The Facial Action Coding System (FACS) (9) is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These methods include unsupervised learning techniques for finding basis images, such as principal component analysis, independent component analysis and local feature analysis, and supervised learning techniques such as Fisher's linear discriminants. These data-driven bases are compared to Gabor wavelets, in which the basis images are predefined. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96% accuracy for classifying 12 facial actions. The ICA representation emplo...
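As a rough, hypothetical sketch of how such a comparison of representations might be set up, the snippet below projects facial-action images onto PCA and ICA bases and scores each representation with a cosine nearest-neighbour classifier. The image size, number of components, classifier, and synthetic placeholder data are all assumptions for illustration, not details of the original study.

```python
# Hypothetical comparison of data-driven bases (PCA vs. ICA) for classifying
# facial actions; synthetic placeholder data stands in for real face images.
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def score_representation(basis, X, y):
    """Learn the basis on the training folds, project the images onto it,
    and classify the coefficients with a cosine nearest-neighbour rule."""
    model = make_pipeline(basis,
                          KNeighborsClassifier(n_neighbors=1, metric="cosine"))
    return cross_val_score(model, X, y, cv=5).mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 40 * 40))   # stand-in for 40x40 face patches
    y = np.repeat(np.arange(12), 10)      # 12 facial-action classes
    print("PCA:", score_representation(PCA(n_components=30), X, y))
    print("ICA:", score_representation(FastICA(n_components=30, max_iter=1000), X, y))
```

A Gabor or local-feature-analysis representation could be compared in the same way by swapping out the first stage of the pipeline.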
... Some other works have addressed visual-facial emotion detection under special circumstances, such as when the subject suffers from depression [23] or is tired [1], or for the detection of mild cognitive impairment [24]. Research has also been conducted on the stability of facial expressions over time [25] and on the use of context for classification and form standardization [22,26]. Other approaches to emotion detection rely on alternative modalities, i.e., the collection and processing of non-visual data. ...
Article
Full-text available
This paper presents the most recent version of V-GRAFFER, a novel system that we have been developing for Visual GRoup AFFEct Recognition research. This version includes new algorithms and features, as well as a new application extension for using and evaluating the new features. Specifically, we present novel methods to collect facial samples from other e-lecture applications. We use screen captures of lectures, which we track and connect with samples throughout the duration of e-educational events. We also developed and evaluated three new algorithms for drawing conclusions on group concentration states. As V-GRAFFER requires such complex functionalities to be combined, many corresponding microservices have been developed. The current version of V-GRAFFER allows drawing real-time conclusions from the input samples collected through the use of any tutoring system, which in turn enables real-time feedback and adjustment of the course material.
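As a purely illustrative sketch (not the actual V-GRAFFER algorithms), one simple way to draw a conclusion about group concentration from per-face samples is to average per-student scores over a time window and map the group mean to a coarse label. The data structure, thresholds, and labels below are assumptions.

```python
# Hypothetical aggregation of per-student concentration scores into a
# group-level state; thresholds and labels are illustrative assumptions.
from dataclasses import dataclass
from statistics import mean

@dataclass
class FaceSample:
    student_id: str
    timestamp: float          # seconds into the e-lecture
    concentration: float      # per-face score in [0, 1] from some classifier

def group_state(samples, window_start, window_end):
    """Average per-student scores inside a time window, then map the
    group mean onto a coarse concentration label."""
    in_window = [s for s in samples if window_start <= s.timestamp < window_end]
    if not in_window:
        return "no data"
    per_student = {}
    for s in in_window:
        per_student.setdefault(s.student_id, []).append(s.concentration)
    group_mean = mean(mean(scores) for scores in per_student.values())
    if group_mean >= 0.66:
        return "concentrated"
    if group_mean >= 0.33:
        return "partially concentrated"
    return "distracted"
```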
... Computerized human face recognition has been an active research area in recent years. There is a wide range of military and civilian applications, such as identity authentication, access control, digital libraries, bankcard identification, mug-shot searching, and surveillance systems [1,2,3]. A general statement of the face recognition problem can be formulated as follows: identify one or more persons from still images or a video sequence of a scene by comparing the input images with faces stored in a database [4,5]. ...
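The problem statement quoted above can be illustrated with a minimal, hypothetical matching routine: compare a query face descriptor against descriptors stored in a database and return the closest identity if it is similar enough. The descriptor source, cosine-similarity measure, and threshold are assumptions, not part of the cited works.

```python
# Hypothetical nearest-match identification against a database of stored
# face descriptors; the descriptors and threshold are assumptions.
import numpy as np

def identify(query, database, threshold=0.6):
    """Return the best-matching identity, or None if nothing in the
    database is similar enough. `database` maps name -> descriptor vector."""
    q = query / np.linalg.norm(query)
    best_name, best_sim = None, -1.0
    for name, descriptor in database.items():
        sim = float(q @ (descriptor / np.linalg.norm(descriptor)))  # cosine similarity
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim >= threshold else None
```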
Thesis
Abstract Human faces are very similar in structure, with minor differences from person to person. Furthermore, changes in lighting conditions, facial expressions, and pose variations further complicate the face recognition task. This thesis presents an efficient, low-complexity hybrid approach to face recognition, which combines image preprocessing based on histogram equalization, the wavelet transform, and multiple neural networks. The preprocessing step applies histogram equalization to improve contrast and compensate for differences in camera input gain. The preprocessed image is then compressed with the wavelet transform to reduce the number of input pixels, which speeds up the system and provides invariance to minor changes in the image samples. Multiple neural networks are trained to deal with the remaining variation (rotation, scale, and deformation). The outputs from the multiple recognizers are combined in a single decision unit, which decides on the classified face and the associated information already stored in the database. Because of this arbitration structure, the system recognizes stimulus images correctly without being affected by shifts in position, rotation, scaling, or distortions in shape. The system also recognizes images with changes in angle and expression. Only a small set of images per person in the training database is needed to produce acceptable classification accuracy. The results obtained show that the proposed system gives a very encouraging performance. The proposed system was implemented as a software package using C++ and Visual Basic. Keywords: Pattern Recognition, Face Detection, Human Face Recognition, Computer Vision, Feature Extraction, Artificial Neural Networks, Machine Learning, Pattern Classification, Multilayer Perceptrons, Statistical Classification.
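A minimal sketch of the pipeline described in this abstract, under stated assumptions, is given below: histogram equalization, wavelet compression, several small neural networks, and majority-vote arbitration. The libraries (OpenCV, PyWavelets, scikit-learn) and all parameters are choices made for illustration only; the thesis itself was implemented in C++ and Visual Basic.

```python
# Illustrative sketch of a hybrid face-recognition pipeline:
# histogram equalization -> wavelet compression -> committee of MLPs
# whose outputs are arbitrated by majority vote. Not the thesis code.
import numpy as np
import cv2
import pywt
from sklearn.neural_network import MLPClassifier

def preprocess(gray_face):
    """Equalize contrast, then keep the low-frequency wavelet band as a
    compressed, mildly shift-tolerant feature vector."""
    eq = cv2.equalizeHist(gray_face.astype(np.uint8))
    approx, _details = pywt.dwt2(eq, "haar")   # approximation coefficients
    return approx.ravel()

def train_committee(X, y, n_nets=3):
    """Train several small MLPs with different random seeds."""
    return [MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                          random_state=seed).fit(X, y)
            for seed in range(n_nets)]

def arbitrate(committee, x):
    """Majority vote over the committee's predictions."""
    votes = [net.predict(x.reshape(1, -1))[0] for net in committee]
    return max(set(votes), key=votes.count)
```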
... Computerized human face recognition has been an active research area in recent years. There is a wide range of military and civilian applications, such as identity authentication, access control, digital libraries, bankcard identification, mug-shot searching, and surveillance systems [1,2,3]. ...
Article
Full-text available
Abstract A neural network-based upright frontal face detection system is presented in this paper. A retinally connected neural network examines small windows of an image and decides whether each window contains a face. The system arbitrates between multiple networks to improve performance over a single network. A straightforward procedure is presented for aligning positive face examples for training. Negative examples are collected with a bootstrap algorithm, which adds false detections to the training set as training progresses. This eliminates the difficult task of manually selecting non-face training examples, which would have to span the entire space of non-face images. Simple heuristics, such as the fact that faces rarely overlap in images, can further improve accuracy. Keywords: Face detection, Computer vision, Artificial neural networks
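The bootstrap idea described above can be sketched as follows, purely as an illustration: a detector is trained, run over images known to contain no faces, and its false detections are added to the negative training set before retraining. The window size, classifier, and helper functions are assumptions rather than the paper's implementation.

```python
# Hypothetical bootstrap loop for mining negative (non-face) examples:
# false detections on face-free images become new negatives each round.
import numpy as np
from sklearn.neural_network import MLPClassifier

WINDOW = 20  # hypothetical 20x20 detection window

def sliding_windows(image, step=4):
    """Yield flattened WINDOW x WINDOW patches from a grayscale image."""
    h, w = image.shape
    for r in range(0, h - WINDOW + 1, step):
        for c in range(0, w - WINDOW + 1, step):
            yield image[r:r + WINDOW, c:c + WINDOW].ravel()

def bootstrap_train(face_windows, scenery_images, rounds=3):
    """Alternate between training and mining false positives as negatives.
    `scenery_images` are grayscale images assumed to contain no faces."""
    negatives = [img[:WINDOW, :WINDOW].ravel() for img in scenery_images[:1]]
    clf = None
    for _ in range(rounds):
        X = np.vstack([face_windows, negatives])
        y = np.array([1] * len(face_windows) + [0] * len(negatives))
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300).fit(X, y)
        for img in scenery_images:
            for patch in sliding_windows(img):
                if clf.predict(patch.reshape(1, -1))[0] == 1:
                    negatives.append(patch)   # false detection -> new negative
    return clf
```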
Article
In this paper, we present our research towards building V-GRAFFER, a system for Visual GRoup AFFect Recognition. Specifically, V-GRAFFER aims at the development and provision of services for recognizing the emotional state of groups of people. At this stage of the V-GRAFFER development, the implemented services are oriented towards detecting and drawing conclusions about students who attend educational events such as lectures, question and answer (Q&A) sessions, or lab participation. Specific functionalities of the current version of V-GRAFFER include processes for data collection, lecture experiments under real conditions, algorithms for sample auto-extraction, and optimized approaches. These functionalities allowed the collection of data from various educational events under real conditions and the creation of flexible databases of appropriate samples for drawing conclusions about the emotions of groups of people. Furthermore, we devised and implemented innovative algorithms to identify and classify group samples from recorded educational events. These algorithms have been evaluated and improved via continuous cycles of development, testing, and evaluation in a variety of experiments. Finally, we constructed complete databases of group samples which are correlated with each other based on time, depth of time, and educational setting.
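As an illustration only (not the V-GRAFFER implementation), automatic extraction of group samples from a recorded educational event might sample frames at a fixed interval, detect faces in each sampled frame, and store every crop with its timestamp. The Haar-cascade detector and all parameters below are assumptions.

```python
# Hypothetical auto-extraction of face samples from a lecture recording:
# sample frames periodically, detect faces, keep each crop with its timestamp.
import cv2

def extract_group_samples(video_path, every_n_seconds=5):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    step = int(fps * every_n_seconds)
    samples, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
                samples.append({"t": frame_idx / fps,          # seconds into event
                                "face": frame[y:y + h, x:x + w]})
        frame_idx += 1
    cap.release()
    return samples
```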
Article
Full-text available
Internal affective states produce external manifestations such as facial expressions. In humans, the Facial Action Coding System (FACS) is widely used to objectively quantify the elemental facial action units (AUs) that build complex facial expressions. A similar system has been developed for macaque monkeys - the Macaque Facial Action Coding System (MaqFACS); yet unlike the human counterpart, which has already been partially replaced by automatic algorithms, this system still requires labor-intensive coding. Here, we developed and implemented the first prototype for automatic MaqFACS coding. We applied the approach to the analysis of behavioral and neural data recorded from freely interacting macaque monkeys. The method achieved high performance in recognition of six dominant AUs, generalizing between conspecific individuals (Macaca mulatta) and even between species (Macaca fascicularis). The study lays the foundation for fully automated detection of facial expressions in animals, which is crucial for investigating the neural substrates of social and affective states. Significance Statement: MaqFACS is a comprehensive coding system designed to objectively classify facial expressions based on elemental facial movements designated as action units (AUs). It allows the comparison of facial expressions across individuals of the same or different species based on manual scoring of videos, a labor- and time-consuming process. We implemented the first automatic prototype for AU coding in macaques. Using machine learning, we trained the algorithm on video frames with AU labels and showed that, after parameter tuning, it classified six AUs in new individuals. Our method demonstrates concurrent validity with manual MaqFACS coding and supports the usage of automated MaqFACS. Such automatic coding is useful not only for social and affective neuroscience research but also for monitoring animal health and welfare.
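A hedged sketch of the general recipe described here: train one classifier per action unit on labelled video frames and test how well it generalizes to individuals held out of training. The feature representation, the SVM classifier, and the leave-one-subject-out evaluation below are illustrative assumptions, not the authors' code.

```python
# Hypothetical per-AU classification with leave-one-subject-out evaluation,
# mirroring the idea of generalizing to new individuals.
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def evaluate_au_classifiers(X, au_labels, subject_ids):
    """X: per-frame feature vectors; au_labels: dict mapping AU name to a
    binary label per frame; subject_ids: which individual each frame is from.
    Returns mean leave-one-subject-out accuracy per AU."""
    logo = LeaveOneGroupOut()
    scores = {}
    for au, y in au_labels.items():
        clf = SVC(kernel="rbf", C=1.0)
        scores[au] = cross_val_score(clf, X, y, cv=logo,
                                     groups=subject_ids).mean()
    return scores
```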
Chapter
In this paper, we present the next step in our research on computer-based detection of the emotions of groups of people, with special emphasis on educational technologies. Specifically, we present our results towards optimization of the automated classification method. In previous experiments on collecting, exporting, and classifying faces to create databases of group samples, we evaluated the basic approaches to automatic classification of faces. From the evaluation results, we concluded that additional approaches were needed to optimize the classification algorithms, as the error rates of the basic automated classification approaches were not negligible. We present the optimization approaches we considered and evaluated, which result in both lower error rates and a decrease in the time required to create databases of group samples. As a result, the automatic processes of correct classification and quick creation of complete databases of group samples make a significant contribution to the field of computer-based emotion detection for groups of people.