Zakia Hammal's research while affiliated with Carnegie Mellon University and other places

Publications (87)

Article
We propose an automatic method to estimate self-reported pain intensity based on facial landmarks extracted from videos. For each video sequence, we decompose the face into four different regions and pain intensity is measured by modeling the dynamics of facial movement using the landmarks of these regions. A formulation based on Gram matrices is u...
Article
Full-text available
We propose an automatic method to estimate self-reported pain based on facial landmarks extracted from videos. For each video sequence, we decompose the face into four different regions and the pain intensity is measured by modeling the dynamics of facial movement using the landmarks of these regions. A formulation based on Gram matrices is used fo...
Preprint
Full-text available
We propose an automatic method to estimate self-reported pain based on facial landmarks extracted from videos. For each video sequence, we decompose the face into four different regions and the pain intensity is measured by modeling the dynamics of facial movement using the landmarks of these regions. A formulation based on Gram matrices is used fo...
Article
Full-text available
In this mini-review, we discuss the fundamentals of using technology in mental health diagnosis and tracking. We highlight those principles using two clinical concepts: (1) cravings and relapse in the context of addictive disorders and (2) anhedonia in the context of depression. This manuscript is useful for both clinicians wanting to understand th...
Article
Full-text available
Pain is often characterized as a fundamentally subjective phenomenon; however, all pain assessment reduces the experience to observables, with strengths and limitations. Most evidence about pain derives from observations of pain-related behavior. There has been considerable progress in articulating the properties of behavioral indices of pain; espe...
Conference Paper
The standard clinical assessment of pain is limited primarily to self-reported pain or clinician impression. While the self-reported measurement of pain is useful, in some circumstances it cannot be obtained. Automatic facial expression analysis has emerged as a potential solution for an objective, reliable, and valid measurement of pain. In this s...
Conference Paper
The goal of the Face and Gesture Analysis for Health Informatics workshop is to share and discuss the achievements as well as the challenges in using computer vision and machine learning for automatic human behavior analysis and modeling in clinical research and healthcare applications. The workshop aims to promote current research and support growt...
Preprint
We propose an automatic method for pain intensity measurement from video. For each video, pain intensity was measured using the dynamics of facial movement using 66 facial points. Gram matrices formulation was used for facial points trajectory representations on the Riemannian manifold of symmetric positive semi-definite matrices of fixed rank. Cur...
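The Gram-matrix formulation used in these papers can be sketched concisely: centering the landmark coordinates of a frame and taking pairwise inner products yields a symmetric positive semi-definite matrix whose rank is at most the spatial dimension, which is what places the trajectories on the manifold of fixed-rank PSD matrices. A minimal illustrative sketch (the function name and toy landmarks are invented, not the authors' code):

```python
# Sketch: Gram matrix of one frame's facial landmarks.
# For n landmarks in 2-D, G = Xc @ Xc.T (Xc = centered coordinates)
# is symmetric positive semi-definite with rank at most 2.

def gram_matrix(landmarks):
    n = len(landmarks)
    # Center the configuration to remove translation.
    cx = sum(p[0] for p in landmarks) / n
    cy = sum(p[1] for p in landmarks) / n
    xc = [(x - cx, y - cy) for x, y in landmarks]
    # G[i][j] = <xc_i, xc_j>
    return [[xc[i][0] * xc[j][0] + xc[i][1] * xc[j][1]
             for j in range(n)] for i in range(n)]

# Four toy landmarks forming a unit square.
points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
G = gram_matrix(points)
```

A video is then represented as a trajectory of such matrices, compared with a rank-respecting distance on the PSD manifold.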
Article
Head movement is an important but often overlooked component of emotion and social interaction. Examination of regularities and differences in the head movements of infant-mother dyads over time and across dyads can shed light on whether and how mothers and infants alter their dynamics over the course of an interaction to adapt to each other. One way to...
Conference Paper
Full-text available
With few exceptions, most research in automated assessment of depression has considered only the patient's behavior to the exclusion of the therapist's behavior. We investigated the interpersonal coordination (synchrony) of head movement during patient-therapist clinical interviews. Our sample consisted of patients diagnosed with major depressive d...
Article
Full-text available
Background: Craniofacial microsomia (CFM) is a congenital condition associated with malformations of the bone and soft tissue of the face and the facial nerves, all of which have the potential to impair facial expressiveness. We investigated whether CFM-related variation in expressiveness is evident as early as infancy. Methods: Participants were...
Chapter
This chapter evaluates the state-of-the-art in the generalizability of affective facial computing (AFC) systems across domains and also reviews approaches that are designed to improve such generalizability. We address two questions that we regard as critical to the field: (1) To what extent do AFC systems generalize to new data sources, or domains?...
Chapter
Pain typically is measured by patient self-report, but self-reported pain is difficult to interpret and may be impaired or in some circumstances not possible to obtain. Automatic, objective assessment of pain from video or camera input is emerging as a powerful alternative. We review the current state of the art in automatic, objective assessment o...
Conference Paper
Full-text available
Recent breakthroughs in deep learning using automated measurement of face and head motion have made possible the first objective measurement of depression severity. While powerful, deep learning approaches lack interpretability. We developed an interpretable method of automatically measuring depression severity that uses barycentric coordinates of...
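Barycentric coordinates give an interpretable encoding: a point is expressed as weights over reference vertices, and the weights sum to one. A generic 2-D sketch under that definition (the triangle and point are invented for illustration; this is not the paper's feature pipeline):

```python
# Barycentric coordinates (w0, w1, w2) of point p with respect to
# triangle (a, b, c): p = w0*a + w1*b + w2*c with w0 + w1 + w2 = 1.

def barycentric(p, a, b, c):
    v0 = (b[0] - a[0], b[1] - a[1])
    v1 = (c[0] - a[0], c[1] - a[1])
    v2 = (p[0] - a[0], p[1] - a[1])
    den = v0[0] * v1[1] - v1[0] * v0[1]  # 2 * signed triangle area
    w1 = (v2[0] * v1[1] - v1[0] * v2[1]) / den
    w2 = (v0[0] * v2[1] - v2[0] * v0[1]) / den
    return (1.0 - w1 - w2, w1, w2)

# The point (1, 1) sits at equal weights for this triangle.
w = barycentric((1.0, 1.0), (0.0, 0.0), (3.0, 0.0), (0.0, 3.0))
```

Because the weights are a convex combination, they are directly readable as "how close" the observed configuration is to each reference, which is what makes such a representation interpretable.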
Article
Full-text available
Background: Deficits in motor movement in children with autism spectrum disorder (ASD) have typically been characterized qualitatively by human observers. Although clinicians have noted the importance of atypical head positioning (e.g. social peering and repetitive head banging) when diagnosing children with ASD, a quantitative understanding of head...
Article
Objective: To compare facial expressiveness (FE) of infants with and without craniofacial microsomia (cases and controls, respectively) and to compare phenotypic variation among cases in relation to FE. Design: Positive and negative affect was elicited in response to standardized emotion inductions, video recorded, and manually coded from video...
Conference Paper
Action unit detection in infants relative to adults presents unique challenges. Jaw contour is less distinct, facial texture is reduced, and rapid and unusual facial movements are common. To detect facial action units in spontaneous behavior of infants, we propose a multi-label Convolutional Neural Network (CNN). Eighty-six infants were recorded du...
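The multi-label formulation means each AU gets its own sigmoid output and threshold, so several AUs can be detected in the same frame (unlike a softmax over mutually exclusive classes). A minimal sketch of that output stage, with invented logits and AU labels (not the paper's network):

```python
import math

# Multi-label output stage: one independent sigmoid per action unit,
# thresholded to a binary detection. Several AUs may fire at once.

def detect_aus(logits, labels, thresh=0.5):
    probs = [1.0 / (1.0 + math.exp(-z)) for z in logits]
    return [lab for lab, p in zip(labels, probs) if p >= thresh]

detected = detect_aus([2.0, -1.0, 0.1], ["AU6", "AU12", "AU20"])
```

Training such a head uses a per-label binary loss (e.g. binary cross-entropy) summed over AUs rather than a single categorical loss.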
Article
Depression is one of the most common psychiatric disorders worldwide, with over 350 million people affected. Current methods to screen for and assess depression depend almost entirely on clinical interviews and self-report scales. While useful, such measures lack objective, systematic, and efficient ways of incorporating behavioral observations tha...
Article
Full-text available
We investigated the dynamics of head movement in mothers and infants during an age-appropriate, well-validated emotion induction, the Still Face paradigm. In this paradigm, mothers and infants play normally for 2 minutes (Play), followed by 2 minutes in which the mothers remain unresponsive (Still Face), and then 2 minutes in which they resume nor...
Article
Full-text available
Previous work in automatic affect analysis (AAA) has emphasized static expressions to the neglect of the dynamics of facial movement and considered head movement only a nuisance variable to control. We investigated whether the dynamics of head and facial movements apart from specific facial expressions communicate affect in infants, an under-studie...
Conference Paper
Recent advances in interaction technology and methodology have fostered a shift of interest from the individual to the social interaction of multiple individuals. Complex phenomena such as interpersonal synchrony and influence are increasingly investigated in order to provide a better understanding of human interaction and guideline...
Article
Full-text available
Current methods for depression assessment depend almost entirely on clinical interview or self-report ratings. Such measures lack systematic and efficient ways of incorporating behavioral observations that are strong indicators of psychological disorder. We compared a clinical interview of depression severity with automatic measurement in 48 partic...
Conference Paper
Full-text available
Current methods for depression assessment depend almost entirely on clinical interview or self-report ratings. Such measures lack systematic and efficient ways of incorporating behavioral observations that are strong indicators of psychological disorder. We compared a clinical interview of depression severity with automatic measurement in 48 partic...
Conference Paper
Affect communication and recognition are influenced by a number of contextual, social, and psychological factors. Personality and natural expressiveness of the individual, previous experiences, the ongoing task, and pre-existing relationships with the people involved help individuals interpret and respond to social interactions. An important relate...
Article
Full-text available
Modelling, analysis and synthesis of behaviour are the subject of major efforts in computing science, especially when it comes to technologies that make sense of human-human and human-machine interactions. This article outlines some of the most important issues that still need to be addressed to ensure substantial progress in the field, namely (1)...
Article
The current paper investigates the merits of the Log-Normal and Log-Gabor filters for the dynamic analysis and segmentation of facial behavior during facial expression sequences. First, a spatial filtering method based on the Log-Normal filters is introduced for the holistic processing of the face towards the automatic segmentation of consecutive “...
Conference Paper
In affective computing, head motion too often has been considered only a nuisance variable, something to control when aligning face images for analysis of facial expression or identity. Yet, recent research suggests that head motion is critical to the communication of emotion and the regulation of face-to-face interaction. Using a generic head trac...
Conference Paper
Pain is a complex phenomenon that requires consideration of individual differences in the index person, those with whom they interact, and the social context. Pain displays vary, with some patients highly expressive regarding their pain and others exhibiting minimal discomfort. Given myriad individual differences among patients, their families, and...
Article
Previous literature suggests that depression impacts the vocal timing of both participants and clinical interviewers but is mixed with respect to acoustic features. To investigate further, 57 middle-aged adults (men and women) with Major Depressive Disorder and their clinical interviewers (all women) were studied. Participants were interviewed for depr...
Article
The relationship between nonverbal behavior and severity of depression was investigated by following depressed participants over the course of treatment and video recording a series of clinical interviews. Facial expressions and head pose were analyzed from video using manual and automatic systems. Both systems were highly consistent for FACS actio...
Article
In automatic emotional expression analysis, head motion has been considered mostly a nuisance variable, something to control when extracting features for action unit or expression detection. As an initial step toward understanding the contribution of head motion to emotion communication, we investigated the interpersonal coordination of rigid head...
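Interpersonal coordination of this kind is often quantified with lagged (windowed) cross-correlation between the two partners' motion time series; the sign and position of the peak lag suggest who leads. The following is an illustrative sketch of the core lagged Pearson correlation with toy series, not the paper's implementation:

```python
# Lagged Pearson correlation between two head-motion time series,
# the basic ingredient of windowed cross-correlation analyses of
# interpersonal coordination. (Illustrative sketch only.)

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def lagged_corr(x, y, lag):
    # Positive lag: y follows x by `lag` samples.
    if lag >= 0:
        return pearson(x[:len(x) - lag], y[lag:])
    return lagged_corr(y, x, -lag)

# Toy series: partner B mirrors partner A one sample later.
a = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
b = [0.5] + a[:-1]
```

In this toy example the correlation peaks at lag 1, reflecting that B follows A by one sample; sliding this computation over short windows yields the usual windowed cross-correlation profile.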
Conference Paper
Full-text available
We investigated the dynamics of head motion in parents and infants during an age-appropriate, well-validated emotion induction, the Face-to-Face/Still-Face procedure. Participants were 12 ethnically diverse 6-month-old infants and their mother or father. During infant gaze toward the parent, infant angular amplitude and velocity of pitch and yaw de...
Conference Paper
This is an introduction to the Second International Workshop on Context Based Affect Recognition (CBAR 2013), held in conjunction with Affective Computing and Intelligent Interaction, 2-5 September 2013, Geneva, Switzerland.
Conference Paper
Previous research in automated expression analysis has focused on discrete actions with little attention to their timing either within or between persons. We investigated the interpersonal coordination of rigid head motion in 11 intimate couples with a history of interpersonal violence. The couples participated in both a low-conflict task in which...
Chapter
This chapter addresses recent advances in computer vision for facial expression classification. The authors present the different processing steps of the problem of automatic facial expression recognition. They describe the advances of each stage of the problem and review the future challenges towards the application of such systems to everyday lif...
Conference Paper
Previous efforts suggest that occurrence of pain can be detected from the face. Can intensity of pain be detected as well? The Prkachin and Solomon Pain Intensity (PSPI) metric was used to classify four levels of pain intensity (none, trace, weak, and strong) in 25 participants with previous shoulder injury (McMaster-UNBC Pain Archive). Participant...
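The PSPI metric referenced here is defined over FACS action-unit intensities as PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43, where the intensity-coded AUs are scored 0-5 and AU43 (eye closure) is 0 or 1, giving a 0-16 scale that can then be binned into intensity levels such as none, trace, weak, and strong. A direct sketch:

```python
# Prkachin and Solomon Pain Intensity (PSPI) from FACS AU intensities:
# AU4 brow lowerer, AU6 cheek raiser, AU7 lid tightener,
# AU9 nose wrinkler, AU10 upper-lip raiser (each scored 0-5),
# AU43 eye closure (0 or 1).

def pspi(au4, au6, au7, au9, au10, au43):
    return au4 + max(au6, au7) + max(au9, au10) + au43

score = pspi(2, 3, 1, 0, 2, 1)  # 2 + 3 + 2 + 1 = 8
```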
Article
The current paper presents an automatic and context sensitive system for the dynamic recognition of pain expression among the six basic facial expressions and neutral on acted and spontaneous sequences. A machine learning approach based on the Transferable Belief Model, successfully used previously to categorize the six basic facial expressions in...
Conference Paper
The automatic extraction of the most relevant information in a video sequence made of continuous affective states is an important challenge for efficient human-machine interaction systems. In this paper a method is proposed to solve this problem based on two steps: first, the automatic segmentation of consecutive emotional segments based on the res...
Conference Paper
An automatic system for facial expression recognition should be able to recognize on-line multiple facial expressions (i.e. "emotional segments") without interruption. The current paper proposes a new method for the automatic segmentation of "emotional segments" and the dynamic recognition of the corresponding facial expressions in video sequences....
Article
Recently, Ambadar, Schooler and Cohn (2005) compared facial expression recognition performance with static and dynamic stimuli. To control for task difficulty, the researchers equated the information content in their dynamic and in their so-called “multi-static” condition, in which the frames of the dynamic stimuli were separated by noise masks and...
Conference Paper
Facial expression is considered to be the most reliable source of information when judging the pain intensity experienced by another (Poole & Craig, 1992). Nonetheless, observers in this situation show a systematic under-estimation bias (Harrison, 1993; Kappesser & Williams, 2002). In the medical domain, this bias results in under-treatment, whic...
Conference Paper
Holistic and feature-based processing have both been shown to be involved differently in the analysis of facial expressions by human observers. The current paper proposes a novel method based on the combination of both approaches for the segmentation of "emotional segments" and the dynamic recognition of the corresponding facial expressions. The prop...
Article
This chapter addresses recent advances in computer vision for facial expression classification. The authors present the different processing steps of the problem of automatic facial expression recognition. They describe the advances of each stage of the problem and review the future challenges towards the application of such systems to everyday lif...
Article
Humans recognize basic facial expressions effortlessly. Yet, despite a considerable amount of research, this task remains elusive for computer vision systems. Here, we compared the behavior of one of the best computer models of facial expression recognition (Z. Hammal, L. Couvreur, A. Caplier, & M. Rombaut, 2007) with the behavior of human observer...
Conference Paper
Full-text available
Automatic recognition of Pain expression has potential medical significance. In this paper we present results of the application of an automatic facial expression recognition system on sequences of spontaneous Pain expression. Twenty participants were videotaped while undergoing thermal heat stimulation at nonpainful and painful intensities. Pain w...
Article
A method for the classification of facial expressions from the analysis of facial deformations is presented. This classification process is based on the transferable belief model (TBM) framework. Facial expressions are related to the six universal emotions, namely Joy, Surprise, Disgust, Sadness, Anger, Fear, as well as Neutral. The proposed classi...
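The fusion step at the heart of TBM-style classifiers assigns belief masses to subsets of the frame of expressions and combines sources conjunctively, with the empty set absorbing conflict. A minimal sketch of that combination rule with invented masses (illustrative only, not the paper's system):

```python
from itertools import product

# Conjunctive combination of two basic belief assignments, the core
# fusion step in Transferable Belief Model style classifiers. Masses
# are assigned to subsets of the frame of expressions; the empty set
# collects conflict between the sources.

def combine(m1, m2):
    out = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        key = a & b  # set intersection; empty set = conflict
        out[key] = out.get(key, 0.0) + wa * wb
    return out

frame = frozenset({"joy", "surprise", "neutral"})
# Two sources: one favors joy, one hesitates between joy and surprise.
m_mouth = {frozenset({"joy"}): 0.7, frame: 0.3}
m_eyes = {frozenset({"joy", "surprise"}): 0.6, frame: 0.4}
fused = combine(m_mouth, m_eyes)
```

Here the two sources agree, so 0.7 of the fused mass lands on {joy} and no mass falls on the empty (conflict) set; keeping mass on subsets rather than singletons is what lets the model represent doubt between expressions.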
Conference Paper
Despite a significant amount of research on automatic classification of facial expressions, recognizing a facial expression remains a complex task for a computer vision system. Our approach is based on a close look at the mechanisms of the human visual system, the best facial expression recognition system yet. The proposed mo...
Article
The paper presents a system that makes use of the fusion information paradigm to integrate two different sorts of information in order to improve the facial expression classification accuracy over a single feature based classification. The Discriminant Non-negative Matrix Factorization (DNMF) approach is used to extract a first set of features and...
Article
The aim of this work is the analysis and classification of facial expressions. Experiments in psychology show that humans are able to recognize emotions based on the visualization of the temporal evolution of some characteristic fiducial points. Thus, we first propose an automatic system for the extraction of the permanent facial features (...
Article
In this paper, we are dealing with the problem of facial features segmentation (mouth, eyes and eyebrows). A specific parametric model is defined for each deformable feature, each model being able to take into account all the possible deformations. In order to initialize each model, some characteristic points are extracted on each image to be proce...
Article
The goal of this work is the analysis and classification of facial expressions. Experiments in psychology have shown that human beings are able to recognize emotions on a face from the visualization of the temporal evolution of certain characteristic points of that face. We have therefore...
Conference Paper
A novel and efficient approach to facial expression classification based on the belief theory and data fusion is presented and discussed. The considered expressions correspond to three (joy, surprise, disgust) of the six universal emotions as well as the neutral expression. A robust contour segmentation technique is used to generate an expression s...
Conference Paper
This paper presents a system for classifying facial expressions based on a data fusion process relying on the Belief Theory (BeT). Four expressions are considered: joy, surprise, disgust as well as neutral. The proposed system is able to take into account intrinsic doubt about emotion in the recognition process and to handle the fact that each pers...
Conference Paper
This paper presents a system of facial expressions classification based on a data fusion process using the belief theory. The considered expressions correspond to the six universal emotions (joy, surprise, disgust, sadness, anger, fear) as well as to the neutral expression. Since some of the six basic emotions are difficult to simulate by non-actor...
Conference Paper
Full-text available
An efficient algorithm for iris segmentation and its application to automatic and non-intrusive gaze tracking and vigilance estimation is presented and discussed. A luminance gradient technique is used to fit the irises from face images. A robust preprocessing which mimics the human retina is used in such a way that a robust system to luminance...
Article
Five expressions are commonly considered to characterize human emotional states: Happiness, Surprise, Anger, Sadness and Neutral. Different measures can be extracted from speech signals to characterize these expressions, for example the pitch, the energy, the SPI and the speech rate. Automatic classification of the five expressions based on these f...
Conference Paper
The aim of our work is automatic facial expression analysis based on the study of temporal evolution of facial feature boundaries. Previously, we developed a robust and fast algorithm for accurate lip contour segmentation (Eveno, N. et al., IEEE Trans. Circuits and Systems for Video Technology, 2004). Now, we focus on eye and eyebrow boundary extra...
Article
The aim of our work is the segmentation of permanent facial features for the purpose of facial expression recognition. In a previous work, we developed a robust and fast algorithm for accurate lip contour segmentation. In this paper, we focus on eye and eyebrow boundary extraction. The segmentation of eyes and brows involves three steps: fi...
Article
In this article, we address the automatic extraction of the contours of the permanent facial features, namely the eyes, the eyebrows, and the lips. For each feature considered, a specific parametric model able to account for all possible deformations is defined. During the initialization phase, characteristic point...
Article
Full-text available
Six expressions are generally considered to characterize human emotional states: Smile, Surprise, Anger, Sadness, Disgust, and Neutral. Different measures can be extracted from the speech signal to characterize these expressions, namely the fundamental frequency, the energy, and the SPI (ratio of the HF and LF energies in...
Article
The aim of our work is the dynamic analysis of facial emotions. Since experiments in psychology have shown that humans are able to recognize facial expressions from the analysis of the temporal evolution of keypoints of facial features, we are working on facial feature extraction. In a previous work, we developed a robust and fast algorith...
Article
This article presents a system for facial expression classification using belief theory. The considered expressions are the six universal emotions (joy, disgust, surprise, sadness, anger, fear) as well as the neutral one. Classification is based on a data fusion process. Since some of the six emotions are difficult to simulate by non-actor people...

Citations

... Our approach also enables proactively identifying opportunities for service, rather than requiring expressions indicating unhappiness [25] or pain [21] before being able to react. ...
... These factors may impact the performance of the model. The challenge of developing an efficient model necessitates the use of databases that contain samples from different environments where pain may occur and are encrypted according to standards that enable sharing among international collaborations (Prkachin and Hammal, 2021). ...
... Evaluation using two benchmark datasets demonstrates that the proposed solution improves the accuracy compared to the state-of-the-art. This work develops on the model preliminary proposed in Szczapa et al. [31], exploiting the idea of modeling the dynamics of the face with Gram matrices on the manifold of positive semidefinite matrices of fixed rank. Compared to this preliminary approach, the main novelties of this paper are: ...
... The authors combined the use of a multitask learning neural network to predict pain scores with an ensemble learning model to linearly combine the multi-dimensional pain scores to estimate the VAS. Erekat et al. [25] proposed a spatio-temporal Convolutional Neural Network-Recurrent Neural Network (CNN-RNN) for the automatic measurement of self-reported pain and observed pain intensity, respectively. The authors proposed a new loss function that explores the added value of combining different self-reported pain scales for a reliable assessment of pain intensity from facial expression. ...
... Human states can be measured and monitored continuously and unobtrusively using ubiquitous devices. In the analysis of linking human states with health and well-being, fine-grained behavioral indicators can be engineered and generated from the sensory data to provide valuable information of vital biometric responses and behavioral patterns in natural settings [93]. In this vein, Dong et al. ...
... In DSEM, any path can be modeled as random across dyads, including intercepts, slopes, or even the residual variances to model differences in volatility across dyads with a location-scale model (Hedeker et al., 2008, 2012; Jongerling et al., 2015). Readers familiar with the time-series literature may note that this longitudinal form maps onto a lag-1 vector autoregressive model (VAR-1), which is typically used to model moment-to-moment dynamics of two variables simultaneously (e.g., Bringmann et al., 2018; Chen et al., 2020; see Li et al., 2022 for a comparison of fitting VAR-1 models in Stan, JAGS, and Mplus). In this paper, we suggest a variation of the VAR-1 model such that the two time-series correspond to two members of a dyad rather than two variables. ...
... In this section, we briefly discuss the different works performed in the field of depression analysis using multimodal inputs like text, audio signals, facial emotions, 34 and multimodal sentiment analysis. 35 ...
... We compared our method with many state-of-the-art AU detection methods, including LSVM [47], JPML [30], DRML [48], APL [49], EAC-Net [25], DSIN [50], ARL [24], SR-ERL [51], AU-GCN [23], AU-RCNN [29], ML-GCN [52], MS-CAM [53], LP-Net [12], FACS3D-Net [54], UGN-B [55], JÂA-Net [26], MHSA-FFN [27], SEV-Net [28], HMP-PS [56], D-PAttNet [21], CISNet [13] and CaFNet [57]. ...
... Access control - Face validation is well beyond the capability of a perfect-precision digital technology, comparing a face to a standard enroller. As web cameras are widely used for face-to-face device access, however, the sign appears to be quite small [7,8]. 2.3 Challenges in face recognition - The face recognition process encounters the following challenges [9][10][11][12]. ...
... Pain recognition has potential medical significance because it is supportive of those who are unable to communicate pain verbally (e.g., newborns, individuals with pronounced cognitive impairments [71,73]). Hammal et al. [48] created a spontaneous pain expression database where pain is induced experimentally by means of a Peltier-based, computerized thermal stimulator. One non-painful (1°C below the individual pain threshold) and two painful thermal stimuli (2-3°C above the individual pain threshold) are applied to the left lower leg in random order. ...