Marwa Mahmoud

University of Cambridge, Cambridge, England, United Kingdom

Publications (9) · 1.59 Total impact

  • ABSTRACT: Individuals with Autism Spectrum Conditions (ASC) have marked difficulties using verbal and non-verbal communication for social interaction. The ASC-Inclusion project helps children with ASC learn how emotions can be expressed and recognised by playing games in a virtual world. The platform assists children with ASC in understanding and expressing emotions through facial expressions, tone of voice and body gestures. It combines several state-of-the-art technologies in one comprehensive virtual world, including analysis of users' gestures and facial and vocal expressions using a standard microphone and webcam, training through games, text communication with peers and smart agents, animation, and video and audio clips. We present recent findings and evaluations of this serious game platform and provide results for the different modalities.
    Full-text · Conference Paper · May 2015
  • Marwa M. Mahmoud · Tadas Baltrušaitis · Peter Robinson

    No preview · Conference Paper · Jan 2014
  • ABSTRACT: Prolonged rhythmic body gestures have been shown to correlate with several types of psychological disorders. To date, there is no automatic descriptor that can robustly detect these behaviours. In this paper, we propose a cyclic gesture descriptor that can detect and localise rhythmic body movements by taking advantage of both colour and depth modalities. We show experimentally that our rhythmic descriptor successfully localises rhythmic gestures such as hand fidgeting, leg fidgeting and rocking, performing significantly better than a majority-vote classification baseline. Our experiments also demonstrate the importance of fusing both modalities, with a significant increase in performance compared to either modality alone.
    No preview · Conference Paper · Dec 2013
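    A note on the technique: the paper's descriptor is not reproduced here, but as a rough, hypothetical sketch of the underlying idea, the snippet below scores how periodic a 1-D motion signal is via autocorrelation and fuses colour- and depth-derived signals with a simple threshold. The function names, lag window and threshold are illustrative assumptions, not values from the paper.
    ```python
    import numpy as np

    def periodicity_score(signal, min_lag=5, max_lag=60):
        """Strongest normalised autocorrelation peak of a 1-D motion signal
        (e.g. per-frame hand or knee displacement) within a lag window.
        Values near 1 suggest strongly rhythmic (cyclic) movement."""
        x = np.asarray(signal, dtype=float)
        x = x - x.mean()
        if np.allclose(x, 0):
            return 0.0
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
        ac = ac / ac[0]                                    # normalise by zero-lag energy
        lo, hi = min_lag, min(max_lag, len(ac) - 1)
        return float(ac[lo:hi + 1].max()) if hi >= lo else 0.0

    def is_rhythmic(colour_motion, depth_motion, threshold=0.6):
        """Hypothetical fusion: flag a segment as rhythmic if the averaged
        colour and depth periodicity scores clear a threshold."""
        score = 0.5 * (periodicity_score(colour_motion) +
                       periodicity_score(depth_motion))
        return score > threshold
    ```
    For instance, rocking at roughly 2 Hz in a 30 fps recording would produce a strong autocorrelation peak near a lag of 15 frames, well inside the window above.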
  • ABSTRACT: We investigate the capabilities of automatic nonverbal behavior descriptors to identify indicators of psychological disorders such as depression, anxiety, and post-traumatic stress disorder. We seek to confirm and enrich the present state of the art, which is predominantly based on qualitative manual annotations, with automatic quantitative behavior descriptors. In this paper, we propose four nonverbal behavior descriptors that can be automatically estimated from visual signals. We introduce a new dataset called the Distress Assessment Interview Corpus (DAIC), which includes 167 dyadic interactions between a confederate interviewer and a paid participant. Our evaluation on this dataset shows correlation of our automatic behavior descriptors with specific psychological disorders as well as a generic distress measure. Our analysis also includes a deeper study of self-adaptor and fidgeting behaviors based on detailed annotations of where these behaviors occur.
    Full-text · Conference Paper · Apr 2013
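    As a minimal illustration of the kind of analysis described above (the variable names and numbers below are hypothetical, not values from the DAIC corpus), one can correlate a per-session automatic behaviour descriptor with a distress measure:
    ```python
    import numpy as np

    # Hypothetical per-participant values: an automatically estimated
    # fidgeting rate (events per minute) and a generic distress score.
    fidgeting_rate = np.array([1.2, 0.4, 2.5, 0.9, 3.1, 1.8])
    distress_score = np.array([14.0, 6.0, 22.0, 9.0, 27.0, 16.0])

    # Pearson correlation between the behaviour descriptor and distress.
    r = np.corrcoef(fidgeting_rate, distress_score)[0, 1]
    print(f"Pearson r = {r:.2f}")
    ```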
  • Marwa M. Mahmoud · Tadas Baltrusaitis · Peter Robinson
    ABSTRACT: Crowdsourcing is becoming increasingly popular as a cheap and effective tool for multimedia annotation. However, the idea is not new, and can be traced back to Charles Darwin. He was interested in studying the universality of facial expressions in conveying emotions, so he had to consider a global population. Access to different cultures allowed him to reach more general conclusions. In this paper, we highlight a few milestones in the history of the study of emotion that share the concepts of crowdsourcing. We first consider the study of posed photographs and then move to videos of natural expressions. We present our use of crowdsourcing to label a video corpus of natural expressions, and also to recreate one of Darwin's original emotion judgment experiments. This allows us to compare people's perception of emotional expressions in the 19th and 21st centuries, showing that it remains stable across both culture and time.
    No preview · Conference Paper · Oct 2012
  • ABSTRACT: We present a real-time system for detecting facial action units and inferring emotional states from head and shoulder gestures and facial expressions. The dynamic system uses three levels of inference on progressively longer time scales. Firstly, facial action units and head orientation are identified from 22 feature points and Gabor filters. Secondly, Hidden Markov Models are used to classify sequences of actions into head and shoulder gestures. Finally, a multi-level Dynamic Bayesian Network is used to model the unfolding emotional state based on the probabilities of different gestures. The most probable state over a given video clip is chosen as the label for that clip. The average F1 score for 12 action units (AUs 1, 2, 4, 6, 7, 10, 12, 15, 17, 18, 25, 26), labelled on a frame-by-frame basis, was 0.461. The average classification rate for five emotional states (anger, fear, joy, relief, sadness) was 0.440. Sadness had the highest rate, 0.64, and anger the lowest, 0.11.
    Full-text · Conference Paper · Apr 2011
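    The second inference level classifies sequences of detected actions into gestures with Hidden Markov Models. The sketch below is a toy version of that step (the parameters and gesture labels are illustrative, not the paper's trained models): a scaled forward algorithm scores one observation sequence under two candidate gesture HMMs and picks the more likely.
    ```python
    import numpy as np

    def log_likelihood(obs, start, trans, emit):
        """Scaled forward algorithm: log P(obs | HMM) for discrete observations."""
        alpha = start * emit[:, obs[0]]
        log_p = np.log(alpha.sum())
        alpha = alpha / alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ trans) * emit[:, o]
            c = alpha.sum()
            log_p += np.log(c)
            alpha = alpha / c
        return log_p

    # Toy two-state HMMs over quantised head motions:
    # 0 = vertical move, 1 = horizontal move, 2 = still.
    start = np.array([0.5, 0.5])
    trans = np.array([[0.7, 0.3],
                      [0.3, 0.7]])
    nod_emit   = np.array([[0.8, 0.1, 0.1],   # 'nod' state favours vertical motion
                           [0.1, 0.1, 0.8]])
    shake_emit = np.array([[0.1, 0.8, 0.1],   # 'shake' state favours horizontal motion
                           [0.1, 0.1, 0.8]])

    obs = [0, 0, 2, 0, 0, 2]
    scores = {"head nod":   log_likelihood(obs, start, trans, nod_emit),
              "head shake": log_likelihood(obs, start, trans, shake_emit)}
    print(max(scores, key=scores.get))   # -> head nod
    ```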
  • ABSTRACT: Hand-over-face gestures, a subset of emotional body language, are overlooked by automatic affect inference systems. We propose the use of hand-over-face gestures as a novel affect cue for automatic inference of cognitive mental states. Moreover, affect recognition systems rely on the existence of publicly available datasets; often an approach is only as good as its data. We present the collection and annotation methodology of a 3D multimodal corpus of 108 audio/video segments of natural complex mental states. The corpus includes spontaneous facial expressions and hand gestures labelled using crowd-sourcing and is publicly available.
    Full-text · Conference Paper · Jan 2011
  • Marwa Mahmoud · Peter Robinson
    ABSTRACT: People often hold their hands near their faces as a gesture in natural conversation, which can interfere with affective inference from facial expressions. However, these gestures are valuable as an additional channel for multi-modal inference. We analyse hand-over-face gestures in a corpus of naturalistic labelled expressions and propose the use of those gestures as a novel affect cue for automatic inference of cognitive mental states. We define three hand cues for encoding hand-over-face gestures, namely hand shape, hand action and facial region occluded, serving as a first step in automating the interpretation process.
    Preview · Conference Paper · Jan 2011
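    The three hand cues named above lend themselves to a simple structured encoding. The sketch below is a minimal illustration; the specific value sets are assumptions for the example, not the paper's exact coding scheme.
    ```python
    from dataclasses import dataclass
    from enum import Enum

    class HandShape(Enum):
        OPEN_HAND = "open hand"
        CLOSED_HAND = "closed hand"
        INDEX_FINGER = "index finger"

    class HandAction(Enum):
        TOUCHING = "touching"
        STROKING = "stroking"
        TAPPING = "tapping"

    class FacialRegion(Enum):
        FOREHEAD = "forehead"
        CHEEK = "cheek"
        CHIN = "chin"
        MOUTH = "mouth"

    @dataclass
    class HandOverFaceCue:
        """One hand-over-face annotation: hand shape, hand action, occluded region."""
        shape: HandShape
        action: HandAction
        region: FacialRegion

    # Example: an index finger touching the chin, a typical 'thinking' gesture.
    cue = HandOverFaceCue(HandShape.INDEX_FINGER, HandAction.TOUCHING, FacialRegion.CHIN)
    print(cue)
    ```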