Rana El Kaliouby’s research while affiliated with Massachusetts Institute of Technology and other places

Publications (71)


Large-scale observational evidence of cross-cultural differences in facial behavior
  • Preprint

June 2018 · 42 Reads · Jeffrey M. Girard · Rana el Kaliouby

Self-report studies have found evidence that cultures differ in the display rules they have for facial expressions (i.e., for what is appropriate for different people at different times). However, observational studies of actual patterns of facial behavior have been rare and typically limited to the analysis of dozens of participants from two or three regions. We present the first large-scale evidence of cultural differences in observed facial behavior, including 740,984 participants from 12 countries around the world. We used an Internet-based framework to collect video data of participants in two different settings: in their homes and in market research facilities. Using computer vision algorithms designed for this dataset, we measured smiling and brow furrowing expressions as participants watched television ads. Our results reveal novel findings and provide empirical evidence to support theories about cultural and gender differences in display rules. Participants from more individualist cultures displayed more brow furrowing overall, whereas smiling depended on both culture and setting. Specifically, participants from more individualist countries were more expressive in the facility setting, while participants from more collectivist countries were more expressive in the home setting. Female participants displayed more smiling and less brow furrowing than male participants overall, with the latter difference being more pronounced in more individualist countries. This is the first study to leverage advances in computer science to enable large-scale observational research that would not have been possible using traditional methods.
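
The pipeline this abstract describes (frame-level expression detection aggregated into per-participant base rates, then multilevel regression with participants nested in countries) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' code; the variable names and the statsmodels mixed-model call are assumptions about one reasonable way to fit such a model.

```python
# Minimal sketch (not the authors' code): fit a multilevel model of smile
# base rates with participants nested within countries, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
countries = [f"c{i}" for i in range(12)]
idv = dict(zip(countries, rng.uniform(10, 90, size=12)))  # country-level individualism score

df = pd.DataFrame({
    "country": rng.choice(countries, size=n),
    "setting": rng.choice(["home", "facility"], size=n),
    "gender": rng.choice(["female", "male"], size=n),
})
df["individualism"] = df["country"].map(idv)
# Per-participant base rate: fraction of frames on which a smile was detected.
df["smile_rate"] = rng.beta(2, 8, size=n)

# Random intercept per country; fixed effects for the culture, setting, and
# gender terms plus the setting-by-culture interaction reported in the paper.
model = smf.mixedlm("smile_rate ~ individualism * setting + gender",
                    data=df, groups=df["country"])
print(model.fit().summary())
```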


AM-FED+: An Extended Dataset of Naturalistic Facial Expressions Collected in Everyday Settings

February 2018 · 194 Reads · 25 Citations

IEEE Transactions on Affective Computing

Public datasets have played a significant role in advancing the state of the art in automated facial coding. Many of these datasets contain posed expressions and/or videos recorded in controlled lab conditions with little variation in lighting or head pose. As such, the data do not reflect the conditions observed in many real-world applications. We present AM-FED+, an extended dataset of naturalistic facial response videos collected in everyday settings. The dataset contains 1,044 videos, of which 545 videos (263,705 frames or 21,859 seconds) have been comprehensively manually coded for facial action units. These videos act as a challenging benchmark for automated facial coding systems. All the videos contain gender labels, and a large subset (77%) contain age and country information. Subjects' self-reported liking of and familiarity with the stimuli are also included. We provide automated facial landmark detection locations for the videos. Finally, baseline action unit classification results are presented for the coded videos. The dataset is available to download online at http://bit.ly/2hJH1fu (click "Download" when you arrive at the URL; if you have any trouble accessing the data, the corresponding author, damcduff@microsoft.com, can send the dataset directly).
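
As a rough illustration of how frame-level action unit codes in a dataset like this are typically consumed, the sketch below computes per-video AU base rates from a frame-level label file. The CSV filename and layout (`video_id`, `frame`, binary `AU…` columns) are guesses for illustration, not the dataset's documented format.

```python
# Hypothetical sketch: the AM-FED+ label layout is assumed, not documented here.
# Assumed layout: one row per manually coded frame, binary 0/1 columns per AU.
import pandas as pd

labels = pd.read_csv("amfed_plus_au_labels.csv")  # hypothetical filename
au_cols = [c for c in labels.columns if c.startswith("AU")]

# Base rate per video: fraction of coded frames on which each AU is present.
base_rates = labels.groupby("video_id")[au_cols].mean()
print(base_rates.describe())
```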


Figure: Frequency of facial actions in men and women. The mean fraction of videos in which inner brow raises, outer brow raises, brow furrows, lip corner pulls, and lip corner depressors appeared.
Figure: Regression coefficients for facial action presence and durations (base rates). A frequency odds ratio > 1 means the action was present in responses from more women than men. Lip corner pulls were significantly more frequent and longer in duration in women, inner brow raises were significantly more frequent in women, and brow furrows were significantly more frequent and longer in duration in men.
A large-scale analysis of sex differences in facial expressions
  • Article
  • Full-text available

April 2017 · 465 Reads · 95 Citations

There exists a stereotype that women are more expressive than men; however, research has almost exclusively focused on a single facial behavior, smiling. This large-scale study examines whether women are consistently more expressive than men or whether the effects depend on the emotion expressed. Previous studies of gender differences in expressivity have largely been restricted to data collected in lab settings or data that required labor-intensive manual coding. In the present study, we analyze gender differences in facial behavior as over 2,000 viewers watch a set of video advertisements in their home environments. The facial responses were recorded using participants' own webcams and coded with a new automated facial coding technology. We find that women are not universally more expressive across all facial actions, nor are they more expressive in all positive-valence actions and less expressive in all negative-valence actions. In general, women express actions more frequently than men, and in particular express more positive-valence actions. However, expressiveness is not greater in women for all negative-valence actions and depends on the discrete emotional state.
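
The frequency odds ratio referenced in the figure notes above has a direct computation: the odds of an action appearing in women's videos divided by the odds of it appearing in men's. A minimal sketch with invented counts, not the study's data:

```python
# Sketch of the frequency odds ratio from the figure notes above:
# odds of an action being present in women's videos vs. men's.
def frequency_odds_ratio(present_w, absent_w, present_m, absent_m):
    """Odds ratio > 1: the action appeared in responses from more women."""
    return (present_w / absent_w) / (present_m / absent_m)

# Invented counts: lip corner pulls present in 600 of 1,000 female videos
# and 450 of 1,000 male videos.
print(frequency_odds_ratio(600, 400, 450, 550))  # ~1.83 > 1
```

With these invented counts the ratio is about 1.83, i.e., lip corner pulls were present in responses from more women than men.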


Figure 1. a) Example frames from the videos collected. b) Cropped example images of the facial actions; note that other facial actions may also be present in these examples. 
Figure 2. Multilevel regression estimates for the (a) culture, (b) setting, and (c) gender main effects. Estimates for the (d) setting by culture, (e) gender by culture, and (f) setting by gender interactions. For cultural effects, estimates for the mean individualism score and one standard deviation above and below the mean individualism score are shown. 
Large-Scale Observational Evidence of Cross-Cultural Differences in Facial Behavior

March 2017 · 505 Reads · 56 Citations

Journal of Nonverbal Behavior

(Abstract identical to the June 2018 preprint entry above.)


Applications of Automated Facial Coding in Media Measurement

May 2016 · 138 Reads · 24 Citations

IEEE Transactions on Affective Computing

Facial coding has become a common tool in media measurement, with large companies (e.g., Unilever) using it to test all of their new video ad content. Facial reactions capture the in-the-moment response of an individual, and these data complement self-report measures. Two advancements in affective computing have made measurement possible at scale: (1) computer vision algorithms are used to automatically code sign and message judgments based on facial muscle movements, and (2) video data are collected by recording responses in everyday environments via the viewer's own webcam over the Internet. We present results of online facial coding studies of video ads, movie trailers, political content, and long-form TV shows, and explain how these data can be used in market research. Despite the ability to measure facial behavior in a scalable and quantifiable way, interpreting these data is still challenging without baselines and comparative measures. Over the past four years we have collected and coded over two million responses to everyday media content. This large dataset allows us to calculate reliable normative distributions of responses across different media types. We present these data and argue that they provide a context within which to interpret facial responses more accurately.
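
The norming idea in the last two sentences, interpreting a new ad's facial response against a distribution of past responses, reduces to a percentile lookup. A hedged sketch with made-up numbers (the metric name and distribution are stand-ins, not the paper's actual norms):

```python
# Sketch: score a new ad's aggregate smile metric against a normative
# distribution built from previously coded responses (values invented).
import numpy as np

rng = np.random.default_rng(1)
norm_scores = rng.beta(2, 6, size=10_000)  # stand-in normative distribution

new_ad_score = 0.41  # aggregate smile metric for the ad under test (invented)
percentile = (norm_scores < new_ad_score).mean() * 100
print(f"New ad sits at the {percentile:.0f}th percentile of comparable ads")
```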


AFFDEX SDK: A Cross-Platform Real-Time Multi-Face Expression Recognition Toolkit

May 2016 · 1,753 Reads · 305 Citations · Abdelrahman Mahmoud · Mohammad Mavadati · [...] · Rana el Kaliouby

We present a real-time facial expression recognition toolkit that can automatically code the expressions of multiple people simultaneously. The toolkit is available across major mobile and desktop platforms (Android, iOS, Windows). The system is trained on the world's largest dataset of facial expressions and has been optimized to operate on mobile devices and with very few false detections. The toolkit offers the potential for the design of novel interfaces that respond to users' emotional states based on their facial expressions. We present a demonstration application that provides real-time visualization of the expressions captured by the camera.
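
The processing pattern such a toolkit implies (grab a frame, find every face, classify each, repeat) can be sketched with OpenCV as a stand-in. This is emphatically not the AFFDEX SDK API, only the generic real-time multi-face loop an SDK of this kind wraps.

```python
# Generic real-time multi-face loop using OpenCV as a stand-in;
# this is NOT the AFFDEX SDK API, only the pattern such a toolkit wraps.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)  # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # One detection per visible face; an expression classifier would run here.
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```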


On the Future of Personal Assistants

May 2016 · 329 Reads · 37 Citations

The purpose of this panel is to explore issues that will arise in building future personal assistants (PAs), especially for family use. In this regard, we will consider implications of being an "assistant" and those of being "personal." The target timeframe is 3-10 years out, so that very near-term products will not be discussed. We will elaborate briefly on the kinds of communicative and inferential capabilities such PAs will need, and then examine their social and emotional capabilities. We will discuss pros and cons for their evolution and deployment. In this regard, we will discuss the kinds of support that could be provided by the HCI community in building personal assistant systems that are useful, delightful, functional, controllable, educational, ethical, and secure.




Citations (56)


... Our large-scale dataset was collected using the web-based approach described in [21,22]. The videos were collected worldwide (from 90+ countries) from participants watching commercial ads. ...

Reference:

Automatic Detection of Sentimentality from Facial Expressions
AM-FED+: An Extended Dataset of Naturalistic Facial Expressions Collected in Everyday Settings
  • Citing Article
  • February 2018

IEEE Transactions on Affective Computing

... Freely available facial recognition and emotion detection software has been developed from 5 million videos of normal subjects from 75 countries, resulting in over 2 billion facial frames in which Facial Action Coding System action units were expertly recognized and labeled. [11][12][13][14][15] These data have enabled automatic analysis of the phenomenon known as emotionality, the observable behavioral and physiologic component of emotion. One must recognize that facial movement analysis alone, without contextual information, is unable to determine the true or felt emotional state of an individual. ...

Crowdsourcing facial responses to online videos: Extended abstract
  • Citing Article
  • September 2015

... Third, we explored how Facebook usage is associated with respondents' demographic parameters. We found that men and women had similar numbers of Facebook friends and spent similar amounts of time on Facebook, but women reported a stronger connection to Facebook (presumably because women are more emotional; reviewed in McDuff et al., 2017). Students who self-identified as religious used Facebook slightly less than others (possibly due to the highly negative stigma of social media in religious communities). ...

A large-scale analysis of sex differences in facial expressions

... Several studies have mapped a user's facial expressions to an avatar or illustration viewable by the person with whom the user is communicating [12,74]. While these systems do successfully convey the user's emotion, they require a user to pay attention to both the avatar depicting the expressed emotion and the actual text content of the message, which also demand more mental effort of the user. ...

FAIM: integrating automated facial affect analysis in instant messaging
  • Citing Conference Paper
  • January 2004

... In the emotion literature, there is ongoing debate about the universality of emotional expressions across cultures (see Jack, Garrod, Yu, & Schyns, 2012), and the extent to which facial expressions reflect felt emotions (see Barrett et al., 2019). Cultural differences between collectivist and individualist societies may influence facial behaviour (McDuff, Girard, & Kaliouby, 2017) or facial expression perception (Matsumoto et al., 2002;Tsai et al., 2019). This influence could be mediated by display rules, which vary across cultures in terms of emotion expression (Matsumoto, Takeuchi, Andayani, Kouznetsova, & Krupp, 1998). ...

Large-Scale Observational Evidence of Cross-Cultural Differences in Facial Behavior

Journal of Nonverbal Behavior

... They have used a cognitive system algebra combined with a neural network model to recognize facial expressions online. A method for collecting and analyzing facial responses over the web was introduced in [31]. The proposed framework was utilized to crowdsource over three million face videos in response to thousands of media clips ranging from advertisements to movie trailers to TV shows and political debates. ...

Crowdsourcing facial responses to online videos: Extended abstract
  • Citing Conference Paper
  • September 2015

... EMG measurements of the zygomaticus and the corrugator muscles and Affdex measurements were performed simultaneously. Subsequently, EMG activity of the zygomaticus and corrugator muscles was directly compared to the FACS-oriented Affdex action units lip corner puller and brow lowerer [43]. This allows for a direct comparison of measurement sensitivity, as these action units represent the visible correlates to the underlying muscles [29]. ...

Facial Action Unit Detection Using Active Learning and an Efficient Non-linear Kernel Approximation
  • Citing Conference Paper
  • December 2015

... Until recently, the measurement of facial expressions required the use of electromyography protocols, an unfriendly technique that is, again, limited to laboratory settings. However, the automatic analysis of facial expressions offered new possibilities (Sikka et al., 2015; McDuff et al., 2016; McDuff & el Kaliouby, 2017). ...

Applications of Automated Facial Coding in Media Measurement
  • Citing Article
  • May 2016

IEEE Transactions on Affective Computing

... It makes the interaction deeply engaging, with the artist, observer, and the art all merging together. The study of McDuff et al. [7] in 2016 reveals that art based on real-time emotional data can enhance the emotional involvement of those participating and strengthen the relationship between the participants and how they perceive the art. Here, the very essence of art becomes a living thing that responds or changes according to the perceived emotional climate created by its audience. ...

AFFDEX SDK: A Cross-Platform Real-Time Multi-Face Expression Recognition Toolkit
  • Citing Conference Paper
  • May 2016

... In UX research, it therefore seems important to better understand emotional responses to this new technology. Yet only a few studies have investigated these (Cohen et al., 2016; Purington et al., 2017; Yang et al., 2019). When looking into the way emotional responses to CAs are studied (e.g., in Yang et al., 2019), we see that well-established scales from the UX design field are used, such as the PANAS scale developed by Watson et al. (1988). ...

On the Future of Personal Assistants
  • Citing Conference Paper
  • May 2016