Predicting Spontaneous Facial Expressions from EEG

Mohamed S. Benlamine1, Maher Chaouachi1, Claude Frasson1 and Aude Dufresne2
1Computer Science and Operations Research Department, University of Montreal
2Communication Department, University of Montreal

Poster · June 2016 · International Conference on Intelligent Tutoring Systems, Volume 9684, Zagreb, Croatia
DOI: 10.13140/RG.2.2.20532.55680
Abstract
The current study focuses on building a real-time emotion model to automatically detect emotions directly from brain signals. This model analyses the learner's emotional state, which is very useful to intelligent tutoring systems. An experiment was conducted to record the neural activity and facial micro-expressions of participants while they looked at picture stimuli from the IAPS (International Affective Picture System). Camera-based facial expression detection software (FACET) was used to assess the facial micro-expressions of each participant with high accuracy. In the training phase, a machine learning algorithm was fed with time-domain and frequency-domain features of one-second EEG signals, with the corresponding facial expression data as ground truth. The classifier provides outputs representing the dynamics of facial emotional reactions in a fully automatic and non-intrusive way, without the need for a camera. Using our approach, we reached 92% accuracy.
Introduction
Detecting the learner's emotional state is very important to intelligent tutoring systems [3, 4].
Study Objective: Prediction of facial expressions [2] from physiological brain signals only.
Gather data from:
Electroencephalography (EEG) headset (Emotiv EPOC)
Camera-based facial expression detection program (FACET) [1]
Picture stimuli from the IAPS (International Affective Picture System) [5]
Train machine learning algorithms:
Using facial expression data as ground truth
Building EEG-based facial expression models
Using our approach, we have reached 92% accuracy.
Methods
In order to elicit the participants' emotions:
An experiment was conducted with 20 participants (7 women, 13 men, aged 21-35),
using 30 selected IAPS pictures with different affective ratings.
To build the EEG-based affective model:
Record the EEG and facial micro-expressions (1) of users exposed to the IAPS pictures, using the Emotiv headset and the FACET software.
For each FACET frame (every 1/6 s), compute the frequency-domain (2) and time-domain (3) features from the corresponding sliding window of 1 s of EEG data (a sketch of this dataset-construction loop follows below).
Train machine learning models (4) that correlate the micro-expression probabilities with the EEG features.
Test the accuracy of the models in predicting the emotions from the EEG alone.
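The following is a minimal sketch of how such a dataset could be assembled: for each FACET frame, the preceding one second of EEG is taken as a sliding window and turned into a feature vector, with the FACET emotion probabilities as the regression targets. The function and variable names (build_dataset, extract_features, facet_times, facet_probs) are illustrative assumptions, not part of the original software.

```python
import numpy as np

# eeg: (n_samples, n_channels) array of raw EEG
# fs: EEG sampling rate in Hz (128 Hz for the Emotiv EPOC)
# facet_times: timestamps (s) of FACET frames, one every 1/6 s
# facet_probs: (n_frames, n_emotions) FACET emotion probabilities (ground truth)
# extract_features: returns one feature vector per 1-s window
#                   (see the feature sketch after Table 1)
def build_dataset(eeg, fs, facet_times, facet_probs, extract_features):
    X, y = [], []
    win = int(fs)  # one second of EEG samples
    for t, probs in zip(facet_times, facet_probs):
        end = int(round(t * fs))
        if end < win:
            continue                    # not enough history for a full 1-s window
        window = eeg[end - win:end, :]  # sliding 1-s window ending at the FACET frame
        X.append(extract_features(window, fs))
        y.append(probs)                 # facial-expression probabilities as targets
    return np.asarray(X), np.asarray(y)
```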
Three machine learning algorithms were used to predict the numeric values of each emotion category:
IBk (k-nearest neighbours classifier)
Random Forest (a classifier constructing a forest of random trees)
RepTree (a fast decision tree learner)
We used 10-fold cross-validation in the test phase (see the sketch below).
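The study used the Weka implementations of these learners; the sketch below shows an analogous setup in scikit-learn, with a 1-nearest-neighbour regressor standing in for IBk, a random-forest regressor, and a plain decision-tree regressor standing in for REPTree, evaluated with 10-fold cross-validation on the CC, MAE, and RMSE metrics reported in Table 2. The hyperparameters and the metric implementations are assumptions, not details reported in the poster.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import KFold, cross_val_predict

# X: EEG feature matrix; y: one column of FACET probabilities (e.g. "Joy")
def evaluate(model, X, y):
    pred = cross_val_predict(model, X, y,
                             cv=KFold(n_splits=10, shuffle=True, random_state=0))
    cc = np.corrcoef(y, pred)[0, 1]            # correlation coefficient (CC)
    mae = np.mean(np.abs(y - pred))            # mean absolute error (MAE)
    rmse = np.sqrt(np.mean((y - pred) ** 2))   # root mean squared error (RMSE)
    return cc, mae, rmse

models = {
    "IBk (k=1)": KNeighborsRegressor(n_neighbors=1),
    "Random Forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "RepTree-like": DecisionTreeRegressor(random_state=0),  # stand-in for Weka's REPTree
}
# usage (with hypothetical arrays X, y):
#   for name, model in models.items():
#       print(name, evaluate(model, X, y))
```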
Bibliography
1. Littlewort, G., et al.: The computer expression recognition toolbox (CERT). In: IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011). IEEE (2011)
2. Ekman, P.: Emotions Revealed: Recognizing Faces and Feelings to Improve Communication and Emotional Life. Macmillan (2007)
3. Chaouachi, M., Frasson, C.: Mental workload, engagement and emotions: an exploratory study for intelligent tutoring systems. In: Intelligent Tutoring Systems. Springer (2012)
4. Liu, Y., Sourina, O., Nguyen, M.K.: Real-time EEG-based emotion recognition and its applications. In: Transactions on Computational Science, pp. 256-277. Springer (2011)
5. Lang, P.J., Bradley, M.M., Cuthbert, B.N.: International Affective Picture System (IAPS): Affective ratings of pictures and instruction manual. Technical Report A-8 (2008)
Conclusions
In this study, we investigated the predictability of facial micro-expressions from the EEG signal.
Our methodology and experimental design ensured a reliable dataset.
From this dataset, EEG-based facial expression models were extracted with high accuracy.
In future work, we will integrate our affective models into a real-time application to support adaptation in an intelligent tutoring system.
Results
Random Forest outperforms the IBk (k = 1 neighbour) and RepTree methods, with a higher correlation coefficient and lower error rates for all emotion categories.
Our approach provides a simple and reliable way to capture the emotional reactions of the user that can be used in intelligent tutoring systems, learning, games, neurofeedback, and VR environments.
Table 2: Performance of Machine Learning methods (CC = correlation coefficient, MAE = mean absolute error, RMSE = root mean squared error)

Emotion   | IBk (CC / MAE / RMSE)    | Random Forest (CC / MAE / RMSE) | RepTree (CC / MAE / RMSE)
Joy       | 0.8528 / 0.0174 / 0.0545 | 0.9076 / 0.0225 / 0.0483        | 0.7174 / 0.0298 / 0.0702
Anger     | 0.8794 / 0.0518 / 0.0907 | 0.9216 / 0.0534 / 0.0753        | 0.8282 / 0.0684 / 0.1041
Surprise  | 0.854 / 0.0163 / 0.0395  | 0.8965 / 0.0175 / 0.0346        | 0.757 / 0.024 / 0.0484
Fear      | 0.8891 / 0.0431 / 0.075  | 0.9164 / 0.0477 / 0.0686        | 0.7754 / 0.065 / 0.1015
Neutral   | 0.8741 / 0.0537 / 0.1    | 0.9074 / 0.0639 / 0.0927        | 0.7428 / 0.0849 / 0.1352
Contempt  | 0.79 / 0.0341 / 0.0664   | 0.8526 / 0.0351 / 0.0575        | 0.6307 / 0.0488 / 0.0807
Disgust   | 0.8792 / 0.0316 / 0.0585 | 0.916 / 0.0327 / 0.0504         | 0.6307 / 0.0488 / 0.0807
Sadness   | 0.8892 / 0.0311 / 0.0628 | 0.9203 / 0.0341 / 0.0552        | 0.827 / 0.0443 / 0.0759
Positive  | 0.8528 / 0.0518 / 0.105  | 0.9124 / 0.0582 / 0.0869        | 0.7591 / 0.0765 / 0.1266
Negative  | 0.8569 / 0.0998 / 0.1608 | 0.9034 / 0.1052 / 0.1362        | 0.797 / 0.1264 / 0.1834
Figure 1: Pipeline of dataset construction for EEG-based facial expression recognition
Table 1: The computed features from EEG signals

Frequency-domain EEG features (5 features): delta [1-4 Hz[, theta [4-8 Hz[, alpha [8-12 Hz[, beta [12-25 Hz[, and gamma [25-40 Hz[
Time-domain EEG features (12 features): Mean, Standard Error, Median, Mode, Standard Deviation, Sample Variance, Kurtosis, Skewness, Range, Minimum, Maximum and Sum
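A minimal sketch of how the 17 features of Table 1 could be computed for a single one-second window is shown below, assuming band power is estimated from a Welch power spectral density and that channels are averaged before feature extraction; both choices, and the handling of the mode of a continuous signal, are assumptions rather than details reported in the poster.

```python
import numpy as np
from scipy import signal, stats

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 25), "gamma": (25, 40)}

def extract_features(window, fs):
    """Compute the 5 frequency-domain and 12 time-domain features of Table 1
    for one 1-s EEG window (n_samples x n_channels)."""
    x = window.mean(axis=1)   # average channels (one possible choice, assumed here)
    n = x.size

    # Frequency-domain: band power summed from the Welch power spectral density.
    freqs, psd = signal.welch(x, fs=fs, nperseg=min(n, 128))
    band_power = [psd[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in BANDS.values()]

    # Time-domain: the 12 descriptive statistics listed in Table 1.
    time_feats = [
        x.mean(),                                  # Mean
        x.std(ddof=1) / np.sqrt(n),                # Standard Error
        np.median(x),                              # Median
        float(stats.mode(x, keepdims=False).mode), # Mode
        x.std(ddof=1),                             # Standard Deviation
        x.var(ddof=1),                             # Sample Variance
        stats.kurtosis(x),                         # Kurtosis
        stats.skew(x),                             # Skewness
        x.max() - x.min(),                         # Range
        x.min(),                                   # Minimum
        x.max(),                                   # Maximum
        x.sum(),                                   # Sum
    ]
    return np.array(band_power + time_feats)
```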
Emails: ms.benlamine@umontreal.ca | chaouacm@iro.umontreal.ca | frasson@iro.umontreal.ca | aude.dufresne@umontreal.ca