Maher Chaouachi's research while affiliated with McGill University and other places
Users’ performance is known to be impacted by their emotional states. To better understand this relationship, different situations can be simulated during which the users’ emotional reactions are analyzed through sensors such as eye tracking and EEG. In addition, virtual reality environments provide an immersive simulation context that induces high-intensity emotions such as excitement. Extracting excitement from EEG provides more precise measures than other methods; however, it is not always possible to wear an EEG headset in a virtual reality environment. In this paper, we present an alternative approach to the use of EEG for excitement detection using only eye movements. Results showed that there is a correlation between eye movements and the excitement index extracted from EEG. Five machine learning algorithms were used to predict the excitement trend exclusively from eye tracking. Results revealed that we can detect the offline excitement trend directly from eye movements with a precision of 92% using a deep neural network.
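As a rough illustration of the prediction task described above, the sketch below trains a single classifier (plain logistic regression, just one of many candidates alongside a deep network) on synthetic eye-movement features. The feature names, data, and labels are invented for the example and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical eye-movement features per 1 s window:
# [mean fixation duration, saccade amplitude, pupil diameter] (standardized).
n = 200
X = rng.normal(size=(n, 3))

# Synthetic binary label: 1 = rising excitement trend, 0 = falling.
# In the study, labels would come from the EEG excitement index instead.
true_w = np.array([1.5, -2.0, 0.8])
y = (X @ true_w + rng.normal(scale=0.3, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic regression trained by batch gradient descent.
w = np.zeros(3)
b = 0.0
lr = 0.5
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= lr * (X.T @ (p - y)) / n
    b -= lr * np.mean(p - y)

pred = (sigmoid(X @ w + b) > 0.5).astype(float)
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.2f}")
```

In practice one would hold out a test set and compare several algorithms, as the abstract describes; this sketch only shows the shape of the feature-to-trend mapping.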
Interaction with games can induce emotional reactions which can have an impact on players’ game experience and performance. Physiological sensors such as EEG and eye tracking represent an important means of tracking these emotional reactions. In addition, virtual reality isolates the players from the external environment, strengthening the emotional measures. In this paper, we present an exploratory study of the use of eye tracking for game adaptation according to the players’ excitement. Results showed that there exists a relationship between the modification of the game’s speed and the EEG excitement index, and a correlation between eye movement and excitement as well. These results suggest that eye tracking could be a valid support for, or replacement of, EEG data in game adaptation.
The purpose of this work is to analyze the learners’ visual and brain behaviors during clinical reasoning. An experimental study was conducted to record gaze and EEG data of 15 novice medical students as they interacted with a computer-based learning environment in order to treat medical cases. We describe our approach to track the learners’ reasoning process using the visual scanpath followed during the clinical diagnosis and present our methodology to assess the learners’ brain activity using the engagement and the workload cerebral indexes. We determine which visual and EEG features are related to the students’ performance and analyze the relationship between the students’ visual behavior and brain activity.
Argumentation is a mechanism to support different forms of reasoning, such as decision making and persuasion, and is always cast in the light of critical thinking. In recent years, several computational approaches to argumentation have been proposed to detect conflicting information, make the best decision with respect to the available knowledge, and update our own beliefs when new information arrives. The common point of all these approaches is that they assume a purely rational behavior of the involved actors, be they humans or artificial agents. However, this is not the case, as humans have been shown to behave differently, mixing rational and emotional attitudes to guide their actions. Some works have claimed that there exists a strong connection between the argumentation process and the emotions felt by the people involved in it. We advocate a complementary, descriptive and experimental method, based on the collection of emotional data about the way human reasoners handle emotions during debate interactions. Across different debates, people's argumentation in plain English is correlated with the emotions automatically detected from the participants, their engagement in the debate, and the mental workload required to debate. Results show several correlations among emotions, engagement and mental workload with respect to the argumentation elements. For instance, when two opposite opinions are in conflict, this is reflected in a negative way on the debaters' emotions. Besides their theoretical value for validating and inspiring computational argumentation theory, these results have applied value for developing artificial agents meant to argue with human users or to assist users in the management of debates.
The current study focuses on building a real-time emotion model to automatically detect emotions directly from brain signals. This model analyses the learner’s emotional state, which is very useful to intelligent tutoring systems. An experiment was conducted to record the neural activity and facial micro-expressions of participants while they looked at picture stimuli from the IAPS (International Affective Picture System). Camera-based facial expression detection software (FACET) was used to assess the facial micro-expressions of a participant with high accuracy. In the training phase, a machine learning algorithm was fed with time-domain and frequency-domain features of one-second EEG signals, with the corresponding facial expression data as ground truth. The classifier provides outputs representing the dynamics of facial emotional reactions in a fully automatic and non-intrusive way, without the need for a camera. Using our approach, we have reached 92% accuracy.
In this paper, we present novel work on predicting facial expressions from physiological signals of the brain. The main contributions of this paper are twofold: a) investigation of the predictability of facial micro-expressions from EEG; b) identification of the features relevant to the prediction. To reach our objectives, an experiment was conducted and we proceeded in three steps: i) we recorded the facial expressions and corresponding EEG signals of participants while they looked at picture stimuli from the IAPS (International Affective Picture System); ii) we fed machine learning algorithms with time-domain and frequency-domain features of one-second EEG signals, with the corresponding facial expression data as ground truth in the training phase; iii) using the trained classifiers, we predicted facial emotional reactions without the need for a camera. Our method leads to very promising results, since we have reached high accuracy. It also provides an additional important result by locating which electrodes can be used to characterize a specific emotion. This system will be particularly useful for evaluating emotional reactions in virtual reality environments, where the user wears a VR headset that hides the face and makes traditional webcam-based facial expression detectors obsolete.
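The frequency-domain features mentioned above typically include spectral power in the classic EEG bands. The sketch below computes such band-power features for a single one-second window; the sampling rate and the synthetic single-channel signal are assumptions for the example, not details from the paper.

```python
import numpy as np

fs = 128  # assumed sampling rate (Hz), common for consumer EEG headsets
t = np.arange(fs) / fs  # one 1-second window, as used in the paper

# Synthetic single-channel EEG: 10 Hz alpha plus weaker 20 Hz beta plus noise.
rng = np.random.default_rng(1)
x = (2.0 * np.sin(2 * np.pi * 10 * t)
     + 0.5 * np.sin(2 * np.pi * 20 * t)
     + 0.2 * rng.normal(size=fs))

def band_power(signal, fs, lo, hi):
    """Mean spectral power of `signal` in the [lo, hi) Hz band."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs < hi)
    return spec[mask].mean()

# Frequency-domain features: power in the classic EEG bands.
features = {
    "theta": band_power(x, fs, 4, 8),
    "alpha": band_power(x, fs, 8, 13),
    "beta":  band_power(x, fs, 13, 30),
}
# Time-domain features could include mean, variance, and zero-crossing rate.
print(features)
```

A full pipeline would stack such features across channels and windows before feeding them, with the ground-truth facial expression data, to the classifiers.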
In this paper we present a tutoring system that automatically sequences the learning content according to the learners’ mental states. The system draws on techniques from Brain-Computer Interfaces and educational psychology to automatically adapt to changes in the learners’ mental states, such as attention and workload, using electroencephalogram (EEG) signals. The objective of this system is to maintain the learner in a positive mental state throughout the tutoring session by selecting the next pedagogical activity that best fits their current state. An experimental evaluation of our approach involving two groups of learners showed that the group who interacted with the mental state-based adaptive version of the system obtained higher learning outcomes and had a better learning experience than the group who interacted with a non-adaptive version.
Argumentation is often seen as a mechanism to support different forms of reasoning, such as decision-making and persuasion, but all these approaches assume a purely rational behavior of the involved actors. However, humans have been shown to behave differently, mixing rational and emotional attitudes to guide their actions, and it has been claimed that there exists a strong connection between the argumentation process and the emotions felt by the people involved in it. In this paper, we assess this claim by means of an experiment: during several debates, people's argumentation in plain English is connected and compared to the emotions automatically detected from the participants. Our results show a correspondence between emotions and argumentation elements; e.g., when two opposite opinions in the argumentation are in conflict, this is reflected in a negative way on the debaters' emotions.
In this paper we present a physiological computing approach based on electroencephalogram (EEG) signals to adaptively sequence the learning content according to the learners’ mental states. The system draws on techniques from Brain-Computer Interfaces and educational psychology to automatically select the next best learning activity according to changes in the learners’ mental states, such as attention and workload. The objective of this system is to maintain the learner in a positive mental state throughout the tutoring session.
We seek to model the users' experience within an interactive learning environment. More precisely, we are interested in assessing the relationship between learners' emotional reactions and three trends in the interaction experience, namely, flow: the optimal interaction (a perfect immersion within the task), stuck: the nonoptimal interaction (a difficulty to maintain focused attention), and off-task: the noninteraction (a dropout from the task). We propose a hierarchical probabilistic framework using a dynamic Bayesian network to model this relationship and to simultaneously recognize the probability of experiencing each trend as well as the emotional responses occurring subsequently. The framework combines three modality diagnostic variables that sense the learner's experience including physiology, behavior, and performance, predictive variables that represent the current context and the learner's profile, and a dynamic structure that tracks the evolution of the learner's experience. An experimental study, with a specifically designed protocol for eliciting the targeted experiences, was conducted to validate our approach. Results revealed that multiple concurrent emotions can be associated with the experiences of flow, stuck, and off-task and that the same trend can be expressed differently from one individual to another. The evaluation of the framework showed promising results in predicting learners' experience trends and emotional responses.
The use of video games as a learning tool is becoming increasingly widespread. Such games are well known as educational games or serious games. They mainly aim at providing the learner with an environment that is interactive, motivational and educational at the same time. In order to better study the characteristics necessary for developing an effective serious game (both motivational and educational), we evaluated the physiological responses of participants during their interaction with our serious game, called HeapMotiv. We essentially measured a physiological index of engagement through a wireless EEG headset and studied the evolution of this index across the different missions and motivational strategies of HeapMotiv. Focusing on the gaming aspects, the analysis of this engagement index's behavior showed the significant impact of motivational strategies on skill acquisition and motivational experience. An agent-based architecture is proposed as a methodological basis for serious game design.
In this paper we seek to model the users' experience within an interactive learning environment. More precisely, we are interested in assessing three extreme trends in the interaction experience, namely flow (a perfect immersion within the task), stuck (a difficulty to maintain focused attention) and off-task (a drop out from the task). We propose a hierarchical probabilistic framework using a dynamic Bayesian network to simultaneously assess the probability of experiencing each trend, as well as the emotional responses occurring subsequently. The framework combines three-modality diagnostic variables that sense the learner's experience including physiology, behavior and performance, predictive variables that represent the current context and the learner's profile, and a dynamic structure that tracks the temporal evolution of the learner's experience. We describe the experimental study conducted to validate our approach. A protocol was established to elicit the three target trends as 44 participants interacted with three learning environments involving different cognitive tasks. Physiological activities (electroencephalography, skin conductance and blood volume pulse), patterns of the interaction, and performance during the task were recorded. We demonstrate that the proposed framework outperforms conventional non-dynamic modeling approaches such as static Bayesian networks, as well as three non-hierarchical formalisms including naive Bayes classifiers, decision trees and support vector machines.
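The filtering step of a dynamic model of this kind can be illustrated, in miniature, by a hand-rolled HMM forward filter over a hidden "experience trend" (HMM filtering being the simplest special case of dynamic Bayesian network inference). All probabilities below are illustrative placeholders, not parameters from the study.

```python
import numpy as np

states = ["flow", "stuck", "off-task"]

# Hypothetical transition model P(state_t | state_{t-1}):
# trends are persistent, so the diagonal dominates.
T = np.array([[0.8, 0.15, 0.05],
              [0.2, 0.7,  0.1 ],
              [0.1, 0.2,  0.7 ]])

# Hypothetical observation model P(obs | state) for one discretised
# sensor reading: 0 = high engagement, 1 = medium, 2 = low.
O = np.array([[0.7,  0.25, 0.05],
              [0.2,  0.5,  0.3 ],
              [0.05, 0.25, 0.7 ]])

def forward_filter(observations, prior):
    """Return P(state_t | obs_1..t) for each t (HMM forward filtering)."""
    belief = prior.copy()
    history = []
    for obs in observations:
        belief = O[:, obs] * (T.T @ belief)  # predict, then correct
        belief /= belief.sum()
        history.append(belief.copy())
    return history

prior = np.array([1/3, 1/3, 1/3])
# A learner whose engagement readings drift from high to low:
beliefs = forward_filter([0, 0, 1, 2, 2], prior)
print("final belief:", dict(zip(states, beliefs[-1].round(3))))
```

The framework in the paper is richer (multiple modalities, predictive variables, hierarchical structure), but the same predict-correct cycle underlies its temporal tracking of the learner's experience.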
Modeling learners' emotional states is a promising tool for enhancing learning outcomes and tutoring abilities. In this paper, we present a new perspective of learner emotional modeling according to two fundamental dimensions, namely mental workload and engagement. We hypothesize that analyzing results from learners' workload and engagement evolution can help Intelligent Tutoring Systems diagnose learners' emotional states and understand the learning process. We demonstrate by an experiment involving 17 participants that learners' mental workload and engagement are closely related to specific emotions with regard to different learning phases.
Endowing systems with the ability to assess a user’s mental state in an operational environment could be useful for improving communication and interaction methods. In this work we seek to model user mental workload using spectral features extracted from electroencephalography (EEG) data. In particular, data were gathered from 17 participants who performed different cognitive tasks. We also explore the application of our model in a non-laboratory setting by analyzing its behavior in an educational context. Our findings have implications for intelligent tutoring systems seeking to continuously assess and adapt to a learner’s state.
This paper presents a novel multimodal approach to automatically detecting a learner's uncertainty through the integration of multiple sensors. An acquisition protocol was established to record participants' electrical brain activity and physiological signals while interacting with a problem-solving system specifically designed for uncertainty elicitation. Data were collected from 38 subjects using 8 sensors and two video feeds. Results from machine learning classifiers support the feasibility of our approach: an accuracy of 81% was reached using the Support Vector Machine (SVM) algorithm. Copyright © 2011, Association for the Advancement of Artificial Intelligence. All rights reserved.
In this paper, we propose to introduce the self-esteem component within the learning process. More precisely, we explore the effects of learner self-esteem conditioning in a tutoring system. Our approach is based on a subliminal priming method aiming at enhancing implicit self-esteem. An experiment was conducted while participants were outfitted with a biofeedback device. Three physiological sensors were used to continuously monitor learners' affective reactions, namely electroencephalogram, skin conductance and blood volume pulse sensors. The purpose of this work is to analyze the impact of self-esteem conditioning on learning performance on the one hand, and on learners' emotional and mental states on the other.
This paper studies the influence of a learner's affective states on the EEG mental engagement index during a problem-solving task. The electrical activity of the human brain, known as electroencephalography or EEG, was recorded according to an acquisition protocol in a learning environment specifically constructed for emotional elicitation. Data were gathered from 35 healthy subjects using 8 biosensors and two video cameras. The effect of learners' emotional states on the engagement index was analyzed, as well as their impact on response time variability.
In this paper, we propose to integrate the self-esteem component within the learning process. The goal was to introduce a new strategy to enhance learner self-esteem while interacting with a tutoring system. This approach is based on a subliminal, non-consciously perceived, self-esteem conditioning priming method. We conducted an experimental study to measure the effect of this strategy on participant self-esteem and analyze its impact on learning performance. We also monitored participants' physiological activity to analyze the effects of our approach on the learner's affective state.
This paper discusses novel research conducted to study the direct impact of a learner's affective changes on the value of a well-established EEG mental engagement index. An acquisition protocol for recording the electrical activity of the human brain, known as electroencephalography or EEG, was implemented in a learning environment specifically constructed for emotional elicitation. Data were collected from 35 healthy subjects using 8 sensors and two video cameras. A correlation analysis of the engagement index with emotional states was conducted. Results have shown that emotional states are strongly correlated with learners' mental engagement index and that the latter can be used in an educational setting to reliably predict performance.
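The abstract does not give the index's formula, but a well-established EEG engagement index in this literature (originally proposed by Pope and colleagues at NASA) is the band-power ratio beta / (alpha + theta). A minimal computation of that ratio, with made-up band powers, would look like:

```python
# Hypothetical band powers (arbitrary units) for one analysis window,
# e.g. obtained from a spectral decomposition of the EEG signal.
theta, alpha, beta = 12.0, 18.0, 9.0

# Pope et al.'s ratio: engagement rises with fast beta activity and falls
# as slow-wave (theta) and idling (alpha) activity dominate.
engagement = beta / (alpha + theta)
print(f"engagement index: {engagement:.2f}")
```

In a study like the one above, this index would be computed per window and per participant, then correlated with the labeled emotional states.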
It is very common for students to fail an exam because of excessive stress. However, the complete absence of stress can have the same effect, since stress can be beneficial in some learning situations. Studying stress level variation can therefore be very useful in a learning environment. In this paper, the aim is to predict the variation of a learner's stress level in relation to their electrical brain activity within an experiment spanning two days. To attain that goal, three personal and non-personal characteristics were used: gender, usual mode of study, and dominant activity between the first and second day. 21 participants were recruited for our experiment. Results were very encouraging: an accuracy of 71% was obtained using the ID3 machine learning algorithm.
In this work, we developed a new approach to predict the learning performance of a group by analyzing its members' brain activity during a two-day experiment. We recruited 21 participants and asked them to read multiple texts on the first day and to answer questions about these texts on the second day. For every learner, we recorded the amplitude of the six main frequency bands and computed the rate of the dominant brainwave vector per second. The other factor we recorded was the variation of this rate across the two days of the experiment. The combination of these factors was used to predict the learning performance of the 21 participants with 76% accuracy.
The stress factor plays an important role in learning tasks, especially when the learner is facing an exam. Studying stress level variation can therefore be very useful in a learning situation. In this paper, we conducted an experiment with 21 participants over a two-day period. We had two objectives. The first was to predict the variation of the learner's stress level in relation to their electrical brain activity. To attain that goal, three personal and non-personal characteristics were used: gender, usual mode of study, and dominant activity between the first and second day of the experiment. An accuracy of 71% was obtained using the ID3 machine learning algorithm. The second goal was to propose an extension of a Multi-Agent System (MAS) by adding a Stress Prediction Agent. This MAS uses brainwaves to predict certain learner characteristics.
... This is not surprising; however, concerns are often ignored when judging the effectiveness of arguments or choosing a strategy. Some studies make use of different personality traits of the user in order to evaluate what sort of argument might be more effective for a particular person (for examples see [13,9,23,19]). However, computational argumentation largely focuses on sentimental, rhetorical and structural attributes of the argument, rather than attributes of the user. ...
... A typical AC system under an eLearning/dLearning/mLearning scenario proceeds to: collect data that convey information on the user's affective state; recognize this state based on computer intelligence/machine learning algorithms on the basis of an adopted taxonomy for human affective states; produce meaningful emotional visualisations readily available to all stakeholders; and provide responses to the user accordingly (Figure 1). The respondent's affective state is implicitly captured either by detecting physiological signals (e.g., electroencephalography-EEG, electromyogram-EMG, electrodermal activity-EDA/SC, electrocardiogram-EKG/ECG, blood volume pulse-BVP, etc.) or by observing motor-behavioural activity (facial expressions, voice intonation, mouse movements, body posture, etc.) (D'Mello and Calvo 2013; Jraidi and Frasson 2013; Feidakis 2016). ...
... Each IAPS image was displayed for 20 seconds, while the duration of the neutral one was set to 10 seconds. The first neutral image was used to create a baseline reference for the recorded biological signals, in order to measure the response caused by each IAPS image in relation to the initial emotional state of each participant; this method is widely used to facilitate comparison between different subjects. Fig. 1 shows a schematic view of the experimental protocol. ...
... We perform an ANOVA test to determine whether the features obtained under each test are significantly different from each other before the emotion and cognitive load analysis. We declare two sets of features different from each other if the p-value of the ANOVA test is less than 0.05. Table I shows the ANOVA test results for the SCR, HR and LF features under the Stroop, math, and event recall tests respectively. ...
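The one-way ANOVA behind that criterion reduces to an F statistic: the variance between the task groups over the variance within them. The sketch below computes it by hand on synthetic data; the group names and values are illustrative, not from the cited study.

```python
import numpy as np

def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    k = len(groups)          # number of groups
    n = all_x.size           # total sample size
    ss_between = sum(g.size * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

rng = np.random.default_rng(3)
# Hypothetical SCR feature values under three tasks; the math task is
# shifted upward, so the test should detect a group difference.
stroop = rng.normal(0.0, 1.0, 30)
math_t = rng.normal(1.5, 1.0, 30)
recall = rng.normal(0.1, 1.0, 30)

F = one_way_anova_F(stroop, math_t, recall)
print(f"F = {F:.2f}")
```

In practice one converts F (with 2 and 87 degrees of freedom here) to a p-value and declares a difference when p < 0.05; `scipy.stats.f_oneway` performs both steps.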
... In one set of studies, laboratory experiments were carried out to examine how psychophysiological recordings (i.e. EEG, facial EMG, EDA, cardiac activity) obtained while using the serious game predict (short-term) learning outcomes (Chaouachi & Frasson, 2010; Chaouachi, Jraidi, & Frasson, 2011; Heraz, Jraidi, Chaouachi, & Frasson, 2009). Several authors have suggested that it is advisable to use multiple measures so that differential response patterns (or profiles) can be identified (Cacioppo et al., 2000). ...
... Many studies have used those mental states to predict reading behavior in a tutor, to predict performance level during a cognitive task, to predict math problem solving, or to monitor indexes of alertness, cognition, and memory. Berka and her team used indexes extracted from EEG in order to study the engagement and workload mental states; Chaouachi et al. integrated these two mental states in their system, Mentor. This system used rules to maintain students in a positive mental state while learning, reacting each time by selecting the appropriate next activity to present to the learner. ...
... ─ Objective measures consist of capturing and analyzing the signals coming from the player's body and face. Different tools can be used, such as cerebral activity (EEG), skin conductance, or facial expression recognition. ...
... BCI researchers have extracted a lot of features. Among them, band power spectrum, energy spectral density, spectral centroid, common spatial patterns, wavelet transformations, wavelet packet decomposition, independent component analysis, autoregressive models, principal component analysis, cross-correlation, variance, covariance, short-time Fourier transform, Shannon's entropy and z-score are the most used. In previous research, these features have been classified by a variety of classification algorithms, namely Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Neural Networks (NN), Multi-Layer Perceptron (MLP), Hidden Markov Models (HMM) and K-Nearest Neighbors (K-NN). ...
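Of the classifiers listed above, K-NN is the simplest to write down. The sketch below applies it to toy two-dimensional band-power feature vectors; the clusters, class meanings, and query point are invented for the example.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify feature vector x by majority vote among its k nearest
    training vectors (Euclidean distance) -- the K-NN scheme listed above."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]

# Toy band-power feature vectors [alpha, beta] for two mental-state classes.
rng = np.random.default_rng(4)
X0 = rng.normal([1.0, 0.2], 0.1, size=(20, 2))  # class 0: alpha-dominant
X1 = rng.normal([0.2, 1.0], 0.1, size=(20, 2))  # class 1: beta-dominant
X_train = np.vstack([X0, X1])
y_train = np.array([0] * 20 + [1] * 20)

pred = knn_predict(X_train, y_train, np.array([0.9, 0.3]))
print("predicted class:", pred)
```

Real BCI pipelines feed much higher-dimensional versions of such feature vectors (per channel, per band, per window) into the same kind of classifier.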
... Another study intended to detect, in real time, participants' valence (positive or negative emotion) while they were watching videos (Soleymani, Asghari-Esfeden, Pantic, and Fu, 2014). Chaouachi and Frasson (Chaouachi and Frasson 2012) also used EEG to extract engagement and workload indexes, which are two mental state indexes, in their system. ...
... Goal-driven Web bots can actively participate in online activity and, for instance, prevent bullying and harassment. The coupling of AI and the Web has the potential to take our interactions to the next level of intelligence and to take into account complex aspects of them, such as the role of emotions in online debates. An important goal for Web Science is therefore the production of AIs that are benevolent-by-design for the good of the Web and society. ...