Developing a Body Sensor Network to
Detect Emotions During Driving
Genaro Rebolledo-Mendez, Angélica Reyes, Sebastian Paszkowicz, Mari Carmen Domingo, and Lee Skrypchuk
Abstract—Emerging applications using body sensor networks (BSNs)
constitute a new trend in car safety. However, the integration of hetero-
geneous body sensors with vehicular ad hoc networks (VANETs) poses a
challenge, particularly in the detection of human behavioral states that
may impair driving. This paper proposes a detector of human emotions,
of which tiredness and stress (tension) could be related to traffic accidents.
We present an exploratory study demonstrating the feasibility of detecting
one emotional state in real time using a BSN. Based on these results, we
propose a middleware architecture that is able to detect emotions, which can
be communicated via the onboard unit of a vehicle with city emergency
services, VANETs, and roadside units, aimed at improving the driver’s
experience and at guaranteeing better security measures for the car driver.
Index Terms—Body sensor network (BSN), driver’s behavior, vehicu-
lar ad hoc network (VANET).
I. INTRODUCTION
Body sensor networks (BSNs) are becoming more complex due
to the use of different kinds of sophisticated sensors, which provide
advanced functionalities. BSNs are continuously being integrated into
different environments of our everyday lives, including cars. This
paper presents the results of ongoing research in the area of emotional
detection using a BSN in cars. This paper uses the empirical evidence
obtained during one experiment to propose a new architecture designed
to prevent accidents caused by driver’s negative emotional reactions
while driving. To achieve this, we considered a pervasive computing
environment in which one vehicle with communication capabilities
was integrated with drivers who wore a BSN in order to collect
physiological data that could be related to driving impairment. Most
drivers are aware of the effects that drinking alcohol and using cell
phones may have on driving [1]–[3]. However, little consideration
has been given to other factors that may impair driving such as the
emotional state of the driver. According to official statistics, inattention
(including emotional factors) could have serious or fatal consequences
for driving [4]. For example, according to the U.S. National Highway
Traffic Safety Administration [4], 20% of injury crashes in 2009
involved reports of distracted driving. In addition, 2.7% of drivers and
motorcycle riders involved in fatal crashes were drowsy, asleep, fa-
tigued, ill, or had had a blackout. These are important figures that need
to be addressed for accident prevention. This paper taps into this need
and presents empirical evidence toward the detection of emotions.
Previous work has focused on the detection of inattentive states in
relation to drunkenness and other nonemotional factors in driving. A
system to automatically detect both drunk and drowsy driving states
was developed by Sakairi and Togami [5]. Chin-Teng et al. [6], [7]
proposed a technique to continuously detect drivers’ cognitive states
in relation to their abilities in perception, recognition, and vehicle
control using electroencephalography (EEG). The authors developed
a drowsiness-estimation system based on EEG to estimate a driver’s
cognitive state when he/she was driving a car in a virtual-reality-
based dynamic simulator. EEG signals have also been used to detect
drowsiness. For example, Flores et al. [8] proposed a real-time wire-
less EEG-based computer interface system to collect, amplify, filter,
preprocess, and send EEG signals to a signal-processing module using
wireless communication. The signal-processing module was capable
of detecting real-time drowsiness.
Some works have addressed the recognition of the emotional states of
drivers using BSNs in simulation environments [9]–[11], whereas
others have analyzed drivers’ emotions in real-life scenarios [12]–[14].
Although the papers reporting experiments in simulated environments
provide a good indication of the feasibility of detecting emotional
states during driving, there are indications [9] that subjects experienced
different emotions in simulation environments from those they may
experience in real conditions. Because real-life driving conditions
potentially provoke genuine emotions, we chose to carry out our
experiments in realistic settings as a means to provide unique insights
into drivers’ emotional behaviors. In [12], physiological sensing has
been applied to determine the driver’s stress levels using an electrocar-
diogram (ECG), an electromyogram, and electrodermal activity (EDA)
in real scenarios comprising highway and city driving. The authors
suggested that the first sensors that should be integrated into a car
should be the skin conductance and heart rate sensors [12]. In [13], a
real-time methodology for the assessment of drivers’ stress has been
introduced, employing not only physiological data but also driving
history extracted from Global Positioning System records and the
vehicle’s controller area network bus data. This information has been
incorporated into a Bayesian network to estimate the levels of stress.
Their results in real driving conditions show an accuracy of 82% in stress
event detection. However, the authors note that more reliable stress
metrics should be based, for example, on EEG [13]. Singh et al. [14]
monitored the driver’s affective state using physiological signals (EDA
and photoplethysmography) during on-road driving experiments.
This paper aims to provide preliminary empirical evidence of how
to recognize four emotional states in a real-world driving situation:
concentrated, tension, tired, and relaxed. The objective of this paper is
twofold. On one hand, we present one field study specifically defined
to measure emotions in drivers using a BSN. On the other hand, we
propose an architecture describing how the BSN to detect emotions
could be integrated into a vehicular onboard unit (OBU). Our proposal
consists of detecting driver’s emotions and defining corresponding
actions such as the transmission of notification messages to emergency
services, other vehicles within the transmission range, roadside units
(RSUs), and nearby pedestrians, with these actions operated by the OBU
and/or the driver's wireless personal device.
Fig. 1. Proposed scenario.
The structure of this paper is as
follows. Section II describes a scenario for the inclusion of a BSN in
conjunction with the vehicle’s OBU. Section III presents a proposition
for an architecture taking advantage of emotional recognition using a
BSN. Section IV presents a study where the resulting BSN was de-
ployed for emotional detection during real driving conditions. Finally,
a discussion of our work and the research challenges to be addressed
is presented in Section V.
II. PROPOSED SCENARIO
We propose a scenario (see Fig. 1) where the driver’s behavior is
monitored in real time. A driver wears a BSN consisting of at least
two sensors capable of reading physiological signals. The driver’s
physiology is constantly measured and sent to the OBU, which is
embedded in the vehicle.
In this context, the OBU determines the driver’s emotional states,
considering models of emotions similar to those described in Section IV.
With this approach, common causes of traffic accidents related
to emotional states such as cognitive fatigue or stress can be detected.
The OBU provides cues to make the driver aware
of these states. In this paper, we focus on highway and city contexts, as
well as the types of emotional reactions that occur during the driving
sessions. Based on the results presented in Section IV, we hypothesize
that it is possible to safely monitor the driver and detect emotions that
may pose a danger for the driver and other road users. Because of this,
our proposed architecture considers mechanisms to inform emergency
services in case there is an associated driving danger (see Fig. 1).
III. ARCHITECTURE FOR DRIVER'S EMOTION DETECTION USING A BSN
We propose a BSN deployed to sense drivers' physiological changes
in real time, as well as to examine the feasibility of establishing
an onboard system capable of sensing physiological data and of
calculating a driver’s emotional state in real time.
The BSN used in our field study consists of ECG, EEG, EDA, and respiration
sensors. This paper presents results in relation to the EEG and EDA
sensors. Future work will integrate results from the data obtained with
the other sensors. If the BSN detects a driver’s emotional state that
could produce impaired driving such as excessive tiredness or tension,
then alarm notification messages are sent from the vehicle’s OBU to
the RSUs or emergency services (see Fig. 2).
Fig. 2. Integrated BSN and a vehicle’s OBU.
A. BSN Module
The BSN consisted of two portable, commercially available sensors. The
physiological data collected were neural activity and EDA. The sensor
used to collect neural activity was NeuroSky's MindWave.1 The MindWave
software indicates two types
of neural activity: attention and meditation. Attention is related to a
state of alertness and denotes an increase in Beta waves. Meditation
is related to increases in Alpha waves and indicates a state of alert
relaxation.
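To make the relation between these indicators and band activity concrete, the following minimal sketch (in Python, and not NeuroSky's proprietary algorithm) estimates the relative alpha and beta power of one window of raw EEG samples using an FFT; the 512-Hz sampling rate and the band limits are assumptions made only for illustration.

import numpy as np

def relative_band_power(raw_eeg, fs=512.0):
    # Estimate relative alpha (8-12 Hz) and beta (13-30 Hz) power for one
    # window of raw EEG samples; fs is the assumed sampling rate in Hz.
    samples = np.asarray(raw_eeg, dtype=float)
    samples -= samples.mean()                      # remove the DC offset
    spectrum = np.abs(np.fft.rfft(samples)) ** 2   # one-sided power spectrum
    freqs = np.fft.rfftfreq(samples.size, d=1.0 / fs)

    def band(lo, hi):
        return spectrum[(freqs >= lo) & (freqs < hi)].sum()

    total = band(1.0, 45.0) or 1.0                 # guard against division by zero
    return band(8.0, 12.0) / total, band(13.0, 30.0) / total

# Example on one second of (random) data: a higher beta share would be read as
# alertness (attention), a higher alpha share as alert relaxation (meditation).
alpha_share, beta_share = relative_band_power(np.random.randn(512))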
The EDA sensor was Affectiva’s Q sensor [15] consisting of a
bracelet with a sensor attached to it. The Q sensor measures EDA,
which is also called skin conductance. The Q sensor displays varia-
tions in electrical activity measured at the surface of the skin in mi-
crosiemens (a unit of conductance). In its raw format, EDA expresses
electrical conductance (inverse of resistance) across the skin. Changes
in EDA are automatically and unconsciously activated by the wearer’s
brain and reflect arousal levels on the part of the wearer. Higher levels
of EDA indicate higher levels of arousal and could be related to a
person being more engaged, stressed, or excited. Lower EDA indicates
lower levels of arousal and relates to disengagement, boredom, or
calmness.
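As a concrete illustration of this interpretation of EDA, the sketch below flags elevated arousal whenever the current reading (in microsiemens) rises well above a slowly updated personal baseline; the window length and the threshold factor are illustrative assumptions, not values derived from this study.

from collections import deque

class ArousalFlagger:
    # Flag elevated arousal from a stream of EDA readings in microsiemens.

    def __init__(self, window=300, factor=1.5):
        self.history = deque(maxlen=window)  # recent samples used as the baseline
        self.factor = factor                 # how far above baseline counts as "high"

    def update(self, eda_us):
        self.history.append(eda_us)
        baseline = sum(self.history) / len(self.history)
        return eda_us > self.factor * baseline   # True = elevated arousal

flagger = ArousalFlagger()
for sample in (0.8, 0.9, 0.85, 2.1):   # hypothetical readings in microsiemens
    print(sample, flagger.update(sample))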
The decision to utilize these sensors was primarily based on driver
safety. It was of paramount importance to use a BSN that was not
obtrusive and did not impede the driver's ability to correctly perform all the
tasks involved in driving the car. A second consideration was the reliability
of the data collection process. The EDA data collection mechanism
with the Q sensor had previously been tested [16], [17] as it was
specifically designed for field data collection; we therefore considered its
reliability established. Unlike the Q sensor, the NeuroSky
device does not store data locally, but depends on external
storage mechanisms and a steady Bluetooth-enabled connection. We
achieved this by developing a program capable of reading data gener-
ated by the MindWave and logging it onto a laptop computer serving
as the vehicle’s OBU. The acquired data are transmitted via Bluetooth,
but future versions may use a wireless communication module using
ultrawideband or IEEE 802.15.6 for wireless transmission between the
sensors and the gateway. Bluetooth or Zigbee could also be used to
forward the physiological data from the gateway to the vehicle's OBU.
1 http://www.neurosky.com/Products/MindWave.aspx
Fig. 3. Information to be transmitted to the emergency services.
The information passed between the BSN of the driver and the OBU
includes the following: health state and characteristics of the emotional
state that impairs driving (see Fig. 3).
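The logging program itself is not listed here, but the following hedged sketch shows how such a logger could look if the BSN gateway exposed its readings as newline-delimited JSON over a local TCP socket; the endpoint, the message format, and the field names are hypothetical and do not describe the actual MindWave or Q sensor protocols.

import json
import socket
import time

HOST, PORT = "127.0.0.1", 9000          # hypothetical gateway endpoint

def log_bsn_stream(out_path="bsn_log.csv"):
    # Read newline-delimited JSON readings from the gateway and append them,
    # timestamped, to a plain CSV log on the laptop acting as the OBU.
    with socket.create_connection((HOST, PORT)) as conn, open(out_path, "a") as log:
        buffer = b""
        while True:
            chunk = conn.recv(4096)
            if not chunk:
                break                    # gateway closed the connection
            buffer += chunk
            while b"\n" in buffer:
                line, buffer = buffer.split(b"\n", 1)
                reading = json.loads(line)   # e.g. {"sensor": "eda", "value": 0.82}
                log.write("%f,%s,%s\n" % (time.time(),
                                          reading.get("sensor", "unknown"),
                                          reading.get("value", "")))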
B. Vehicle’s OBU
The acquired data from the BSN are processed by the OBU in
real time, since the BSN gateway might be restricted by its low processing
capabilities and limited battery capacity. The OBU is divided into
three major modules: the feature extraction module, the intelligent
driver’s state recognition module, and the alarm notification module.
The first module extracts features from the selected biosignals. These
features are used by the intelligent driver’s state recognition module
to determine if the driver has one of the predefined emotional states.
Alarm notifications are sent to the emergency services in case of
detection of an emotional state that impairs driving. The communi-
cation between the OBU and the emergency services will exploit various
communication technologies (DSRC, UMTS/HSDPA, and WAVE), connecting
OBUs to vehicular networks and to cellular or wireless infrastructure.
Vehicle-to-vehicle networks allow faster alarm notifications since sensing
and propagation of information are done on the spot in real time via
multihop communication. Surrounding vehicles will be immediately notified
of the alarm, which can then be further propagated via radio base stations
to the emergency services.
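A rough sketch of how the three OBU modules could be wired together is given below; the chosen features, the placeholder decision rule, and the notification callback are illustrative assumptions and do not reproduce the trained models described in Section IV.

import statistics

def extract_features(eda_window, eeg_window):
    # Feature extraction module: summarize raw biosignal windows.
    return {
        "eda_mean": statistics.mean(eda_window),
        "eda_std": statistics.pstdev(eda_window),
        "eeg_mean": statistics.mean(eeg_window),
    }

def recognize_state(features):
    # Intelligent driver's state recognition module (placeholder rule,
    # not the trained model from the field study).
    if features["eda_mean"] > 2.0:      # illustrative threshold in microsiemens
        return "tension"
    return "relaxed"

def notify_if_needed(state, send):
    # Alarm notification module: forward impairing states toward RSUs or
    # emergency services; 'send' would wrap the DSRC/UMTS/WAVE transmission.
    if state in ("tension", "tired"):
        send({"alarm": state})

features = extract_features([2.3, 2.4, 2.5], [0.1, 0.0, -0.2])
notify_if_needed(recognize_state(features), send=print)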
C. Emergency Services
The emergency services can also be notified if the driver requires
medical assistance, for example, due to excessive stress. The in-
formation sent from the vehicle’s OBU to the emergency services
includes the information collected from the BSN, the driver and vehicle
characteristics, and the OBU location (see Fig. 3).
Accurate OBU location in open-air scenarios can be provided by
the Global Navigation Satellite Systems. However, in dense urban
and underground scenarios, these systems suffer from the weakness
(or even the blockage) of their signals when the receiver operates in
non-line-of-sight conditions. Switching between technologies, such as
wideband communication provided through 3G, radio-network-based
localization methods, and wireless sensor networks, allows determining the
most accurate position of the OBU.
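One simple way to realize this switching is to keep the latest estimate from each positioning source and select the one with the smallest reported uncertainty, as in the following illustrative sketch (the sources, coordinates, and uncertainties shown are hypothetical).

def best_position(estimates):
    # Pick the position estimate with the lowest reported uncertainty.
    # estimates: dict mapping a source name ("gnss", "3g", "wsn") to
    # (latitude, longitude, uncertainty_m); sources with no fix are omitted.
    if not estimates:
        return None
    source = min(estimates, key=lambda s: estimates[s][2])
    return source, estimates[source]

print(best_position({
    "gnss": (52.180, -1.480, 35.0),   # degraded in non-line-of-sight conditions
    "3g":   (52.180, -1.470, 150.0),
    "wsn":  (52.181, -1.479, 8.0),    # best fix in this hypothetical reading
}))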
Pedestrians and other drivers may also be warned of a driver's
indisposition to drive properly through the use of notification
messages forwarded to their own OBU (e.g., smartphones) using
vehicle-to-pedestrian or infrastructure-to-pedestrian communications.
The following section reports an evaluation made on an implemen-
tation of the architecture (excluding the emergency services) and the
BSN in the context of an experiment involving drivers in real driving
scenarios.
IV. METHOD
An experiment was carried out to collect physiological data using
the proposed architecture (except for the emergency services) and
a BSN in real driving conditions. The experiment lasted for seven
working days. It consisted of asking participants to wear sensors and
to drive in two driving conditions in relation to highway and city
environments. Gathering data from two conditions allowed the study
of body reactions in the same driver. It also enabled the study of
multiple data points potentially useful for understanding the drivers’
physiological responses.
A. Participants
There were 24 drivers (13 males and 11 females) aged between 23
and 48 years. The average driving time was 8 min and 5 s per con-
dition. Weather, traffic conditions (vehicle volume and pedestrians),
and time of the day were not controlled, and drivers faced variable
unpredictable situations. Information related to the participants’ coffee
ingestion and hours of sleep during the night prior to the experiment
was collected via questionnaires. Participants were asked to spend
2 h of their time in order to complete the experiment. Prior to the
experiment, all the participants filled out a consent form.
B. Driving Conditions and Driving Tasks
The two driving conditions were simulated on Jaguar Land Rover’s
vehicle proving ground in Gaydon in the U.K. The Emissions Circuit
served as the highway-like situation, and Gaydon’s streets simulated
a city-like environment containing roundabouts, pedestrian crossings,
and speed limits. The car used for the experiment was a Range Rover
(2010 Model Year). The task the participants were asked to perform
was to drive the car as normally as they would do on a regular day, but
to keep the speed below 100 mi/h (160.93 km/h) in order to comply
with Gaydon’s guidelines for experimentation. The participants were
told to treat the proving ground as normal public roads and to follow
the traffic rules applicable in the U.K.2 Before driving, the participants
were asked to adjust the seat, the steering wheel, and the mirrors; and
all seat belts were checked to be in place. One team member sitting in
the passenger seat provided the driving tasks by reading a predefined
set of instructions. These instructions consisted of driving indications
that allowed the drivers to navigate the proving ground. Examples
of the instructions included “drive to the roundabout at the exit of
the observation tower area” or “complete two laps of the emissions
circuit.”
C. Procedure
The procedure consisted of four stages.
Stage 1: Drivers were briefed about the aims of the experiment and its
processes and were asked to fill out a consent form.
Stage 2: Drivers were asked to wear several types of sensors. This
report focuses only on two types of physiological data.
Stage 3: Drivers were asked to drive in two types of conditions. The
first was always highway conditions, followed by city conditions.
A video camera was placed on the car’s dashboard to film the
driver’s face while driving.
Stage 4: The video was used immediately after finishing Stage 3. The
drivers were asked to self-report the emotional state they saw
at fixed intervals (see Table I for the emotional states); please
note that responses were coded in relation to only the four main
emotional states.
2 https://www.gov.uk/speed-limits
TABLE I
EMOTIONAL STATES CONSIDERED FOR THE EXPERIMENT
TABLE II
DESCRIPTIVE STATISTICS FOR DRIVER 10 UNDER DRIVING CONDITION 2, TIME 7 MIN AND 6 S (N = 427)
The aim of collecting the reports was to look for correlations be-
tween one or multiple physiological responses (measured using the Q
sensor and the NeuroSky device) and emotional information provided
by the drivers themselves. Both the physiological data and the self-
reports were used to build preliminary models of emotional reactions
while driving. The first model employed logistic regression, where one
physiological signal was used to predict emotions. The second model
was based on a K-means algorithm to classify the physiological data
to predict an emotional state.
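In outline, the two model types can be reproduced as in the sketch below, which uses scikit-learn on synthetic data; the single EDA feature, the synthetic labels, and the parameter choices are stand-ins for the study's actual training and test sets.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
eda = rng.uniform(0.5, 6.0, size=(200, 1))              # synthetic EDA (microsiemens)
affect = np.digitize(eda[:, 0], [1.8, 3.0, 4.5]) + 1    # synthetic stand-in for the 1-4 affect labels

# Model 1: logistic regression, one physiological signal predicting the state.
logit = LogisticRegression(max_iter=500).fit(eda, affect)

# Model 2: K-means clustering of the same signal (clusters mapped to states afterwards).
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(eda)

print(logit.predict([[3.1]]), kmeans.predict([[3.1]]))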
D. Results
The first step consisted of organizing drivers by considering their
physiological data and the completeness of their self-reports. Fifty-
four percent (N = 13) of drivers had complete data and were thus con-
sidered as part of the analyses. Since the neural activity was captured in
raw format, it underwent two transformations: fast Fourier transforma-
tion (15%), followed by natural logarithm transformation. Descriptive
analyses (see Table II) show that neural activity had high coefficients
of variation and did not have any linear relation with other variables.
Unlike neural activity, EDA shows lower coefficients of variation.
Given the lack of linear relations among the variables, they were
treated as independent. To build the regression models, the levels
of significance for the variables were tested for a response variable
“affect,” a design variable with values from 1 to 4 referring to the
categorical values of the main emotional states on Table I. Since it
was found that EDA has a significant correlation among all the drivers
in the subsample (N = 13, Pearson's r = 0.929, p < 0.05), we chose
to utilize this variable for the development of models of emotions.
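A hedged sketch of this selection step is shown below: the coefficient of variation is computed per signal, and the Pearson correlation between drivers' resampled EDA traces is tested against the 0.05 significance level; the traces themselves are synthetic and only illustrate the procedure.

import numpy as np
from scipy.stats import pearsonr

def coefficient_of_variation(x):
    return np.std(x) / np.mean(x)

# Synthetic stand-ins: EDA traces for two drivers resampled to the same length.
driver_a = np.linspace(1.0, 3.0, 100) + np.random.default_rng(1).normal(0, 0.1, 100)
driver_b = np.linspace(1.1, 3.2, 100) + np.random.default_rng(2).normal(0, 0.1, 100)

r, p = pearsonr(driver_a, driver_b)
print("CV driver A: %.3f" % coefficient_of_variation(driver_a))
print("Pearson r = %.3f, p = %.3g -> %s" % (
    r, p, "keep signal" if p < 0.05 else "treat as unreliable"))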
Two principal component analyses (PCA) were used to identify the
driver who had the most representative EDA pattern of the subsample.
For these analyses, the drivers were treated as variables, and the
drivers’ EDA were treated as cases. The results indicated that seven
drivers account for 98.5% of the cumulative variance of the
subsample's EDA behavior. A second PCA, in which the seven drivers
identified in the first PCA were treated as variables, suggested that
driver 10 explains 99.1% of the variability of the newer subsample
(N = 7). Driver 10's EDA data were thus employed as a training set.
The data from the rest of the drivers (N = 12) were used as a test
set. Table III includes descriptive statistics for the emotional data, as
self-reported by the drivers for the two driving conditions. Given that
some emotions are not present during driving, five logistic regression
models were developed: three for condition 1 and two for condition 2.
TABLE III
EMOTIONAL INFORMATION FOR TWO DRIVING CONDITIONS
Fig. 4. Fitted function and observed values to detect the state concentrated.
One model (see Fig. 4) and its formula for the detection of the state
"concentrated" under the "city-like" condition are presented as an example, i.e.,
y = exp(−4.05 + 1.68857x) / (1 + exp(−4.05 + 1.68857x)).
In the formula, “y” values refer to the response variable, whereas
“x” values represent the current EDA measurement. To test the model,
we fed this with drivers’ physiological data and calculated the levels
of agreement (using Cohen’s Kappa) between the models’ responses
and the self-reports provided by driver 10. The results showed that the
model’s Kappa index is 0.5455, indicating a moderate agreement be-
tween the model and the self-reports. The level of agreement between
the model and the training set was 0.7186, indicating a substantial
agreement. In comparison, a K-means classifier built with the same
data set (training and test) has a Kappa of 0.2745, with a fair level of
agreement. A characterization of agreement levels proposes levels <0
to indicate no agreement, 0–0.20 as slight, 0.21–0.40 as fair, 0.41–0.60
as moderate, 0.61–0.80 as substantial, and 0.81–1 as almost perfect
agreement [18]. The accuracy of the other four models indicated
slight or no agreement. The cause may lie in the self-reports, as they
were provided by individual drivers and not by a single observer.
Future studies will focus on building models that consider affective
assessment only by one person. In addition to building new models,
we plan to classify using the one-versus-all approach to pick the
most promising class.
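To make the evaluation concrete, the sketch below applies the fitted function for the state "concentrated" to a single EDA reading and computes Cohen's Kappa between two label sequences; the 0.5 decision threshold, the example reading, and the label sequences are assumptions and not the study's data.

import math
from sklearn.metrics import cohen_kappa_score

def p_concentrated(eda):
    # Fitted logistic function for the state "concentrated" (city-like condition).
    z = -4.05 + 1.68857 * eda
    return math.exp(z) / (1.0 + math.exp(z))

# Example: an EDA reading of 3.0 microsiemens gives p ~ 0.73, i.e. "concentrated"
# under an assumed 0.5 decision threshold.
print(round(p_concentrated(3.0), 3))

# Agreement between model outputs and self-reports (hypothetical label sequences).
model_says  = [1, 1, 0, 1, 0, 0, 1, 1]
self_report = [1, 0, 0, 1, 0, 1, 1, 1]
print(cohen_kappa_score(model_says, self_report))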
The results of this preliminary model are encouraging and provide
an indication of the feasibility of detecting emotions in real driving
situations. Given the levels of agreement between the preliminary
model and the self-reports, this methodology has the potential to
provide accurate classification of emotions that can be integrated
with the vehicle’s OBU. Future studies will analyze drivers’ personal
characteristics such as age, driving experience, and use of medications.
Other nonphysiological data, such as the pressure exerted on the
accelerator and/or the brake, will be used to build more reliable models
of emotions.
V. CONCLUSION AND FUTURE WORK
This paper has presented, on one hand, the results of a field study
specifically set up to measure emotions in drivers using a BSN. To
that end, we defined a BSN as consisting of two sensors. On the other
hand, this paper has proposed an architecture describing how the BSN
could be integrated into vehicular ad hoc networks (VANETs) in order
to analyze driver’s emotions and to orchestrate actions operated by
an OBU with the aim of preventing potentially fatal accidents related
to negative emotions such as tiredness and stress. This architecture
defines actions, including the transmission of notification messages
to the emergency services, other vehicles in the transmission range,
RSUs, and nearby pedestrians through the VANET. The dissemination
of warning and safety messages through VANETs would alert other
drivers about possible hazards, increase the available maneuvering
time [19], and prevent accidents that would otherwise be caused by the
driver's negative emotional behaviors. The results of this study show
preliminary evidence of measuring emotions using a BSN and logistic
regression. Based on these results, we hypothesize that it is possible to
quantify the driver's emotions and that, by constantly monitoring them, the
proposed architecture can play a role in preventing car accidents involving
the driver and other people and vehicles. Future exper-
iments will analyze the following: 1) the role of emotional awareness
(emotional intelligence) and self-regulation of negative emotions while
driving; 2) the dynamics of emotional change in relation to external
factors such as driving conditions and duration, age, experience, and
gender; and 3) the role of the architecture in reducing car accidents.
Future work also includes carrying out in-depth data analyses
and correlating emotional responses with driving behavior such as
pressure on the accelerator and brake and adding a communication
component to existing VANETs. We will also analyze which protocols
and tools better fit the use of VANETs for user applications. Finally,
we would like to study some technical aspects of VANETs, test the
overall architecture, and see how much we can reduce preventable car
accidents.
REFERENCES
[1] S. Kojima et al., “Noninvasive biological sensor system for detection of
drunk driving,” in Proc. 9th Int. Conf. ITAB, 2009, pp. 1–4.
[2] Y.-C. Wu, Y.-Q. Xia, P. Xie, and X.-W. Ji, “The design of an automotive
anti-drunk driving system to guarantee the uniqueness of driver,” in Proc.
ICIECS, 2009, pp. 1–4.
[3] W. J. Horrey and C. D. Wickens, “Examining the impact of cell phone
conversations on driving using meta-analytic techniques,” Hum. Factors,
vol. 48, no. 1, pp. 196–205, 2006.
[4] “Traffic safety facts—Distracted driving 2009,” U.S. Dept. Transp.,
Washington, DC, USA, 2010.
[5] M. Sakairi and M. Togami, “Use of water cluster detector for preventing
drunk and drowsy driving,” in Proc. IEEE Sensors, 2010, pp. 141–144.
[6] C.-T. Lin et al., “A real-time wireless brain–computer interface system for
drowsiness detection,” IEEE Trans. Biomed. Circuits Syst., vol. 4, no. 4,
pp. 214–222, Aug. 2010.
[7] L. Chin-Teng et al., “EEG-based drowsiness estimation for safety driving
using independent component analysis,” IEEE Trans. Circuits Syst. I, Reg.
Papers, vol. 52, no. 12, pp. 2726–2738, Dec. 2005.
[8] M. Flores, J. M. Armingol, and A. de la Escalera, “Driver drowsiness
warning system using visual information for both diurnal and nocturnal
illumination conditions,” EURASIP J. Adv. Signal Process., vol. 2010,
no. 1, p. 438205, Jul. 2010.
[9] C. D. Katsis, N. Katertsidis, G. Ganiatsas, and D. I. Fotiadis, “Toward
emotion recognition in car-racing drivers: A biosignal processing ap-
proach,” IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 38, no. 3,
pp. 502–512, May 2008.
[10] C. D. Katsis, Y. Goletsis, G. Rigas, and D. I. Fotiadis, “A wearable
system for the affective monitoring of car racing drivers during simulated
conditions,” Transp. Res. C, Emerging Technol., vol. 19, no. 3, pp. 541–
551, Jun. 2011.
[11] H. Cai and Y. Lin, “Modeling of operators’ emotion and task performance
in a virtual driving environment,” Int. J. Hum.-Comput. Stud., vol. 69,
no. 9, pp. 571–586, Aug. 2011.
[12] J. A. Healey and R. W. Picard, “Detecting stress during real-world driv-
ing tasks using physiological sensors,” IEEE Trans. Intell. Transp. Syst.,
vol. 6, no. 2, pp. 156–166, Jun. 2005.
[13] G. Rigas, Y. Goletsis, and D. I. Fotiadis, “Real-time driver’s stress event
detection,” IEEE Trans. Intell. Transp. Syst., vol. 13, no. 1, pp. 221–234,
Mar. 2012.
[14] R. R. Singh, S. Conjeti, and R. Banerjee, “A comparative evaluation of
neural network classifiers for stress level analysis of automotive drivers
using physiological signals,” Biomed. Signal Process. Control, vol. 8,
no. 6, pp. 740–754, Nov. 2013.
[15] Liberate Yourself from the Lab: Q Sensor Measures EDA in the Wild,
Affectiva Inc., Waltham, MA, USA, Aug. 13, 2013.
[16] Z. Liu et al., “Measuring the engagement level of TV viewers,” in Proc.
10th IEEE Int. Conf. Workshops Autom. FG Recog., 2013, pp. 1–7.
[17] Y. Ayzenberg, J. Hernandez, and R. W. Picard, “FEEL: Frequent EDA and
Event Logging, a mobile social interaction stress monitoring system,” in
Proc. CHI Extended Abstr. Hum. Factors Comput. Syst., Austin, TX, USA,
2012, pp. 2357–2362.
[18] J. R. Landis and G. G. Koch, “The measurement of observer agreement
for categorical data,” Biometrics, vol. 33, no. 1, pp. 159–174, Mar. 1977.
[19] B. K. Chaurasia and S. Verma, “Haste induced behavior and VANET
communication,” in Proc. IEEE ICVES, Nov. 2009, pp. 19–24.