1850 IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, VOL. 15, NO. 4, AUGUST 2014
Developing a Body Sensor Network to
Detect Emotions During Driving
Genaro Rebolledo-Mendez, Angélica Reyes, Sebastian Paszkowicz, Mari Carmen Domingo, and Lee Skrypchuk
Abstract—Emerging applications using body sensor networks (BSNs)
constitute a new trend in car safety. However, the integration of hetero-
geneous body sensors with vehicular ad hoc networks (VANETs) poses a
challenge, particularly on the detection of human behavioral states that
may impair driving. This paper proposes a detector of human emotions,
of which tiredness and stress (tension) could be related to traffic accidents.
We present an exploratory study demonstrating the feasibility of detecting
one emotional state in real time using a BSN. Based on these results, we
propose a middleware architecture that is able to detect emotions, which can
be communicated via the onboard unit of a vehicle with city emergency
services, VANETs, and roadside units, aimed at improving the driver’s
experience and at guaranteeing better security measures for the car driver.
Index Terms—Body sensor network (BSN), driver’s behavior, vehicu-
lar ad hoc network (VANET).
I. INTRODUCTION
Body sensor networks (BSNs) are becoming more complex due
to the use of different kinds of sophisticated sensors, which provide
advanced functionalities. BSNs are continuously being integrated into
different environments of our everyday lives, including cars. This
paper presents the results of ongoing research in the area of emotional
detection using a BSN in cars. This paper uses the empirical evidence
obtained during one experiment to propose a new architecture designed
to prevent accidents caused by driver’s negative emotional reactions
while driving. To achieve this, we considered a pervasive computing
environment in which one vehicle with communication capabilities
was integrated with drivers who wore a BSN in order to collect
physiological data that could be related to driving impairment. Most
drivers are aware of the effects that drinking alcohol and using cell
phones may have on driving [1]–[3]. However, little consideration
has been given to other factors that may impair driving such as the
emotional state of the driver. According to official statistics, inattention
(including emotional factors) could have serious or fatal consequences
for driving [4]. For example, according to the U.S. National Highway
Traffic Safety Administration [4], 20% of injury crashes in 2009
involved reports of distracted driving. In addition, 2.7% of drivers and
motorcycle riders involved in fatal crashes were drowsy, asleep, fatigued,
ill, or had had a blackout. These are important figures that need to be
addressed for accident prevention. This paper taps into this need and
presents empirical evidence toward the detection of emotions.
Manuscript received October 29, 2013; revised January 19, 2014, April 22,
2014, and June 18, 2014; accepted June 19, 2014. Date of publication August 1,
2014; date of current version August 1, 2014. This work was supported in
part by Jaguar Land Rover and in part by the Spanish Ministry of Education
and Science under project TRA2013-45119-R RPAS OPERATIONS IN THE
SINGLE EUROPEAN SKY and project TIN2010-20136-C03-01. The Associate
Editor for this paper was C. Olaverri-Monreal.
G. Rebolledo-Mendez is with the Facultad de Estadística e Informática,
Universidad Veracruzana, 91020 Jalapa, Mexico, and also with AffectSense,
91500 Veracruz, Mexico (e-mail: grebolledo@uv.mx; g.rebolledo@affectsense.com).
A. Reyes is with the Department of Computer Architecture, Universitat
Politécnica de Catalunya, 08034 Barcelona, Spain (e-mail: mreyes@ac.upc.edu).
S. Paszkowicz and L. Skrypchuk are with Jaguar Land Rover Research
and Advanced Engineering, International Digital Laboratory, Warwick
Manufacturing Group, University of Warwick, Coventry CV4 7AL, U.K. (e-mail:
spaszkow@jaguarlandrover.com; lskrypch@jaguarlandrover.com).
M. C. Domingo is with the Escola d’Enginyeria de Telecomunicació i
Aeroespacial de Castelldefels and the Departament d’Enginyeria Telemática,
Universitat Politécnica de Catalunya, 08034 Barcelona, Spain (e-mail:
mari.carmen.domingo@upc.edu).
Digital Object Identifier 10.1109/TITS.2014.2335151
Previous work has focused on the detection of inattentive states in
relation to drunkenness and other nonemotional factors in driving. A
system to automatically detect both drunk and drowsy driving states
was developed by Sakairi and Togami [5]. Chin-Teng et al. [6], [7]
proposed a technique to continuously detect drivers’ cognitive states
in relation to their abilities in perception, recognition, and vehicle
control using electroencephalography (EEG). The authors developed
a drowsiness-estimation system based on EEG to estimate a driver’s
cognitive state when he/she was driving a car in a virtual-reality-
based dynamic simulator. EEG signals have been also used to detect
drowsiness. For example, Flores et al. [8] proposed a real-time wire-
less EEG-based computer interface system to collect, amplify, filter,
preprocess, and send EEG signals to a signal-processing module using
wireless communication. The signal-processing module was capable
of detecting real-time drowsiness.
Some works have addressed the recognition of the emotional states of
drivers using BSNs in simulation environments [9]–[11], whereas
others have analyzed drivers’ emotions in real-life scenarios [12]–[14].
Although the papers reporting experiments in simulated environments
provide a good indication of the feasibility of detecting emotional
states during driving, there are indications [9] that subjects experienced
different emotions in simulated environments from those they may
experience in real conditions. Because real-life driving conditions
potentially provoke genuine emotions, we chose to carry out our
experiments in realistic settings as a means to provide unique insights
into drivers’ emotional behaviors. In [12], physiological sensing has
been applied to determine the driver’s stress levels using an electrocar-
diogram (ECG), an electromyogram, and electrodermal activity (EDA)
in real scenarios comprising highway and city driving. The authors
suggested that the first sensors that should be integrated into a car
should be the skin conductance and heart rate sensors [12]. In [13], a
real-time methodology for the assessment of drivers’ stress has been
introduced, employing not only physiological data but also driving
history extracted from Global Positioning System records and the
vehicle’s controller area network bus data. This information has been
incorporated into a Bayesian network to estimate the levels of stress.
Their results in real driving conditions show an accuracy of 82% in stress
event detection. However, the authors note that more reliable stress
metrics should be based, for example, on EEG [13]. Singh et al. [14]
monitored the driver’s affective state using physiological signals (EDA
and photoplethysmography) during on-road driving experiments.
This paper aims to provide preliminary empirical evidence of how
to recognize four emotional states in a real-world driving situation:
concentrated, tension, tired, and relaxed. The objective of this paper is
twofold. On one hand, we present one field study specifically defined
to measure emotions in drivers using a BSN. On the other hand, we
propose an architecture describing how a BSN for emotion detection
could be integrated into a vehicular onboard unit (OBU). Our proposal
consists of detecting the driver’s emotions and defining corresponding
actions such as the transmission of notification messages to emergency
services, other vehicles within the transmission range, roadside units
1524-9050 © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
Fig. 1. Proposed scenario.
(RSUs), and nearby pedestrians operated by the OBU and/or the
driver’s wireless personal device. The structure of this paper is as
follows. Section II describes a scenario for the inclusion of a BSN in
conjunction with the vehicle’s OBU. Section III presents a proposition
for an architecture taking advantage of emotional recognition using a
BSN. Section IV presents a study where the resulting BSN was de-
ployed for emotional detection during real driving conditions. Finally,
a discussion of our work and the research challenges to be addressed
is presented in Section V.
II. PROPOSED SCENARIO
We propose a scenario (see Fig. 1) where the driver’s behavior is
monitored in real time. A driver wears a BSN consisting of at least
two sensors capable of reading physiological signals. The driver’s
physiology is constantly measured and sent to the OBU, which is
embedded in the vehicle.
In this context, the OBU determines the driver’s emotional states,
considering the models of emotions similar to those described in Sec-
tion IV. In this proposition, common causes of traffic accidents related
to emotional states such as cognitive fatigue or stress can be detected.
The OBU provides cues in an effort to make the driver become aware
of these states. In this paper, we focus on highway and city contexts, as
well as the types of emotional reactions that occur during the driving
sessions. Based on the results presented in Section IV, we hypothesize
that it is possible to safely monitor the driver and detect emotions that
may pose a danger for the driver and other road users. Because of this,
our proposed architecture considers mechanisms to inform emergency
services in case there is an associated driving danger (see Fig. 1).
III. ARCHITECTURE FOR DRIVERS’ EMOTION DETECTION USING A BSN
We propose a BSN deployed to sense drivers’ physiological changes
in real time, as well as to examine the feasibility of establishing
an onboard system capable of sensing physiological data and of
calculating a driver’s emotional state in real time.
Our field study employed ECG, EEG, EDA, and respiration
sensors. This paper presents results in relation to the EEG and EDA
sensors. Future work will integrate results from the data obtained with
the other sensors. If the BSN detects a driver’s emotional state that
could produce impaired driving such as excessive tiredness or tension,
then alarm notification messages are sent from the vehicle’s OBU to
the RSUs or emergency services (see Fig. 2).
Fig. 2. Integrated BSN and a vehicle’s OBU.
A. BSN Module
The BSN consisted of two portable, commercially available sensors.
The physiological data collected were neural activity and EDA. The
sensor used to collect neural activity was NeuroSky’s MindWave.1
The MindWave software indicates two types
of neural activity: attention and meditation. Attention is related to a
state of alertness and denotes an increase in Beta waves. Meditation
is related to increases in Alpha waves and indicates a state of alert
relaxation.
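NeuroSky’s eSense algorithms are proprietary, but the relation between raw EEG and the two band-activity readings can be sketched with a plain discrete Fourier transform. Everything below (the 128-Hz sampling rate, the synthetic signal, and the band edges) is an illustrative assumption, not the device’s actual processing:

```python
import cmath
import math

def band_power(signal, fs, lo_hz, hi_hz):
    """Power of `signal` in the [lo_hz, hi_hz] Hz band via a plain DFT."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        if lo_hz <= k * fs / n <= hi_hz:
            x_k = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                      for t in range(n))
            power += abs(x_k) ** 2 / n
    return power

fs = 128                                   # assumed sampling rate (Hz)
# Synthetic 1-s EEG window: strong 10-Hz (alpha) plus weak 20-Hz (beta) tone.
eeg = [1.0 * math.sin(2 * math.pi * 10 * t / fs) +
       0.3 * math.sin(2 * math.pi * 20 * t / fs) for t in range(fs)]

alpha = band_power(eeg, fs, 8, 12)         # "meditation" correlate
beta = band_power(eeg, fs, 13, 30)         # "attention" correlate
print(alpha > beta)                        # True: alpha-dominated window
```

A higher Beta/Alpha power ratio would then be read as increased attention, and the opposite as alert relaxation, mirroring the two eSense outputs described above.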
The EDA sensor was Affectiva’s Q sensor [15] consisting of a
bracelet with a sensor attached to it. The Q sensor measures EDA,
which is also called skin conductance. The Q sensor displays varia-
tions in electrical activity measured at the surface of the skin in mi-
crosiemens (a unit of conductance). In its raw format, EDA expresses
electrical conductance (inverse of resistance) across the skin. Changes
in EDA are automatically and unconsciously activated by the wearer’s
brain and reflect arousal levels on the part of the wearer. Higher levels
of EDA indicate higher levels of arousal and could be related to a
person being more engaged, stressed, or excited. Lower EDA indicates
lower levels of arousal and relates to disengagement, boredom, or
calmness.
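As a toy illustration of how raw skin conductance might be mapped to the arousal levels just described, consider a baseline-relative rule; the per-wearer baseline and the 15% band are assumptions made for illustration, not values from this study:

```python
def arousal_label(eda_window_us, baseline_us, rel_threshold=0.15):
    """Map a window of EDA readings (microsiemens) to an arousal label.

    Illustrative rule only: the baseline and the 15% band are assumptions.
    """
    mean_eda = sum(eda_window_us) / len(eda_window_us)
    if mean_eda > baseline_us * (1 + rel_threshold):
        return "high"      # engaged, stressed, or excited
    if mean_eda < baseline_us * (1 - rel_threshold):
        return "low"       # disengaged, bored, or calm
    return "neutral"

print(arousal_label([2.4, 2.6, 2.5], baseline_us=2.0))   # high
```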
The decision to utilize these sensors was primarily based on driver
safety. It was of paramount importance to use a BSN that was not
obtrusive and did not impede a driver’s ability to correctly perform all the tasks
involved in guiding a car. A second consideration was the reliability
of the data collection process. The Q sensor’s EDA collection mechanism
had previously been validated [16], [17], as the device was specifically
designed for field data collection, so we considered its reliability
established. Unlike the Q sensor, the NeuroSky
device does not store data on the device, but depends on external
storage mechanisms and a steady Bluetooth-enabled connection. We
achieved this by developing a program capable of reading data gener-
ated by the MindWave and logging it onto a laptop computer serving
as the vehicle’s OBU. The acquired data are transmitted via Bluetooth,
but future versions may use a wireless communication module using
ultrawideband or IEEE 802.15.6 for wireless transmission between the
1 http://www.neurosky.com/Products/MindWave.aspx
Fig. 3. Information to be transmitted to the emergency services.
sensors and the gateway. Bluetooth or Zigbee could also be used to
forward the physiological data from the gateway to the vehicle’s OBU.
The information passed between the BSN of the driver and the OBU
includes the following: health state and characteristics of the emotional
state that impairs driving (see Fig. 3).
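The content of such a BSN-to-OBU message (see Fig. 3) could be serialized as a small structure; the field names and values below are hypothetical, chosen only to illustrate the kind of payload involved:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class DriverStateMessage:
    """Sketch of a BSN-to-OBU notification; all field names are hypothetical."""
    driver_id: str
    emotional_state: str     # e.g., "tension" or "tired"
    eda_microsiemens: float  # latest EDA reading
    attention_level: int     # eSense-style 0-100 attention value
    impairs_driving: bool    # triggers the alarm path when True

msg = DriverStateMessage("driver-10", "tension", 4.2, 35, True)
payload = json.dumps(asdict(msg))  # serialized for the wireless link
print(payload)
```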
B. Vehicle’s OBU
The acquired data from the BSN are processed by the OBU in
real time, since the BSN gateway might be restricted by its low
processing capabilities and limited battery capacity. The OBU is divided into
three major modules: the feature extraction module, the intelligent
driver’s state recognition module, and the alarm notification module.
The first module extracts features from the selected biosignals. These
features are used by the intelligent driver’s state recognition module
to determine if the driver has one of the predefined emotional states.
Alarm notifications are sent to the emergency services in case of
detection of an emotional state that impairs driving. The communication
between the OBU and the emergency services will exploit various
communication technologies (DSRC, UMTS/HSDPA, and WAVE),
empowering OBUs with vehicular networks and cellular or wireless
communications. Vehicle-to-vehicle networks allow faster alarm notifications,
since sensing and propagation of information are done on the
spot in real time via multihop communication. Surrounding vehicles
will be immediately notified, and the alarm can be further propagated
via radio base stations to the emergency services.
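The three OBU modules can be sketched as a minimal pipeline; the feature set, the single-threshold recognizer, and the 4.0-microsiemens threshold are simplifying assumptions for illustration, not the implementation used in the study:

```python
# Sketch of the OBU pipeline: feature extraction -> state recognition
# -> alarm notification. All rules and thresholds are illustrative.

IMPAIRING_STATES = frozenset({"tension", "tired"})

def extract_features(eda_samples):
    """Feature extraction module: summarize a window of EDA samples."""
    mean = sum(eda_samples) / len(eda_samples)
    var = sum((s - mean) ** 2 for s in eda_samples) / len(eda_samples)
    return {"eda_mean": mean, "eda_var": var}

def recognize_state(features, tension_threshold=4.0):
    """Driver's state recognition module (illustrative single-threshold rule)."""
    return "tension" if features["eda_mean"] > tension_threshold else "relaxed"

def notify(state):
    """Alarm notification module: raise an alarm only for impairing states."""
    return f"ALARM:{state}" if state in IMPAIRING_STATES else None

features = extract_features([4.5, 4.8, 4.6])
print(notify(recognize_state(features)))   # ALARM:tension
```

In the proposed architecture, the string returned by the notification step would be the message disseminated to the emergency services, nearby vehicles, and RSUs.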
C. Emergency Services
The emergency services can also be notified if the driver requires
medical assistance, for example, due to excessive stress. The
information sent from the vehicle’s OBU to the emergency services
includes the information collected from the BSN, driver and vehicle
characteristics, as well as the OBU location (see Fig. 3).
Accurate OBU location in open-air scenarios can be provided by
the Global Navigation Satellite Systems. However, in dense urban
and underground scenarios, these systems suffer from the weakness
(or even the blockage) of their signals when the receiver operates in
non-line-of-sight conditions. Switching between technologies, such as
wideband communication provided through 3G networks, radio-network-based
localization methods, and wireless sensor networks, allows the
most accurate position of the OBU to be determined.
Pedestrians and other drivers may also be warned of a driver’s
indisposition to drive properly, through the use of notification
messages forwarded to their own OBU (e.g., smartphones) using
vehicle-to-pedestrian or infrastructure-to-pedestrian communications.
The following section reports an evaluation made on an implemen-
tation of the architecture (excluding the emergency services) and the
BSN in the context of an experiment involving drivers in real driving
scenarios.
IV. METHOD
An experiment was aimed at collecting physiological data using
the proposed architecture (except for the emergency services) and
a BSN in real driving conditions. The experiment lasted for seven
working days. It consisted of asking participants to wear sensors and
to drive in two driving conditions in relation to highway and city
environments. Gathering data from two conditions allowed the study
of body reactions in the same driver. It also enabled the study of
multiple data points potentially useful for understanding the drivers’
physiological responses.
A. Participants
There were 24 drivers (13 males and 11 females) aged between 23
and 48 years. The average driving time was 8 min and 5 s per condition.
Weather, traffic conditions (vehicle volume and pedestrians),
and time of day were not controlled, and drivers faced variable,
unpredictable situations. Information related to the participants’ coffee
ingestion and hours of sleep during the night prior to the experiment
was collected via questionnaires. Participants were asked to spend
2 h of their time in order to complete the experiment. Prior to the
experiment, all the participants filled out a consent form.
B. Driving Conditions and Driving Tasks
The two driving conditions were simulated on Jaguar Land Rover’s
vehicle proving ground in Gaydon in the U.K. The Emissions Circuit
served as the highway-like situation, and Gaydon’s streets simulated
a city-like environment containing roundabouts, pedestrian crossings,
and speed limits. The car used for the experiment was a Range Rover
(2010 Model Year). The task the participants were asked to perform
was to drive the car as they normally would on a regular day, but
to keep the speed below 100 mi/h (160.93 km/h) in order to comply
with Gaydon’s guidelines for experimentation. The participants were
told to treat the proving ground as normal public roads and to follow
the traffic rules applicable in the U.K.2 Before driving, the participants
were asked to adjust the seat, the steering wheel, and the mirrors; and
all seat belts were checked to be in place. One team member sitting in
the passenger seat provided the driving tasks by reading a predefined
set of instructions. These instructions consisted of driving indications
that allowed the drivers to navigate the proving ground. Examples
of the instructions included “drive to the roundabout at the exit of
the observation tower area” or “complete two laps of the emissions
circuit.”
C. Procedure
The procedure consisted of four stages.
Stage 1: Drivers were briefed about the aims of the experiment and its
processes and were asked to fill out a consent form.
Stage 2: Drivers were asked to wear several types of sensors. This
report focuses only on two types of physiological data.
Stage 3: Drivers were asked to drive in two types of conditions. The
first was always highway conditions, followed by city conditions.
A video camera was placed on the car’s dashboard to film the
driver’s face while driving.
Stage 4: The video was reviewed immediately after Stage 3 was finished.
The drivers were asked to self-report the emotional state they saw
at fixed intervals (see Table I for the emotional states); please
note that responses were coded in relation to only the four main
emotional states.
2 https://www.gov.uk/speed-limits
TABLE I
EMOTIONAL STATES CONSIDERED FOR THE EXPERIMENT
TABLE II
DESCRIPTIVE STATISTICS FOR DRIVER 10 UNDER DRIVING
CONDITION 2, TIME 7 MIN AND 6 S (N = 427)
The aim of collecting the reports was to look for correlations be-
tween one or multiple physiological responses (measured using the Q
sensor and the NeuroSky device) and emotional information provided
by the drivers themselves. Both the physiological data and the self-
reports were used to build preliminary models of emotional reactions
while driving. The first model employed logistic regression, where one
physiological signal was used to predict emotions. The second model
was based on a K-means algorithm to classify the physiological data
to predict an emotional state.
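Both modeling approaches can be sketched on toy one-dimensional data; the EDA values and labels below are fabricated for illustration, whereas the study fit its models to the drivers’ real data:

```python
import math

# Toy 1-D training data: (EDA value, 1 if "concentrated" was self-reported).
data = [(1.0, 0), (1.5, 0), (2.0, 0), (3.0, 1), (3.5, 1), (4.0, 1)]

# Model 1: one-variable logistic regression fit by plain gradient descent.
b0 = b1 = 0.0
for _ in range(5000):
    g0 = g1 = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
        g0 += p - y
        g1 += (p - y) * x
    b0 -= 0.1 * g0
    b1 -= 0.1 * g1

def predict_logistic(x):
    """True when the fitted model rates "concentrated" at probability >= 0.5."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x))) >= 0.5

# Model 2: two-cluster 1-D K-means (Lloyd's algorithm) on the same EDA values.
c_lo, c_hi = 1.0, 4.0                     # initial centroids
for _ in range(20):
    lo = [x for x, _ in data if abs(x - c_lo) <= abs(x - c_hi)]
    hi = [x for x, _ in data if abs(x - c_lo) > abs(x - c_hi)]
    c_lo, c_hi = sum(lo) / len(lo), sum(hi) / len(hi)

def predict_kmeans(x):
    """True when x falls in the high-EDA cluster."""
    return abs(x - c_hi) < abs(x - c_lo)

print(predict_logistic(3.2), predict_kmeans(3.2))   # True True
```

The logistic model outputs a probability for one target state, whereas the K-means model only assigns each measurement to the nearest cluster, which is one reason the two can reach different agreement levels against the self-reports.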
D. Results
The first step consisted of organizing drivers by considering their
physiological data and the completeness of their self-reports.
Fifty-four percent (N = 13) of the drivers had complete data and were thus
considered in the analyses. Since the neural activity was captured in
raw format, it underwent two transformations: a fast Fourier transformation
(15%), followed by a natural-logarithm transformation. Descriptive
analyses (see Table II) show that neural activity had high coefficients
of variation and no linear relation with the other variables.
Unlike neural activity, EDA shows lower coefficients of variation.
Given the lack of linear relations among the variables, they were
treated as independent. To build the regression models, the levels
of significance of the variables were tested for a response variable
“affect,” a design variable with values from 1 to 4 referring to the
categorical values of the main emotional states in Table I. Since EDA
was found to correlate significantly among all the drivers
in the subsample (N = 13, Pearson’s r = 0.929, p < 0.05), we chose
this variable for the development of the models of emotions.
Two principal component analyses (PCAs) were used to identify the
driver with the most representative EDA pattern in the subsample.
For these analyses, the drivers were treated as variables, and the
drivers’ EDA readings were treated as cases. The results indicated that seven
drivers account for 98.5% of the cumulative variance of the
subsample’s EDA behavior. A second PCA, in which the seven drivers
identified in the first PCA were treated as variables, suggested that
driver 10 explains 99.1% of the variability of this smaller subsample
(N = 7). Driver 10’s EDA data were thus employed as the training set.
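The representative-driver selection can be sketched with a small PCA: treat drivers as variables, EDA samples as cases, and pick the driver with the largest loading on the first principal component. The matrix below is fabricated toy data for three drivers; the study used the real EDA of 13 drivers:

```python
import math

# Toy EDA matrix: rows are time samples (cases), columns are drivers
# (variables). Values are fabricated for illustration.
eda = [
    [2.0, 2.1, 3.9],
    [2.2, 2.3, 4.1],
    [2.4, 2.4, 4.4],
    [2.6, 2.6, 4.6],
]
n_cases, n_drivers = len(eda), len(eda[0])

# Center each driver column.
means = [sum(row[j] for row in eda) / n_cases for j in range(n_drivers)]
centered = [[row[j] - means[j] for j in range(n_drivers)] for row in eda]

# Sample covariance matrix of the driver columns.
cov = [[sum(centered[t][i] * centered[t][j] for t in range(n_cases))
        / (n_cases - 1) for j in range(n_drivers)] for i in range(n_drivers)]

# First principal component via power iteration.
v = [1.0] * n_drivers
for _ in range(100):
    w = [sum(cov[i][j] * v[j] for j in range(n_drivers))
         for i in range(n_drivers)]
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]

# The "most representative" driver has the largest PC1 loading.
representative = max(range(n_drivers), key=lambda j: abs(v[j]))
print(representative)
```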
The data from the rest of the drivers (N = 12) were used as a test
set. Table III includes descriptive statistics for the emotional data, as
self-reported by the drivers for the two driving conditions. Given that
some emotions are not present during driving, five logistic regression
models were developed: three for condition 1 and two for condition 2.
TABLE III
EMOTIONAL INFORMATION FOR TWO DRIVING CONDITIONS
Fig. 4. Fitted function and observed values to detect the state concentrated.
One model (see Fig. 4) and its formula for the detection of the state
“concentrated” for the “city-like” condition are presented as an example, i.e.,
y = exp(4.05 + 1.68857x) / (1 + exp(4.05 + 1.68857x)).
In the formula, the “y” values refer to the response variable, whereas
the “x” values represent the current EDA measurement. To test the model,
we fed it with the drivers’ physiological data and calculated the levels
of agreement (using Cohen’s Kappa) between the model’s responses
and the self-reports provided by driver 10. The results showed that the
model’s Kappa index is 0.5455, indicating a moderate agreement be-
tween the model and the self-reports. The level of agreement between
the model and the training set was 0.7186, indicating a substantial
agreement. In comparison, a K-means classifier built with the same
data set (training and test) has a Kappa of 0.2745, with a fair level of
agreement. A characterization of agreement levels proposes levels <0
to indicate no agreement, 0–0.20 as slight, 0.21–0.40 as fair, 0.41–0.60
as moderate, 0.61–0.80 as substantial, and 0.81–1 as almost perfect
agreement [18]. The accuracy of the other four models indicated
only slight or no agreement. The cause may lie in the self-reports, as they
were provided by individual drivers rather than by a single observer.
Future studies will focus on building models that consider affective
assessment only by one person. In addition to building new models,
we plan to use a one-versus-all classification approach to pick the
most promising class.
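The agreement computation can be reproduced with a small Cohen’s Kappa implementation together with the Landis–Koch scale cited above; the two label sequences below are toy data, and only the interpretation function is exercised against the kappa values actually reported:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa between two equal-length label sequences."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    p_expected = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)
                     for l in labels)
    return (p_observed - p_expected) / (1 - p_expected)

def landis_koch(kappa):
    """Landis-Koch characterization of agreement levels [18]."""
    if kappa < 0:
        return "no agreement"
    for upper, name in [(0.20, "slight"), (0.40, "fair"), (0.60, "moderate"),
                        (0.80, "substantial"), (1.00, "almost perfect")]:
        if kappa <= upper:
            return name

# Toy model-vs-self-report comparison (1 = "concentrated", 0 = other).
model_labels = [1, 1, 0, 1, 0, 0, 1, 1]
self_reports = [1, 0, 0, 1, 0, 1, 1, 1]
kappa = cohens_kappa(model_labels, self_reports)
print(round(kappa, 4), landis_koch(kappa))   # 0.4667 moderate
```

Applying landis_koch to the kappas reported above yields “moderate” for 0.5455, “substantial” for 0.7186, and “fair” for 0.2745, matching the characterization in [18].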
The results of this preliminary model are encouraging and provide
an indication of the feasibility of detecting emotions for real driving
situations. Given the levels of agreement between the preliminary
model and the self-reports, this methodology has the potential to
provide accurate classification of emotions that can be integrated
with the vehicle’s OBU. Future studies will analyze drivers’ personal
characteristics such as age, driving experience, and use of medications.
Other nonphysiological data, such as the pressure exerted on the
accelerator and/or the brake, will be used to build more reliable models
of emotions.
V. CONCLUSION AND FUTURE WORK
This paper has presented, on one hand, the results of a field study
specifically set up to measure emotions in drivers using a BSN. To
that end, we defined a BSN as consisting of two sensors. On the other
hand, this paper has proposed an architecture describing how the BSN
could be integrated into vehicular ad hoc networks (VANETs) in order
to analyze driver’s emotions and to orchestrate actions operated by
an OBU with the aim of preventing potentially fatal accidents related
to negative emotions such as tiredness and stress. This architecture
defines actions, including the transmission of notification messages
to the emergency services, other vehicles in the transmission range,
RSUs, and nearby pedestrians through the VANET. The dissemination
of warning and safety messages through VANETs would alert other
drivers about possible hazards, increase the available maneuvering
time [19], and prevent accidents that would have been caused by
driver’s negative emotional behaviors. The results of this study show
preliminary evidence of measuring emotions using a BSN and logistic
regression. Based on these results, we hypothesize that it is possible to
quantify the driver’s emotions and that the proposed architecture can play
a role in preventing car accidents (involving the driver and other people and
vehicles) by constantly monitoring the driver’s emotions. Future
experiments will analyze the following: 1) the role of emotional awareness
(emotional intelligence) and self-regulation of negative emotions while
driving; 2) the dynamics of emotional change in relation to external
factors such as driving conditions and duration, age, experience, and
gender; and 3) the role of the architecture in reducing car accidents.
Work for the future also consists of carrying out in-depth data analyses
and correlating emotional responses with driving behavior such as
pressure on the accelerator and brake and adding a communication
component to existing VANETs. We will also analyze which protocols
and tools better fit the use of VANETs for user applications. Finally,
we would like to study some technical aspects from VANETs, test the
overall architecture, and see how much we can reduce preventable car
accidents.
REFERENCES
[1] S. Kojima et al., “Noninvasive biological sensor system for detection of
drunk driving,” in Proc. 9th Int. Conf. ITAB, 2009, pp. 1–4.
[2] Y.-C. Wu, Y.-Q. Xia, P. Xie, and X.-W. Ji, “The design of an automotive
anti-drunk driving system to guarantee the uniqueness of driver,” in Proc.
ICIECS, 2009, pp. 1–4.
[3] W. J. Horrey and C. D. Wickens, “Examining the impact of cell phone
conversations on driving using meta-analytic techniques,” Hum. Factors,
vol. 48, no. 1, pp. 196–205, 2006.
[4] “Traffic safety facts—Distracted driving 2009,” U.S. Dept. Transp.,
Washington, DC, USA, 2010.
[5] M. Sakairi and M. Togami, “Use of water cluster detector for preventing
drunk and drowsy driving,” in Proc. IEEE Sensors, 2010, pp. 141–144.
[6] C.-T. Lin et al., “A real-time wireless brain–computer interface system for
drowsiness detection,” IEEE Trans. Biomed. Circuits Syst., vol. 4, no. 4,
pp. 214–222, Aug. 2010.
[7] L. Chin-Teng et al., “EEG-based drowsiness estimation for safety driving
using independent component analysis,” IEEE Trans. Circuits Syst. I, Reg.
Papers, vol. 52, no. 12, pp. 2726–2738, Dec. 2005.
[8] M. Flores, J. M. Armingol, and A. de la Escalera, “Driver drowsiness
warning system using visual information for both diurnal and nocturnal
illumination conditions,” EURASIP J. Adv. Signal Process., vol. 2010,
no. 1, Art. no. 438205, Jul. 2010.
[9] C. D. Katsis, N. Katertsidis, G. Ganiatsas, and D. I. Fotiadis, “Toward
emotion recognition in car-racing drivers: A biosignal processing ap-
proach,” IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 38, no. 3,
pp. 502–512, May 2008.
[10] C. D. Katsis, Y. Goletsis, G. Rigas, and D. I. Fotiadis, “A wearable
system for the affective monitoring of car racing drivers during simulated
conditions,” Transp. Res. C, Emerging Technol., vol. 19, no. 3, pp. 541–
551, Jun. 2011.
[11] H. Cai and Y. Lin, “Modeling of operators’ emotion and task performance
in a virtual driving environment,” Int. J. Hum.-Comput. Stud., vol. 69,
no. 9, pp. 571–586, Aug. 2011.
[12] J. A. Healey and R. W. Picard, “Detecting stress during real-world driv-
ing tasks using physiological sensors,” IEEE Trans. Intell. Transp. Syst.,
vol. 6, no. 2, pp. 156–166, Jun. 2005.
[13] G. Rigas, Y. Goletsis, and D. I. Fotiadis, “Real-time driver’s stress event
detection,” IEEE Trans. Intell. Transp. Syst., vol. 13, no. 1, pp. 221–234,
Mar. 2012.
[14] R. R. Singh, S. Conjeti, and R. Banerjee, “A comparative evaluation of
neural network classifiers for stress level analysis of automotive drivers
using physiological signals,” Biomed. Signal Process. Control, vol. 8,
no. 6, pp. 740–754, Nov. 2013.
[15] Liberate Yourself from the Lab: Q Sensor Measures EDA in the Wild,
Affectiva Inc., Waltham, MA, USA, Aug. 13, 2013.
[16] Z. Liu et al., “Measuring the engagement level of TV viewers,” in Proc.
10th IEEE Int. Conf. Workshops Autom. FG Recog., 2013, pp. 1–7.
[17] Y. Ayzenberg, J. Hernandez, and R. W. Picard, “FEEL: Frequent EDA and
Event Logging, a mobile social interaction stress monitoring system,” in
Proc. CHI Extended Abstr. Hum. Factors Comput. Syst., Austin, TX, USA,
2012, pp. 2357–2362.
[18] J. R. Landis and G. G. Koch, “The measurement of observer agreement
for categorical data,” Biometrics, vol. 33, no. 1, pp. 159–174, Mar. 1977.
[19] B. K. Chaurasia and S. Verma, “Haste induced behavior and VANET
communication,” in Proc. IEEE ICVES, Nov. 2009, pp. 19–24.
... Conzeti et al. [40] in 2012 proposed a bioinspired architecture for on-road emotion monitoring using recurrent neural networks from a photoplethysmogram (P.P.G.) and E.D.A. signals. In 2014, Robodello Mendez et al. [41] developed a body sensor network to detect emotions during the driving environment from E.E.G., E.D.A. using P.C.A., and logistic regression methods. Neska et al. [42] in 2018 proposed a driver emotion system using a random forest approach from physiological functional variable selection signals such as E.M.G., E.C.G., and RESP. ...
... Malta et al. [43] in 2011 also analyzed real-world driver's emotions using the Bayesian network, which combines both behavioral and physiological signals such as E.D.A. and the face. Among all these works, some results [15,25,26,28,31,33] have proposed systems running in a non-car environment, whereas works [20,29,37,[40][41][42] have been conducted in a real-time environment. Some results [14,[16][17][18]24,30,38,39] have used a simulator environment. ...
Article
Monitoring drivers' emotions is a key aspect of designing advanced driver assistance systems (ADAS) in intelligent vehicles. To ensure safety and reduce the possibility of road accidents, emotional monitoring plays a key role in assessing the mental status of the driver while driving. However, pose variations, illumination conditions, and occlusions are factors that hinder the detection of driver emotions. To overcome these challenges, two novel approaches using machine learning methods and deep neural networks are proposed to monitor drivers' expressions under different pose variations, illuminations, and occlusions. We obtained remarkable accuracies of 93.41%, 83.68%, 98.47%, and 98.18% on the CK+, FER 2013, KDEF, and KMU-FED datasets, respectively, for the first approach, and improved accuracies of 96.15%, 84.58%, 99.18%, and 99.09% on the same datasets for the second approach, compared with existing state-of-the-art methods.
... Table 4. Sets of attributes selected for evaluation of classifiers. Sets 1-5 are based only on features extracted from in-vehicle sensors; sets 6-9 add the brake-pedal features; sets 10-12 are based only on features extracted from the driver; and the last sets (13-14) combine vehicle and driver features (e.g., set 14, Vehicle + Driver: Velocity + RPM + Accelerator + Inertial + sEMG). ...
Article
Today's cars have dozens of sensors to monitor vehicle performance through different systems, most of which communicate via the vehicular network (CAN). Many of these sensors can be used for applications beyond their original ones, such as improving the driver experience or creating new safety tools. An example is monitoring variables that describe the driver's behavior. Interactions with the pedals, speed, and steering wheel, among other signals, carry driving characteristics. However, not all variables related to these interactions are available in all vehicles; for example, the excursion of the brake pedal. Using an acquisition module, data were obtained from the in-vehicle sensors on the CAN bus, the brake pedal (externally instrumented), and the driver (instrumented with an inertial sensor and leg electromyography) to evaluate the correlation hypothesis between these data, as well as the importance of the brake-pedal signal, which is not available in all car models. Different sets of sensors were evaluated to analyze the performance of three classifiers in identifying the driver's driving mode. Results were consistently better when driver signals were included: with both vehicle and driver attributes, accuracies above 0.93 were obtained for behavior identification and 0.96 for driver identification; without driver signals, accuracy in identifying behavior was still above 0.80.
The results show a good correlation between vehicle data and data obtained from the driver, suggesting that further studies may improve the accuracy of classifiers based exclusively on vehicle characteristics, both for behavior identification and for driver identification. This would enable practical applications in embedded systems for locally signaling and/or storing information about the driving mode, which is important for logistics companies.
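As a loose illustration of the kind of comparison the abstract describes, the sketch below fits a nearest-centroid classifier, a deliberately simple stand-in for the study's three classifiers, on synthetic vehicle-plus-driver features; the feature names, distributions, and labels are invented, not the study's data.

```python
import random
import statistics

def fit_centroids(X, y):
    """Per-class mean feature vector: a minimal stand-in classifier."""
    cents = {}
    for label in set(y):
        rows = [x for x, yi in zip(X, y) if yi == label]
        cents[label] = [statistics.mean(col) for col in zip(*rows)]
    return cents

def predict(cents, x):
    # Assign to the class whose centroid is nearest in squared Euclidean distance.
    return min(cents, key=lambda lab: sum((a - c) ** 2 for a, c in zip(x, cents[lab])))

random.seed(1)
# Invented feature vectors: [velocity km/h, engine RPM, leg sEMG level];
# label 0 = calm driving, 1 = aggressive driving.
def sample(label):
    base = (50.0, 2000.0, 0.2) if label == 0 else (90.0, 4000.0, 0.8)
    return [random.gauss(m, 0.1 * m) for m in base]

X = [sample(i % 2) for i in range(100)]
y = [i % 2 for i in range(100)]
cents = fit_centroids(X, y)
acc = sum(predict(cents, x) == yi for x, yi in zip(X, y)) / len(y)
print(round(acc, 2))
```

On well-separated synthetic classes the accuracy is near 1.0; the study's point, that adding driver signals such as sEMG improves classification, would show up here as a smaller gap between class centroids when those features are dropped.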
... BSN is a collection of low-power and lightweight wireless sensor nodes for monitoring human body processes and the surroundings. BSNs have been used to track users' activity in a number of situations, including but not limited to illness identification and prevention [50] via activity analysis, rehabilitation after a medical operation [311], and emotion detection of drivers [222]. For example, Ali et al. demonstrated the success of using wearable bio-sensors (e.g., Electroencephalogram (EEG)) to develop a robust emotion recognition model for patients with special needs in ambient assisted living environments [17]. ...
Article
The Internet of Things (IoT) boom has revolutionized almost every corner of people’s daily lives: healthcare, environment, transportation, manufacturing, supply chain, and so on. With the recent development of sensor and communication technology, IoT artifacts including smart wearables, cameras, smartwatches, and autonomous systems can accurately measure and perceive their surrounding environment. Continuous sensing generates massive amounts of data and presents challenges for machine learning. Deep learning models (e.g., convolution neural networks and recurrent neural networks) have been extensively employed in solving IoT tasks by learning patterns from multi-modal sensory data. Graph neural networks (GNNs), an emerging and fast-growing family of neural network models, can capture complex interactions within sensor topology and have been demonstrated to achieve state-of-the-art results in numerous IoT learning tasks. In this survey, we present a comprehensive review of recent advances in the application of GNNs to the IoT field, including a deep dive analysis of GNN design in various IoT sensing environments, an overarching list of public data and source codes from the collected publications, and future research directions. To keep track of newly published works, we collect representative papers and their open-source implementations and create a Github repository at GNN4IoT.
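The core operation that lets a GNN "capture interactions within sensor topology" is neighbourhood aggregation. A minimal sketch, assuming a toy four-sensor graph with hand-picked feature vectors (no learned weights, which a real GNN layer would add):

```python
# Toy sensor graph: 0 = EEG headset, 1 = EDA wristband, 2 = ECG patch, 3 = seat sensor.
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}                 # undirected neighbour lists
feats = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [2.0, 2.0], 3: [4.0, 0.0]}

def aggregate(adj, feats):
    """One round of mean-neighbour aggregation with a self-loop, the core GNN step.
    A trained GNN layer would follow this with a learned linear map and nonlinearity."""
    out = {}
    for node, nbrs in adj.items():
        msgs = [feats[n] for n in nbrs] + [feats[node]]      # messages plus self-loop
        out[node] = [sum(col) / len(msgs) for col in zip(*msgs)]
    return out

print(aggregate(adj, feats)[2])  # node 2 averages itself with its only neighbour, node 0
```

Stacking several such rounds lets information from distant sensors reach each node, which is how GNNs model the cross-sensor interactions the survey discusses.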
... For example, Peiris et al. showed that two professionals who analyze EEG to detect drowsiness do not necessarily make the same decision for the same participant [35]. Moreover, in some cases EDA is affected by stress [36] or emotion [37]. Therefore, these indicators alone cannot be considered adequate and exclusive indicators for the detection or estimation of sleepiness or fatigue. ...
Article
Drowsiness is among the important factors that cause traffic accidents; therefore, a monitoring system is necessary to detect the state of a driver's drowsiness. Driver monitoring systems usually detect three types of information: biometric information, vehicle behavior, and driver's graphic information. This review summarizes the research and development trends of drowsiness detection systems based on various methods. Drowsiness detection methods based on the three types of information are discussed. A prospect for arousal level detection and estimation technology for autonomous driving is also presented. In the case of autonomous driving levels 4 and 5, where the driver is not the primary driving agent, the technology will not be used to detect and estimate wakefulness for accident prevention; rather, it can be used to ensure that the driver has enough sleep to arrive comfortably at the destination.
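Of the three information types listed, the driver's graphic information is often summarized with a PERCLOS-style measure, the fraction of recent frames in which the eyes are closed. A minimal sketch, with an illustrative window size and threshold not taken from the review:

```python
from collections import deque

class PerclosMonitor:
    """Rolling fraction of eyes-closed frames; fires when it crosses a threshold.
    Window length and threshold are illustrative values only."""
    def __init__(self, window=10, threshold=0.4):
        self.frames = deque(maxlen=window)   # deque drops the oldest frame itself
        self.threshold = threshold

    def update(self, eyes_closed):
        self.frames.append(1 if eyes_closed else 0)
        perclos = sum(self.frames) / len(self.frames)
        return perclos >= self.threshold

mon = PerclosMonitor()
alerts = [mon.update(c) for c in [False] * 6 + [True] * 4]
print(alerts)  # only the final frame pushes PERCLOS up to the 0.4 threshold
```

A production system would fuse this visual score with the biometric and vehicle-behavior signals the review surveys rather than alerting on any one channel alone.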
... Malta et al. used electrodermal activity (EDA) in combination with facial expressions, driving events, and pedal behaviours to build a Bayesian network to predict the frustration of drivers [43]. To infer the comprehensive mental and physical states (concentration, tension, tiredness, relaxation) of drivers, the authors of [57] built a body sensor network to monitor signals such as the electrocardiogram (ECG), electroencephalography (EEG), electrodermal activity (EDA), and respiration rate. Kato et al. classified emotions as positive and negative based on ECG and pulse wave measurements during traffic jams [30]. ...
Article
An empathetic car that is capable of reading the driver's emotions has been envisioned by many car manufacturers. Emotion inference enables in-vehicle applications to improve driver comfort, well-being, and safety. Available emotion inference approaches use physiological, facial, and speech-related data to infer emotions during driving trips. However, existing solutions have two major limitations: relying on sensors that are not built into the vehicle restricts emotion inference to those people using the corresponding devices (e.g., smartwatches), and relying on modalities such as facial expressions and speech raises privacy concerns. By contrast, researchers in mobile health have been able to infer affective states (e.g., emotions) based on behavioural and contextual patterns decoded in available sensor streams, e.g., obtained by smartphones. We transfer this rationale to an in-vehicle setting by analyzing the feasibility of inferring driver emotions by passively interpreting the data streams of the control area network (CAN-bus) and the traffic context (inferred from the front-view camera). Therefore, our approach does not rely on particularly privacy-sensitive data streams such as driver facial video or driver speech but is built on existing CAN-bus data and traffic information, which is available in current high-end or future vehicles. To assess our approach, we conducted a four-month field study on public roads covering a variety of uncontrolled daily driving activities. Hence, our results were generated beyond the confines of a laboratory environment. Ultimately, our proposed approach can accurately recognise drivers' emotions and achieves performance comparable to the medical-grade physiological-sensor-based state-of-the-art baseline method.
Article
Characterising the human driver is of growing interest for many kinds of applications, and control-theoretic driver models offer a promising approach to this characterisation. The driver state is monitored by applying driver-model features, from survey data through to a real-road distraction experiment. The dataset consists of driving behavior with a visuomotor task and a few secondary tasks, such as auditory tasks and a driving reference. Individual model parameters are estimated from the driving data of eleven drivers using prediction-error identification. Hand gestures and head movements are subtle indicators of driver distraction, covering states such as short- or long-term eye closure. This paper presents a distraction-detection system based on an attention strategy: by matching scaled features after transformation of the driver's frontal face, the driver can be recognised. The severity of accident zones is determined for particular areas based on the dataset, and driver behavior at these accident-hotspot locations is analysed to gain better accuracy. The results help validate the robustness and effectiveness of the model.
Article
With rapid advances in the field of autonomous vehicles (AVs), the ways in which human-vehicle interaction (HVI) will take place inside the vehicle have attracted major interest and, as a result, intelligent interiors are being explored to improve the user experience, acceptance, and trust. This is also fueled by parallel research in areas such as perception and control of robots, safe human-robot interaction, wearable systems, and the underpinning flexible/printed electronics technologies, some of which are being routed to AVs. A growing number of networked sensors are being integrated into vehicles for multimodal interaction, to draw correct inferences about the user's communicative cues and to vary the interaction dynamics depending on the user's cognitive state and the contextual driving scenario. In response to this growing trend, this timely article presents a comprehensive review of the technologies that are being used or developed to perceive users' intentions for natural and intuitive in-vehicle interaction. The challenges that need to be overcome to attain truly interactive AVs, and their potential solutions, are discussed along with various new avenues for future research.
Article
Every year, traffic accidents due to human error cause increasing numbers of deaths and injuries globally. To help reduce the number of fatalities, this paper presents a new module for an Advanced Driver Assistance System (ADAS) that performs automatic driver-drowsiness detection based on visual information and artificial intelligence. The aim of this system is to locate, track, and analyze both the driver's face and eyes to compute a drowsiness index, and this real-time system works under varying light conditions (diurnal and nocturnal driving). Examples of different images of drivers taken in a real vehicle are shown to validate the algorithms used.
Conference Paper
This work studies the feasibility of using visual information to automatically measure the engagement level of TV viewers. Previous studies usually utilize expensive and invasive devices (e.g., eye trackers or physiological sensors) in controlled settings. Our work differs by only using an RGB video camera in a naturalistic setting, where viewers move freely and respond naturally and spontaneously. In particular, we recorded 47 people while watching a TV program and manually coded the engagement levels of each viewer. From each video, we extracted several features characterizing facial and head gestures, and used several aggregation methods over a short time window to capture the temporal dynamics of engagement. We report on classification results using the proposed features, and show improved performance over baseline methods that mostly rely on head-pose orientation.
Article
This work proposes a system for the automatic annotation and monitoring of cell-phone activity and the stress responses of users. While mobile phone applications (e.g., e-mail, voice, calendar) are used to non-intrusively extract the context of social interactions, a non-intrusive and comfortable biosensor is used to measure electrodermal activity (EDA). Custom stress-recognition software then analyses the data streams in real time and associates stress levels with each event. Both contextual data and stress levels are aggregated in a searchable journal where users can reflect on their physiological responses.
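The event-tagging idea can be sketched simply: scan the EDA stream after each logged event and flag a sharp rise. This is an invented toy heuristic, not the FEEL system's actual stress-recognition algorithm; the threshold and sample values are illustrative.

```python
def tag_stress_events(eda, events, rise_threshold=0.5):
    """Flag events whose next few EDA samples rise sharply (invented heuristic)."""
    tagged = []
    for name, idx in events:
        window = eda[idx:idx + 3]                    # short post-event window
        rise = max(window) - eda[idx] if window else 0.0
        tagged.append((name, rise >= rise_threshold))
    return tagged

# Illustrative EDA trace (microsiemens) and two logged phone events.
eda = [1.0, 1.1, 1.0, 1.2, 2.1, 2.3, 1.9, 1.0, 1.0]
events = [("calendar alert", 1), ("phone call", 3)]
print(tag_stress_events(eda, events))
```

In this trace only the "phone call" event is followed by a steep EDA rise, so only it would land in the journal as a stress response.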
Article
An automotive anti-drunk-driving system with real-time monitoring is introduced. The system guarantees the uniqueness of the driver by combining alcohol detection with face identification, putting forward a design that pairs primary alcohol detection with auxiliary image-processing surveillance. It can eradicate fraudulent conduct such as drunk driving and driver switching. It solves the problem that current automotive alcohol-detection systems cannot ensure the uniqueness of the driver, and it further improves the safety of the car.
Article
A real-time, wearable system for remote monitoring of car racing drivers' emotional state is presented. The so-called AUBADE system (standing for AUgmentation system for roBust emotionAl understanding) consists of a wearable device and a centralized unit. The wearable device acquires selected biosignals, pre-processes them, and wirelessly transmits them from the subject site to the centralized system. The centralized system, which is the main part of the system and carries out most of the processing, has a twofold purpose: on the one hand it evaluates the subject's emotional state, and on the other it projects a generic 3D face model on which the subject's facial expression can be viewed. A two-stage classification scheme is used. First, a decision tree classifies the subject's emotional state as high stress, low stress, or valence. A Tree-Augmented Naive Bayesian (TAN) classifier then classifies valence as euphoria or dysphoria. The centralized system has been validated using a dataset obtained from ten subjects in simulated racing conditions. The emotional classes identified are high stress, low stress, dysphoria, and euphoria. The overall classification rate achieved using tenfold cross-validation is high. AUBADE constitutes the first system offering remote, real-time affective assessment in car racing, providing a useful addition to the existing telemetry systems used in the domain. Research highlights: (1) real-time, wearable system for remote monitoring of car racing drivers' emotional state; (2) identified emotional classes: high stress, low stress, dysphoria, and euphoria; (3) a useful tool for car racing teams, enabling correlation between the driver's emotional state and specific adjustments to the car's performance.
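The two-stage scheme can be sketched as a pair of chained classifiers. The rule thresholds below are invented placeholders standing in for AUBADE's trained decision tree and TAN classifier:

```python
# Stage 1 mimics the decision tree: high stress / low stress / "valence" (neither).
# Stage 2 mimics the TAN classifier, splitting valence into euphoria vs dysphoria.
# Heart-rate and EDA-slope thresholds are invented for illustration.
def stage_one(f):
    if f["hr"] > 100:
        return "high stress"
    if f["hr"] > 80:
        return "low stress"
    return "valence"

def stage_two(f):
    return "euphoria" if f["eda_slope"] > 0 else "dysphoria"

def classify(f):
    first = stage_one(f)
    return stage_two(f) if first == "valence" else first

print(classify({"hr": 70, "eda_slope": 0.3}))   # relaxed driver, rising EDA
```

Chaining keeps the second classifier's job small: it only ever sees samples the first stage could not assign to a stress class, which mirrors the cascade design described in the abstract.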
Conference Paper
Implementing safety measures to prevent drunk and drowsy driving is a major technical challenge for the car industry. We have developed a system involving a non-contact breath sensor to address it. The breath sensor detects breath by measuring the electric currents of positively or negatively charged water clusters in the breath, separated using an electric field. Our device couples a breath sensor with an alcohol sensor and simultaneously detects the electrical signals of both breath and alcohol in the breath. This ensures that the sample comes from a person's breath, not an artificial source. Furthermore, our breath sensor can detect breath from about 50 cm away and can also test the alertness level of a subject sitting in the driver's seat. This is done by measuring the point in time at which breathing changes from conscious (such as pursed-lip breathing) to unconscious as the driver becomes drowsy. This is the first time one device has been used to detect both drunk and drowsy driving.
Article
A real-time wireless electroencephalogram (EEG)-based brain-computer interface (BCI) system for drowsiness detection is proposed. Drowsy driving has been implicated as a causal factor in many accidents, so real-time drowsiness monitoring can effectively prevent traffic accidents. However, current BCI systems are usually large and have to transmit the EEG signal to a back-end personal computer for processing. In this study, a novel BCI system was developed to monitor the human cognitive state and provide biofeedback to the driver when a drowsy state occurs. The proposed system consists of a wireless physiological-signal-acquisition module and an embedded signal-processing module, designed for long-term EEG monitoring and real-time drowsiness detection, respectively. The advantages of low power consumption and small volume make the proposed system suitable for automotive applications. Moreover, a real-time drowsiness-detection algorithm was developed and implemented in this system. The experimental results demonstrated the feasibility of the proposed BCI system in a practical driving application.
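A common ingredient of EEG drowsiness-detection algorithms (the abstract does not disclose this system's exact method) is the ratio of theta-band to alpha-band power, since drowsiness shifts EEG energy toward lower frequencies. A self-contained sketch using a naive DFT on synthetic one-second windows:

```python
import math

def band_power(signal, fs, lo, hi):
    """Naive DFT power summed over frequency bins in [lo, hi] Hz (fine for short windows)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if lo <= f <= hi:
            re = sum(s * math.cos(2 * math.pi * k * t / n) for t, s in enumerate(signal))
            im = sum(s * math.sin(2 * math.pi * k * t / n) for t, s in enumerate(signal))
            power += re * re + im * im
    return power

fs = 64
t = [i / fs for i in range(fs)]                              # one-second window
awake = [math.sin(2 * math.pi * 10 * x) for x in t]          # alpha-dominant (10 Hz)
drowsy = [math.sin(2 * math.pi * 5 * x) for x in t]          # theta-dominant (5 Hz)

def theta_alpha_ratio(sig):
    return band_power(sig, fs, 4, 7) / (band_power(sig, fs, 8, 13) + 1e-9)

print(theta_alpha_ratio(drowsy) > theta_alpha_ratio(awake))  # drowsiness raises the ratio
```

An embedded implementation like the one described would replace the naive DFT with an FFT and threshold the ratio over sliding windows before triggering biofeedback.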
Conference Paper
In this paper, the impact of knowledge of the non-local traffic state, obtained through vehicular ad hoc network (VANET) communication, on traffic stability is studied through microscopic traffic simulation. In particular, the effect of this communication through the modification of driver behavior is evaluated. Intra-driver behavior, such as non-negligible reaction time, anticipation, limited attention span, and perception error, affects inter-driver behavior such as lane changing and keeping a safe gap. Intuitively, this behavior is further conditioned by external factors such as frustration due to falling behind predefined schedules, congestion, the inability to overtake or change lanes, and the behavior of other drivers. Warning and safety messages from other vehicles increase the available maneuvering time and mitigate the effects of intra-driver behavior. The look-ahead capability allows drivers to plan their journey and eschew risks such as shortened safety gaps and frequent lane changes. Simulation results for various traffic scenarios demonstrate the potential of VANET communications to improve safety and traffic stability through the modification of driver behavior.
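The claimed benefit of warning messages, more available maneuvering time, can be quantified with the standard stopping-distance formula d = v * t_react + v^2 / (2a): an earlier warning effectively shortens the reaction-time term. The speed, reaction times, and deceleration below are illustrative values, not the paper's simulation parameters.

```python
def stopping_distance(v, reaction_time, decel=6.0):
    """Reaction distance plus braking distance: v * t_react + v**2 / (2 * decel)."""
    return v * reaction_time + v * v / (2 * decel)

v = 25.0                                             # ~90 km/h, illustrative
unwarned = stopping_distance(v, reaction_time=1.2)   # driver reacts to brake lights
warned = stopping_distance(v, reaction_time=0.4)     # VANET warning arrives earlier
print(round(unwarned - warned, 1))                   # 25 * (1.2 - 0.4) = 20.0 m saved
```

Twenty metres of extra margin at motorway speed is exactly the kind of safety-gap relief the simulation study attributes to VANET messaging.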