Modeling dangerous driving events based on in-vehicle data using Random Forest and Recurrent Neural Network
Daniel Alvarez-Coello, Benjamin Klotz, Daniel Wilms, Sofien Fejji, Jorge Marx Gómez, and Raphaël Troncy
Abstract— Modern vehicles produce big data with a wide
variety of formats due to missing open standards. Thus,
abstractions of such data in the form of descriptive labels
are desired to facilitate the development of applications in
the automotive domain. We propose an approach to reduce
vehicle sensor data into semantic outcomes of dangerous driving
events based on aggressive maneuvers. The supervised time-
series classification is implemented with Random Forest and
Recurrent Neural Network separately. Our approach works
with signals of a real vehicle obtained through a back-end
solution, with the challenge of low and variable sampling
rates. We introduce the idea of having a dangerous driving
classifier as the first discriminant of relevant instances for
further enrichment (e.g., type of maneuver). Additionally, we
suggest a method to increment the number of driving samples
for training machine learning models by weighting the window
instances based on the portion of the labeled event they include.
We show that a dangerous driving classifier can be used as a
first discriminant to enable data integration and that transitions
in driving events are relevant to consider when the dataset is
limited and the sensor data has a low and unreliable frequency.
I. INTRODUCTION
Smart devices rely on high-quality data sources which can
be located and accessed remotely. Thanks to the increasing
number of connected devices, new applications can combine
multiple domains. The automotive industry also follows this
trend with its connected vehicles [1]. Nevertheless, there is
still the need for international open standards and protocols
to enable uniform data interaction [2]. Initiatives, such as
the Data-Centric Manifesto¹, show the interest in data-driven
solutions regardless of the domain. Apart from the fact that
it would be too costly to share all raw vehicle sensor data
to the cloud, an application developer would have to adapt
their application across the wide variety of vehicle models and brands. Also, vehicles produce big time-series data [3], which
increases the complexity of data processing pipelines handled
differently by each application.
One could extract and communicate the meaning of ve-
hicle data instead of sensor values to eliminate the need for
in-depth domain knowledge for making new applications.
This abstraction process corresponds to the transformation
of data into information and is part of a Data, Information,
Knowledge, Wisdom (DIKW) hierarchy which describes the
building blocks for reasoning [4]. Our goal is to simplify
BMW Research, New Technologies, Innovation. Garching, Germany.
Department of Computer Science. University of Oldenburg. Oldenburg,
Germany. daniel.alvarez@uni-oldenburg.de
Department of Data Science. EURECOM. Sophia Antipolis, France.
¹ http://datacentricmanifesto.org
vehicle data into information that describes how the driver
and vehicle behave. Thus, we focus on modeling dangerous
driving events using machine learning to classify past time
windows. In this work, we assume that just dangerous driving
events are relevant to consider for further enrichment.
This paper is organized as follows: we discuss current ap-
proaches and models related to the generation of information
on driver behavior in section II. Our approach is discussed
in section III, followed by the implementation details in section IV.
The evaluation results of the classifiers and our experiments
to test them are presented in section V. We conclude with
the principal findings and possible future directions in section
VI.
II. RELATED WORK
There are different approaches to Driver Behavior Mod-
eling (DBM). Some authors study driving patterns based
on the driver’s comfort [5], or physiological signals [6];
whereas the majority focus on cameras, in-vehicle signals,
and smartphones [7]. We consider only existing vehicle sig-
nals that could be adapted in the future to possible standards
in development such as VSS² or ontological models like VSSo³ or the driving context ontology [9].
Most of the related work focuses on individual actions
of driving. The combination of such actions defines relevant
behavioral domains such as drowsiness [10]–[12], distrac-
tion [13], [14], and aggressiveness [15]–[20] (see figure 1).
Fig. 1. Main domains and actions in Driver Behavior Modeling
A. Using Cameras
In recent years, computer vision applications using deep
neural networks have shown important advances in image
² Vehicle Signal Specification: https://w3.org/auto/wg/wiki/Vehicle_Signal_Specification_(VSS)/Vehicle_Data_Spec
³ Vehicle Signal Ontology [8]
and video processing. In the driving context, applications
can recognize different driver actions (e.g., the driver's gaze and head position [13], [14], eye blinking [21], yawning [10], emotions [22], etc.) and surrounding elements (e.g., traffic signs, pedestrians, road lanes, other vehicles, etc. [23], [24]).
Although they are not under the scope of our study, such
information could be used for further data enrichment.
B. Using Vehicle Signals
The access to vehicle signals is usually restricted and
requires specific setups and dedicated hardware such as
an On-Board Diagnostics (OBD) device. Depending on the
signals of interest, an alternative is to use a vehicle simulator.
Some authors use simulated data to detect aggressive driving
events. [15] applies SVMs and K-means clustering. Simi-
larly, [16] includes more signals and uses a semi-supervised
learning approach.
There are also efforts to characterize the driver’s profile.
[25] uses signals to identify the driving style after recogniz-
ing the type of maneuver. For this purpose, logical conditions
are applied to the sensor values to determine the status of the
vehicle. However, their dataset is not available. Martinez et
al. [26] present an approach for identifying the driver, too.
While it provides an excellent foundation to learn how to
differentiate the drivers, it does not specify the behavioral
patterns. Burton et al. use the Euclidean distance traveled
and the average speed of the vehicle to discriminate driving
styles. Driver profiling does not have the granularity we look
for, because it focuses on the behavior over time and not on
single events.
C. Using External Sensors
Since accessing vehicle data can be challenging, some
researchers opt for low-cost external devices as an alternative
(e.g., smartphones, Raspberry Pi, etc.). Such devices are
equipped with sensors such as accelerometers, gyroscopes, magnetometers, and cameras.
[11] explains the implementation of a mobile application that detects drowsiness and aggressiveness. It rates the driver's behavior and provides live feedback about the driving patterns. Drowsiness is detected from lane drifting and weaving events using computer vision, where the tracking of the road's lane marks determines how centered the vehicle is. On the other hand, aggressiveness is inferred purely from the accelerometers. The number of critical events defines the level of distraction that influences the driver. The system works only for speeds higher than 50 km/h.
[17], [18] use a smartphone as the sensing and processing
device to classify aggressive events. They use an end-point
detection algorithm, as well as Dynamic Time Warping
which is computationally expensive. A drawback is that the
entire event needs to happen before the system can process
it. On the other hand, [27] classifies the trajectory as either smooth or aggressive with the aid of a highly accurate GPS data logger. It uses a mathematical model, and its solution
is not suitable for a real-time application.
Another approach [28] aims to detect dangerous driving
based on four actions: abnormal speeding, steering, weaving,
and using the phone while driving. Nevertheless, no data is
collected, and the decisions rely on experimentally prede-
fined thresholds. Similarly, [29] uses also thresholds together
with an end-point algorithm to detect driving events, obtain
statistical features, and classify them using a neural network.
One way to achieve the granularity we want is by
classifying the maneuvers that the driver performs. The
work by Junior et al. [19] explores this topic as well as
the application of machine learning techniques to classify
driving events based on aggressiveness. Their dataset is
publicly available which facilitates the analysis. They use
a smartphone to collect 3-axis signals from accelerometer,
gyroscope, and magnetometer. This approach was used as
the primary reference for our study. With the dataset of [19],
Carvalho et al. [20] investigate the use of Recurrent Neural
Network (RNN) to classify maneuvers.
III. APPROACH AND CLASSIFICATION ASPECTS
We considered the work from Junior et al. [19] as the basis for our approach. After replicating their grid-searches using their public Driving Behavior Dataset⁴, we corroborate their conclusion that Random Forest (RF) outperforms Support Vector Machine, Neural Network, and Multi-Layer Perceptron in the multi-class maneuver classification task. Therefore, we select RF as the first technique to test. Since we are dealing with sequences, RNNs are also considered [20].
A. Base Classifier
In contrast to [19], we propose to split the multi-class
classification problem into two parts as shown in figure 2.
A binary classifier of dangerous driving will tell us what
time-window instances are relevant for further processing.
Then other criteria (e.g., type of maneuver) can be used to
enrich the outcomes of the base classifier. In this way, results
of different applications could also be added (e.g., driver’s
emotion of the last seconds, drowsiness percentage, gaze,
etc.).
Fig. 2. A base classifier detects a dangerous situation by using relevant
vehicle signal data. More specific classifiers can enrich the outcome
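As a minimal illustration of this two-stage design, the snippet below gates the maneuver classifier behind the binary dangerous-driving classifier; the classifier objects and the returned labels are hypothetical placeholders rather than the exact models trained in this work.

```python
# Hedged sketch of the two-stage design of figure 2: the base classifier
# filters time-window instances, and only dangerous ones are enriched
# further (e.g., with the maneuver type). Both classifiers are assumed
# to expose a scikit-learn-like predict() interface.
def classify_window(window_features, base_clf, maneuver_clf):
    """Return None for normal driving, otherwise an enriched outcome."""
    dangerous = base_clf.predict([window_features])[0]      # binary decision
    if not dangerous:
        return None                                         # not relevant, skip enrichment
    maneuver = maneuver_clf.predict([window_features])[0]   # e.g., "aggressive turn"
    return {"dangerous": True, "maneuver": maneuver}
```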
B. Feature Selection and Extraction
Based on [26], [30], we selected the subset of 12 signals shown in table I. We have two types of variables: continuous and categorical. For continuous signals, we extract statistical features (i.e., mean, median, standard deviation, and trend [19]). For categorical signals, we take only the median value. All 12 signals were used in RF. For RNN, we did not use 3 of the signals because they were ranked as less important in the analysis with Random Forest: displayed speed, gear, and brake DSC state. A sketch of this feature extraction is given after table I.

⁴ https://github.com/jair-jr/driverBehaviorDataset
TABLE I
Selected signals (those marked with "*" were not considered for RNN)

Continuous signals            Categorical signals
Lateral acceleration          Acceleration efficiency
Longitudinal acceleration     Gear *
Accelerator pedal position    Brake pressed
Actual speed                  Brake Dynamic Stability Control (DSC) state *
Speed displayed *
Engine consumption
Engine RPM speed
Engine torque
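The window feature extraction described above can be sketched as follows. The column names are illustrative, and "trend" is interpreted here as the slope of a first-order fit over the window, which is one plausible reading of [19] rather than a confirmed definition.

```python
import numpy as np
import pandas as pd

CONTINUOUS = ["lat_acc", "lon_acc", "accel_pedal", "speed"]   # illustrative names
CATEGORICAL = ["gear", "brake_pressed"]

def extract_features(window: pd.DataFrame) -> dict:
    """Flatten one time window into a feature vector for Random Forest."""
    feats = {}
    for col in CONTINUOUS:
        values = window[col].to_numpy(dtype=float)
        feats[f"{col}_mean"] = values.mean()
        feats[f"{col}_median"] = float(np.median(values))
        feats[f"{col}_std"] = values.std()
        # "trend" taken as the slope of a linear fit over the window (assumption)
        feats[f"{col}_trend"] = np.polyfit(np.arange(len(values)), values, 1)[0]
    for col in CATEGORICAL:
        feats[f"{col}_median"] = window[col].median()          # only the median is kept
    return feats
```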
C. Instance Relevance
One driving event E, in our case a maneuver, is composed of a sequence of measurements with a duration of e_size. To classify the events, the size of the time window w_size should generalize over all the maneuvers of interest. The maneuvers we want to classify last a few seconds; thus, we tried out window sizes between 1 and 10 seconds. Nevertheless, the low and irregular sampling frequency we get at the back-end added complexity to the classification of short driving events, especially because time windows sometimes do not contain enough values to constitute a sample suitable for training. To overcome this limitation, we propose to consider the transitions between driving events as valid instances for training.
When W hops over time, it will not always cover the whole driving event (i.e., W partially overlaps E). A window instance refers to W at a specific time step. We only care about instances in which the window overlaps the occurrence of E. With that said, the total number of instances of one driving event, instances_total, and the instance index i are given by:

instances_total = w_size + e_size − 1    (1)

i ∈ {1, 2, 3, ..., (w_size + e_size − 1)}    (2)
Since one maneuver has several instances, such instances have different relevance. To determine the importance of the instances, we introduce a method to calculate their relevance as a number between 0 (not important) and 1 (most relevant). It considers the following aspects:
- When w_size = e_size, there exists just one instance with a relevance of 1.
- When w_size > e_size, relevance is 1 for all instances in which all the frames of E are covered by W.
- When w_size < e_size, relevance is 1 for instances in which W is inside E.
relevance = (n / w_size) · (n / e_size) · k = (n² / (w_size · e_size)) · k    (3)

n = { i + 1,  if (i < w_size) and (i < e_size)
      i − 1,  if (i > w_size) and (i > e_size)
      1,      otherwise }    (4)

k = { w_size / e_size,  if (w_size > e_size)
      e_size / w_size,  if (w_size < e_size)
      1,                if (w_size = e_size) }    (5)
D. Random Forest Parameters
In addition to [19], we add the window size and the instance relevance as our custom parameters. We do a grid-search for RF to find the best parameters from table II; a sketch of this search is given after the table.
TABLE II
Parameters tested in the grid-search for RF

Custom parameters
  Window size [frames]          {2, 3, 4, ..., 10}
  Minimum instance relevance    {0.1, 0.2, ..., 1.0}
Random Forest
  Number of estimators          {10, 11, 12, ..., 25}
  Maximum features              {10, 15, "log2"}
  Maximum depth                 {5, 10, 15}
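A condensed sketch of this grid-search with scikit-learn is shown below. Because the window size and the minimum instance relevance affect how the dataset itself is built, they are iterated in an outer loop rather than passed to GridSearchCV; build_window_dataset is a hypothetical helper that windows the signals, extracts the features of table I, and drops instances below the relevance threshold.

```python
from itertools import product
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rf_grid = {
    "n_estimators": list(range(10, 26)),
    "max_features": [10, 15, "log2"],
    "max_depth": [5, 10, 15],
}

best = None
for w_size, min_rel in product(range(2, 11), [r / 10 for r in range(1, 11)]):
    X, y = build_window_dataset(w_size, min_rel)   # placeholder data-preparation step
    search = GridSearchCV(RandomForestClassifier(random_state=0), rf_grid,
                          scoring="roc_auc", cv=5)
    search.fit(X, y)
    if best is None or search.best_score_ > best[0]:
        best = (search.best_score_, w_size, min_rel, search.best_params_)
```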
E. Recurrent Neural Network Parameters
For RNN, we trained different combinations of parameters based on [20] (see table III; a sketch of one configuration follows the table). We used a window size of 10 frames and an instance relevance of 0.7. The number of epochs was 500 with an early-stopping patience of 50 epochs. The optimizer was "RMSprop" with a learning rate of 0.001.
TABLE III
Parameters tested in the RNN implementation

Recurrent Neural Network
  Number of hidden layers                          {1, 2}
  Number of recurrent units in the hidden layer    {10, 15, 16, 32, 64, 128}
  Recurrent unit type                              {LSTM, GRU}
  Dropout                                          {0.1, 0.2}
  Recurrent dropout                                {0.1, 0.2}
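One configuration of table III (a single LSTM hidden layer) can be assembled with Keras roughly as follows; this is a hedged sketch under the training settings stated above (RMSprop, learning rate 0.001, 500 epochs, early-stopping patience of 50), not the exact script used for the reported results.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, callbacks

def build_rnn(units=64, cell="LSTM", dropout=0.1, recurrent_dropout=0.1,
              n_classes=2, window=10, n_signals=9):
    """Sequence classifier over windows of 10 time steps and 9 signals."""
    Recurrent = layers.LSTM if cell == "LSTM" else layers.GRU
    model = models.Sequential([
        layers.Input(shape=(window, n_signals)),
        Recurrent(units, dropout=dropout, recurrent_dropout=recurrent_dropout),
        layers.Dense(n_classes, activation="softmax"),   # 2 (driving mode) or 5 (maneuvers)
    ])
    loss = "binary_crossentropy" if n_classes == 2 else "categorical_crossentropy"
    model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.001),
                  loss=loss, metrics=["AUC"])
    return model

early_stop = callbacks.EarlyStopping(patience=50, restore_best_weights=True)
# model.fit(X_train, y_train, epochs=500, validation_data=(X_val, y_val),
#           callbacks=[early_stop])
```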
IV. IMPLEMENTATION
A. Dataset
We collected and labeled vehicle data of two licensed
drivers. The maneuvers were the same as in [19] (i.e.,
aggressive and normal turns, lane changes, accelerations, and
braking). The signals come from the CAN bus, which is accessed via a dedicated back-end architecture developed within BMW Research. The collected data is used only in research for tests like the one conducted in this study.
Once the data was collected, we downsampled the series to half-second periods by assigning the aggregated values to the starting point of the current time window. This frequency was determined based on the lowest rate among the selected signals.
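A minimal sketch of this downsampling with pandas, assuming the raw signals are indexed by timestamp; the paper only states that aggregated values are assigned to the starting point of each half-second window, so the choice of mean as the aggregate is an assumption.

```python
import pandas as pd

def downsample_half_second(signals: pd.DataFrame) -> pd.DataFrame:
    """Resample a timestamp-indexed signal frame to half-second periods,
    labeling each aggregate with the window's starting point."""
    return signals.resample("500ms", label="left", closed="left").mean()
```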
B. Considerations for Training
- We use the Area Under the ROC Curve (AUC) [31] as the evaluation metric for the trained models, since it is a trade-off between the False Positive Rate and the True Positive Rate that considers all possible classification thresholds. AUC is a better metric for classification problems with an imbalanced number of samples.
- We joined left and right lane changes to deal with the low-sampling-frequency issues, because those maneuvers are the shortest in duration. The lateral acceleration was inverted to double the number of samples in this class.
- We used binary cross-entropy as the loss function for the driving-mode classification and categorical cross-entropy for the maneuver classification.
- To use RNNs, we first normalized our data according to the minimum and maximum possible values of the signals. The input layer is fed with sequences of 10 measurements from 9 signals. The lane-change merging and this normalization are sketched after the list.
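The lane-change merging and the min-max normalization can be sketched as follows; the column name lat_acc and the per-signal bounds are assumptions.

```python
import numpy as np
import pandas as pd

def mirror_lane_changes(windows):
    """Double the lane-change samples by flipping the sign of the lateral
    acceleration, so that left and right lane changes form a single class."""
    mirrored = []
    for w in windows:
        flipped = w.copy()
        flipped["lat_acc"] = -flipped["lat_acc"]   # assumed signal name
        mirrored.append(flipped)
    return list(windows) + mirrored

def min_max_normalize(x: np.ndarray, lo: np.ndarray, hi: np.ndarray) -> np.ndarray:
    """Scale each signal to [0, 1] using its minimum/maximum possible value."""
    return (x - lo) / (hi - lo)
```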
V. RESULTS
In this section, we present the results of the best classi-
fiers found for our specific use-case and the corresponding
experiments that were conducted.
A. Classifiers Evaluation
The grid-search on RF showed us that models which use
instance relevance were ranked among the best combinations.
The parameters corresponding to the best RF found are
presented in table IV.
TABLE IV
Parameters of the best RF found

Parameter \ Classifier        Base    Maneuver
Window size [frames]          10      10
Minimum instance relevance    0.9     0.8
Number of estimators          15      24
Maximum features              5       "log2"
Maximum depth                 10      15
Regarding RNN, one hidden layer with 64 recurrent units
showed a better score for both classification tasks. The output
layer contains 2 and 5 units for driving mode and maneuvers
respectively. LSTM cells did slightly better than GRU for
most of the tested combinations.
For the base classifier, both RF and RNN classified the instances of the test set correctly. For the maneuver classification, the corresponding confusion matrices and ROC curves show a few misclassifications of turns and lane changes (see figure 3). Nevertheless, treating lane changes to both sides as a single class (with samples doubled by inverting the sign of the lateral acceleration) improved the performance significantly compared to our first attempt, in which lane changes to the right and left were classified separately.
Fig. 3. Evaluation of the dangerous maneuver classifier using RNN: (a) normalized confusion matrix; (b) multi-class ROC curve
B. Test Experiments
We tested the best found models with 10 unseen trajec-
tories. For this purpose, we used two different routes of a
track where each trajectory corresponded to one lap. The
test drivers were given specific instructions (see table V) on
how to drive before each lap. The instruction of 3 driving
styles refers to 3 laps on a given route, where each lap had
a different style (i.e., normal, moderate, aggressive).
TABLE V
Test trajectories and their corresponding instruction

Route    Driver    Instruction
1        A         3 driving styles
1        B         3 driving styles
1        B         2 laps of free driving
2        A         3 driving styles

For every time step, the base classifier predicts the class of the previous 10 frames. The overall danger score is calculated by dividing the total number of positive outcomes
by the total number of time steps. If we want to have more
granularity, we can calculate a moving score by considering
only a given amount of previous time steps. As we see
in figure 4, the overall danger score of the RNN base
classifier reflects the instructions given to the driver for the
3 driving styles: normal (lower-level), moderate (mid-level),
and aggressive (upper-level). Likewise, the moving score gives more insight into how the behavior evolves over time.
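Both scores can be computed from the per-step binary predictions as sketched below; the horizon of 50 frames matches the moving score shown in figure 4(b).

```python
import numpy as np

def overall_danger_score(predictions: np.ndarray) -> float:
    """Fraction of time steps classified as dangerous over the whole trajectory."""
    return float(predictions.sum()) / len(predictions)

def moving_danger_score(predictions: np.ndarray, horizon: int = 50) -> np.ndarray:
    """Danger score over the previous `horizon` frames at every time step."""
    scores = np.empty(len(predictions), dtype=float)
    for t in range(len(predictions)):
        recent = predictions[max(0, t - horizon + 1): t + 1]
        scores[t] = recent.mean()
    return scores
```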
Fig. 4. Danger score of Driver A on route 1 (3 driving styles) using RNN: (a) overall danger score; (b) score of the past 50 frames
Additionally, we collected 2 laps of free driving and
compared the model’s outcome against the perception of two
co-pilots who were inside the vehicle. The co-pilots wrote
down their perception of danger on a scale from 0 (no danger)
to 4 (most dangerous). The average danger score perceived
by the co-pilots was roughly 67%, which is not far from the
approximately 75% predicted by the models (see figure 5).
Since we know the sequences of the maneuvers performed
on the track, one can map the outcomes of an unseen
trajectory (i.e., data that is new to the trained model) to check
how consistent they are. Figure 6 shows how the classified
maneuvers of an aggressive lap match the sequences of the
track.
Fig. 5. Danger score of Driver B on route 1 (2 laps of free driving)
VI. CONCLUSION
A binary classifier of dangerous driving events based on
in-vehicle signals can simplify vehicle data and enable the
integration of other domains. Compared to state-of-the-art
methods, the proposed approach can provide similar results
with lower and variable sampling rates. The instance relevance lets us use samples in which a hopping window only partially overlaps the driving event.
The collection of dangerous driving samples is time-
consuming and requires special considerations (i.e., dedi-
cated track, qualified drivers, correct labeling, etc.). A limi-
tation of our implementation is that the labels were selected
by one person, which translates into a model that represents
the labeler’s danger perception. Hence, the assignation of
labels should be extended to more criteria to reduce potential
bias. To overcome this situation we are working on an
infrastructure to involve normal test drivers outside dedicated
tracks. Mainly, we use the base classifier model to recognize
dangerous driving events in the back-end, let the driver rate
the detected situation and reinforce the model over time with
less overhead.
Sending classified driving events over the network is more
practical than transferring raw data. Therefore, our next
steps would be to predict incoming data streams directly
in the vehicle with a reinforced model and use graph data
to prioritize data relationships for the integration with other
domains. One approach to deal with interoperability issues
across platforms could be by mapping the detected driving
events to a standardized data model, such as VSS/VSSo.
REFERENCES
[1] Cisco, “Internet of things,” 2016, accessed: 2019-01-25. [Online].
Available: www.bit.ly/2vpGRxp
[2] Postscapes, "IoT standards and protocols," 2018, accessed: 2018-05-03. [Online]. Available: www.postscapes.com/internet-of-things-protocols/
[3] McKinsey, “Ready for inspection – the automotive aftermarket in
2030,” McKinsey Center for Future Mobility, June 2018. [Online].
Available: www.bit.ly/2MEhXlG
[4] J. Rowley, "The wisdom hierarchy: representations of the DIKW hierarchy," Journal of Information Science, vol. 33, no. 2, pp. 163–180, 2007.
[5] X. Chang, J. Rong, C. Zhou, and H. Li, “Relationship between
driver’s feeling and vehicle operating characteristics on urban road,”
in Intelligent Control and Automation (WCICA), 2016 12th World
Congress on. IEEE, 2016, pp. 3033–3037.
[6] N. Li, T. Misu, and A. Miranda, "Driver behavior event detection for manual annotation by clustering of the driver physiological signals," in Intelligent Transportation Systems (ITSC), 2016 IEEE 19th International Conference on. IEEE, 2016, pp. 2583–2588.
Fig. 6. Trajectory reconstruction of route 1 from the aggressive lap of Driver A using RNN
[7] N. AbuAli and H. Abou-zeid, “Driver behavior modeling: Devel-
opments and future directions,” International Journal of Vehicular
Technology, 2016.
[8] B. Klotz, R. Troncy, D. Wilms, and C. Bonnet, “VSSo - A
vehicle signal and attribute ontology,” in SSN 2018, 9th International
Semantic Sensor Networks Workshop, 9 October 2018, Monterey, CA,
USA, Monterey, UNITED STATES, 10 2018. [Online]. Available:
http://www.eurecom.fr/publication/5691
[9] ——, “A driving context ontology for making sense of cross-domain
driving data,” in BMW Summer school, Raitenhaslach, Germany, July-
August 2018.
[10] B. Reddy, Y.-H. Kim, S. Yun, C. Seo, and J. Jang, “Real-time driver
drowsiness detection for embedded system using model compression
of deep neural networks,” in Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition Workshops, 2017, pp. 121–
128.
[11] L. M. Bergasa, D. Almería, J. Almazán, J. J. Yebes, and R. Arroyo, "DriveSafe: An app for alerting inattentive drivers and scoring driving behaviors," in Intelligent Vehicles Symposium Proceedings, 2014 IEEE. IEEE, 2014, pp. 240–245.
[12] E. Romera, L. M. Bergasa, and R. Arroyo, “Need data for driver
behaviour analysis? presenting the public uah-driveset,” in Intelligent
Transportation Systems (ITSC), 2016 IEEE 19th International Confer-
ence on. IEEE, 2016, pp. 387–392.
[13] T. Liu, Y. Yang, G.-B. Huang, Y. K. Yeo, and Z. Lin, “Driver
distraction detection using semi-supervised machine learning,” IEEE
transactions on intelligent transportation systems, vol. 17, no. 4, pp.
1108–1120, 2016.
[14] I.-H. Choi, S. K. Hong, and Y.-G. Kim, “Real-time categorization of
driver’s gaze zone using the deep learning techniques,” in Big Data
and Smart Computing (BigComp), 2016 International Conference on.
IEEE, 2016, pp. 143–148.
[15] W. Wang and J. Xi, “A rapid pattern-recognition method for driving
styles using clustering-based support vector machines,” in American
Control Conference (ACC), 2016. IEEE, 2016, pp. 5270–5275.
[16] W. Wang, J. Xi, A. Chong, and L. Li, “Driving style classification
using a semisupervised support vector machine,” IEEE Transactions
on Human-Machine Systems, vol. 47, no. 5, pp. 650–660, 2017.
[17] D. A. Johnson and M. M. Trivedi, “Driving style recognition using
a smartphone as a sensor platform,” in Intelligent Transportation
Systems (ITSC), 2011 14th International IEEE Conference on. IEEE,
2011, pp. 1609–1615.
[18] H. Eren, S. Makinist, E. Akin, and A. Yilmaz, “Estimating driving
behavior by a smartphone,” in Intelligent Vehicles Symposium (IV),
2012 IEEE. IEEE, 2012, pp. 234–239.
[19] J. F. Júnior, E. Carvalho, B. V. Ferreira, C. de Souza, Y. Suhara, A. Pentland, and G. Pessin, "Driver behavior profiling: An investigation with different smartphone sensors and machine learning," PLoS ONE, vol. 12, no. 4, p. e0174959, 2017.
[20] E. Carvalho, B. V. Ferreira, J. Ferreira, C. de Souza, H. V. Carvalho,
Y. Suhara, A. S. Pentland, and G. Pessin, “Exploiting the use of
recurrent neural networks for driver behavior profiling,” in Neural
Networks (IJCNN), 2017 International Joint Conference on. IEEE,
2017, pp. 3016–3021.
[21] A. Rosebrock, "Drowsiness detection with OpenCV," 2017, accessed: 2018-01-15. [Online]. Available: www.pyimagesearch.com/2017/05/08/drowsiness-detection-opencv/
[22] O. Arriaga, M. Valdenegro-Toro, and P. Plöger, "Real-time convolutional neural networks for emotion and gender classification," arXiv preprint arXiv:1710.07557, 2017.
[23] O. Kumtepe, G. B. Akar, and E. Yuncu, “Driver aggressiveness
detection via multisensory data fusion,” EURASIP Journal on Image
and Video Processing, vol. 2016, no. 1, p. 5, 2016.
[24] S. M. Hegazy and M. N. Moustafa, "Classifying aggressive drivers for better traffic signal control."
[25] M. Van Ly, S. Martin, and M. M. Trivedi, “Driver classification and
driving style recognition using inertial sensors,” in Intelligent Vehicles
Symposium (IV), 2013 IEEE. IEEE, 2013, pp. 1040–1045.
[26] M. V. Martínez, I. Del Campo, J. Echanobe, and K. Basterretxea, "Driving behavior signals and machine learning: a personalized driver assistance system," in Intelligent Transportation Systems (ITSC), 2015 IEEE 18th International Conference on. IEEE, 2015, pp. 2933–2940.
[27] A. B. R. González, M. R. Wilby, J. J. V. Díaz, and C. S. Ávila, "Modeling and detecting aggressiveness from driving signals," IEEE Transactions on Intelligent Transportation Systems, vol. 15, no. 4, pp. 1419–1428, 2014.
[28] F. Li, H. Zhang, H. Che, and X. Qiu, “Dangerous driving behavior
detection using smartphone sensors,” in Intelligent Transportation
Systems (ITSC), 2016 IEEE 19th International Conference on. IEEE,
2016, pp. 1902–1907.
[29] P. Brombacher, J. Masino, M. Frey, and F. Gauterin, "Driving event detection and driving style classification using artificial neural networks," in Industrial Technology (ICIT), 2017 IEEE International Conference on. IEEE, 2017, pp. 997–1002.
[30] K. Zfnebi, N. Souissi, and K. Tikito, “Driver behavior quantitative
models: Identification and classification of variables,” in Networks,
Computers and Communications (ISNCC), 2017 International Sympo-
sium on. IEEE, 2017, pp. 1–6.
[31] A. P. Bradley, “The use of the area under the roc curve in the evaluation
of machine learning algorithms,” Pattern recognition, vol. 30, no. 7,
pp. 1145–1159, 1997.