SaveMeNow.AI: a Machine Learning based wearable device for fall
detection in a workplace
Emiliano Anceschi¹, Gianluca Bonifazi², Massimo Callisto de Donato¹, Enrico Corradini²,
Domenico Ursino², Luca Virgili²
¹ Filippetti S.p.A.
² Department of Information Engineering, Polytechnic University of Marche
Corresponding author
emiliano.anceschi@gruppofilippetti.it; g.bonifazi@univpm.it;
massimo.callistodedonato@gruppofilippetti.it; e.corradini@pm.univpm.it; d.ursino@univpm.it;
l.virgili@pm.univpm.it
Abstract
Slips, trips and falls are among the main causes of accidents in a workplace. For this reason, many
fall detection approaches have been proposed in the literature. One of the most important categories of
approaches is based on the usage of wearable devices. These devices have many advantages, but they also
pose some challenging open issues. In particular, they must not be bulky, must have low power consumption
and must be able to optimize the low computational power available. In this paper, we aim at facing these
challenges by proposing SaveMeNow.AI, a new wearable device for fall detection. SaveMeNow.AI is based
on the deployment of a Machine Learning approach for fall detection embedded in it. This approach exploits
data continuously measured by a six-axis IMU present inside the device.
Keywords: Fall Detection, Machine Learning, Wearable Device, Decision Tree, Internet of Things
1 Introduction
Slips, trips and falls are among the main causes of accidents in a workplace in all the countries of the world.
For this reason, many fall detection approaches have been proposed in the past literature. A possible taxonomy
for them can be based on the environment surrounding the user and the employed sensors. According to
this taxonomy, we can distinguish between ambient sensor based approaches, vision based approaches and
wearable based approaches [24].
Ambient sensor based approaches analyze the recordings of audio and video streams from the work
environment [33, 37] and/or track vibrational data derived from the usage of pressure sensors [2, 29]. They
are minimally intrusive for the end user; however, they have high costs and could generate many false alarms.
Vision based approaches [23, 25] exploit image processing techniques, which use cameras to record
workers and detect their falls. They are not intrusive and can achieve a great accuracy. However, they require
installing cameras in each room to be monitored and can return many false alarms.
Wearable based approaches make use of wearable devices [17, 22, 34] which workers are provided with,
and, in some cases, they are combined with Machine Learning algorithms to process data provided by these
devices [30, 26]. They are cost-effective and easy to install and set up. Moreover, they are strictly related to
people and can detect falls regardless of the environment where workers are operating. However, they can be
bulky and intrusive, and their energy and computation capabilities are limited. Finally, analogously
to what happens for the approaches belonging to the other two categories, they could generate many false
alarms [19] because realizing a model that accurately represents the daily activities of workers is difficult.
Nevertheless, we think that the advantages provided by this category of approaches are extremely relevant
and the current problems affecting them are, actually, challenging open issues that, if successfully faced,
can open many opportunities in preventing, or at least quickly facing, accidents in a workplace. For this
reason, in this paper, we aim at proposing a contribution in this context presenting a wearable device called
SaveMeNow.AI. This device aims at maintaining all the benefits of the previous wearable devices proposed
for the same purposes and, simultaneously, at avoiding most, or all, of the problems characterizing them.
The hardware at the core of SaveMeNow.AI is SensorTile.box¹. This is a device containing the ultra-
low-power microcontroller STM32L4R9² and several sensors. Among them, the one of interest for us is
LSM6DSOX³, which is a six-axis Inertial Measurement Unit (hereafter, IMU) and Machine Learning Core.
The fall detection approach we propose in this paper, which defines the behavior of the SaveMeNow.AI
device of a given worker, receives the data continuously provided by its six-axis IMU and processes it by
means of a customized Machine Learning algorithm, conceived to determine if the corresponding worker
has fallen or not. In the affirmative case, it immediately sends an alarm to all the nearby workers, who receive
it through the SaveMeNow.AI device worn by them. This approach, once defined, trained and tested, can be
natively implemented in the Machine Learning Core of LSM6DSOX.
As we will see in the following, SensorTile.box is very small and not bulky and, as we said above, it is
provided with an ultra-low-power microcontroller. We implemented in it the Machine Learning approach
presented in this paper, optimizing the exploitation of the limited computation power characterizing this
device. Finally, as we will show below, the accuracy of the defined fall detection approach is very satisfying,
and false alarms are very few. As a consequence, SaveMeNow.AI is capable of addressing all four of the
open issues for wearable devices that we mentioned above.
This paper is organized as follows: In Section 2, we define and, then, illustrate the implementation and
testing of the approach underlying SaveMeNow.AI. In Section 3, we illustrate all the features characterizing
both the hardware and the software of SaveMeNow.AI. In Section 4, we describe related literature and
highlight the differences between several past approaches and ours. Finally, in Section 5, we draw our
conclusions and have a look at some future developments of our approach.
2 Applying Machine Learning to evaluate fall detection in a workplace
In this section, we illustrate the customized Machine Learning approach for fall detection that we have
defined for SaveMeNow.AI. As a preliminary activity, we consider it relevant to describe the data sources
we have used for both its training and its testing.
2.1 Construction of the support dataset
In recent years, thanks to the pervasive diffusion of portable devices (e.g., smartphones, smartwatches, etc.),
wearable based fall detection approaches have been increasingly investigated by the scientific community [6].
Thanks to this great interest, it is possible to find online many public datasets to perform analyses on slips,
trips and falls or to find new approaches for their detection and management. After having analyzed many
of these datasets, we decided to select four of them for our training and testing activities. In particular, we
chose some datasets that would help us to define a generalized model, able to adapt to the activities carried
out by workers and operators from various sectors, and performing very different movements during their
tasks.
The first dataset used is “SisFall: a Fall and Movement Dataset” (hereafter, SisFall), created by SISTEMIC,
the Integrated Systems and Computational Intelligence research group of the University of
Antioquia [32]. This dataset consists of 4505 files, each referring to a single activity. All the activities are
1 https://www.st.com/en/evaluation-tools/steval-mksbox1v1.html
2 https://www.st.com/en/microcontrollers-microprocessors/stm32l4r9-s9.html
3 https://www.st.com/en/mems-and-sensors/lsm6dsox.html
Measures | Acceleration | Rotation
Axis     | X  Y  Z      | X  Y  Z
Table 1: Structure of the new dataset
grouped in 49 categories: 19 refer to ADLs (Activities of Daily Living) performed by 23 adults, 15 concern
falls (Fall Activities) performed by the same adults, and 15 regard ADLs performed by 14 participants over 62
years of age. Data was collected by means of a device placed at the hips of the volunteers. This device
consists of different types of accelerometers (ADXL345 and MMA8451Q) and of a gyroscope (ITG3200).
The second dataset used is “Simulated Falls and Daily Living Activities” (hereafter, SFDLAs), created
by Ahmet Turan Özdemir of the Erciyes University and by Billur Barshan of the Bilkent University [26]. It
consists of 3060 files that refer to 17 volunteers who carried out 36 different kinds of activity. Each of them
was repeated by each volunteer about 5 times. The 36 categories of activities are, in turn, partitioned into 20
Fall Activities and 16 ADLs. The dataset was recorded using 6 positional devices placed on the volunteers’
head, chest, waist, right wrist, right thigh and right ankle. Each device consists of an accelerometer, a
gyroscope and a magnetometer.
The third dataset used is “CGU-BES Dataset for Fall and Activity of Daily Life” (hereafter, CGU-BES)
created by the Laboratory of Biomedical Electronics and Systems of the Chang Gung University [8]. This
dataset contains 195 files that refer to 15 volunteers who performed 4 Fall Activities and 9 ADLs. Data was
collected by a system of sensors consisting of an accelerometer and a gyroscope.
The fourth, and last, dataset used is the “Daily and Sports Activities Dataset” (hereafter, DSADS) of the
Department of Electrical and Electronic Engineering of the Bilkent University [1]. This dataset comprises
9120 files obtained by sampling 152 activities carried out by 8 volunteers. Each activity had a duration
of about 5 minutes, split into 5-second recordings. This dataset does not contain fall activities, but sport
activities. We chose it in order to make our model generalizable and, therefore, more adaptable to most of
the various situations that may occur in the working environment. Data was collected through 5 sensors
containing an accelerometer, a gyroscope and a magnetometer, positioned on different parts of the volunteer’s
body.
From these four datasets, we decided to extrapolate only the accelerometric and gyroscopic data. This
choice was motivated by two main reasons. The first concerns data availability; in fact, the only measurements
common to all datasets are acceleration and rotation. The second regards the ability of Machine Learning
models to obtain better performance than thresholding-based models when using accelerometric data, as
described in [13]. By merging the acceleration and rotation data extrapolated from the four datasets we
obtained a new dataset whose structure is shown in Table 1. It stores data from 8579 activities. 4965 of them
do not represent falls, while the remaining 3614 denote falls. Each activity is associated with a file that stores
the values of the 6 parameters of interest for a certain number of samples. Since data comes from different
datasets, the number of samples associated with the various activities is not homogeneous; in fact, it depends
on the length of the activity and the sampling frequency used in the dataset where it was originally registered.
With regard to this aspect, it should be noted that having datasets characterized by different activity lengths
and sampling frequencies does not significantly affect the final result, as long as the sampling frequency is
very high compared to the activity length, as is the case for all our datasets. This is because our features
are little influenced by the number of samples available. This holds not only for the maximum and the
minimum values, which is intuitive, but also for the mean value and the variance, because, in this case, as
the number of samples increases, both the numerator and the denominator of the corresponding formulas
grow in the same way.
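This stability claim can be checked numerically. The following sketch (ours, on synthetic data, not taken from the paper's datasets) compares the mean and variance computed on a short prefix of a stationary signal against the full recording:

```python
# Synthetic illustration (not the paper's data) of why mean and variance
# are little influenced by the number of samples: for a stationary signal,
# estimates from a 500-sample prefix closely match those from the full
# 10000-sample recording.
import numpy as np

rng = np.random.default_rng(42)
signal = rng.normal(loc=1.0, scale=0.5, size=10_000)

short, full = signal[:500], signal

mean_gap = abs(short.mean() - full.mean())
var_gap = abs(short.var() - full.var())
print(mean_gap < 0.1, var_gap < 0.1)
```

Maximum and minimum values converge even faster, since adding samples can only push them toward the true extremes.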
After building the new dataset, we applied a Butterworth Infinite Impulse Response (i.e., IIR) second
order low-pass filter with a cut-off frequency of 4 Hz to the data stored therein. The purpose of this task was
keeping the frequency response module as flat as possible in the pass band in such a way as to remove noise.
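The filtering step can be sketched in a few lines. The snippet below is an illustrative offline version (assuming SciPy is available; the 25 Hz sampling frequency and the helper name `lowpass_4hz` are our own choices for the toy signal), not the on-device implementation:

```python
# Offline sketch of the pre-processing described above: a second-order
# Butterworth IIR low-pass filter with a 4 Hz cut-off frequency.
import numpy as np
from scipy.signal import butter, lfilter

def lowpass_4hz(signal, fs):
    """Causal 2nd-order Butterworth low-pass filter, 4 Hz cut-off."""
    # Wn is the cut-off normalized to the Nyquist frequency fs/2
    b, a = butter(N=2, Wn=4.0 / (fs / 2.0), btype="low")
    return lfilter(b, a, signal)

fs = 25.0                                  # assumed sampling frequency (Hz)
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(0)
noisy = np.sin(2 * np.pi * t) + 0.3 * rng.normal(size=t.size)  # 1 Hz tone + noise
clean = lowpass_4hz(noisy, fs)             # 1 Hz component passes, noise attenuated
```

A causal filter (`lfilter`) is used here because the on-device filter must also run causally on streaming samples.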
Feature         Definition
Maximum Value   max_{k=1..n} 𝜁[k]
Minimum Value   min_{k=1..n} 𝜁[k]
Mean Value      𝜇 = (1/n) Σ_{k=1}^{n} 𝜁[k]
Variance        𝜎² = (1/n) Σ_{k=1}^{n} (𝜁[k] − 𝜇)²
Table 2: Feature definition
Moreover, the choice of the Butterworth filter was motivated by its simplicity and low computational cost [15].
These features make it well suited for a possible future hardware implementation.
After performing data cleaning, through which we eliminated excess data, and data pre-processing,
through which we reduced the noise as much as possible, we proceeded to the feature engineering phase. In
particular, given a parameter 𝜁, whose sampled data was present in our dataset, we considered 4 features,
that is the maximum value, the minimum value, the mean value and the variance of 𝜁. If n is the number
of samples of 𝜁 present in our dataset and 𝜁[k] denotes the value of the k-th sample of 𝜁, 1 ≤ k ≤ n, the
definition of the 4 features is the one shown in Table 2.
As shown in Table 1, the parameters present in our dataset are 6, corresponding to the values of the X,
Y and Z axes returned by the accelerometer and the gyroscope. As a consequence, having 4 features for each
of the 6 parameters at our disposal, each activity is associated with 24 features.
Finally, in a very straightforward way, each activity is also associated with a two-class label, whose possible
values are Fall Activity and Not Fall Activity.
The result of all these operations is an 8579 × 25 matrix that represents the training set used to perform
the next classification activity.
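The feature extraction just described can be sketched as follows (our own minimal version on synthetic data, not the authors' code; `extract_features` is a hypothetical helper name):

```python
# Minimal sketch of the feature engineering step: for an activity with n
# samples of 6 parameters (accelerometer X/Y/Z, gyroscope X/Y/Z), compute
# max, min, mean and variance per parameter, i.e., 24 features in total.
import numpy as np

def extract_features(samples):
    """samples: (n, 6) array of one activity -> (24,) feature vector."""
    return np.concatenate([
        samples.max(axis=0),    # 6 maximum values
        samples.min(axis=0),    # 6 minimum values
        samples.mean(axis=0),   # 6 mean values
        samples.var(axis=0),    # 6 variances (1/n normalization, as in Table 2)
    ])

rng = np.random.default_rng(0)
activity = rng.normal(size=(100, 6))   # one synthetic activity recording
features = extract_features(activity)
print(features.shape)  # -> (24,)
```

Stacking one such vector per activity, plus the class label, yields the 8579 × 25 training matrix.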
2.2 Descriptive analytics on the support dataset
In this section, we illustrate some of the analyses that we conducted on the support dataset and that allowed
us to better understand the reference scenario and, then, to better face the next challenges.
The first activity we performed was the creation of the correlation matrix between features, which is
reported in Figure 1.
What clearly emerged when looking at this matrix was the presence of some evident negative correlations
between the maximum and minimum values of some parameters. Moreover, a positive correlation between
the maximum values (resp., minimum values, variances) calculated on the various axes and on the two
Figure 1: Correlation matrix between the features
sensors could be noticed. Finally, there were some parameters that had no significant correlation, either
positive or negative. This is particularly evident for all the cases in which the feature “mean value” is
involved. From this analysis, we intuitively deduced that exactly these last parameters would play a
fundamental role in the next classification activity.
To verify if this last intuition was right, we ran a Random Forests algorithm [5] with a 10-Fold Cross
Validation [14] that allowed us to generate the list of features sorted according to their relevance in identifying
the correct class of activities.
In particular, in order to compute the relevance of features, this algorithm operates as follows. Given a
decision tree D having N nodes, the relevance ρ_i of a feature f_i is computed as the decrease of the impurity
of the nodes splitting on f_i, weighted by the probability of reaching them [12]. The probability of reaching a
node n_j can be computed as the ratio of the number of samples reaching n_j to the total number of samples.
The higher ρ_i, the more relevant f_i will be. Formally speaking, ρ_i can be computed as:

ρ_i = ( Σ_{n_j ∈ N_{f_i}} ϑ_j ) / ( Σ_{n_j ∈ N} ϑ_j )

Here, N_{f_i} is the set of the nodes of N splitting on f_i, and ϑ_j is the relevance of the node n_j. If we assume
that n_j has only two child nodes n_l and n_r, then:

ϑ_j = w_j C_j − w_l C_l − w_r C_r

Here:
w_j (resp., w_l, w_r) is the fraction of samples reaching the node n_j (resp., n_l, n_r);
C_j is the impurity value of n_j;
n_l (resp., n_r) is the child node derived from the left (resp., right) split of n_j.

The value of ρ_i can be normalized to the range [0, 1]. For this purpose, it must be divided by the sum of
the relevances of all the features:

ρ̃_i = ρ_i / Σ_{f_k ∈ F} ρ_k

where F denotes the set of all the available features.
The final relevance ρ̂_i of a feature f_i returned by Random Forests is obtained by averaging the values of the
normalized relevances ρ̃_i computed on all the available trees:

ρ̂_i = ( Σ_{t_q ∈ T} ρ̃_i(t_q) ) / |T|

Here, T is the set of all the trees returned by Random Forests.
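In practice, this tree-averaged, impurity-based relevance is what common libraries expose directly; for instance, scikit-learn's RandomForestClassifier provides it as feature_importances_, already normalized so that the relevances sum to 1. The snippet below is a toy illustration on synthetic data, not the paper's experiment:

```python
# Toy illustration of impurity-based feature relevance with Random Forests.
# The label depends almost entirely on feature 0, so its relevance dominates.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
relevances = clf.feature_importances_  # normalized: sums to 1
print(relevances.argmax())  # -> 0
```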
The result obtained by applying the approach illustrated above to the features of our interest is shown in
Figure 2.
Figure 2: Feature relevance in identifying the correct class of activities
Figure 3: Activities labeled as Not Fall and Fall against the mean and the maximum accelerations on the 𝑌axis
To check whether what Random Forests suggested made sense, we considered the two features with the
highest relevance returned by this algorithm, i.e., the mean and the maximum accelerations computed on the
Y axis. Starting from these two features, we built the scatter diagram shown in Figure 3. Here, an orange dot
is visualized for each activity labeled as Not Fall, while a blue cross is visualized for each activity labeled
as Fall. Looking at this diagram, we can observe that the activities labeled as Not Fall have a very negative
mean acceleration and a much lower maximum acceleration than the ones labeled as Fall. This allows us to
conclude that Random Forests actually returned a correct result when it rated these two features as the most
relevant ones. In fact, their combination makes it particularly easy to distinguish falls from not falls.
2.3 Applying Machine Learning techniques on the available dataset
After having constructed a dataset capable of supporting the training task of our Machine Learning campaign,
the next activity of our research was the definition of the classification approach to be natively implemented
in the Machine Learning Core of LSM6DSOX, i.e., the sensor at the base of SaveMeNow.AI. The first step of
this activity was to verify if one (or more) of the existing classification algorithms, already proposed, tested,
verified and accepted by the scientific community, obtained satisfactory results in our specific scenario.
Indeed, in that case, it appeared natural to us to adopt a well known and already accepted approach, instead of
defining a new one, whose complete evaluation in real scenarios would have required an ad-hoc experimental
campaign in our context, the publication in a journal and the consequent evaluation and possible adoption
by research groups all over the world, in order to find possible weaknesses that could have been overlooked
during our campaign.
In order to evaluate the existing classification algorithms, we decided to apply the classical measures
adopted in the literature, i.e., Accuracy, Sensitivity and Specificity. If we indicate by: (i) TP the number of
true positives, (ii) TN the number of true negatives, (iii) FP the number of false positives, and (iv) FN the
number of false negatives, these three measures can be defined as:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

Sensitivity = TP / (TP + FN)

Specificity = TN / (TN + FP)
Accuracy corresponds to the fraction of correct predictions over the total input size, and represents the
overall performance of the algorithm. Sensitivity denotes the fraction of positive samples that are correctly
identified. In our scenario, it stands for the fraction of Fall Activities that are properly identified by the
Algorithm Accuracy Sensitivity Specificity
Decision Tree - C4.5 0.9487 0.9391 0.9566
Decision Tree - CART 0.9128 0.8910 0.9223
Multilayer Perceptron 0.9270 0.8829 0.9363
k-Nearest Neighbors (k=3) 0.8790 0.8747 0.9263
Logistic Regression 0.7707 0.8599 0.7057
Quadratic Discriminant Analysis 0.7664 0.4956 0.9680
Linear Discriminant Analysis 0.7557 0.4956 0.9663
Gaussian Naive Bayes 0.7175 0.4947 0.8989
Support Vector Machine 0.7141 0.4103 0.9486
Table 3: Accuracy, Sensitivity and Specificity values achieved by several classification algorithms when applied
to our dataset
algorithms. Finally, Specificity corresponds to the fraction of negative samples correctly identified, so it
represents the fraction of Not Fall Activities properly identified by the algorithms.
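For concreteness, the three measures can be written as simple functions of the confusion-matrix counts (an illustrative sketch with made-up example counts, not the paper's results):

```python
# The three evaluation measures as functions of the confusion-matrix counts.
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def sensitivity(tp, fn):   # fraction of Fall Activities correctly identified
    return tp / (tp + fn)

def specificity(tn, fp):   # fraction of Not Fall Activities correctly identified
    return tn / (tn + fp)

# Example with made-up counts: 90 TP, 80 TN, 20 FP, 10 FN.
print(accuracy(90, 80, 20, 10))   # -> 0.85
print(sensitivity(90, 10))        # -> 0.9
print(specificity(80, 20))        # -> 0.8
```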
In Table 3, we report a summary of all the tested classification algorithms; in particular, we show the
mean values of Accuracy, Sensitivity and Specificity obtained through a 10-Fold Cross Validation.
Depending on the application scenario, a metric can be more important than another one. In our case, in
which we want to detect falls in a work environment, Sensitivity has a higher importance than Specificity.
In fact, a missed alarm (corresponding to a Not Fall prediction of a Fall Activity) leads to a lack of assistance
to the worker. Furthermore, a false alarm can be mitigated by providing the worker with the possibility to
interact with the device and turn off the alarm.
From the analysis of Table 3, we can observe that the Machine Learning model with the highest
Accuracy (and, therefore, the best overall performance) is the Decision Tree - C4.5. This model obtains
excellent results also in terms of Sensitivity and Specificity. Another interesting result was obtained through
Quadratic Discriminant Analysis, which achieves a Specificity value equal to 0.9680. However, this last
algorithm obtains low values for Accuracy and Sensitivity, which led us to discard it.
Based on all these considerations, we decided that, among the classification algorithms of Table 3, the
best one for our scenario was the Decision Tree - C4.5. Furthermore, we evaluated that the performance it
achieved was so good that it could be adopted for our case, without the need to think about a new ad-hoc
classification algorithm, which would have hardly achieved better performance than it and would have been
exposed to all the problems mentioned at the beginning of this section.
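To give a flavor of this model-selection step, the snippet below trains a decision tree and estimates its Accuracy with a 10-fold cross validation on synthetic stand-in data. Note that scikit-learn's DecisionTreeClassifier implements an optimized CART rather than C4.5, so this is a sketch of the procedure, not a reproduction of the paper's numbers:

```python
# Sketch of the evaluation procedure: 10-fold cross validation of a
# decision tree on synthetic data (24 features per activity, binary label).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 24))        # 24 features, as in our dataset
y = (X[:, 0] > 0).astype(int)         # synthetic Fall / Not Fall label

scores = cross_val_score(DecisionTreeClassifier(random_state=0),
                         X, y, cv=10, scoring="accuracy")
print(scores.mean() > 0.9)            # easily separable synthetic labels
```

The same cross-validation loop, run with each candidate algorithm in place of the decision tree, yields the comparison reported in Table 3.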
3 Design, realization and testing of SaveMeNow.AI
In this section, we explain how we realized SaveMeNow.AI starting from the SensorTile.box device. Specifically,
in Subsection 3.1, we describe the main characteristics of the hardware adopted. Then, in Subsection
3.2, we outline how we implemented the logic of our approach in the device. Finally, in Subsection 3.3, we
Figure 4: SensorTile.box (STEVAL-MKSBOX1V1)
show how we tested it.
3.1 Hardware characteristics of the IoT device at the core of SaveMeNow.AI
The choice of the IoT device for implementing our approach and constructing SaveMeNow.AI was not
simple. Indeed, it had to comply with some requirements. First, as outlined previously, the device had to
be small and ergonomic in order to be worn by a user. Furthermore, it had to include an Inertial Measurement
Unit (i.e., IMU) containing an accelerometer and a gyroscope, as well as a Bluetooth module able to manage
the Bluetooth Low Energy (i.e., BLE) protocol. One of the possible devices compliant with all these
requirements is SensorTile.box, provided by STMicroelectronics. In Figure 4, we report a picture of this device.
SensorTile.box was designed to support the development of wearable IoT devices. It contains a BLE
v4.2 module and an ultra-low-power STM32L4R9 microcontroller that manages the following sensors:
STTS751, which is a high precision temperature sensor;
LSM6DSOX, which is a six-axis IMU and Machine Learning Core (i.e., MLC);
LIS3DHH and LIS2DW12, which are three-axis accelerometers;
LIS2MDL, which is a magnetometer;
LPS22HH, which is a pressure sensor;
MP23ABS1, which is an analog microphone;
HTS221, which is a humidity sensor.
As we said in the previous sections, in the current version of SaveMeNow.AI, the only sensor we used is
LSM6DSOX. However, we do not exclude that we will employ one or more of the other sensors in the future.
LSM6DSOX contains everything necessary for our approach. Indeed, it is a system-in-package (i.e.,
SIP) that contains a three-axis high precision accelerometer and a three-axis gyroscope. Besides the advantages
of being a low-power sensor and having a small size, the really important feature of LSM6DSOX is the MLC
component. In fact, it is able to directly run Artificial Intelligence algorithms in the sensor, without
involving a processor. MLC uses data provided by the accelerometer, the gyroscope and some possible
involving a processor. MLC uses data provided by the accelerometer, the gyroscope and some possible
external sensors for computing some statistical parameters (such as mean, variance, maximum and minimum
values, etc.) in a specific sliding time window. These parameters can be provided in input to a classification
algorithm (in our case, a decision tree) previously loaded by the user. The whole workflow of MLC is
reported in Figure 5.
Figure 5: Workflow of the Machine Learning Core of LSM6DSOX
As reported in Figure 5, some filters can be applied to provided data. Specifically, the possible filters
are a low-pass filter, a bandwidth filter, a First-Order IIR and a Second-Order IIR. This last feature was very
important for our approach in that it allowed us to implement the Butterworth filter to be applied on the data
provided by the accelerometer and the gyroscope to reduce noise (see Section 2.1).
3.2 Embedding the logic of SaveMeNow.AI in the IoT device
In order to implement our approach, we had to develop a firmware that can be loaded into SensorTile.box.
This device accepts a firmware written in the C language, which must contain all the instructions for the
initialization of the micro-controller and the configuration of the Machine Learning Core. To support these
tasks, STMicroelectronics provides two software tools (i.e., STM32CubeMX and STM32CubeIDE) allowing
users to develop C code for the microcontroller STM32L4R9. STM32CubeMX is a graphic tool to initialize
the peripherals, such as GPIO and USART, and the middlewares, like USB or TCP/IP protocols, of the
microcontroller. The second software is an IDE allowing users to write, debug and compile the firmware of
the microcontroller.
The firmware that we developed contains three main functions, namely:
HAL_init(): it initializes the Hardware Abstraction Layer, which represents a set of APIs above the
hardware allowing developers to interact with the hardware components in a safe way.
Bluetooth_init(): it initializes the whole Bluetooth stack. Such a task comprises the setting of the
MAC address, the configuration of the HCI interface, the GAP and GATT protocols, and so forth.
MLC_init(): it initializes the MLC component of LSM6DSOX and enables the interrupt on the
output of the decision trees. The MLC initialization is performed through the loading of a specific
header file that configures all the registers of LSM6DSOX. We dive into this file below.
The MLC configuration is not trivial, because it also implies configuring the sensors of LSM6DSOX
and setting all its registers. To perform this task, STMicroelectronics provides a software tool called Unico.
This is a Graphical User Interface allowing developers to manage and configure sensors, like accelerometers
and gyroscopes, along with the Machine Learning Core of LSM6DSOX. The output of Unico is a header file
Measure                           Setting
Input data                        Three-axis accelerometer and gyroscope
MLC output frequency              12.5 Hz
Accelerometer sampling frequency  12.5 Hz
Gyroscope sampling frequency      12.5 Hz
Full scale accelerometer          ±8 g
Full scale gyroscope              ±2000 dps
Sample window                     37 samples
Filtering                         Second-Order IIR filter with cut-off frequency at 4 Hz
Table 4: Adopted configuration of the MLC Component
containing the configurations of all the registers and all the information necessary for the proper functioning
of the Machine Learning models. Indeed, thanks to Unico, it is possible to set the configuration parameters
of MLC and of the sensors of LSM6DSOX, like the output frequency of MLC, the full scale of the accelerometer
and gyroscope, the sample window of reference for the computation of features, and so on. We report our
complete configuration in Table 4.
With this configuration, at each clock of MLC, the output of the classification algorithm implemented
therein is written to a dedicated memory register. In this way, it is possible to read this value and, in case
it is set to Fall (which implies that the worker wearing the device has presumably fallen), to activate
the alarm. At this point, all the problems concerning the communication between SaveMeNow.AI devices
in presence of an alarm come into play.
In Figure 6, we show a possible operation scenario of such an alarm. Each SaveMeNow.AI device
continuously checks its status and determines whether or not there is a need to send an alarm. If the MLC
component of the SaveMeNow.AI device worn by a worker reports a fall, the device itself sends an alarm
in broadcast mode. All the other SaveMeNow.AI devices that are in the signal range receive the alarm and,
then, trigger help (for example, workers wearing them go to see what happened). If no SaveMeNow.AI
device is in the range of the original alarm signal, the alarm is managed by the Gateway Devices. These must
be positioned in such a way as to cover the whole workplace area. A Gateway Device is always in a receiving
state and, when it receives an alarm, it sets a 30-second timer. After this time interval, if no SaveMeNow.AI
device was active in the reception range of the original alarm, the Gateway Device itself sends an alarm and
activates rescue operations.
As mentioned above, communications are managed through the Bluetooth protocol, in its low-energy
version, called BLE. Each SaveMeNow.AI device has two roles, i.e., Central and Peripheral. The BLE
protocol is ideal for our scenario because it allows SaveMeNow.AI to switch its role at runtime. During its
normal use, a SaveMeNow.AI device listens to any other device; therefore, it assumes the role of Central.
When the worker who wears it falls, and its MLC component detects and reports this fall, it switches its role
from Central to Peripheral and starts sending the advertising data connected to the alarm activation.
Figure 6: A possible emergency scenario
3.3 Testing of SaveMeNow.AI
After having deployed the logic of our approach in the SensorTile.box, we proceeded with the testing
campaign. Specifically, we selected 30 volunteers, 15 males and 15 females, of different ages and weights,
and asked them to perform different kinds of activity. In particular, the considered activities include all the
ones mentioned in the past literature. They are reported in Table 5. Some of them could be labeled as Fall
Activity, whereas other ones could be labeled as Not Fall Activity. In all these activities, SaveMeNow.AI was
put at the waist of the volunteers.
In Table 6, we report the confusion matrix obtained after all the activities, and the corresponding output
provided by SaveMeNow.AI.
From the analysis of this table, we can observe that the number of real Fall Activities was 1,205; 1,170
of them were correctly recognized by SaveMeNow.AI, whereas 35 of them were wrongly categorized by
our system. On the other hand, the number of real Not Fall Activities was 595; 540 of them were correctly
recognized by SaveMeNow.AI, whereas 55 of them were wrongly labeled by our system. Observe that the
number of real Fall Activities is much higher than the one of real Not Fall Activities. This fact is justified
because, in our scenario, Sensitivity is much more important than Specificity. Starting from these results,
we have that the Sensitivity of SaveMeNow.AI is equal to 0.97; its Specificity is equal to 0.91. Finally, its
Accuracy is 0.95.
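The three metrics follow directly from the confusion matrix entries reported above:

```python
# Values from Table 6 (confusion matrix of the testing campaign)
TP, FP, FN, TN = 1170, 55, 35, 540

sensitivity = TP / (TP + FN)                 # ability to detect real falls
specificity = TN / (TN + FP)                 # ability to ignore non-falls
accuracy = (TP + TN) / (TP + FP + FN + TN)

print(round(sensitivity, 2), round(specificity, 2), round(accuracy, 2))
# → 0.97 0.91 0.95
```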
After having tested SaveMeNow.AI, we can conclude that its performance is very satisfying during both
the training and the test phases. In our opinion, the dataset used for training played a key role in obtaining
these successful results, because it contained very heterogeneous activities, which allowed us to create a
generalized model. Indeed, our model is able to distinguish sport activities from fall activities, which is
a difficult task to achieve. A careful reader could point out that a generalized model like ours sacrifices
performance for generalizability. However, we observe that Sensitivity (which is the most important parameter
to evaluate in our scenario) is very high; only Specificity is not particularly high. This could lead to some
false alarms that, in most cases, could be directly stopped by the worker wearing the alarming device. On the
other hand, in a work environment, which is the reference application scenario of our approach, it is quite
common to witness activities like running or jumping, which could generate many false alarms if the model
were not sufficiently generalized to handle them, at least partially.
4 Related Literature
Before examining related approaches in detail, we believe that a preliminary observation is necessary. In
fact, unlike most approaches present in the literature, which focus on elderly fall detection, our approach
was specifically conceived to detect falls in a work environment. In our setting, during working hours,
an operator can perform running and/or jumping activities that can be easily confused with fall activities.
Actually, in most past approaches, a wearable device contains accelerometers and gyroscopes registering the
user behavior. From these sensors’ perspective, sport activities and fall activities have some common points.
This is the reason why we also used the Daily and Sports Activities Dataset (see Section 2.1) to train our
classification algorithm. This choice allowed us to better train our classification algorithm in order to make
it more capable of distinguishing sport activities from fall activities.
After this clarification, we start our detailed analysis of related literature by observing that, as pointed out
in the Introduction, the different techniques developed for fall detection can be categorized in three different
classes, depending on the environment where the user and the employed sensors operate. Specifically, it is
possible to distinguish three categories of fall detection systems, namely ambient sensor based, vision based,
and wearable device based [24].
The first category is based on the recording of audio and video of the environment and/or the monitoring
of vibrational data [33, 37, 7, 35, 9]. In the former case, the fall detection techniques exploit audio and
video streams for object detection and tracking. For instance, in [33], the authors present image sensing
and vision-based reasoning for analyzing and verifying sensor-transmitted events. Specifically, a wireless
badge node is placed between the user and her network; it detects falls through event sensing functions.
Not Fall Activities:
- Walk slow (< 6 km/h)
- Walk fast (≥ 6 km/h)
- Run slow (< 8 km/h)
- Run fast (≥ 8 km/h)
- Sit slowly in a chair
- Sit slowly on the ground
- Sit abruptly in a chair
- Jump to reach an object located at the top
- Go up and down the stairs slowly (< 6 km/h)
- Go up and down the stairs quickly (≥ 6 km/h)
- Walk and stumble without falling down
- Jump forward from an elevated position
- Jump forward from the floor

Fall Activities:
- Walk and fall forward after tripping
- Walk and fall to the side (right) after tripping
- Walk and fall to the side (left) after tripping
- Fake fainting and fall to the right while standing
- Fake fainting and fall forward while standing
- Fake fainting and fall to the left while standing
- Run and fall forward after stumbling

Table 5: A taxonomy for Not Fall Activities and Fall Activities
                       (Real) Fall    (Real) Not Fall
(Evaluated) Fall       1170 (TP)      55 (FP)
(Evaluated) Not Fall   35 (FN)        540 (TN)

Table 6: Confusion matrix for the output provided by SaveMeNow.AI
Furthermore, there is a continuous tracking of the approximate location of the user performed through signal
strength measurements provided by the network nodes.
Another interesting approach is proposed in [37], where the authors use the audio signal from a single
far-field microphone. In particular, they create a Gaussian Mixture Model (i.e., GMM) supervector to
model each fall as a noise segment, and then compute the difference between audio segments by means of
the Euclidean distance. The kernel between the GMM supervectors forms the kernel of the Support Vector
Machine employed for the classification of various types of audio and noise segments into falls.
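The supervector construction can be sketched as follows. This is a simplified stand-in for the technique of [37]: scikit-learn's GaussianMixture replaces the paper's GMM training, and the random feature matrices are placeholders for real audio features (e.g., MFCCs).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_supervector(segment, n_components=4, seed=0):
    # Fit a small GMM to the feature vectors of one audio segment and
    # stack its component means into a single "supervector".
    gmm = GaussianMixture(n_components=n_components, random_state=seed)
    gmm.fit(segment)
    return gmm.means_.ravel()

rng = np.random.default_rng(0)
seg_a = rng.normal(0.0, 1.0, size=(200, 3))  # placeholder audio features
seg_b = rng.normal(2.0, 1.0, size=(200, 3))

sv_a, sv_b = gmm_supervector(seg_a), gmm_supervector(seg_b)
# Euclidean distance between the two segments' supervectors
distance = float(np.linalg.norm(sv_a - sv_b))
```

In [37], such distances between supervectors feed the kernel of the SVM classifier; here only the supervector and distance computation are shown.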
Ambient sensor based approaches exploiting vibrational data are focused on the usage of pressure
sensors. For example, in [2], the authors design a floor vibration-based fall detector. It considers the
vibrations caused by objects moving on the floor, because the vibrations generated by a human fall are
different from the ones related to normal activities. In this perspective, they use a special piezoelectric
sensor, coupled with the floor, and generate a binary fall signal in case of a fall event.
Another proposal in this setting can be found in [29]. Here, the authors propose to use a floor sensor
based on near-field imaging. This sensor detects the locations and patterns of people by measuring the
impedances with respect to a matrix of thin electrodes under the floor. Then, a collection of features is
computed starting from the cluster of observations associated with a person. In this way, a Bayesian filter
and a Markov chain can be adopted to estimate the posture of the user and, finally, to detect a possible fall.
The approaches based on ambient sensors are not intrusive for the final user. However, they have two
main disadvantages. The former regards their cost, while the latter concerns the difficulty of installing them,
because it is necessary to set up the whole room with sensors.
The second type of fall detection approaches concerns those based on vision [23, 25, 10, 21, 11]. The
reasoning underlying this kind of system is that cameras are increasingly present in our daily environment
and are less intrusive than other kinds of objects (for instance, the ones that should be worn by the user). In
[23], the authors present a fall detector for smart homes based on artificial vision algorithms. The overall
system is developed through a single-board computer with an external camera, placed in the room to be
monitored. The approach consists of different phases. First, it acquires an image and subtracts the subject
from the background. Then, it uses a Kalman filter to reduce noise in the data. Afterwards, it starts studying
the changes in human actions. Finally, it applies a Machine Learning algorithm to the obtained data to
classify the current state of the subject.
Another interesting fall detection system is reported in [25], where the authors propose a framework for
indoor scenarios using a single-camera system. This approach is based on the analysis of motion orientation,
motion magnitude and human shape changes. According to the authors, the duration of a fall is often less
than 2 s, from when balance is lost until the fallen person completely lies on the floor. Specifically, this
system works as follows: when it detects an abnormally large motion, whose direction is less than 180°, it
continues to monitor the next 50 frames. Then, if there is a downward movement, followed by the exceeding
of the AR ratio (which represents a body width-to-height ratio) and of the inclination angle of the major axis
of the person, a fall might have happened. The system then monitors the next 25 frames and, if no further movement,
or just a small movement, occurs, it concludes that the motion is a fall. If none of the above conditions is
satisfied, no warning signal is sent out and the monitoring continues.
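The staged rule of [25] can be sketched as below. All thresholds, window lengths and per-frame field names are illustrative assumptions, not the paper's actual values.

```python
def classify_motion(frames, ar_threshold=1.0, angle_threshold=45.0):
    # Each frame is a dict with 'downward' (bool, downward movement),
    # 'ar' (body width-to-height ratio) and 'angle' (major-axis
    # inclination, degrees). Stage 1: inspect the 50 frames after the
    # large motion; stage 2: check the next 25 frames for stillness.
    window = frames[:50]
    downward = any(f["downward"] for f in window)
    lying = any(f["ar"] > ar_threshold and f["angle"] > angle_threshold
                for f in window)
    if not (downward and lying):
        return "no fall"
    aftermath = frames[50:75]
    still = all(not f["downward"] for f in aftermath)
    return "fall" if still else "no fall"

# Synthetic sequence: a downward movement, then a lying posture, then stillness
falling = [{"downward": i == 5, "ar": 1.5 if i > 40 else 0.5,
            "angle": 60.0 if i > 40 else 10.0} for i in range(50)]
lying_still = [{"downward": False, "ar": 1.5, "angle": 60.0} for _ in range(25)]
print(classify_motion(falling + lying_still))  # → fall
```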
Vision based approaches are really interesting and can achieve great accuracy in the fall detection
process. Their main drawback concerns the necessity to install cameras in each room to be monitored, which, in
turn, leads to a high installation cost.
Finally, the last category of fall detection systems is based on wearable devices [27, 31, 16, 18, 36, 4, 3, 20].
These approaches rely on smart garments with embedded sensors capable of detecting the motion and location
of the user body. In the literature, there are many interesting proposals, each one with different employed
sensors. For instance, in [17], the authors present a posture-based fall detection algorithm that operates
starting from the reconstruction of the posture of a user. Several wireless tags are placed on some parts of
the body, such as hips, ankles, knees, wrists, shoulders and elbows. The locations of these tags are detected
by a motion capture system, so that it can reconstruct the complete posture of a person in a 3D plane. Finally,
acceleration thresholds, along with velocity profiles, are applied to detect falls.
A less invasive approach, based on an accelerometer, is presented in [22]. Here, the authors use an
integrated approach of waist-mounted accelerometers, so that a fall is detected when a negative acceleration
suddenly increases, due to the change in orientation from an upright to a lying position. A similar
proposal can be found in [34], where the authors design a wearable airbag containing an accelerometer and
a gyroscope. This airbag is inflated when acceleration and angular velocity thresholds are exceeded.
There are also interesting fall detection proposals using Machine Learning algorithms. An example is
reported in [30], where the authors propose a fall detection system consisting of a sensing unit (such as a
mobile phone) and a threshold for acceleration along three axes specific to a patient. The overall system
is based on monitoring the tri-axial accelerometer data in three different sliding time windows, each one
lasting one second. Depending on this information and the patient-specific threshold, the authors exploit
a Machine Learning algorithm to predict whether she is falling or conducting a normal daily activity.
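A minimal sketch of this sliding-window monitoring follows; the 50 Hz sampling rate and the 2.5 g magnitude threshold are illustrative assumptions, not values from [30].

```python
import numpy as np

def sliding_windows(samples, fs=50, window_s=1.0, n_windows=3):
    # Split a tri-axial acceleration stream into consecutive
    # one-second windows (sampling rate is assumed).
    size = int(fs * window_s)
    return [samples[i * size:(i + 1) * size] for i in range(n_windows)]

def exceeds_threshold(window, threshold_g=2.5):
    # Patient-specific threshold on the acceleration magnitude
    # (the value here is purely illustrative).
    magnitude = np.linalg.norm(window, axis=1)
    return bool((magnitude > threshold_g).any())

rng = np.random.default_rng(1)
stream = rng.normal(0.0, 0.1, size=(150, 3)) + np.array([0.0, 0.0, 1.0])  # ~1 g at rest
stream[120] = [0.5, 0.5, 4.0]  # simulated impact spike in the third window

flags = [exceeds_threshold(w) for w in sliding_windows(stream)]
print(flags)  # → [False, False, True]
```

In [30], the per-window decision is then combined with a Machine Learning classifier rather than used as a final answer, as in this sketch.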
A similar proposal is reported in [26]. Here, the authors describe a fall detection system with wearable
motion sensor units fitted to the subject’s body at six different positions. Each of these units comprises
three tri-axial devices, i.e., an accelerometer, a gyroscope, and a magnetometer. Then, six different Machine
Learning algorithms are tested to evaluate which one performs better than the others. Finally, the overall
system is tested in a real world scenario, obtaining interesting results. Even if this approach achieves a
high accuracy with an acceptable computation time, it could be too invasive for the final user to adopt in
everyday life.
Analogously to the other categories of fall detection approaches, the wearable based ones have their
advantages and disadvantages. The most important advantages are their cost efficiency, their easy installation
and setup. Furthermore, these systems are not directly connected with only one place, but with a person;
therefore, they can identify falls regardless of the environment. On the other hand, some disadvantages
concern the low computational power and the high power consumption characterizing wearable devices.
Another possible disadvantage could be the intrusiveness of the system in the user’s life, even if researchers
are constantly producing increasingly small and ergonomic wearable devices.
In any case, since SaveMeNow.AI belongs to the category of wearable based fall detection approaches,
we consider it appropriate to present a further comparison between it and several other approaches belonging
to this category, particularly those that, like ours, use accelerometers and gyroscopes.
In Table 7, we report a comparison between SaveMeNow.AI and most wearable based fall detection
approaches proposed in the literature. This comparison considers several characteristics, namely the position
of sensors, the adopted Machine Learning algorithm and the results obtained. From the analysis of this table,
we can see that SaveMeNow.AI returns results equivalent or better than the ones characterizing the other
approaches. In particular, the Sensitivity of SaveMeNow.AI (which, we recall, is much more important
than Specificity in our application scenario) is higher than that of all the other approaches, except the
approach of [3], which presents a slightly higher Sensitivity (0.98 against the 0.97 reached by SaveMeNow.AI), even
if no information about Specificity and Accuracy is provided by its authors.
Research     | Sensors                                       | Sensors’ position                 | Algorithm              | Results
SaveMeNow.AI | Accelerometer, Gyroscope                      | Waist                             | Decision Tree          | Sensitivity: 0.97; Specificity: 0.91; Accuracy: 0.95
[27]         | Accelerometer                                 | Waist                             | Gaussian Mixture Model | Accuracy: 0.91
[31]         | Accelerometer, Gyroscope, Barometric altimeter | Upper right iliac bone           | Decision Tree          | Sensitivity: 0.8; Specificity: 0.99
[16]         | Accelerometer, Gyroscope                      | Shirt                             | k-NN                   | Sensitivity: 0.95; Specificity: 0.96
[18]         | Accelerometer                                 | Waist                             | Binary classifier      | Accuracy: 0.95
[36]         | Gyroscope                                     | Waist                             | Decision Tree          | Specificity: 1.00
[4]          | Accelerometer                                 | Waist                             | One-Class SVM          | Accuracy: 0.96
[3]          | Accelerometer                                 | Jacket collar                     | Decision Tree          | Sensitivity: 0.98
[20]         | Accelerometer                                 | Waist, neck, right and left hands | Decision Tree          | Accuracy: 0.92

Table 7: Comparison between SaveMeNow.AI and several wearable based fall detection approaches proposed
in past literature
5 Conclusion
In this paper, we have proposed SaveMeNow.AI, a Machine Learning based wearable device for fall detection
in a workplace. To realize it, we preliminarily created a new dataset by merging four datasets available
online in order to obtain more data on the classical activities that a worker can perform in the workplace.
Then, we tested different classification algorithms and found that at least one of them, i.e., the Decision Tree
based on C4.5, can reach very satisfactory results when applied to the created dataset.
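As a rough illustration of this classification step, the sketch below trains a decision tree on synthetic IMU-like features. All feature values are synthetic stand-ins, and scikit-learn's DecisionTreeClassifier with criterion="entropy" only approximates the information-gain splitting of C4.5 (scikit-learn implements CART, not C4.5 itself).

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical feature matrix: each row is a window of IMU statistics
# (e.g., mean/variance of acceleration and angular velocity).
rng = np.random.default_rng(0)
X_not_fall = rng.normal(1.0, 0.2, size=(100, 4))  # low-magnitude activity
X_fall = rng.normal(3.0, 0.5, size=(100, 4))      # impact-like spikes
X = np.vstack([X_not_fall, X_fall])
y = np.array([0] * 100 + [1] * 100)               # 0 = Not Fall, 1 = Fall

# criterion="entropy" approximates C4.5-style information-gain splits
clf = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=0)
clf.fit(X, y)
train_acc = clf.score(X, y)  # well-separated synthetic classes
```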
After this, we selected an IoT device available on the market and natively implemented the logic
of our approach on it. As for this aspect, we observe that the choice of SensorTile.box as the starting IoT
device on which to implement SaveMeNow.AI greatly helped us reach our goals. Indeed, thanks to
SensorTile.box, we were able to implement all the operations for data collection, data pre-processing, feature
engineering and classification directly into the STM32L4R9 microcontroller and the LSM6DSOX sensor
contained in this device. This allowed us to obtain a relevant energy saving and to optimize the
limited computational power characterizing our device, as well as all the other wearable devices available on the
market.
Afterwards, we tested SaveMeNow.AI in a real world scenario and found that its performance is very
satisfying, especially for Sensitivity. Finally, we proposed a comparison between SaveMeNow.AI and several
wearable based fall detection approaches proposed in the literature.
Regarding some possible future developments of our research, we note that, currently, the only sensor
of SensorTile.box used in SaveMeNow.AI is LSM6DSOX. However, other sensors in the device may be
useful to monitor some parameters to predict and/or report possible emergency situations in a workplace.
For example, humidity, pressure and temperature sensors could be used for this purpose. In addition, the set
of SaveMeNow.AI devices worn by operators in a delimited place can be seen as a Wireless Sensor Network
that could be used, similarly to what is proposed in [28], to detect emergency situations, such as fires or harmful
gas leaks.
Another interesting development could be the implementation of a routing system that can show a rescuer
the shortest route to the fallen worker. Finally, SaveMeNow.AI could be transformed into a non-invasive
garment that allows a worker to perform operations and movements in total freedom. The simplest solution
would be the insertion of the various sensors on a shirt that, once worn, would allow the evaluation of the
accelerometric and gyroscopic data in a way integral with the body, making data processing even more
accurate. Last, but not least, other sensors could be added to evaluate vital parameters, such as blood
pressure and heartbeat. This would open up new frontiers in the use of SaveMeNow.AI, which would also
(at least partially) become a medical device.
Acknowledgments
This work was partially funded by the Department of Information Engineering at the Polytechnic University
of Marche under the project “A network-based approach to uniformly extract knowledge and support decision
making in heterogeneous application contexts” (RSAB 2018), and by the Marche Region under the project
“Human Digital Flexible Factory of the Future Laboratory (HDSFIab) - POR MARCHE FESR 2014-2020
- CUP B16H18000050007”.
References
[1] K. Altun, B. Barshan, and O. Tunçel. Comparative study on classifying human activities with miniature inertial
and magnetic sensors. Pattern Recognition, 43(10):3605–3620, 2010.
[2] M. Alwan, P.J. Rajendran, S. Kell, D. Mack, S. Dalal, M. Wolfe, and R. Felder. A smart and passive floor-
vibration based fall detector for elderly. In Proc. of the International Conference on Information & Communication
Technologies (ICICT’06), volume 1, pages 1003–1007, Damascus, Syria, 2006. IEEE.
[3] G. Anania, A. Tognetti, N. Carbonaro, M. Tesconi, F. Cutolo, G. Zupone, and D. De Rossi. Development of a
novel algorithm for human fall detection using wearable sensors. Sensors, pages 1336–1339, 2008. IEEE.
[4] A.K. Bourke and G.M. Lyons. A threshold-based fall-detection algorithm using a bi-axial gyroscope sensor.
Medical engineering & physics, 30(1):84–90, 2008. Elsevier.
[5] L. Breiman. Random forests. Machine learning, 45(1):5–32, 2001. Springer.
[6] E. Casilari, J. Santoyo-Ramón, and J. Cano-García. Analysis of public datasets for wearable fall detection systems.
Sensors, 17(7):1513, 2017.
[7] K. Chaccour, R. Darazi, A.H. El Hassans, and E. Andres. Smart carpet using differential piezoresistive pressure
sensors for elderly fall detection. In Proc. of the International Conference on Wireless and Mobile Computing,
Networking and Communications (WIMOB’15), pages 225–229, Abu-Dhabi, United Arab Emirates, 2015. IEEE.
[8] H.L. Chan. CGU-BES Dataset for Fall and Activity of Daily Life. August 2018.
[9] I. Chandra, N. Sivakumar, C.B. Gokulnath, and P. Parthasarathy. IoT based fall detection and ambient assisted
system for the elderly. Cluster Computing, 22(1):2517–2525, 2019. Springer.
[10] R. Cucchiara, A. Prati, and R. Vezzani. A multi-camera vision system for fall detection and alarm generation.
Expert Systems, 24(5):334–345, 2007. Wiley Online Library.
[11] G. Diraco, A. Leone, and P. Siciliano. An active vision system for fall detection and posture recognition in elderly
healthcare. In Proc. of the Design, Automation & Test in Europe Conference & Exhibition (DATE’10), pages
1536–1541, Dresden, Germany, 2010. IEEE.
[12] R. Genuer, J.M. Poggi, and C. Tuleau-Malot. Variable selection using random forests. Pattern recognition letters,
31(14):2225–2236, 2010. Elsevier.
[13] R.M. Gibson, A. Amira, N. Ramzan, P. Casaseca de-la Higuera, and Z. Pervez. Multiple comparator classifier
framework for accelerometer-based fall detection and diagnostic. Applied Soft Computing, 39:94–103, 2016.
[14] J. Han, M. Kamber, and J. Pei. Data Mining: Concepts and Techniques, Third Edition. Morgan Kaufmann, 2011.
[15] F. Hussain, M.B. Umair, M. Ehatisham ul Haq, I.M. Pires, T. Valente, N.M. Garcia, and N. Pombo. An Efficient
Machine Learning-based Elderly Fall Detection Algorithm. arXiv preprint 1911.11976, 2019.
[16] H. Jian and H. Chen. A portable fall detection and alerting system based on k-NN algorithm and remote medicine.
China Communications, 12(4):23–31, 2015. IEEE.
[17] B. Kaluža and M. Luštrek. Fall detection and activity recognition methods for the confidence project: a survey.
A:22–25, 2009.
[18] D.M. Karantonis, M.R. Narayanan, M. Mathie, N.H. Lovell, and B.G. Celler. Implementation of a real-time human
movement classifier using a triaxial accelerometer for ambulatory monitoring. IEEE Transactions on Information
Technology in Biomedicine, 10(1):156–167, 2006. IEEE.
[19] B. Kwolek and M. Kepski. Human fall detection on embedded platform using depth maps and wireless accelerom-
eter. Computer Methods and Programs in Biomedicine, 117(3):489–501, 2014.
[20] C.F. Lai, S.Y. Chang, H.C. Chao, and Y.M. Huang. Detection of cognitive injured body region using multiple
triaxial accelerometers for elderly falling. Sensors, 11(3):763–770, 2010. IEEE.
[21] G. Mastorakis and D. Makris. Fall detection system using Kinect’s infrared sensor. Journal of Real-Time Image
Processing, 9(4):635–646, 2014. Springer.
[22] M.J. Mathie, A.C.F. Coster, N.H. Lovell, and B.G. Celler. Accelerometry: providing an integrated, practical
method for long-term, ambulatory monitoring of human movement. Physiological measurement, 25(2):R1, 2004.
IOP Publishing.
[23] K. De Miguel, A. Brunete, M. Hernando, and E. Gambao. Home camera-based fall detection system for the elderly.
Sensors, 17(12):2864, 2017. MDPI.
[24] M. Mubashir, L. Shao, and L. Seed. A survey on fall detection: Principles and approaches. Neurocomputing,
100:144–152, 2013. Elsevier.
[25] V.A. Nguyen, T.H. Le, and T.H. Nguyen. Single camera based fall detection using motion and human shape
features. In Proc. of the Symposium on Information and Communication Technology (SoICT’16), pages 339–344,
Ho Chi Minh, Vietnam, 2016.
[26] A.T. Özdemir and B. Barshan. Detecting falls with wearable sensors using machine learning techniques. Sensors,
14(6):10691–10708, 2014. MDPI.
[27] N. Pannurat, S. Thiemjarus, and E. Nantajeewarawat. A hybrid temporal reasoning framework for fall monitoring.
IEEE Sensors Journal, 17(6):1749–1759, 2017. IEEE.
[28] A. Qandour, D. Habibi, and I. Ahmad. Wireless sensor networks for fire emergency and gas detection. In Proc.
of the International Conference on Networking, Sensing and Control (ICNSC’12), pages 250–255, Beijing, China,
2012. IEEE.
[29] H. Rimminen, J. Lindström, M. Linnavuo, and R. Sepponen. Detection of falls among the elderly by a floor sensor
using the electric near field. IEEE Transactions on Information Technology in Biomedicine, 14(6):1475–1476,
2010. IEEE.
[30] W. Saadeh, M.A.B. Altaf, and M.S.B. Altaf. A high accuracy and low latency patient-specific wearable fall
detection system. In Proc. of the International Conference on Biomedical & Health Informatics (BHI’17), pages
441–444, Orlando, FL, USA, 2017. IEEE.
[31] A.M. Sabatini, G. Ligorio, A. Mannini, V. Genovese, and L. Pinna. Prior-to-and post-impact fall detection
using inertial and barometric altimeter measurements. IEEE Transactions on Neural Systems and Rehabilitation
Engineering, 24(7):774–783, 2015. IEEE.
[32] A. Sucerquia, J.D. López, and J.F. Vargas-Bonilla. SisFall: A fall and movement dataset. Sensors, 17(1):198,
2017. MDPI.
[33] A.M. Tabar, A. Keshavarz, and H. Aghajan. Smart home care network using sensor fusion and distributed vision-
based reasoning. In Proc. of the International Workshop on Video Surveillance & Sensor Networks (VSSN’06),
pages 145–154, Santa Barbara, CA, USA, 2006.
[34] T. Tamura, T. Yoshimura, M. Sekine, M. Uchida, and O. Tanaka. A wearable airbag to prevent fall injuries. IEEE
Transactions on Information Technology in Biomedicine, 13(6):910–914, 2009. IEEE.
[35] F. Wang, Z. Wang, Z. Li, and J.R. Wen. Concept-based Short Text Classification and Ranking. In Proc. of the
International Conference on Information and Knowledge Management (CIKM’14), pages 1069–1078, Shanghai,
China, 2014. ACM.
[36] T. Zhang, J. Wang, L. Xu, and P. Liu. Fall detection by wearable sensor and one-class SVM algorithm. Intelligent
computing in signal processing and pattern recognition, pages 858–863, 2006. Springer.
[37] X. Zhuang, J. Huang, G. Potamianos, and M. Hasegawa-Johnson. Acoustic fall detection using Gaussian mixture
models and GMM supervectors. In Proc. of the International Conference on Acoustics, Speech and Signal
Processing (ICASSP’09), pages 69–72, Taipei, Taiwan, 2009. IEEE.