Cronfa - Swansea University Open Access Repository
_____________________________________________________________
This is an author produced version of a paper published in:
Multisensor Fusion and Integration in the Wake of Big Data, Deep Learning and Cyber Physical System
Cronfa URL for this paper:
http://cronfa.swan.ac.uk/Record/cronfa39574
_____________________________________________________________
Book chapter :
Fan, X., Zhang, H., Leung, C. & Shen, Z. (2019). Fall Detection with Unobtrusive Infrared Array Sensors. Multisensor
Fusion and Integration in the Wake of Big Data, Deep Learning and Cyber Physical System, (pp. 253-267). Springer.
_____________________________________________________________
Fall Detection with Unobtrusive Infrared Array
Sensors
Xiuyi Fan1, Huiguo Zhang2, Cyril Leung3, and Zhiqi Shen2
1Swansea University, Swansea, United Kingdom,
2Nanyang Technological University, Singapore,
3The University of British Columbia, Canada
Abstract. As the world’s aging population grows, falls are becoming a major public health problem and one of the most serious risks to the elderly. Many technology-based fall detection systems have been developed in recent years, with hardware ranging from wearable devices to ambience sensors and video cameras, and several machine learning based fall detection classifiers have been developed to process sensor data with varying degrees of success. In this paper, we present a fall detection system using infrared array sensors with several deep learning methods, including long short-term memory and gated recurrent unit models. Evaluated on fall data collected in two different sets of configurations, our approach gives a significant improvement over existing work using the same infrared array sensor.
Keywords: fall detection, machine learning, unobtrusive sensing
1 Introduction
Aging is a global challenge faced by many countries. The rapid growth of the aging population creates a high demand for assistive technologies supported by various sensor-actuator systems [15]. Many types of sensors are used in assisted living, including cameras [24], light sensors, accelerometers [39], temperature sensors, gyroscopes, barometers and infrared sensors [32]. These sensors are rich data sources for analyzing various aspects of a user’s daily life, ranging from health and fitness monitoring and personal biometric signatures to navigation and localization [25]. In this context, one particular problem is the detection of falls. Falls are among the most serious risks to the elderly’s health, as more than one in three elderly people suffer from fall consequences [12, 41]. When a fall occurs, it is urgent to provide immediate treatment to the injured; quick detection of falls is thus essential for timely treatment [38].
Technology-based fall detection has attracted great interest. It has generated a wide range of applied research and has prompted the development of telemonitoring systems to enable the early diagnosis of fall conditions [27]. Mubashir et al. divide fall detection systems into three categories: wearable devices, ambience sensors and cameras [25]. The first category requires the subject of interest to wear a device at all times, whereas the last two only require deploying a device in the vicinity of the subject.
In addition to sensor development, different data classification techniques have been developed for fall detection, and various algorithms for processing raw sensor data have been proposed in the literature. Roughly speaking, there are two schools of methods for fall detection: rule-based methods that detect falls with domain knowledge, and machine learning based approaches that learn fall characteristics from training data [15, 27].
In this work, we present a fall detection system based on data collected from Grid-Eye infrared array sensors, which are low-cost, low-resolution infrared thermal temperature sensors. These low-resolution sensors are far less privacy-intrusive than high-resolution sensors such as RGB cameras. Sensor data is processed with several mainstream deep learning models, including long short-term memory (LSTM) [11] and gated recurrent unit (GRU) models [6]. We have also experimented with these models augmented with attention mechanisms, as proposed in [7]. We compare our approach with the fall detection system reported in [22], which uses the same Grid-Eye sensor, and show that our approach yields an improvement over existing ones.
The rest of this paper is organized as follows. Section 2 reviews existing work on fall detection. Section 3 describes the deep learning classifiers developed in this work. Section 4 presents a performance evaluation of the developed fall detection system. We conclude and discuss future research directions in Section 5.
2 Related Work
Existing fall detection systems can be categorized into three types: wearable devices, camera systems and ambience sensors [25]. Wearable devices are sensors attached to the human body to capture body movements and recognize activities. Most wearable devices use accelerometers and gyroscopes [16, 4]. In these fall detection systems, sensors are attached to different parts of the user’s body, such as the waist [41], chest [12], and shoes [30]. One major problem with wearable-device based methods is that the user has to wear the device at all times, which causes considerable inconvenience; users also forget to wear such devices from time to time.
Camera based fall detection systems normally use RGB cameras [28]; recently, several studies have also used the Microsoft Kinect [33, 23]. Camera-based devices are commonly deployed throughout the elderly’s home or in public places. These systems have two limitations: the privacy intrusion of video monitoring and a lack of robustness.
Ambience sensor based fall detection systems have also been studied. Different sensors and devices, such as Doppler radar [19], passive infrared sensors [20, 37,
22, 5], pressure sensors [35, 14], sound sensors [18] and Wi-Fi routers [36] have
been tested for fall detection.
Much research has been devoted to fall detection classification algorithms [38, 1]. Two main categories of methods have been developed: rule-based methods that depend heavily on domain knowledge, and machine learning methods that recognize fall characteristics from sensor data [15, 27]. For instance, [3, 2, 13, 17] are early fall detection works using threshold-based algorithms. In those works, thresholds are set such that if any threshold is exceeded, a fall alert is triggered. The major drawback of these approaches is their lack of adaptability and flexibility.
At the same time, various machine learning based fall detection classifiers have been developed [21]. Mainstream machine learning approaches, including decision trees [29], support vector machines (SVM) [34], k-nearest neighbours (k-NN) [8] and hidden Markov models [10], have been applied to fall detection; see, e.g., [9, 26, 40, 5]. Many of these approaches rely on manually designed features for classification.
The following works are most relevant to ours. Liu et al. [19] develop a dual Doppler radar system for fall detection. A fusion methodology combines partial decision information from the two sensors in three different classifiers (k-NN, SVM and Bayes) to form a fall/non-fall decision based on Mel-frequency cepstral coefficient (MFCC) features. Its performance, measured by AUC, is 0.88 and 0.97.
Liu et al. [20] propose a two-layer hidden Markov model for recognizing fall events based on the signals of five passive infrared sensors placed at different heights on a wall. The reported sensitivity and specificity of their fall algorithm are 92.5% and 93.7%, respectively.
Chen et al. [5] use 16-by-4 thermopile array sensors for fall detection and elderly tracking. Two sensors are used in their system with a k-NN classifier, reaching 95.25% sensitivity, 90.75% specificity and 93% accuracy in their experiments. Sixsmith and Johnson [31] developed a Smart Inactivity Monitor using array-based detectors that also detects falls.
Mashiyama et al. [22] propose a fall detection system using an infrared array sensor. From a data sequence obtained in a fixed window, four manually crafted features (the number of consecutive frames, the maximum number of pixels, the maximum variance of temperature, and the distance moved by the maximum-temperature pixel) are extracted and used to classify falls versus non-falls with the k-NN algorithm. Experimental results on their test data show that their system reaches 94% accuracy.
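Under the description above, these four features can be sketched as follows. [22] does not specify how pixels belonging to a person are decided, so the fixed margin above an ambient-background estimate used here, like the function and variable names, is an assumption for illustration only.

```python
import numpy as np

def mashiyama_features(frames, background, thresh=2.0):
    """Sketch of the four hand-crafted features described in [22].

    frames: (T, 8, 8) array of temperatures in a detection window.
    background: (8, 8) ambient-temperature estimate.
    thresh: assumed margin (deg C) above background for a "hot" pixel.
    """
    hot = frames - background > thresh            # (T, 8, 8) boolean mask
    active = hot.any(axis=(1, 2))                 # frames containing a person
    # 1) longest run of consecutive active frames
    runs, run = [], 0
    for a in active:
        run = run + 1 if a else 0
        runs.append(run)
    n_consecutive = max(runs) if runs else 0
    # 2) maximum number of hot pixels in any frame
    max_pixels = int(hot.sum(axis=(1, 2)).max())
    # 3) maximum per-frame temperature variance
    max_var = float(frames.var(axis=(1, 2)).max())
    # 4) distance travelled by the hottest pixel between first and last frame
    first = np.unravel_index(frames[0].argmax(), (8, 8))
    last = np.unravel_index(frames[-1].argmax(), (8, 8))
    distance = float(np.hypot(last[0] - first[0], last[1] - first[1]))
    return n_consecutive, max_pixels, max_var, distance
```

The resulting 4-vector would then be fed to a k-NN classifier as in [22].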
3 Fall Detection Classifiers
At the core of our fall detection system is the infrared array sensor, Grid-Eye
(AMG8832). A Grid-Eye sensor outputs an 8-pixel by 8-pixel temperature dis-
tribution in its 60-degree field of view at a maximum 10-frame per second rate.
Its maximum detection distance is 5m if there is a 4C temperature difference
between the foreground object and the background ambience. We use a Zig-
Bee CC2530 as a microprocessor to control the sensor via an I2C bus as shown
4 Fan et al.
in Figure 1. The measured temperature distribution is sent to another ZigBee
CC2530 at a 10Hz rate. A standard PC is then used for data processing and
classification.
Fig. 1. The Grid-Eye sensor package used in our experiment.
Although a Grid-Eye sensor measures temperature over a large range (−20 °C to 100 °C), its temperature accuracy is only 3.0 °C. Since thermal-image based fall detection depends on correctly identifying the abrupt movement of a human body, the ability to recognize subtle temperature differences between the human body and the ambience is key to correct detection. However, as illustrated in Figure 2, data obtained from Grid-Eye sensors is noisy. (In this figure, warm colours indicate high temperatures.) Thus, we develop a fall detection system with two main components: (1) data filters for pre-processing and (2) neural networks for classification. As illustrated in Figure 3, data produced by the Grid-Eye is first filtered with one of the filters; the filtered data is then passed to the neural network classifiers.
Three filters (median, Gaussian and wavelet) have been evaluated in this work. For neural network classifiers, we have experimented with two-layer
Fig. 2. Illustration of Grid-Eye images. Top left: no person in Grid-Eye’s field of view.
Top right: a person standing on the right-hand side. Bottom left: a person falling from
the right-hand side. Bottom right: a person lying in front of the Grid-Eye.
Fig. 3. Fall Detection Classification Workflow.
perceptron networks (Figure 4), long short-term memory (LSTM) networks and
gated recurrent unit (GRU) networks (Figure 5), each with and without attention
links.
Fig. 4. Two-layer Fully Connected Perceptron Network.
Fig. 5. LSTM / GRU Networks.
As illustrated in Figure 6, the developed system works as follows. At each time step t, the Grid-Eye outputs a thermal reading represented as a 1×64 vector. To detect falls, we examine data collected in a 2-second (outer) window. Since the Grid-Eye runs at 10 Hz, twenty 1×64 vectors are collected during each outer window. We then filter the data stored in this outer window with one of the three filters. For both the median and Gaussian filters, an inner window of size 5 is used; for the wavelet filter, we use the Daubechies 4-tap wavelet. The filtering process does not change the size of the data. The filtered data is then sent to the neural networks for classification.
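The median-filter variant of this pre-processing step can be sketched with NumPy. The paper does not give the implementation, so a per-pixel sliding median along the time axis, truncated at the window edges, is an assumption:

```python
import numpy as np

def median_filter_window(window, inner=5):
    """Apply a temporal median filter to one detection window.

    window: (20, 64) array -- twenty 1x64 Grid-Eye frames (2 s at 10 Hz).
    inner:  size of the sliding inner window along the time axis.
    Output keeps the (20, 64) shape, as described in the text.
    """
    T = window.shape[0]
    half = inner // 2
    out = np.empty_like(window)
    for t in range(T):
        lo, hi = max(0, t - half), min(T, t + half + 1)
        out[t] = np.median(window[lo:hi], axis=0)  # per-pixel median over time
    return out
```

A single hot-pixel glitch in one frame is suppressed by the median, while a sustained warm region (a person) passes through unchanged.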
Two-layer perceptron networks with the following configuration are selected for their simplicity. The input layer contains 64 × 20 = 1280 nodes (64 is the length of the Grid-Eye output vector and 20 is the size of the outer window).
Fig. 6. Data layout for filters and classifiers.
The fully connected hidden layer contains 400 nodes. The output layer contains 2 nodes (indicating fall and non-fall, respectively).
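A forward pass through such a 1280–400–2 network can be sketched as below. The paper does not state the activation functions, so the ReLU hidden units and softmax output are assumptions, and the weights here are random placeholders rather than trained values.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.01, size=(1280, 400))   # input -> hidden
b1 = np.zeros(400)
W2 = rng.normal(scale=0.01, size=(400, 2))      # hidden -> output (fall / no fall)
b2 = np.zeros(2)

def mlp_forward(window):
    """window: (20, 64) filtered frames, flattened into a 1280-vector."""
    x = window.reshape(-1)                      # 20 * 64 = 1280 inputs
    h = np.maximum(0.0, x @ W1 + b1)            # ReLU hidden layer, 400 nodes
    logits = h @ W2 + b2                        # 2 output nodes
    p = np.exp(logits - logits.max())
    return p / p.sum()                          # softmax over {fall, no fall}
```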
LSTM and GRU networks have seen many successes in recent years. They both contain “memory structures” (LSTM cells and GRU units, respectively) to store past information. As illustrated in Figure 5, the input layers of our LSTM and GRU networks both contain 64 nodes. There is a fully connected perceptron layer with 64 nodes between the LSTM/GRU layer and the 2-node output layer. The LSTM model can be described by the following equations:
i = σ(x_t U^i + s_{t−1} W^i)    (1)
f = σ(x_t U^f + s_{t−1} W^f)    (2)
o = σ(x_t U^o + s_{t−1} W^o)    (3)
g = tanh(x_t U^g + s_{t−1} W^g)    (4)
c_t = c_{t−1} ∘ f + g ∘ i    (5)
s_t = tanh(c_t) ∘ o    (6)
Here, σ is the sigmoid function and ∘ denotes element-wise multiplication. x_t is the input at time t and s_t is the output of the cell at time t. The U’s and W’s are weight matrices connecting the various components. Specifically, in our system, x_t and s_t are 1-by-64 vectors, and the U’s and W’s are 64-by-64 matrices.
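Equations (1)–(6) translate directly into a per-timestep update; the sketch below mirrors them term by term (bias terms are omitted, as in the equations above, and the dictionary-of-matrices layout is only an illustration).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, s_prev, c_prev, U, W):
    """One LSTM step following Eqs. (1)-(6).

    x_t, s_prev, c_prev: length-64 vectors (input, output, cell state).
    U, W: dicts of 64x64 weight matrices for the gates 'i', 'f', 'o', 'g'.
    """
    i = sigmoid(x_t @ U['i'] + s_prev @ W['i'])   # input gate, Eq. (1)
    f = sigmoid(x_t @ U['f'] + s_prev @ W['f'])   # forget gate, Eq. (2)
    o = sigmoid(x_t @ U['o'] + s_prev @ W['o'])   # output gate, Eq. (3)
    g = np.tanh(x_t @ U['g'] + s_prev @ W['g'])   # candidate memory, Eq. (4)
    c_t = c_prev * f + g * i                      # cell state update, Eq. (5)
    s_t = np.tanh(c_t) * o                        # cell output, Eq. (6)
    return s_t, c_t
```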
GRU [6] is a recently proposed variation of the LSTM model. The main
difference is that, instead of using three gates to control memory updates, a
GRU unit uses only two gates. Formally, a GRU model can be described with
the following equations:
z = σ(x_t U^z + s_{t−1} W^z)    (7)
r = σ(x_t U^r + s_{t−1} W^r)    (8)
h = tanh(x_t U^h + (s_{t−1} ∘ r) W^h)    (9)
s_t = (1 − z) ∘ h + z ∘ s_{t−1}    (10)
Again, σ is the sigmoid function; x_t is the input at time t; h is the output; and s_t is the internal state of a GRU unit at time t. The sizes of the U’s and W’s are the same as in the LSTM. Essentially, we use the same network structure as in our LSTM implementation, with LSTM cells replaced by GRU units.
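Equations (7)–(10) can likewise be written as a single update step:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, s_prev, U, W):
    """One GRU step following Eqs. (7)-(10); all matrices are 64x64."""
    z = sigmoid(x_t @ U['z'] + s_prev @ W['z'])          # update gate, Eq. (7)
    r = sigmoid(x_t @ U['r'] + s_prev @ W['r'])          # reset gate, Eq. (8)
    h = np.tanh(x_t @ U['h'] + (s_prev * r) @ W['h'])    # candidate, Eq. (9)
    return (1.0 - z) * h + z * s_prev                    # new state s_t, Eq. (10)
```

Note that with only two gates, a GRU step carries fewer parameters than the four-gate LSTM step above.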
Introducing an attention mechanism into the LSTM and GRU models in this work is straightforward. Conceptually, the attention mechanism provides a means of specifying the relative importance of each frame in a classification window (20 frames in our case). For instance, s_t in Equation 6 for t = 20 not only depends on s_19 but also depends (directly) on all previous s_i, for 1 ≤ i ≤ 19, i.e.,
s_20 = Σ_{0 ≤ i < 20} ω_i s_i,    (11)
for some weights ω_i that are also learned with backpropagation through time, as are U and W.
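A minimal sketch of this read-out: the classifier consumes a learned weighted combination of all 20 hidden states rather than only the last one. Normalising the ω_i with a softmax is an assumption here; the text only states that they are learned.

```python
import numpy as np

def attention_readout(states, omega_logits):
    """Weighted sum of hidden states, as in Eq. (11).

    states: (20, 64) -- hidden states s_0 .. s_19 from the LSTM/GRU layer.
    omega_logits: (20,) learned scores; a softmax turns them into weights.
    """
    w = np.exp(omega_logits - omega_logits.max())
    w /= w.sum()                       # weights sum to 1
    return w @ states                  # (64,) vector fed to the output layer
```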
4 Performance Evaluation
To evaluate the performance of the developed system, we conducted fall detection experiments in our laboratory environment (Figure 7). In these tests, we created a dataset with 312 falls in two sets of configurations. As illustrated in Figure 8, in the first set of experiments, the test subject falls perpendicular to the Grid-Eye sensor at three different positions, A, B and C. In the second set of experiments, the test subject falls parallel to the Grid-Eye sensor, also at positions A, B and C. In both configurations, negative examples, including walking randomly in the room, sitting down slowly, jumping, running and lying down in front of the sensor, were performed. The dataset was created in multiple sessions across several days, with ambient temperatures ranging from 19 °C to 23 °C.
For evaluation, we divided the dataset into a training set with 240 falls and a testing set with 72 falls, with each falling position containing exactly the same number of falls. Since robust fall detection requires high ratings in both precision and recall, reducing both false positives and false negatives, we compare results
Fig. 7. Testing Environment (illustrated for one testing configuration).
Fig. 8. Illustration of Experiment Configurations. In configurations shown on the left,
the testing subject falls in directions perpendicular to the Grid-Eye at positions A,
B and C. In configurations shown on the right, the testing subject falls in directions
parallel to the Grid-Eye at positions A, B and C.
with F1 scores for each test case, defined as follows.
Precision = True Positive / (True Positive + False Positive)
Recall = True Positive / (True Positive + False Negative)
F1 = (2 × Precision × Recall) / (Precision + Recall)
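These definitions can be checked against a row of Table 1: the “No Filter (H)” case has 35 true positives, 1 false negative, and 36 detections in total, hence one false positive.

```python
def prf1(tp, fp, fn):
    """Precision, recall and F1 from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# "No Filter (H)" row of Table 1: 36 detections, 35 true positives, 1 false negative
p, r, f = prf1(tp=35, fp=36 - 35, fn=1)
# each value rounds to 0.972, matching the table
```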
Table 1: Experiment Results from the MLP classifier.
F-Score Precision Recall Detections (TP+FP) True Positives False Negatives
No Filter (H) 0.972 0.972 0.972 36 35 1
No Filter (V) 0.679 0.522 0.972 67 35 1
Median Filter (H) 0.986 0.972 1 37 36 0
Median Filter (V) 0.666 0.619 0.722 42 26 10
Gaussian Filter (H) 0.972 0.972 0.972 36 35 1
Gaussian Filter (V) 0.693 0.666 0.722 39 26 10
Wavelet Filter (H) 0.972 0.947 1 38 36 0
Wavelet Filter (V) 0.658 0.568 0.75 46 27 9
Table 2: Experiment Results from the LSTM classifier.
F-Score Precision Recall Detections (TP+FP) True Positives False Negatives
No Filter (H) 0.956 1 0.916 33 33 3
No Filter (V) 0.864 0.777 0.972 45 35 1
Median Filter (H) 1 1 1 36 36 0
Median Filter (V) 0.805 0.805 0.805 36 29 7
Gaussian Filter (H) 0.986 0.972 1 37 36 0
Gaussian Filter (V) 0.805 0.805 0.805 36 29 7
Wavelet Filter (H) 0.986 0.972 1 37 36 0
Wavelet Filter (V) 0.746 0.659 0.861 47 31 5
Experiment results from our systems are shown in Tables 1–5. In each table, rows labelled (H) and (V) are experiment results from falls parallel and perpendicular to the Grid-Eye sensor, respectively. Overall, we make the following observations.
– Measured by F1 scores, all classifiers perform better in settings where users fall parallel to the sensor. This indicates that falling parallel to the sensor is intrinsically easier to classify than falling perpendicular to it.
– Introducing filters to remove noise improves performance in certain cases. Amongst the three filters tested, the simple median filter performs better than the other two.
Table 3: Experiment Results from the LSTM-ATT classifier.
F-Score Precision Recall Detections (TP+FP) True Positives False Negatives
No Filter (H) 0.972 0.947 1 38 36 0
No Filter (V) 0.857 0.804 0.916 41 33 3
Median Filter (H) 0.947 0.9 1 40 36 0
Median Filter (V) 0.819 0.723 0.944 47 34 2
Gaussian Filter (H) 0.96 0.923 1 39 36 0
Gaussian Filter (V) 0.735 0.627 0.888 51 32 4
Wavelet Filter (H) 0.944 0.944 0.944 36 34 2
Wavelet Filter (V) 0.749 0.681 0.833 44 30 6
Table 4: Experiment Results from the GRU classifier.
F-Score Precision Recall Detections (TP+FP) True Positives False Negatives
No Filter (H) 0.972 0.9447 1 38 36 0
No Filter (V) 0.825 0.75 0.916 44 33 3
Median Filter (H) 0.935 0.878 1 41 36 0
Median Filter (V) 0.819 0.723 0.944 47 34 2
Gaussian Filter (H) 0.972 0.972 0.972 36 35 1
Gaussian Filter (V) 0.722 0.638 0.833 47 30 6
Wavelet Filter (H) 0.911 0.837 1 43 36 0
Wavelet Filter (V) 0.692 0.642 0.75 42 27 9
Table 5: Experiment Results from the GRU-ATT classifier.
F-Score Precision Recall Detections (TP+FP) True Positives False Negatives
No Filter (H) 0.935 0.878 1 41 36 0
No Filter (V) 0.904 0.891 0.916 37 33 3
Median Filter (H) 0.986 0.972 1 37 36 0
Median Filter (V) 0.742 0.764 0.722 34 26 10
Gaussian Filter (H) 0.945 0.921 0.972 38 35 1
Gaussian Filter (V) 0.739 0.729 0.75 37 27 9
Wavelet Filter (H) 0.933 0.897 0.972 39 35 1
Wavelet Filter (V) 0.722 0.638 0.833 47 30 6
– There is no clear winner between the LSTM and GRU models; the memory ability of both models works well.
– Introducing attention mechanisms into the LSTM and GRU models does not consistently improve performance. This may suggest that fall detection draws on information from all frames containing a fall equally, so there is no advantage in focusing the detection on any single moment of the fall.
– When the classification problem is easy (parallel settings), the MLP does not expose its weakness; however, when the problem gets harder (perpendicular settings), models that explicitly record previous information perform significantly better.
To put our results into perspective, we compare our approaches with the model presented in [22], which uses the same Grid-Eye sensor with a k-NN classifier and four manually crafted features. We replicated their system and tested it on our dataset; the comparison results are shown in Tables 6 (perpendicular to the sensor) and 7 (parallel to the sensor). From these two tables, we see that their approach also performs better when falls are parallel to the sensor. However, overall, their k-NN classifier with manually crafted features performs worse than all of our neural network based approaches with data filtering.
Table 6: Fall Detection Performance (Falls are perpendicular to the Grid-Eye).
Precision Recall F1
GRU-ATT 0.891 0.916 0.904
GRU 0.75 0.916 0.825
LSTM-ATT 0.804 0.916 0.857
LSTM 0.777 0.972 0.864
MLP 0.666 0.722 0.693
k-NN [22] 0.52 1 0.68
Table 7: Fall Detection Performance (Falls are parallel to the Grid-Eye).
Precision Recall F1
GRU-ATT 0.97 1 0.99
GRU 0.972 0.972 0.972
LSTM-ATT 0.947 1 0.972
LSTM 1 1 1
MLP 0.972 1 0.986
k-NN [22] 0.83 0.97 0.9
We have also experimented with different outer window sizes for fall detection using four different classifiers. In the original setting, the outer window size is 20 (see Figure 6), meaning that each fall detection occurs in a 2-second window, as the Grid-Eye runs at 10 Hz. Tables 8 and 9 show fall detection results with an outer window of 30; performance is considerably lower for all four classifiers (the median filter was used in these experiments). We interpret these results as follows: since a fall is an instantaneous event, increasing the window size does not help improve detection performance.
Table 8: Fall Detection Performance with 3-seconds detection window (Falls are
perpendicular to the Grid-Eye).
Precision Recall F1
GRU-ATT 0.632 0.861 0.729
GRU 0.731 0.833 0.779
LSTM-ATT 0.695 0.888 0.780
LSTM 0.82 0.888 0.853
Table 9: Fall Detection Performance with 3-seconds detection window (Falls are
parallel to the Grid-Eye).
Precision Recall F1
GRU-ATT 0.7 0.972 0.813
GRU 0.809 0.944 0.871
LSTM-ATT 0.875 0.972 0.921
LSTM 0.947 1 0.972
5 Conclusion
Fall is a major health threat to the elderly. In event of fall, it is urgent to pro-
vide immediate treatment to the injured people. In this paper, we present a fall
detection system using Grid-Eye infrared array sensor. Due to its low spatial res-
olution, infrared array sensor incurs little privacy intrusion and can be deployed
to sensitive areas such as washrooms, which are known to be fall-prone. For data
processing, we have taken a two-step approach: (1) pre-processing data filtering
and (2) machine learning classification with neural networks. For filtering, we
have experimented with Wavelet, Gaussian and Median filters. For classification,
we have experimented with several deep learning models, including multi-layer
perceptrons, LSTM and GRU. To evaluate our approaches, we have created a
dataset containing over 300 falls in multiple configurations. We then compare
our work with an existing work using the same infrared array sensor but with
different classification techniques and show significantly improved classification
accuracy. In the future, we would like to (1) perform in depth theoretical study,
including computational complexity analysis, of the proposed methods, (2) de-
ploy our system to nursing homes for real-world experiment and (3) explore fall
detection with other ambience sensor systems and deployment configurations.
References
1. F. Bagala, C. Becker, A. Cappello, L. Chiari, K. Aminian, J. M. Hausdorff, W. Zi-
jlstra, and J. Klenk. Evaluation of accelerometer-based fall detection algorithms
on real-world falls. PLoS ONE, 7(5), 2012.
2. A. K. Bourke and G. M. Lyons. A threshold-based fall-detection algorithm using
a bi-axial gyroscope sensor. Med Eng Phys, 30(1):84–90, Jan 2008.
3. A. K. Bourke, J. V. O’Brien, and G. M. Lyons. Evaluation of a threshold-based
tri-axial accelerometer fall detection algorithm. Gait Posture, 26(2):194–199, Jul
2007.
4. J. Chen, K. Kwong, D. Chang, J. Luk, and R. Bajcsy. Wearable sensors for reliable
fall detection. In 2005 IEEE Engineering in Medicine and Biology 27th Annual
Conference, pages 3551–3554, Jan 2005.
5. Wei-Han Chen and Hsi-Pin Ma. A fall detection system based on infrared array
sensors with tracking capability for the elderly at home. In 2015 17th International
Conference on E-health Networking, Application Services (HealthCom), pages 428–
434, Oct 2015.
6. Kyunghyun Cho, Bart van Merrienboer, Çaglar Gülçehre, Dzmitry Bahdanau,
Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representa-
tions using RNN encoder-decoder for statistical machine translation. In Proceed-
ings of the 2014 Conference on Empirical Methods in Natural Language Processing,
EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special
Interest Group of the ACL, pages 1724–1734, 2014.
7. Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, KyungHyun Cho, and
Yoshua Bengio. Attention-based models for speech recognition. CoRR,
abs/1506.07503, 2015.
8. Sahibsingh A. Dudani. The distance-weighted k-nearest-neighbor rule. IEEE
Trans. Systems, Man, and Cybernetics, 6(4):325–327, 1976.
9. Raghu K. Ganti, Praveen Jayachandran, Tarek F. Abdelzaher, and John A.
Stankovic. Satire: A software architecture for smart attire. In Proceedings of
the 4th International Conference on Mobile Systems, Applications and Services,
MobiSys ’06, pages 110–123, New York, NY, USA, 2006. ACM.
10. Zoubin Ghahramani. An introduction to hidden markov models and bayesian
networks. IJPRAI, 15(1):9–42, 2001.
11. Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Com-
putation, 9(8):1735–1780, 1997.
12. J. Y. Hwang, J. M. Kang, Y. W. Jang, and H. C. Kim. Development of novel
algorithm and real-time monitoring ambulatory system using bluetooth module
for fall detection in the elderly. In Engineering in Medicine and Biology Society,
2004. IEMBS ’04. 26th Annual International Conference of the IEEE, volume 1,
pages 2204–2207, Sept 2004.
13. M. Kangas, A. Konttila, I. Winblad, and T. Jamsa. Determination of simple
thresholds for accelerometry-based parameters for fall detection. Conf Proc IEEE
Eng Med Biol Soc, 2007:1367–1370, 2007.
14. Lars Klack, Christian Möllering, Martina Ziefle, and Thomas Schmitz-Rode. Fu-
ture Care Floor: A Sensitive Floor for Movement Monitoring and Fall Detection
in Home Environments, pages 211–218. Springer Berlin Heidelberg, Berlin, Hei-
delberg, 2011.
15. Simon Kozina, Hristijan Gjoreski, Matjaž Gams, and Mitja Luštrek. Efficient Ac-
tivity Recognition and Fall Detection Using Accelerometers, pages 13–23. Springer
Berlin Heidelberg, Berlin, Heidelberg, 2013.
16. Q. Li, J. A. Stankovic, M. A. Hanson, A. T. Barth, J. Lach, and G. Zhou. Accurate,
fast fall detection using gyroscopes and accelerometer-derived posture information.
In 2009 Sixth International Workshop on Wearable and Implantable Body Sensor
Networks, pages 138–143, June 2009.
17. Qiang Li, Gang Zhou, and John A. Stankovic. Accurate, fast fall detection using
posture and context information. In Proceedings of the 6th ACM Conference on
Embedded Network Sensor Systems, SenSys ’08, pages 443–444, New York, NY,
USA, 2008.
18. Y. Li, Z. Zeng, M. Popescu, and K. C. Ho. Acoustic fall detection using a cir-
cular microphone array. In 2010 Annual International Conference of the IEEE
Engineering in Medicine and Biology, pages 2242–2245, Aug 2010.
19. L. Liu, M. Popescu, M. Skubic, and M. Rantz. An automatic fall detection frame-
work using data fusion of doppler radar and motion sensor network. In 2014 36th
Annual International Conference of the IEEE Engineering in Medicine and Biology
Society, pages 5940–5943, Aug 2014.
20. Tong Liu, Xuemei Guo, and Guoli Wang. Elderly-falling detection using distributed
direction-sensitive pyroelectric infrared sensor arrays. Multidimensional Systems
and Signal Processing, 23(4):451–467, 2012.
21. Mitja Luštrek and Boštjan Kaluža. Fall detection and activity recognition with
machine learning. Informatica (Slovenia), 33:197–204, 2009.
22. S. Mashiyama, J. Hong, and T. Ohtsuki. A fall detection system using low resolu-
tion infrared array sensor. In 2014 IEEE 25th Annual International Symposium on
Personal, Indoor, and Mobile Radio Communication (PIMRC), pages 2109–2113,
Sept 2014.
23. Georgios Mastorakis and Dimitrios Makris. Fall detection system using kinect’s
infrared sensor. Journal of Real-Time Image Processing, 9(4):635–646, 2014.
24. S. G. Miaou, Pei-Hsu Sung, and Chia-Yuan Huang. A customized human fall detec-
tion system using omni-camera images and personal information. In 1st Transdis-
ciplinary Conference on Distributed Diagnosis and Home Healthcare, 2006. D2H2.,
pages 39–42, April 2006.
25. Muhammad Mubashir, Ling Shao, and Luke Seed. A survey on fall detection:
Principles and approaches. Neurocomputing, 100:144–152, 2013.
26. H. Nait-Charif and S. J. McKenna. Activity summarisation and fall detection in a
supportive home environment. In Proceedings of the 17th International Conference
on Pattern Recognition, 2004. ICPR 2004., volume 4, pages 323–326 Vol.4, Aug
2004.
27. N. Noury, A. Fleury, P. Rumeau, A. K. Bourke, G. O. Laighin, V. Rialle, and J. E.
Lundy. Fall detection - principles and methods. In 2007 29th Annual International
Conference of the IEEE Engineering in Medicine and Biology Society, pages 1663–
1666, Aug 2007.
28. C. Rougier, J. Meunier, A. St-Arnaud, and J. Rousseau. Fall detection from hu-
man shape and motion history using video surveillance. In Advanced Information
Networking and Applications Workshops, 2007, AINAW ’07. 21st International
Conference on, volume 2, pages 875–880, May 2007.
29. S. Rasoul Safavian and David A. Landgrebe. A survey of decision tree classifier
methodology. IEEE Trans. Systems, Man, and Cybernetics, 21(3):660–674, 1991.
30. S. Y. Sim, H. S. Jeon, G. S. Chung, S. K. Kim, S. J. Kwon, W. K. Lee, and
K. S. Park. Fall detection algorithm for the elderly using acceleration sensors on
the shoes. In 2011 Annual International Conference of the IEEE Engineering in
Medicine and Biology Society, pages 4935–4938, Aug 2011.
31. A. Sixsmith and N. Johnson. A smart sensor to detect the falls of the elderly.
IEEE Pervasive Computing, 3(2):42–47, April 2004.
32. Andrew Sixsmith and Neil Johnson. A smart sensor to detect the falls of the
elderly. IEEE Pervasive Computing, 3:42–47, 2004.
33. E. E. Stone and M. Skubic. Fall detection in homes of older adults using the
Microsoft Kinect. IEEE Journal of Biomedical and Health Informatics, 19(1):290–
301, Jan 2015.
34. Johan A. K. Suykens and Joos Vandewalle. Least squares support vector machine
classifiers. Neural Processing Letters, 9(3):293–300, 1999.
35. Huan-Wen Tzeng, Mei-Yung Chen, and J. Y. Chen. Design of fall detection system
with floor pressure and infrared image. In 2010 International Conference on System
Science and Engineering, pages 131–135, July 2010.
36. H. Wang, D. Zhang, Y. Wang, J. Ma, Y. Wang, and S. Li. RT-Fall: A real-time and
contactless fall detection system with commodity WiFi devices. IEEE Transactions
on Mobile Computing, PP(99):1–1, 2016.
37. Piotr Wojtczuk, David Binnie, Alistair Armitage, Tim Chamberlain, and Carsten
Giebeler. A touchless passive infrared gesture sensor. In Proceedings of the Adjunct
Publication of the 26th Annual ACM Symposium on User Interface Software and
Technology, UIST ’13 Adjunct, pages 67–68, New York, NY, USA, 2013. ACM.
38. Xinguo Yu. Approaches and principles of fall detection for elderly and patient. In
HealthCom 2008 - 10th International Conference on e-health Networking, Applica-
tions and Services, pages 42–47, July 2008.
39. Tong Zhang, Jue Wang, Ping Liu, and Jing Hou. Fall detection by embedding
an accelerometer in cellphone and using KFD algorithm. IJCSNS International
Journal of Computer Science and Network Security, 2006.
40. Tong Zhang, Jue Wang, Liang Xu, and Ping Liu. Fall Detection by Wearable Sensor
and One-Class SVM Algorithm, pages 858–863. Springer Berlin Heidelberg, Berlin,
Heidelberg, 2006.
41. J. Zheng, G. Zhang, and T. Wu. Design of automatic fall detector for elderly based
on triaxial accelerometer. In 2009 3rd International Conference on Bioinformatics
and Biomedical Engineering, pages 1–4, June 2009.