
EEG Wavelet Classification for Fall Detection with Genetic Programming



The ability to autonomously detect a physical fall is one of the many enabling technologies towards better independent living. This work explores how genetic programming can be leveraged to develop machine learning pipelines for the classification of falls via EEG brainwave activity. Eleven physical activities (5 types of falls and 6 non-fall activities) are clustered into a binary classification problem of whether a fall has occurred or not. Wavelet features are extracted from the brainwaves before machine learning models are explored and tuned for better k-fold classification accuracy, precision, recall, and F1 score. Results show that solutions discovered through genetic programming can detect falls with a mean accuracy of 89.34%, precision of 0.883, recall of 0.908, and an F1-Score of 0.895 from EEG brainwave data alone. All three genetic programming solutions chose a further step of Principal Component Analysis for additional feature extraction from the computed wavelet features, each with iterated powers of 6, 3, and 7, and all with a randomised Singular Value Decomposition approach. The best model is finally analysed via the Receiver Operating Characteristic and Precision-Recall curves. Python code for each of the genetic programming pipelines are provided.
Jordan J. Bird
Computational Intelligence and Applications Research Group (CIA), Nottingham Trent University
Nottingham, Nottinghamshire, United Kingdom
CCS Concepts: • Computing methodologies → Machine learning; Machine learning approaches; • Human-centered computing.

Keywords: Fall Detection, EEG, Signal Processing, Signal Classification
1 Introduction
The ability to autonomously detect a physical fall is one of many enabling technologies towards better independent living. Many state-of-the-art fall detection techniques are based on the detection of physical movements, such as through accelerometers and gyroscopes, whereas many consider other traits such as bioelectrical activity from the muscles and brain. Applied machine learning is never perfect, and thus the provision of multiple methods of fall detection reduces the potential error in the real world, since there are several observational models to consider rather than reliance on just one or a few. In the United Kingdom during 2021, there were more deaths registered than births [ ], in part due to the world facing an ever-ageing population. The demographics of those who provide care and those who are service users are changing in size and pace at considerable rates throughout the world [ ], and thus changes are required for healthcare systems throughout the world to continue to operate effectively and provide a suitable level of care to those who require it. A number of state-of-the-art solutions to these issues are presented in the form of applied artificial intelligence for independent assisted living [ ]. This work proposes the utilisation of a single electroencephalography electrode to detect the event of a fall autonomously through a process of data collection, feature extraction, processing, and machine learning. To detect a fall by this method would provide a further facet to independent assisted living and allow for further independence within the home. The main scientific contributions of this work are as follows:
• Exploration of brainwave features via Kullback-Leibler Divergence shows that the absolute mean of the 8th wavelet scale and the variance of the 3rd wavelet scale hold the most information for fall classification.
• Balancing and normalisation provide an alleviation to the data scarcity of brainwave activity recorded during a fall.
• Manual tuning of machine learning models presents a Gaussian Process as a candidate for fall detection.
• Genetic Programming to develop pipelines for better classification is successful, and the three solutions found outperform all other approaches explored within this work.
The remainder of this article is as follows: Section 2 explores the background and state of the art within the fields of study related to this work. Section 3 then describes the method of the experiments prior to the results being presented in Section 4. Finally, Section 5 concludes this study and suggests future work based on the findings.
2 Background
Falls in older adults are caused in part by loss of balance due to ageing [ ]. The risk of preventable injury by a fall grows with age, with around 33% of older adults experiencing a fall once or more per year, and around half of people over the age of 80 experiencing falls annually [ ]. According to the NHS, falls do not often result in serious physical injuries in older adults, but can cause the person to lose confidence, withdraw socially, and feel like they have lost their independence [ ]. It was noted in [ ] that 0.1% of all healthcare expenditures in the United States and 1.5% in Europe are directly related to fall-related injuries. The review notes risk factors including impaired balance and gait, polypharmacy, history of previous falls, advancing age, sex, visual impairments, cognitive decline, and environmental factors. In the United States, there were an estimated 10,300 fatal and 2.6 million non-fatal fall-related injuries in the year 2000 alone [ ]. The main goal of fall detection is the employment of technology to detect a fall event (abnormal behaviour recognition), leading to a quicker response
Table 1: Class labels applied to group the 11 individual activities found within the dataset [14].
Activity Duration (s) Class Label
Falling forward using hands 10 Falling
Falling forward using knees 10 Falling
Falling backwards 10 Falling
Falling sideward 10 Falling
Falling sitting in empty chair 10 Falling
Walking 60 Not Falling
Standing 60 Not Falling
Sitting 60 Not Falling
Picking up an object 10 Not Falling
Jumping 30 Not Falling
Laying 60 Not Falling
from carers, and alleviates issues in situations where the sufferer of the fall cannot locate or reach an emergency call button or cord [ ]. Falls can be detected through a number of proposed methods, including the analysis of wireless networks [ ], computer vision [ ], thermal image processing [ ], acoustic classification [ ], and activities recorded via wearable sensors [ ]. Adkin et al. [ ] note that compensatory balance reactions are recognisable within recorded EEG data. In [ ], the authors proposed a random forest ensemble for the classification of fall events and drowsiness with electrodes embedded within a helmet. The model achieved around 98% accuracy, but the authors note the exhaustiveness of the electrode-array approach in terms of its computational complexity and thus the inference time of the model, and they propose that future work may find more efficiency in an array of fewer electrodes. Annese et al. [ ] proposed a multimodal approach to learning from EEG and EMG signals. In particular, seven electrodes are placed around the motor cortex and the occipital lobe. The results on the dataset were almost perfect, but similarly to the previous work, the authors note the computational expense of the approach. Given the level of consumer hardware available which could be provided by healthcare systems, a slow classification of an event would not solve the goal of quick response times that fall detection requires during real-world use. The NeuroSky EEG headset has a single electrode placed on the Fp1 position within the 10-20 EEG electrode placement system. Although many of the commercial applications of the device are based on the classification of concentration [ ], the NeuroSky has proposed applications in fatigue detection [ ], blink detection [ ], and fall detection [14].
3 Method
The initial raw signals are collected from the preliminary UP-Fall detection dataset presented in [ ]. The dataset is comprised of 11 activities performed by 4 subjects (three trials each). This work focuses only on the data recorded by the NeuroSky MindWave EEG device, and all other features are disregarded. Table 1 details the binary classification problem that is formed from the dataset due to the consideration of whether a fall is occurring during the recording.
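The Table 1 grouping can be sketched as a simple mapping; the activity strings follow the table, and the function name is illustrative rather than taken from the paper:

```python
# The 5 fall activities listed in Table 1; everything else is "Not Falling".
FALL_ACTIVITIES = {
    "Falling forward using hands",
    "Falling forward using knees",
    "Falling backwards",
    "Falling sideward",
    "Falling sitting in empty chair",
}

def to_binary_label(activity: str) -> str:
    """Collapse one of the 11 activity names into the binary class label."""
    return "Falling" if activity in FALL_ACTIVITIES else "Not Falling"

print(to_binary_label("Walking"))            # Not Falling
print(to_binary_label("Falling backwards"))  # Falling
```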
Feature extraction in EEG is the process of deriving mathematical descriptions of sections of the wave for classification [ ], and wavelet characteristics have been noted as informative descriptors of brainwave activity [ ]. EEG signals are divided into half-second windows, and seven sets of features are extracted, which leads to a dataset of 39 numerical features. The spectral entropies of the signals are computed via the Fourier transform. The spectral entropy is given as SE = -Σ_i P(f_i) log(P(f_i)), where P(f_i) is the power spectrum of the input signal normalised into a probability distribution. The Shannon entropy H = -Σ_i P(x_i) log(P(x_i)) is also calculated. In terms of each wavelet scale, the following features are extracted via the continuous wavelet transform: absolute mean value, energy, entropy, standard deviation, and variance. After extraction, all numerical features are normalised to the range 0-1.
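A minimal sketch of the entropy features described above; base-2 logarithms and the treatment of absolute wavelet coefficients as a probability distribution for the entropy feature are assumptions, as the paper does not fix these details:

```python
import numpy as np

def spectral_entropy(window: np.ndarray) -> float:
    """Spectral entropy of one signal window: the FFT power spectrum is
    normalised into a probability distribution P, and -sum(P * log2(P))
    is returned."""
    psd = np.abs(np.fft.rfft(window)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]  # avoid log(0)
    return float(-(p * np.log2(p)).sum())

def wavelet_scale_features(coeffs: np.ndarray) -> dict:
    """The five per-scale features named in the text, computed from one
    scale's wavelet coefficients."""
    a = np.abs(coeffs)
    p = a / a.sum()
    p = p[p > 0]
    return {
        "absolute_mean": float(a.mean()),
        "energy": float((coeffs ** 2).sum()),
        "entropy": float(-(p * np.log2(p)).sum()),
        "std": float(coeffs.std()),
        "variance": float(coeffs.var()),
    }

# Illustration on a synthetic half-second 10 Hz window (sampling rate
# is an assumption; the dataset's rate is not stated here).
t = np.linspace(0.0, 0.5, 256, endpoint=False)
print(round(spectral_entropy(np.sin(2 * np.pi * 10 * t)), 3))
```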
Prior to machine learning, the dataset is explored to discern how effective each attribute is for classification prediction. The information gain IG(T, a) = E(T) - E(T|a) of each attribute is considered via observed changes in entropy E(s) = -Σ_j p_j log(p_j).
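The attribute-ranking step can be sketched as follows; the equal-width binning used here to discretise continuous attributes is an assumption, since the paper does not specify its discretisation scheme:

```python
import numpy as np

def entropy(labels: np.ndarray) -> float:
    """E(s) = -sum_j p_j log2(p_j) over the class proportions."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(feature: np.ndarray, labels: np.ndarray, bins: int = 10) -> float:
    """IG(T, a) = E(T) - E(T|a), with the numeric attribute discretised
    into equal-width bins (bin count is an illustrative choice)."""
    binned = np.digitize(feature, np.histogram_bin_edges(feature, bins=bins))
    conditional = 0.0
    for b in np.unique(binned):
        mask = binned == b
        conditional += mask.mean() * entropy(labels[mask])
    return entropy(labels) - conditional

# A perfectly separating attribute attains the full class entropy.
labels = np.array([0] * 50 + [1] * 50)
print(information_gain(labels.astype(float), labels))  # 1.0
```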
Hyperparameters for the KNN and Random Forest models are explored through a linear search to discern whether hyperparameter tuning has a noticeable effect on mean classification metrics. Various machine learning algorithms are selected with a range of different statistical methods to provide a general overview of the classification ability using multiple methods (see Section 4.5 for more details). Following this, further tuning is performed via Adaptive Boosting [ ] on all the selected models that are compatible with the algorithm (Random Forest, Logistic Regression, Naive Bayes, Stochastic Gradient Descent). Finally, a Genetic Programming approach is explored through a tree-based algorithm detailed in [ ]; the algorithm is executed three times with random seeds equal to the iteration (1, 2, 3), and the source code is provided. All models are trained by 10-fold cross-validation with a seed set to 1 for randomisation and are therefore directly comparable. The algorithms were trained on an overclocked Intel Core i7-8700K CPU (4.3 GHz) with scikit-learn [ ] and TPOT [ ].
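The evaluation protocol can be sketched with scikit-learn as below; the synthetic matrix stands in for the wavelet features, and the model and data sizes are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate

# Synthetic stand-in for the 39 normalised wavelet features.
rng = np.random.default_rng(1)
X = rng.random((200, 39))
y = np.array([0, 1] * 100)
X[y == 1] += 0.1  # weak class signal so the scores are non-trivial

# A fixed seed means every model sees identical folds, which is what
# makes the paper's results directly comparable across models.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
scores = cross_validate(
    RandomForestClassifier(n_estimators=80, random_state=1),
    X, y, cv=cv,
    scoring=("accuracy", "precision", "recall", "f1"),
)
print(round(scores["test_accuracy"].mean(), 3))
```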
4 Results
In this section, the results of all planned experiments are presented. First, the information gain of the best features is noted, prior to a machine learning argument for class balancing and numerical normalisation being presented. Hyperparameter optimisation of select models is explored, and boosting is performed where possible. This section also details the results of genetic programming before giving a final comparison of all experiments performed in this work.
4.1 Data Preprocessing
The information gain (Kullback-Leibler divergence) of the top 5 features within the dataset by 10-fold cross-validation can be observed in Table 2. Prior to performing the experiments, Table 3 shows further details on the reasoning behind class balancing. When the dataset is unbalanced, there is a much higher frequency of EEG signals linked to activities under the category of not falling. Due to this, misleading results can be achieved; for example, even though the class-balanced approach seemingly has a lower classification accuracy (83.3% vs. 92.21%), the ability to recognise the falling behaviour is improved from 885 correctly classified instances to 980. The baseline (prediction based on the most common label) for the balanced
Table 2: Top 5 features in the dataset by their Kullback-Leibler divergence after feature extraction and normalisation.
Attribute KLD Rank
Wavelet absolute mean_8 0.288 ±0.004 2 ±1.18
Wavelet variance_3 0.288 ±0.006 2.4 ±1.28
Wavelet variance_4 0.286 ±0.005 2.5 ±0.92
Wavelet standard deviation_5 0.284 ±0.005 5.8 ±4.66
Wavelet absolute mean_2 0.28 ±0.006 12.1 ±5.84
Table 3: Confusion matrices of balanced and unbalanced datasets following the training of a simple random decision tree. Due to the higher frequency of "Not Falling", classification without class balancing produces misleadingly high accuracy.
          Balanced (Acc 83.3%)    Unbalanced (Acc 92.21%)
          No Fall   Fall          No Fall   Fall
No Fall      856     246             4771    261
Fall         122     980              217    885
Table 4: Classication metrics on normalised and non-
normalised numeric attributes with a simple random de-
cision tree.
Normalised Non-Normalised
Precision Recall F-Score Precision Recall F-Score
0.839 0.834 0.833 0.837 0.833 0.833
dataset is 50%, whereas the baseline for the unbalanced dataset is 82.03%; thus, balancing in this preliminary example provides a 33.3% advantage over the baseline, whereas leaving the dataset unbalanced provides only a 10.18% advantage over the baseline. In Table 4, the classification metrics are compared when normalising the data within the range of 0-1. As can be observed for this preliminary decision tree classifier, the metrics increase slightly when normalisation is performed.
It is due to these examples and discussion that the normalised and equally balanced dataset is chosen for the remainder of the experiments presented in this work.
4.2 Hyperparameter Tuning
Figures 1 and 2 show the effects of the number of estimators on the Random Forest model. The best overall model was a random forest containing 80 decision trees, which had a mean accuracy of 84.94%, a precision of 0.81, a recall of 0.915, and an F-score of 0.856. These were the highest observed metrics within the linear search except for mean precision, where a Random Forest of 50 trees scored 0.81. A similar linear search for the value of k within the K-Nearest Neighbour model can be observed in Figures 3 and 4. The most effective model was k = 40, which had a mean accuracy of 73.37%, a precision of 0.793, a recall of 0.634, and an F-score of 0.704.
Figure 1: Effect of the number of estimators on the mean k-fold accuracy of the Random Forest model.
Figure 2: Effect of the number of estimators on the mean k-fold accuracy, precision, recall, and F-score of the Random Forest model.
Figure 3: Effect of the number of K-Nearest Neighbours on the mean 10-fold accuracy of the K-Nearest Neighbours model.
Figure 4: Effect of the number of K-Nearest Neighbours on the mean 10-fold accuracy, precision, recall, and F-score of the K-Nearest Neighbours model.
Table 5: Results for the Adaptive Boosted models (Log. - Logistic Regression).
Model Acc. Prec. Recall F1
RF 84.71 (2.51) 0.81 (0.03) 0.908 (0.027) 0.856 (0.021)
Log. 61.57 (2.28) 0.575 (0.024) 0.881 (0.022) 0.696 (0.021)
SGD 59.84 (2.48) 0.564 (0.023) 0.846 (0.132) 0.672 (0.063)
NB 48.69 (5.49) 0.536 (0.257) 0.459 (0.425) 0.359 (0.283)
Figure 5: Comparison of models before and after being Adaptive Boosted.
4.3 Adaptive Boosting
The models which supported adaptive boosting due to their ability to predict probabilities are presented in Table 5. Figure 5 shows a comparison between the original models and the effect of adaptive boosting. It can be observed that adaptive boosting the Random Forest and Naive Bayes models for this problem leads to a lower mean classification accuracy, whereas Logistic Regression and Stochastic Gradient Descent classification is improved. It must be noted here that although improvements were made in some cases, these were not competitive with the other results explored. Additionally,
Figure 6: Best fitness (mean accuracy) observed during each generation for three genetic programming experiments.
Table 6: Classication metrics of the best solutions discov-
ered after three individual runs for the genetic programming
GP Accuracy Precision Recall F1
188.79 (1.88) 0.892 (0.024) 0.889 (0.042) 0.888 (0.016)
288.97 (2.14) 0.882 (0.013) 0.901 (0.04) 0.891 (0.021)
389.34 (2.19) 0.883 (0.021) 0.908 (0.037) 0.895 (0.02)
Adaptive Boosting is computationally expensive compared to many
of the approaches explored in this work.
4.4 Genetic Programming
As previously described, the genetic programming approach explored 30 generations with 20 solutions as a population size. The learning process for three iterations of the GP algorithm can be observed in Figure 6, and the best final solutions are detailed further in Table 6. Although starting at the highest fitness, iteration 1 had the lowest final score of 88.79%, with iteration 2 (which started at the lowest fitness) scoring slightly more by the end of the simulation at 88.97%. The best solution found was that of iteration 3, which scored 89.34%. Due to their complexity, the solutions are presented by their iteration ID in this work; the source code for all three machine learning pipelines can be found in Appendix A. Although features are extracted manually, it is interesting to note that all simulations decided upon further engineering through Principal Component Analysis (PCA); a number of related works have also proposed PCA as a dimensionality reduction technique to improve EEG classification [13, 28, 32].
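The PCA step shared by all three evolved pipelines can be reproduced in isolation; the input matrix here is a random stand-in for the wavelet feature matrix:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.random((1000, 39))  # stand-in for the 39 wavelet features

# All three GP pipelines prepended PCA with randomised SVD;
# iterated_power=7 matches the best (iteration 3) solution. With no
# n_components given, all 39 components are kept, so this acts as a
# rotation/decorrelation step rather than dimensionality reduction.
pca = PCA(svd_solver="randomized", iterated_power=7, random_state=1)
X_pca = pca.fit_transform(X)
print(X_pca.shape)
```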
4.5 Comparison of all Models
A nal comparison of all models is provided in Table 7. As can
be observed, the best models were all those that were explored
through genetic programming. Though, it is worth noting that
these models are relatively complex, whereas the Gaussian Process
and Random Forest models are less computationally expensive but
compete at -2.86% -4.4%, respectively. Interestingly, the adaptive
Table 7: Overall comparison of all fall detection models explored within this work.
Model Accuracy Precision Recall F1
Genetic Programming (3) 89.34 (2.19) 0.883 (0.021) 0.908 (0.037) 0.895 (0.02)
Genetic Programming (2) 88.97 (2.14) 0.882 (0.013) 0.901 (0.04) 0.891 (0.021)
Genetic Programming (1) 88.79 (1.88) 0.892 (0.024) 0.889 (0.042) 0.888 (0.016)
Gaussian Process 86.48 (2.65) 0.842 (0.044) 0.902 (0.033) 0.87 (0.024)
Random Forest 84.94 (2.39) 0.81 (0.03) 0.915 (0.027) 0.856 (0.021)
AB(Random Forest) 84.71 (2.51) 0.81 (0.03) 0.908 (0.027) 0.856 (0.021)
Extreme Gradient Boost 76.95 (3.16) 0.791 (0.038) 0.733 (0.036) 0.761 (0.033)
Adaptive Boosting 73.59 (4.12) 0.777 (0.058) 0.665 (0.051) 0.715 (0.045)
K-Nearest Neighbours 73.37 (3.29) 0.793 (0.049) 0.634 (0.046) 0.704 (0.038)
Linear Discriminant Analysis 64.93 (3.29) 0.611 (0.037) 0.824 (0.056) 0.7 (0.034)
AB(Logistic Regression) 61.57 (2.28) 0.575 (0.024) 0.881 (0.022) 0.696 (0.021)
Linear SVM 61.16 (1.92) 0.572 (0.022) 0.88 (0.018) 0.694 (0.019)
Logistic Regression 60.84 (1.92) 0.57 (0.022) 0.882 (0.02) 0.692 (0.019)
Radial Basis SVM 59.98 (2.15) 0.562 (0.024) 0.898 (0.021) 0.691 (0.021)
AB(Stochastic Gradient Descent) 59.84 (2.48) 0.564 (0.023) 0.846 (0.132) 0.672 (0.063)
Quadratic Discriminant Analysis 59.39 (2.08) 0.557 (0.025) 0.914 (0.015) 0.692 (0.021)
Naive Bayes 58.44 (1.78) 0.549 (0.021) 0.939 (0.016) 0.693 (0.019)
Stochastic Gradient Descent 55.17 (3.68) 0.381 (0.251) 0.625 (0.433) 0.468 (0.312)
AB(Naive Bayes) 48.69 (5.49) 0.536 (0.257) 0.459 (0.425) 0.359 (0.283)
Figure 7: Receiver Operating Characteristic (ROC) curve for the best genetic programming-based solution. Per-fold AUC ranged from 0.892 to 0.969; mean ROC AUC = 0.935 (± 1 std. dev. shown).
boost of the Naive Bayes model was worse than random guessing,
and this was the only instance of such an occurrence. The Receiver
Operating Characteristic (ROC) and Precision-Recall curves for the
best solution can be observed within Figures 7 and 8, respectively.
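Per-fold ROC and Precision-Recall AUCs of the kind plotted in Figures 7 and 8 can be computed as follows; the model and data here are illustrative stand-ins for the best GP pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, average_precision_score
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(1)
X = rng.random((400, 10))
y = (X[:, 0] + 0.4 * rng.random(400) > 0.7).astype(int)

aucs, aps = [], []
for train, test in StratifiedKFold(10, shuffle=True, random_state=1).split(X, y):
    clf = RandomForestClassifier(random_state=1).fit(X[train], y[train])
    proba = clf.predict_proba(X[test])[:, 1]
    aucs.append(roc_auc_score(y[test], proba))            # one ROC AUC per fold
    aps.append(average_precision_score(y[test], proba))   # one PR AUC per fold
print(f"mean ROC AUC = {np.mean(aucs):.3f}, mean PR AUC = {np.mean(aps):.3f}")
```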
To nally conclude, this work has explored how machine learning
and genetic programming can be leveraged to autonomously detect
physical falls via a single electrode reading brain activity. Although
the problem was dicult, due in part to activities such as laying
down being present in the category of not falling, genetic program-
ming developed a machine learning pipeline that could detect falls
0.0 0.2 0.4 0.6 0.8 1.0
PR fold 0 (AUC = 0.929)
PR fold 1 (AUC = 0.906)
PR fold 2 (AUC = 0.898)
PR fold 3 (AUC = 0.920)
PR fold 4 (AUC = 0.937)
PR fold 5 (AUC = 0.876)
PR fold 6 (AUC = 0.942)
PR fold 7 (AUC = 0.917)
PR fold 8 (AUC = 0.905)
PR fold 9 (AUC = 0.898)
Precision-Recall (AUC = 0.912)
Figure 8: Precision-Recall curve for the best genetic
programming-based solution.
with an average accuracy of 89.34%.
The results presented in this work provide a good basis for future experiments, given that some approaches were particularly worse than the more impressive set of results. In the future, larger datasets could be leveraged to attempt a generalisation to the population. In particular, a larger dataset collected from a larger number of subjects would also enable leave-one-subject-out cross-validation to test this. Additional ensemble methods could also be explored, as the genetic programming results seem to point towards a statistical ensemble being a particularly powerful method for EEG-based fall detection. In addition to the models explored, future work could involve the multimodal classification of falls by including information collected by other sensors, e.g. wearable and ambient sensors around the home environment. Finally, deep learning and data augmentation could be explored towards methods that can be tuned in the future as more data becomes available.
References
Allan L Adkin, Sylvia Quant, Brian E Maki, and William E McIlroy. 2006. Cortical responses associated with predictable and unpredictable compensatory balance reactions. Experimental Brain Research 172, 1 (2006), 85–93.
Anne Felicia Ambrose, Geet Paul, and Jeffrey M Hausdorff. 2013. Risk factors for falls among older adults: a review of the literature. Maturitas 75, 1 (2013), 51–61.
Valerio F Annese, Marco Crepaldi, Danilo Demarchi, and Daniela De Venuto. 2016. A digital processor architecture for combined EEG/EMG falling risk prediction. In 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 714–719.
Jordan J Bird, Diego R Faria, Luis J Manso, Anikó Ekárt, and Christopher D Buckingham. 2019. A deep evolutionary approach to bioinspired classifier optimisation for brain-machine interaction. Complexity 2019 (2019).
Jeffrey Braithwaite, Russell Mannion, Yukihiro Matsuyama, Paul G Shekelle, Stuart Whittaker, Samir Al-Adawi, Kristiana Ludlow, Wendy James, Hsuen P Ting, Jessica Herkes, et al. 2018. The future of health systems to 2030: a roadmap for global progress and sustainability. International Journal for Quality in Health Care 30, 10 (2018), 823–831.
Jay Chen, Karric Kwong, Dennis Chang, Jerry Luk, and Ruzena Bajcsy. 2006.
Wearable sensors for reliable fall detection. In 2005 IEEE Engineering in Medicine
and Biology 27th Annual Conference. IEEE, 3551–3554.
Sameer Raju Dhole, Amith Kashyap, Animesh Narayan Dangwal, and Rajasekar Mohan. 2019. A novel helmet design and implementation for drowsiness and fall detection of workers on-site using EEG and Random-Forest Classifier. Procedia Computer Science 151 (2019), 947–952.
David Eibling. 2018. Balance disorders in older adults. Clinics in geriatric medicine
34, 2 (2018), 175–181.
Dhavalkumar H Joshi, UK Jaliya, and Darshak Thakore. 2015. Raw EEG-based Fatigue and Drowsiness Detection: A Review. International Institute for Technological Research and Development 1, 1 (2015).
Sridhar Krishnan and Yashodhan Athavale. 2018. Trends in biomedical signal
feature extraction. Biomedical Signal Processing and Control 43 (2018), 41–63.
Yun Li, KC Ho, and Mihail Popescu. 2012. A microphone array system for
automatic fall detection. IEEE Transactions on Biomedical Engineering 59, 5 (2012),
Fabien Lotte and Marco Congedo. 2016. EEG feature extraction. Brain–Computer
Interfaces 1: Foundations and Methods (2016), 127–143.
Kavita Mahajan, MR Vargantwar, and Sangita M Rajput. 2011. Classification of EEG using PCA, ICA and Neural Network. International Journal of Engineering and Advanced Technology 1, 1 (2011), 80–83.
Lourdes Martínez-Villaseñor, Hiram Ponce, Jorge Brieva, Ernesto Moya-Albor,
José Núñez-Martínez, and Carlos Peñafort-Asturiano. 2019. UP-fall detection
dataset: A multimodal approach. Sensors 19, 9 (2019).
Muhammad Mubashir, Ling Shao, and Luke Seed. 2013. A survey on fall detection:
Principles and approaches. Neurocomputing 100 (2013), 144–152.
Abdallah Naser, Ahmad Lotfi, and Junpei Zhong. 2022. Multiple thermal sensor array fusion towards enabling privacy-preserving human monitoring applications. IEEE Internet of Things Journal (2022).
[17] National Health Service. 2021. Falls.
Office for National Statistics. 2021. Dataset: Vital statistics in the UK: births, deaths and marriages. https://www.ons.
Randal S Olson, Nathan Bartley, Ryan J Urbanowicz, and Jason H Moore. 2016.
Evaluation of a tree-based pipeline optimization tool for automating data science.
In Proceedings of the genetic and evolutionary computation conference 2016. 485–
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research 12 (2011), 2825–2830.
Parisa Rashidi and Alex Mihailidis. 2012. A survey on ambient-assisted living
tools for older adults. IEEE journal of biomedical and health informatics 17, 3
(2012), 579–590.
Caroline Rougier, Jean Meunier, Alain St-Arnaud, and Jacqueline Rousseau. 2007. Fall detection from human shape and motion history using video surveillance. In 21st International Conference on Advanced Information Networking and Applications Workshops (AINAW'07), Vol. 2. IEEE, 875–880.
Mridu Sahu, Praveen Shukla, Aditya Chandel, Saloni Jain, and Shrish Verma. 2021. Eye Blinking Classification Through NeuroSky MindWave Headset Using EegID Tool. In International Conference on Innovative Computing and Communications. Springer, 789–799.
A Hasan Sapci and H Aylin Sapci. 2019. Innovative assisted living tools, remote monitoring technologies, artificial intelligence-driven solutions, and robotic systems for aging societies: systematic review. JMIR Aging 2, 2 (2019), e15429.
Robert E Schapire. 2013. Explaining adaboost. In Empirical inference. Springer,
Judy A Stevens, Phaedra S Corso, Eric A Finkelstein, and Ted R Miller. 2006. The
costs of fatal and non-fatal falls among older adults. Injury prevention 12, 5 (2006),
Abdulhamit Subasi. 2007. EEG signal classification using wavelet feature extraction and a mixture of expert model. Expert Systems with Applications 32, 4 (2007),
Abdulhamit Subasi and M Ismail Gursoy. 2010. EEG signal classification using PCA, ICA, LDA and support vector machines. Expert Systems with Applications 37, 12 (2010), 8659–8666.
Mary E Tinetti, Mark Speechley, and Sandra F Ginter. 1988. Risk factors for falls
among elderly persons living in the community. New England journal of medicine
319, 26 (1988), 1701–1707.
Elif Derya Übeyli. 2009. Combined neural network model employing wavelet coefficients for EEG signals classification. Digital Signal Processing 19, 2 (2009),
Yuxi Wang, Kaishun Wu, and Lionel M Ni. 2016. Wifall: Device-free fall detection
by wireless networks. IEEE Transactions on Mobile Computing 16, 2 (2016), 581–
Xinyang Yu, Pharino Chum, and Kwee-Bo Sim. 2014. Analysis the effect of PCA for feature reduction in non-stationary EEG based motor imagery of BCI system.
Optik 125, 3 (2014), 1498–1502.
Biao Zhang, Jianjun Wang, and Thomas Fuhlbrigge. 2010. A review of the
commercial brain-computer interface technology from perspective of industrial
robotics. In 2010 IEEE international conference on automation and logistics. IEEE,
This appendix provides the source code for the final solutions found by the three iterations of genetic programming. Python 3.x code is presented and is compatible with the scikit-learn library; StackingEstimator is provided by the TPOT library's tpot.builtins module.
A.1 Iteration 1
from sklearn.decomposition import PCA
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from tpot.builtins import StackingEstimator

iter1 = make_pipeline(
    StackingEstimator(estimator=make_pipeline(
        PCA(iterated_power=6, svd_solver="randomized"),
        ExtraTreesClassifier(bootstrap=False,
                             criterion="entropy",
                             max_features=0.6500000000000001,
                             min_samples_leaf=5,
                             min_samples_split=11,
                             n_estimators=100))),
    KNeighborsClassifier(n_neighbors=64, p=2, weights="distance"))
A.2 Iteration 2
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from tpot.builtins import StackingEstimator

iter2 = make_pipeline(
    PCA(iterated_power=3, svd_solver="randomized"),
    StackingEstimator(estimator=LogisticRegression(
        C=10.0, dual=False, penalty="l2")),
    GradientBoostingClassifier(learning_rate=0.5,
                               max_depth=7, max_features=0.5,
                               min_samples_leaf=2, min_samples_split=8,
                               n_estimators=100, subsample=1.0))
A.3 Iteration 3
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from tpot.builtins import StackingEstimator

iter3 = make_pipeline(
    PCA(iterated_power=7, svd_solver="randomized"),
    StackingEstimator(estimator=GradientBoostingClassifier(
        learning_rate=0.5, max_depth=8, max_features=0.5,
        min_samples_leaf=12, min_samples_split=7,
        n_estimators=100, subsample=1.0)),
    KNeighborsClassifier(n_neighbors=63, p=1, weights="distance"))
ResearchGate has not been able to resolve any citations for this publication.
Full-text available
Background The increase in life expectancy and recent advancements in technology and medical science have changed the way we deliver health services to the aging societies. Evidence suggests that home telemonitoring can significantly decrease the number of readmissions, and continuous monitoring of older adults’ daily activities and health-related issues might prevent medical emergencies. Objective The primary objective of this review was to identify advances in assistive technology devices for seniors and aging-in-place technology and to determine the level of evidence for research on remote patient monitoring, smart homes, telecare, and artificially intelligent monitoring systems. Methods A literature review was conducted using Cumulative Index to Nursing and Allied Health Literature Plus, MEDLINE, EMBASE, Institute of Electrical and Electronics Engineers Xplore, ProQuest Central, Scopus, and Science Direct. Publications related to older people’s care, independent living, and novel assistive technologies were included in the study. Results A total of 91 publications met the inclusion criteria. In total, four themes emerged from the data: technology acceptance and readiness, novel patient monitoring and smart home technologies, intelligent algorithm and software engineering, and robotics technologies. The results revealed that most studies had poor reference standards without an explicit critical appraisal. Conclusions The use of ubiquitous in-home monitoring and smart technologies for aged people’s care will increase their independence and the health care services available to them as well as improve frail elderly people’s health care outcomes. This review identified four different themes that require different conceptual approaches to solution development. Although the engineering teams were focused on prototype and algorithm development, the medical science teams were concentrated on outcome research. We also identified the need to develop custom technology solutions for different aging societies. The convergence of medicine and informatics could lead to the development of new interdisciplinary research models and new assistive products for the care of older adults.
This paper proposes a low-cost novel EEG-based BCI prototype to elegantly detect whether an on-site worker is sleep-deprived. The worker is required to wear a modified safety helmet with an innocuously placed signal acquisition device and its paraphernalia that does not hinder the worker’s activities. A few time- and frequency-domain features have been derived from the collected data to recognize sleep deprivation of workers. The smart helmet communicates with a local server within radio range. The server runs a random forest classifier algorithm to classify whether the worker is sleep-deprived and alerts the supervisor if necessary. A single Inertial Measurement Unit (IMU) sensor is utilized to detect if the worker has fallen down. The entire setup is supported by an Android application that keeps the supervisor up to date on the statuses of the workers. A classification accuracy as high as 98% for the helmet-based EEG setup was obtained through in-house live experiments upon sleep-deprived subjects.
Falls, especially in elderly persons, are an important health problem worldwide. Reliable fall detection systems can mitigate negative consequences of falls. Among the important challenges and issues reported in literature is the difficulty of fair comparison between fall detection systems and machine learning techniques for detection. In this paper, we present UP-Fall Detection Dataset. The dataset comprises raw and feature sets retrieved from 17 healthy young individuals without any impairment that performed 11 activities and falls, with three attempts each. The dataset also summarizes more than 850 GB of information from wearable sensors, ambient sensors and vision devices. Two experimental use cases were shown. The aim of our dataset is to help human activity recognition and machine learning research communities to fairly compare their fall detection solutions. It also provides many experimental possibilities for the signal recognition, vision, and machine learning community.
This study suggests a new approach to EEG data classification by exploring the idea of using evolutionary computation to both select useful discriminative EEG features and optimise the topology of Artificial Neural Networks. An evolutionary algorithm is applied to select the most informative features from an initial set of 2550 EEG statistical features. Optimisation of a Multilayer Perceptron (MLP) is performed with an evolutionary approach before classification to estimate the best hyperparameters of the network. Deep learning and tuning with Long Short-Term Memory (LSTM) are also explored, and Adaptive Boosting of the two types of models is tested for each problem. Three experiments are provided for comparison using different classifiers: one for attention state classification, one for emotional sentiment classification, and a third experiment in which the goal is to guess the number a subject is thinking of. The obtained results show that an Adaptive Boosted LSTM can achieve an accuracy of 84.44%, 97.06%, and 9.94% on the attentional, emotional, and number datasets, respectively. An evolutionary-optimised MLP achieves results close to the Adaptive Boosted LSTM for the first two experiments and significantly higher for the number-guessing experiment, with an Adaptive Boosted DEvo MLP reaching 31.35%, while being significantly quicker to train and classify. In particular, the accuracy of the non-boosted DEvo MLP was 79.81%, 96.11%, and 27.07% in the same benchmarks. Two datasets for the experiments were gathered using a Muse EEG headband with four electrodes corresponding to TP9, AF7, AF8, and TP10 locations of the international EEG placement standard. The EEG MindBigData digits dataset was gathered from the TP9, FP1, FP2, and TP10 locations.
Human-centric applications of a single Thermal Sensor Array (TSA) have performed extremely well in many areas. However, most of these works have not yet reached the real applicability stage of Internet of Things (IoT) applications. The main limitation of deploying such systems on a large scale is the challenge of fusing multiple TSAs to cover a wide inspection area, e.g. smart homes, hospitals, and many other domestic environments. On the other hand, objects that appear in the low-resolution thermal images acquired from TSAs have low intra-class variations and high inter-class similarities, making the identification of overlapping regions by matching a comparable template image across multiple images very difficult. This paper proposes a motion-based approach to fuse multiple TSAs and learn the domestic environment layout to enable further human-centred IoT applications to run in the cloud. A privacy improvement for utilising these sensors in IoT applications is also proposed. The proposed approach is evaluated with comprehensive experiments on different sensor placements and domestic environment conditions, showing an average accuracy of 92.5% across various machine learning techniques and use case scenarios. Index Terms: sensor fusion, human-centred approach, Internet of Things, thermal sensor array, machine learning, optical flow, privacy-preserving approach.
Signal analysis involves identifying signal behaviour, extracting linear and non-linear properties, compression or expansion into higher or lower dimensions, and recognizing patterns. Over the last few decades, signal processing has taken notable evolutionary leaps in terms of measurement, from being simple techniques for analysing analog or digital signals in the time, frequency, or joint time–frequency (TF) domain, to being complex techniques for analysis and interpretation in a higher-dimensional domain. The intention behind this is simple: robust and efficient feature extraction, i.e. identifying specific signal markers or properties exhibited in one event and using them to distinguish it from characteristics exhibited in another event. The objective of our study is to give the reader a bird's-eye view of the biomedical signal processing world with a zoomed-in perspective on the feature extraction methodologies that form the basis of machine learning and hence artificial intelligence. We delve into the vast world of feature extraction, going across the evolutionary chain starting with basic A-to-D conversion, to domain transformations, to sparse signal representations and compressive sensing. It should be noted that in this manuscript we have attempted to explain key biomedical signal feature extraction methods in a simpler fashion without dwelling on mathematical representations. Additionally, we have briefly touched upon the curse and blessings of signal dimensionality, which would finally help us in determining the best combination of signal processing methods that could yield an efficient feature extractor. In other words, similar to how the laws of science behind some common engineering techniques are explained, in this review study we have attempted to postulate an approach towards a meaningful explanation of those methods, developing a convincing and explainable reason as to which feature extraction method is suitable for a given biomedical signal.
Most research on health systems examines contemporary problems within one, or at most a few, countries. Breaking with this tradition, we present a series of case studies in a book written by key policymakers, scholars and experts, looking at health systems and their projected successes to 2030. Healthcare Systems: Future Predictions for Global Care includes chapters on 52 individual countries and five regions, covering a total of 152 countries. Synthesised, two key contributions are made in this compendium. First, five trends shaping the future healthcare landscape are analysed: sustainable health systems; the genomics revolution; emerging technologies; global demographics dynamics; and new models of care. Second, nine main themes arise from the chapters: integration of healthcare services; financing, economics and insurance; patient-based care and empowering the patient; universal healthcare; technology and information technology; aging populations; preventative care; accreditation, standards, and policy; and human development, education and training. These five trends and nine themes can be used as a blueprint for change. They can help strengthen the efforts of stakeholders interested in reform, ranging from international bodies such as the World Health Organization, the International Society for Quality in Health Care and the World Bank, through to national bodies such as health departments, quality and safety agencies, non-government organisations (NGO) and other groups with an interest in improving healthcare delivery systems. This compendium offers more than a glimpse into the future of healthcare-it provides a roadmap to help shape thinking about the next generation of caring systems, extrapolated over the next 15 years.
Balance disorders are common in the elderly and can lead to falls, with resultant severe morbidity and even mortality. Progressive loss of vestibular function begins in middle age and is affected by multiple disease processes. Polypharmacy impacts many disease processes in the elderly, with balance function being one of the most susceptible. Evaluation of the older patient with a balance disorder is critical for the well-being of these patients, as it may drive intervention. This article reviews balance disorders often encountered in older patients and makes recommendations regarding education of nonotolaryngologists.
Injuries that are caused by falls have been regarded as one of the major health threats to independent living for the elderly. Conventional fall detection systems have various limitations. In this work, we first look for the correlations between different radio signal variations and activities by analyzing the radio propagation model. Based on our observations, we propose WiFall, a truly unobtrusive fall detection system. WiFall employs physical layer Channel State Information (CSI) as the indicator of activities. It can detect a fall without hardware modification, extra environmental setup, or any wearable device. We implement WiFall on desktops equipped with commodity 802.11n NICs, and evaluate the performance in three typical indoor scenarios with several layouts of transmitter-receiver (Tx-Rx) links. In our area of interest, WiFall can achieve fall detection for a single person with high accuracy. As demonstrated by the experimental results, WiFall yields 90 percent detection precision with a false alarm rate of 15 percent on average using a one-class SVM classifier in all testing scenarios. It can also achieve an average 94 percent fall detection precision with a 13 percent false alarm rate using the Random Forest algorithm.