Human Activity Recognition from
Accelerometer Data Using a Wearable Device
Pierluigi Casale, Oriol Pujol, and Petia Radeva
Computer Vision Center, Bellaterra, Barcelona, Spain
Dept. of Applied Mathematics and Analysis, University of Barcelona,
Barcelona, Spain
Abstract. Activity Recognition is an emerging field of research, born from the larger fields of ubiquitous computing, context-aware computing and multimedia. Recently, recognizing everyday life activities has become one of the challenges for pervasive computing. In our work, we developed a novel wearable system that is easy to use and comfortable to wear. Our wearable system is based on a new set of 20 computationally efficient features and the Random Forest classifier. We obtain very encouraging results, with a classification accuracy for human activity recognition of up to 94%.
Keywords: Physical Activity Recognition, Wearable Computing, Pervasive Computing.
1 Introduction
Activity Recognition is an emerging field of research, born from the larger fields of ubiquitous computing, context-aware computing and multimedia. Recognizing everyday life activities is becoming a challenging application in pervasive computing, with many interesting developments in the health care, human behavior modeling and human-machine interaction domains [3]. Even if the first works on activity recognition used high-dimensional
and densely sampled audio and video streams [9], in many recent works ([2],[1]),
activity recognition is based on classifying sensory data using one or many ac-
celerometers. Accelerometers have been widely accepted due to their compact
size, their low-power requirement, low cost, non-intrusiveness and capacity to
provide data directly related to the motion of people.
In recent years, several papers have been published in which accelerometer data analysis is applied to physical activity recognition [5]. Nevertheless, few of them overcome the difficulty of performing experiments out of the lab. Performing experiments out of the lab calls for systems that are easy to use and easy to wear, in order to free the testers from the expensive task of labeling the activities they perform.
In our work, we propose a new set of features extracted from wearable data that are competitive from a computational point of view and able to ensure high classification results, comparable with state-of-the-art wearable systems. The
J. Vitrià, J.M. Sanches, and M. Hernández (Eds.): IbPRIA 2011, LNCS 6669, pp. 289–296, 2011.
Springer-Verlag Berlin Heidelberg 2011
features proposed can be computed in real time and provide physical meaning to the quantities involved in classification. The new set of features has been validated by means of a reliable analysis comparing the new features with the majority of the features commonly used in physical activity recognition from accelerometer data. Based on these features, we show that the Random Forest classifier is an optimal classifier, reaching classification performances between 90% and 94%.
Moreover, we present a custom wearable system for human action recognition, developed in our lab, that is based on the analysis of accelerometer data. The wearable system is easy to use (users need only to start and stop the device) and comfortable to wear, having a reduced form factor that does not hinder any type of movement. Acceleration data can be acquired in many different, non-controlled environments, allowing to overcome the limitations of a laboratory setting. Five basic everyday life activities, namely walking, climbing stairs, standing, talking with people and working at a computer, are considered in order to show its performance and robustness.
The paper is structured as follows. After discussing related work in Section 2, we describe in Section 3 how we create the dataset and provide the technical details about extracting the best features for classifying human activities. In Section 4, we present the results of the classification of the activities. Finally, Section 5 concludes the paper.
2 Related Work
In [5], Mannini and Sabatini give a complete review of the state of the art of activity classification using data from one or more accelerometers. In their review, the best classification approaches are based on wavelet features using threshold classifiers. In their work, they separate the high-frequency (AC) components, related to the dynamic motion the subject is performing, from the low-frequency (DC) components of the acceleration signal, related to the influence of gravity and able to identify static postures. They extracted features from the DC components. The authors classify seven basic activities and transitions between activities from data acquired in the lab, from five biaxial accelerometers placed on different parts of the body, using a 17-dimensional feature vector and an HMM-based sequential classifier, achieving 98.4% accuracy.
Lester, Choudhury and Borriello in [4] summarize their experience in developing an automatic physical activity recognition system. In their work, they answer some important questions about where sensors have to be placed on a person, whether variation across users helps to improve the accuracy of activity classification, and which are the best modalities for recognizing activities. They reach the conclusion that it does not matter where users place the sensors, that variation across users does help to improve classification accuracy, and that the best modalities for physical activity recognition are accelerometers and microphones. Again, human activities are acquired in a controlled environment.
Our previous work in this research line [10] uses a prototype of a wearable device completed by a camera. Data of five everyday life activities have been collected from people acting in two circumscribed environments. A GentleBoost classifier has been used for classifying the five activities, with 83% accuracy for each activity. Using the combination of a physical activity classifier and a face detector, face-to-face social activities have been detected with high confidence. In contrast, in this work we question how far we can get in human activity recognition using only wearable data.
3 Feature Extraction
Recognizing human activities depends directly on the features extracted for motion analysis. Accelerometers provide three separate acceleration time series, one for the acceleration on each axis: Ax, Ay, Az. An example of accelerometer data for five different activities is shown in Figure 1(a). The activities are regular walking, climbing stairs, talking with a person, standing and working at a computer. In the figure, one can appreciate a pattern arising from the walking activity; in the climbing-stairs data, the same pattern seems not to be present, even if some common components between the two activities can be noted. The rest of the activities differ significantly from the previous ones, especially in the waveform and in the acceleration intensities involved, although they form another group of similar dynamic patterns. Small differences in the variation of the acceleration can help to discriminate these three activities. Complementary to the three axis series, an additional time series, Am, has been obtained by computing the magnitude of the acceleration: Am = sqrt(Ax^2 + Ay^2 + Az^2).
Fig. 1. (a) Accelerometer data for five different activities. (b) Minmax sample in accelerometer data.
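As an illustration (our sketch, not the authors' code), the magnitude series Am can be computed from the three axis series with NumPy:

```python
import numpy as np

def acceleration_magnitude(ax, ay, az):
    """Magnitude time series Am = sqrt(Ax^2 + Ay^2 + Az^2)."""
    ax, ay, az = (np.asarray(a, dtype=float) for a in (ax, ay, az))
    return np.sqrt(ax ** 2 + ay ** 2 + az ** 2)
```

The magnitude is independent of sensor orientation, which makes it a useful complement to the individual axes.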
3.1 Feature Selection for Motion Data
Each time series Ai, with i = {x, y, z, m}, has been filtered with a digital filter in order to separate the low-frequency components from the high-frequency components, as suggested in [5]. The cut-off frequency has been set to 1 Hz, arbitrarily. In this way, for each time series we obtain three more time series Aij, with j = {b, dc, ac}, where b, dc and ac represent, respectively, the time series without filtering, the time series resulting from a low-pass filtering and the time series resulting from a high-pass filtering. Finally, we extract features from each one of the time series.
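A sketch of this decomposition (the paper does not state the filter type or order; the first-order Butterworth low-pass at 1 Hz and the 52 Hz sampling rate below are our assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 52.0  # sampling rate in Hz (52 samples correspond to 1 second in the paper)

def decompose(series, cutoff_hz=1.0, order=1):
    """Split a raw series (b) into low-frequency (dc) and high-frequency (ac) parts."""
    series = np.asarray(series, dtype=float)
    b_coef, a_coef = butter(order, cutoff_hz / (FS / 2.0), btype="low")
    dc = filtfilt(b_coef, a_coef, series)  # gravity / static-posture component
    ac = series - dc                       # dynamic-motion component
    return series, dc, ac
```

Subtracting the low-pass output from the raw series is one simple way to obtain a high-pass (AC) component with matched phase.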
The most effective technique for extracting features from accelerometer data has been demonstrated to be windowing with overlapping. We extract features from data using windows of 52 samples, corresponding to 1 second of accelerometer data, with 50% overlapping between windows. From each window, we propose to extract the following features: the root mean squared value of the integration of the acceleration in the window, and the mean value of the Minmax sums. In the next section, we will show that these two features play an important role, being two of the most discriminant ones, because they provide information about the physical nature of the activity being performed. The integration of the acceleration corresponds to the velocity. For each window, the integral of the signal and the RMS value of the series are computed. The integral has been approximated using running sums with a step equal to 10 samples. The physical meaning that this feature provides is evident. The Minmax sums are computed as the sum of all the differences of the ordered pairs of the peaks of the time series. Note that Minmax sums can be considered as a naive version of the standard deviation. In Figure 1(b), an example of a Minmax sample is shown.
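A sketch of the windowing and of the two proposed features, under our reading of the description above (the exact peak pairing of the Minmax sums is not fully specified, so the version below, which sums differences of consecutive local extrema, is an assumption):

```python
import numpy as np

WIN = 52  # 1 second at 52 Hz

def windows(series, win=WIN, overlap=0.5):
    """Yield overlapping windows (50% overlap by default)."""
    step = int(win * (1 - overlap))
    for start in range(0, len(series) - win + 1, step):
        yield series[start:start + win]

def rms_velocity(window, step=10):
    """RMS of a running-sum approximation of the integral (velocity) of the window."""
    window = np.asarray(window, dtype=float)
    sums = np.add.reduceat(window, np.arange(0, len(window), step))
    velocity = np.cumsum(sums)
    return float(np.sqrt(np.mean(velocity ** 2)))

def minmax_sum(window):
    """Sum of absolute differences between consecutive local extrema (assumed reading)."""
    w = np.asarray(window, dtype=float)
    d = np.diff(w)
    extrema = np.where(d[:-1] * d[1:] < 0)[0] + 1  # indices where the slope changes sign
    peaks = w[extrema]
    return float(np.sum(np.abs(np.diff(peaks)))) if len(peaks) > 1 else 0.0
```

Note how the Minmax sum grows with the amplitude of the oscillations in the window, which is why it behaves like a naive standard deviation.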
Still, in order to complete the set of features, we add features that have proved to be useful for human activity recognition [5]: mean value, standard deviation, skewness, kurtosis, correlation between each pair of accelerometer axes (not including the magnitude), and the energy of the coefficients of a seven-level wavelet decomposition. In this way, we obtain a 319-dimensional feature vector.
3.2 Classification and Derivation of Importance Measurement
Random Forest [6] is an ensemble classifier that, besides classifying data, can be used for measuring attribute importance. Random Forest builds many classification trees, where each tree votes for a class and the forest chooses the classification having the most votes over all the trees. Each tree is built as follows:

- If the number of cases in the training set is N, N cases are sampled at random with replacement. This sample is the training set for the tree.
- If there are M input variables, a number m ≪ M of variables is selected at random and the best split on these m variables is used to split the node. The value of m is held constant during the construction of the forest.
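A toy sketch of the two steps above (an illustration, not Breiman's implementation):

```python
import numpy as np

def bootstrap_and_feature_subset(n_cases, n_features, m, rng):
    """One tree's training setup: a bootstrap sample plus m random candidate features."""
    sample = rng.integers(0, n_cases, size=n_cases)   # N cases drawn with replacement
    oob = np.setdiff1d(np.arange(n_cases), sample)    # ~1/3 left out (Out-Of-Bag)
    candidates = rng.choice(n_features, size=m, replace=False)
    return sample, oob, candidates
```

The Out-Of-Bag indices returned here are exactly the cases used below to estimate the classification error without a separate validation set.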
When the training set for the current tree is drawn with replacement, about one-third of the cases are left out of the sample. These Out-Of-Bag (OOB) data are used to get an unbiased estimate of the classification error as trees are added to the forest. Random Forest has the advantage of explicitly assigning an importance measurement to each feature. Measuring the importance of attributes is based
on the idea that randomly changing an important attribute among the m selected variables for building a tree affects the classification, while changing an unimportant attribute does not affect it in a significant way. The importance of an attribute for a single tree is computed as the number of correctly classified OOB examples minus the number of correctly classified OOB examples obtained when that attribute is randomly shuffled. The overall importance measure is obtained by dividing the accumulated per-tree importance by the number of trees and multiplying the result by 100.
Using Random Forest, an importance measure of the features has been obtained. In Table 1, the best 20 features out of the 319 are reported with their respective importance values.

Table 1. List of features selected by Random Forest

Feature                     Importance    Feature                   Importance
Mean Value Az,dc            4.64          Mean Value Ay,dc          3.86
MinMax Az,dc                4.61          RMS Velocity Ay,dc        3.67
RMS Velocity Az,dc          4.23          Mean Value Az,b           3.59
RMS Velocity Am,dc          4.20          Mean Value Ax,dc          3.57
RMS Velocity Ax,ac          4.14          MinMax Ax,dc              3.52
Mean Value Am,dc            4.07          MinMax Az,b               3.51
MinMax Ay,dc                3.92          Mean Value Ay,b           3.33
Standard Deviation Ax,b     3.90          RMS Velocity Ax,dc        3.22
MinMax Am,dc                3.89          RMS Velocity Az,b         3.20
Standard Deviation Ax,dc    3.87          MinMax Ay,b               2.96
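The permutation-importance idea can be sketched with scikit-learn as a stand-in (an assumption on our part: the paper follows Breiman's OOB permutation scheme, while sklearn's `permutation_importance` shuffles features on a dataset you supply):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy data: feature 0 carries the class signal, feature 1 is pure noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=400)
X = np.column_stack([y + 0.3 * rng.standard_normal(400),
                     rng.standard_normal(400)])

forest = RandomForestClassifier(n_estimators=100, oob_score=True,
                                random_state=0).fit(X, y)

# Importance = drop in accuracy when one feature is randomly shuffled.
imp = permutation_importance(forest, X, y, n_repeats=10, random_state=0)
```

The informative feature shows a much larger mean accuracy drop than the noise feature, mirroring how Table 1 ranks features by importance.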
4 Results
First we discuss the architecture of our wearable system and then discuss the obtained results.
System architecture: Our wearable system, shown in Figure 2(a), is based on an embedded board. We use Linux as the operating system on the board. A low-cost USB webcam and a Bluetooth accelerometer are connected to the board. The system is powered by a portable lithium battery able to power the system for up to four hours. Users can wear the system as in Figure 2(b), where the directions of the acceleration axes are printed on the picture. More specifically, the Z axis represents the axis concordant with the direction of movement, and the plane defined by the X and Y axes lies on the body of the person. The system works with three modalities: video, audio and accelerometer data. It takes photos, grabs audio continuously, applying a filter for voice removal, and receives data from the accelerometer via Bluetooth. All the sensors can be localized in the same part of the body. In our setting, the sensors are located on the chest.
Data acquisition: Data have been collected from fourteen testers, three women and eleven men, with ages between 27 and 35. For labeling activities, people were asked to annotate the sequential order of the activities they performed and to restart the system between activities. Every time the system starts, the acquired data are named with a new identifier, matching the order in which the testers perform the activities. The system boots in less than 2 minutes and the acquisition starts automatically while the user is already performing the activity. In this way, there are no "border effects" due to starting. The user can stop the acquisition at any moment by pressing the start button again. The collected data set is composed of 33 minutes of walking up/down stairs, 82 minutes of walking, 115 minutes of talking, 44 minutes of standing and 86 minutes of working at a computer.

Fig. 2. (a) The components of the wearable system. (b) The wearable system worn by an experimenter.
Human activity classification: Random Forest selects really meaningful features for classifying activities. The most important features selected are related to the Z axis, that is, the direction of movement. The majority of the features are relative to the DC components of movement, and only the RMS velocity feature relative to the X axis has been selected from the AC components. The information relative to the variation of movement on the X axis can help to discriminate between activities like standing, talking and working at a computer. On the other side, features relative to the variation of movement on the Y axis can help to discriminate between activities like walking and walking up/down stairs. The mean value, Minmax features and RMS velocity are selected for all the DC components of all the time series. Random Forest selects the best features, but it is not able to discriminate between features bringing the same information. For example, all the selected features that have been extracted from the time series without filtering are also selected from the DC time series and, in all cases, the features selected from the DC time series have an importance value bigger than the corresponding value from the series without filtering. Features derived from higher-level statistics (skewness and kurtosis) and features relative to the correlation between axes are the features with the lowest importance.
In order to verify whether the selected features are really informative, we use different classification methods for classifying the five activities. We compare the classification results obtained using Decision Trees, Bagging of 10 Decision Trees, AdaBoost using Decision Trees as base classifiers, and a Random Forest of 10 Decision Trees. All the results are validated by 5-fold cross-validation. The data set Dm has been created using the 20 features selected by the Random Forest classifier. In Figure 3(a) we show the classification accuracy of the classifiers trained on Dm. In Figure 3(b) we show the F-Measure of each activity for every classifier.
As can be seen from the graphics, the best classification accuracy is obtained using Random Forest. The F-Measure obtained for each class shows that each activity can be classified with high precision and recall. In particular, the activities with the best performances are walking and working at a computer. Bagging and Random Forest are the classifiers that give the best performances for each class.

Fig. 3. (a) Classification accuracy for different classifiers. (b) F-Measure for each activity on the motion dataset.
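The comparison protocol described above can be sketched with scikit-learn (our stand-in; the paper does not specify an implementation, and the synthetic data below is only a placeholder for the 20-feature motion dataset Dm):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for Dm: 20 features, 5 activity classes.
X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=5, random_state=0)

classifiers = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Bagging (10 trees)": BaggingClassifier(
        DecisionTreeClassifier(), n_estimators=10, random_state=0),
    "AdaBoost (trees)": AdaBoostClassifier(
        DecisionTreeClassifier(max_depth=3), n_estimators=10, random_state=0),
    "Random Forest (10 trees)": RandomForestClassifier(
        n_estimators=10, random_state=0),
}

# 5-fold cross-validated accuracy, as in the paper's protocol.
scores = {name: cross_val_score(clf, X, y, cv=5).mean()
          for name, clf in classifiers.items()}
```

On real data one would load Dm in place of the synthetic matrix; the cross-validation loop is otherwise unchanged.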
The confusion matrix obtained with the Random Forest classifier is reported in Table 2. Note how similar activities like walking and climbing stairs show some confusion between them. The biggest confusion is obtained between talking and standing, activities that can easily be confused from the perspective of motion.

Table 2. Confusion matrix of Random Forest trained on Dm

            stairs   walking  talking  standing  workingPC
stairs      0.898    0.029    0.004    0.002     0.001
walking     0.075    0.959    0.006    0.002     0.001
talking     0.015    0.007    0.929    0.093     0.012
standing    0.006    0.001    0.039    0.888     0.006
working     0.004    0.001    0.020    0.014     0.977

From Table 2 it can be concluded that all the classifiers have accuracy above
90% using only the motion modality. The Random Forest classifier trained on Dm shows confusion between similar activities like walking and walking up/down stairs, and between talking and standing. The F-Measure does not present significant differences between the classes, which means that the five activities can be recognized with high confidence.
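The per-class F-Measure used above is the harmonic mean of precision and recall; a minimal sketch:

```python
def f_measure(precision, recall):
    """Per-class F-Measure: harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Because it is a harmonic mean, the F-Measure is high only when precision and recall are both high, which is why similar per-class values indicate that no activity is sacrificed for the others.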
5 Conclusions
In this work, a study on the best features for classifying physical activities has been carried out. A new set of features has been taken into account and compared to the features most commonly used for activity recognition in the literature. The Random Forest classifier has been used to evaluate the informativeness of this new set of features. The results obtained show that the new set of features represents a very informative group of features for activity recognition. Using the features selected by Random Forest, different classifiers have been used for evaluating classification performance in activity recognition. Very high classification performances have been reached, up to 94% accuracy using Random Forest. State-of-the-art classification performances ([5], [4]) are higher than 94% when two-stage classification pipelines are used.
The validation of the new set of features has been performed using data collected with a custom wearable system, easy to use and comfortable to wear. The custom wearable device allows experiments to be performed in uncontrolled environments, overcoming the limitation of the laboratory setting. Testers perform activities in the environment they select, without the effort of labeling activities.
Based on these results, obtained using only the motion sensor, future work will add the other sensors to increase the classification performance. We expect that adding further information from the camera and the microphone can help considerably in discriminating between activities like "standing", "talking" and "workingPC", or "walking" and "walking up/down stairs", where the biggest confusions are present. Moreover, we plan to extend the set of human activities in order to address the problem of short-term and long-term human behavior modeling based on accelerometer and video data.
Acknowledgments. This work is partially supported by research grants from the projects TIN2009-14404-C02, La Marato de TV3 082131 and CONSOLIDER.

References
1. Ravi, N., Nikhil, D., Mysore, P., Littman, M.L.: Activity recognition from ac-
celerometer data. In: IAAI, pp. 1541–1546 (2005)
2. Bao, L., Intille, S.S.: Activity recognition from user-annotated acceleration data,
pp. 1–17. Springer, Heidelberg (2004)
3. Choudhury, T., Lamarca, A., Legr, L., Rahimi, A., Rea, A., Borriello, G., Hem-
ingway, B., Koscher, K., Lester, J., Wyatt, D., Haehnel, D.: The Mobile Sensing
Platform: An Embedded Activity Recognition System. IEEE Pervasive Comput-
ing 7, 32–41 (2008)
4. Lester, J., Choudhury, T., Borriello, G.: A practical approach to recognizing
physical activities. In: Fishkin, K.P., Schiele, B., Nixon, P., Quigley, A. (eds.)
PERVASIVE 2006. LNCS, vol. 3968, pp. 1–16. Springer, Heidelberg (2006)
5. Mannini, A., Sabatini, A.M.: Machine Learning Methods for Classifying Human
Physical Activities from on-body sensors. Sensors 10, 1154–1175 (2010)
6. Breiman, L.: Random Forests. Machine Learning 45(1), 5–32 (2001)
7. Krause, A., Siewiorek, D., Smailagic, A., Farringdon, J.: Unsupervised, dynamic identification of Physiological and Activity Context in Wearable Computing. In: Fensel, D., Sycara, K., Mylopoulos, J. (eds.) ISWC 2003. LNCS, vol. 2870. Springer, Heidelberg (2003)
8. Huynh, T., Fritz, M., Schiele, B.: Discovery of Activity Patterns using Topic Mod-
els. In: UbiComp 2008, pp. 10–19 (2008)
9. Clarkson, B., Pentland, A.: Unsupervised Clustering of ambulatory audio and
video. In: ICASSP 1999, pp. 3037–3040 (1999)
10. Casale, P., Pujol, O., Radeva, P.: Face-to-Face Social Activity Detection Using Data Collected with a Wearable Device. In: Araujo, H., Mendonça, A.M., Pinho, A.J., Torres, M.I. (eds.) IbPRIA 2009. LNCS, vol. 5524, pp. 56–63. Springer, Heidelberg (2009)
... In [10], researchers studied the performance of heuristic features for five publicly available datasets, which they labeled as A [44], B [45], C [21], D [46], and E [47]. Besides using the 9 features altogether, they also used only the first 3 or the first 6 heuristic features and recorded the performance of 4 classifiers, Bayesian decision-making (BDM), K-nearest neighbor (KNN), support vector machine (SVM), and artificial neural network (ANN). ...
Full-text available
Many studies have explored divergent deep neural networks in human activity recognition (HAR) using a single accelerometer sensor. Multiple types of deep neural networks, such as convolutional neural networks (CNN), long short-term memory (LSTM), or their hybridization (CNN-LSTM), have been implemented. However, the sensor orientation problem poses challenges in HAR, and the length of windows as inputs for the deep neural networks has mostly been adopted arbitrarily. This paper explores the effect of window lengths with orientation invariant heuristic features on the performance of 1D-CNN-LSTM in recognizing six human activities; sitting, lying, walking and running at three different speeds using data from an accelerometer sensor encapsulated into a smartphone. Forty-two participants performed the six mentioned activities by keeping smartphones in their pants pockets with arbitrary orientation. We conducted an inter-participant evaluation using 1D-CNN-LSTM architecture. We found that the average accuracy of the classifier was saturated to 80 ± 8.07% for window lengths greater than 65 using only four selected simple orientation invariant heuristic features. In addition, precision, recall and F1-measure in recognizing stationary activities such as sitting and lying decreased with increment of window length, whereas we encountered an increment in recognizing the non-stationary activities.
... In comparison, the supervised classification methods with prior categories show good classification performance in remote sensing images, such as naive Bayesian (NB), support vector machine (SVM), random forest (RF), and convolutional neural networks (CNN) (Shi et al., 2016;Bonaccorso, 2017;Zhong et al., 2019;Yan et al., 2021). These supervised methods have been generally used as potential classification models with high accuracy in remote sensing and other areas of research (Talukdar et al., 2020;Antoniadis et al., 2021), such as land cover classification (Tatsumi et al., 2016;Wang et al., 2021), fault diagnosis (Yin and Hou, 2016), deformation prediction , human activity recognition (Casale et al., 2011), etc. Therefore, supervised classification methods are used to classify corn residue-covered areas in this study. ...
Full-text available
The management of crop residue covering is a vital part of conservation tillage, which protects black soil by reducing soil erosion and increasing soil organic carbon. Accurate and rapid classification of corn residue-covered types is significant for monitoring crop residue management. The remote sensing technology using high spatial resolution images is an effective means to classify the crop residue-covered areas quickly and objectively in the regional area. Unfortunately, the classification of crop residue-covered area is tricky because there is intra-object heterogeneity, as a two-edged sword of high resolution, and spectral confusion resulting from different straw mulching ways. Therefore, this study focuses on exploring the multi-scale feature fusion method and classification method to classify the corn residue-covered areas effectively and accurately using Chinese high-resolution GF-2 PMS images in the regional area. First, the multi-scale image features are built by compressing pixel domain details with the wavelet and principal component analysis (PCA), which has been verified to effectively alleviate intra-object heterogeneity of corn residue-covered areas on GF-2 PMS images. Second, the optimal image dataset (OID) is identified by comparing model accuracy based on the fusion of different features. Third, the 1D-CNN_CA method is proposed by combining one-dimensional convolutional neural networks (1D-CNN) and attention mechanisms, which are used to classify corn residue-covered areas based on the OID. Comparison of the naive Bayesian (NB), random forest (RF), support vector machine (SVM), and 1D-CNN methods indicate that the residue-covered areas can be classified effectively using the 1D-CNN-CA method with the highest accuracy (Kappa: 96.92% and overall accuracy (OA): 97.26%). 
Finally, the most appropriate machine learning model and the connected domain calibration method are combined to improve the visualization, which are further used to classify the corn residue-covered areas into three covering types. In addition, the study showed the superiority of multi-scale image features by comparing the contribution of the different image features in the classification of corn residue-covered areas.
... Acceleration signals obtained from accelerometers have been used for HAR [318], due to their robustness against occlusion, viewpoint, lighting, and background variations, etc. Specifically, a tri-axial accelerometer can return an estimation of acceleration along the x, y, and z axes, that can be used to perform human activity analysis [319]. ...
Full-text available
Human Action Recognition (HAR) aims to understand human behavior and assign a label to each action. It has a wide range of applications, and therefore has been attracting increasing attention in the field of computer vision. Human actions can be represented using various data modalities, such as RGB, skeleton, depth, infrared, point cloud, event stream, audio, acceleration, radar, and WiFi signal, which encode different sources of useful yet distinct information and have various advantages depending on the application scenarios. Consequently, lots of existing works have attempted to investigate different types of approaches for HAR using various modalities. In this paper, we present a comprehensive survey of recent progress in deep learning methods for HAR based on the type of input data modality. Specifically, we review the current mainstream deep learning methods for single data modalities and multiple data modalities, including the fusion-based and the co-learning-based frameworks. We also present comparative results on several benchmark datasets for HAR, together with insightful observations and inspiring future research directions.
... The increase in cancer, diabetes, heart disease, neurological conditions and chronic diseases around the world is increasing the number of wearable products. (Maurer, et al., 2006) (Casale, et al., 2011) (Dinh, et al., 2009) (Cole, et al., 1999) (Asada, et al., 2003). With the exception of wristbands, every category of wearable devices has increased year over year since 2019 indicating a shift in consumer preference. ...
Conference Paper
Full-text available
In the globalizing world, the disappearance of borders and the increasing trade volume have made logistics activities and logistics sector very important. In order for companies to have a say in international markets under tough competition conditions, they need to minimize logistics activities. Logistic villages play an important role in developing countries in order to carry out logistics activities with the least cost. Establishment of a logistics village that will prevent traffic congestion and reduce commercial costs, especially in a metropolitan city like Istanbul, which has a great historical background and depth, is the center of the world for logistics, connects two continents, has a high commercial and human density, has become an indispensable need. In this study, since it is aimed to provide information about logistics villages, to provide information about the current status of mega projects in Istanbul and to propose a logistics village model to Northern Marmara, a wide document review was made on these issues.Statistical data on published books, journals, articles, theses and some research results on related subjects were obtained. This study and its data were analyzed and evaluated for use in our study. As a result of the evaluations, an integrated logistics village model proposal was presented to the mega projects being implemented.
... Additionally, it will not raise users' privacy concerns. As a result, sensor-based techniques are more appropriate for recognizing human activities [5]. This work is primarily concerned with the issue of sensor-based HAR. ...
Full-text available
Numerous learning-based techniques for effective human behavior identification have emerged in recent years. These techniques focus only on fundamental human activities, excluding transitional activities due to their infrequent occurrence and short period. Nevertheless, postural transitions play a critical role in implementing a system for recognizing human activity and cannot be ignored. This study aims to present a hybrid deep residual model for transitional activity recognition utilizing signal data from wearable sensors. The developed model enhances the ResNet model with hybrid Squeeze-and-Excitation (SE) residual blocks combining a Bidirectional Gated Recurrent Unit (BiGRU) to extract deep spatio-temporal features hierarchically, and to distinguish transitional activities efficiently. To evaluate recognition performance, the experiments are conducted on two public benchmark datasets (HAPT and MobiAct v2.0). The proposed hybrid approach achieved classification accuracies of 98.03% and 98.92% for the HAPT and MobiAct v2.0 datasets, respectively. Moreover, the outcomes show that the proposed method is superior to the state-of-the-art methods in terms of overall accuracy. To analyze the improvement, we have investigated the effects of combining SE modules and BiGRUs into the deep residual network. The findings indicates that the SE module is efficient in improving transitional activity recognition.
... Classical Techniques: The system in [36] uses feature engineering and domain-specific hand-crafted features (e.g., the root mean squared value of the integral of acceleration over a window, and the mean value of min-max sums) over the low-level sensor data. In [37], the impact of window size on HAR performance is investigated. ...
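Features of this kind are cheap to compute per window. A minimal sketch, assuming a sliding window of raw samples at a known sampling rate; the cumulative-sum integration and the sub-window reading of the "min-max sum" are illustrative assumptions, not the exact definitions used in [36]:

```python
import numpy as np

def window_features(acc, fs=52.0):
    """Two illustrative hand-crafted features over one window of
    accelerometer samples (acc: 1-D array, fs: sampling rate in Hz)."""
    # Numerically integrate acceleration, then take the root mean
    # square of the integrated signal.
    vel = np.cumsum(acc) / fs
    rms_integral = float(np.sqrt(np.mean(vel ** 2)))
    # One plausible reading of a "min-max sum": the mean peak-to-peak
    # range over a few sub-windows of the window.
    parts = np.array_split(acc, 4)
    minmax = float(np.mean([p.max() - p.min() for p in parts]))
    return rms_integral, minmax
```

Stacking such per-window values into a feature matrix gives the fixed-length input expected by any off-the-shelf classifier.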
Full-text available
The World Health Organization reported that face touching is a primary source of infection transmission for viral diseases, including COVID-19, seasonal influenza, swine flu, and the Ebola virus. Thus, people have been advised to avoid this activity to break the viral transmission chain. However, empirical studies have shown that it is difficult or impossible to avoid, as it is an unconscious human habit. This motivates means of automatically predicting the occurrence of such activity. In this paper, we propose SafeSense, a cross-subject face-touch prediction system that combines the sensing capabilities of smartwatches and smartphones. The system includes innovative modules for automatically labeling the smartwatches' sensor measurements using smartphones' proximity sensors during normal phone use. Additionally, SafeSense uses a multi-task learning approach based on autoencoders for learning a subject-invariant representation without any assumptions about the target subjects. SafeSense also improves the deep model's generalization ability and incorporates different modules to boost the per-subject system's accuracy and robustness at run-time. We evaluated the proposed system on ten subjects using three different smartwatches and their connected phones. Results show that SafeSense can obtain prediction accuracy as high as 97.9% with an F1-score of 0.98. This outperforms the state-of-the-art techniques in all the considered scenarios without extra data collection overhead. These results highlight the feasibility of the proposed system for boosting public safety.
Wearable devices are contributing heavily towards the proliferation of data and creating a rich minefield for data analytics. Recent trends in the design of wearable devices include several embedded sensors which also provide useful data for many applications. This research presents results obtained from studying human-activity related data collected from wearable devices. The activities considered for this study were working at the computer, standing and walking, standing, walking, walking up and down the stairs, and talking while walking. The research entails the use of a portion of the data to train machine learning algorithms and build a model. The rest of the data is used as test data for predicting the activity of an individual. Details of data collection, processing, and presentation are also discussed. After studying the literature and the data sets, a Random Forest machine learning algorithm was determined to be the most applicable algorithm for analyzing data from wearable devices. The software used in this research includes the R statistical package and the SensorLog app.
Conference Paper
Full-text available
Activity recognition fits within the bigger framework of context awareness. In this paper, we report on our efforts to recognize user activity from accelerometer data. Activity recognition is formulated as a classification problem. Performance of base-level classifiers and meta-level classifiers is compared. Plurality Voting is found to perform consistently well across different settings.
Conference Paper
Full-text available
In this work the feasibility of building a socially aware badge that learns from user activities is explored. A wearable multisensor device has been prototyped for collecting data about user movements and photos of the environment where the user acts. Using motion data, speaking and other activities have been classified. Images have been analysed in order to complement motion data and help detect social behaviours. A face detector and an activity classifier are both used to detect whether users engage in social activity while wearing the device. Good results encourage the improvement of the system at both the hardware and software level.
Conference Paper
Full-text available
Context-aware computing describes the situation where a wearable/mobile computer is aware of its user's state and surroundings and modifies its behavior based on this information. We designed, implemented and evaluated a wearable system which can determine typical user context and context transition probabilities online and without external supervision. The system relies on techniques from machine learning, statistical analysis and graph algorithms. It can be used for online classification and prediction. Our results indicate the power of our method to determine a meaningful user context model while only requiring data from a comfortable physiological sensor device.
Full-text available
The use of on-body wearable sensors is widespread in several academic and industrial domains. Of great interest are their applications in ambulatory monitoring and pervasive computing systems; here, some quantitative analysis of human motion and its automatic classification are the main computational tasks to be pursued. In this paper, we discuss how human physical activity can be classified using on-body accelerometers, with a major emphasis devoted to the computational algorithms employed for this purpose. In particular, we motivate our current interest for classifiers based on Hidden Markov Models (HMMs). An example is illustrated and discussed by analysing a dataset of accelerometer time series.
Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International conference, ***, 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression.
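The two sources of randomness described here (a bootstrap sample per tree, and a random subset of features considered for splitting) can be sketched with one-split trees standing in for full trees. This is a toy illustration of the mechanism under those simplifications, not Breiman's actual procedure; all function names are ours:

```python
import numpy as np

def fit_stump(X, y, feat_ids):
    """One-split 'tree': pick the best median-threshold split among a
    random subset of feature indices (feat_ids)."""
    best = None
    for j in feat_ids:
        thr = np.median(X[:, j])
        left, right = y[X[:, j] <= thr], y[X[:, j] > thr]
        if len(left) == 0 or len(right) == 0:
            continue
        l_lab = np.bincount(left).argmax()   # majority label per side
        r_lab = np.bincount(right).argmax()
        acc = (np.sum(left == l_lab) + np.sum(right == r_lab)) / len(y)
        if best is None or acc > best[0]:
            best = (acc, j, thr, l_lab, r_lab)
    return best[1:]

def random_forest(X, y, n_trees=25, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    k = max(1, int(np.sqrt(d)))              # features tried per tree
    forest = []
    for _ in range(n_trees):
        boot = rng.integers(0, n, n)         # bootstrap sample of rows
        feats = rng.choice(d, k, replace=False)
        forest.append(fit_stump(X[boot], y[boot], feats))
    return forest

def predict(forest, X):
    # Each stump votes; the forest returns the plurality label.
    votes = np.array([np.where(X[:, j] <= thr, l, r)
                      for j, thr, l, r in forest])
    return np.array([np.bincount(col).argmax() for col in votes.T])
```

Averaging many high-variance, decorrelated trees is what drives the generalization behavior described in the abstract; real implementations grow full trees and randomize features at every split, not once per tree.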
Conference Paper
We are developing a personal activity recognition system that is practical, reliable, and can be incorporated into a variety of health-care related applications ranging from personal fitness to elder care. To make our system appealing and useful, we require it to have the following properties: (i) it needs data from only a single body location, and not necessarily from the same point for every user; (ii) it should work out of the box across individuals, with personalization only enhancing its recognition abilities; and (iii) it should be effective even with a cost-sensitive subset of the sensors and data features. In this paper, we present an approach to building a system that exhibits these properties and provide evidence based on data for 8 different activities collected from 12 different subjects. Our results indicate that the system has an accuracy rate of approximately 90% while meeting our requirements. We are now developing a fully embedded version of our system based on a cell-phone platform augmented with a Bluetooth-connected sensor board.
Conference Paper
In this work we propose a novel method to recognize daily routines as a probabilistic combination of activity patterns. The use of topic models enables the automatic discovery of such patterns in a user's daily routine. We report experimental results that show the ability of the approach to model and recognize daily routines without user annotation.
Conference Paper
In this work, algorithms are developed and evaluated to detect physical activities from data acquired using five small biaxial accelerometers worn simultaneously on different parts of the body. Acceleration data was collected from 20 subjects without researcher supervision or observation. Subjects were asked to perform a sequence of everyday tasks but not told specifically where or how to do them. Mean, energy, frequency-domain entropy, and correlation of acceleration data were calculated, and several classifiers using these features were tested. Decision tree classifiers showed the best performance, recognizing everyday activities with an overall accuracy rate of 84%. The results show that although some activities are recognized well with subject-independent training data, others appear to require subject-specific training data. The results suggest that multiple accelerometers aid in recognition because conjunctions in acceleration feature values can effectively discriminate many activities. With just two biaxial accelerometers - thigh and wrist - the recognition performance dropped only slightly. This is the first work to investigate the performance of recognition algorithms with multiple, wire-free accelerometers on 20 activities using datasets annotated by the subjects themselves.
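The four per-window feature types named in this abstract (mean, energy, frequency-domain entropy, and inter-axis correlation) each take a line or two to compute. A sketch for one biaxial window; the FFT-based energy and entropy normalizations below are common conventions assumed for illustration, not taken from the paper:

```python
import numpy as np

def axis_features(ax, ay):
    """Mean, energy, and frequency-domain entropy per axis, plus the
    correlation between the two axes, for one window of samples."""
    feats = {}
    for name, sig in (("x", ax), ("y", ay)):
        feats[f"mean_{name}"] = float(np.mean(sig))
        # Power spectrum of the mean-removed signal.
        spec = np.abs(np.fft.rfft(sig - np.mean(sig))) ** 2
        feats[f"energy_{name}"] = float(np.sum(spec) / len(sig))
        # Normalize the spectrum to a distribution, take its entropy
        # (guarding against an all-zero spectrum for constant input).
        total = np.sum(spec)
        p = spec / total if total > 0 else np.full_like(spec, 1.0 / len(spec))
        feats[f"entropy_{name}"] = float(-np.sum(p * np.log2(p + 1e-12)))
    feats["corr_xy"] = float(np.corrcoef(ax, ay)[0, 1])
    return feats
```

Concatenating these values across all sensor axes yields the feature vector each window contributes to the decision tree classifier.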
Conference Paper
A truly personal and reactive computer system should have access to the same information as its user, including the ambient sights and sounds. To this end, we have developed a system for extracting events and scenes from natural audio/visual input. We find our system can (without any prior labeling of data) cluster the audio/visual data into events, such as passing through doors and crossing the street. We also hierarchically cluster these events into scenes, obtaining clusters that correlate with visiting the supermarket or walking down a busy street.