Article
PDF Available

A study of human activity recognition using adaboost classifiers on WISDM dataset

Authors:
  • Anuradha Engineering College
  • Director, IIIT Kottayam, Kerala, India (Institute of National Importance)

Abstract and Figures

Human activity recognition is attracting much attention because of its applications in many areas, such as health care, adaptive interfaces, and smart environments. Today's smartphone is well equipped with an advanced processor, more memory, a powerful battery, and built-in sensors. This opens up new data mining opportunities for recognizing activities of daily living. The benchmark dataset considered for this work was acquired from the WISDM laboratory and is available in the public domain. We performed experiments using the AdaBoost.M1 algorithm with Decision Stump, Hoeffding Tree, Random Tree, J48, Random Forest, and REP Tree as base classifiers to classify six activities of daily life using the Weka tool. We then examined the test output from the Weka Experimenter for these six classifiers. We found that using AdaBoost.M1 with Random Forest, J48, and REP Tree improves overall accuracy. We showed that the difference in accuracy of the Random Forest, REP Tree, and J48 algorithms compared to Decision Stump and Hoeffding Tree is statistically significant. Since the accuracy of these algorithms is higher than that of Decision Stump and Hoeffding Tree, we can say that they achieved a statistically significantly better result than the Decision Stump, Hoeffding Tree, and Random Tree baselines.
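The paper's experiments were run in Weka with AdaBoost.M1; the following scikit-learn sketch is only an illustrative analogue of that setup (a depth-1 tree stands in for Decision Stump, a full decision tree for J48; Hoeffding Tree and REP Tree have no direct scikit-learn counterpart), with placeholder data rather than the actual WISDM features.

    # Illustrative sketch only: the paper used Weka's AdaBoost.M1; this mirrors the
    # idea of boosting different base learners in scikit-learn on placeholder data.
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score

    # X: per-window feature vectors, y: one of six activity labels (placeholders).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 40))
    y = rng.integers(0, 6, size=600)   # walking, jogging, upstairs, downstairs, sitting, standing

    base_learners = {
        "decision_stump": DecisionTreeClassifier(max_depth=1),   # Decision Stump analogue
        "full_tree": DecisionTreeClassifier(),                   # rough stand-in for J48 (C4.5)
        "random_forest": RandomForestClassifier(n_estimators=10),
    }

    for name, base in base_learners.items():
        # older scikit-learn versions call this parameter base_estimator
        boosted = AdaBoostClassifier(estimator=base, n_estimators=50)
        scores = cross_val_score(boosted, X, y, cv=10)            # 10-fold cross-validation
        print(f"AdaBoost + {name}: mean accuracy = {scores.mean():.3f}")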
... A variety of DL methods have been successfully used for HAR, such as recurrent neural networks (RNN) [24,26,28,29], long short-term memory (LSTM) [22,23,30], autoencoder (AE) [4,20], deep neural network (DNN) [1,9,13], and convolutional neural network (CNN) [31,32]. Classical machine learning methods have also been widely used [12,35,37–40,42,44,45], including naive Bayes [37,45,46], logistic regression [33,34,39,48,49], k-nearest neighbors [35–37,42,45], AdaBoost [47], and random forest [12,35–39,43,50]. ...
... To demonstrate the superiority of the proposed approach (HARSI) over the previous approaches, we compared it with the classical machine learning methods (SVM, DT, NB, KNN, MLP, AdaBoost, and RF) [12,33–50] and also compared it with the state-of-the-art methods [5,12,13,24,26,34, …] on the same dataset. ...
... Table 4 lists the related studies with their methods and the corresponding accuracy rates. It can be seen from the table that the proposed HARSI method outperformed the other methods [12,33–50] with a 13.72% improvement on average. Employing HARSI achieved higher accuracy (98%) than the traditional machine learning models on the same dataset. ...
Article
Full-text available
Traditional indoor human activity recognition (HAR) has been defined as a time-series data classification problem and requires feature extraction. The current indoor HAR systems still lack transparent, interpretable, and explainable approaches that can generate human-understandable information. This paper proposes a new approach, called Human Activity Recognition on Signal Images (HARSI), which defines the HAR problem as an image classification problem to improve both explainability and recognition accuracy. The proposed HARSI method collects sensor data from the Internet of Things (IoT) environment and transforms the raw signal data into visually understandable images to take advantage of the strengths of convolutional neural networks (CNNs) in handling image data. This study focuses on the recognition of symmetric human activities, including walking, jogging, moving downstairs, moving upstairs, standing, and sitting. The experimental results carried out on a real-world dataset showed that a significant improvement (13.72%) was achieved by the proposed HARSI model compared to the traditional machine learning models. The results also showed that our method (98%) outperformed the state-of-the-art methods (90.94%) in terms of classification accuracy.
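The exact signal-to-image transform used by HARSI is not described in this snippet, so the sketch below uses a per-axis spectrogram as one plausible way to turn a window of inertial data into an image-like array that a CNN could classify; the sampling rate and window length are illustrative assumptions.

    # Hypothetical "signal image" sketch: one time-frequency image per axis,
    # stacked into channels. HARSI's actual transform may differ.
    import numpy as np
    from scipy.signal import spectrogram

    fs = 20                                                        # assumed 20 Hz sampling rate
    window = np.random.default_rng(1).normal(size=(3, 10 * fs))    # 10 s of x, y, z acceleration

    channels = []
    for axis in window:
        f, t, Sxx = spectrogram(axis, fs=fs, nperseg=32, noverlap=16)
        channels.append(np.log1p(Sxx))                             # log scale for dynamic range

    image = np.stack(channels, axis=-1)                            # H x W x 3 array, CNN-ready
    print(image.shape)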
... The probability of the i-th class is P(c_i). To assess the success of our work, a variety of metrics have been considered, such as accuracy [11], recall [11], F1-score [12], ROC area [12], PRC area [13], MCC [13], and root mean squared error (RMSE) [13]. ...
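The metrics listed in this snippet map directly onto standard scikit-learn functions; the sketch below shows how they could be computed on placeholder predictions (binary case for brevity).

    # Illustrative computation of the metrics named above; y_true / y_pred / y_score
    # are placeholder arrays, not values from any of the cited papers.
    import numpy as np
    from sklearn.metrics import (accuracy_score, recall_score, f1_score,
                                 roc_auc_score, average_precision_score,
                                 matthews_corrcoef, mean_squared_error)

    y_true  = np.array([0, 1, 1, 0, 1, 0, 1, 1])
    y_pred  = np.array([0, 1, 0, 0, 1, 1, 1, 1])
    y_score = np.array([0.2, 0.9, 0.4, 0.1, 0.8, 0.6, 0.7, 0.95])  # positive-class probabilities

    print("accuracy :", accuracy_score(y_true, y_pred))
    print("recall   :", recall_score(y_true, y_pred))
    print("F1-score :", f1_score(y_true, y_pred))
    print("ROC area :", roc_auc_score(y_true, y_score))
    print("PRC area :", average_precision_score(y_true, y_score))  # precision-recall area
    print("MCC      :", matthews_corrcoef(y_true, y_pred))
    print("RMSE     :", np.sqrt(mean_squared_error(y_true, y_pred)))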
Article
A person's ability to lead a stable, affluent life is made possible through education. In the same way, a country's development may be influenced by the proportion of its population with a higher level of education. This proportion, however, declines because of early school dropouts. Furthermore, a nation's resources are diminished when a student cannot continue and drops out. Although the number of dropouts is constantly falling, it is still very challenging for educational institutions to identify these individuals. An educational institution's first priority is to improve student performance; therefore, it makes sure that every student graduates on time. Nevertheless, student dropout is a significant barrier that works against this goal. Understanding the causes of dropouts is necessary to find a solution. The causes differ from one student to another; some are connected to the student's workload and mental fortitude. Various approaches using Decision Tree (DT) methodologies have been suggested and examined in this study.
... RF was considered a fast bagging classifier, as reported by Nurwulan and Selamaj (2020). The boosting classifiers such as AdaBoost and GBM were reported by Walse et al. (2016). Three ensemble classifiers based on bagging, boosting, and stacking were examined for recognizing human activity using smartphones (Bulbul et al., 2018). ...
... Human activities have already been classified using methods such as DT and SVM (Ignatov and Strijov, 2016;Walse, Dharaskar and Thakare, 2016;Agarwal and Alam, 2022;Gaur and Dubey, 2022). Catal, Tufecki, Pirmit, and Kocabag (2015) proposed a method for HAR that combines several classification approaches using an ensemble of classifiers to maximise the accuracy of each classification method. ...
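As a rough illustration of the three ensemble styles compared in these works (bagging, boosting, stacking), the following scikit-learn sketch evaluates each on synthetic data; it is not the configuration used in any of the cited studies.

    # Minimal sketch of bagging, boosting, and stacking ensembles on synthetic data.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import (BaggingClassifier, AdaBoostClassifier,
                                  StackingClassifier, RandomForestClassifier)
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=500, n_features=20, n_classes=3,
                               n_informative=8, random_state=0)

    ensembles = {
        "bagging":  BaggingClassifier(DecisionTreeClassifier(), n_estimators=30),
        "boosting": AdaBoostClassifier(n_estimators=100),
        "stacking": StackingClassifier(
            estimators=[("rf", RandomForestClassifier(n_estimators=50)),
                        ("dt", DecisionTreeClassifier())],
            final_estimator=LogisticRegression(max_iter=1000)),
    }

    for name, model in ensembles.items():
        print(name, cross_val_score(model, X, y, cv=5).mean())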
Book
Full-text available
The proceeding of International Conference on Social and Applied Sciences (ICSAS2022) "Sustainable Development with Ethical Practices and Smart Technologies"
... Different techniques such as support vector machines and decision trees [14] are trained to perform segregation tasks by using "shallow features". Statistical parameters, symbolic representation [17], and basis transform coding [6] are some of the "shallow features" that are capable of describing time-series data. A method has been proposed which combines many classification methods to increase the accuracy and performance of the device. ...
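A minimal sketch of what such "shallow features" might look like for a window of tri-axial accelerometer data is given below; the exact feature set differs between the cited works.

    # Illustrative shallow feature extraction from one window of tri-axial data.
    import numpy as np

    def shallow_features(window):
        """window: array of shape (3, n_samples) holding x, y, z acceleration."""
        feats = []
        for axis in window:
            feats += [axis.mean(), axis.std(), np.abs(axis).mean()]   # basic statistics
            feats.append(np.sum(axis ** 2) / len(axis))               # signal energy
        # pairwise correlations between axes help separate, e.g., walking vs. stairs
        feats += [np.corrcoef(window[i], window[j])[0, 1]
                  for i, j in [(0, 1), (0, 2), (1, 2)]]
        return np.array(feats)

    demo = np.random.default_rng(2).normal(size=(3, 200))   # placeholder 10 s window at 20 Hz
    print(shallow_features(demo).shape)                     # 15 features per window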
Article
Due to advancements in technology, the availability of resources, and the increased use of on-node sensors, enormous amounts of data are being generated. This physiological information needs to be analyzed and classified with efficient and effective approaches such as deep learning and artificial intelligence. Human Activity Recognition (HAR) is assuming a dominant role in sports, security, anti-crime, and healthcare, and also in environmental applications like wildlife observation. Most techniques work well for offline processing rather than real-time processing. Few approaches provide maximum accuracy for real-time processing of large-scale data; one of the promising approaches is deep learning. Limited resources are one of the reasons that restrict the use of deep learning on low-power devices that can be worn on the body. Deep learning implementations are known to produce precise results across different computing systems. In this paper, we propose a deep learning approach that integrates features and data learned from inertial sensors with complementary knowledge obtained from a collection of shallow features, making accurate real-time activity classification possible. The aim of this integrated design is to eliminate the obstacles to using deep learning methods for real-time analysis. Before passing the data into the deep learning framework, we perform spectral analysis to optimize the proposed methodology for on-node computation. The accuracy of the combined approach is tested on datasets obtained from laboratory and real-world controlled and uncontrolled environments. Our results demonstrate the validity of the methodology on various human activity datasets, outperforming other techniques, including the two strategies used within our combined pipeline. We also show that our integrated design's classification times are consistent with on-node real-time analysis criteria on smartphones and wearable technology.
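One hedged reading of the spectral-analysis step described above is a per-axis FFT magnitude spectrum computed before the deep learning stage; the sketch below combines such spectra with a few shallow statistics into a single input vector (the authors' actual pipeline may differ).

    # Hedged sketch of spectral preprocessing plus shallow statistics; the
    # resulting vector is the kind of input a compact deep model could consume.
    import numpy as np

    fs = 20                                                        # assumed sampling rate
    window = np.random.default_rng(3).normal(size=(3, 10 * fs))    # x, y, z acceleration

    spectral = []
    for axis in window:
        mag = np.abs(np.fft.rfft(axis * np.hanning(len(axis))))    # windowed FFT magnitude
        spectral.append(mag / mag.sum())                           # normalise per axis
    spectral = np.concatenate(spectral)

    shallow = np.concatenate([window.mean(axis=1), window.std(axis=1)])  # simple stats
    model_input = np.concatenate([spectral, shallow])              # combined representation
    print(model_input.shape)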
... Therefore, they can provide new opportunities in HAR research [8–10]. Put in plain terms, it involves the use of diverse sensing technologies to collect and categorize user activities in different domains, from medical applications, home monitoring, and assisted living to sports and leisure applications [11–15]. The human activity to be studied can be identified by using various sensors placed on the individual's body. ...
Article
Full-text available
The field of Human Activity Recognition (HAR) is an active research field in which methods are being developed to understand human behavior by interpreting features obtained from various sources; these activities can be recognized using interactive sensors that are affected by human movement. Sensors can be embedded in smartphones or Personal Digital Assistants (PDAs). The great increase in smartphone users, the growing sensing capability of these devices, and the fact that users usually carry their smartphones with them make HAR more important and widely accepted. In this survey, a number of previous studies are reviewed and analyzed: we prepared a comparison of the research works conducted over the period 2010-2020 on human activity recognition using smartphone sensors. Comparison charts highlight their most important aspects, such as the type of sensor used, activities, sensor placement, HAR-system type (offline, online), computing device, classifier (type of algorithm), and system accuracy levels.
... Franco et al. proposed a descriptor for the human action recognition system using skeletal data captured by Kinect sensor [51]. Many researchers have proposed Adaboost for the classification of human postures [24,35,40,63]. ...
Article
Full-text available
Automatic human posture recognition in surveillance videos has real-world applications in monitoring old-homes, restoration centers, hospitals, disability, and child-care centers. It also has applications in other areas such as security and surveillance, sports, and abnormal activity recognition. Human posture recognition is a challenging problem due to occlusion, background clutter, illumination variations, camouflage, and noise in the captured video signal. In the current study, which is an extension of our previous work (Ali et al. Sensors, 18(6):1918, 2018), we propose a novel combination of a number of spatio-temporal features computed over human blobs in a temporal window. These features include aspect ratios, shape descriptors, geometric centroids, ellipse axes ratio, silhouette angles, and silhouette speed. In addition to these features, we also exploit the radon transform to get better shape-based analysis. In order to obtain improved posture classification accuracy, we used the J48 classifier under a boosting framework by employing the AdaBoost algorithm. The proposed algorithm is compared with eighteen existing state-of-the-art approaches on four publicly available datasets including MCF, UR Fall detection, KARD, and NUCLA. Our results demonstrate the excellent performance of the proposed algorithm compared to these existing methods.
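As a toy illustration of the radon-transform shape analysis mentioned above, the sketch below projects a synthetic binary silhouette with scikit-image and derives a simple per-angle descriptor; the paper's actual features are richer.

    # Toy radon-transform shape descriptor on a synthetic binary silhouette.
    import numpy as np
    from skimage.transform import radon

    silhouette = np.zeros((64, 64))
    silhouette[10:54, 24:40] = 1.0                       # crude upright "person" blob

    theta = np.linspace(0.0, 180.0, 36, endpoint=False)  # projection angles
    sinogram = radon(silhouette, theta=theta)            # one projection per angle

    # A simple rotation-sensitive shape descriptor: per-angle projection energy.
    shape_descriptor = (sinogram ** 2).sum(axis=0)
    print(shape_descriptor.shape)                        # (36,)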
... It was tested that AdaBoost combined with C4.5 gave an accuracy of 94.04%. A similar technique with slight modification, combining the AdaBoost algorithm with decision stump (DS), Hoeffding tree (HT), random tree (RT), J48, random forest (RF), and reduced error pruning (REP) tree, was discussed to classify six activities of daily life by using the Weka tool [8]. Bayat et al. [9] used a single triaxial accelerometer to obtain accurate recognition. ...
Article
Full-text available
In recent times, fitness trackers and smartphones equipped with different sensors like gyroscopes, accelerometers, Global Positioning System sensors, and programs are used for recognizing human activities. In this paper, the results collected from these devices are used to design a system that can have an application in monitoring a person’s health. Such systems take the raw sensor signals as input, preprocess them, and use machine learning techniques to output the state of the user with minimum error. The objective of this paper is to compare the performance of different algorithms: Logistic Regression, Support Vector Machine, k-Nearest Neighbor, and Random Forest. The algorithms are trained and tested with the original number of features as well as with a transformed (reduced) number of features. The data with a smaller number of features are then used to visualize the high-dimensional data: each data point in the high-dimensional data is mapped to two-dimensional data using the t-distributed stochastic neighbour embedding (t-SNE) technique. Overall, the high-dimensional data are first visualized, and the models' performance is compared across the different algorithms and different numbers of coordinates.
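The t-SNE mapping described above corresponds to the following scikit-learn sketch, shown here on synthetic feature vectors rather than the paper's sensor-derived features.

    # Minimal t-SNE sketch: high-dimensional feature vectors mapped to 2-D points.
    import numpy as np
    from sklearn.manifold import TSNE

    rng = np.random.default_rng(4)
    X = rng.normal(size=(300, 60))                 # high-dimensional feature vectors (placeholder)
    labels = rng.integers(0, 4, size=300)          # activity labels (placeholder)

    embedding = TSNE(n_components=2, perplexity=30, init="pca",
                     random_state=0).fit_transform(X)
    print(embedding.shape)                         # (300, 2) points ready for a scatter plot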
... Participants put a smartphone in their pocket to record activities. During these activities, the accelerometer sensor's sampling rate was maintained at 20 Hz [56]. ...
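Assuming, for illustration, fixed 10-second windows over the 20 Hz stream, the segmentation step could look like the sketch below; the window length is an assumption, not a detail taken from [56].

    # Sketch of segmenting a 20 Hz accelerometer stream into fixed-length windows.
    import numpy as np

    fs = 20                        # samples per second
    window_seconds = 10            # illustrative window length
    window_len = fs * window_seconds

    stream = np.random.default_rng(5).normal(size=(3, 60 * fs))   # one minute of x, y, z data

    n_windows = stream.shape[1] // window_len
    windows = stream[:, :n_windows * window_len].reshape(3, n_windows, window_len)
    windows = windows.transpose(1, 0, 2)        # (n_windows, 3 axes, 200 samples each)
    print(windows.shape)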
Article
Full-text available
Human activity recognition (HAR) has been of interest in recent years due to the growing demands in many areas. Applications of HAR include healthcare systems to monitor activities of daily living (ADL) (primarily due to the rapidly growing population of the elderly), security environments for automatic recognition of abnormal activities to notify the relevant authorities, and improving human-computer interaction. HAR research can be classified according to the data acquisition tools (sensors or cameras), methods (handcrafted methods or deep learning methods), and the complexity of the activity. In the healthcare system, HAR based on wearable sensors is a new technology that consists of three essential parts worth examining: the location of the wearable sensor, data preprocessing (feature calculation, extraction, and selection), and the recognition methods. This survey aims to examine all aspects of HAR based on wearable sensors, thus analyzing the applications, challenges, datasets, approaches, and components. It also provides coherent categorizations, purposeful comparisons, and systematic architecture. This paper then performs qualitative evaluations of the approaches against the criteria considered in this system and provides a comprehensive review of HAR systems. Therefore, this survey is more extensive and coherent than recent surveys in this field.
Conference Paper
Full-text available
Human activity recognition (AR) has begun to mature as a field, but for AR research to thrive, large, diverse, high quality, AR data sets must be publicly available and AR methodology must be clearly documented and standardized. In the process of comparing our AR research to other efforts, however, we found that most AR data sets are sufficiently limited as to impact the reliability of existing research results, and that many AR research papers do not clearly document their experimental methodology and often make unrealistic assumptions. In this paper we outline problems and limitations with AR data sets and describe the methodology problems we noticed, in the hope that this will lead to the creation of improved and better documented data sets and improved AR experimental methodology. Although we cover a broad array of methodological issues, our primary focus is on an often overlooked factor, model type, which determines how AR training and test data are partitioned and how AR models are evaluated. Our prior research indicates that personal, hybrid, and impersonal/universal models yield dramatically different performance [30], yet many research studies do not highlight or even identify this factor. We make concrete recommendations to address these issues and also describe our own publicly available AR data sets.
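The model-type distinction emphasized here (personal vs. impersonal/universal) comes down to how subjects are assigned to training and test partitions; the hedged sketch below shows both splits with scikit-learn, using hypothetical subject IDs.

    # Impersonal/universal models must be evaluated on subjects unseen during
    # training, which a group-aware split enforces; personal models train and
    # test within one subject's own data. Data and IDs here are placeholders.
    import numpy as np
    from sklearn.model_selection import GroupShuffleSplit, train_test_split

    rng = np.random.default_rng(6)
    X = rng.normal(size=(1000, 40))
    y = rng.integers(0, 6, size=1000)
    subjects = rng.integers(0, 36, size=1000)        # one subject ID per example

    # Impersonal model: whole subjects held out for testing.
    gss = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
    train_idx, test_idx = next(gss.split(X, y, groups=subjects))
    assert set(subjects[train_idx]).isdisjoint(subjects[test_idx])

    # Personal model: train and test on a single subject's data.
    mask = subjects == 0
    Xp_train, Xp_test, yp_train, yp_test = train_test_split(
        X[mask], y[mask], test_size=0.3, random_state=0)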
Conference Paper
Full-text available
Activity Recognition (AR), which identifies the activity that a user performs, is attracting a tremendous amount of attention, especially with the recent explosion of smart mobile devices. These ubiquitous mobile devices, most notably but not exclusively smartphones, provide the sensors, processing, and communication capabilities that enable the development of diverse and innovative activity recognition-based applications. However, although there has been a great deal of research into activity recognition, surprisingly little practical work has been done in the area of applications in mobile devices. In this paper we describe and categorize a variety of activity recognition-based applications. Our hope is that this work will encourage the development of such applications and also influence the direction of activity recognition research.
Conference Paper
Full-text available
Mobile devices such as smart phones, tablet computers, and music players are ubiquitous. These devices typically contain many sensors, such as vision sensors (cameras), audio sensors (microphones), acceleration sensors (accelerometers) and location sensors (e.g., GPS), and also have some capability to send and receive data wirelessly. Sensor arrays on these mobile devices make innovative applications possible, especially when data mining is applied to the sensor data. But a key design decision is how best to distribute the responsibilities between the client (e.g., smartphone) and any servers. In this paper we investigate alternative architectures, ranging from a "dumb" client, where virtually all processing takes place on the server, to a "smart" client, where no server is needed. We describe the advantages and disadvantages of these alternative architectures and describe under what circumstances each is most appropriate. We use our own WISDM (WIreless Sensor Data Mining) architecture to provide concrete examples of the various alternatives.
Conference Paper
Full-text available
Mobile devices are becoming increasingly sophisticated and now incorporate many diverse and powerful sensors. The latest generation of smart phones is especially laden with sensors, including GPS sensors, vision sensors (cameras), audio sensors (microphones), light sensors, temperature sensors, direction sensors (compasses), and acceleration sensors. In this paper we describe and evaluate a system that uses phone-based acceleration sensors, called accelerometers, to identify and authenticate cell phone users. This form of behavioral biometric identification is possible because a person's movements form a unique signature and this is reflected in the accelerometer data that they generate. To implement our system we collected accelerometer data from thirty-six users as they performed normal daily activities such as walking, jogging, and climbing stairs, aggregated this time series data into examples, and then applied standard classification algorithms to the resulting data to generate predictive models. These models either predict the identity of the individual from the set of thirty-six users, a task we call user identification, or predict whether (or not) the user is a specific user, a task we call user authentication. This work is notable because it enables identification and authentication to occur unobtrusively, without the users taking any extra actions; all they need to do is carry their cell phones. There are many uses for this work. For example, in environments where sharing may take place, our work can be used to automatically customize a mobile device to a user. It can also be used to provide device security by enabling usage for only specific users and can provide an extra level of identity verification.
Article
Full-text available
Physical-activity recognition via wearable sensors can provide valuable information regarding an individual's degree of functional ability and lifestyle. In this paper, we present an accelerometer sensor-based approach for human-activity recognition. Our proposed recognition method uses a hierarchical scheme. At the lower level, the state to which an activity belongs, i.e., static, transition, or dynamic, is recognized by means of statistical signal features and artificial neural networks (ANNs). The upper-level recognition uses autoregressive (AR) modeling of the acceleration signals, incorporating the derived AR coefficients along with the signal-magnitude area and tilt angle to form an augmented feature vector. The resulting feature vector is further processed by linear-discriminant analysis and ANNs to recognize a particular human activity. Our proposed activity-recognition method recognizes three states and 15 activities with an average accuracy of 97.9% using only a single triaxial accelerometer attached to the subject's chest.
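A rough sketch of the augmented feature vector described above (AR coefficients plus signal-magnitude area and tilt angle) is given below; the model order, tilt definition, and AR estimation method are illustrative assumptions rather than the paper's exact choices.

    # Hedged sketch of an augmented feature vector: least-squares AR coefficients,
    # signal-magnitude area (SMA), and a tilt-angle estimate per window.
    import numpy as np

    def ar_coefficients(x, order=3):
        """Fit x[t] ~ sum_k a_k * x[t-k] by least squares and return the a_k."""
        X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
        y = x[order:]
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coeffs

    rng = np.random.default_rng(7)
    ax, ay, az = rng.normal(size=(3, 200))            # one window of tri-axial acceleration

    sma = (np.abs(ax).sum() + np.abs(ay).sum() + np.abs(az).sum()) / len(ax)
    tilt = np.degrees(np.arccos(az.mean() /
                                np.linalg.norm([ax.mean(), ay.mean(), az.mean()])))

    features = np.concatenate([ar_coefficients(ax), ar_coefficients(ay),
                               ar_coefficients(az), [sma, tilt]])
    print(features.shape)                             # (11,) per window for order-3 AR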
Conference Paper
Full-text available
Activity Recognition is an emerging field of research, born from the larger fields of ubiquitous computing, context-aware computing and multimedia. Recently, recognizing everyday life activities has become one of the challenges for pervasive computing. In our work, we developed a novel wearable system that is easy to use and comfortable to wear. Our wearable system is based on a new set of 20 computationally efficient features and the Random Forest classifier. We obtain very encouraging results, with human activity recognition classification accuracy of up to 94%.
Conference Paper
Full-text available
Accurate recognition and tracking of human activities is an important goal of ubiquitous computing. Recent advances in the development of multi-modal wearable sensors enable us to gather rich datasets of human activities. However, the problem of automatically identifying the most useful features for modeling such activities remains largely unsolved. In this paper we present a hybrid approach to recognizing activities, which combines boosting to discriminatively select useful features and learn an ensemble of static classifiers to recognize different activities, with hidden Markov models (HMMs) to capture the temporal regularities and smoothness of activities. We tested the activity recognition system using over 12 hours of wearable-sensor data collected by volunteers in natural unconstrained environments. The models succeeded in identifying a small set of maximally informative features, and were able to identify ten different human activities with an accuracy of 95%.
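The combination of static classifier outputs with an HMM for temporal smoothing can be sketched with a plain Viterbi decode over per-frame class posteriors, as below; the transition matrix and posteriors here are hand-set placeholders, not the learned quantities from the paper.

    # Minimal numpy Viterbi sketch of "static classifier + HMM smoothing":
    # per-frame class posteriors are decoded with a sticky transition matrix.
    import numpy as np

    rng = np.random.default_rng(8)
    n_frames, n_states = 50, 4
    posteriors = rng.dirichlet(np.ones(n_states), size=n_frames)   # per-frame P(state | features)

    stay = 0.9                                                     # favour staying in an activity
    trans = np.full((n_states, n_states), (1 - stay) / (n_states - 1))
    np.fill_diagonal(trans, stay)

    log_post, log_trans = np.log(posteriors), np.log(trans)
    delta = np.zeros((n_frames, n_states))
    back = np.zeros((n_frames, n_states), dtype=int)
    delta[0] = log_post[0] - np.log(n_states)                      # uniform prior
    for t in range(1, n_frames):
        scores = delta[t - 1][:, None] + log_trans                 # (from_state, to_state)
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_post[t]

    path = np.zeros(n_frames, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(n_frames - 2, -1, -1):
        path[t] = back[t + 1][path[t + 1]]
    print(path)                                                    # smoothed activity sequence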
Article
In this paper we describe a real-time system for detecting and recognizing lower body activities (walking, sitting, standing, running and lying down) using streaming data from tri-axial accelerometers. While there have been various attempts to solve this problem, what makes our system unique is that it uses a minimal set of sensors and works in real time. We have divided the system into three components: preprocessing, feature extraction and classification. This paper describes each component, and addresses the issue of locating the sensors on a human body. We also discuss different elementary signal processing techniques that we experimented with to extract salient features from the sensory stream, bearing in mind the computation costs of each method. We used the AdaBoost algorithm built on decision stumps for classification, and our system is able to recognize each activity (walking, sitting, standing, running, lying down) with 95% accuracy. Index Terms: Accelerometers, human activity recognition
Conference Paper
In this work, algorithms are developed and evaluated to detect physical activities from data acquired using five small biaxial accelerometers worn simultaneously on different parts of the body. Acceleration data was collected from 20 subjects without researcher supervision or observation. Subjects were asked to perform a sequence of everyday tasks but not told specifically where or how to do them. Mean, energy, frequency-domain entropy, and correlation of acceleration data was calculated and several classifiers using these features were tested. Decision tree classifiers showed the best performance recognizing everyday activities with an overall accuracy rate of 84%. The results show that although some activities are recognized well with subject-independent training data, others appear to require subject-specific training data. The results suggest that multiple accelerometers aid in recognition because conjunctions in acceleration feature values can effectively discriminate many activities. With just two biaxial accelerometers - thigh and wrist - the recognition performance dropped only slightly. This is the first work to investigate performance of recognition algorithms with multiple, wire-free accelerometers on 20 activities using datasets annotated by the subjects themselves.
Article
Vision-based human action recognition is the process of labeling image sequences with action labels. Robust solutions to this problem have applications in domains such as visual surveillance, video retrieval and human–computer interaction. The task is challenging due to variations in motion performance, recording settings and inter-personal differences. In this survey, we explicitly address these challenges. We provide a detailed overview of current advances in the field. Image representations and the subsequent classification process are discussed separately to focus on the novelties of recent research. Moreover, we discuss limitations of the state of the art and outline promising directions of research.