Conference Paper · PDF available

Performance Evaluation of Classifiers on WISDM Dataset for Human Activity Recognition

Authors:
  • Anuradha Engineering College
  • Director, IIIT Kottayam, Kerala, India (Institute of National Importance)

Abstract and Figures

Mobile phones are no longer a mere luxury; they have become a significant need in today's rapidly evolving, fast-paced world. In this paper, we evaluate the performance of various machine learning classifiers on the WISDM human activity recognition dataset, which is available in the public domain. We show that, with the smartphone kept in a pocket, activities of daily living can easily be recognized with the help of the built-in sensors. We further demonstrate that, with a suitable classifier, the recognition rate for most activities can be improved to more than 96%. Earlier experiments by other researchers used a Multilayer Perceptron (MLP) classifier and a Random Forest (RF) classifier, achieving 91.7% and 75.9% overall accuracy, respectively, on impersonal data. Our results are considerably better, with an overall accuracy of 98.09% using the Random Forest classifier. In addition, these activities are recognized quickly.
... Recall, also known as hit rate, true positive rate (TPR), or sensitivity, as in [37], is the ratio of correctly predicted samples to the total number of positive samples in the dataset, as illustrated in Eq. 8. Table 8 illustrates the obtained accuracy, precision, recall and F-measure of our proposed model compared with the state-of-the-art models [38][39][40][41][42][43][44][45] on the raw version of the WISDM dataset. The proposed model has the highest accuracy, 98.67%. ...
... Based on precision, the proposed model achieved the highest precision of 98.66%. In second place, the Random Forest classifier [43] has a precision of 98.1%, while in third place CNN + BLSTM [44] has a precision of 97.8%. Based on recall, the proposed model achieved the highest recall of 98.67%. ...
... Based on recall, the proposed model achieved the highest recall of 98.67%. In second place, the Random Forest classifier [43] has a recall of 98.1%, while in third place CNN + BLSTM [44] has a recall of 97.8%. Based on F-measure, the proposed model achieved the highest F-measure, 0.987. ...
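The excerpts above rely on the standard precision, recall, and F-measure definitions (Eq. 8 in the citing work). A minimal sketch of those formulas, with illustrative function name and counts that are not taken from any of the papers:

```python
# Hedged sketch: the standard metric definitions behind the excerpts,
# computed from raw confusion counts. Names and numbers are illustrative.

def precision_recall_f1(tp, fp, fn):
    """Return (precision, recall, F-measure) from confusion counts."""
    precision = tp / (tp + fp)          # fraction of positive predictions that are correct
    recall = tp / (tp + fn)             # TPR / sensitivity / hit rate
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
    return precision, recall, f1

# Example with made-up counts:
p, r, f = precision_recall_f1(tp=90, fp=10, fn=10)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.9 0.9 0.9
```

Accuracy, by contrast, also counts true negatives, which is why a model can rank differently on accuracy than on F-measure.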
Article
Full-text available
In recent years, the adoption of machine learning has grown steadily in different fields, affecting the day-to-day decisions of individuals. This paper presents an intelligent system for recognizing humans' daily activities in a complex IoT environment. An enhanced capsule neural network model called 1D-HARCapsNet is proposed. The model consists of a convolution layer, a primary capsule layer, an activity-capsules flat layer and an output layer. It is validated using the WISDM dataset, collected via smart devices and normalized using the random-SMOTE algorithm to handle the imbalanced behavior of the dataset. The experimental results indicate the potential and strengths of the proposed 1D-HARCapsNet, which achieved enhanced performance with an accuracy of 98.67%, precision of 98.66%, recall of 98.67%, and F1-measure of 0.987, a major improvement over the conventional CapsNet (accuracy 90.11%, precision 91.88%, recall 89.94%, and F1-measure 0.93).
... The personalized model reached an F-score of 95.95%, while the generalized models reached an F-score of 96.26%. Further on classifier performance, Walse, Dharaskar and Thakare [21] use the public-domain WISDM HAR dataset to evaluate various machine learning classifiers, showing the efficiency of smartphone-collected activity data in determining daily activities even with the device in the user's pocket. They observed that, with the correct classifier, recognition accuracy for most activities can be as high as 96%. ...
... Zheng [34] uses support vector machines combined with a two-layer sparse clustering model, on WISDM v1.1. Kishor Walse [30] uses decision trees. Cagatay Catal [6] combines different techniques: decision trees, logistic regression and multilayer perceptron. ...
Article
Full-text available
Pattern recognition for faces becomes an issue of extreme importance as the number and complexity of images increase. Over the last fifteen years, artificial neural networks have shown their effectiveness in addressing complex pattern problems while evolving toward hybrid and deep architectures. This paper presents the implementation of a deep neural network built as three parallel layers, with parallel feeding of patterns at three levels of granularity: fine, medium and coarse. This yields a robust analysis of the characteristic patterns in the images. The initial deep network was then modified and oriented toward the complexity of the images by integrating human activity recognition (HAR). The results are promising, achieving 99% recognition on the performed tests.
... This model was developed by combining a shallow RNN and an LSTM algorithm, and it achieved an overall accuracy of 95.78% on the WISDM dataset (Agarwal and Alam, 2020). In addition, previous studies such as Walse et al. (Walse et al., 2016) and Khin (Oo, 2019) have also used the WISDM accelerometer data to classify a maximum of 6 activities. Although the above models could generally recognize human activities, they were evaluated on their ability to recognize just six human activities and therefore do not provide generalization. ...
Preprint
In recent years, human activity recognition has garnered considerable attention in both industrial and academic research because of the wide deployment of sensors, such as accelerometers and gyroscopes, in products such as smartphones and smartwatches. Activity recognition is currently applied in various fields where valuable information about an individual's functional ability and lifestyle is needed. In this study, we used the popular WISDM dataset for activity recognition. Using multivariate analysis of covariance (MANCOVA), we established a statistically significant difference (p<0.05) between the data generated from the sensors embedded in smartphones and smartwatches. By doing this, we show that smartphones and smartwatches don't capture data in the same way, due to the location where they are worn. We deployed several neural network architectures to classify 15 different hand-oriented and non-hand-oriented activities. These models include Long Short-Term Memory (LSTM), Bi-directional Long Short-Term Memory (BiLSTM), Convolutional Neural Network (CNN), and Convolutional LSTM (ConvLSTM). The developed models performed best with watch accelerometer data. Also, the classification precision obtained with the convolutional input classifiers (CNN and ConvLSTM) was higher than with the end-to-end LSTM classifier in 12 of the 15 activities. Additionally, the CNN model for the watch accelerometer was better able to classify non-hand-oriented activities than hand-oriented activities.
Article
Efficiently identifying activities of daily living (ADL) provides very important contextual information that can improve the effectiveness of various sports-tracking and healthcare applications. Recently, attention mechanisms that selectively focus on time-series signals have been widely adopted in sensor-based human activity recognition (HAR), enhancing the target activity of interest and ignoring irrelevant background activity. Several attention mechanisms have been investigated and achieve remarkable performance in the HAR scenario. Despite their success, these prior attention methods ignore the cross-interaction between different dimensions. In this paper, to avoid that shortcoming, we present a triplet cross-dimension attention for the sensor-based activity recognition task, in which three attention branches capture the cross-interaction between the sensor, temporal and channel dimensions. The effectiveness of the triplet attention method is validated through extensive experiments on four public HAR datasets, namely UCI-HAR, PAMAP2, WISDM and UniMiB-SHAR, as well as a weakly labeled HAR dataset. Extensive experiments show consistent improvements in classification performance with various backbone models, such as plain CNN and ResNet, demonstrating the good generality of the triplet attention. Visualization analysis is provided to support our conclusion, and an actual implementation is evaluated on a Raspberry Pi platform.
Article
Full-text available
Sensor-based activity recognition (AR) depends on effective feature representation and classification. However, many recent studies focus on recognition methods but largely ignore feature representation. Benefiting from the success of Convolutional Neural Networks (CNN) in feature extraction, we propose to improve the feature representation of activities. Specifically, we use a reversed CNN to generate significant data based on the original features and combine the raw training data with the significant data to obtain enhanced training data. The proposed method can not only train better feature extractors but also help better understand the abstract features of sensor-based activity data. To demonstrate the effectiveness of our proposed method, we conduct comparative experiments with a CNN classifier and a CNN-LSTM classifier on five public datasets, namely UCI-HAR, UniMiB SHAR, OPPORTUNITY, WISDM, and PAMAP2. In addition, we evaluate our proposed method against traditional methods such as Decision Tree, Multi-layer Perceptron, Extremely Randomized Trees, Random Forest, and k-Nearest Neighbour on a specific dataset, WISDM. The results show that our proposed method consistently outperforms the state-of-the-art methods.
Article
Full-text available
Recent Activities of Daily Living (ADL) research tackles not only simple activities but also a wide range of complex activities. Even when the same activity is carried out under the same environmental conditions, the acceleration signal obtained from each subject differs considerably. This happens because the pattern of action generated by each subject is diverse, depending on aspects such as age, gender, emotion and personality. This project therefore compares the accuracy of various machine learning models for ADL classification. On top of that, this work also scrutinizes the effectiveness of various feature selection methods in identifying the most relevant attributes for ADL classification. As a result, Random Forest achieved the highest accuracy of 83.3% in the subject-independent setting for ADL classification. Meanwhile, the CFS Subset Evaluator is considered a good feature selector, as it successfully selected the 8 most relevant features, compared with the Correlation and Information Gain evaluators.
Conference Paper
Full-text available
Human activity recognition (AR) has begun to mature as a field, but for AR research to thrive, large, diverse, high-quality AR data sets must be publicly available, and AR methodology must be clearly documented and standardized. In the process of comparing our AR research to other efforts, however, we found that most AR data sets are sufficiently limited as to impact the reliability of existing research results, and that many AR research papers do not clearly document their experimental methodology and often make unrealistic assumptions. In this paper we outline problems and limitations with AR data sets and describe the methodology problems we noticed, in the hope that this will lead to the creation of improved and better-documented data sets and improved AR experimental methodology. Although we cover a broad array of methodological issues, our primary focus is on an often overlooked factor, model type, which determines how AR training and test data are partitioned, and how AR models are evaluated. Our prior research indicates that personal, hybrid, and impersonal/universal models yield dramatically different performance [30], yet many research studies do not highlight or even identify this factor. We make concrete recommendations to address these issues and also describe our own publicly available AR data sets.
Article
Full-text available
Activity recognition allows ubiquitous mobile devices like smartphones to be context-aware and also enables new applications, such as mobile health applications that track a user's activities over time. However, it is difficult for smartphone-based activity recognition models to perform well, since only a single body location is instrumented. Most research focuses on universal/impersonal activity recognition models, where the model is trained using data from a panel of representative users. In this paper we compare the performance of these impersonal models with those of personal models, which are trained using labeled data from the intended user, and hybrid models, which combine aspects of both types of models. Our analysis indicates that personal training data is required for high accuracy, but that only a very small amount of training data is necessary. This conclusion led us to implement a self-training capability into our Actitracker smartphone-based activity recognition system [1], and we believe personal models can benefit other activity recognition systems as well.
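The impersonal/personal/hybrid distinction above comes down to how training and test data are partitioned by subject. A minimal sketch of the two basic split strategies (subject IDs, data layout, and function names are illustrative assumptions, not from the paper):

```python
# Hedged sketch: impersonal (leave-subjects-out) vs. personal (within-subject)
# train/test partitioning for activity recognition. Data layout is illustrative.

def impersonal_split(samples, test_subjects):
    """Impersonal/universal model: test users contribute no training data."""
    train = [s for s in samples if s["subject"] not in test_subjects]
    test = [s for s in samples if s["subject"] in test_subjects]
    return train, test

def personal_split(samples, subject, n_train):
    """Personal model: train and test on the same (intended) user's data."""
    own = [s for s in samples if s["subject"] == subject]
    return own[:n_train], own[n_train:]

# Toy data: (subject, label) records standing in for feature windows.
data = [{"subject": u, "label": "walk"} for u in (1, 1, 2, 2, 3, 3)]

tr, te = impersonal_split(data, test_subjects={3})
print(len(tr), len(te))  # 4 2
```

A hybrid model would mix the two: train mostly on other users' data, plus a small labeled sample from the intended user, which is the regime the paper finds most practical.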
Conference Paper
Full-text available
Activity Recognition (AR), which identifies the activity that a user performs, is attracting a tremendous amount of attention, especially with the recent explosion of smart mobile devices. These ubiquitous mobile devices, most notably but not exclusively smartphones, provide the sensors, processing, and communication capabilities that enable the development of diverse and innovative activity recognition-based applications. However, although there has been a great deal of research into activity recognition, surprisingly little practical work has been done in the area of applications in mobile devices. In this paper we describe and categorize a variety of activity recognition-based applications. Our hope is that this work will encourage the development of such applications and also influence the direction of activity recognition research.
Conference Paper
Full-text available
Mobile devices such as smart phones, tablet computers, and music players are ubiquitous. These devices typically contain many sensors, such as vision sensors (cameras), audio sensors (microphones), acceleration sensors (accelerometers) and location sensors (e.g., GPS), and also have some capability to send and receive data wirelessly. Sensor arrays on these mobile devices make innovative applications possible, especially when data mining is applied to the sensor data. But a key design decision is how best to distribute the responsibilities between the client (e.g., smartphone) and any servers. In this paper we investigate alternative architectures, ranging from a "dumb" client, where virtually all processing takes place on the server, to a "smart" client, where no server is needed. We describe the advantages and disadvantages of these alternative architectures and describe under what circumstances each is most appropriate. We use our own WISDM (WIreless Sensor Data Mining) architecture to provide concrete examples of the various alternatives.
Article
Full-text available
Smart phones comprise a large and rapidly growing market. These devices provide unprecedented opportunities for sensor mining since they include a large variety of sensors, including an acceleration sensor (accelerometer), location sensor (GPS), direction sensor (compass), audio sensor (microphone), image sensor (camera), proximity sensor, light sensor, and temperature sensor. Combined with the ubiquity and portability of these devices, these sensors provide us with an unprecedented view into people's lives, and an excellent opportunity for data mining. But there are obstacles to sensor mining applications, due to the severe resource limitations (e.g., power, memory, bandwidth) faced by mobile devices. In this paper we discuss these limitations and their impact, and propose a solution based on our WISDM (WIreless Sensor Data Mining) smart phone-based sensor mining architecture.
Article
Full-text available
Mobile devices are becoming increasingly sophisticated and the latest generation of smart cell phones now incorporates many diverse and powerful sensors. These sensors include GPS sensors, vision sensors (i.e., cameras), audio sensors (i.e., microphones), light sensors, temperature sensors, direction sensors (i.e., magnetic compasses), and acceleration sensors (i.e., accelerometers). The availability of these sensors in mass-marketed communication devices creates exciting new opportunities for data mining and data mining applications. In this paper we describe and evaluate a system that uses phone-based accelerometers to perform activity recognition, a task which involves identifying the physical activity a user is performing. To implement our system we collected labeled accelerometer data from twenty-nine users as they performed daily activities such as walking, jogging, climbing stairs, sitting, and standing, and then aggregated this time series data into examples that summarize the user activity over 10-second intervals. We then used the resulting training data to induce a predictive model for activity recognition. This work is significant because the activity recognition model permits us to gain useful knowledge about the habits of millions of users passively, just by having them carry cell phones in their pockets. Our work has a wide range of applications, including automatic customization of the mobile device's behavior based upon a user's activity (e.g., sending calls directly to voicemail if a user is jogging) and generating a daily/weekly activity profile to determine if a user (perhaps an obese child) is performing a healthy amount of exercise.
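The 10-second aggregation described above can be sketched as a simple windowing-plus-features step. A hedged illustration, assuming a 20 Hz sampling rate (so 200 samples per window) and illustrative per-axis statistics; the actual feature set used in the cited work is richer:

```python
# Hedged sketch: aggregate a raw (x, y, z) accelerometer stream into
# fixed-length examples, e.g. 10-second windows at an assumed 20 Hz rate,
# each summarized by simple per-axis statistics.
from statistics import mean, pstdev

def windows(samples, size=200):
    """Yield non-overlapping windows of `size` consecutive (x, y, z) samples."""
    for i in range(0, len(samples) - size + 1, size):
        yield samples[i:i + size]

def features(window):
    """Per-axis mean and standard deviation as an example feature vector."""
    axes = list(zip(*window))  # [(all x...), (all y...), (all z...)]
    return [mean(a) for a in axes] + [pstdev(a) for a in axes]

# Toy stream: 400 constant readings -> two windows of six features each.
stream = [(0.0, 9.8, 0.0)] * 400
vectors = [features(w) for w in windows(stream)]
print(len(vectors), len(vectors[0]))  # 2 6
```

Each resulting vector, paired with the activity label in effect during that window, becomes one training example for a standard classifier.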
Mobile Sensor Data Mining
  • J W Lockhart
J. W. Lockhart, "Mobile Sensor Data Mining," Fordham Undergraduate Research Journal, vol. 1, pp. 67-68, 2011.
The Benefits of Personalized Data Mining Approaches to Human Activity Recognition with Smartphone Sensor Data
  • J W Lockhart
J. W. Lockhart, "The Benefits of Personalized Data Mining Approaches to Human Activity Recognition with Smartphone Sensor Data," Fordham University, New York, 2014.
Smartphone Sensor Data Mining for Gait Abnormality Detection
  • S Gallagher
S. Gallagher, "Smartphone Sensor Data Mining for Gait Abnormality Detection," Fordham University, New York, 2014.
Design considerations for the WISDM smart phone-based sensor mining architecture
  • J W Lockhart
  • G M Weiss
  • J C Xue
  • S T Gallagher
  • A B Grosner
J. W. Lockhart, G. M. Weiss, J. C. Xue, S. T. Gallagher, A. B. Grosner, and T. T. Pulickal, "Design considerations for the WISDM smart phone-based sensor mining architecture," in Proceedings of the Fifth International Workshop on Knowledge Discovery from Sensor Data (SensorKDD '11), 2011, pp. 25-33.