Figure - available from: Mathematics
Source publication
Comprehensively extracting spatio-temporal features is essential for research topic trend prediction, because research topics exhibit both temporal trend features and spatial correlation features. This study proposes a Temporal Graph Attention Network (T-GAT) to extract the spatio-temporal features of research topics...
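The abstract does not spell out the T-GAT architecture, but the spatial side of such models is typically a graph-attention layer that aggregates each topic's neighbours with learned attention weights. The following is a minimal NumPy sketch of a standard single-head GAT layer (LeakyReLU-scored attention, softmax over neighbours), not the paper's actual T-GAT; all names and shapes here are illustrative assumptions.

```python
import numpy as np

def gat_layer(h, adj, W, a, alpha=0.2):
    """One standard single-head graph-attention layer (illustrative sketch).

    h:   (N, F)  node features (e.g. per-topic feature vectors)
    adj: (N, N)  {0,1} adjacency matrix, self-loops included
    W:   (F, Fp) shared linear transform
    a:   (2*Fp,) attention scoring vector
    Returns (N, Fp) attention-aggregated node features.
    """
    z = h @ W                                      # transform: (N, Fp)
    N = z.shape[0]
    e = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            s = a @ np.concatenate([z[i], z[j]])   # e_ij = a^T [z_i || z_j]
            e[i, j] = s if s > 0 else alpha * s    # LeakyReLU
    e = np.where(adj > 0, e, -1e9)                 # mask non-neighbours
    e -= e.max(axis=1, keepdims=True)              # stable softmax over rows
    w = np.exp(e)
    w /= w.sum(axis=1, keepdims=True)
    return w @ z                                   # weighted neighbour aggregation
```

A temporal model would then run a sequence of such per-timestep embeddings through a recurrent or attention-based time component; that part is omitted here.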
Similar publications
Graph-level clustering is a fundamental and significant task in data mining. The advancement of graph neural networks has provided substantial impetus to this area of research. However, existing graph-level clustering methods often focus exclusively on either graph structure or node attributes, which limits their ability to comprehensively capture...
Citations
Human Activity Recognition (HAR) has recently attracted the attention of researchers. Growing interest in understanding human behavior and intention is rapidly intensifying HAR research. This paper proposes a novel Motion History Mapping (MHI) and Orientation-based Convolutional Neural Network (CNN) framework for action recognition and classification using Machine Learning. The proposed method extracts oriented rectangular patches over the entire human body to represent the human pose in an action sequence. This distribution is represented by a spatially oriented histogram. The frames were trained with a 3D Convolutional Neural Network model, thus saving time and increasing the Classification Correction Rate (CCR). The K-Nearest Neighbor (KNN) algorithm is used for the classification of human actions. The uniqueness of our model lies in the combination of the Motion History Mapping approach with an Orientation-based 3D CNN, thereby enhancing precision. The proposed method is demonstrated to be effective on four widely used and challenging datasets. A comparison of the proposed method's performance with current state-of-the-art methods finds that its Classification Correction Rate is higher than that of the existing methods. Our model's CCRs are 92.91%, 98.88%, 87.97% and 87.77% for the KTH, Weizmann, UT-Tower and YouTube datasets, respectively, which are remarkably higher than those of existing techniques. Thus, our model significantly outperforms the existing models in the literature.
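The pipeline above ends with a spatially oriented histogram per action sequence and a KNN vote over those features. A minimal sketch of those two steps, assuming Euclidean distance and majority voting (the abstract does not specify either), with all function names and parameters being illustrative:

```python
import numpy as np
from collections import Counter

def orientation_histogram(angles, bins=8):
    """Bin patch orientations (radians in [0, pi)) into a normalized
    histogram -- a stand-in for the paper's spatially oriented histogram."""
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, np.pi))
    total = hist.sum()
    return hist / total if total else hist.astype(float)

def knn_classify(query, train_feats, train_labels, k=3):
    """Majority-vote k-nearest-neighbour classification on histogram features."""
    dists = np.linalg.norm(train_feats - query, axis=1)   # Euclidean distances
    nearest = np.argsort(dists)[:k]                       # k closest examples
    return Counter(train_labels[i] for i in nearest).most_common(1)[0][0]
```

In the full method these histograms would be complemented by features from the orientation-based 3D CNN before the KNN stage; this sketch shows only the histogram-plus-KNN path.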