Journal of Ubiquitous Systems & Pervasive Networks
Volume 13, No. 1 (2020) pp. 01-09
* Corresponding author. Tel.: +201094586906
E-mail: habiba1611146@miuegypt.edu.eg
© 2020 International Association for Sharing Knowledge and Sustainability.
DOI: 10.5383/JUSPN.13.01.001
Usability Study of a comprehensive table tennis AR-based training
system with the focus on players' strokes
Ayman Nabil1, Habiba Hegazy1, *, Mohamed Abdelsalam1, Moustafa Hussien1, Seif
Elmosalamy1, Yomna M.I. Hassan1, Ayman Atia2,3
1 Faculty of Computer Science, Misr International University, Egypt
2HCI-LAB, Faculty of Computers and Artificial Intelligence, Helwan University, Egypt
3October University for Modern Sciences and Arts (MSA), Egypt
Abstract
Table tennis is based on the speed of the player's response to different attack and defense strokes. One way to enhance a player's performance and technique during training is to inform the player of mistakes in real time. This paper presents a system that detects correct and wrong strokes for three stroke types: forehand drive, backhand drive, and forehand topspin. Using Augmented Reality, the system lets players receive their results and guidance easily through an AR-based mobile application while practicing in real time. A usability study was conducted to measure the players' learning by letting them train on different strokes with the system. Moreover, an experiment was carried out to measure the efficiency of the application and to compare different algorithms, evaluating their performance in identifying strokes in terms of accuracy and time taken.
Keywords: Stroke Classification, Stroke Identification, Table Tennis, IR Depth Camera, Hand
Gestures, Augmented Reality
1. Introduction
With the presence of modern technologies such as smart bands, mobile phones, and advanced glasses, technology has become an integral part of everyday life. In sports in particular, technology is used to help increase players' performance and playing technique during training sessions. Sensors have been used to detect players' motions and strokes. Nowadays, virtualization technologies such as Virtual Reality and Augmented Reality headsets are integrated into the sports field to notify players and improve their training environment. In table tennis, technology has been investigated in the creation of electronic scoreboards, automated scoring systems, interactive ping pong tables, and high-speed cameras, and recently robots have been built for players to compete with. Furthermore, any technology that points out the player's correct and wrong movements to predict or measure the player's performance is very useful [1]. This makes human-computer interaction in table tennis an important research field for increasing player performance and speed. According to statistics from 2017, 16.04 million people in the United States played table tennis [2]. This paper focuses on three types of strokes: forehand topspin, forehand drive, and backhand drive.
One of the basic table tennis strokes is the forehand drive. It is an attacking stroke with a small amount of topspin, performed as a reply to medium or long topspin balls or drives. According to Ben Larcombe [3], a forehand drive should be performed in six steps: start with the racket around waist height, close the racket angle a little, lean backward from the waist by a small amount, as the ball arrives turn your racket and move it forward towards the table and up, contact the ball as soon as it comes next to your body, and keep the touch very smooth, just in the center of the racket. These steps are the same for either a forehand drive or a backhand drive. On the other hand, a common mistake is putting your arm across your body on the follow-through, which slows your recovery for the upcoming shot, as shown in fig. 1 (a).
The backhand drive is one of the most important strokes in table tennis. According to Killertips Network, L. Sharon, and W. Suwito [4], the backhand drive has four parts: the stance, the backswing, the strike, and the finish. The stance consists of positioning your feet, knees, body, and arms to prepare for the shot. The backswing is about keeping your hand in the right place and placing the racket in front of your belly. The strike is the part where the player hits the ball, as in a topspin, to create some momentum. Lastly, in the finish, the bat should end in front of the player's chin. Players usually make common mistakes while playing a backhand drive, such as playing the stroke as a short shot and standing directly behind the ball while it is coming, as shown in fig. 1 (b).
The forehand topspin is a full attacking stroke in which the player uses all their power to bring the ball to the maximum possible speed, either to make it harder for the opponent to block or to counter an opponent's spin. The technique for a correct forehand topspin is as follows [3, 4]: begin with your racket below waist height, close your racket slightly, bend your knees, and lean backward from your lower body. As the ball gets to you, push your legs up, twist forwards, and accelerate your racket upwards. A common mistake when performing a topspin is closing or opening the racket angle just before ball contact, which means the racket angle is inconsistent from one stroke to the next. This leads to inefficient play with poor stability in the player's hands, as shown in fig. 1 (c).
(a) The correct and wrong techniques while the player performs a forehand drive stroke.
(b) The correct and wrong techniques while the player performs a backhand drive stroke.
(c) The correct and wrong techniques while the player performs a forehand topspin stroke.
Fig. 1. Illustration of table tennis stroke techniques during the game.
This paper is an extension of our previous work [5, 6]. Its main contribution is the classification of the different stroke types (correct/wrong) using various algorithms, together with an AR notification system that raises table tennis players' skills to the next level by giving them real-time performance feedback and results. We also aim to measure the players' learning by conducting a usability study.
The remainder of the paper is organized as follows. First, the related work section presents research in different domains related to the techniques used in the proposed system. Second, the methodology section explains the proposed approach and the techniques used. Third, the experiments section shows the proposed system's contribution and results. Fourth, the discussion examines the basis of the introduced platform. Finally, the conclusion and future work summarize the results and outcomes of the research.
2. Related Work
This section is split into several parts, each indicating how the corresponding topic is used in our framework.
2.1. Systems of table tennis
In this sport, movements are categorized into primary and advanced stages. Drive, topspin, and push are the crucial basic strokes. Table tennis strokes can be determined and identified by capturing the motion of the hand. In [7], a hardware component was attached to the racket to track player movements, and some studies achieved 95.7% accuracy [8] using this methodology. Likewise, with wearable devices such as IMU systems and basic equations, [9] reached significant classification results for serves, forehand, and backhand strokes.
The classification of ball spins was addressed in other research, since different kinds of strokes lead to different spins and speeds of the ball. [10] offered a ball speed and spin estimation system based on a single IMU inserted within the racket, with which the researchers reached a precision of 79.4%. In a different study, the researchers built a robot that can compete with a human player [11]. Therefore, the detection and classification of strokes in table tennis can be treated as a hand-motion recognition problem.
2.2. Hand gestures
In human-computer interaction, the recognition of hand motions is considered a genuine research domain. It provides multiple techniques that come close to natural human behavior [12]. Gestures are a very significant and efficient input technique, especially in sport training applications, Augmented Reality, and online environments. Using a Kinect and the SVM algorithm, [13] offers a suitable real-time alternative to heavyweight HCI interfaces, reaching 95.42% accuracy. The authors in [14] developed a multi-core DTW solution that classifies a gesture in 0.28 seconds. The hand movements to be recognized first have to be recorded, and sensors play a significant role in detecting movements and actions of the body [15, 16]. The IR depth camera is one of these sensors.
2.3. Background of the IR depth camera sensor
The idea of smart technology involves applying knowledge across various technologies and areas, and sports is one of these areas. Consequently, people seek to efficiently utilize technology to make sporting environments more competitive. The identification and recognition of players' movements are primary goals of this development. Movement detection is generally achieved using a camera or wearable motion sensors, in particular an IR depth camera. IR depth cameras such as the Kinect are expected to play a vital role in the sports field, for example in folk dance recognition [17]. In [18], the authors used the SVM algorithm and an IR depth camera to identify human posture and reach high accuracy for various positions. The results indicate that the Kinect can identify various positions with high accuracy using a low-cost device. In addition, [19] pointed out how the Kinect SDK can identify joints correctly and proposed an AR judo training program. Thus, IR depth cameras, including the Kinect, play a leading part in the development of the sports sector.
None of the previous studies used an IR depth camera in table tennis that relies on player motions for optimal performance; the study with the highest reported accuracy, 96.29% [20], analyzed the player's performance when receiving balls using a standard camera rather than an IR depth camera. A large amount of noise is often produced by the IR depth camera, which affects data collection and consequently the system's classification [21]. Therefore, a filtering phase is an essential prerequisite when working with such sensors.
2.4. Pre-processing (Filters)
Tools such as IR depth cameras and other sensors capture data with substantial background noise [21]. There is a real need to remove this noise, since lighting conditions leave the data surrounded by noise. A joint bilateral filter was used on captured images to improve image quality before recognition [22]. [23] used the Gabor filter with the SVM algorithm to solve a hand motion detection task and remove lighting limitations; the Gabor filter also improved the recognition accuracy. Moreover, Nuttaitanakul and Leauhatong [24] used wavelet-transform-based filtering for human action characterization. On the other hand, the Kalman filter has been used to track body joints and remove the noise of unwanted vibrations, thus decreasing the variation in joint center positions [25]. Furthermore, the Kalman filter has been applied to reject noisy signals when estimating the position and orientation of a moving object in real-time applications using motion sensors, where noise affects signal reliability [26]. The remaining task is to identify the movements after recording table tennis gestures with the IR depth camera, and one of the most significant classifiers for real-time systems is the dynamic time warping algorithm.
2.5. Hand Gesture Algorithms
Using the signal obtained from the IR depth camera, we need to classify the movements and strokes made by the player. Numerous and varied algorithms have been used in hand gesture applications. The study in [27] presented an online Android application that evaluates strokes by mounting the smartphone on the player's wrist and using the decision tree algorithm, reaching 77.21% and 69.63% average accuracy. Chao Xu et al. [28] tested different algorithms on a finger-writing-with-smartwatch system and found that, among naive Bayes, logistic regression, and decision trees, logistic regression was the best classifier, with 99.20% and 97.10% accuracy for detecting finger and hand movements, respectively. The Random Forest algorithm has also been used for automatic hand motion recognition, where the authors reached an accuracy of 98% [29]. Convolutional neural network (CNN) algorithms have mainly been used in sports in the field of image processing: [30] presented a real-time table tennis forecasting system in which a CNN performs 3D pose estimation from an RGB camera. Moreover, the study in [31] proposes an effective online classification and recognition method with zero or negative lag for hand movements using CNNs.
In addition, Dynamic Time Warping (DTW) measures the similarity between two time series under particular constraints. Any information that can be converted into a linear sequence can be analyzed using DTW. DTW and an IR depth camera were used for human activity recognition because of their robustness to variations in pace or style while executing tasks [32, 33]. In another instance, a smart band containing an accelerometer and gyroscope was combined with the DTW algorithm [34]. The authors of [35] presented a system to classify tennis shot data on two levels of hierarchical classification based on DTW and QDTW. The authors of [36] used the Microsoft Kinect to recognize hand movements using the DTW and HMM algorithms and found that, given their requirements, DTW was a better option than HMM.
With the appearance of FastDTW, the speed and accuracy of such systems have changed [37], helping other applications reach 98% accuracy in real time. The authors in [38] introduce a methodology that optimizes athlete coaching by using an IR camera to recognize the athlete's incorrectly positioned joints and notify him/her before an injury occurs. Their outcomes indicate that FastDTW surpassed other methods and can achieve 98% accuracy in recognizing user-dependent gestures. The classified data is then conveyed to the user through an interface; one such interface is augmented reality.
2.6. The use of Augmented Reality in hand gestures
Augmented Reality can become an indispensable tool used by players to obtain constant information about each activity or movement they make. The use of Augmented Reality with self-directed sessions, sustained guidance, and feedback in training or learning sessions can improve playing style and performance [39]. The authors in [16] suggested an AR framework, primarily for sports, that provides guidance and feedback. The major difficulty in the area of hand gestures is achieving the best classification accuracy, given the variety of ways people perform gestures at different speeds.
Moreover, the authors of [40] proposed an improved 3D ping pong platform for two players over Wi-Fi, called ARPP, in which players control the rackets by moving their smartphones. In the volleyball field, a study evaluated and analyzed the knee load of volleyball players jumping in Augmented Reality (AR) while preserving the perceptual associations by replicating the visual characteristics of the volleyball court. The study shows that AR can be used to better replicate competitive conditions in a clinical assessment [41]. Besides, a research study examined the limitations and design prerequisites for creating an Augmented Reality headset for downhill winter athletes that can enhance spatial awareness and visual perception and lessen injury [42]. AR has also been used for navigation in many systems: the authors of [43] proposed a system, based on a usability study, for indoor library navigation, using AR technology to quickly guide users inside the library in the right direction.
3. Methodology
The system consists of several sections. Firstly, data collection using an IR depth camera, in which data is collected for different joints. Secondly, a pre-processing section consisting of stroke segmentation and data filtering. Thirdly, the feature extraction section, which identifies critical data features. Fourthly, the processing section, where the strokes are classified. Finally, the interfaces, which show how data is displayed on both mobile devices and in augmented reality, as shown in fig. 2.
Fig. 2. Block technical diagram and system architecture.
3.1. Data Acquisition
Data acquisition is performed using the IR depth camera SDK, which distinguishes different joints (elbow, shoulder, wrist, and waist) of the player's body, as shown in fig. 3. This approach to data gathering is not affected by environmental factors. The IR depth camera provides three-dimensional coordinates (X, Y, Z) for each joint. The obtained data is transferred to an in-room server connected to the IR depth camera.
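To make the shape of the captured data concrete, the following is a minimal Python sketch of one possible per-frame record; the field and joint names are illustrative assumptions, not the actual types returned by the IR depth camera SDK.

from dataclasses import dataclass
from typing import Dict, Tuple

# Hypothetical per-frame record; names are illustrative, not the SDK's types.
@dataclass
class JointFrame:
    timestamp_ms: int
    joints: Dict[str, Tuple[float, float, float]]  # joint name -> (x, y, z)

frame = JointFrame(
    timestamp_ms=1031,
    joints={
        "elbow":    (0.21, 1.05, 1.48),
        "shoulder": (0.18, 1.32, 1.52),
        "wrist":    (0.27, 0.98, 1.43),
        "waist":    (0.05, 0.90, 1.55),
    },
)
# A recorded stroke is then simply a list of such frames sent to the server.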
3.2. Pre-processing
This phase involves filtering, segmenting, and modifying
data collected by the sensor just before the phase of processing.
3.2.1. Stroke Segmentation
After data acquisition, each stroke must be segmented from the collected data stream. For table tennis strokes, the system measures the Euclidean distance between a main starting point and the subsequent points recorded for the player. The Euclidean distance, which measures the length of the segment connecting two points, is used to identify and detect a stroke within the sensor sequence.
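A minimal Python sketch of this segmentation idea is shown below; the distance thresholds and the use of a single joint's trajectory are illustrative assumptions rather than the system's actual parameters.

import numpy as np

def segment_strokes(positions, start_threshold=0.08, end_threshold=0.03):
    # positions: (N, 3) array of one joint's (x, y, z) samples over time.
    # A stroke is assumed to start when the Euclidean distance from the
    # initial (resting) point exceeds start_threshold, and to end when the
    # joint returns within end_threshold of that point.
    rest = positions[0]
    distances = np.linalg.norm(positions - rest, axis=1)
    segments, in_stroke, start = [], False, 0
    for i, d in enumerate(distances):
        if not in_stroke and d > start_threshold:
            in_stroke, start = True, i
        elif in_stroke and d < end_threshold:
            segments.append((start, i))
            in_stroke = False
    return segments  # list of (start_index, end_index) pairs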
Fig. 3. The framework tracks and records the player's joints and their movements related to table tennis strokes.
3.2.2. Data Filtering
The Kalman filter is used to remove noise from the IR depth camera data; it is a preferred choice for noise reduction and movement tracking. It improves the precision of the IR depth measurements and provides an appropriate pre-processing step, as described in [25, 26]. Due to the high-frequency variations in the joint locations and the skeleton extraction, two forms of noise affect the IR depth camera time series. Our collected data is therefore passed through the Kalman filter to enhance the position estimates of the IR depth camera and to remove noise caused by factors such as temperature change, gravity, and vibration.
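As a rough illustration of this smoothing step, the sketch below applies a constant-position Kalman filter to a single coordinate sequence; the process and measurement noise variances are placeholders, not the parameters used in our system.

import numpy as np

def kalman_smooth(measurements, process_var=1e-4, measurement_var=4e-3):
    # Smooth one coordinate sequence (e.g. the X values of a joint) with a
    # constant-position Kalman filter; the noise variances are placeholders.
    x_est, p_est = measurements[0], 1.0
    smoothed = []
    for z in measurements:
        p_pred = p_est + process_var             # predict: uncertainty grows
        k = p_pred / (p_pred + measurement_var)  # Kalman gain
        x_est = x_est + k * (z - x_est)          # update with the measurement
        p_est = (1.0 - k) * p_pred
        smoothed.append(x_est)
    return np.array(smoothed)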
3.2.3. Feature Extraction
Feature extraction is an important phase before classification; it provides more information about the data and the underlying mechanism, gives a better understanding of the data, and increases the performance and accuracy of learning algorithms [44]. In this stage, the initial raw data set is reduced to more manageable and relevant categories for the processing phase. Multiple feature extraction techniques are used to decrease the number of values to evaluate, without losing significant information, and to capture essential data characteristics. The features used include the mean, median, minimum, and maximum.
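A minimal sketch of this step is given below, assuming each segmented stroke is available as an (N, 3) array of one joint's coordinates; the exact feature layout is an illustrative assumption.

import numpy as np

def extract_features(stroke):
    # stroke: (N, 3) array of one joint's (x, y, z) samples for one stroke.
    # Returns the mean, median, minimum and maximum per axis as one vector.
    return np.concatenate([
        stroke.mean(axis=0),
        np.median(stroke, axis=0),
        stroke.min(axis=0),
        stroke.max(axis=0),
    ])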
3.3. Processing
After the pre-processing phase, the collected data is labeled using FastDTW, a time-series analysis algorithm used to calculate the correspondence between two sequences that differ in timing and speed. FastDTW provides near-optimal alignments compared with DTW, with O(N) time and memory complexity, whereas standard DTW requires O(N^2).
FastDTW was developed by Salvador and Chan [37] using a multi-level approach with three main operations. First, the time series is coarsened by reducing its data points. Second, a warp path is projected from a minimum-distance warp path found at the lower resolution of the stroke stream. Finally, the warp path is refined through local adjustments at the higher resolution of the stroke stream. In this way, FastDTW reduces the time complexity of DTW.
In our system, a cost matrix is first established between the player's stroke and every stroke in the dataset. FastDTW computes the initial values of this matrix between each point of the test stroke and each point of a dataset stroke and its neighbors. Second, FastDTW backtracks through the cost matrix with a greedy search to obtain the warp path between the two strokes, typically starting from the top-left element of the matrix. The algorithm thus obtains, for every stroke in the data set, an alignment distance to the test stroke. Finally, FastDTW searches for the minimum alignment distance, and the label of the test stroke is taken from the corresponding dataset stroke.
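The sketch below illustrates this nearest-template labelling with the open-source fastdtw package; the template list, its labels, and the use of a single joint sequence are simplifying assumptions for illustration.

import numpy as np
from scipy.spatial.distance import euclidean
from fastdtw import fastdtw

def classify_stroke(test_stroke, templates):
    # templates: list of (label, stroke) pairs, each stroke an (N, 3) array
    # of joint positions. The test stroke receives the label of the template
    # with the smallest FastDTW alignment distance.
    best_label, best_dist = None, np.inf
    for label, template in templates:
        dist, _path = fastdtw(test_stroke, template, dist=euclidean)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label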
To test the efficiency of the FastDTW algorithm, we compared it with logistic regression, random forest, and decision tree classifiers implemented in their default configurations. Moreover, we compared the FastDTW algorithm with a convolutional neural network implemented following the model presented by Devineau et al. [45]; this model uses a CNN to classify multivariate time-series data directly and does not depend on image processing.
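As an illustration of how the baseline classifiers can be run in their default configuration, the sketch below uses scikit-learn with randomly generated placeholder features standing in for the extracted stroke feature vectors; the data shapes mirror the experiment but the values are synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Placeholder random features standing in for the extracted stroke features
# (the real experiment uses 150 training and 50 test samples per class).
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(900, 12)), rng.integers(0, 6, 900)
X_test, y_test = rng.normal(size=(300, 12)), rng.integers(0, 6, 300)

baselines = {
    "logistic regression": LogisticRegression(),
    "random forest": RandomForestClassifier(),
    "decision tree": DecisionTreeClassifier(),
}
for name, clf in baselines.items():
    clf.fit(X_train, y_train)
    print(name, accuracy_score(y_test, clf.predict(X_test)))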
3.4. Augmented Reality Interface
Augmented Reality (AR) is a technique that overlays graphics, sound, video, and other sensor-based data on physical objects, utilizing a camera interface and computer-vision recognition algorithms. By using AR, the system can give real-time feedback to the player, instead of the older approach of recording the training session, feeding the recorded video into analysis software, and only then receiving a report of how many wrong and correct strokes were played.
The Augmented Reality interface was built to give the player specific notifications, such as whether the stroke is correct or wrong; if the stroke is wrong, the system indicates the joint where the mistake took place, as shown in fig. 4. In the experiments section, we test the usage of Augmented Reality and conduct a usability study on learning with the players who tried the system.
4. Experiments
In this section, we describe the required environment and the dataset we collected. We present the algorithm comparison for stroke classification in our system. We also present a usability experiment divided into two parts: the usage of AR and the learning progress of the players.
Fig. 4. AR display screen to notify players while training
4.1. Equipment Setup
Gesture recording was accomplished with the IR depth camera in the proposed platform. The IR depth camera was used to track the X, Y, and Z coordinates, with timestamps, of four skeleton joints. The camera is placed on the table, 78 cm above the ground, while the player stands 152 cm away, as shown in fig. 3.
4.2. Data Collection
Overall, around 1200 trials were collected for the three main strokes (forehand drive, backhand drive, and forehand topspin) from five professional players (three males and two females). Every player repeated each stroke 80 times, for a total of 240 trials per player. The trials are equally divided into correct and incorrect strokes.
4.3. Experiment (1) Stroke Classification Accuracy
In this experiment, we evaluated the FastDTW, CNN, Logistic Regression, Random Forest, and Decision Tree algorithms for detecting and classifying the three main table tennis strokes performed correctly and incorrectly. We used 150 samples for training and 50 samples for testing for each correct/wrong stroke class, giving a total of 900 samples for training and 300 for testing. We also measured the time taken by each algorithm to classify a single played stroke. The results of the algorithms are shown in table 1.
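For the timing figures, a simple sketch of how the per-stroke classification time can be measured is shown below; the classify callable stands for whichever classifier is being evaluated, and this is an illustration rather than the exact benchmarking code used.

import time

def average_classification_time(classify, test_strokes):
    # classify: a callable mapping one stroke to a predicted label.
    # Returns the mean wall-clock time in seconds per classified stroke.
    start = time.perf_counter()
    for stroke in test_strokes:
        classify(stroke)
    return (time.perf_counter() - start) / len(test_strokes)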
4.4. Experiment (2) Usability
The usability experiment was carried out with 50 participants in the age group of 13-28. 45% of the participants were female and 55% were male. Moreover, 60% of the players were international table tennis players and 40% were national table tennis players. Participants' preferences were gathered through a set of questions. Firstly, we asked them for a brief description of themselves and their expertise in table tennis. Secondly, they rated the system from its weakest points to its strongest, covering all the critical and major aspects of the system, such as performance, complexity, setup, distraction, comfort, and the whole experience with the IR depth camera and AR glasses.
4.4.1. Usage of Augmented Reality
Overall, users' perceptions of the usability and performance of the system were very positive. The usage of AR was reported as highly comfortable: 40 of the 50 players were very satisfied, as shown in fig. 5 (b). Moreover, since the AR device is a head-worn wearable, we also examined whether the AR screen display is distracting; 36 out of 50 players agreed that it was not, as shown in fig. 5 (a). Participants also indicated that they would not hesitate to wear the AR headset and that they did not experience any negative reactions while wearing it. Moreover, they found the navigation on the AR display very reasonable and satisfying.
Fig. 5. Players' opinions on the usage of Augmented Reality in the sports field
4.4.2. Learning Style
We constructed an experiment to show how well the system improves the players' playing style. In this experiment, we asked 20 beginner players to perform the three table tennis strokes (forehand topspin, forehand drive, and backhand drive) in each of three sessions. For each session, we recorded the average percentage of mistakes made by the players across all three strokes; the measure is the average rate of mistakes rather than the raw number of mistakes. Each player played each of the three strokes around 25 times, focusing on improving their style according to the system's instructions. The aim was to measure the player's improvement on each stroke.
As shown in fig. 6, the number of mistakes produced by the players decreased with each session. Moreover, we asked the users in a survey whether their performance improved, and 90% stated that the system did enhance their learning curve.
Fig. 6. Usability study on player’s learning performance
5. Discussion
The results shown in table 1 illustrate how successful the system was in detecting and classifying strokes. FastDTW and CNN showed the highest accuracy for each class and the highest overall average accuracy. Since our system is time-sensitive and must provide real-time feedback, we selected FastDTW. This approximation algorithm produces optimal or near-optimal alignments with O(N) time and memory complexity. CNNs, in contrast, depend on initial parameter tuning to avoid local optima and are mostly computationally expensive because they require a large training dataset. We also computed a confusion matrix for our chosen algorithm, FastDTW, to clarify our results, as shown in table 2; it was built on the same data used in table 1. Moreover, the players' mistakes decreased while using our system, thanks to the real-time feedback the system provides about the mistakes made in each stroke and the overall performance of the session.
6. Future work and conclusion
In brief, in this study we have proposed a framework to train table tennis players and increase their performance.
Table 1. Different classification algorithm comparison

                                FastDTW    Logistic Regression    Decision Tree
Correct backhand drive          94%        86%                    80%
Correct forehand drive          98%        86%                    78%
Correct forehand topspin        94%        84%                    76%
Wrong backhand drive            90%        80%                    72%
Wrong forehand drive            96%        86%                    78%
Wrong forehand topspin          92%        82%                    70%
Average Accuracy                94%        84%                    75.67%
Average time (single stroke)    0.98 sec   1.73 sec               0.99 sec
The system was built on the basic strokes of table tennis. Moreover, by using an IR depth camera, the system recognizes the mistakes the player makes in his/her playing technique. In addition, the AR component proved very useful and was neither distracting nor uncomfortable. Based on the experiments conducted with different classification algorithms and sensors on various players, we conclude that the system was able to increase the players' performance by 66%. Our future work targets extending the framework's dataset, normalizing it, and reaching a more advanced level in the use of AR.
References
[1] C. Andrews, “Sports tech: Table tennis technology,” May
2017. [On-line]. Available:
https://eandt.theiet.org/content/articles/2017/05/sports-
tech-table-tennis-technology/
[2] S. Lock, “Table tennis: number of participants U.S. 2017,”
Feb. 2017. [On-line]. Available:
https://www.statista.com/statistics/191959/participants-in-
table-tennis-in-the-us-since-2006/
[3] B. Larcombe, “The four basic table tennis strokes,” Sep. 2018.
[4] Killertips Network, L. Sharon, and W. Suwito. Table
Tennis Killer Tips: National Team Edition. Killer tips.
Killer tips Network, 2019.
[5] H. Hegazy, M. Abdelsalam, M. Hussien, S. Elmosalamy,
Y. M. Hassan, A. M. Nabil and A. Atia, “Online detection and classification of in-corrected played strokes in table tennis using IR depth camera,” Procedia Computer Science, vol. 170, pp. 555-562, 2020.
https://doi.org/10.1016/j.procs.2020.03.125
[6] Habiba Hegazy, Mohamed Abdelsalam, Moustafa
Hussien, Seif Elmosalamy, Yomna M.I. Hassan, Ayman
M. Nabil, Ayman Atia. “IPingPong: A Real-time
Performance Analyzer System for Table Tennis Stroke’s
Movements,” Procedia Computer Science, vol. 175, pp. 80-87, 2020. https://doi.org/10.1016/j.procs.2020.07.014
[7] E. Boyer, F. Bevilacqua, F. Phal, and S. Hanneton,
“Low-cost motion sensing of table tennis players for real
time feedback,” Int. J. Table Tennis Sci., vol. 8, 01 2013.
[8] P. Blank, J. Hossbach, D. Schuldhaus, and B. Eskofier,
“Sensor-based stroke detection and stroke type
classification in table tennis,” 09 2015, pp. 93–100.
https://doi.org/10.1145/2802083.2802087
[9] M. Kos, J. Ženko, D. Vlaj, and I. Kramberger, “Tennis
stroke detection and classification using miniature
wearable IMU device,” 05 2016.
https://doi.org/10.1109/IWSSIP.2016.7502764
[10] P. Blank, B. H. Groh, and B. M. Eskofier, “Ball
speed and spin estimation in table tennis using a
racket-mounted inertial sensor,” in Proceedings of the
2017 ACM International Symposium on Wearable
Computers. New York, NY, USA: Association for
Computing Machinery, 2017, p. 29. [Online].
Available: https://doi.org/10.1145/3123021.3123040
[11] J. Tebbe, L. Klamt, Y. Gao, and A. Zell, “Spin detection in
robotic table tennis,” 2019.
https://doi.org/10.1109/ICRA40945.2020.9196536
[12] M. Popa, “Hand gesture recognition based on
accelerometer sensors,”01 2011.
[13] R. Liu, Z. Wang, X. Shi, H.-Y. Zhao, S. Qiu, J. Li, and N.
Yang, “Table Tennis Stroke Recognition Based on Body Sensor Network,” 11 2019, pp. 1-10. https://doi.org/10.1007/978-3-030-34914-1_1
[14] A. Atia and N. Shorim, “Hand gestures classification
with multi-core dtw,” pp. 91–96, 08 2019.
[15] Y. Chen, B. Luo, Y.-L. Chen, G. Liang, and X. Wu, “A
real-time dynamic hand gesture recognition system using
Kinect sensor,” 2015 IEEE International Conference on
Robotics and Biomimetics (ROBIO), pp. 2026-2030,
2015. https://doi.org/10.1109/ROBIO.2015.7419071
[16] H.-S. Yeo, H. Koike, and A. Quigley, “Augmented
learning for sports using wearable head-worn and wrist-
worn devices,” 03 2019, pp. 1578–1580.
https://doi.org/10.1109/VR.2019.8798054
[17] E. Protopapadakis, A. Grammatikopoulou, A. Doulamis,
and G. Nikos, “Folk dance pattern recognition over depth
images acquired via Kinect sensor,” ISPRS - International
Archives of the Photogrammetry, Remote Sensing and
Spatial Information Sciences, vol. XLII-2/W3, pp. 587-593, 02 2017. https://doi.org/10.5194/isprs-archives-XLII-
2-W3-587-2017
Table 2. Confusion matrix on FastDTW algorithm

            CFD    CBD    CFTS   WFD    WBD    WFTS   PRECISION
CFD         47     0      0      4      0      0      92.157%
CBD         0      49     0      0      2      0      96.078%
CFTS        0      0      47     0      0      3      94%
WFD         3      0      0      45     0      0      93.75%
WBD         0      1      0      1      48     1      94.11%
WFTS        0      0      3      0      0      46     93.878%
Recall      94%    98%    94%    90%    96%    92%
[18] T. Le, M.-Q. Nguyen, and T.-M. Nguyen, “Human posture
recognition using human skeleton provided by Kinect,” 01
2013. https://doi.org/10.1109/ComManTel.2013.6482417
[19] C. Sielużycki, P. Kaczmarczyk, J. Sobecki, K.
Witkowski, J. Maslinski, and W. Cieslinski, “Microsoft
Kinect as a tool to support training in professional sports:
Augmented reality application to tachi-waza techniques in
judo,” 09 2016, pp. 153–158.
https://doi.org/10.1109/ENIC.2016.030
[20] S. Triamlumlerd, M. Pracha, P. Kongsuwan, and P.
Angsuchotmetee, “A table tennis performance analyzer via
a single-view low-quality camera,” 03 2017, pp. 1–4.
https://doi.org/10.1109/IEECON.2017.8075888
[21] A. Chatterjee and V. Govindu, “Noise in structured-light
stereo depth cameras: Modeling and its applications,” 05
2015.
[22] G. Li, H. Tang, Y. Sun, J. Kong, G. Jiang, D. Jiang, B.
Tao, S. Xu, and H. Liu, “Hand gesture recognition based
on convolution neural network,” Cluster Computing, 12
2017. https://doi.org/10.1007/s10586-017-1435-x
[23] D.-Y. Huang, W.-C. Hu, and S.-H. Chang, “Gabor filter-
based hand-pose angle estimation for hand gesture
recognition under varying illumination,” Expert Syst.
Appl., vol. 38, pp. 6031-6042, 05 2011.
https://doi.org/10.1016/j.eswa.2010.11.016
[24] N. Nuttaitanakul and T. Leauhatong, “A novel algorithm
for detection human falling from accelerometer signal
using wavelet transform and neural network,” 10 2015, pp.
215-220. https://doi.org/10.1109/ICITEED.2015.7408944
[25] P. Das, K. Chakravarty, A. Chowdhury, D. Chatterjee,
A. Sinha, and A. Pal, “Improving joint position
estimation of Kinect using anthropometric constraint
based adaptive Kalman filter for rehabilitation,”
Biomedical Physics and Engineering Express, vol. 4, 12
2017. https://doi.org/10.1088/2057-1976/aaa371
[26] C. Kownacki, “Optimization approach to adapt Kalman
filters for the real-time application of accelerometer and
gyroscope signals’ filtering,” Digital Signal Processing, vol. 21, pp. 131-140, 01 2011.
https://doi.org/10.1016/j.dsp.2010.09.001
[27] W. Viyanon, V. Kosasaeng, S. Chatchawal, and A.
Komonpetch, “Swingpong: analysis and suggestion based
on motion data from mobile sensors for table tennis
strokes using decision tree,” 12 2016, pp. 1–6.
https://doi.org/10.1145/3028842.3028860
[28] C. Xu, P. H. Pathak, and P. Mohapatra, “Finger-writing
with smartwatch: A case for finger and hand gesture
recognition using smartwatch,” in HotMobile ’15, 2015.
https://doi.org/10.1145/2699343.2699350
[29] S. Canavan, W. Keyes, R. Mccormick, J. Kunnumpurath,
T. Hoelzel, and L. Yin, “Hand gesture recognition using a skeleton-based feature representation with a random regression forest,” 09 2017, pp. 2364–2368.
https://doi.org/10.1109/ICIP.2017.8296705
[30] E. Wu and H. Koike, “Futurepong: Real-time table tennis
trajectory forecasting using pose prediction network,” in
Extended Abstracts of the 2020 CHI Conference on
Human Factors in Computing Systems, ser. CHI EA
’20. New York, NY, USA: Association for Computing Machinery, 2020, pp. 1–8. [Online]. Available:
https://doi.org/10.1145/3334480.3382853
[31] P. Molchanov, X. Yang, S. Gupta, K. Kim, S. Tyree,
and J. Kautz, “Online detection and classification of
dynamic hand gestures with recurrent 3d convolutional
neural networks,” 06 2016, pp. 4207–4215.
https://doi.org/10.1109/CVPR.2016.456
[32] S. Sempena, N. Maulidevi, and P. Aryan, “Human action
recognition using dynamic time warping,” 07 2011, pp. 1-5. https://doi.org/10.1109/ICEEI.2011.6021605
[33] I. Pernek, K. A. Hummel, and P. Kokol, “Exercise
repetition detection for resistance training based on
smartphones,” Personal and Ubiquitous Computing, vol.
17, pp. 771-782, 2012. https://doi.org/10.1007/s00779-
012-0626-y
[34] F. Normanton, I. Ardiyanto, and S. Wibirama, “Light
sport exercise detection based on smartwatch and
smartphone using k-nearest neighbour and dynamic time
warping algorithm,” 10 2016, pp. 1-5.
https://doi.org/10.1109/ICITEED.2016.7863299
[35] A. Switonski, H. Josinski, and K. Wojciechowski,
“Dynamic time warping in classification and selection of
motion capture data,” Multidimensional Systems and
Signal Processing, vol. 30, no. 3, pp. 1437-1468, Jul
2019. https://doi.org/10.1007/s11045-018-0611-3
[36] J. Raheja and A. Chaudhary, “Robust gesture recognition
using Kinect: A comparison between dtw and hmm,”
Optik - International Journal for Light and Electron
Optics, 01 2015.
https://doi.org/10.1016/j.ijleo.2015.02.043
[37] S. Salvador and P. Chan, “Toward accurate dynamic time
warping in linear time and space,” Intell. Data Anal., vol. 11, pp. 561-580, 10 2007. https://doi.org/10.3233/IDA-
2007-11508
[38] A. Yasser, D. Tariq, R. Samy, M. Allah, and A. Atia,
“Smart coaching: Enhancing weightlifting and preventing
injuries,” International Journal of Advanced Computer
Science and Applications, vol. 10, 01 2019.
https://doi.org/10.14569/IJACSA.2019.0100789
[39] S. Zollmann, T. Langlotz, M. Loos, W. H. Lo, and
L. Baker, “Arspectator: Exploring augmented reality
for sport events,” in SIGGRAPH Asia 2019 Technical
Briefs, ser. SA ’19. New York, NY, USA: Association
for Computing Machinery, 2019, pp. 75-78.
https://doi.org/10.1145/3355088.3365162
[40] X. Gao, J. Tian, X. Liang, and G. Wang, “Arpp: An
augmented reality 3D ping-pong game system on Android mobile platform,” 05 2014, pp. 1–6.
https://doi.org/10.1109/WOCC.2014.6839917
[41] K. Adams, A. Kiefer, D. Panchuk, A. Hunter, R.
MacPherson, and W. Spratford, “From the field of play to
the laboratory: Recreating the demands of competition
with augmented reality simulated sport,” Journal of Sports
Sciences, vol. 38, 12 2019.
https://doi.org/10.1080/02640414.2019.1706872
[42] D. O’Neill, “Enhancing winter sport activities: Improving
the visual perception and spatial awareness of downhill
winter athletes with augmented reality headset displays,”
2019.
[43] N. Romli, A. Razali, N. H. Ghazali, N. Hanin, and S.
Ibrahim, “Mobile augmented reality (AR) marker-based
for indoor library navigation,” IOP Conference Series: Materials Science and Engineering, vol. 767, p. 012062,
03 2020. https://doi.org/10.1088/1757-899X/767/1/012062
[44] S. Khalid, T. Khalil, and S. Nasreen, “A survey of
feature selection and feature extraction techniques in
machine learning,” Proceedings of 2014 Science and Information Conference, SAI 2014, pp. 372-378, 10 2014.
https://doi.org/10.1109/SAI.2014.6918213
[45] G. Devineau, W. Xi, F. Moutarde, and J. Yang, “Convolutional Neural Networks for Multivariate Time Series Classification using both Inter- and Intra-Channel Parallel Convolutions,” 2019.
... e stepstep method is mostly used in the fast-break game near the table, counterattacking the ball from a large angle in the forehand position; when the subject moves to the forehand position with the forehand stride, in order to meet the near billiards, the body's center of gravity moves toward the near table; at this time, the force on the forefoot area is increased relative to the rear area, so the force on the forefoot area is the most obvious relative to each area during the forehand step; the same is true for the force on the sole of the backhand stride [17]. Comparing the mobile auxiliary foot with the batting force foot, there was no significant difference in the force in the midfoot area between the two, and the force in the forefoot area was greater when hitting the ball; the force of the auxiliary foot in the rear area of the foot is greater than that of the ball-hitting force [18,19]. Because the batting force foot also undertakes the task of actively pushing and extending the force to participate in the batting action and returning to the original position, at this time, the forefoot area, which is the main force-bearing area of the kicking and stretching action, bears more pressure. ...
Article
Full-text available
In order to solve the problem of athletes with lower limb movement injuries in table tennis footwork, with the increasing risk of sports, the incidence of acute injury also increased. The increase in athletic training, on the one hand, improves the skill level of the athlete; on the other hand, it increases the chances of more chronic injuries. Through the investigation of the injury site of 143 table tennis players, the researchers found that the lower limb injury accounted for 32% of the total injuries and the upper limb injury rate accounted for 33.14% of the trunk, accounting for 34.29%. The injury sites are mostly concentrated in the lumbar region, followed by the shoulders and knees. Through epidemiological research on the injuries of outstanding table tennis players, the survey results show that the probability of lower extremity injuries ranks in the top three, and most of them are acute sprains and chronic strain injuries. By applying the principles of sports biomechanics, a biomechanical analysis of the asynchronous foot movement in table tennis is proposed; from three aspects of kinematics, dynamics, and plantar pressure, it is found that the injury of table tennis is closely related to technical play connect. The risk of sports injury is inevitable in a sense, due to the long-term local overload, which causes the strain of the sports system. In order to avoid or reduce the occurrence of such sports injuries, coaches should standardize the technical movements of the players and arrange the exercise load reasonably according to the characteristics of the sports.
... e height of the world's top tennis players (men) is about 184 cm, the weight is about 79 kg, and the BMI is about 23.5 kg/m 2 ; there is a certain gap between the height and weight of tennis players with low sports level. e V.O2max of tennis players is 46 to 72 mL/ min/kg (male) and 42 to 52 mL/min/kg (female), which is lower than that of elite endurance athletes, but similar to those of other racket-holding athletes [21]. At low levels, athlete V.O2max appeared to be related to exercise level, but at high levels, V.O2max was not related to exercise level. ...
Article
Full-text available
In order to improve the effect of certain theoretical basis for tennis coaches to correct technical movements and teaching training, a method oriented towards discrete gradient methods that can be used for computational solid mechanics is presented. The biggest feature of this method is that it can directly perform numerical simulation analysis on any point cloud model, without relying on any structured or unstructured grid model. Experimental results show that about 80% of the shots in the game are within 2.5 m of the athlete’s moving distance, and the athlete needs to have 300 to 500 high-intensity exercises; the total running distance of the competition is 1 100–3 600 m. The average VO2 of athletes during the competition is 20–30 mL/min/kg (45%–55% V.O2max), the average heart rate is 135–155 beats/min (70%–85% HRmax), the mean blood lactate was <4 mmol/L, the subjective fatigue was 12–14 (moderate intensity), and the mean metabolic equivalent was 5–7 METs. It is proved that the discrete gradient method can effectively solve the biomechanical analysis problem of tennis forehand hitting the ball. Make up for the lack of action details in the differentiation stage. Improve the effect of certain theoretical basis for tennis coaches to correct technical movements and teaching training.
... The literature [13] combined the eigenfunctions of the Laplace Beltrami operator with homology consistency theory to generate a hierarchical segmentation method for the model. The literature [14] first calculates the global signature of each point on the model, then maps the model into its eigenspace, and finally uses a clustering algorithm in that space to achieve the segmentation of the model. All the above algorithms use the eigenfunctions of the Laplace Beltrami operator for model segmentation; however, the eigenfunctions are prone to problems such as significant change or eigenvector switching, especially when the differences between the corresponding eigenvalues are small [15]. ...
Article
Full-text available
The research in this paper mainly includes as follows: for the principle of action recognition based on the 3D diffusion model convolutional neural network, the whole detection process is carried out from fine to coarse using a bottom-up approach; for the human skeleton detection accuracy, a multibranch multistage cascaded CNN structure is proposed, and this network structure enables the model to learn the relationship between the joints of the human body from the original image and effectively predict the occluded parts, allowing simultaneous prediction of skeleton point positions and skeleton point association information on the one hand, and refinement of the detection results in an iterative manner on the other. For the combination problem of discrete skeleton points, it is proposed to take the limb parts formed between skeleton points as information carriers, construct the skeleton point association information model using vector field, and consider it as a feature, to obtain the relationship between different skeleton points by using the detection method. It is pointed out that the reorganization problem of discrete skeleton points in multiperson scenes is an NP-Hard problem, which can be simplified by decomposing it into a set of subproblems of bipartite graph matching, thus proposing a matching algorithm for discrete skeleton points and optimizing it for the skeleton dislocation and algorithm problems of human occlusion. Compared with traditional two-dimensional images, audio, video, and other multimedia data, the 3D diffusion model data describe the 3D geometric morphological information of the target scene and are not affected by lighting changes, rotation, and scale transformation of the target and thus can describe the realistic scene more comprehensively and realistically. With the continuous updating of diffusion model acquisition equipment, the rapid development of 3D reconstruction technology, and the continuous enhancement of computing power, the research on the application of 3D diffusion model in the detection and extraction of a human skeleton in sports dance videos has become a hot direction in the field of computer vision and computer graphics. Among them, the feature detection description and model alignment of 3D nonrigid models are a fundamental problem with very important research value and significance and challenging at the same time, which has received wide attention from the academic community. 1. Introduction With the rapid development of 3D sensors, such as structured light coding and LiDAR, the acquisition of 3D diffusion model data has become increasingly convenient and fast in recent years. Diffusion model data is mathematically abstractly described as a collection of three-dimensional coordinates of points, which is essentially a discrete sampling of geometric information of the external world in a specific coordinate system. Compared with traditional 2D images, 3D diffusion model data have the following significant advantages. 1.1. Describe the 3D Geometric Morphological Information of the Target Traditional 2D images describe the appearance of the external scene, losing 3D spatial information. Diffusion model data describe the 3D geometry of the target surface and thus can more directly inform computer vision tasks such as feature extraction and matching. 1.2. Unaffected by Changes in External Light Most of the common 3D imaging sensors use active imaging, such as structured light sensors and LiDAR. 
Therefore, the change of light in the external world does not affect the acquisition of diffusion model data. 1.3. Less Influenced by Imaging Distance The traditional 2D image imaging process is susceptible to changes in imaging distance, resulting in changes in the scale of the imaged target. The diffusion model data is a discrete sampling of the 3D geometry of the target surface in the external scene, and the imaging distance does not change the scale of the imaged target, but only affects the accuracy and resolution of the acquired data, and thus is more suitable for computer vision tasks. In recent years, along with the rapid development of 3D reconstruction technology, it has become increasingly convenient to obtain 3D models through 3D data [1]. Since there are many nonrigid objects in the real world, the study of 3D nonrigid models is receiving widespread attention and has become a research hotspot in the fields of computer vision and computer graphics. The study of human skeleton detection in sports dance video images has been a very popular research direction in image processing and computer vision [2]. The human skeleton information can greatly help people analyze the behavior of the target human body in pictures or videos and lay the foundation for further processing of images and videos [3]. The human skeleton detection algorithm divides the human skeleton into multiple joints, such as head, shoulder, and wrist and then analyzes the position, direction, and movement of each joint to obtain the human skeleton information. The human skeleton is drawn to further analyze the posture and behavior of the human body to obtain the activity and motion information of the human body in the image [4]. Applications related to human posture estimation are based on the premise of obtaining a clear and accurate human skeleton in the image, and inaccurate skeleton extraction will lead to incorrect analysis of human behavior and movements, with incalculable consequences [5]. For example, in the field of sports dance, inaccurate skeleton extraction may lead to incorrect analysis of action, which may even endanger the lives of athletes or performers in serious cases. Therefore, it is of great importance to improve the accuracy of human skeleton detection. In recent years, the rapid development of the hardware field makes the computer’s computing power increase, increasingly excellent human skeleton detection algorithms emerge, and the human skeleton detection accuracy is continuously improved. As the basis of human pose recognition, human skeleton detection technology will play an increasingly important role in increased fields. 2. Related Work Since the 1970s, the study of geometric morphological information of target 3D diffusion models has been receiving attention, and a series of results have been achieved in the 1980s and 1990s. The detection of saliency regions for 3D diffusion geometry models is a complex problem, especially for 3D diffusion models with isometric transformations [6]. In recent years, the problem has been intensively investigated in the fields of computer vision and computer graphics. The literature [7] first started to address the problem of 3D deformable model region detection by describing it abstractly as finding the most stable component on the model. 
To have invariance to isometric transformations, the method uses diffusion geometry to derive weighting functions and proposes two representations of mesh surfaces, namely, mesh vertex-based and edge-weighted graph structures, respectively [8]. Experimental results are realized that the edge-weighted graph structure-based representation is more general than the vertex-weighted graph and exhibits superior performance [9]. The algorithmic framework has been extended to handle shapes with volumes. Inspired by cognitive theory, the literature [10] considers saliency regions as “key components” on the model and considers that they contain rich and distinguishable local features. According to this theory, saliency regions correspond to parts of the model with high protrusions and can be detected by a clustering process in geodesic space. However, this method is an incomplete decomposition of the model and many regions of saliency are not detected [11]. The method based on diffusion geometry has achieved remarkable success in the analysis of 3D nonrigid models due to the reflection of the model’s intrinsic properties [12]. The literature [13] combined the eigenfunctions of the Laplace Beltrami operator with homology consistency theory to generate a hierarchical segmentation method for the model. The literature [14] first calculates the global signature of each point on the model, then maps the model into its eigenspace, and finally uses a clustering algorithm in that space to achieve the segmentation of the model. All the above algorithms use the eigenfunctions of the Laplace Beltrami operator for model segmentation; however, the eigenfunctions are prone to problems such as significant change or eigenvector switching, especially when the differences between the corresponding eigenvalues are small [15]. The literature [16] introduces the idea of consensus clustering into this domain to achieve stable segmentation. First, multiple clusters are computed in the global point signature space to generate a heterogeneous set of model partitions. The literature [17] argues that a stable model segmentation can be obtained by extracting statistical information from these segmentations. This method has the best current results in the case of model data receiving various disturbances. Human skeleton detection in images can be divided into two directions: 2D human skeleton detection and 3D human skeleton detection. 3D human skeleton detection is the process of obtaining the 3D shape or coordinates of human skeleton points by analyzing the images obtained from 3D cameras such as Kinect. 3. Application of THE Three-Dimensional Diffusion Model in the Detection and Extraction of a Human Skeleton in Sports Dance Videos 3.1. Principle of Action Recognition Based on THE 3D Diffusion Model Convolutional Neural Network The 3D diffusion model neural network is one of the first deep learning methods to achieve great success in the fields of image analysis, target detection, and so on. It applies trainable filters (trained by backpropagation algorithm), local domain pooling operations (to prevent overfitting), etc. in the original input to extract gradually complex and highly abstract input features, and the network model can achieve very good discriminative effects through long training with a large amount of data [18]. And it also has lighting, background, pose extraction invariance, and other characteristics, which are very popular. 
As an exemplary end-to-end network model, a convolutional neural network operates directly on the original input, making traditional manual feature extraction largely obsolete. However, such convolutional neural networks are currently still used mainly in fields such as 2D image recognition; Figure 1 illustrates the traditional 2D convolution process. To exploit their power further, some groups have extended them to the 3D domain, generating a new 3D diffusion model and applying it to human action recognition with very good results. Its main feature is that it extracts features not only in space but also along the temporal dimension, using 3D convolution to capture human motion information across consecutive frames.
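To make the spatiotemporal idea concrete, the following is a minimal sketch (not the cited work's architecture) of how a 3D convolution consumes a short clip of consecutive frames; the clip length, channel counts, and kernel size are assumptions chosen for illustration.

import torch
import torch.nn as nn

# A toy clip: batch of 1, 3 color channels, 16 consecutive frames of 112x112 pixels.
clip = torch.randn(1, 3, 16, 112, 112)

# One 3D convolutional layer: the 3x3x3 kernel slides over time as well as space,
# so each output feature mixes information from neighboring frames.
conv3d = nn.Conv3d(in_channels=3, out_channels=32, kernel_size=3, padding=1)
pool3d = nn.MaxPool3d(kernel_size=2)          # downsample time and space together

features = pool3d(torch.relu(conv3d(clip)))
print(features.shape)                          # torch.Size([1, 32, 8, 56, 56])

The point of the sketch is only that the kernel's third dimension spans neighboring frames, which is how motion information enters the learned features.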
... After the self-test, the output value will change compared to the value without the self-test state. When the self-test function is activated, the sensor generates an output signal for observing the self-test condition, and the self-test response value is equal to the difference between the [19][20][21]. The commonly used frequency domain features are the FFT transform, wavelet transform, and discrete cosine transform. In the complex motion data of the human body, it is necessary to analyze the features of each action in order to train and recognize each action. ...
Article
In this paper, through an in-depth study and analysis of dance motion capture algorithms in wearable sensor networks, the extended Kalman filter algorithm and the quaternion method are selected after analyzing a variety of commonly used data fusion and pose solving algorithms. A sensor-body coordinate system calibration algorithm based on hand-eye calibration is proposed, which requires only three calibration poses to complete the calibration of the whole-body sensor-body coordinate system. A joint parameter estimation algorithm based on human joint constraints and a limb length estimation algorithm based on closed joint chains are also proposed. The estimation algorithm is an iterative optimization that divides each iteration into an expectation step and a maximization step, so that the best convergence value can be found efficiently across iterations. The feature values of each pose action are fed into the algorithm for model learning, which enables the training of the model. The trained model is then tested by combining the collected gesture data with the algorithmic model to recognize and classify the gesture data, observe its recognition accuracy, and continuously optimize the model to achieve accurate recognition of human gesture actions.
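As a rough illustration of the quaternion method mentioned above, the sketch below integrates gyroscope angular velocity into an orientation quaternion. It is a minimal example under assumed sensor values and sampling rate, not the cited paper's full EKF fusion pipeline.

import numpy as np

def quat_multiply(q, r):
    # Hamilton product of two quaternions stored as (w, x, y, z).
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(q, omega, dt):
    # First-order quaternion integration: q_dot = 0.5 * q * (0, omega).
    q_dot = 0.5 * quat_multiply(q, np.array([0.0, *omega]))
    q = q + q_dot * dt
    return q / np.linalg.norm(q)               # re-normalize to unit length

q = np.array([1.0, 0.0, 0.0, 0.0])              # initial orientation (identity)
omega = np.array([0.0, 0.1, 0.0])               # rad/s, assumed gyroscope reading
for _ in range(100):                            # 100 samples at an assumed 100 Hz
    q = integrate_gyro(q, omega, dt=0.01)
print(q)

In a full system this prediction step would be corrected by accelerometer and magnetometer measurements inside the extended Kalman filter.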
... Several previous studies have attempted to examine the problem of service accuracy among table tennis players (Nabil, et al., 2020). There are many factors that affect service accuracy; wrist flexibility is one factor that can increase service accuracy when playing table tennis. ...
Article
The purpose of this study was to analyze the relationship between concentration and hand-eye coordination and the accuracy of the backhand backspin service. A quantitative approach and correlational methods are used in this study. The research was conducted at PTM Gempas. The population consisted of all cadet athletes at PTM Gempas, and the sampling technique used was saturated sampling, meaning that all 20 athletes were used as the sample. Player concentration data was collected using a concentration grid test, hand-eye coordination data was obtained with a tennis ball throwing test, and backhand backspin service accuracy data was obtained with a service test. The results of this study are as follows: 1) concentration has a reasonably strong relationship with the accuracy of the backhand backspin service; 2) hand-eye coordination has a reasonably strong relationship with backhand backspin service accuracy; 3) concentration and hand-eye coordination together have a reasonably strong relationship with backhand backspin service accuracy. Table tennis coaches should train concentration and hand-eye coordination to improve athletes' service accuracy.
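For readers unfamiliar with the correlational method used in such studies, the analysis step can be sketched as below; the variable names and toy numbers are assumptions for illustration, not the study's data.

from scipy.stats import pearsonr

# Hypothetical scores for a handful of athletes (not the study's data).
concentration  = [12, 15, 9, 18, 14, 11, 16, 13]
coordination   = [10, 14, 8, 17, 13, 9, 15, 12]
serve_accuracy = [20, 26, 15, 30, 24, 18, 27, 22]

r_conc, p_conc = pearsonr(concentration, serve_accuracy)
r_coord, p_coord = pearsonr(coordination, serve_accuracy)
print(f"concentration vs accuracy: r={r_conc:.2f}, p={p_conc:.3f}")
print(f"coordination vs accuracy: r={r_coord:.2f}, p={p_coord:.3f}")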
Article
Assisting table tennis coaching using modern technologies is one of the most trending research areas in the sports field. In this paper, we present a methodology to identify and recognize the wrong strokes executed by players in order to improve the training experience using an IR depth camera. The proposed system focuses mainly on the errors in table tennis players' strokes and evaluates them efficiently, based on the analysis and classification of the data obtained from the IR depth camera using multiple algorithms. This paper is a continuation of our previous work [10], focusing more on identifying common wrong strokes in table tennis by combining an IR depth camera with classification algorithms. The mistakes made while playing can be classified either dependently for each player or independently across all players.
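The abstract does not give implementation details, but a typical preprocessing step before classifying depth-camera stroke data is to resample each recorded joint trajectory to a fixed length and normalize it. The sketch below is an assumed illustration of that step, not the authors' code; the joint choice, frame counts, and function name are hypothetical.

import numpy as np

def preprocess_stroke(trajectory, target_len=60):
    """Resample a (frames x 3) wrist trajectory from the depth camera to a
    fixed number of frames and z-score normalize each coordinate."""
    trajectory = np.asarray(trajectory, dtype=float)
    old_t = np.linspace(0.0, 1.0, len(trajectory))
    new_t = np.linspace(0.0, 1.0, target_len)
    resampled = np.column_stack(
        [np.interp(new_t, old_t, trajectory[:, k]) for k in range(3)]
    )
    return (resampled - resampled.mean(axis=0)) / (resampled.std(axis=0) + 1e-8)

# Example: a raw stroke recorded as 47 frames of (x, y, z) wrist positions.
raw_stroke = np.random.rand(47, 3)
features = preprocess_stroke(raw_stroke)
print(features.shape)   # (60, 3)

Fixing the length and scale in this way makes strokes of different speeds and player heights easier to compare in a later classification stage.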
Article
Table tennis is a complex sport with a distinctive style of play. Due to the rising interest in this sport in past years, attempts have targeted enhancing the training experience and quality through various techniques. Technology has been used to support training sessions for table tennis players before, with a focus on players' performance measures rather than technique. In this paper, we propose a methodology based on an IR depth camera for detecting and classifying the efficiency of strokes performed by players in order to enhance the training experience. Our system is based on analyzing depth data collected from the IR depth camera, with strokes recognized using the fastDTW algorithm. The results show an average accuracy of 88%-100%. This is the first paper to address the use of an IR depth camera to detect and classify the strokes played by table tennis players.
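The fastDTW matching step described here can be illustrated with the open-source fastdtw package (assumed for this sketch; the original implementation may differ): each new stroke is compared against labeled template strokes and assigned the label of the closest one.

import numpy as np
from scipy.spatial.distance import euclidean
from fastdtw import fastdtw

def classify_stroke(query, templates):
    """templates: list of (label, trajectory) pairs; returns the label whose
    template has the smallest fastDTW distance to the query stroke."""
    best_label, best_dist = None, float("inf")
    for label, template in templates:
        dist, _path = fastdtw(query, template, dist=euclidean)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label, best_dist

templates = [
    ("forehand drive", np.random.rand(60, 3)),   # placeholder trajectories
    ("backhand drive", np.random.rand(60, 3)),
    ("forehand topspin", np.random.rand(60, 3)),
]
label, dist = classify_stroke(np.random.rand(55, 3), templates)
print(label, round(dist, 2))

Because DTW aligns sequences of different lengths, a stroke executed slightly faster or slower than its template can still be matched to the correct class.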
Article
This paper presents the development of Augmented Reality (AR) for smart campus urbanization, using a library as the environment for demonstrating the AR prototype. The main goal of the AR development is to help users get information and directions easily through an AR-based mobile application while walking inside the library. In normal circumstances, users typically walk around and explore the library area before reaching their targeted destinations; depending on the library size and number of reading corners, such exploration can be time consuming. Therefore, an AR technology is introduced in this paper to improve the user experience inside the library by providing the right direction and information instantly. The application is developed using the Vuforia software to set up image marker-based tracking and process the output in Unity3D, Android Studio for the main menu interface, and IBM Watson for voice recognition. The final application was successfully generated from this development process. A series of application tests was conducted in each corner of the library to evaluate the effectiveness of the developed AR.
Conference Paper
Augmented Reality (AR) has gained a lot of interest recently and has been used for various applications. Most of these applications are, however, limited to small indoor environments. Despite the wide range of large-scale application areas that could highly benefit from AR, until now there have rarely been AR applications that target such environments. In this work, we discuss how AR can be used to enhance the experience of on-site spectators at live sport events. We investigate the challenges that come with applying AR in such a large-scale environment and explore state-of-the-art technology and its suitability for an on-site AR spectator experience. We also present a concept design and explore the options for implementing AR applications inside large-scale environments.
Chapter
Table tennis stroke recognition is very important for athletes to analyze their sports skills. It can help players regulate their hitting movement and calculate energy expenditure. Different players have different stroke motions, which makes stroke recognition more difficult. In order to accurately distinguish stroke movements, this paper uses body sensor networks (BSN) to collect motion data. Sensors collecting acceleration and angular velocity information are placed on the upper arm, lower arm, and back, respectively. Principal component analysis (PCA) is employed to reduce the feature dimensions, and a support vector machine (SVM) is used to recognize strokes. Compared with other classification algorithms, the final experimental results (97.41% accuracy) illustrate that the algorithm proposed in the paper is effective and useful.
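The PCA-plus-SVM pipeline described in this entry can be sketched with scikit-learn as follows; the feature dimensionality, number of components, class count, and random data are assumptions for illustration only, not the authors' setup.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Hypothetical features: 200 strokes, each described by 90 accelerometer and
# gyroscope statistics from the upper arm, lower arm, and back sensors.
X = np.random.rand(200, 90)
y = np.random.randint(0, 4, size=200)           # 4 assumed stroke classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Reduce dimensionality with PCA, then classify with an RBF-kernel SVM.
model = make_pipeline(PCA(n_components=20), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))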
Article
Classification of several gesture types is very helpful in many applications. This paper addresses fast classification of hand gestures using DTW over simple multi-core processors. We present a methodology to distribute templates over multiple cores and then allow parallel execution of the classification. The per-core results were passed to a voting algorithm in which the majority vote was used for the final classification. The speed of processing increased dramatically due to the use of multi-core processors with DTW.
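A rough sketch of the parallel template-matching idea is given below: template comparisons are scored against the query in a process pool and the labels of the nearest templates are put to a majority vote. The function names and the simple O(nm) DTW are illustrative assumptions, not the authors' implementation.

import numpy as np
from collections import Counter
from functools import partial
from multiprocessing import Pool

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def score_template(labeled_template, query):
    label, template = labeled_template
    return label, dtw_distance(query, template)

def classify(query, templates, k=5, workers=4):
    # Distribute the DTW computations over worker processes, then take a
    # majority vote among the k nearest templates.
    with Pool(workers) as pool:
        scored = pool.map(partial(score_template, query=query), templates)
    scored.sort(key=lambda item: item[1])
    votes = Counter(label for label, _dist in scored[:k])
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    templates = [(f"gesture_{i % 3}", rng.random(80)) for i in range(30)]
    print(classify(rng.random(75), templates))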
Chapter
This research study addresses the design and development of an augmented reality headset display for downhill winter athletes, which may improve visual perception and spatial awareness and reduce injury. We used a variety of methods to collect participant data, including surveys, experience simulation testing, user response analysis, and statistical analysis. The results revealed that downhill winter athletes of various levels may benefit differently from access to athletic data during physical activity, and indicated that some expert-level athletes can train to strengthen their spatial awareness abilities. The results also generated visual design recommendations, including icon colours, locations within the field of view, and alert methods, which could be utilized to optimize the usability of a headset display.
Article
Biomechanical analysis has typically been confined to a laboratory setting. While attempts have been made to take laboratory testing into the field, this study was designed to assess whether augmented reality (AR) could be used to bring the field into the laboratory. The study aimed to measure knee load in volleyball players through a jump task incorporating AR while maintaining perception-action couplings by replicating the visual features of a volleyball court. Twelve male volleyball athletes completed four tasks: drop landing, hop jump, spike jump, and spike jump while wearing AR smart glasses. Biomechanical variables included patellar tendon force, knee moment, and kinematics of the ankle, knee, hip, pelvis, and thorax. The drop landing showed differences in patellar tendon force and knee moment when compared to the other conditions. The hop jump did not present differences in kinetics when compared to the spike conditions, instead displaying the greatest kinematic differences. As a measure of patellar tendon loading, the AR condition showed a close approximation to the spike jump, with no differences present when comparing landing forces and mechanics. Thus, AR may be used in clinical assessment to better replicate information from the competitive environment.