
Original Article

Proc IMechE Part P: J Sports Engineering and Technology, 1–17.
© IMechE 2018.
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/1754337118815021
journals.sagepub.com/home/pip

Studies and simulations on the flight trajectories of a spinning table tennis ball via a high-speed camera vision tracking system

Ping Wang^1,2, Qian Zhang^1, Yinli Jin^1 and Feng Ru^1

Abstract

When a table tennis ball is hit by a racket, the ball spins and undergoes a complex trajectory in the air. In this article, a model of a spinning ball is proposed for simulating and predicting the ball flight trajectory, including topspin, backspin, rightward spin, leftward spin, and combined spin. The actual trajectory and rotational motion of a flying ball are captured by three high-speed cameras and then reconstructed using a modified vision tracking algorithm. For the purpose of model validation, the simulated trajectory is compared to the reconstructed trajectory, resulting in a deviation of only 2.42%. Such high modeling accuracy makes the proposed method an ideal tool for developing virtual vision systems that emulate games and can be used to train table tennis players efficiently.

Keywords

Three-dimensional reconstruction, ball spinning, flight trajectory, high-speed camera, table tennis ball

Date received: 1 February 2018; accepted: 31 October 2018

Introduction

Unexpected flight trajectories of a fast-flying table tennis ball are very common in table tennis matches. When a ball is hit by a racket, it flies through the air and may curve due to unbalanced air friction. Thus, the flight trajectory of a ball may be difficult to predict due to its complex movement,^1 as shown in Figure 1. However, such a complex maneuver is extremely hard to observe with the naked eye during games because of the ball's small size, fast flying speed, and long distance from the spectators. Therefore, fast reconstruction of a ball's flight path and its spinning motion is worth studying for both spectators and players.

Various theoretical studies have been conducted to predict ball flight trajectories without spinning motion. For instance, the forces affecting a table tennis ball during its flight were analyzed and a collision model of a table tennis ball hitting the table was established.^2,3 From those data, the trajectories of ball flight without spinning were successfully reconstructed. This method was improved in other studies^4–6 by a locally weighted linear regression algorithm and used to determine the equation of a ball trajectory by learning and memorizing the coefficients of the trajectory equation from many samples. In other work,^7,8 a physical model of table tennis ball motion was established and a dynamic system was used for speed feedback adjustment. Then, a good tracking result fitting the flight trajectory of a table tennis ball was obtained using polynomial fitting based on the least-squares method. Although the accuracy was greatly improved, the effect of a ball's spinning motion on its trajectory was not included in the studies mentioned above. In a study by Ren et al.,^9 the influence of the Magnus force on the flight trajectory was taken into account. A method for analyzing and classifying ball flight trajectories based on a fuzzy neural network was proposed to effectively identify different types of spinning motion, including topspin, backspin, rightward spin, and leftward spin. However, this method requires offline training of a neural network and a large amount of data is needed for

1 School of Electronics and Control Engineering, Chang'an University, Xi'an, China
2 School of Automation, Northwestern Polytechnical University, Xi'an, China

Corresponding author: Ping Wang, School of Electronics and Control Engineering, Chang'an University, Mailbox 370, Nan Er Huan Zhong Duan, Xi'an 710064, Shaanxi, China. Email: wang0372@e.ntu.edu.sg

training, so it is not highly applicable in practice. In recent years, deep learning algorithms have been gradually applied to moving object recognition.^10 In comparison with traditional neural networks, deep learning can reduce the complexity of feature extraction, achieving a relatively high recognition rate. In addition, a method based on the extended Kalman filter was used to measure the angular velocity of a spinning ball, improving the anti-interference capability by reducing the measurement noise.^11

Besides the theoretical predictions, related studies have also investigated methods for video tracking of the flight trajectory of a fast-moving table tennis ball. Acosta et al.^12 used the monocular vision system of a table tennis robot to study the geometrical relationship among the light, shadow, and ball to determine the ball location. Modi et al.^13 built a visual table tennis robot containing sensors to determine the ball position. In a study by Yang et al.,^14 color information was first applied to locate a table tennis ball; then, edge detection and the Hough transform were used to obtain the image coordinates of the ball center; finally, the Kalman filter algorithm was employed to track the table tennis ball as the background changed. Rusdorf and Brunnett^15 used three cameras to build a multi-vision system to perform real-time simulation of ball movement and then predicted the three-dimensional (3D) trajectory of a ball. Despite identifying the flight path successfully, most of the above methods used a low sampling frequency, and the experimental results were affected by the smear phenomenon caused by low-speed cameras. To overcome these difficulties, a high-speed camera vision system^16 has recently been proposed to track the ball trajectory and analyze high-speed moving objects.

In this article, top, back, rightward, leftward, and combined spinning motions are investigated and simulated by the proposed model of a spinning ball to predict the ball flight trajectory. Various algorithms were tested and the most effective one was selected for this application. Background subtraction is used to detect the table tennis ball, and the mean-shift algorithm, which searches for a locally optimal solution by climbing the probability density gradient, is used to track the target. However, when it is used to determine the real position of a fast-moving ball in every frame, it often delivers poor tracking performance because the initial position in the current frame requires information on the ball position in the previous frame. In addition, this algorithm often fails when a large proportion of the ball is occluded.^17,18

To solve these problems, a Kalman filter was applied in the proposed method to predict the ball position, which was then used as the starting point of the mean-shift iteration to improve the tracking. Moreover, the least-squares method was used to calibrate the cameras to obtain an accurate position of the table tennis ball, and curve fitting was adopted to fit the trajectory and obtain the ball speed at a given moment. Experimentally, a multiple high-speed camera system was set up to capture the flying ball from different angles, and 3D reconstruction of the ball spinning and flight trajectory was obtained by the proposed systematic vision tracking method.

Theoretical analysis

Aerodynamics modeling

To analyze the aerodynamics of a table tennis ball, it is necessary to analyze the forces affecting the ball during its flight and to establish an aerodynamic model according to Newton's second law. During the flight, a table tennis ball is mainly influenced by the Magnus force, gravity, air resistance, and buoyancy. Among them, buoyancy has far less effect than the other forces, so its influence on the ball trajectory can be considered negligible.

The Magnus force is generated when a ball spins within a surrounding fluid. The spinning increases the fluid velocity on one side of the ball and decreases it on the other side. Because pressure is lower where the flow velocity is larger and higher where it is smaller, a lateral pressure difference arises across a spinning ball; this pressure difference is the Magnus force. The direction of the Magnus force is perpendicular to both the rotation axis and the movement direction of the ball, so it essentially changes the direction of the flight velocity. According to the spinning direction, ball spinning is categorized into top, back, rightward, leftward, and combined spinning, as shown in Figure 2. Figure 2(a) shows the momentary force on a table tennis ball in contact with a racket and its spinning after it leaves the racket, whereas Figure 2(b) shows the Magnus force and gravity acting on a table tennis ball during flight (both buoyancy and air resistance are ignored in the figure).

According to the Kutta–Joukowski theorem, the Magnus force is defined by equation (1)

F_L = (1/2) ρ S C_L r (ω × v)    (1)

where ρ is the air density, usually equal to 1.205 kg/m³; S is the windward area of the ball; C_L is the lift coefficient, usually equal to 1.23; r represents the ball radius, which is 0.02 m for a standard ball; ω is the spinning angular velocity of the ball; and v is the ball velocity.
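As a minimal sketch of equation (1), the Magnus force can be computed as a cross product of the spin and velocity vectors. The constants follow the values quoted in the text; the function name and the example spin values are illustrative, not taken from the paper.

```python
import numpy as np

# Constants quoted in the text.
RHO = 1.205                      # air density, kg/m^3
R_BALL = 0.02                    # standard ball radius, m
S_BALL = np.pi * R_BALL**2       # windward (cross-sectional) area, m^2
C_L = 1.23                       # lift coefficient

def magnus_force(omega, v):
    """Magnus force vector, equation (1): F_L = 0.5*rho*S*C_L*r*(omega x v)."""
    return 0.5 * RHO * S_BALL * C_L * R_BALL * np.cross(omega, v)

# Spin about the x-axis with forward motion along y gives a purely
# vertical Magnus force (negative z for this spin direction).
F = magnus_force(np.array([-100.0, 0.0, 0.0]), np.array([0.0, 10.0, 0.0]))
```

The force is perpendicular to both the spin axis and the velocity, as the text states, which is exactly what the cross product encodes.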

Figure 1. Flight trajectories of the spinning and non-spinning table tennis balls.


Gravity, which affects the entire ball flight, acts vertically downward and is defined by equation (2)

G = m g    (2)

where m is the ball mass, which is 0.0027 kg for a standard ball, and g is the acceleration due to gravity, generally 9.8 m/s².

The direction of the air resistance is opposite to the movement direction of the table tennis ball during the entire flight, and it is defined by equation (3)

F_d = (1/2) ρ S C_d v²    (3)

where ρ is the air density; C_d is the air resistance coefficient, usually equal to 0.2; S denotes the windward area of the ball; and v is the ball speed.

After analyzing the forces exerted on a table tennis ball, the ball flight trajectory is further investigated. First, the coordinate system is defined such that the x-axis represents the horizontal axis, the y-axis represents the longitudinal axis, and the z-axis is perpendicular to the x-O_world-y plane, as shown in Figure 3. Then, according to the analysis of all the above-listed forces and Newton's second law, the ball motion is modeled as shown in equations (4) and (5)

                     [ −cos θ cos φ   cos α cos ξ    0 ] [ F_d ]
m [v̇_x v̇_y v̇_z]^T = [ −cos θ sin φ   cos α sin ξ    0 ] [ F_L ]    (4)
                     [ −sin θ         sin α         −1 ] [ G   ]

[ẋ ẏ ż]^T = [v_x v_y v_z]^T    (5)

where F_L denotes the Magnus force; F_d denotes the air resistance; G denotes the gravity; m represents the mass of the table tennis ball; v_x, v_y, v_z represent the speeds in the x, y, and z directions, respectively; x, y, z represent the ball positions on the x, y, and z axes, respectively; θ is the angle between the ball moving direction and the xOy plane; φ is the angle between the velocity projection on the xOy plane and the positive part of the x-axis; α is the angle between the force F_L and the xOy plane; and ξ is the angle between the projection of F_L and the x-axis. All these angles are presented in Figure 4. The relationship between the velocity components and the angles is given by equation (6):

Figure 2. Ball spinning: (a) spinning and force of a table tennis ball at the initial moment; (b) force during the table tennis ball flight, where G denotes the gravity of the table tennis ball, F is the support force of the racket on the ball, f is the friction force between the racket and ball, and F_L represents the Magnus force.

Figure 3. Coordinate system of a table tennis table.


v = sqrt(v_x² + v_y² + v_z²)
tan φ = ± v_y / v_x    (6)
tan θ = ± (v_z / v_y) sin φ
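The flight model above can be integrated numerically. The sketch below is a minimal Euler integration written directly in vector form (drag opposing the velocity, Magnus via the cross product, gravity along −z), which is equivalent to the angle decomposition of equation (4); the step size, initial state, and spin value are illustrative assumptions.

```python
import numpy as np

# Constants from the text; the integration scheme and values are a sketch.
RHO, C_D, C_L = 1.205, 0.2, 1.23
R = 0.02
S = np.pi * R**2
M = 0.0027
G = np.array([0.0, 0.0, -9.8])

def step(p, v, omega, dt=1e-3):
    """Advance position p and velocity v by one explicit Euler step."""
    f_drag = -0.5 * RHO * S * C_D * np.linalg.norm(v) * v    # equation (3)
    f_magnus = 0.5 * RHO * S * C_L * R * np.cross(omega, v)  # equation (1)
    a = G + (f_drag + f_magnus) / M
    return p + dt * v, v + dt * a

p, v = np.array([0.0, 0.0, 0.3]), np.array([0.0, 8.0, 1.0])
omega = np.array([-120.0, 0.0, 0.0])   # spin about the x-axis, rad/s
for _ in range(200):                   # simulate 0.2 s of flight
    p, v = step(p, v, omega)
```

A higher-order integrator (e.g. Runge–Kutta) would be preferred for accuracy, but Euler suffices to show how the three forces enter the update.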

A collision model describing a table tennis ball hitting the table is also important because the collision dramatically changes the speed, direction, and trajectory of the flight. Hence, the collision position, time, and instantaneous velocity after the collision should be identified to form the complete actual flight trajectory. The rebound of the ball from the table in the x and y directions is modeled by the first-order polynomials shown in equation (7)

v_x,out = K_rx v_x,in + b_x
v_y,out = K_ry v_y,in + b_y    (7)

Moreover, the vertical rebound model is defined by equation (8)

v_z,out = K_rz v_z,in    (8)

where K_rx, K_ry, K_rz, b_x, and b_y are the rebound parameters, which can be obtained from curve fitting of the table tennis ball trajectory.
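Applying the rebound map of equations (7) and (8) is a one-line affair once the parameters are known. The parameter values below are made up for illustration (in the paper they are fitted from tracked trajectories); a negative K_RZ encodes the sign convention that a downward incoming vertical velocity flips into an upward outgoing one.

```python
# Hypothetical rebound parameters; the paper fits these from data.
K_RX, K_RY, K_RZ = 0.75, 0.8, -0.85
B_X, B_Y = 0.01, 0.02

def rebound(vx_in, vy_in, vz_in):
    """Map incoming velocity at table contact to outgoing velocity,
    equations (7) and (8)."""
    return (K_RX * vx_in + B_X,   # equation (7), x component
            K_RY * vy_in + B_Y,   # equation (7), y component
            K_RZ * vz_in)         # equation (8)

# A ball arriving at (0.5, 6.0, -3.0) m/s leaves slower and moving upward.
v_out = rebound(0.5, 6.0, -3.0)
```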

Spin modeling

When a table tennis ball flies in the air, it spins around a virtual axis. Assume a point P on the ball surface. After the ball spins through a certain angle, its coordinates can be obtained from the spin of point P and the translation of the coordinate system.^19 Figure 5 shows point P spinning through the angle ψ around the unit vector O_ball N that passes through the origin O_ball. The unit vector has components (a, b, c), so the rotation matrix R(ψ) is given by equation (9)

R(ψ) = R_x(γ) R_y(β) R_z(ψ) R_y^{-1}(β) R_x^{-1}(γ)    (9)

where R_x(γ) is the rotation matrix of point P when it spins around the x-axis with the angle γ, R_y(β) is the rotation matrix of point P when it spins around the y-axis with the angle β, and R_z(ψ) is the rotation matrix of point P when it spins around the z-axis with the angle ψ.

is defined by equation (10)

POworld =POball R(c)+TOworld

Oball

=POball

a2+(1a2) cos cab(1 cos c)csin cac(1 cos c)+bsin c0

ab(1 cos c)+csin cb2+(1b2) cos cbc(1 cos c)asin c0

ac(1 cos c)bsin cbc(1 cos c)+asin cc2+(1c2) cos c0

0001

2

6

6

6

4

3

7

7

7

5

+TOworld

Oball

ð10Þ

Figure 4. Angle characteristics for the flight of a table tennis ball.

Figure 5. Spin of the point P on the ball surface.


where P_Oworld denotes the coordinates of point P relative to the coordinate system O_world after the transformation, P_Oball denotes the coordinates of point P relative to the coordinate system O_ball before the transformation, and T_Oball^Oworld represents the translation of the coordinate system O_ball relative to the coordinate system O_world.
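The 3×3 rotation block of equation (10) is the standard axis-angle (Rodrigues-style) rotation about a unit axis (a, b, c). The sketch below builds it directly and applies it to column vectors (the paper's equation uses the row-vector convention, i.e. the transpose); function names are illustrative.

```python
import numpy as np

def rotation_about_axis(axis, psi):
    """Rotation matrix about unit axis (a, b, c) through angle psi,
    matching the 3x3 block of equation (10), here applied to column vectors."""
    a, b, c = axis / np.linalg.norm(axis)
    cp, sp = np.cos(psi), np.sin(psi)
    return np.array([
        [a*a + (1 - a*a)*cp,   a*b*(1 - cp) - c*sp, a*c*(1 - cp) + b*sp],
        [a*b*(1 - cp) + c*sp,  b*b + (1 - b*b)*cp,  b*c*(1 - cp) - a*sp],
        [a*c*(1 - cp) - b*sp,  b*c*(1 - cp) + a*sp, c*c + (1 - c*c)*cp],
    ])

# Rotating (1, 0, 0) by 90 degrees about the z-axis gives (0, 1, 0).
R = rotation_about_axis(np.array([0.0, 0.0, 1.0]), np.pi / 2)
p = R @ np.array([1.0, 0.0, 0.0])
```

Checking that R is orthogonal (R Rᵀ = I) is a quick way to validate any reconstruction of such a matrix.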

Video tracking method

Detection of ball flight

Background subtraction is a commonly used method in motion detection: a difference image is obtained from the difference between the background frame and the current frame.^20 Then, binary processing, domain analysis, and morphological filtering are conducted, which yields the position of the table tennis ball.^21 The difference D_k(i,j) between the background frame and the current frame, and the binary image R_k(i,j), are defined by equations (11) and (12), respectively

D_k(i,j) = | f_k(i,j) − B_k(i,j) |    (11)

R_k(i,j) = 0 if D_k(i,j) < T; 1 if D_k(i,j) ≥ T    (12)

where T is the threshold value obtained from the average background frame with a flying table tennis ball in it, B_k(i,j) is the average background frame, and f_k(i,j) is the current frame.
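Equations (11) and (12) amount to an absolute difference followed by a threshold. A minimal sketch on a synthetic frame (the threshold value and array sizes are illustrative):

```python
import numpy as np

def detect(frame, background, threshold):
    """Binary foreground mask via equations (11)-(12): absolute
    difference against the background, then thresholding."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    return (diff >= threshold).astype(np.uint8)

background = np.zeros((6, 6), dtype=np.uint8)
frame = background.copy()
frame[2:4, 2:4] = 200            # a bright "ball" appears in the frame
mask = detect(frame, background, threshold=50)
```

In the paper the mask is further cleaned by morphological filtering before the ball centroid is extracted.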

Fast tracking

To track a fast-moving table tennis ball, an improved fast-tracking algorithm is proposed. The Kalman filter is used to predict the ball position in the current frame. If the parameters of the filter are set properly, the predicted position will be close to the actual ball position, which significantly improves the tracking.

Target tracking. To describe the target feature, a specific feature space is first selected and its probability density function is used as the reference model representing the target. A virtual rectangular box is used to initialize the target object. To reduce the interaction between objects of different sizes, the initialized target is normalized to the unit circle. Namely, {x_i}, i = 1, 2, ..., n, represents the pixel coordinates x_i in the target area; x_0 denotes the center point of the target area; u = 1, 2, ..., m denotes the feature values of the target area; and k(x) is the profile of the Epanechnikov kernel function, which assigns different weights to pixel points at different distances from the center point. The farther a pixel is from the center point, the smaller the weight assigned to it, which minimizes its impact on the final result. The profile is defined as follows

k(x) = 1 − x², x < 1
k(x) = 0, otherwise    (13)

Finally, h is the bandwidth of the kernel function k(x), which is selected herein as the number of pixels of the candidate target.

The probability density function of the target model is expressed by equation (14)

q̂_u = C Σ_{i=1}^{n} k( ||(x_0 − x_i)/h||² ) δ[b(x_i) − u]    (14)

where δ(x) is the delta function; δ[b(x_i) − u] determines whether the color value of the pixel at coordinates x_i in the target area belongs to the feature value u: if it does, δ[b(x_i) − u] equals 1, otherwise it equals 0. C is the normalization constant, which, from the constraint condition Σ_{u=1}^{m} q̂_u = 1, is obtained as shown in equation (15)

C = 1 / Σ_{i=1}^{n} k( ||(x_0 − x_i)/h||² )    (15)

Furthermore, in the candidate target area, {x_i}, i = 1, 2, ..., n_h, denotes the pixel coordinates x_i; y is the center point of the candidate target area in the current frame; u = 1, 2, ..., m represents the feature values of the candidate target area; and the profile k(x) and the bandwidth h are the same as in the previous case. The probability density function of the candidate target model is expressed by equations (16) and (17)

p̂_u(y) = C_h Σ_{i=1}^{n_h} k( ||(y − x_i)/h||² ) δ[b(x_i) − u]    (16)

C_h = 1 / Σ_{i=1}^{n_h} k( ||(y − x_i)/h||² )    (17)

The similarity function defines the similarity between the target area and the candidate target area, and it is given by equations (18) and (19):

d(y) = sqrt( 1 − ρ[ p̂(y), q̂ ] ) = sqrt( 1 − Σ_{u=1}^{m} sqrt( p̂_u(y) q̂_u ) )    (18)

ρ(y) = ρ( p̂(y), q̂ ) = Σ_{u=1}^{m} sqrt( p̂_u(y) q̂_u )    (19)
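The Bhattacharyya coefficient of equation (19) and the distance of equation (18) are straightforward to compute on normalized histograms. A minimal sketch (the histogram values are illustrative):

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient, equation (19): sum of sqrt(p_u * q_u)."""
    return np.sum(np.sqrt(p * q))

def distance(p, q):
    """Similarity distance, equation (18)."""
    return np.sqrt(1.0 - bhattacharyya(p, q))

q_model = np.array([0.5, 0.3, 0.2])     # target model histogram
p_same = q_model.copy()                 # identical candidate
p_other = np.array([0.0, 0.1, 0.9])     # dissimilar candidate
```

An identical candidate gives a coefficient of 1 (distance 0), while a dissimilar one gives a strictly larger distance, which is what the tracker minimizes.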

Equation (19) denotes the Bhattacharyya coefficient of an estimated sample point. To locate the table tennis ball in the current frame, the distance function d(y) should be minimized, that is, ρ(y) should be maximized. By expanding equation (19) with the Taylor formula at p̂_u(y_0), equations (20) and (21) are obtained

ρ(y) = ρ( p̂(y), q̂ ) ≈ (1/2) Σ_{u=1}^{m} sqrt( p̂_u(y_0) q̂_u ) + (C_h/2) Σ_{i=1}^{n_h} w_i k( ||(y − x_i)/h||² )    (20)

where

w_i = Σ_{u=1}^{m} sqrt( q̂_u / p̂_u(y_0) ) δ[b(x_i) − u]    (21)

In equation (20), the first term is independent of y, so if the second term is maximized, equation (20) will also be maximized. The second term denotes the probability density estimate at the current frame position y, where w_i is the weight, and its density distribution in the local region can be maximized by the mean-shift algorithm. Based on the current position y_0, the new ball position is defined by equation (22)

y_1 = [ Σ_{i=1}^{n_h} x_i w_i g( ||(y_0 − x_i)/h||² ) ] / [ Σ_{i=1}^{n_h} w_i g( ||(y_0 − x_i)/h||² ) ]    (22)

where g(x) = −k′(x).
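Equation (22) can be sketched as an iterated weighted average. For the Epanechnikov profile, g(x) = −k′(x) is constant inside the bandwidth, so each step is simply a weighted mean of the nearby points; the synthetic point cloud and parameter values below are illustrative.

```python
import numpy as np

def mean_shift_step(y0, points, weights, h):
    """One mean-shift update in the spirit of equation (22)."""
    d2 = np.sum((points - y0) ** 2, axis=1) / h**2
    g = (d2 < 1.0).astype(float)   # g(x) = -k'(x) for the Epanechnikov profile
    denom = np.sum(weights * g)
    return np.sum(points * (weights * g)[:, None], axis=0) / denom

# Synthetic "pixel" cloud clustered around (5, 5), uniform weights.
rng = np.random.default_rng(0)
points = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(200, 2))
w = np.ones(200)

y = np.array([4.0, 4.0])           # start away from the mode
for _ in range(10):
    y = mean_shift_step(y, points, w, h=2.0)
```

In the tracker, the weights w_i come from equation (21) and the starting point y_0 from the Kalman prediction described next.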

Target position prediction. Before the initialization operation in each frame, the mean-shift algorithm searches for the candidate position of the ball using an effective prediction based on the Kalman filter. The ball state is predicted from the information of the previous motion, yielding a probable ball position. Since the Kalman filter contains information from the previous frames, such as the motion speed, in most cases a ball position determined by the Kalman filter will be more accurate than a ball position directly initialized from the previous frame. The iterative process of the mean-shift algorithm then starts from the ball position predicted by the Kalman filter. In this way, the key cause that makes the mean-shift algorithm fail in fast-moving ball tracking can be overcome.^22

The Kalman filtering algorithm is composed of the state equation and the observation equation, which are given by equations (23) and (24), respectively

x_k = F x_{k−1} + B u_{k−1} + w_{k−1}    (23)

z_k = O x_k + v_k    (24)

where x_k is the state vector, x_k = [x_k y_k x′_k y′_k]^T, in which x_k and y_k are the coordinates of the ball center pixel on the x-axis and y-axis, respectively, and x′_k and y′_k are the ball speed components on the x-axis and y-axis, respectively, that is, the rates of change of the pixel distance of the ball on the x-axis and y-axis; F is the state transition matrix; B is the input matrix; u_{k−1} is the deterministic control input; z_k is the observation vector, defined as z_k = [x_zk y_zk]^T, which contains the coordinates on the x-axis and y-axis of the ball center in the current frame; O is the observation matrix; and w_{k−1} and v_k are independent white Gaussian noises obeying the distributions p(w) ~ N(0, Q_{k−1}) and p(v) ~ N(0, R_k), respectively.

The prediction process is given by equations (25)

and (26)

^

x

k=F^

xk1+Buk1ð25Þ

P

k=FPk1FT+Qð26Þ

and the correction process is given by equations

(27)–(29)

Kk=P

kOT(OP

kOT+R)1ð27Þ

^

xk=^

x

k+Kk(zkO^

x

k)ð28Þ

Pk=(1KkO)P

kð29Þ

The Kalman filtering process can be summarized as follows. The optimal state x̂_{k−1} and its error covariance P_{k−1} at moment k−1 are used to predict the state x̂⁻_k and its error covariance P⁻_k at moment k. Furthermore, K_k is calculated by equation (27). Then, the observation value z_k is used to calculate the optimal posterior state x̂_k and the posterior error covariance P_k at moment k, and x̂_k and P_k are used as the prior state and error covariance at the next moment, that is, at moment k+1.
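A minimal constant-velocity sketch of equations (23)–(29) with the state [x, y, x′, y′] used in the text; the noise covariances, time step, and the synthetic measurement sequence are illustrative assumptions (there is no control input here, so the B u term is dropped).

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)   # state transition
O = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # observe pixel position only
Q = 0.01 * np.eye(4)                         # process noise covariance
Rn = 1.0 * np.eye(2)                         # measurement noise covariance

def kalman_step(x, P, z):
    # Prediction, equations (25)-(26).
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Correction, equations (27)-(29).
    K = P_pred @ O.T @ np.linalg.inv(O @ P_pred @ O.T + Rn)
    x_new = x_pred + K @ (z - O @ x_pred)
    P_new = (np.eye(4) - K @ O) @ P_pred
    return x_new, P_new

x, P = np.zeros(4), 10.0 * np.eye(4)
for k in range(1, 21):                       # ball moving 2 px/frame in x
    x, P = kalman_step(x, P, np.array([2.0 * k, 0.0]))
```

The filter quickly locks on to both the position and the velocity, which is exactly the information the mean-shift iteration needs as its starting point.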

According to the above discussion, the steps of the improved fast-tracking method are as follows:

Step 1. Manually select the location y_0 in the target area of the initial frame and calculate the probability distribution q̂_u of the target model.
Step 2. Determine the initial state vector of the Kalman filter and initialize the other necessary parameters.
Step 3. Predict the initial candidate for the target position of the current frame with the initialized Kalman filter and calculate the initial candidate model p̂_u(y_0).
Step 4. Estimate the Bhattacharyya coefficient ρ(y_0) = Σ_{u=1}^{m} sqrt( p̂_u(y_0) q̂_u ).
Step 5. Calculate the weight values w_i.
Step 6. Calculate the next candidate for the target location y_1 of the current frame and the candidate target model p̂_u(y_1).
Step 7. Estimate the Bhattacharyya coefficient ρ(y_1) = Σ_{u=1}^{m} sqrt( p̂_u(y_1) q̂_u ).
Step 8. If ρ(y_1) < ρ(y_0), then set y_1 ← (y_0 + y_1)/2, recalculate p̂_u(y_1), and go to step 7.
Step 9. If ||y_1 − y_0|| > ε, then set y_0 ← y_1, recalculate p̂_u(y_0), and go to step 4; otherwise, y_1 is the converged true target position of the current frame, and this position is used as the observation vector of the Kalman filter to update the filter. Repeat steps 3–9 to calculate the target position in the next frame, and repeat these steps until the target positions of all frames are determined.

Multiple-camera vision tracking system and 3D reconstruction

Camera calibration is the key technology of the vision system. The traditional method of direct linear transformation (DLT) matches known 3D coordinate points on a calibration block with the image points to calculate the internal and external parameters of the camera. The calculation is fast, but it requires a high-precision calibration block, which makes the calibration process more complicated and unsuitable for applications where calibration blocks cannot be used. In addition, it does not consider nonlinear distortion, so the calibration accuracy often fails to meet the requirements.^23 Given the shortcomings and application limitations of the DLT, an improved planar target calibration method was adopted in this study, which takes the camera distortion into account and optimizes the camera parameters, thus greatly improving the calibration accuracy. A planar calibration plate was further used to make the calibration process more convenient.

Camera calibration. Imaging is the process of projecting a point from a 3D coordinate system to a two-dimensional (2D) plane. When lens distortion is neglected, the relationship between the real table tennis ball position in the 3D coordinate system and its 2D projection on the imaging surface is given by equation (30)

              [ a_u  0    x_0  0 ]            [X_W]
s [x y 1]^T = [ 0    a_v  y_0  0 ] [ R   t  ] [Y_W] = M_1 M_2 [X_W Y_W Z_W 1]^T = M [X_W Y_W Z_W 1]^T    (30)
              [ 0    0    1    0 ] [ 0^T 1  ] [Z_W]
                                              [ 1 ]

where M is the projection matrix; (x, y) denote the ball coordinates in the pixel coordinate system; (X_W, Y_W, Z_W) denote the ball coordinates in the world coordinate system; s is a coefficient, with s = Z_c, where Z_c is the vertical coordinate of the ball in the camera coordinate system; a_u, a_v, x_0, y_0 are the internal parameters of the camera; and the rotation matrix R and translation vector t are the external parameters of the camera, representing the rotation and translation of the camera coordinate system relative to the world coordinate system, respectively.^24

The camera calibration process is mainly divided into four steps:

Step 1. Homography matrix computation

Suppose the camera coordinate system coincides with the world coordinate system and the calibration template plane lies on the plane Z_W = 0 of the world coordinate system. Then, with r_i denoting the ith column vector of the rotation matrix R, each point on the plane is computed by equation (31)

s [x y 1]^T = A [r_1 r_2 r_3 t] [X_W Y_W 0 1]^T = A [r_1 r_2 t] [X_W Y_W 1]^T    (31)

where s is the proportional coefficient; [r_1 r_2 r_3 t] denotes the external parameters, with t = [t_x t_y t_z]^T; and

A = [ a_u  0    x_0 ]
    [ 0    a_v  y_0 ]
    [ 0    0    1   ]

denotes the internal parameters without the distortion factor. Then, the homography matrix is expressed as shown in equation (32)

H = t_z [ h_1  h_2  h_3 ]   [ a_u r_11  a_u r_12  a_u t_x ]
        [ h_4  h_5  h_6 ] = [ a_v r_21  a_v r_22  a_v t_y ]    (32)
        [ h_7  h_8  1   ]   [ r_31      r_32      t_z     ]

By eliminating s and transforming, the relation shown in equation (33) is obtained

[ X_W  Y_W  1    0    0    0   −X_W(x − x_0)  −Y_W(x − x_0) ]                           [ x − x_0 ]
[ 0    0    0    X_W  Y_W  1   −X_W(y − y_0)  −Y_W(y − y_0) ] [h_1 h_2 ... h_8]^T  =  [ y − y_0 ]    (33)

Equation (33) can be written for each calibration point, and in theory only four points are needed to determine the eight unknowns. In practice, however, more points are usually used to reduce the error, and h_1, ..., h_8 are obtained by the least-squares method.
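The stacked linear system of equation (33) can be solved with ordinary least squares. The sketch below generates synthetic correspondences from a known homography and recovers it; for simplicity the principal point (x_0, y_0) is taken as the origin, so plain pixel coordinates appear on the right-hand side, and all values are illustrative.

```python
import numpy as np

# A known ground-truth homography (last entry normalized to 1).
H_true = np.array([[1.2,  0.1, 5.0],
                   [-0.2, 0.9, 3.0],
                   [0.001, 0.002, 1.0]])

# Planar calibration points (X_W, Y_W) and their projections.
pts = np.array([[0.0, 0.0], [1, 0], [0, 1], [1, 1], [2, 1], [0.5, 2]])
rows, rhs = [], []
for X, Y in pts:
    x, y, w = H_true @ np.array([X, Y, 1.0])
    x, y = x / w, y / w
    # Two rows per point, as in equation (33) with (x0, y0) = (0, 0).
    rows.append([X, Y, 1, 0, 0, 0, -X * x, -Y * x]); rhs.append(x)
    rows.append([0, 0, 0, X, Y, 1, -X * y, -Y * y]); rhs.append(y)

h, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
H_est = np.append(h, 1.0).reshape(3, 3)
```

With six points there are twelve equations for eight unknowns; the least-squares solve recovers the homography exactly in this noise-free setting.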

Step 2. Computation of the internal and external parameters of the camera

Since r_1 and r_2 are unit orthogonal vectors, the constraint condition is given as shown in equation (34)

r_11² + r_21² + r_31² = r_12² + r_22² + r_32² = 1
r_11 r_12 + r_21 r_22 + r_31 r_32 = 0    (34)

By combining equations (32) and (34) and performing a simple transformation, the following is obtained as shown in equation (35)

[ h_1² − h_2²   h_4² − h_5² ] [ 1/a_u² ]   [ h_8² − h_7² ]
[ h_1 h_2       h_4 h_5     ] [ 1/a_v² ] = [ −h_7 h_8    ]    (35)

Each homography matrix provides equation (35), which has only two unknown parameters (a_u, a_v) that can be solved. However, multiple template images taken at different locations should be considered to eliminate noise interference in the calculation of these parameters. The values of h_1, ..., h_8 corresponding to each image are calculated, and the values of a_u and a_v are then solved by the linear least-squares method, which makes the result robust to noise.

By transforming equation (32), the following is obtained as shown in equation (36)

[ r_11  r_12  t_x ]        [ a_u  0    0 ]^−1 [ h_1  h_2  h_3 ]
[ r_21  r_22  t_y ] = t_z  [ 0    a_v  0 ]    [ h_4  h_5  h_6 ]    (36)
[ r_31  r_32  t_z ]        [ 0    0    1 ]    [ h_7  h_8  1   ]

Therefore, both the homography matrix and the internal and external parameter matrices are completely solved.

Step 3. Distortion coefficient computation

In practice, lens often have varying distortion, includ-

ing tangential distortion and radial distortion. Usually,

the tangential distortion is very small and only the

radial distortion is considered. The radial distortion is

defined as

(xx0)(u2+v2)(xx0)(u2+v2)2

(yy0)(u2+v2)(yy0)(u2+v2)2

"#

k1

k2

=x

^x

y

^y

"# ð37Þ

where (x,y) denote the ideal pixel coordinates, (x

^,y

^)

denote the actual pixel coordinates after distortion,

(x0,y0) denote the center of the distorted pixel plane,

(u,v) represent the ideal image coordinates, and k1,k2

are the distortion coefficients.

Equation (37) is defined for each point in each

image, so for given mpoints in nimages, there are 2mn

equations, which can be represented in a matrix form

by equation (38)

Dk =dð38Þ

Then, the least-squares method is employed to calcu-

late the distortion coefficients by equation (39)

k=(DTD)1DTdð39Þ
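The closed-form solve of equation (39) can be sketched on synthetic data. The example below simplifies by placing the principal point at the origin and treating the same coordinates as both ideal image and pixel coordinates, so the distortion residual reduces to u·(k_1 r² + k_2 r⁴); the point set and true coefficients are illustrative.

```python
import numpy as np

k_true = np.array([0.1, -0.05])          # ground-truth (k1, k2)
pts = np.array([[0.1, 0.2], [0.3, -0.1], [-0.2, 0.4],
                [0.5, 0.5], [-0.4, -0.3]])

# Build D and d as in equations (37)-(38), principal point at the origin.
D, d = [], []
for u, v in pts:
    r2 = u * u + v * v
    factor = k_true[0] * r2 + k_true[1] * r2**2   # simulated distortion
    D.append([u * r2, u * r2**2]); d.append(u * factor)   # x-residual row
    D.append([v * r2, v * r2**2]); d.append(v * factor)   # y-residual row
D, d = np.array(D), np.array(d)

# Normal-equation solve, equation (39): k = (D^T D)^-1 D^T d.
k_est = np.linalg.solve(D.T @ D, D.T @ d)
```

In the noise-free case the normal equations recover the coefficients exactly; with real measurements they give the least-squares fit.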

Step 4. Parameter optimization

The values of A, R, t, k_1, k_2 obtained by the above method are imprecise, so they need to be refined by an optimization algorithm. The listed parameters are optimized by minimizing the expression shown in equation (40)

Σ_{i=1}^{n} Σ_{j=1}^{m} || p̂_ij − p(A, k_1, k_2, R_i, t_i, P_j) ||²    (40)

where p̂_ij denotes the actual pixel coordinates of the jth point in the ith image; p(A, k_1, k_2, R_i, t_i, P_j) denotes the pixel coordinates of the jth point in the ith image calculated by equations (31) and (37); R_i and t_i are the rotation matrix and translation vector of the coordinate system, respectively; and P_j denotes the world coordinates of the jth point on the calibration plate.


Trinocular stereo vision measurement. A binocular stereo vision system may not capture the complete trajectory of a table tennis ball because of occlusion or an unsuitable shooting angle caused by an improper camera position. Therefore, in this article, trinocular stereo vision technology was used to capture the complete ball trajectory. Three high-speed cameras were used to overcome the gaps introduced by binocular stereo vision systems: if one of the cameras was shielded, the remaining two cameras were still able to capture the complete trajectory.

In this article, the 3D trajectory of the table tennis ball center was obtained by the least-squares method to fill the gaps in the 3D trajectories synthesized by two non-parallel cameras.^25

In the case of a binocular 3D vision ranging system, when the corresponding ball in the image sequences captured by the two cameras is known, the spatial ball coordinates are calculated as shown in equations (41) and (42)

s_1 [x_1 y_1 1]^T = M_11 M_12 [X_W Y_W Z_W 1]^T = [ m1_11  m1_12  m1_13  m1_14 ]
                                                  [ m1_21  m1_22  m1_23  m1_24 ] [X_W Y_W Z_W 1]^T    (41)
                                                  [ m1_31  m1_32  m1_33  m1_34 ]

s_2 [x_2 y_2 1]^T = M_21 M_22 [X_W Y_W Z_W 1]^T = [ m2_11  m2_12  m2_13  m2_14 ]
                                                  [ m2_21  m2_22  m2_23  m2_24 ] [X_W Y_W Z_W 1]^T    (42)
                                                  [ m2_31  m2_32  m2_33  m2_34 ]

where (x1, y1) and (x2, y2) are the ball coordinates in the images obtained by the two cameras; M11 and M21 are the internal parameter matrices of the two cameras; and M12 and M22 are the external parameter matrices of the two cameras. By eliminating s1 and s2 and rearranging, equation (43) is obtained

$$\begin{cases}
(x_1 m^1_{31}-m^1_{11})X_W+(x_1 m^1_{32}-m^1_{12})Y_W+(x_1 m^1_{33}-m^1_{13})Z_W=m^1_{14}-x_1 m^1_{34} \\
(y_1 m^1_{31}-m^1_{21})X_W+(y_1 m^1_{32}-m^1_{22})Y_W+(y_1 m^1_{33}-m^1_{23})Z_W=m^1_{24}-y_1 m^1_{34} \\
(x_2 m^2_{31}-m^2_{11})X_W+(x_2 m^2_{32}-m^2_{12})Y_W+(x_2 m^2_{33}-m^2_{13})Z_W=m^2_{14}-x_2 m^2_{34} \\
(y_2 m^2_{31}-m^2_{21})X_W+(y_2 m^2_{32}-m^2_{22})Y_W+(y_2 m^2_{33}-m^2_{23})Z_W=m^2_{24}-y_2 m^2_{34}
\end{cases} \tag{43}$$

When equation (43) is re-written in the matrix form $A_1\left[X_W\ Y_W\ Z_W\right]^T=b_1$, the least-squares method can be used to obtain the relationship shown in equation (44)

$$\begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix}=\left(A_1^T A_1\right)^{-1}A_1^T b_1 \tag{44}$$

Thereby, the spatial ball coordinates are calculated.
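Equations (41) to (44) amount to linear (DLT-style) triangulation. A minimal Python sketch, assuming M1 and M2 are the full 3 × 4 projection matrices (the products M11·M12 and M21·M22 of equations (41) and (42)):

```python
import numpy as np

def triangulate(M1, M2, p1, p2):
    """Recover (XW, YW, ZW) of the ball center from its pixel positions
    p1 = (x1, y1) and p2 = (x2, y2) in two calibrated views.  Builds the
    four linear equations of (43) and solves them in the least-squares
    sense of equation (44): X = (A1^T A1)^-1 A1^T b1."""
    rows, rhs = [], []
    for M, (x, y) in ((M1, p1), (M2, p2)):
        rows.append(x * M[2, :3] - M[0, :3])   # x-row of equation (43)
        rhs.append(M[0, 3] - x * M[2, 3])
        rows.append(y * M[2, :3] - M[1, :3])   # y-row of equation (43)
        rhs.append(M[1, 3] - y * M[2, 3])
    A1, b1 = np.array(rows), np.array(rhs)
    return np.linalg.solve(A1.T @ A1, A1.T @ b1)
```

With a third camera, the rows contributed by whichever unoccluded views are available can be stacked into the same overdetermined system, which is how the trinocular setup fills the gaps.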

As previously mentioned, three high-speed cameras were used to capture videos of a table tennis ball from different positions. Filtering and dilation were used to eliminate noise. The ball in all three videos was detected and located by the background subtraction method and then tracked by an improved fast-tracking method. The least-squares method, combined with the camera calibration parameters, was then used to reconstruct and merge the ball trajectories from two cameras to obtain the centroid trajectory. Finally, the spinning trajectory of all points on the ball was obtained according to the rotation matrix. The flowchart of spinning table tennis ball tracking using the trinocular vision system is shown in Figure 6.

After reconstruction of the ball spinning trajectory, the angle through which the ball had spun between the first and last frames used for the spinning trajectory was readily obtained. The angular velocity of ball rotation can be estimated by equation (45)

$$\omega=\frac{m\times 2\pi}{(n-1)\times\Delta T} \tag{45}$$

where ω is the angular velocity, m is the number of ball rotations, n is the number of frames, and ΔT is the interframe spacing.

The above-presented modified vision tracking method was used to obtain the actual ball flight trajectory, which was then compared with the simulated flight trajectory obtained from the kinetics model. The difference between these two trajectories was determined and labeled as the error, as shown in equation (46)

$$\text{error}=\frac{\text{simulated}-\text{actual}}{\text{actual}}\times 100\% \tag{46}$$

where simulated represents the simulated flight trajectory, actual represents the actual flight trajectory, and error represents the deviation between the two trajectories.
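Equations (45) and (46) are direct to compute. In the sketch below, applying equation (46) per sample point and averaging is one plausible reading of the scalar formula, stated here as an assumption:

```python
import math

def angular_velocity(m, n, dT):
    """Equation (45): m full rotations observed over n frames with
    interframe spacing dT give omega = 2*pi*m / ((n - 1) * dT) in rad/s."""
    return m * 2.0 * math.pi / ((n - 1) * dT)

def trajectory_error(simulated, actual):
    """Equation (46): percentage deviation of the simulated trajectory from
    the actual one, here averaged over corresponding sample points."""
    devs = [abs(s - a) / abs(a) for s, a in zip(simulated, actual)]
    return 100.0 * sum(devs) / len(devs)
```

For example, one full rotation observed across 501 frames at 500 frames per second (ΔT = 0.002 s) gives ω = 2π rad/s.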

Experimental results and analysis

Flight experiments

The indoor experiments were conducted with the standard table tennis table and equipment presented in Figure 7. Three identical high-speed cameras, whose parameters are given in Table 1, were used to record the motion of a fast-moving table tennis ball.26–28 The first high-speed camera (camera no. 1 in Figure 7(a)) was placed in front of the table tennis table to capture the complete positive trajectory of a table tennis ball. The other two high-speed cameras (cameras no. 2 and no. 3 in Figure 7(a)) were located on the same side of the table tennis table: high-speed camera no. 2 recorded the motion of the right side of the ball, and high-speed camera no. 3 recorded the motion of the left side. Camera no. 1 formed two binocular vision systems, one with camera no. 2 and the other with camera no. 3. In the experiments, a planar target calibration plate consisting of 10 × 7 squares and 54 corner points was used. In addition, four types of marker balls with different patterns were used, as shown in Figure 7(b). In the experiments, all four types of marker balls had the same service speed, direction, and angular velocity.

Each ball was served with four types of spin, namely, no-spin, backspin, leftward spin, and leftward backspin, each of which was detected. For each spin type, the ball with the best detection accuracy was selected as the experimental ball for that spin type: Ball 0 was selected for the no-spin, Ball 1 for the backspin, Ball 2 for the leftward spin, and Ball 3 for the leftward backspin.29 The optimal detection accuracy of all spin types and the corresponding balls are shown in Table 2. The table tennis robot used for ball serving, presented in Figure 7(c), allowed easy adjustment of ball speed, angle, and spin. All table tennis balls were served from the same position and at the same initial velocity, v0 = 25 m/s.

Flight trajectory simulation and reconstruction results

The world coordinate system was set as shown in Figure 7(a). The corner of the table tennis table that was nearest to high-speed camera no. 3 was selected as the origin of the coordinate system, such that the horizontal axis was the x-axis, the longitudinal axis was the y-axis, and the z-axis was perpendicular to the $xO_{world}y$ plane. All three cameras were calibrated to optimize their internal and external parameters. After calibration, the internal and external parameters of high-speed camera no. 1 were as follows

$$M_{11}=\begin{bmatrix} 1239.5 & 2.2 & 408.2 \\ 0 & 1233.7 & 298.3 \\ 0 & 0 & 1 \end{bmatrix}$$

$$M_{12}=\begin{bmatrix} 0.9999 & 0.0119 & 0.0001 & 506.4738 \\ 0.0119 & 0.0064 & 0.9999 & 199.6935 \\ 0.0002 & 0.9999 & 0.0000 & 2010.1017 \end{bmatrix}$$

Table 1. Parameters of the high-speed cameras used in the experiments.

Sensor: CMOS (Bayer system color, single sensor) with 17 μm pixels
Resolution: 1024 × 512, color
Shutter: global electronic shutter from 16.7 ms to 1.5 μs, independent of frame rate
Data display: frame rate, shutter speed, trigger mode, date or time (switchable), status (playback/record), real time, frame count, and resolution
Frames per second: 500
Triggering: positive TTL 5 V p-p or switch closure
Saved image formats: JPEG, AVI, TIFF, BMP, RAW (compressed or uncompressed), PNG (10-bit), and FTIF (10-bit); images can be saved with or without image or comment data

Figure 7. (a) Experimental setup; (b) balls with different recognition patterns used in the experiments; and (c) table tennis robot.

Table 2. Optimal detection accuracy and the corresponding ball of four spin types.

Spin type No spin Backspin Leftward spin Leftward backspin

Ball Ball 0 Ball 1 Ball 2 Ball 3

Optimal accuracy (%) 95.83 92.44 89.57 95.90


The internal and external parameters of high-speed camera no. 2 after calibration were as follows

$$M_{21}=\begin{bmatrix} 2231.8 & 3.1 & 322.8 \\ 0 & 2218.7 & 260.3 \\ 0 & 0 & 1 \end{bmatrix}$$

$$M_{22}=\begin{bmatrix} 0.9999 & 0.0018 & 0.0007 & 1306.4738 \\ 0.0108 & 0.0065 & 0.9998 & 198.4975 \\ 0.0012 & 0.9987 & 0.0001 & 2023.0004 \end{bmatrix}$$

The internal and external parameters of high-speed camera no. 3 were as follows

$$M_{31}=\begin{bmatrix} 1627.3 & 1.9 & 508.2 \\ 0 & 1619.7 & 302.9 \\ 0 & 0 & 1 \end{bmatrix}$$

$$M_{32}=\begin{bmatrix} 0.0001 & 0.0018 & 0.9999 & 813.0924 \\ 0.9989 & 0.0035 & 0.0015 & 207.6931 \\ 0.0007 & 0.9987 & 0.0001 & 2506.4358 \end{bmatrix}$$

The radial distortion coefficients of high-speed cameras no. 1, no. 2, and no. 3, respectively, were as follows

$$k_1=k_2=k_3=\begin{bmatrix} 7.78\times 10^{-4} \\ 2.34\times 10^{-6} \end{bmatrix}$$

After calibration, 10 points were selected from the

calibration board to verify the calibration results. The

results showed that calibration was successful and the

errors of all three cameras were less than 0.5 pixels.

The improved fast-tracking method was used to

track the flight trajectories of four table tennis balls.

Then, the centroid trajectories of the table tennis balls

were reconstructed using the measuring principle of the

calibration of camera parameters and the trinocular

stereo visual least-squares method. The centroid trajec-

tories of all four types of table tennis balls after detec-

tion, tracking, and 3D reconstruction are shown in

Figure 8. To analyze the flight trajectories of the table tennis balls more intuitively and clearly, the three spinning trajectories and the one no-spin trajectory were projected onto both the $xO_{world}y$ and $xO_{world}z$ planes and compared. The comparison results are presented in Figure 9: Figure 9(a) shows the projections on the $xO_{world}y$ plane and Figure 9(b) shows the projections on the $xO_{world}z$ plane. As shown in Figure 9(a), the projected trajectories of Ball 0 and Ball 1 were approximately straight lines and basically coincided, while the projected trajectories of Ball 2 and Ball 3 were shifted to the right compared to Ball 0.

shift appeared because the no-spin and backspin balls

were not subjected to the force in the horizontal direc-

tion, while both leftward spin and leftward backspin

were subjected to the Magnus force to the right. In

Figure 9(b), it can be seen that the projected trajectories of Ball 0 and Ball 2 basically coincided, while the flight arc of the projected trajectories of Ball 1 and Ball 3 was larger than that of Ball 0 and the flight distance was longer.

Having been struck by the racket, the ball also rotated around its own axis during the flight, so the spin and the spinning trajectory of the ball were studied as well. A point P with the spherical coordinates (2, 0, 0) was selected on the table tennis ball, and the spinning trajectory between two collision points was studied for all three types of spinning balls. Based on the analysis of the spin model given in section ‘‘Spin modeling,’’ the rotation matrix of P around an arbitrary axis was $R(\psi)=R_x(\gamma)R_y(\beta)R_z(\psi)R_y(-\beta)R_x(-\gamma)$. For Ball 1 with backspin, the motion of point P was equivalent to a counter-rotation around the y-axis, thus $R(\psi)=R_y(-\psi)$. For Ball 2 with the leftward spin, the motion of point P was equivalent to a counter-rotation around the z-axis, thus $R(\psi)=R_z(-\psi)$. For Ball 3 with leftward backspin, the rotation matrix of P was given by $R(\psi)=R_x\left(\frac{\pi}{4}\right)R_y\left(\frac{\pi}{4}\right)R_z(\psi)R_y\left(-\frac{\pi}{4}\right)R_x\left(-\frac{\pi}{4}\right)$. The spinning trajectories of point P for all three spinning balls were obtained by the spin and translation transformations, and the results are shown in Figure 10.

Figure 8. Three-dimensional centroid trajectories of four types of the spinning table tennis balls.
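The composite rotation can be sketched as follows. The sign conventions below are an assumption (the extraction of the printed formulas loses minus signs), but the construction is the one described above: rotate the tilted spin axis onto the z-axis, spin about z, and rotate back:

```python
import numpy as np

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def spin_rotation(psi, beta=np.pi / 4, gamma=np.pi / 4):
    """Rotation through angle psi about a tilted spin axis (Ball 3 case):
    R(psi) = Rx(gamma) Ry(beta) Rz(psi) Ry(-beta) Rx(-gamma)."""
    return Rx(gamma) @ Ry(beta) @ Rz(psi) @ Ry(-beta) @ Rx(-gamma)

# Spinning trajectory of the surface point P = (2, 0, 0) relative to the
# ball center; adding the translated centroid positions at each frame
# gives the full spinning trajectory of the kind shown in Figure 10.
P = np.array([2.0, 0.0, 0.0])
points = [spin_rotation(psi) @ P for psi in np.linspace(0.0, 2.0 * np.pi, 50)]
```

A quick sanity check is that the rotation leaves the spin axis itself fixed and returns P to its start after a full turn.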

To compare the simulated and reconstructed trajectories, the kinetics model and the video tracking method were used to simulate and reconstruct the flight trajectory of the backspin ball, respectively. For the dynamic model, since the motion process is continuous in time, computer simulation requires discretization of the continuous process. Therefore, the accuracy of the simulation results was largely influenced by the processing period Δt = t2 − t1. Theoretically, when Δt approaches zero, the obtained flight trajectory is the real flight trajectory; however, the smaller the Δt, the larger the amount of computation, which inevitably slows the simulation. The table tennis robot was adjusted to determine the initial position, velocity v0, and angles θ0, φ0, α0, and ξ0 of the ball when it was launched. After the ball was served, in the subsequent simulation process, the ball was considered to perform a uniformly accelerated motion during each period Δt, and the three acceleration components v̇x, v̇y, v̇z were calculated by equation (4). The accelerations v̇x, v̇y, v̇z were integrated separately to obtain the changes of the velocity components. The velocity components vx, vy, vz were then integrated separately to obtain the displacements in the three directions over the time Δt by equation (5), giving the ball position at t2. When calculating the next moment t3, the motion information at t2 was used to recalculate the forces affecting the ball, and the angles were updated according to equation (6). Then equations (4) and (5) were used to calculate the ball position at the next time t3. These steps were repeated until the entire flight path of the ball was calculated.
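The stepping loop above can be sketched as plain Euler integration. Because equations (4) to (6) are written in spherical-angle form earlier in the article, the vector form below, and the constants Cd and CL, are assumptions chosen only to make the sketch self-contained, not the article's fitted values:

```python
import numpy as np

# Assumed constants for a standard 40 mm, 2.7 g table tennis ball.
m_ball, r_ball = 2.7e-3, 0.02      # mass (kg) and radius (m)
rho, g = 1.29, 9.8                 # air density (kg/m^3), gravity (m/s^2)
S = np.pi * r_ball ** 2            # windward area of the ball
Cd, CL = 0.4, 0.6                  # assumed drag and Magnus coefficients

def simulate(p0, v0, omega, dt=1e-4, t_end=0.6):
    """Euler time-stepping: within each step dt the ball is treated as
    uniformly accelerated, the acceleration being recomputed each step
    from gravity, air drag and the Magnus force (bounces not modeled)."""
    p, v = np.asarray(p0, float), np.asarray(v0, float)
    omega = np.asarray(omega, float)
    traj = [p.copy()]
    for _ in range(int(t_end / dt)):
        speed = np.linalg.norm(v)
        Fd = -0.5 * rho * S * Cd * speed * v                    # drag opposes v
        FL = 0.5 * rho * S * CL * r_ball * np.cross(omega, v)   # Magnus force
        a = np.array([0.0, 0.0, -g]) + (Fd + FL) / m_ball
        v = v + a * dt        # velocity update (integral of acceleration)
        p = p + v * dt        # position update (integral of velocity)
        traj.append(p.copy())
    return np.array(traj)
```

Halving dt until the trajectory stops changing is one practical way to carry out the Δt adjustment described in the article.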

Figure 9. Comparison of projections of the trajectories of three spinning balls and one no-spinning ball: (a) comparison of trajectories projected on the $xO_{world}y$ plane; (b) comparison of trajectories projected on the $xO_{world}z$ plane.

Figure 10. Spinning trajectories of the spherical point P.

The simulated and reconstructed flight trajectories of the table tennis ball are presented in Figure 11, wherein the magenta dotted line (the actual trajectory) denotes the flight trajectory of the backspin ball obtained by the method presented in this article and the magenta solid line (simulated trajectory 1) denotes the flight trajectory of the backspin ball obtained by the kinetics model presented in this article. The deviation from the actual trajectory was 8.69%, which represents a relatively poor prediction result. By adjusting the relevant parameter Δt, a better simulated trajectory was obtained, shown as the black solid line (simulated trajectory 2) in Figure 11. Simulated trajectory 2 was relatively close to the actual trajectory, with a trajectory deviation of 2.42%, which showed that the method presented in this article could approximate the actual trajectory of a table tennis ball accurately.

Figure 11. Comparison of simulated and actual trajectories. (a) 3D graphs of simulated and actual trajectories; (b) time-history graphs of simulated and actual trajectories on the x-axis; (c) time-history graphs of simulated and actual trajectories on the y-axis; and (d) time-history graphs of simulated and actual trajectories on the z-axis.

Discussions

By studying the forces affecting different spinning balls and reconstructing the ball flight trajectories, the relationships between flight trajectories and ball spins were analyzed. Compared with the no-spin ball, the topspin ball was served spinning clockwise about its horizontal axis, and its flight trajectory showed a downward trend, with a fast descent and a shorter flight distance. The backspin ball was served spinning anti-clockwise about its horizontal axis, and its flight trajectory showed an upward trend, with a larger flight arc and a longer flight distance. The leftward spin ball was served spinning approximately clockwise about its vertical axis, and its flight trajectory was deflected to the right. Finally, the rightward spin ball was served spinning approximately anti-clockwise about its vertical axis, and its flight trajectory was deflected to the left.

A combined spin was served about an inclined axis. Compared with the no-spin ball, the leftward topspin ball deviated to the right and descended, with a smaller flight arc and a shorter flight distance. The leftward backspin ball deviated to the right and rose, with a larger flight arc and a longer flight distance. The rightward topspin ball deviated to the left and descended, with a smaller flight arc and a shorter flight distance. The rightward backspin ball deviated to the left and rose, with a larger flight arc and a longer flight distance.

Combining the aerodynamics and classical

mechanics theories, the forces affecting a spinning ball

were analyzed and the kinetics model was established

to simulate a ball flight trajectory, which was then com-

pared with the reconstructed ball flight trajectory. The

results showed that these two flight trajectories were

close with a small deviation, which verified the accu-

racy and efficiency of the model and simulations.

Conclusion

This article studies the tracking and prediction of the flight trajectory of a table tennis ball. A kinetics model is proposed to predict the flight trajectory of a spinning ball. The real trajectory captured by three high-speed cameras was processed by the improved fast-tracking method. Next, the ball centroid trajectory for different spin types was modeled and analyzed to obtain a 3D trajectory of the ball mass center. Then, the relationship between the flight trajectory and the ball spin was analyzed based on the simulation results of the 3D trajectory. The experimental results showed that the simulated trajectory almost overlapped with the reconstructed (actual) trajectory, achieving a deviation of only 2.42%, which demonstrates the relatively high accuracy and validity of the method. According to the results, the proposed method can be used to train table tennis players and to construct virtual reality games, making the study of actual sports games more feasible.

Acknowledgements

The authors would like to express appreciation for the

assistance and discussion from Mr Yabo Wang, Ms Ni

Zhang, Ms Mengdan Cui, Mr Calvin Lin, and Mr

Xiaoyu Peng.

Declaration of conflicting interests

The author(s) declared no potential conflicts of interest

with respect to the research, authorship, and/or publi-

cation of this article.

Funding

The author(s) disclosed receipt of the following finan-

cial support for the research, authorship, and/or publi-

cation of this article: This work was supported by the

National Natural Science Foundation of China (grant

no. 51505037), Key Science and China Postdoctoral

Science Foundation (grant no. 2016M600814), the

Fundamental Research Funds for the Central

Universities (grant nos 3102017zy023, 300102328401,

300102328101, and 300102328205), and the Traffic

Project Research Fund of the Shaanxi Provincial

Transport Bureau (grant no. 16-57X).

References

1. Dupeux G, Goff AL, Quéré D, et al. The spinning ball spiral. New J Phys 2010; 12: 093004.
2. Zhang Z, Xu D and Yu J. Research and latest development of ping-pong robot player. In: 7th world congress on intelligent control and automation, Chongqing, China, 25–27 June 2008, pp.4881–4886. New York: IEEE.
3. Robinson G and Robinson I. Are inertial forces ever of significance in cricket, golf and other sports? Phys Script 2017; 92: 043001.
4. Deng XQ and Wang J. Study of the strategy for soccer robot to meet an emergency based on LWR. J Xihua Univ 2006; 25: 10–13.
5. Miyazaki F, Masutani Y, Hirose E, et al. State estimation of a spinning ball using LWR (locally weighted regression). J Robot Soc Jpn 2010; 16: 684–689.
6. Wang H, Cao C and Leung H. An improved locally weighted regression for a converter re-vanadium prediction modeling. In: 6th world congress on intelligent control and automation, vol. 1, Dalian, China, 21–23 June 2006, pp.1515–1519. New York: IEEE.
7. Wang QZ and Yang XX. Simulation of trajectory prediction of ping-pong. Comput Eng Sci 2013; 35: 164–168.
8. Peploe C, Mcerlain-Naylor SA, Harland AR, et al. A curve fitting methodology to determine impact location, timing, and instantaneous post-impact ball velocity in cricket batting. Proc IMechE, Part P: J Sports Engineering and Technology 2018; 232: 185–196.
9. Ren YQ, Fang ZJ, Xu D, et al. Spinning pattern classification of table tennis ball's flying trajectory based on fuzzy neural network. Kongzhi Juece/Control Decis 2014; 29: 263–269.
10. Mora SV and Knottenbelt WJ. Deep learning for domain-specific action recognition in tennis. In: IEEE conference on computer vision and pattern recognition workshops, Honolulu, HI, 21–26 July 2017, pp.170–178. New York: IEEE.
11. Zhang YH and Wei W. Online angular velocity estimated visual measurement for ping pong robot. J Zhejiang Univ 2012; 46: 1320–1326.
12. Acosta L, Rodrigo JJ, Mendez JA, et al. Ping-pong player prototype. IEEE Robot Autom Mag 2003; 10: 44–52.
13. Modi KP, Sahin F and Saber E. An application of human robot interaction: development of a ping-pong playing robotic arm. In: IEEE international conference on systems, man and cybernetics, vol. 2, Waikoloa, HI, 12 October 2006, pp.1831–1836. New York: IEEE.
14. Yang H, Yi YH, Liu GD, et al. Real-time recognizing and tracking of fast flying ping-pong ball under motion blur. J Shenyang Aerosp Univ 2014; 31: 47–51.
15. Rusdorf S and Brunnett G. Real time tracking of high speed movements in the context of a table tennis application. In: ACM symposium on virtual reality software and technology, Monterey, CA, 7–9 November 2005, pp.192–200. New York: ACM.
16. Zhang ZT and De XU. High-speed vision system based on smart camera and its target tracking algorithm. Robot 2009; 31: 229–223.
17. Chao L and Jing H. Object tracking algorithm based on improved camshift. Comput Eng Appl 2014; 11: 149–153.
18. Yang C, Duraiswami R and Davis L. Efficient mean-shift tracking via a new similarity measure. In: IEEE Computer Society conference on computer vision and pattern recognition, vol. 1, San Diego, CA, 20–25 June 2005, pp.176–183. New York: IEEE.
19. Teng PSP, Kong PW, Leong KF, et al. Effects of foot-landing techniques on lower extremity kinematics during single-leg drop landing. In: 1st international conference in sports science and technology (ICSST), Singapore, 11–12 December 2014, pp.613–618. New York: IEEE.
20. Zhang Y, Xiong R, Zhao Y, et al. Real-time spin estimation of ping-pong ball using its natural brand. IEEE T Instrum Meas 2015; 64: 2280–2290.
21. Gan MG, Jie C, Jin L, et al. Moving object detection algorithm based on three-frame-differencing and edge information. J Electron Inform Technol 2010; 32: 894–897.
22. Porikli F. Achieving real-time object detection and tracking under extreme conditions. J Real-Time Image Pr 2006; 1: 33–40.
23. Chi D, Wang Y, Ning L, et al. Experimental research of camera calibration based on Zhang's method. J Chin Agric Mech 2015; 36: 287–289.
24. Miyata S, Saito H, Takahashi K, et al. Ball 3D trajectory reconstruction without preliminary temporal and geometrical camera calibration. In: IEEE conference on computer vision and pattern recognition workshops, Honolulu, HI, 21–26 July 2017, pp.164–169. New York: IEEE.
25. Yuan GW, Chen ZQ, Gong J, et al. A moving object detection algorithm based on a combination of optical flow and three-frame difference. J Chin Comput Syst 2013; 34: 668–671.
26. Lin CS, Chua CK and Yeo JH. Badminton shuttlecock stability: modelling and simulating the angular response of the turnover. Proc IMechE, Part P: J Sports Engineering and Technology 2015; 230: 111–120.
27. Zhang Z, Halkon B, Chou SM, et al. A novel phase-aligned analysis on motion patterns of table tennis strokes. Int J Perf Anal Spor 2016; 16: 305–316.
28. Andersen MS, Yang J, de Zee M, et al. Full-body musculoskeletal modeling using dual Microsoft Kinect sensors and the AnyBody Modeling System. In: Proceedings of the 14th international symposium on computer simulation in biomechanics, Natal, 1 August 2013, pp.23–24.
29. Borg JP and Morrissey MP. Aerodynamics of the knuckleball pitch: experimental measurements on slowly rotating baseballs. Am J Phys 2014; 82: 921–927.

Appendix

Notation

A  internal parameter matrix
B  input matrix
Bk(i, j)  background frame
C, Ch  normalization constants
Cd  air resistance coefficient
CL  air resistance factor
d(y)  similarity function
Dk(i, j)  difference image
error  deviation between two trajectories
fk(i, j)  current frame
F  state transition matrix
Fd  air resistance direction
FL  Magnus force
g  acceleration of gravity
G  gravity
h  bandwidth of a kernel function
H  homography matrix
k  distortion coefficient
k(x)  contour function of the Epanechnikov kernel function
Kk  Kalman gain
m  mass of the ball
M  projection matrix
O  observation matrix
p(A, k1, k2, Ri, ti, Pj)  calculated pixel coordinates of the jth point in the ith picture
p̂ij  actual pixel coordinates of the jth point in the ith picture
p̂u(y)  probability density function of the candidate target model
Pk⁻  estimated error covariance
Pk  optimal estimated error covariance
P_Oball  coordinates of point P relative to the coordinate system Oball before transformation
P_Oworld  coordinates of point P relative to the coordinate system Oworld after transformation
q̂u  probability density function of the target model
r  ball radius
R  rotation matrix
R(ψ)  rotation matrix
Rk(i, j)  binary image
s  coefficient
S  windward area of the ball
t  translation matrix
Δt  interframe spacing
T  threshold value
T_Oball^Oworld  translation of the coordinate system Oball relative to the coordinate system Oworld
u  feature value of the target area
uk−1  deterministic control input
v  ball speed
vx_out  ball speed on the x-axis after collision
vx_in  ball speed on the x-axis before collision
vy_out  ball speed on the y-axis after collision
vy_in  ball speed on the y-axis before collision
vz_out  ball speed on the z-axis after collision
vz_in  ball speed on the z-axis before collision
wi  weight
xi  pixel coordinates in the target area
xi*  pixel coordinates in the current frame candidate target area
x0  center position of the target area
xk  state vector
x̂k⁻  state estimation variable
x̂k  optimal posterior state estimation variable
y  center position in the current frame candidate target area
zk  observation vector
α  angle between the force FL and the xOy plane
θ  angle between the projection of the velocity and the x-axis
φ  angle between the velocity and the xOy plane
ρ  air density
ρ(y)  Bhattacharyya coefficient
ω  spinning angular velocity
ξ  angle between the projection of FL and the x-axis
ψ  angle through which point P spins around the ball's own rotation axis