
Lithuanian Journal of Statistics 2013, vol. 52, No 1, pp. 5-13

Lietuvos statistikos darbai 2013, 52 t., Nr. 1, 5-13 p.

www.statisticsjournal.lt

SOME GOODNESS OF FIT TESTS FOR RANDOM SEQUENCES

Yuriy Kozachenko1, Tetiana Ianevych2

Taras Shevchenko National University of Kyiv, Mechanics and Mathematics Faculty

Address: Volodymyrska str., 64/13, Kyiv, 01601, Ukraine

E-mail: 1ykoz@ukr.net, 2yata452@univ.kiev.ua

Received: August 2013 Revised: September 2013 Published: November 2013

Abstract. In this paper we make an attempt to incorporate results from the theory of square Gaussian random variables in order to construct goodness-of-fit tests for random sequences (time series). We consider two versions of such tests. The first one is designed for testing the adequacy of hypotheses on the expectation and covariance function of a univariate non-centered sequence; the other is constructed for testing hypotheses on the covariance of a multivariate centered sequence. Simulation results illustrate the behavior of these tests in some particular cases.

Keywords: goodness-of-fit test, multivariate random sequence, time series, square Gaussian random variable.

1. Introduction

Investigating the properties of a random sequence is very important for applications. Very often a phenomenon can be observed only at certain points in time. In some cases the values of continuous quantities, such as temperature and voltage, can be recorded only at discrete moments of time. And even if the observations can be recorded continuously, we can use only discrete data for computational purposes. That is why in practice we usually deal with random sequences or time series. The latter term is used more frequently, but we prefer the former, as it designates the connection to random processes.

There is much literature devoted to this topic, in particular the classic books on the statistical analysis of time series

written by Anderson [1], Box and Jenkins [2], and Brockwell and Davis [4].

To date, many goodness-of-ﬁt tests in time series are residual-based. For example, the classic portmanteau test of

Box and Pierce [3] and its improvement by Ljung and Box [15] are based on the sample autocorrelations of the residuals.

In the context of goodness of ﬁt of nonlinear time series models, the McLeod and Li [16] test is based on the sample

autocorrelations of the squared residuals. Based on a spectral approach to the residuals, Chen and Deo [6] proposed some

new diagnostic tests. More recently, perhaps influenced by the empirical distribution function approach in goodness-of-fit testing for independent observations, substantial developments for time series data have taken place in the form of tests based on empirical processes marked by certain residuals; see, for instance, Chen and Härdle [7] and Escanciano [8]. For more information and details see the references therein.

In the model-based approach to time series analysis, estimated residuals are computed once a fitted model has been obtained from the data, and these are then tested for "whiteness", i.e., it is determined whether they behave like white noise. Tests for residual whiteness generally postulate whiteness of the residuals as the null hypothesis, so that significant rejections indicate model inadequacy. These tests require the computation of residuals from the fitted model, which can be quite tedious when the model does not have a finite-order autoregressive representation. Also, in such cases, the residuals are not uniquely defined.

In this paper we use another approach, based on the theory of square Gaussian random variables. This theory was developed in the works of Kozachenko et al. [11], [12], [13] for the investigation of stochastic processes. In the book by Buldygin and Kozachenko [5] the properties of the space of square Gaussian random variables were studied and its connection with Orlicz spaces of random variables was established. We use this theory to construct goodness-of-fit tests on the expectation and covariance function of a non-centered univariate stationary Gaussian sequence and on the covariance function of a centered multivariate stationary random sequence. Our tests do not require the computation of residuals and can be applied to infinite-order representations. This paper continues the work started in [14], which was devoted to testing a centered univariate random sequence.

Lithuanian Statistical Association, Statistics Lithuania (Lietuvos statistikų sąjunga, Lietuvos statistikos departamentas). ISSN 2029-7262 online

The paper consists of 5 sections and 2 annexes. The second section is devoted to the theory of square Gaussian random variables and contains the main definitions and results. In particular, we obtain an estimate for the distribution of the maximum of quadratic forms of square Gaussian random variables. Sections 3 and 4 apply the estimate obtained in Section 2 to construct different aggregate tests.

The test in Section 3 is constructed for testing the aggregated hypothesis on the expectation and covariance function of a non-centered stationary Gaussian sequence. It is based on the approach used in the $L_2$ theory of stochastic processes, within which process identification is made on the basis of two main characteristics: the mathematical expectation and the covariance function.

In Section 4 we consider multivariate sequences. The residual-based approach dominates in the multivariate case too; see, for example, the papers by Hosking [9], [10], Mahdi and McLeod [17] and the references therein. The goodness-of-fit test we construct for a centered Gaussian multivariate stationary sequence is based on fitting the covariance function.

The power properties of our tests are studied through simulations. Section 5 draws some conclusions. Some necessary mathematical calculations are relegated to the annexes at the end.

2. Square Gaussian random variables

Let $\Xi = \{\gamma_i, i \in I\}$ be a family of jointly Gaussian random variables for which $E\gamma_i = 0$ for all $i \in I$.

Definition 1. [13] The space $SG_\Xi(\Omega)$ is the space of square Gaussian random variables if any element $\xi \in SG_\Xi(\Omega)$ can be presented as

$$\xi = \vec{\gamma}^T A \vec{\gamma} - E\vec{\gamma}^T A \vec{\gamma}, \qquad (1)$$

where $\vec{\gamma}^T = (\gamma_1,\dots,\gamma_r)$, $\gamma_i \in \Xi$, $i = 1,\dots,r$, and $A$ is a real-valued matrix; or if $\xi \in SG_\Xi(\Omega)$ is the mean-square limit of a sequence $\{\xi_n, n \ge 1\}$ of the form (1):

$$\xi = \mathrm{l.i.m.}_{\,n\to\infty}\, \xi_n.$$

It was proved by Buldygin and Kozachenko in [5] that $SG_\Xi(\Omega)$ is a linear space.

For square Gaussian random variables the following results hold true.

Theorem 1. [11] Let $\vec{\xi}^T = (\xi_1,\xi_2,\dots,\xi_d)$ be a random vector such that $\xi_i \in SG_\Xi(\Omega)$, and let $B$ be a symmetric positive semi-definite matrix. Then for all $0 < s < \frac{1}{\sqrt{2}}$ the following inequality is true:

$$E\cosh\sqrt{\frac{s^2\,\vec{\xi}^T B\,\vec{\xi}}{E\,\vec{\xi}^T B\,\vec{\xi}}} \le R(\sqrt{2}s), \qquad (2)$$

where $R(y) = \frac{1}{\sqrt{1-y}}\exp\left\{-\frac{y}{2}\right\}$, $0 < y < 1$.

Theorem 2. Let $\{\eta_m, 1 \le m \le M\}$ be a sequence of random variables each of which can be presented as a quadratic form of square Gaussian random variables (that is, $\eta = \vec{\xi}^T B\,\vec{\xi}$, where $B$ is a symmetric positive semi-definite matrix). Then, for any $x \ge 0$,

$$P\left\{\max_{1\le m\le M}\frac{\eta_m}{E\eta_m} > x\right\} \le M\cdot W(x), \qquad (3)$$

where $W(x) = R\left(\frac{\sqrt{2x}}{1+\sqrt{2x}}\right)\Big/\cosh\left(\frac{x}{1+\sqrt{2x}}\right)$ and the function $R$ is defined in Theorem 1.

Proof. For all $0 < s < 1/\sqrt{2}$ and $x > 0$

$$P\left\{\max_{1\le m\le M}\frac{\eta_m}{E\eta_m} > x\right\} \le M\max_{1\le m\le M}P\left\{\frac{\eta_m}{E\eta_m} > x\right\} \le M\max_{1\le m\le M}\frac{E\cosh\sqrt{s^2\frac{\eta_m}{E\eta_m}}}{\cosh\sqrt{s^2 x}} \le \frac{M R(s\sqrt{2})}{\cosh(s\sqrt{x})}.$$

Putting $s = \frac{\sqrt{x}}{1+\sqrt{2x}}$, which is approximately the minimum point, we obtain (3) and

$$P\left\{\max_{1\le m\le M}\frac{\eta_m}{E\eta_m} > x\right\} \le M\frac{(1-\sqrt{2}s)^{-1/2}\exp\{-s/\sqrt{2}\}}{\cosh(s\sqrt{x})} \le M e^{-1/2}\exp\left\{\frac{1}{2(1+\sqrt{2x})}\right\}\frac{\sqrt{1+\sqrt{2x}}}{\cosh\frac{x}{1+\sqrt{2x}}} = O\left(x^{1/4}e^{-\sqrt{x}}\right) \text{ as } x\to\infty.$$

The theorem is proved.
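The bound (3) drives the critical values used in Sections 3 and 4. As an illustration (our sketch, not code from the paper), the following Python snippet evaluates $W(x)$ and solves $MW(\varepsilon_\alpha) = \alpha$ for the critical value by bisection; with $\alpha = 0.1$ and $M = 10$ it reproduces the value $\varepsilon_\alpha \approx 87.82$ used in the examples below.

```python
import math

def R(y):
    # R(y) = exp(-y/2) / sqrt(1 - y), 0 < y < 1  (Theorem 1)
    return math.exp(-y / 2.0) / math.sqrt(1.0 - y)

def W(x):
    # W(x) = R(sqrt(2x)/(1+sqrt(2x))) / cosh(x/(1+sqrt(2x)))  (Theorem 2)
    t = 1.0 + math.sqrt(2.0 * x)
    return R(math.sqrt(2.0 * x) / t) / math.cosh(x / t)

def critical_value(alpha, M, lo=1e-6, hi=1e4):
    # Solve M * W(eps) = alpha by bisection; M * W(x) decreases from ~M to 0.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if M * W(mid) > alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, `critical_value(0.1, 10)` returns a value close to 87.8, in agreement with the critical value quoted in the simulation studies.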

3. Testing hypotheses on the expectation and covariance function of a univariate sequence

Using inequality (3) it is possible to test a hypothesis on the expectation and covariance function of a non-centered univariate stationary Gaussian sequence. Hereinafter we consider stationarity in the strict sense.

Let us consider a stationary sequence $\{\gamma(n), n\ge1\}$ for which $E\gamma(n) = a$ and $E(\gamma(n)-a)(\gamma(n+m)-a) = B(m)$, $m\ge0$, is its covariance function. We assume that we have $N+M$ consecutive observations of this random sequence. Let us choose the estimators in the following way:

for the expectation,

$$\hat{a}_m = \frac{1}{N}\sum_{n=1}^{N}\gamma(n+m), \quad 0\le m\le M;$$

for the covariance function,

$$\hat{B}(m) = \frac{1}{N}\sum_{n=1}^{N}(\gamma(n)-\hat{a}_0)(\gamma(n+m)-\hat{a}_m), \quad 0\le m\le M.$$
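Assuming the $N+M$ observations are stored in a NumPy array, these estimators can be sketched as follows (our illustration; the function and variable names are ours):

```python
import numpy as np

def estimates(gamma, M):
    """Estimators of Section 3 for a sample path gamma of length N + M.

    a_hat[m] = (1/N) * sum_{n=1}^{N} gamma(n+m)
    B_hat[m] = (1/N) * sum_{n=1}^{N} (gamma(n) - a_hat[0]) * (gamma(n+m) - a_hat[m])
    """
    N = len(gamma) - M
    a_hat = np.array([gamma[m:m + N].mean() for m in range(M + 1)])
    B_hat = np.array([np.mean((gamma[:N] - a_hat[0]) * (gamma[m:m + N] - a_hat[m]))
                      for m in range(M + 1)])
    return a_hat, B_hat
```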

We denote

$$E_a := E(\hat{a}_m - a)^2 = \frac{1}{N^2}\sum_{n=1}^{N}\sum_{k=1}^{N}B(n-k) = \frac{1}{N}B(0) + \frac{2}{N^2}\sum_{i=1}^{N-1}(N-i)B(i);$$

$$E_B(m) := E\hat{B}(m) = B(m) - \frac{1}{N^2}\sum_{n=1}^{N}\sum_{k=1}^{N}B(n-k-m) = \left(1-\frac{1}{N}\right)B(m) - \frac{2}{N^2}\sum_{i=1}^{N-1}(N-i)B(i-m), \quad 0\le m\le M,$$

and introduce the random variables

$$\eta_a(m) := (\hat{a}_m - a)^2 - E_a, \quad 0\le m\le M,$$

and

$$\eta_B(m) := \hat{B}(m) - E_B(m), \quad 0\le m\le M.$$

It is easy to prove that these random variables are square Gaussian.
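Under the null hypothesis, $E_a$ and $E_B(m)$ are computed directly from the hypothesized covariance function $B$. A minimal sketch (ours, not from the paper), taking $B$ as a callable:

```python
def Ea_EB(B, N, M):
    # E_a = (1/N) B(0) + (2/N^2) sum_{i=1}^{N-1} (N - i) B(i)
    Ea = B(0) / N + 2.0 / N**2 * sum((N - i) * B(i) for i in range(1, N))
    # E_B(m) = (1 - 1/N) B(m) - (2/N^2) sum_{i=1}^{N-1} (N - i) B(i - m)
    EB = [(1.0 - 1.0 / N) * B(m)
          - 2.0 / N**2 * sum((N - i) * B(i - m) for i in range(1, N))
          for m in range(M + 1)]
    return Ea, EB
```

As a quick consistency check, for the degenerate constant covariance $B(m) \equiv 1$ these formulas give $E_a = 1$ and $E_B(m) = 0$.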

Let us define $\vec{\eta}(m)^T = (\eta_a(m), \eta_B(m))$, $0\le m\le M$. Then, for any positive semi-definite matrix $B_m = (b_{ij}(m))_{i,j=1,2}$, the random variable $\eta(m) := \vec{\eta}(m)^T B_m \vec{\eta}(m)$ is a quadratic form of square Gaussian random variables.

Remark 1. If $b_{ij} = 1$ for $i = j$ and $b_{ij} = 0$ for $i \ne j$ (that is, $B_m$ is the identity matrix of order 2), then $\eta(m) = \eta_a^2(m) + \eta_B^2(m)$ and $E\eta(m) = E\eta_a^2(m) + E\eta_B^2(m)$. All the calculations needed for the terms of $E\eta(m)$ are included in the annex to this section.

Remark 2. If for every $m$ the matrix $B_m = C^{-1}(m)$ is the inverse of the matrix $C(m)$ whose components are the covariances between the items of the vector $\vec{\eta}(m)$, then $E\eta(m) = \mathrm{const}$ for all $m$. But in this case one should be careful, since the matrices $C(m)$ have to be invertible.

Criterion 1. Let the null hypothesis $H_0$ state that $a$ and $B(m)$, $m\ge0$, are the expectation and covariance function of the non-centered Gaussian stationary sequence $\{\gamma(n), n\ge1\}$, and let the alternative $H_a$ imply the opposite statement.

If, for significance level $\alpha$ and the corresponding critical value $\varepsilon_\alpha$, which can be found from the equation $MW(\varepsilon_\alpha) = \alpha$,

$$\max_{0\le m\le M}\frac{\vec{\eta}^T(m)B_m\vec{\eta}(m)}{E(\vec{\eta}^T(m)B_m\vec{\eta}(m))} > \varepsilon_\alpha,$$

then the hypothesis $H_0$ is rejected, and otherwise it is accepted.

Remark 3. The probability of a type I error for Criterion 1 is less than or equal to $\alpha$.

Example 1. Let us consider a non-centered Gaussian sequence $\{\gamma(n), n\ge1\}$ whose elements can be presented according to the expression

$$\gamma(n) = a + \sum_{j=0}^{\infty}\beta(j)\zeta_{n-j}, \quad n\ge1, \qquad (4)$$

where $\beta(j) = e^{-\lambda j}$, $j\ge0$, $\lambda > 0$, and $\{\zeta_k, k\in\mathbb{Z}\}$ is a sequence of independent random variables such that $E\zeta_k = 0$ and $E\zeta_k^2 = 1$ for all $k$. In this case $E\gamma(n) = a$ for all $n$ and

$$B(m) = E(\gamma(n)-a)(\gamma(n+m)-a) = \sum_{j=0}^{\infty}\beta(j)\beta(j+m) = \sum_{j=0}^{\infty}e^{-\lambda j}e^{-\lambda(j+m)} = \frac{e^{-\lambda|m|}}{1-e^{-2\lambda}}, \quad m\in\mathbb{Z}. \qquad (5)$$
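Since $\beta(j) = e^{-\lambda j}$, the moving average (4) satisfies the recursion $\gamma(n) - a = e^{-\lambda}(\gamma(n-1) - a) + \zeta_n$, which gives a simple way to simulate it. The paper used the methods of [18]; the following is only our sketch, with Gaussian innovations and a burn-in period to approximate stationarity:

```python
import numpy as np

def simulate_ma(a, lam, n, burn=500, seed=0):
    # gamma(n) = a + sum_{j>=0} exp(-lam*j) * zeta_{n-j}, zeta iid N(0,1),
    # via the equivalent AR(1) recursion x_n = exp(-lam) * x_{n-1} + zeta_n.
    rng = np.random.default_rng(seed)
    rho = np.exp(-lam)
    z = rng.standard_normal(n + burn)
    x = np.zeros(n + burn)
    for t in range(1, n + burn):
        x[t] = rho * x[t - 1] + z[t]
    return a + x[burn:]
```

The sample mean and variance of a long simulated path should be close to $a$ and $B(0) = 1/(1-e^{-2\lambda})$, respectively.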

Using a simulation study we investigated how Criterion 1 works. We made 10 000 Monte Carlo simulations of the non-centered Gaussian stationary sequence $\gamma(n)$ with $a = E\gamma(n)$ and covariance function defined by (5) with fixed parameter $\lambda$. For this we used the simulation methods developed in [18].

For the symmetric positive semi-definite matrix $B_m$ we choose the identity matrix of order 2, $I_2$. Then $\eta(m) = \vec{\eta}^T(m)B_m\vec{\eta}(m) = \eta_a^2(m) + \eta_B^2(m)$.

1. Let us check the null hypothesis $H_0$ stating that the stationary Gaussian sequence $\gamma(n)$ has expectation $a = 1$ and covariance function defined by formula (5) with parameter $\lambda = 1$, versus the alternative hypothesis $H_a$ implying that the stationary Gaussian sequence $\gamma(n)$ has expectation $a = 1$ and covariance function defined by formula (5) with parameter $\lambda = 0.5$.

We simulated 10 000 realizations of the sequences defined by (4) with parameters $a = 1$, $\lambda = 1$ and $a = 1$, $\lambda = 0.5$. Let us define the necessary constants: the significance level $\alpha = 0.1$, $M = 10$, $N = 1000$ ($M + N = 1010$). In this case the critical value is $\varepsilon_\alpha = 87.82$.

For the simulated sequences we obtained an estimate of the probability of a type I error $\hat\alpha = 0$ and an estimate of the probability of a type II error $\hat\beta = 0.23$.

2. Let us now check the null hypothesis $H_0$ stating that the stationary Gaussian sequence $\gamma(n)$ has expectation $a = 1$ and covariance function defined by formula (5) with parameter $\lambda = 1$, versus the alternative hypothesis $H_a$ implying that the stationary Gaussian sequence $\gamma(n)$ has expectation $a = 0$ and covariance function defined by formula (5) with parameter $\lambda = 1$.

We again used the 10 000 simulated realizations of the sequence defined by (4) with parameters $a = 1$, $\lambda = 1$, and another sequence with parameters $a = 0.25$, $\lambda = 1$.

The required constants are the same as previously. In this case we obtained an estimate of the probability of a type I error $\hat\alpha = 0$ and an estimate of the probability of a type II error $\hat\beta = 0.0055$.

Remark 4. It is evident that the more observations we have, the more sensitive the criterion is. Finding the number $N$ for which the null and alternative hypotheses can be distinguished is the subject of our continuing investigation.


4. Testing hypotheses on the covariance function of a centered multivariate sequence

Inequality (3) can also be useful for testing a hypothesis on the covariance function of a centered multivariate random sequence.

Let us assume that the components of the multivariate random sequence $\vec{\gamma}(n)$, $n\ge1$, are jointly Gaussian, stationary (in the strict sense) sequences $\{\gamma_k(n), n\ge1\}$, $k = 1,\dots,K$, for which $E\gamma_k(n) = 0$ and $E\gamma_k(n)\gamma_l(n+m) = B_{kl}(m)$, $m\ge0$, is the covariance function of these sequences. It is worth mentioning that for $k = l$, $B_{kk}$ is the ordinary autocovariance function of the $k$-th component, while for $k \ne l$, $B_{kl}$ are the joint covariances, sometimes called cross-covariances. Hereinafter we shall use the term covariance function of the sequence $\vec{\gamma}(n)$.

We suppose that the sequence $\vec{\gamma}(n)$ is observed at points $1, 2, \dots, N+M$ ($N, M > 1$). As an estimator of the covariance function $B_{kl}(m)$ we choose

$$\hat{B}_{kl}(m) = \frac{1}{N}\sum_{n=1}^{N}\gamma_k(n)\gamma_l(n+m), \quad N\ge1,\ m = 0,\dots,M.$$

The estimator $\hat{B}_{kl}(m)$ is unbiased:

$$E\hat{B}_{kl}(m) = \frac{1}{N}\sum_{n=1}^{N}E\gamma_k(n)\gamma_l(n+m) = B_{kl}(m).$$

The random variables $\Delta_{kl}(m) = \hat{B}_{kl}(m) - B_{kl}(m)$ are square Gaussian, since $\hat{B}_{kl}(m)$ can be presented as $(\gamma_k(1),\dots,\gamma_k(N))^T A\,(\gamma_l(m+1),\dots,\gamma_l(N+m))$, where $A = \mathrm{diag}(1/N,\dots,1/N)$ is the diagonal matrix of order $N$ with entries $1/N$.

Let $\vec{\Delta}(m)$ be a vector with components $\Delta_{kl}(m)$.
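For a $K$-component sample stored as a list of arrays of length $N+M$, the estimator $\hat{B}_{kl}(m)$ can be sketched as follows (our illustration, not code from the paper):

```python
import numpy as np

def cross_cov_estimates(paths, M):
    """B_hat[k, l, m] = (1/N) * sum_{n=1}^{N} gamma_k(n) * gamma_l(n+m)
    for K centered, jointly stationary components observed at 1..N+M."""
    K = len(paths)
    N = len(paths[0]) - M
    B_hat = np.empty((K, K, M + 1))
    for k in range(K):
        for l in range(K):
            for m in range(M + 1):
                B_hat[k, l, m] = np.dot(paths[k][:N], paths[l][m:m + N]) / N
    return B_hat
```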

Criterion 2. Let the null hypothesis $H_0$ state that $B_{kl}(m)$, $m\ge0$, is the covariance function of the centered Gaussian stationary sequence $\vec{\gamma}(n) = \{\gamma_k(n), n\ge1\}_{k=1,\dots,K}$, and let the alternative $H_a$ state the opposite.

If, for significance level $\alpha$ and the corresponding critical value $\varepsilon_\alpha$, which can be found from the equation $MW(\varepsilon_\alpha) = \alpha$,

$$\max_{0\le m\le M}\frac{\vec{\Delta}^T(m)B_m\vec{\Delta}(m)}{E(\vec{\Delta}^T(m)B_m\vec{\Delta}(m))} > \varepsilon_\alpha,$$

then the hypothesis $H_0$ is rejected, and otherwise it is accepted.

Remark 5. The probability of a type I error for Criterion 2 is less than or equal to $\alpha$.

Remark 6. The simplest choice is to take the matrix $B_m$ to be the identity. If for every $m$ the matrix $B_m = C^{-1}(m)$ is the inverse of $C(m)$, which consists of the covariances of the components of the vector $\vec{\Delta}(m)$, then $E(\vec{\Delta}^T(m)B_m\vec{\Delta}(m)) = \mathrm{const}$ for all $m$. But in this case we should pay attention to the invertibility of the matrices $C(m)$.

Let us illustrate how this criterion works with an example.

Example 2. We consider a $K = 2$ component stationary centered Gaussian sequence $\vec{\gamma}(n) = \{\gamma_k(n), n\ge1, k = 1,2\}$. We assume that each component can be presented as a moving average

$$\gamma_k(n) = \sum_{j=0}^{\infty}\beta_k(j)\xi_{n-j}, \quad n\ge1,$$

with coefficients $\beta_k(j) = e^{-\lambda_k j}$, $\lambda_k > 0$, $k = 1,2$, $j\ge0$, where the random variables $\xi_j$ are independent with zero mean and unit variance.

If the components of $\vec{\gamma}(n)$ are not independent, then the covariance function of this sequence has the form

$$B_{kl}(m) = \begin{cases}\dfrac{e^{-\lambda_l m}}{1-e^{-(\lambda_k+\lambda_l)}}, & m\ge0;\\[2mm] \dfrac{e^{-\lambda_k|m|}}{1-e^{-(\lambda_k+\lambda_l)}}, & m<0,\end{cases} \qquad k,l = 1,2. \qquad (6)$$

In the case $k = l$ we obtain the covariance function of the $k$-th component:

$$B_{kk}(m) = \frac{e^{-\lambda_k|m|}}{1-e^{-2\lambda_k}}, \quad m\in\mathbb{Z},\ k = 1,2. \qquad (7)$$
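In the dependent case the two components share the same driving noise $\xi$, and each satisfies an AR(1)-type recursion, so the bivariate sequence can be simulated as follows (our sketch; the paper itself used the methods of [18]):

```python
import numpy as np

def simulate_bivariate(lam1, lam2, n, burn=500, seed=0):
    # gamma_k(n) = sum_{j>=0} exp(-lam_k * j) * xi_{n-j} with SHARED xi, i.e.
    # x_k(n) = exp(-lam_k) * x_k(n-1) + xi_n for k = 1, 2.
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(n + burn)
    out = []
    for lam in (lam1, lam2):
        rho = np.exp(-lam)
        x = np.zeros(n + burn)
        for t in range(1, n + burn):
            x[t] = rho * x[t - 1] + xi[t]
        out.append(x[burn:])
    return out
```

The lag-0 cross-covariance of a long simulated path should be close to $B_{12}(0) = 1/(1-e^{-(\lambda_1+\lambda_2)})$ from (6).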

Let $\vec{\Delta}^T(m) = (\Delta_{11}(m), \Delta_{12}(m), \Delta_{21}(m), \Delta_{22}(m))$ and let the matrix $B = I_4$ be the identity matrix of order 4. Then

$$E(\vec{\Delta}^T(m)B\vec{\Delta}(m)) = \sum_{k=1}^{2}\sum_{l=1}^{2}E\Delta_{kl}^2(m), \qquad (8)$$

where

$$E\Delta_{kl}^2(m) = \frac{1}{N}\left[\frac{1}{(1-e^{-2\lambda_k})(1-e^{-2\lambda_l})} + \frac{e^{-2\lambda_l m}}{(1-e^{-(\lambda_k+\lambda_l)})^2}\right]$$
$$+ \frac{2}{N}\sum_{t=1}^{m}\left(1-\frac{t}{N}\right)\left[\frac{e^{-(\lambda_k+\lambda_l)t}}{(1-e^{-2\lambda_k})(1-e^{-2\lambda_l})} + \frac{e^{-2\lambda_l m}}{(1-e^{-(\lambda_k+\lambda_l)})^2}\right]$$
$$+ \frac{2}{N}\sum_{t=m+1}^{N-1}\left(1-\frac{t}{N}\right)\left[\frac{e^{-(\lambda_k+\lambda_l)t}}{(1-e^{-2\lambda_k})(1-e^{-2\lambda_l})} + \frac{e^{-(\lambda_k+\lambda_l)t-(\lambda_l-\lambda_k)m}}{(1-e^{-(\lambda_k+\lambda_l)})^2}\right] = O\left(\frac{1}{N}\right) \text{ as } N\to\infty. \qquad (9)$$
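As a numerical sanity check (ours, not from the paper), $E\Delta_{kl}^2(m)$ can be evaluated via the compact form derived in the annex to Section 4, $E\Delta_{kl}^2(m) = \frac{1}{N}f(0) + \frac{2}{N}\sum_{t=1}^{N-1}\left(1-\frac{t}{N}\right)f(t)$ with $f(t) = B_{kk}(t)B_{ll}(t) + B_{kl}(t+m)B_{lk}(t-m)$, using the covariance (6); it agrees with the defining double sum and exhibits the $O(1/N)$ decay stated in (9):

```python
import math

def cov6(lk, ll, m):
    # cross-covariance (6); reduces to (7) when lk == ll
    return math.exp(-(ll if m >= 0 else lk) * abs(m)) / (1.0 - math.exp(-(lk + ll)))

def e_delta2(lk, ll, m, N):
    # E Delta_kl(m)^2 via the closed form from the annex to Section 4
    def f(t):
        return (cov6(lk, lk, t) * cov6(ll, ll, t)
                + cov6(lk, ll, t + m) * cov6(ll, lk, t - m))
    return f(0) / N + 2.0 / N * sum((1.0 - t / N) * f(t) for t in range(1, N))
```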

Using simulations we investigated how Criterion 2 works. We made 10 000 Monte Carlo simulations of two sequences with covariance functions defined by (6) and (7). For the simulations we used the methods described in the paper by Vasylyk et al. [18].

1. Let the null hypothesis $H_0$ state that the two components of the multivariate sequence $\vec{\gamma}(n)$ are two jointly Gaussian, centered stationary sequences $\gamma_1(n)$ and $\gamma_2(n)$ with covariance functions defined by (6) and (7) with parameters $\lambda_1 = 1$ and $\lambda_2 = 0.1$, respectively, and let the alternative hypothesis $H_a$ state that $\lambda_1 = 0.3$ and $\lambda_2 = 3$.

We define the constants as $\alpha = 0.1$, $M = 10$, $N = 1000$ ($M + N = 1010$). Under these definitions the critical value is $\varepsilon_\alpha = 87.82$.

We simulated 10 000 realizations of the two bivariate sequences with parameters $\lambda_1 = 1$, $\lambda_2 = 0.1$ and $\lambda_1 = 0.1$, $\lambda_2 = 1$. We obtained an estimate of the probability of a type I error $\hat\alpha = 0$ and an estimate of the probability of a type II error $\hat\beta = 0.9616$.

2. Let the null hypothesis $H_0$ state that the components of the multivariate sequence $\vec{\gamma}(n)$ are two jointly Gaussian, centered stationary sequences $\gamma_1(n)$ and $\gamma_2(n)$ with covariance functions defined by (6) and (7) with parameters $\lambda_1 = 1$ and $\lambda_2 = 0.1$, respectively, and let the alternative hypothesis $H_a$ state that $\lambda_1 = 0.05$ and $\lambda_2 = 5$.

We again used 10 000 simulated realizations of the two bivariate sequences with parameters $\lambda_1 = 1$, $\lambda_2 = 0.1$ and $\lambda_1 = 0.05$, $\lambda_2 = 5$, and defined the constants as previously.

In this case we obtained an estimate of the probability of a type I error $\hat\alpha = 0$ and an estimate of the probability of a type II error $\hat\beta = 0.0104$.

Remark 7. It is evident that the more observations we have, the more sensitive the test is.

5. Conclusions

In this paper we estimated the distribution of the maximum of a random sequence whose elements can be presented as quadratic forms of square Gaussian random variables. This result made it possible to build a criterion for testing a hypothesis on the expectation and covariance function of a non-centered univariate stationary Gaussian sequence and a hypothesis on the covariance function of a centered multivariate stationary Gaussian sequence. Simulation studies were also incorporated.

The inequality obtained in Section 2 can also be useful for testing similar hypotheses for non-centered multivariate random sequences. Our test statistics are quite easy to compute and do not require the calculation of residuals from a fitted model. This is especially advantageous when the fitted model is not a finite-order autoregressive model.

There is, of course, a lot of room for improvement of the tests. Comparison with other tests and finding the number $N$ for which the null and alternative hypotheses are distinguishable are also very important issues for further investigation.


6. Acknowledgments

We are grateful to two anonymous referees for their insightful comments that have signiﬁcantly improved the paper.

7. Annex to section 3

This annex contains the calculations of $E\eta_a^2(m)$ and $E\eta_B^2(m)$ required in Section 3.

$$E\eta_a^2(m) = E\left((\hat{a}_m - a)^2 - E_a\right)^2 = E(\hat{a}_m - a)^4 - (E_a)^2.$$

Using Isserlis' formula for the centered Gaussian random variables $\tilde\gamma(n) := \gamma(n) - a$ we obtain

$$E(\hat{a}_m - a)^4 = \frac{1}{N^4}\sum_{n=1}^{N}\sum_{k=1}^{N}\sum_{t=1}^{N}\sum_{s=1}^{N}E\tilde\gamma(n+m)\tilde\gamma(k+m)\tilde\gamma(t+m)\tilde\gamma(s+m)$$
$$= \frac{1}{N^4}\sum_{n=1}^{N}\sum_{k=1}^{N}\sum_{t=1}^{N}\sum_{s=1}^{N}\left[B(k-n)B(s-t) + B(t-n)B(s-k) + B(s-n)B(t-k)\right] = \frac{3}{N^4}\left(\sum_{n=1}^{N}\sum_{k=1}^{N}B(k-n)\right)^2. \qquad (10)$$

Then

$$E\eta_a^2(m) = \frac{3}{N^4}\left(\sum_{n=1}^{N}\sum_{k=1}^{N}B(k-n)\right)^2 - \left(\frac{1}{N^2}\sum_{n=1}^{N}\sum_{k=1}^{N}B(k-n)\right)^2 = \frac{2}{N^4}\left(\sum_{n=1}^{N}\sum_{k=1}^{N}B(k-n)\right)^2. \qquad (11)$$

Let us make the required calculation for $E\eta_B^2(m)$:

$$E\eta_B^2(m) = E(\hat{B}(m) - E_B(m))^2 = E(\hat{B}(m))^2 - (E_B(m))^2,$$

$$E(\hat{B}(m))^2 = \frac{1}{N^2}\sum_{n=1}^{N}\sum_{k=1}^{N}E(\gamma(n)-\hat{a}_0)(\gamma(n+m)-\hat{a}_m)(\gamma(k)-\hat{a}_0)(\gamma(k+m)-\hat{a}_m).$$

Using Isserlis' formula we obtain

$$E\eta_B^2(m) = S_0 - 2S_1 + S_2 - (E_B(m))^2, \qquad (12)$$

where

$$S_0 = \frac{1}{N^2}\left(N^2 B^2(m) + \sum_{n=1}^{N}\sum_{k=1}^{N}\left[B^2(k-n) + B(k-n+m)B(k-n-m)\right]\right);$$

$$S_1 = \frac{1}{N^2}B(m)\sum_{n=1}^{N}\sum_{k=1}^{N}B(k-n+m) + \frac{1}{N^3}\sum_{n=1}^{N}\left[\left(\sum_{k=1}^{N}B(k-n)\right)^2 + \left(\sum_{k=1}^{N}B(k-n+m)\right)\left(\sum_{l=1}^{N}B(l-n-m)\right)\right];$$

$$S_2 = \frac{1}{N^4}\left[\left(\sum_{n=1}^{N}\sum_{k=1}^{N}B(k-n)\right)^2 + 2\left(\sum_{n=1}^{N}\sum_{k=1}^{N}B(k-n+m)\right)^2\right].$$

8. Annex to section 4

In Section 4 we need to find the expectation of $\Delta_{kl}^2(m)$ in order to calculate the value of $E(\vec{\Delta}^T(m)B\vec{\Delta}(m))$ (see formula (8)). Let us do it.

$$E\Delta_{kl}^2(m) = E(\hat{B}_{kl}(m) - B_{kl}(m))^2 = E\left[\frac{1}{N}\sum_{n=1}^{N}(\gamma_k(n)\gamma_l(n+m) - B_{kl}(m))\right]^2$$
$$= \frac{1}{N^2}E\left[\sum_{n=1}^{N}\sum_{i=1}^{N}(\gamma_k(n)\gamma_l(n+m) - B_{kl}(m))(\gamma_k(i)\gamma_l(i+m) - B_{kl}(m))\right] = \frac{1}{N^2}\sum_{n=1}^{N}\sum_{i=1}^{N}E\gamma_k(n)\gamma_l(n+m)\gamma_k(i)\gamma_l(i+m) - B_{kl}^2(m).$$

Using again Isserlis' formula for centered Gaussian random variables, we obtain

$$E\gamma_k(n)\gamma_l(n+m)\gamma_k(i)\gamma_l(i+m) = E\gamma_k(n)\gamma_l(n+m)\,E\gamma_k(i)\gamma_l(i+m) + E\gamma_k(n)\gamma_k(i)\,E\gamma_l(n+m)\gamma_l(i+m) + E\gamma_k(n)\gamma_l(i+m)\,E\gamma_l(n+m)\gamma_k(i)$$
$$= B_{kl}^2(m) + B_{kk}(i-n)B_{ll}(i-n) + B_{kl}(i-n+m)B_{lk}(i-n-m).$$

Then

$$E\Delta_{kl}^2(m) = \frac{1}{N^2}\sum_{n=1}^{N}\sum_{i=1}^{N}\left[B_{kk}(i-n)B_{ll}(i-n) + B_{kl}(i-n+m)B_{lk}(i-n-m)\right]$$
$$= \frac{1}{N}\left[B_{kk}(0)B_{ll}(0) + B_{kl}^2(m)\right] + \frac{2}{N}\sum_{t=1}^{N-1}\left(1-\frac{t}{N}\right)\left[B_{kk}(t)B_{ll}(t) + B_{kl}(t+m)B_{lk}(t-m)\right].$$

Substituting the covariance function defined by (6) into the last formula, we get (9).

References

1. Anderson, T. W., 1971: The Statistical Analysis of Time Series. New York: John Wiley & Sons, 704 p.
2. Box, G. E. P., Jenkins, G. M., Reinsel, G. C., 2011: Time Series Analysis: Forecasting and Control, 4th Edition. Wiley Series in Probability and Statistics, 784 p.
3. Box, G. E. P., Pierce, D. A., 1970: Distribution of the Residual Autocorrelations in Autoregressive Integrated Moving Average Time Series Models. J. Amer. Statist. Assoc., 65, p. 1509–1526.
4. Brockwell, P. J., Davis, R. A., 2009: Time Series: Theory and Methods. New York: Springer Series in Statistics, Springer-Verlag, Second Edition, 586 p.
5. Buldygin, V. V., Kozachenko, Yu. V., 2000: Metric Characterization of Random Variables and Random Processes. Amer. Math. Soc., Providence, RI, 257 p.
6. Chen, W. W., Deo, R. S., 2004: A Generalized Portmanteau Goodness-of-Fit Test for Time Series Models. Econometric Theory, 20(2), p. 382–416.
7. Chen, S. X., Härdle, W., Li, M., 2003: An Empirical Likelihood Goodness-of-Fit Test for Time Series. J. R. Statist. Soc. B, 65, Part 3, p. 663–678.
8. Escanciano, J. C., 2007: Model Checks Using Residual Marked Empirical Processes. Statist. Sinica, 17, p. 115–138.
9. Hosking, J. R. M., 1980: The Multivariate Portmanteau Statistic. J. Amer. Statist. Assoc., 75, p. 602–608.
10. Hosking, J. R. M., 1981: Lagrange-Multiplier Tests of Multivariate Time-Series Models. Journal of the Royal Statistical Society, Series B (Methodological), 43(2), p. 219–230.
11. Kozachenko, Yu. V., Fedoryanych, T. V., 2004: A Criterion for Testing Hypotheses about the Covariance Function of a Gaussian Stationary Process. Theory of Probability and Mathematical Statistics, 69, p. 85–94.
12. Kozachenko, Yu. V., Stadnik, A. I., 1991: Pre-Gaussian Processes and Convergence in C(T) of Estimators of Covariance Functions. Theory of Probability and Mathematical Statistics, 45, p. 51–57.
13. Kozachenko, Yu. V., Stus, O. V., 1998: Square-Gaussian Random Processes and Estimators of Covariance Functions. Math. Communications, 3(1), p. 83–94.
14. Kozachenko, Yu. V., Yakovenko, T. O., 2010: Criterion for Testing the Hypothesis about the Covariance Function of a Stationary Gaussian Random Sequence. Bulletin of Uzhgorod University, Series: Mathematics & Informatics, 20, p. 39–43. (In Ukrainian)
15. Ljung, G. M., Box, G. E. P., 1978: On a Measure of Lack of Fit in Time Series Models. Biometrika, 65(2), p. 297–303.
16. McLeod, A. I., Li, W. K., 1983: Diagnostic Checking ARMA Time Series Models Using Squared-Residual Autocorrelations. J. Time Series Anal., 4, p. 269–273.
17. Mahdi, E., McLeod, A. I., 2012: Improved Multivariate Portmanteau Test. J. Time Series Anal., 33(2), p. 211–222.
18. Vasylyk, O. I., Kozachenko, Yu. V., Yakovenko, T. O., 2009: Simulation of Stationary Random Sequences. Bulletin of the University of Kyiv, Series: Physics & Mathematics, 1, p. 7–10. (In Ukrainian)


SOME GOODNESS-OF-FIT CRITERIA FOR RANDOM SEQUENCES

Yuriy Kozachenko, Tetiana Ianevych

Summary. Results from the theory of square Gaussian random variables are applied to the construction of goodness-of-fit criteria for random sequences (time series). Two cases of such criteria are considered. The first criterion is intended for testing the hypothesis on the mean and covariance function of a univariate non-centered sequence; the second criterion is intended for testing the hypothesis on the covariance function of a multivariate centered sequence. Simulation results illustrate the behavior of these criteria in some particular cases.

Keywords: goodness-of-fit criterion, multivariate random sequence, time series, square Gaussian random variable.