# SUP-TESTS FOR LINEARITY IN A GENERAL NONLINEAR AR(1) MODEL


INSTITUT NATIONAL DE LA STATISTIQUE ET DES ETUDES ECONOMIQUES

Série des Documents de Travail du CREST

(Centre de Recherche en Economie et Statistique)

n° 2009-16

Sup-Tests for Linearity in a General Nonlinear AR(1) Model

C. FRANCQ¹

L. HORVATH²

J.-M. ZAKOIAN³

Working papers do not reflect the position of INSEE but only the views of the authors.

¹ Université Lille III, GREMARS-EQUIPPE, BP 60149, 59653 Villeneuve d'Ascq Cedex, France. Email: christian.francq@univ-lille3.fr

² University of Utah, Department of Mathematics, 155 South 1400 East, Salt Lake City, UT 84112-0090, USA. Email: horvath@math.utah.edu

³ GREMARS-EQUIPPE and CREST, 15 boulevard Gabriel Péri, 92245 Malakoff Cedex, France. Email: zakoian@ensae.fr


Sup-tests for linearity in a general nonlinear AR(1) model∗

Christian Francq†, Lajos Horvath‡ and Jean-Michel Zakoïan§

Abstract

We consider linearity testing in a general class of nonlinear time series models of order one, involving a nonnegative nuisance parameter which (i) is not identified under the null hypothesis and (ii) gives the linear model when equal to zero. This paper studies the asymptotic distribution of the Likelihood Ratio test and of asymptotically equivalent supremum tests. The asymptotic distribution is described as a functional of chi-square processes and is obtained without imposing a positive lower bound for the nuisance parameter. The finite-sample properties of the sup-tests are studied by simulations.


∗ Research partially supported by NSF grant DMS 0604670 and grant RGC-HKUST 6428/OGH.

† Université Lille III, GREMARS-EQUIPPE, BP 60149, 59653 Villeneuve d'Ascq Cedex, France. E-mail: christian.francq@univ-lille3.fr

‡ University of Utah, Department of Mathematics, 155 South 1400 East, Salt Lake City, UT 84112-0090, USA. E-mail: horvath@math.utah.edu

§ GREMARS-EQUIPPE and CREST, 15 boulevard Gabriel Péri, 92245 Malakoff Cedex, France. E-mail: zakoian@ensae.fr


## 1 Introduction

Building nonlinear time series models is, in general, a difficult task requiring a large amount of care. As can be seen from recent studies comparing the forecast accuracy of linear AR models and nonlinear models on real macroeconomic time series, a careful specification of the nonlinear models is required to produce forecasts that improve upon linear forecasts (see Stock and Watson (1999), Teräsvirta, van Dijk and Medeiros (2004)). In general, nonlinear models (such as the Threshold AR (TAR), the Smooth Transition Autoregressive (STAR) regime-switching, or bilinear models) contain the linear one as a particular case, but often some of the parameters are not identified when linearity holds. This is, for example, the case of the threshold value in the TAR framework. This identifiability problem results in parameter inconsistency and, if the series under consideration is close to being linear, the nonlinear model is bound to produce forecasts that are unreliable compared to linear ones. It is therefore essential to test for linearity before fitting any particular nonlinear model.

The aim of this paper is to consider linearity testing in a relatively general, first-order nonlinear framework. Given the unlimited number of nonlinear models, it is not possible to nest all of them in a general class. Many of them, however, can be seen as particular cases of a nonlinear AR(1) model of the form

$$Y_t = \mu_0 + \{a_0 + b_0 H(\gamma_0, Y_{t-1})\}\,Y_{t-1} + \epsilon_t, \qquad \epsilon_t \sim \mathrm{IID}(0,\sigma^2), \tag{1.1}$$

for some function H defined on Γ × ℝ, for some set Γ ⊂ ℝ containing 0, and such that H(0,·) = 0. The specification of the function H may include more than one parameter, but we only need to single out the parameter γ0 controlling the nullity of the function H. Examples and precise assumptions will be given in the next sections. We are interested in testing the linearity hypothesis b0 = 0. Problems of this nature, where a nuisance parameter γ0 is present only under the alternative hypothesis, often occur in econometric models and have been considered by many authors; see, among others, Davies (1977, 1987), King and Shively (1993), Andrews and Ploberger (1995), Hansen (1996), and Stinchcombe and White (1998).

The contribution of this paper is to derive the asymptotic distribution of supremum tests, namely the Likelihood Ratio (LR) test and asymptotically equivalent sup-Wald and Lagrange Multiplier (LM) tests, without bounding the nuisance parameter away from zero.


The difficulty is that, when γ0 approaches zero, the nonlinear term in (1.1) vanishes and the Fisher information matrix becomes singular. In the literature, this problem is typically circumvented by imposing a lower bound for the nuisance parameter. We avoid this restriction. To our knowledge, this is the first paper deriving the asymptotic distribution of a supremum test with a nuisance-parameter range implying a case of noninvertible information matrix.

The paper is organized in the following way: Section 2 discusses the model and gives stationarity conditions. Section 3 derives the asymptotic properties of the Least Squares Estimator (LSE) of (µ0, a0, b0) under the null assumption of linearity, i.e. b0 = 0. Section 4 defines the LR, Wald and LM-type tests based on the LSE and derives their asymptotic null distribution. Section 4 also presents a Monte Carlo study, in which the supremum tests enjoy good size and power properties; this study compares the powers of the sup-tests with those of tests based on expansions of the function H(·,y), which are often used in practice. The Appendix provides proofs of the results given in the paper.

## 2 Examples and stationarity conditions

Before turning to the framework of this paper, which leaves the function H unspecified, it is of interest to present special cases of (1.1) that have been popular in forecasting applications. See Tong (1990) and Teräsvirta, van Dijk and Medeiros (2004) for a more complete discussion.

One example is the exponential autoregressive (EXPAR) model introduced by Haggan and Ozaki (1981) which, after reparameterization, is obtained for

$$H(\gamma_0, y) = 1 - e^{-\gamma_0 y^2}. \tag{2.1}$$

The parameter γ0 is often referred to as the slope parameter. Model (1.1) includes other smooth transition models, such as the Logistic Smooth Transition AutoRegressive (LSTAR) model, introduced in the time series literature by Luukkonen, Saikkonen and Teräsvirta (1988). In this latter model we have H(γ0,y) = H(γ0,c,y) = (1 + e^{−γ0(y−c)})^{−1} − 1/2, where c is a location coefficient allowing for asymmetries in the conditional mean of Yt. When c = 0 the model is simply

$$H(\gamma_0, y) = \frac{1}{1 + e^{-\gamma_0 y}} - \frac{1}{2}. \tag{2.2}$$


Letting the slope parameter γ0 → ∞, we obtain the two-regime Self-Exciting Threshold AutoRegressive (SETAR) model of Tong and Lim (1980). The SETAR model will not be covered by the results of this paper, however, because smoothness assumptions on the function H will be required.
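Model (1.1) is straightforward to simulate for a given transition function. The following minimal sketch (helper names such as `simulate_ar1` are ours, not from the paper) generates a path from the EXPAR specification (2.1), with parameters chosen so that sup_y |a0 + b0 H(γ0, y)| ≤ |a0| + |b0| < 1 and the path is stable:

```python
import numpy as np

def H_expar(gamma, y):
    """EXPAR transition (2.1): H(gamma, y) = 1 - exp(-gamma * y^2)."""
    return 1.0 - np.exp(-gamma * y**2)

def simulate_ar1(n, mu0=0.0, a0=0.5, b0=0.3, gamma0=1.0, sigma=1.0,
                 H=H_expar, burnin=200, seed=None):
    """Draw n observations from Y_t = mu0 + {a0 + b0 H(gamma0, Y_{t-1})} Y_{t-1} + eps_t."""
    rng = np.random.default_rng(seed)
    path = np.empty(n + burnin)
    y = 0.0
    for t in range(n + burnin):
        y = mu0 + (a0 + b0 * H(gamma0, y)) * y + sigma * rng.standard_normal()
        path[t] = y
    return path[burnin:]  # discard the burn-in so the path is close to stationarity

Y = simulate_ar1(500, seed=0)
```

Setting b0 = 0 in this sketch gives the linear AR(1) data-generating process of the null hypothesis.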

The existence of strictly stationary solutions to (1.1) can be investigated using Markov chain theory. The following result is an immediate consequence of Tjøstheim (1990, Theorem 4.1 and Lemma 6.1).

**Theorem 2.1** Suppose that ǫt has a positive density function over the real line. Then, if there exist r, K > 0 such that

$$\sup_{y} |a_0 + b_0 H(\gamma_0, y)| < K, \qquad \sup_{|y|>r} |a_0 + b_0 H(\gamma_0, y)| < 1,$$

there exists a strictly stationary and geometrically ergodic solution to model (1.1). Moreover, for any k > 1, if E|ǫt|^k < ∞ then E|Yt|^k < ∞.

For example, the EXPAR model admits a strictly stationary solution whenever |a0 + b0| < 1 and γ0 ≥ 0. For other models, such as the LSTAR, γ0 > 0 is not required for stationarity but is a natural constraint for interpretation and identifiability (see e.g. Teräsvirta et al. (2004)). For this reason we will take throughout a compact nuisance parameter space of the form Γ = [0, γ̄]. Now we turn to the LS estimation.

## 3 Asymptotic properties of the LSE of µ0, a0 and b0 under the linear model

Let Y1,...,Yn be observations of a non-anticipative strictly stationary solution of (1.1). Recall that the function H is known a priori. Throughout we assume that

A0: H(0,·) = 0 and H(γ,·) is not identically 0, for any γ > 0,

so that the standard AR(1) model is obtained for γ0 = 0 but also for b0 = 0. Thus it is not restrictive to assume γ0 > 0 and to interpret γ0 as a nuisance parameter, which is not present when b0 = 0. Notice also that b0 cannot be identified when γ0 = 0. For a given value γ of γ0, the LSE of θ0 = (µ0, a0, b0)′ coincides with the Gaussian quasi-maximum likelihood estimator and is defined as any measurable solution of

$$\hat\theta := (\hat\mu_\gamma, \hat a_\gamma, \hat b_\gamma) = \arg\max_{\theta\in\Theta} L_n(\theta) = \arg\min_{\theta\in\Theta} Q_n(\theta),$$


where

$$L_n(\theta) = -\frac{n}{2}\log 2\pi\sigma^2 - \frac{n}{2\sigma^2}\,Q_n(\theta), \qquad Q_n(\theta) = n^{-1}\sum_{t=2}^{n} \epsilon_t^2(\theta),$$

with

$$\epsilon_t(\theta) = Y_t - \mu - \{a + bH(\gamma, Y_{t-1})\}\,Y_{t-1}.$$

Assuming γ > 0, the LSE of (µ0, a0, b0)′ is explicitly given, when Jn(γ) is nonsingular, by

$$\hat\delta_\gamma := \begin{pmatrix}\hat\mu_\gamma\\ \hat a_\gamma\\ \hat b_\gamma\end{pmatrix} = J_n^{-1}(\gamma)\begin{pmatrix}n^{-1}\sum_{t=2}^n Y_t\\ n^{-1}\sum_{t=2}^n Y_tY_{t-1}\\ n^{-1}\sum_{t=2}^n Y_tY_{t-1}H(\gamma,Y_{t-1})\end{pmatrix}, \tag{3.1}$$

where

$$J_n(\gamma) = \begin{pmatrix}1 & U_{n,1,0} & U_{n,1,1}(\gamma)\\ U_{n,1,0} & U_{n,2,0} & U_{n,2,1}(\gamma)\\ U_{n,1,1}(\gamma) & U_{n,2,1}(\gamma) & U_{n,2,2}(\gamma)\end{pmatrix}, \qquad U_{n,i,j}(\gamma) = n^{-1}\sum_{t=2}^n Y_{t-1}^i H^j(\gamma, Y_{t-1}), \tag{3.2}$$

for i = 1,2 and j = 0,1,2, with the convention 0⁰ = 1. Jn(γ) is singular for γ = 0, but it can be shown that, under appropriate moment assumptions and Assumption A4 below, the matrix Jn(γ) is almost surely invertible for γ > 0, at least for large n. See Chesher (1984), Lee and Chesher (1986), and Rotnitzky, Cox, Bottai and Robins (2000) for cases where the information matrix is singular for any value of the nuisance parameter.

Under the constraint b0 = 0, the restricted LSE of (µ0, a0) is simply

$$\tilde\delta := \begin{pmatrix}\tilde\mu\\ \tilde a\end{pmatrix} = \tilde J_n^{-1}\begin{pmatrix}n^{-1}\sum_{t=2}^n Y_t\\ n^{-1}\sum_{t=2}^n Y_tY_{t-1}\end{pmatrix}, \qquad \tilde J_n = \begin{pmatrix}1 & U_{n,1,0}\\ U_{n,1,0} & U_{n,2,0}\end{pmatrix}. \tag{3.3}$$
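For a fixed γ > 0, the estimators (3.1) and (3.3) are simply ordinary least squares of Yt on the regressors (1, Y_{t−1}, Y_{t−1}H(γ, Y_{t−1})) and (1, Y_{t−1}) respectively, so any linear least-squares routine solves the normal equations. A minimal sketch (function names are ours, not from the paper):

```python
import numpy as np

def lse_full(Y, gamma, H):
    """Unrestricted LSE (mu, a, b) of (3.1) for fixed gamma, plus hat sigma^2_gamma."""
    y_t, y_lag = Y[1:], Y[:-1]
    X = np.column_stack([np.ones_like(y_lag), y_lag, y_lag * H(gamma, y_lag)])
    delta, *_ = np.linalg.lstsq(X, y_t, rcond=None)  # solves the normal equations J_n(gamma) delta = moments
    sigma2 = np.mean((y_t - X @ delta) ** 2)         # hat sigma^2_gamma of (3.5)
    return delta, sigma2

def lse_restricted(Y):
    """Restricted LSE (mu, a) of (3.3) under b0 = 0, plus tilde sigma^2."""
    y_t, y_lag = Y[1:], Y[:-1]
    X = np.column_stack([np.ones_like(y_lag), y_lag])
    delta, *_ = np.linalg.lstsq(X, y_t, rcond=None)
    sigma2 = np.mean((y_t - X @ delta) ** 2)         # tilde sigma^2 of (3.5)
    return delta, sigma2
```

Since the restricted regressors are a subset of the unrestricted ones, σ̂²γ ≤ σ̃² always holds in finite samples, which is what the test statistics of Section 4 exploit.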

We will now derive asymptotic properties of the LS estimator under the linear model. We assume that H admits second-order partial derivatives with respect to γ, and we make the following assumptions on the first and second partial derivatives H1(γ,y) = ∂H(γ,y)/∂γ and H2(γ,y) = ∂²H(γ,y)/∂γ²:

A1: $|H_1(\gamma,y)| \le K\,(|y|^{\alpha_1}+1)$;

A2: $|H_2(\gamma,y)| \le K\,(|y|^{\alpha_2}+1)$;

A3: $|H_2(\gamma,y) - H_2(\gamma',y)| \le K\,|\gamma-\gamma'|^{\alpha}\,(|y|^{\alpha_3}+1)$, for some $1/2 < \alpha \le 1$;

where α1 ≥ 0, α2 ≥ 0, α3 ≥ 0 and K are constants. In the sequel we use the notation K as a generic constant whose value can change. Conditions A1 and A2 are needed for the existence of the limit process in Theorem 3.1 below. The proofs are based on Taylor expansions of H(·,y), and A3 is used to control the remainder terms.

Elementary calculations show that A1-A3 hold for the EXPAR model with α1 = 2, α2 = 4, α3 = 6 and α = 1. Also, the LSTAR model satisfies A1-A3 with α1 = 1, α2 = 2, α3 = 3 and α = 1. Similarly, for any constant c and any β > 0, the generalized EXPAR

$$H(\gamma, y) = 1 - e^{-\gamma|y-c|^{\beta}}$$

and the generalized LSTAR

$$H(\gamma, y) = \frac{1}{1 + e^{-\gamma|y-c|^{\beta}\,\mathrm{sign}(y)}} - \frac{1}{2}, \qquad \mathrm{sign}(y) = \begin{cases}1 & y>0,\\ 0 & y=0,\\ -1 & y<0,\end{cases}$$

satisfy A1-A3. Another example is the normal STAR model of Chan and Tong (1986), defined by H(γ,y) = Φ{γ(y − c)} − 1/2, where Φ(·) is the N(0,1) cumulative distribution function. In all these models, other nuisance parameters c and/or β may be present, but they vanish under the linearity hypothesis b0 = 0. The results obtained in the sequel will hold for any fixed values of c and β. Our method can be extended to the case where the other nuisance parameters c and β are also estimated from the sample. For the sake of simplicity, we assume that the only unknown parameter in H(γ,y) is γ.

We have

$$\begin{pmatrix}\hat\mu_\gamma-\mu_0\\ \hat a_\gamma-a_0\\ \hat b_\gamma-b_0\end{pmatrix} = J_n^{-1}(\gamma)\begin{pmatrix}S_{n,0,0}\\ S_{n,1,0}\\ S_{n,1,1}(\gamma)\end{pmatrix}, \qquad \begin{pmatrix}\tilde\mu-\mu_0\\ \tilde a-a_0\end{pmatrix} = \tilde J_n^{-1}\begin{pmatrix}S_{n,0,0}\\ S_{n,1,0}\end{pmatrix}, \tag{3.4}$$

where

$$S_{n,i,j}(\gamma) = n^{-1}\sum_{t=2}^n \epsilon_t\, Y_{t-1}^i\, H^j(\gamma, Y_{t-1}).$$

We will also need to consider the sums

$$T_{n,i}(\gamma) = n^{-1}\sum_{t=2}^n \epsilon_t\, Y_{t-1}\, H_i(\gamma, Y_{t-1}), \qquad i = 1,2.$$

Our first result establishes the weak convergence of the processes {Sn,i,j(γ), Tn,i(γ), γ > 0}. For any γ̄ > 0, the symbol ⇒ denotes weak convergence in the Skorokhod space D[0, γ̄]. The existence of the variances of the Sn,i,j(γ) and Tn,i(γ) requires E|Y0|^κ < ∞ with κ = 2 + 2max(α1, α2), i.e. E|ǫ0|^κ < ∞ under H0. For testing against EXPAR we need Eǫ0^{10} < ∞, and Eǫ0^6 < ∞ in the LSTAR model. However, the tightness condition, which is used in the proof of the following theorem, requires a stronger moment condition.

**Theorem 3.1** Let b0 = 0 and suppose E|Y0|^κ < ∞ with κ = 2 + 2max(α1, α2, α3). Then, under A0-A3, for any γ̄ > 0,

$$\frac{\sqrt{n}}{\sigma}\,\big(S_{n,0,0},\; S_{n,1,0},\; S_{n,1,1}(\gamma),\; T_{n,1}(\gamma),\; T_{n,2}(\gamma)\big) \;\overset{D[0,\bar\gamma]}{\Longrightarrow}\; \left(W(1),\; \int_{\mathbb R} x\,dW(F(x)),\; \int_{\mathbb R} xH(\gamma,x)\,dW(F(x)),\; \int_{\mathbb R} xH_1(\gamma,x)\,dW(F(x)),\; \int_{\mathbb R} xH_2(\gamma,x)\,dW(F(x))\right),$$

where W is a standard Brownian motion and F(x) = P(Y0 ≤ x).

For the next result we need the following assumption.

A4: For any constants K1, K2 and any 0 ≤ γ ≤ γ̄,

$$P[Y_0 = K_1 Y_0 H(\gamma, Y_0) + K_2] < 1 \qquad\text{and}\qquad P[Y_0 = K_1 Y_0 H_1(0, Y_0) + K_2] < 1.$$

We can now state the following result, which is proved in the Appendix. By convention, for γ = 0 we set µ̂γ = µ̃, âγ = ã, γb̂γ = 0 and b̂γ Y_{t−1} H(γ,·) = 0.

**Theorem 3.2** Under the assumptions of Theorem 3.1 and A4 we have

$$\sup_{0\le\gamma\le\bar\gamma}|\hat\mu_\gamma-\mu_0| = O_P(n^{-1/2}), \qquad \sup_{0\le\gamma\le\bar\gamma}|\hat\mu_\gamma-\tilde\mu| = O_P(n^{-1/2}),$$

$$\sup_{0\le\gamma\le\bar\gamma}|\hat a_\gamma-a_0| = O_P(n^{-1/2}), \qquad \sup_{0\le\gamma\le\bar\gamma}|\hat a_\gamma-\tilde a| = O_P(n^{-1/2}), \qquad \sup_{0\le\gamma\le\bar\gamma}\gamma|\hat b_\gamma| = O_P(n^{-1/2}).$$

Now we turn to asymptotic properties of the constrained and unconstrained LS estimators of σ², which are respectively defined by

$$\tilde\sigma^2 = \frac1n\sum_{t=2}^n (Y_t-\tilde\mu-\tilde a Y_{t-1})^2, \qquad \hat\sigma^2_\gamma = \frac1n\sum_{t=2}^n \{Y_t-\hat\mu_\gamma-\hat a_\gamma Y_{t-1}-\hat b_\gamma Y_{t-1}H(\gamma,Y_{t-1})\}^2. \tag{3.5}$$

The proof of the following result is in the Appendix.

**Theorem 3.3** Under the assumptions of Theorem 3.2 we have

$$\sup_{0\le\gamma\le\bar\gamma}\left|\hat\sigma^2_\gamma - \sigma^2\right| = o_P(1).$$

## 4 Linearity testing

Given that model (1.1) involves four parameters, a natural idea would be to estimate the parameters (µ0, a0, b0, γ0) of the unconstrained model by QMLE. The asymptotic properties of this estimator could be derived when no identifiability problem arises, that is, when b0γ0 ≠ 0. The constraint b0 ≠ 0 is, however, an important restriction. When b0 = 0 the parameter γ0 is not identified, so we do not know the behaviour of the QMLE when the data generating process is an AR(1). Consequently, the test of

$$H_0: b_0 = 0 \qquad \text{against} \qquad H_1: b_0 \neq 0$$

is not standard. We first consider a strategy based on setting an arbitrary value for γ. The testing problem can then be easily solved by a standard test, using for example the Wald, Lagrange Multiplier (LM) or Likelihood Ratio (LR) principle.

### 4.1 Setting an arbitrary value of γ

Fixing an arbitrary value of γ for the nuisance parameter, a convenient form for the Wald-type, LM-type and LR-type statistics is given by

$$W_n(\gamma) = n\,\frac{\tilde\sigma^2-\hat\sigma^2_\gamma}{\hat\sigma^2_\gamma}, \qquad LM_n(\gamma) = n\,\frac{\tilde\sigma^2-\hat\sigma^2_\gamma}{\tilde\sigma^2}, \qquad LR_n(\gamma) = n\log\frac{\tilde\sigma^2}{\hat\sigma^2_\gamma}. \tag{4.1}$$

The form of these statistics is obtained under normal errors, but we do not make this assumption in the sequel. The expression for the LR statistic is the standard one. For the Wald statistic, the standard expression is

$$W_n(\gamma) = n\,\frac{\hat b_\gamma^2}{\hat\sigma^2_{\hat b_\gamma}}, \qquad \text{where } \hat\sigma^2_{\hat b_\gamma} = \hat\sigma^2_\gamma\, J_n^{-1}(\gamma)(3,3)$$

and $J_n^{-1}(\gamma)(3,3)$ denotes the (3,3) element of the matrix $J_n^{-1}(\gamma)$. The form given in (4.1) for Wn(γ), and similarly for LMn(γ), relies on the linearity of the model when γ is fixed; see for example Godfrey (1988) or Gouriéroux and Monfort (1995). Notice that Wn(0) = LMn(0) = LRn(0) = 0 because σ̃² and σ̂²γ, as defined in (3.5), are equal when γ = 0.

For every γ > 0, the three statistics Wn(γ), LMn(γ) and LRn(γ) are asymptotically χ²(1)-distributed under H0. Note that the tests based on these statistics are in general consistent, even for alternatives such that γ0 ≠ γ. However, this procedure may lack power for alternatives where γ0 is far from γ. In other words, the test statistics are sensitive to γ, so this coefficient cannot be selected in a completely arbitrary way if it is not known. On the other hand, when γ0 is unknown, its LS estimator γ̂ can be found by minimizing σ̂²γ over Γ = [0, γ̄]. A plug-in approach thus seems natural, but the asymptotic null distribution of Wn(γ̂), LRn(γ̂) and LMn(γ̂) is no longer χ²(1).
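For a fixed γ, the three statistics in (4.1) only require the two residual variances of (3.5). A minimal sketch (helper names are ours, not from the paper):

```python
import numpy as np

def residual_variance(y_t, columns):
    """Mean squared residual from an OLS regression of y_t on the given columns."""
    X = np.column_stack(columns)
    beta, *_ = np.linalg.lstsq(X, y_t, rcond=None)
    return np.mean((y_t - X @ beta) ** 2)

def fixed_gamma_statistics(Y, gamma, H):
    """Wald, LM and LR statistics of (4.1) for a fixed gamma > 0."""
    y_t, y_lag = Y[1:], Y[:-1]
    ones = np.ones_like(y_lag)
    s2_tilde = residual_variance(y_t, [ones, y_lag])                         # restricted fit
    s2_hat = residual_variance(y_t, [ones, y_lag, y_lag * H(gamma, y_lag)])  # unrestricted fit
    n = len(Y)
    W = n * (s2_tilde - s2_hat) / s2_hat
    LM = n * (s2_tilde - s2_hat) / s2_tilde
    LR = n * np.log(s2_tilde / s2_hat)
    return W, LM, LR
```

Since σ̂²γ ≤ σ̃², the familiar finite-sample ordering LMn(γ) ≤ LRn(γ) ≤ Wn(γ) holds; under H0 each statistic is asymptotically χ²(1).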

### 4.2 Using supremum statistics

The sup-LR statistic is defined by

$$LR_n = \sup_{\gamma\in\Gamma} LR_n(\gamma) = n\log\frac{\tilde\sigma^2}{\hat\sigma^2}, \qquad \text{where } \hat\sigma^2 = \inf_{\gamma\in\Gamma}\hat\sigma^2_\gamma = Q_n(\hat\theta).$$

Sup-Wald and sup-LM statistics can similarly be defined as

$$W_n = \sup_{\gamma\in\Gamma} W_n(\gamma) = n\,\frac{\tilde\sigma^2-\hat\sigma^2}{\hat\sigma^2}, \qquad LM_n = \sup_{\gamma\in\Gamma} LM_n(\gamma) = n\,\frac{\tilde\sigma^2-\hat\sigma^2}{\tilde\sigma^2}.$$

Note that the sup-LR statistic is actually the conventional LR statistic, i.e. LRn = LRn(γ̂), where γ̂ = arg inf_{γ∈Γ} σ̂²γ is the LS estimator of γ0. In the next theorem we will obtain the asymptotic null distribution of the LR, LM and Wald statistics. Given that Wn ≥ Wn(γ) for any n, the same inequality holds asymptotically for any γ, and the asymptotic distribution of Wn is expected to differ from the χ²(1). Figure 1 reveals an important difference between the two distributions.

The sup-Wald statistic is also the conventional Wald statistic. This is less straightforward than for the LR statistic because the model is no longer linear when γ is not fixed, so it is not obvious that a form equivalent to (4.1) holds for the standard Wald statistic. However, we have

$$W_n(\hat\gamma) = n\,\frac{\hat b_{\hat\gamma}^2}{\hat\sigma^2_{\hat b_{\hat\gamma}}} = n\,\frac{\tilde\sigma^2-\hat\sigma^2_{\hat\gamma}}{\hat\sigma^2_{\hat\gamma}} = n\,\frac{\tilde\sigma^2-\hat\sigma^2}{\hat\sigma^2} = W_n,$$

noting that σ̂² = σ̂²_γ̂. The same remark holds for the LM statistic.

The main result of this paper is the following, providing the asymptotic null distribution of the supremum test statistics.

**Theorem 4.1** Suppose that the conditions of Theorem 3.3, in particular the null hypothesis H0, hold. Then, for any γ̄ > 0,

$$W_n = \sup_{\gamma\in[0,\bar\gamma]} W_n(\gamma) \Longrightarrow W := \sup_{\gamma\in(0,\bar\gamma]} W(\gamma),$$

Figure 1: (Distribution of W and of Wn and Wn(0.5) under H0 for n = 100.) For the EXPAR model: kernel density estimator of the distribution of Wn(0.5) (dotted line) and limiting distribution of Wn(0.5) (the χ²(1) distribution, thin full line); kernel density estimator of the distribution of Wn = sup_{γ∈[0,100]} Wn(γ) (dashed line); and kernel density estimator of the distribution of W, i.e. the limiting distribution of Wn (thick full line). The density estimators are obtained by computing the statistics on N = 5,000 independent replications of N(0,1) simulated samples of length n = 100 for the first two kernel density estimators, and of length 500 for the last one.

where, for γ > 0,

$$W(\gamma) = \frac{\{V(0)Z(\gamma) - V(\gamma)Z(0) - \Delta(\gamma)\sigma W(1)\}^2}{\sigma^2 D(\gamma)\,\mathrm{Var}(Y_0)},$$

with

$$Z(\gamma) = \int_{\mathbb R} x\{H(\gamma,x)+1\}\,dW(F(x)), \qquad V(\gamma) = \mathrm{Cov}\big(Y_0,\, Y_0\{H(\gamma,Y_0)+1\}\big),$$

$$\Delta(\gamma) = EY_0^2\,EY_0H(\gamma,Y_0) - EY_0\,EY_0^2H(\gamma,Y_0),$$

$$D(\gamma) = \mathrm{Var}(Y_0)\,\mathrm{Var}(Y_0H(\gamma,Y_0)) - \{\mathrm{Cov}(Y_0,\, Y_0H(\gamma,Y_0))\}^2.$$

Moreover,

$$\sup_{\gamma\in[0,\bar\gamma]} LM_n(\gamma) \Longrightarrow \sup_{\gamma\in(0,\bar\gamma]} W(\gamma), \qquad \sup_{\gamma\in[0,\bar\gamma]} LR_n(\gamma) \Longrightarrow \sup_{\gamma\in(0,\bar\gamma]} W(\gamma).$$

Contrary to the standard situation (γ fixed), where the asymptotic distribution is a χ²(1) whatever the model, the law of W depends on the model through the function H.

Notice that W(γ) is not defined for γ = 0 because D(0) = 0. However, the limiting distribution of W(γ) as γ → 0 is nondegenerate and is that of a χ²(1). Lemma A.7 below shows that we can define W(0) as

$$W(0) = \lim_{\gamma\to0} W(\gamma),$$

where the limit exists with probability one. It is clear that the law of W(0) is not the limiting distribution of Wn(0), which is always equal to zero. In other words,

$$\lim_{n}\,\lim_{\gamma\to0} W_n(\gamma) = 0 \quad \text{a.s.}, \qquad \text{but} \qquad \lim_{\gamma\to0}\,\lim_{n} W_n(\gamma) \sim \chi^2(1).$$

It is important to notice that we do not require γ to be bounded away from zero: the supremum can be taken over all possible values of the nuisance parameter, instead of restricting γ to a compact subset excluding 0, as is done when testing for structural change (see Andrews, 1993). Note that the framework of the present paper is quite different from that of Andrews (1993). When testing for a structural break, it seems necessary to bound the nuisance parameter away from zero: the asymptotic distribution of the test statistic indexed by the nuisance parameter π, say, is a function of a Brownian bridge, so that, when taking the supremum over the full range of values of π, the statistic diverges under the null hypothesis (see Andrews, 1993, Corollary 1). In our setup, the asymptotic distribution of the test statistics indexed by γ is a process whose supremum is well-behaved for all possible values of the nuisance parameter belonging to a bounded set.¹

This theorem can be adapted to deal, more generally, with statistics of the form g({Wn(γ), γ ∈ [0, γ̄]}) for arbitrary functions g that are continuous with respect to the uniform metric (and likewise for LMn(·) and LRn(·)). The use of a function g that differs from the sup function may depend on the alternatives of interest; see Andrews and Ploberger (1994) for a discussion of different statistics of this form.

### 4.3 Model without intercept

When the intercept is not present in Model (1.1), i.e. when

$$Y_t = \{a_0 + b_0 H(\gamma_0, Y_{t-1})\}\,Y_{t-1} + \epsilon_t, \qquad \epsilon_t \sim \mathrm{IID}(0,\sigma^2), \tag{4.2}$$

the results are slightly different. The test statistics are still of the form (4.1), but with

$$\tilde\sigma^2 = \min_a \frac1n\sum_{t=2}^n (Y_t - aY_{t-1})^2, \qquad \hat\sigma^2_\gamma = \min_{a,b} \frac1n\sum_{t=2}^n \{Y_t - aY_{t-1} - bY_{t-1}H(\gamma, Y_{t-1})\}^2.$$

¹We thank a referee for pointing out to us the difference between the two kinds of testing problems.

We state the results without proof, keeping the previous notation with obvious adaptations.

**Theorem 4.2** Suppose that H0: b0 = 0 holds in Model (4.2), and let the assumptions of Theorem 3.1 be satisfied. Then the results of Theorem 4.1 continue to hold with

$$W(\gamma) = \frac{\{V(0)Z(\gamma) - V(\gamma)Z(0)\}^2}{\sigma^2 D(\gamma)\,EY_0^2},$$

with V(γ) and Z(γ) as in Theorem 4.1, and $D(\gamma) = EY_0^2\,EY_0^2H^2(\gamma,Y_0) - \{EY_0^2H(\gamma,Y_0)\}^2$.

It can be noted that the asymptotic distribution depends on constants and on {Z(γ), γ ≥ 0}, which is a zero-mean Gaussian process with covariance kernel

$$K(\gamma,\gamma') = EZ(\gamma)Z(\gamma') = \sigma^2 E\big[Y_0^2\{H(\gamma,Y_0)+1\}\{H(\gamma',Y_0)+1\}\big].$$

It is interesting to see that in general, unless EY0H(γ,Y0) = 0, the distribution of the process {W(γ), γ > 0} is not simply obtained from that of Theorem 4.1 with µ0 replaced by 0.

### 4.4 Implementation

We now focus on the practical implementation of the tests of this paper. For simplicity, we present the results for Model (4.2) without intercept. Some of the results of this section are not new, but they are given for the reader's convenience.

#### 4.4.1 Computation of the test statistics

We focus on the LM statistic, which is very easy to compute. Following Godfrey (1988), the LMn(γ) test can be implemented as follows:

1) fit an AR(1), compute the residuals ǫ̃t and the residual sum of squares RSS = nσ̃²;

2) regress ǫ̃t linearly on Y_{t−1} and Y_{t−1}H(γ, Y_{t−1}), and compute the residual sum of squares RSSγ and the uncentered coefficient of determination R²γ of the regression.

Noting that the residuals of the second regression are also the residuals of the regression of Yt on Y_{t−1} and Y_{t−1}H(γ, Y_{t−1}), we have RSSγ = nσ̂²γ, which gives LMn(γ) = nR²γ = n(RSS − RSSγ)/RSS.

For the computation of the LMn statistic we can replace step 2) by

2') compute the residual sum of squares RSSγ̂ = nσ̂² of the nonlinear regression model ǫ̃t = cY_{t−1} + bY_{t−1}H(γ, Y_{t−1}) + ǫt.

We have LMn = n(RSS − RSSγ̂)/RSS.
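The two-step recipe above translates directly into code. A minimal sketch for the intercept-free model (4.2), with a fixed γ (function names are ours, not from the paper):

```python
import numpy as np

def lm_statistic(Y, gamma, H):
    """LM_n(gamma) = n * (RSS - RSS_gamma) / RSS via Godfrey's two regressions."""
    y_t, y_lag = Y[1:], Y[:-1]
    # step 1: fit an AR(1) without intercept; residuals and RSS = n * tilde sigma^2
    a_hat = (y_lag @ y_t) / (y_lag @ y_lag)
    resid = y_t - a_hat * y_lag
    rss = resid @ resid
    # step 2: regress the residuals on Y_{t-1} and Y_{t-1} H(gamma, Y_{t-1});
    # because Y_{t-1} lies in the span of these regressors, the residuals (hence
    # RSS_gamma = n * hat sigma^2_gamma) coincide with those of the regression
    # of Y_t itself on the same two columns
    X = np.column_stack([y_lag, y_lag * H(gamma, y_lag)])
    beta, *_ = np.linalg.lstsq(X, resid, rcond=None)
    rss_gamma = np.sum((resid - X @ beta) ** 2)
    n = len(Y)
    return n * (rss - rss_gamma) / rss   # = n * uncentered R^2 of step 2
```

Replacing the single γ by a minimization over a grid, as in step 2'), gives the sup statistic LMn.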

#### 4.4.2 Computation of the critical values

In view of (A.19) below, and following Hansen (1996), one can approximate the distribution of $\sup_{\gamma\in[0,\bar\gamma]} W_n(\gamma)$ by that of

$$\sup_{\gamma\in[0,\bar\gamma]} \widetilde W_n(\gamma), \qquad \widetilde W_n(\gamma) = \frac{\{V_n(0)Z^\circ_n(\gamma) - V_n(\gamma)Z^\circ_n(0)\}^2}{D_n(\gamma)\,U_{n,2,0}},$$

where Vn(0) = Un,2,0 and Vn(γ) = Un,2,1(γ) + Un,2,0, where Un,2,j(γ) and Dn(γ) are defined by (3.2) and (A.20), and where {Z°n(γ), γ ≥ 0} is, conditionally on the observations Y1,...,Yn, a zero-mean Gaussian process with covariance kernel

$$K_n(\gamma,\gamma') = EZ^\circ_n(\gamma)Z^\circ_n(\gamma') = \frac1n\sum_{t=2}^n Y_{t-1}^2\{H(\gamma,Y_{t-1})+1\}\{H(\gamma',Y_{t-1})+1\}.$$

The conditional distribution of $\sup_{\gamma\in[0,\bar\gamma]} \widetilde W_n(\gamma)$ can be obtained by the following algorithm. For i = 1,...,N:

(i) generate a N(0,1) sample $\epsilon^{(i)}_1,\dots,\epsilon^{(i)}_n$;

(ii) set $Z^{(i)}_n(\gamma) = n^{-1/2}\sum_{t=2}^n \epsilon^{(i)}_t Y_{t-1}\{H(\gamma,Y_{t-1})+1\}$;

(iii) set $\widetilde W^{(i)}_n(\gamma) = \{V_n(0)Z^{(i)}_n(\gamma) - V_n(\gamma)Z^{(i)}_n(0)\}^2\, D_n^{-1}(\gamma)\,U_{n,2,0}^{-1}$;

(iv) compute $\sup_{\gamma\in[0,\bar\gamma]} \widetilde W^{(i)}_n(\gamma)$.

Conditional on Y1,...,Yn, the sequence $\sup_{\gamma\in[0,\bar\gamma]} \widetilde W^{(i)}_n(\gamma)$, i = 1,...,N, constitutes an iid sample of the random variable $\sup_{\gamma\in[0,\bar\gamma]} \widetilde W_n(\gamma)$. At the nominal level α, the common critical value cα of the tests with rejection regions

$$\left\{\sup_{\gamma\in[0,\bar\gamma]} W_n(\gamma) > c_\alpha\right\}, \qquad \left\{\sup_{\gamma\in[0,\bar\gamma]} LM_n(\gamma) > c_\alpha\right\} \qquad\text{or}\qquad \left\{\sup_{\gamma\in[0,\bar\gamma]} LR_n(\gamma) > c_\alpha\right\}$$

will be defined as the empirical (1 − α)-quantile of the artificial sample $\sup_{\gamma\in[0,\bar\gamma]} \widetilde W^{(i)}_n(\gamma)$, i = 1,...,N.
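The multiplier algorithm (i)-(iv) can be sketched as follows for the intercept-free model (4.2). One caveat: Dn(γ) is defined in the paper's appendix, equation (A.20), which is not reproduced here; the code ASSUMES it is the empirical analogue of D(γ) in Theorem 4.2, namely Dn(γ) = Un,2,0 Un,2,2(γ) − U²n,2,1(γ) (with this choice each simulated W̃n(γ) is exactly χ²(1) conditionally on the data). All function names are ours.

```python
import numpy as np

def critical_value(Y, gammas, H, alpha=0.05, N=1000, seed=None):
    """Approximate c_alpha by simulating sup_gamma of the multiplier process.

    `gammas` must be a grid in (0, gamma_bar]: D_n(0) = 0, so 0 is excluded.
    D_n below is an ASSUMED empirical analogue of D(gamma) in Theorem 4.2;
    the paper's exact definition is its equation (A.20).
    """
    rng = np.random.default_rng(seed)
    y_lag = Y[:-1]
    n = len(Y)
    Hg = np.array([H(g, y_lag) for g in gammas])        # shape (G, n-1)
    U20 = np.mean(y_lag**2)
    U21 = np.mean(y_lag**2 * Hg, axis=1)
    U22 = np.mean(y_lag**2 * Hg**2, axis=1)
    Vn0, Vn = U20, U21 + U20
    Dn = U20 * U22 - U21**2                              # assumed form of (A.20)
    sups = np.empty(N)
    for i in range(N):
        eps = rng.standard_normal(n - 1)                 # step (i)
        Z0 = (eps * y_lag).sum() / np.sqrt(n)            # Z_n(0), since H(0, .) = 0
        Zg = ((eps * y_lag) * (Hg + 1.0)).sum(axis=1) / np.sqrt(n)  # step (ii)
        W = (Vn0 * Zg - Vn * Z0) ** 2 / (Dn * U20)       # step (iii)
        sups[i] = W.max()                                # step (iv)
    return np.quantile(sups, 1.0 - alpha)
```

The returned quantile is the common critical value cα for the sup-Wald, sup-LM and sup-LR tests.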

#### 4.4.3 Cases where the limiting law is parameter-free

We now describe a situation where the previous algorithm (i)-(iv) can be avoided, and the critical values of the test can be obtained once and for all. Assume that

$$H(\gamma, y) = h(\gamma y^k) \tag{4.3}$$

for some integer k and some measurable function h(·). Note that this assumption is satisfied in the EXPAR case (2.1) with k = 2, and in the LSTAR case (2.2) with k = 1, when the location parameter c = 0.

Denote by σ²_{Y0} the variance of Y0. Let V̆(γ), D̆(γ) and K̆(γ,γ′) be obtained by replacing Y0 by σ⁻¹_{Y0}Y0 and σ² by 1 in the definitions of V(γ), D(γ) and K(γ,γ′) given in Theorem 4.2, and let {Z̆(γ) = σ⁻¹σ⁻¹_{Y0}Z(γσ⁻ᵏ_{Y0}), γ ≥ 0}. By (4.3) we conclude that

$$V(\gamma) = \sigma^2_{Y_0}\,\breve V(\gamma\sigma^k_{Y_0}), \qquad D(\gamma) = \sigma^4_{Y_0}\,\breve D(\gamma\sigma^k_{Y_0}), \qquad K(\gamma,\gamma') = \sigma^2\sigma^2_{Y_0}\,\breve K(\gamma\sigma^k_{Y_0},\,\gamma'\sigma^k_{Y_0}),$$

and that {Z̆(γ), γ ≥ 0} is a zero-mean, almost surely continuous Gaussian process with covariance kernel K̆(γ,γ′). We thus have

$$\sup_{\gamma\in(0,\,\sigma^{-k}_{Y_0}\bar\gamma]} W(\gamma) = \sup_{\gamma\in(0,\,\sigma^{-k}_{Y_0}\bar\gamma]} \breve W(\gamma\sigma^k_{Y_0}) = \sup_{\gamma\in(0,\bar\gamma]} \breve W(\gamma),$$

where

$$\breve W(\gamma) = \frac{\{\breve V(0)\breve Z(\gamma) - \breve V(\gamma)\breve Z(0)\}^2}{\breve D(\gamma)\,E(\sigma^{-2}_{Y_0}Y_0^2)}.$$

Note that when ǫt is Gaussian, the moments V̆(γ), D̆(γ) and E(σ⁻²_{Y0}Y0²), as well as the distribution of the process Z̆(·), do not depend on any unknown parameter. In particular, the kernel is explicitly given by

$$\breve K(\gamma,\gamma') = \int y^2\{H(\gamma,y)+1\}\{H(\gamma',y)+1\}\,\frac{1}{\sqrt{2\pi}}\,e^{-y^2/2}\,dy.$$

We deduce that in the Gaussian case, i.e. when ǫt is Gaussian, the asymptotic distribution of

$$\sup_{\gamma\in[0,\,\hat\sigma^{-k}_Y\bar\gamma]} W_n(\gamma), \qquad \hat\sigma^2_Y = \frac1n\sum_{t=1}^n Y_t^2 - \left(\frac1n\sum_{t=1}^n Y_t\right)^2,$$

is parameter-free under H0 (i.e. it does not depend on a0 and σ²). In consequence, the distribution of $\sup_{\gamma\in[0,\hat\sigma^{-k}_Y\bar\gamma]} W_n(\gamma)$ can be approximated by that of the Wald statistic W̆n obtained by replacing Y1,...,Yn by a N(0,1) sample ǫ1,...,ǫn in $W_n = \sup_{\gamma\in(0,\bar\gamma]} W_n(\gamma)$.
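Under (4.3) with Gaussian errors, critical values can thus be tabulated once and for all by computing the sup-Wald statistic on simulated N(0,1) samples in place of Y1,...,Yn. A minimal sketch for the EXPAR case (k = 2); the grid, sample sizes and function names are ours, not prescribed by the paper:

```python
import numpy as np

def H_expar(gamma, y):                       # satisfies (4.3) with k = 2
    return 1.0 - np.exp(-gamma * y**2)

def sup_wald(Y, gammas, H):
    """Sup-Wald statistic for the intercept-free model (4.2), grid approximation."""
    y_t, y_lag = Y[1:], Y[:-1]
    a = (y_lag @ y_t) / (y_lag @ y_lag)      # restricted fit (linear AR(1))
    s2_tilde = np.mean((y_t - a * y_lag) ** 2)
    s2_hat = s2_tilde                        # gamma = 0 reproduces the linear fit
    for g in gammas:
        X = np.column_stack([y_lag, y_lag * H(g, y_lag)])
        b, *_ = np.linalg.lstsq(X, y_t, rcond=None)
        s2_hat = min(s2_hat, np.mean((y_t - X @ b) ** 2))
    return len(Y) * (s2_tilde - s2_hat) / s2_hat

rng = np.random.default_rng(0)
gammas = np.linspace(0.2, 5.0, 20)           # grid over (0, gamma_bar]
sims = [sup_wald(rng.standard_normal(200), gammas, H_expar) for _ in range(200)]
c05 = np.quantile(sims, 0.95)                # once-for-all 5% critical value
```

By the scaling argument above, the same tabulated value applies for any a0 and σ² once the supremum is taken over [0, σ̂_Y⁻ᵏ γ̄].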
