Electronic copy available at: http://ssrn.com/abstract=1984844
Two estimators of the long-run variance: beyond short memory
Karim M. Abadir∗
Imperial College Business School, Imperial College London, London SW7 2AZ, UK
Walter Distaso
Imperial College Business School, Imperial College London, London SW7 2AZ, UK
Liudas Giraitis
Department of Economics, Queen Mary, University of London, London E1 4NS, UK
January 19, 2009
Abstract
This paper deals with the estimation of the long-run variance of a stationary sequence. We extend the usual Bartlett-kernel heteroskedasticity and autocorrelation consistent (HAC) estimator to deal with long memory and antipersistence. We then derive asymptotic expansions for this estimator and the memory and autocorrelation consistent (MAC) estimator introduced by Robinson (2005). We offer a theoretical explanation for the sensitivity of HAC to the bandwidth choice, a feature which has been observed in the special case of short memory. Using these analytical results, we determine the MSE-optimal bandwidth rates for each estimator. We analyze by simulations the finite-sample performance of HAC and MAC estimators, and the coverage probabilities for the studentized sample mean, giving practical recommendations for the choice of bandwidths.
JEL Classification: C22, C14.
Keywords: long-run variance, long memory, heteroskedasticity and autocorrelation consistent (HAC) estimator, memory and autocorrelation consistent (MAC) estimator.
∗Corresponding author. Email address: k.m.abadir@imperial.ac.uk.
1 Introduction and setup
In empirical studies, it is now standard practice to produce robust estimates of standard errors (SEs). Popular references in econometrics for such procedures include White (1980), Newey and West (1987), and Andrews and Monahan (1992). In statistics, the literature goes further back to Jowett (1955) and Hannan (1957). These procedures for estimating covariance matrices account for heteroskedasticity and autocorrelation of unknown form, for short memory models.
There is now an increasing body of evidence suggesting the existence of long memory in macroeconomic and financial series; e.g. see Diebold and Rudebusch (1989), Baillie and Bollerslev (1994), Gil-Alaña and Robinson (1997), Chambers (1998), Cavaliere (2001), Abadir and Talmain (2002). It is therefore of interest to adapt the most popular of these procedures, the Bartlett-kernel heteroskedasticity and autocorrelation consistent (HAC) estimator, to account for the possibility of long memory and antipersistence. In addition to HAC, we study the alternative memory and autocorrelation consistent (MAC) estimator recently introduced by Robinson (2005). He established the consistency of his MAC estimator of the covariance matrix, leaving open the issue of its higher-order expansion.

Our first contribution is to derive second-order expansions for HAC and MAC in the univariate case, reducing the problem to the estimation of a scalar (the long-run variance) instead of estimating the covariance matrix. Our derivations give an insight into the more difficult multivariate case and provide the first step in understanding this problem.
The second contribution of this paper is to provide a theoretical explanation for the sensitivity of HAC estimators to the choice of bandwidth, a feature that has been widely observed in the special case of short memory. Our results show that the HAC estimator is sensitive because the minimum-MSE bandwidth depends on the persistence in the series. The theoretical part of this paper explains where the problem comes from and gives some practical advice for selecting the bandwidth. We also show that, on the other hand, the MAC estimator is more robust to the bandwidth selection, since its asymptotic properties are not affected by long memory or antipersistence.
The final theoretical contribution of this paper is to obtain the distribution of the estimated normalized spectrum at the origin, by virtue of its link to the long-run variance. The distribution is Gaussian for MAC, but the one for HAC is Gaussian only if the long memory is below some threshold. In the case of short memory, HAC is the usual Bartlett-kernel estimator of the spectral density at zero frequency, and its bias and asymptotic distribution are well investigated in the literature. The asymptotic results for the HAC estimator provide the background for the development of kernel estimation of a spectral density under long memory and antipersistence.
The plan of the paper is as follows. In Sections 2 and 3, we derive the bias and asymptotic expansions for both types of estimators, allowing us to describe the limiting distributions as well as the asymptotic MSEs. This enables us to determine the rate of the MSE-optimal bandwidth for each estimator. Section 4 investigates by simulations the finite-sample performance of HAC and MAC estimators, and coverage probabilities for the studentized sample mean, giving practical recommendations for the choice of bandwidths. Section 5 concludes. The derivations are given in the Appendix.
We now detail the setting for our paper. Let {X_j}_{j∈Z} be a stationary sequence with unknown mean μ := E(X_j). Let the spectral density of {X_j} be denoted by f(u) and defined over |u| ≤ π. Suppose that it has the property

    f(u) = c_0 |u|^{−2d} + o(|u|^{−2d})  as u → 0,    (1.1)

where |d| < 1/2 and c_0 > 0. Special cases include stationary and invertible ARIMA(p, d, q): when p and q are finite; but see Abadir and Taylor (1999) for identification issues when p or q are allowed to be infinite. We shall call d the memory parameter of {X_j}; with d = 0 indicating short memory, 0 < d < 1/2 long memory, and −1/2 < d < 0 antipersistence.
To conduct inference on μ, define the sample mean X̄ := n^{−1} Σ_{j=1}^{n} X_j, which satisfies

    var(n^{1/2−d} X̄) = n^{1−2d} ∫_{−π}^{π} ( sin(nu/2) / (n sin(u/2)) )² f(u) du.

As n → ∞, we can use assumption (1.1) and a change of variable of integration to get the convergence

    var(n^{1/2−d} X̄) → s_d² := c_0 ∫_{−∞}^{∞} ( sin(u/2) / (u/2) )² |u|^{−2d} du = c_0 p(d),    (1.2)

where we have the continuous function

    p(d) := { 2 Γ(1−2d) sin(πd) / ( d (1+2d) )   if d ≠ 0,
            { 2π                                  if d = 0.    (1.3)
We notice from (1.2) that s_d² is just a scaling of c_0 by the function p(d), so in the usual short-memory case of d = 0 we get

    s_0² = 2π f(0)  and  c_0 = f(0).

In general, the problem of the estimation of the long-run variance s_d² is closely related to the estimation of d and c_0 ≡ lim_{u→0} |u|^{2d} f(u) appearing in (1.1). The HAC and MAC procedures mentioned at the start of this section hinge on the estimation of the long-run variance s_d².
We will consider the behaviour of the estimators under two alternative sets of assumptions. The first one is stronger than the second one: it allows the derivation of asymptotic expansions and the resulting investigation of MSE-optimal bandwidth rates. The second one is sufficient to establish the consistency of the estimators for a wide class of stationary sequences, and it allows the use of estimates of s_d² for robust SEs for X̄. The second type of conditions is very weak, so it yields only consistency and is not sufficient to obtain other asymptotic results. The first set of assumptions is common for HAC and MAC:
Assumption L. {X_j} is a linear sequence

    X_j = μ + Σ_{k=0}^{∞} a_k ε_{j−k},  j ∈ Z,

where Σ_{k=0}^{∞} a_k² < ∞, μ is a real number, and {ε_j} are i.i.d. random variables with zero mean and unit variance. Moreover, the spectral density f(u) of {X_j} has the property

    f(u) = |u|^{−2d} g(u),    (1.4)

where d ∈ (−1/2, 1/2) and g(·) is a continuous bounded function such that g(u) = c_0 (1 + O(u²)) as u → 0 and c_0 = g(0) > 0.

Let ŝ² be a consistent estimator of s_d². Under condition (1.4), the t-ratio for the sample mean X̄ satisfies

    t := n^{1/2−d} (X̄ − μ) / ŝ →_d N(0, 1),  n → ∞,    (1.5)

so that a consistent HAC or MAC estimator of s_d² allows inference on μ.

For HAC, the second type of assumptions (to establish consistency) is:
Assumption M. {X_j} is a fourth-order stationary process such that, for some d ∈ (−1/2, 1/2) and c ≠ 0,

    γ_k ∼ c k^{2d−1}  if d ≠ 0,    Σ_{k=−∞}^{∞} |γ_k| < ∞  if d = 0,

where γ_k := cov(X_j, X_{j+k}); and

    Σ_{k,l,s=−∞}^{∞} |κ(k, l, s)| ≤ C  if d ≤ 0,    sup_k Σ_{l,s=−n}^{n} |κ(k, l, s)| ≤ C n^{2d}  if d ≥ 0,

where C denotes a generic constant and κ(k, l, s) is a fourth-order cumulant defined by

    κ(k, l, s) := E(X_j X_{j+k} X_{j+l} X_{j+s}) − ( γ_k γ_{l−s} + γ_l γ_{k−s} + γ_s γ_{k−l} ).

In addition, if d < 0, then f(u) ≤ C |u|^{−2d}, u ∈ [−π, π].
For MAC, the second type of assumptions differs from Assumption M, and will be discussed at the end of Section 3.
2 Asymptotic properties of HAC-type estimators

In this section, we first adapt the HAC estimator to allow for long memory and antipersistence, introducing two HAC-type estimators. Then, we analyze their properties under Assumption L that {X_j} is a linear process, presenting limiting distributions and asymptotic expansions for the estimators. To the best of our knowledge, the asymptotic normality of the HAC estimator was investigated in the literature only in the short-memory case of d = 0 and under the assumption that E(ε_j⁴) < ∞. Our Theorem 2.1(a) will require for {ε_j} the existence of only a moment of order 2 + δ (for some δ > 0), which is a new result in the field. It also shows that, under the strong persistence 1/4 < d < 1/2, the asymptotic distribution will be non-Gaussian. Finally, we show that Assumption M guarantees consistency (but not necessarily the other properties) of the estimators.
Let

    γ̃_k := n^{−1} Σ_{j=1}^{n−k} (X_j − E(X_j))(X_{j+k} − E(X_j)),  0 ≤ k < n,

be the sample autocovariances of {X_j} centered around E(X_j), and

    γ̄_k := n^{−1} Σ_{j=1}^{n−k} (X_j − X̄)(X_{j+k} − X̄),  0 ≤ k < n,

the sample autocovariances of {X_j} centered around the sample mean X̄. Define

    s̃²(d) := m^{−1−2d} Σ_{j,k=1}^{m} γ̃_{j−k} = m^{−2d} ( γ̃_0 + 2 Σ_{k=1}^{m−1} (1 − k/m) γ̃_k ),    (2.1)
which uses a known (or correctly hypothesized) E(X_j), and

    s̄²(d) := m^{−1−2d} Σ_{j,k=1}^{m} γ̄_{j−k} = m^{−2d} ( γ̄_0 + 2 Σ_{k=1}^{m−1} (1 − k/m) γ̄_k ),    (2.2)

where the mean is estimated unrestrictedly, and assume that the bandwidth parameter m satisfies

    m → ∞,  m = O(n^{1−ε}),    (2.3)

for some ε > 0. The difference between the stochastic expansions of the two estimators will reveal just how much is the impact of estimating E(X_j). The asymptotically-optimal choice of m will arise from the first theorem below. To make s̃²(d) and s̄²(d) operational, we can employ any estimator d̂ of d that is consistent at the rate of 1/log n or faster, calculating s̃²(d̂) and s̄²(d̂). This is a very weak condition, and two such estimators of d will be discussed later in Section 3.

We start by making Assumption L. In addition, to establish the main theorem of this section, we need to assume that the coefficients a_k decay as

    a_k = c_a k^{d−1} (1 + O(k^{−1})),  c_a ≠ 0,  if d ≠ 0;    Σ_{k=0}^{∞} a_k = 0  if d < 0;    (2.4)

and

    Σ_{k=j}^{∞} |a_k| = O(j^{−2})  if d = 0.    (2.5)
Such additional requirements are satisfied, for example, by X_j ∼ ARIMA(p, d, q), where d ∈ (−1/2, 1/2). We now derive asymptotic expansions for the estimators s̃²(d̂) and s̄²(d̂), where the bias will be expressed in terms of

    β := (1/2) ∫_{−∞}^{∞} ( sin(u/2) / (u/2) )² ( f(u) 1_{|u| ≤ π} − c_0 |u|^{−2d} ) du.    (2.6)
In the case of −1/2 < d < 1/4, these HAC estimators have Gaussian limit distributions. However, if 1/4 < d < 1/2, then the limit can be written in terms of a random variable given by the double Itô–Wiener integral

    Z(d) := ∫″_{R²} ( ( e^{i(x_1+x_2)} − 1 ) / ( i(x_1 + x_2) ) ) |x_1|^{−d} |x_2|^{−d} W(dx_1) W(dx_2),    (2.7)

where W(dx) is a standard Gaussian complex measure (W(−dx) is the conjugate of W(dx)) with mean zero and variance E(|W(dx)|²) = dx. The limit variable Z(d) has a (non-Gaussian) Rosenblatt distribution and is well-defined when 1/4 < d < 1/2. The symbol ∫″_{R²} indicates that one does not integrate on the diagonals x_1 = ±x_2.
Theorem 2.1. Suppose that {X_j} satisfies Assumption L and (2.4)–(2.5), and that d̂ is an estimator of d such that

    (d̂ − d) log n = o_p(1).    (2.8)

(a) If −1/2 < d < 1/4 and E(|ε_j|^{2+δ}) < ∞, for some δ > 0, then, as n → ∞,

    s̃²(d̂) − s_d² = (m/n)^{1/2} z_n + β m^{−1−2d} + o_p((m/n)^{1/2}) + o(m^{−1−2d})    (2.9)

and

    s̄²(d̂) − s_d² = (m/n)^{1/2} z_n + β m^{−1−2d} + o_p((m/n)^{1/2}) + o(m^{−1−2d}),    (2.10)

where z_n →_d N(0, v_d²),

    v_d² := 8π c_0² ∫_0^{∞} ( sin(u/2) / (u/2) )⁴ u^{−4d} du
          = { 32π² c_0² (2^{1+4d} − 1) / ( Γ(4+4d) cos(2πd) )   if d ≠ 0,
            { 16π² c_0² / 3 = 4 s_0⁴ / 3                         if d = 0,    (2.11)

and it is understood that lim_{d→−1/4} π (2^{1+4d} − 1) / cos(2πd) = log 4.

(b) If 1/4 < d < 1/2, E(ε_j⁴) < ∞, and g(u) in (1.4) has bounded derivative, then

    s̃²(d̂) − s_d² = (m/n)^{1−2d} z̃_n + β m^{−1−2d} + o_p((m/n)^{1−2d}) + o(m^{−1−2d}),    (2.12)

where

    z̃_n := n^{−2d} Σ_{j=1}^{n} ( X_j² − E(X_j²) ) →_d 2π c_0 Z(d);

whereas s̄²(d̂) has the property

    s̄²(d̂) − s̃²(d̂) = O_p((m/n)^{1−2d}).    (2.13)

Under the additional assumption that E(ε_j⁴) < ∞, the MSEs of HAC-type estimators exist and are minimized asymptotically by

    m ∝ { n^{1/(3+4d)}   if −1/2 < d < 1/4,
        { n^{1/2−d}      if 1/4 < d < 1/2,    (2.14)

where ∝ denotes proportionality. We now list other comments and implications arising from Theorem 2.1:

Remark 2.1. Since E(z_n) = E(z̃_n) = 0, the asymptotic bias of the estimators is given by β m^{−1−2d}. It tends to zero as m (hence n) tends to infinity.
Remark 2.2. When −1/2 < d < 1/4, the convergence s̄²(d̂) →_p s_d² = p(d) c_0 implies that v_d² can be consistently estimated by replacing c_0 by s̄²(d̂)/p(d̂) in (2.11).
Remark 2.3. If d < 1/4, then estimates with known and estimated mean have the same asymptotic properties. However, if d > 1/4, then the rate of convergence of the sample mean to μ is rather slow, and replacing μ by X̄ leads to an additional term in the limiting distribution of the HAC estimator, whose consistency is nevertheless unaffected. In the context of hypothesis testing about the mean μ, one can estimate the long-run variance by treating μ as unknown and estimating it by the sample mean. Alternatively, one can compute the long-run variance under the null hypothesis, treating μ as known. This will improve the size but may have an adverse effect on the finite-sample power of tests based on HAC estimators.
Remark 2.4. As a general rule, convergence in distribution does not necessarily imply a corresponding convergence for moments such as the MSE. However, our proofs are based on L₂-expansions for which this implication holds if we make the additional assumption that E(ε_j⁴) < ∞, hence our stated results for the asymptotic bias and variance. Note that for the validity of the asymptotic expansions (2.9)–(2.10), only 2 + δ moments of {ε_j} are needed.
Remark 2.5. If {X_j} is a nonlinear process, then Theorem 2.1 might not hold. For example, the nonlinear transformation X_j = e^{Y_j} of a linear process {Y_j} will, in general, increase the bias of the estimators. Therefore, the optimal m minimizing the MSE might also change in this case.
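The rates in (2.14) translate directly into a bandwidth recipe. The helper below is a hypothetical illustration: the proportionality constant c is a user choice that the rate result does not pin down, and the function name is ours:

```python
def hac_bandwidth(n: int, d: float, c: float = 1.0) -> int:
    # MSE-optimal HAC bandwidth *rates* from (2.14); only the rate, not the
    # constant c, is determined by the theory.
    if d < 0.25:
        m = c * n ** (1.0 / (3.0 + 4.0 * d))   # -1/2 < d < 1/4
    else:
        m = c * n ** (0.5 - d)                 # 1/4 < d < 1/2
    return max(1, int(m))
```

Note how strongly the implied bandwidth varies with d (e.g. at n = 1000, roughly m ≈ 23 for d = −0.2 but m ≈ 6 for d = 0.2), which is the source of the sensitivity discussed above.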
Relaxing Assumption L, we obtain the following consistency result.
Theorem 2.2. Suppose that m, n → ∞, m = o(n^{1/2}), that Assumption M holds, and that d̂ − d = o_p(1/log n). Then,

    s̄²(d̂) →_p s_d²,  s̃²(d̂) →_p s_d²,  as n → ∞.    (2.15)
3 Robinson’s MAC estimator
In this section, we derive the asymptotic properties of Robinson's MAC estimator of s_d² = p(d) c_0, where p(d) is given by (1.3). We shall show that the asymptotic properties of the MAC estimator do not depend on the memory parameter d, and that its asymptotic distribution is always Gaussian. Hence, it is more robust than HAC to the bandwidth selection in practice, something that will be illustrated numerically in the subsequent section. Define

    ŝ²(d) := p(d) ĉ(d),

where

    ĉ(d) := m^{−1} Σ_{j=1}^{m} λ_j^{2d} I(λ_j)

is a consistent estimator of c_0,

    I(λ) := (2πn)^{−1} | Σ_{t=1}^{n} X_t e^{itλ} |²

is the periodogram, λ_j = 2πj/n are the Fourier frequencies, and the bandwidth parameter m satisfies m → ∞ and m = o(n/(log n)²).
This estimator has a number of features. First, it does not require estimation of the unknown mean E(X_j), since the periodogram is self-centring at the Fourier frequencies λ_j. Contrast this with HAC estimators; see also Remark 2.3. Second, as the following theorem will show, the bias and asymptotic distribution of the estimator do not depend on d ∈ (−1/2, 1/2), and the asymptotic distribution is always Gaussian.
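A minimal sketch of the MAC estimator just defined (illustrative Python, not the authors' code; p(d) is inlined from (1.3), and the FFT supplies the periodogram at the Fourier frequencies):

```python
import math
import numpy as np

def mac_longrun_variance(x, m, d):
    # Robinson's MAC estimator: s_hat^2(d) = p(d) * c_hat(d), where
    # c_hat(d) = m^{-1} * sum_{j=1}^m lambda_j^{2d} I(lambda_j).
    x = np.asarray(x, dtype=float)
    n = x.size
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n               # Fourier freqs
    I = np.abs(np.fft.fft(x)[1 : m + 1]) ** 2 / (2.0 * np.pi * n)  # periodogram
    c_hat = np.mean(lam ** (2.0 * d) * I)
    if d == 0.0:
        pd = 2.0 * math.pi
    else:
        pd = (2.0 * math.gamma(1 - 2 * d) * math.sin(math.pi * d)
              / (d * (1 + 2 * d)))
    return pd * c_hat
```

No demeaning of x is needed: the DFT ordinates at j = 1, …, m are invariant to adding a constant to the series.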
In addition to Assumption L, we will need the condition that a(λ) := Σ_{k=0}^{∞} a_k e^{ikλ} satisfies

    (d/dλ) a(λ) = O( |a(λ)| / λ )  as λ → 0⁺    (3.1)

in order to derive the CLT in the following theorem.
Theorem 3.1. Suppose that {X_j} satisfies Assumption L with E(ε_j⁴) < ∞ and (3.1). Assume that d̂ is an estimator of d such that d̂ − d = O_p(1/log n). Then

    ŝ²(d̂) − s_d² = m^{−1/2} s_d² w_n + 2 (d̂ − d) (log λ_m) s_d² (1 + o_p(1)) + O_p((m/n)²) + o_p(m^{−1/2}),    (3.2)

where

    w_n →_d N(0, 1).    (3.3)
The parameter d can be estimated, for example, by the local Whittle estimator

    d̂ := argmin_{γ ∈ [−1/2, 1/2]} U(γ),

which minimizes the objective function

    U(γ) := log ( q^{−1} Σ_{j=1}^{q} λ_j^{2γ} I(λ_j) ) − 2γ q^{−1} Σ_{j=1}^{q} log λ_j,

with bandwidth parameter q such that q → ∞ and q = o(n/(log n)²). We use the notation q for the bandwidth of the local Whittle estimator, stressing that it can be set to values that differ from the bandwidth m used in ŝ²(d̂). For the estimation of d, the log-periodogram estimator can be used as an alternative to the local Whittle estimator; see Robinson (1995a).

If q = o(n^{4/5}), then Robinson (1995b) showed that, under the assumptions of Theorem 3.1,

    √q (d̂ − d) →_d N(0, 1/4),    (3.4)

when d̂ is the local Whittle estimator.

We now turn to the MSE of ŝ²(d̂), when d̂ is the local Whittle estimator. Let m = o(n^{4/5}) and q = o(n^{4/5}), since we only need a consistency rate rather than a CLT to analyze the decline of the MSE as n increases. Under Assumption L, E((d̂ − d)²) = O(q^{−1}). Therefore,

    E( ( ŝ²(d̂) − s_d² )² ) = O( m^{−1} + (log n)² q^{−1} )    (3.5)

by (3.2). Since (3.5) is derived using an L₂-approximation, a more detailed analysis shows that the MSE is of exact order m^{−1} + (log n)² q^{−1}, hence decreasing in m and q. The MSE-optimal bandwidth is therefore the one taking m and q that grow at the maximal rate of n^{4/5}.
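The objective U(γ) is easy to minimize by grid search over [−1/2, 1/2]; the following dependency-free sketch is illustrative (not the authors' code), trading an optimizer for a fine grid:

```python
import numpy as np

def local_whittle(x, q):
    # Local Whittle estimate of d: grid-minimise the objective U(gamma)
    # defined above, using the first q periodogram ordinates.
    x = np.asarray(x, dtype=float)
    n = x.size
    lam = 2.0 * np.pi * np.arange(1, q + 1) / n
    I = np.abs(np.fft.fft(x)[1 : q + 1]) ** 2 / (2.0 * np.pi * n)
    grid = np.linspace(-0.499, 0.499, 999)
    U = [np.log(np.mean(lam ** (2 * g) * I)) - 2 * g * np.mean(np.log(lam))
         for g in grid]
    return float(grid[int(np.argmin(U))])
```

By (3.4), the estimate has approximate standard error 1/(2√q), which guides the grid resolution needed.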
In general, without recourse to Assumption L, the consistency of Robinson's MAC estimator follows immediately from d̂ →_p d and ĉ(d̂) →_p c_0. The estimators d̂ and ĉ(d̂) are consistent under very weak general assumptions, which do not assume Gaussianity or linearity of {X_j}; see Dalla et al. (2006) and Abadir et al. (2007). For example, if d = 0 and (1.4) holds, then such consistency follows under the assumption Σ_{k,l,s=−∞}^{∞} |κ(k, l, s)| ≤ C; see Corollary 1 of Dalla et al. (2006).

4 Simulation results
The objective of this section is to illustrate the asymptotic results for the HAC and MAC estimators s̄²(d̂) and ŝ²(d̂), to examine their finite-sample performance, and to give advice on how to choose the bandwidth parameters in practical applications.
We focus on the MSE because the primary use of these estimators is the consistent estimation of the long-run variance s_d², used in various statistics; e.g. in the denominator ŝ of HAC and MAC robust t-ratios

    t := n^{1/2−d̂} (X̄ − μ) / ŝ →_d N(0, 1),  n → ∞.    (4.1)

For this reason, we also consider the closeness of HAC and MAC robust t-ratios to their limiting normal distributions; see Velasco and Robinson (2001) for expansions relating to t-ratios using smoothed autocovariance estimates for f(0). We study the coverage probabilities (CPs) of 95% asymptotic confidence intervals (CIs) for μ, considering how the choice of bandwidths affects the closeness of CPs to the nominal 95% level based on the limiting normal distribution of the t-ratio.
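A coverage exercise of this kind can be replicated in miniature. The sketch below is illustrative only (i.i.d. Gaussian data, d = 0 treated as known for simplicity, far fewer replications than the 5,000 used in the paper): it estimates the CP of the nominal 95% CI for μ based on the MAC-studentised mean in (4.1):

```python
import numpy as np

def mac_t_coverage(n=512, reps=400, seed=0):
    # Monte Carlo coverage of the nominal 95% CI for mu, using the
    # MAC estimator with d = 0 known, so s2 = p(0)*c_hat(0) = 2*pi*mean(I_j).
    rng = np.random.default_rng(seed)
    m = int(n ** 0.8)
    hits = 0
    for _ in range(reps):
        x = rng.standard_normal(n)                     # i.i.d. data, mu = 0
        I = np.abs(np.fft.fft(x)[1 : m + 1]) ** 2 / (2 * np.pi * n)
        s2 = 2 * np.pi * np.mean(I)                    # MAC long-run variance
        t = np.sqrt(n) * x.mean() / np.sqrt(s2)
        hits += abs(t) < 1.96
    return hits / reps
```

With well-behaved data the empirical coverage should sit close to 0.95, mirroring the pattern reported in Tables 4–5.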
We let {X_j} be a linear Gaussian ARIMA(1, d, 0) process with unit standard deviation, for different values of ρ (the AR parameter) and d. We link these to s_d², the object of our analysis, by means of (1.2)–(1.3). Throughout the simulation exercise, the number of replications is 5,000. We consider three sample sizes, n = 250, 500, 1000, and we estimate the parameter d using the local Whittle estimator d̂ with bandwidth q = ⌊n^{0.65}⌋. We do not report the results for q = ⌊n^{0.5}⌋ and ⌊n^{0.8}⌋ because they are dominated by q = ⌊n^{0.65}⌋.

Table 1 contains the MSE of the HAC estimator s̄²(d̂) calculated for different values of the bandwidth m. The minimum-MSE value for each ρ and d is highlighted by shaded gray boxes. The results for these optima are so scattered across the table that in practice it will be difficult to achieve them.
Table 2 reports the MSEs of the HAC estimator s̄²(d̂) when m is chosen according to the asymptotically-optimal rule (2.14). It gives MSEs comparable to the optimal MSEs of Table 1, except when ρ and d are simultaneously large. In this case, the cost in terms of the MSE can be substantial.

Table 3 contains the MSE of the MAC estimator ŝ²(d̂) calculated for different values of the bandwidth m. It reveals the accuracy of the simple bandwidth rule that resulted from (3.5): almost all the optima are for m = ⌊n^{4/5}⌋ and, in the four exceptions (shaded boxes), there is little loss in nevertheless sticking to m = ⌊n^{4/5}⌋.

Both Tables 2 and 3 show that the MSEs of HAC and MAC estimators usually increase when |ρ| or |d| increases.
Tables 4 and 5 report CPs for μ using, respectively, the HAC estimator s̄²(d̂) with m chosen by the rule (2.14), and the MAC estimator with various bandwidths m. HAC and MAC estimators give comparable CPs, which are slightly better for MAC. CPs approach the nominal 95% level as the sample size n increases. They are close to the 95% level except when d → 0.5 or when ρ becomes negative. The bandwidth m = ⌊n^{0.8}⌋ tends to give better CPs for MAC, and this is in line with the recommendations of Table 3.

Because of the specificity of MC studies to the generating process that is used, it is recommended in practice that the user also tries bandwidths that are smaller than the maximum allowable m = ⌊n^{0.8}⌋ which we recommended. This could be used to check the stability of the estimator as m varies near its (unknown) optimal value. For example, data that are not generated by a linear process (such as ARIMA) require smaller bandwidths like ⌊n^{0.7}⌋; see Dalla et al. (2006).
5 Concluding Remarks

In this paper, the properties of two alternative types of estimators of the long-run variance have been derived. The first one is an extension of the widely used Bartlett-kernel HAC estimator, while the second one is the frequency-based MAC estimator suggested by Robinson (2005). We give guidance on how to choose the bandwidths in practice, for each estimator. The calculation of both estimators is numerically straightforward, and allows for the possibility of long memory or antipersistence in the data.

Our theoretical results explain that the HAC estimator is sensitive to the selection of the bandwidth m, since the order of m minimizing the MSE depends on the extent of the memory in the series. This problem often complicates bandwidth selection in applied work. The MAC estimator is more robust to the choice of the bandwidth, which does not depend on the memory. The simulation study confirms this analytical finding.

On the other hand, the paper does not provide a theory of deriving optimal estimators, e.g. under MSE-optimality or closeness to normality of the Studentized t-ratio for μ. We have studied two types of estimators without establishing whether or not they are dominated by others, but the asymptotic normality of the MAC estimator for d ∈ (−1/2, 1/2) is an encouraging sign, and so is the good simulation performance of the two estimators.
Appendix

A Proofs of the theorems, auxiliary lemmas and propositions

There are four subsections. The first proves the results relating to the theorems of Section 2, while the second proves the theorem of Section 3. For the first theorem, we need lemmas that are derived in the third subsection, and propositions that are obtained in the fourth one. We require these auxiliary results here, but they can also be of use beyond our paper.

Throughout this section, we take a_n ∼ b_n to mean that a_n/b_n → 1 as n → ∞, and C denotes a generic constant.
A.1 Proof of Theorems 2.1 and 2.2
Proof of Theorem 2.1. By definitions (2.1), (2.2), and (2.8),

    s̃²(d̂) = s̃²(d) (1 + o_p(1))  and  s̄²(d̂) = s̄²(d) (1 + o_p(1)).

Condition (2.8) and the asymptotic results derived for s̃²(d) and s̄²(d) below then allow us to replace s̃²(d̂) by s̃²(d) and s̄²(d̂) by s̄²(d) in the statement of the theorem without altering the expansions, so we will prove the theorem for s̃²(d) and s̄²(d).

Also, observe that

    γ̃_k = ∫_{−π}^{π} e^{iku} Ĩ_n(u) du,  γ̄_k = ∫_{−π}^{π} e^{iku} Ī_n(u) du,  0 ≤ k < n,

where

    Ĩ_n(u) := (2πn)^{−1} | Σ_{j=1}^{n} e^{iju} (X_j − E(X_j)) |²,  Ī_n(u) := (2πn)^{−1} | Σ_{j=1}^{n} e^{iju} (X_j − X̄) |²

are the corresponding periodograms. Therefore,

    s̃²(d) = ∫_{−π}^{π} b(u) Ĩ_n(u) du    (A.1)

and

    s̄²(d) = ∫_{−π}^{π} b(u) Ī_n(u) du,    (A.2)

where

    b(u) := m^{−1−2d} | Σ_{j=1}^{m} e^{iju} |² = m^{−1−2d} ( sin(mu/2) / sin(u/2) )²    (A.3)

is the renormalized Fejér kernel.

By (A.1) and (A.2), we can write s̄²(d) = s̃²(d) + R_n, where

    R_n := ∫_{−π}^{π} b(u) ( Ī_n(u) − Ĩ_n(u) ) du.    (A.4)

In Lemma A.4, we will show that E|R_n| ≤ C( (m/n)^{1−2d} + (m/n) ). Hence,

    s̄²(d) = s̃²(d) + O_p((m/n)^{1−2d}).    (A.5)

If −1/2 < d < 1/4, then (m/n)^{1−2d} = o((m/n)^{1/2}), and we can write (A.5) as

    s̄²(d) − s_d² = (m/n)^{1/2} z_n + ( ∫_{−π}^{π} b(u) f(u) du − s_d² ) + o_p((m/n)^{1/2}),

where

    z_n := (n/m)^{1/2} ( s̃²(d) − ∫_{−π}^{π} b(u) f(u) du ).

By Proposition A.1, z_n →_d N(0, v_d²), where v_d² is given by (2.11), whereas by Proposition A.2,

    ∫_{−π}^{π} b(u) f(u) du = s_d² + β m^{−1−2d} + o(m^{−1−2d}),

which proves (2.9) and (2.10).

In the case 1/4 < d < 1/2, write

    s̃²(d) − s_d² = ( s̃²(d) − E(s̃²(d)) ) + E(s̃²(d)) − s_d².

Proposition A.3 derives the asymptotic bias E(s̃²(d)) − s_d² = β m^{−1−2d} + o(m^{−1−2d}) + o((m/n)^{1−2d}) and shows that the stochastic term exhibits the nonstandard asymptotic behavior

    (n/m)^{1−2d} ( s̃²(d) − E(s̃²(d)) ) = n^{−2d} Σ_{j=1}^{n} ( X_j² − E(X_j²) ) + o_p(1) →_d 2π c_0 Z(d).

Thus, the term on the left-hand side above can be approximated by the normalized sum n^{−2d} Σ_{j=1}^{n} (X_j² − E(X_j²)) of the strongly dependent variables X_j², which has a non-Gaussian limit distribution. These relations imply (2.12) and (2.13).
Proof of Theorem 2.2. The condition d̂ − d = o_p(1/log n) allows us to prove the theorem for s̃²(d) and s̄²(d) instead of s̃²(d̂) and s̄²(d̂). For d ≥ 0, convergence (2.15) was shown in Giraitis et al. (2003, Theorem 3.1). For d < 0, write

    m^{2d} s̄²(d) = Σ_{|k|<m} (1 − |k|/m) γ̃_{|k|} + Σ_{|k|<m} (1 − |k|/m) r_{|k|} =: T_1 + T_2,

where

    γ̃_k = n^{−1} Σ_{j=1}^{n−k} (X_j − μ)(X_{j+k} − μ),
    r_k = (1 − k/n)(X̄ − μ)² − n^{−1} (X̄ − μ)( S_{1,n−k} + S_{k+1,n} ),

μ = E(X_j), and S_{l,t} := Σ_{j=l}^{t} (X_j − μ). It suffices to show that

    m^{−2d} T_1 →_p s_d²  and  m^{−2d} T_2 →_p 0.    (A.6)

The verification of the relations m^{−2d} T_2 →_p 0 and m^{−2d} E(T_1) → s_d² is the same as in Giraitis et al. (2003).

To prove the convergence (A.6), it remains to check that E((T_1 − E(T_1))²) = o(m^{4d}). We have E((T_1 − E(T_1))²) ≤ |Q| + |Q′|, where

    Q := Σ_{|k|,|k′|<m} (1 − |k|/m)(1 − |k′|/m) n^{−2} Σ_{j=1}^{n−|k|} Σ_{j′=1}^{n−|k′|} q(j, j′, k, k′),

    q(j, j′, k, k′) := γ_{j−j′} γ_{j−j′+k−k′} + γ_{j−j′−k′} γ_{j−j′+k},

and

    Q′ := n^{−2} Σ_{|k|,|k′|<m} Σ_{j=1}^{n−|k|} Σ_{j′=1}^{n−|k′|} κ(k, j′ − j, j′ − j + k′),

so that

    |Q′| ≤ n^{−2} Σ_{|k|<m} Σ_{j=1}^{n} Σ_{l,s=−∞}^{∞} |κ(k, l, s)| ≤ C m n^{−1} = o(m^{4d}),

by the assumption Σ_{k,l,s=−∞}^{∞} |κ(k, l, s)| < ∞ for d ≤ 0, since −1/2 < d < 0 and m = o(n^{1/2}).

To work out Q, write Q = W + W′, where

    W := Σ_{|k|,|k′|<m} (1 − |k|/m)(1 − |k′|/m) n^{−2} Σ_{j=−∞}^{∞} Σ_{j′=1}^{n−|k′|} q(j, j′, k, k′),

and W′ collects the remaining terms, in which j runs outside {1, …, n − |k|}. We split the summation over j in W′ into three regions: −n ≤ j ≤ n, j > n, and j < −n; in the case of −n ≤ j ≤ n, the order of this part of the sum is straightforward. Since |γ_k| ≤ C k^{−1+2d} =: δ_k for all k ≥ 1, for all |k|, |k′| < m, j > n, and 1 ≤ j′ ≤ n − |k′| we can bound

    |γ_{j−j′}| ≤ δ_{j−n}  and  |γ_{j−j′+k}| ≤ δ_{j−n−m},

and, for all |k|, |k′| < m, j < −n, and 1 ≤ j′ ≤ n − |k′|, we bound

    |γ_{j−j′}| ≤ δ_{|j|}  and  |γ_{j−j′−k′}| ≤ δ_{|j|−m}.

Since Σ_k δ_k < ∞ and Σ_k |γ_k| < ∞, it follows that

    |W′| ≤ C m² n^{−2} = o(m^{4d}),

because −1/2 < d < 0 and m = o(n^{1/2}).

To estimate W, denote h_k := Σ_{j=−∞}^{∞} γ_j γ_{j+k} = 2π ∫_{−π}^{π} e^{iku} f²(u) du. Then

    W = n^{−1} Σ_{|k|,|k′|<m} (1 − |k|/m)(1 − |k′|/m) ( h_{k−k′} + h_{k+k′} ).

It remains to show that

    W = o(m^{4d}).    (A.7)

Note that

    |W| ≤ C n^{−1} ∫_{−π}^{π} ( | Σ_{k=1}^{m} e^{iku} (1 − k/m) |² + 1 ) f²(u) du,    (A.8)

since Σ_{|k|,|k′|<m} (1 − |k|/m)(1 − |k′|/m) ( e^{i(k−k′)u} + e^{i(k+k′)u} ) involves only the squared modulus of Σ_{|k|<m} (1 − |k|/m) e^{iku} and lower-order terms. Summation by parts yields Σ_{k=1}^{m} e^{iku}(1 − k/m) = m^{−1} Σ_{s=1}^{m−1} Σ_{k=1}^{s} e^{iku}, where

    | Σ_{k=1}^{s} e^{iku} | = | sin(su/2) / sin(u/2) | ≤ C s / (1 + s|u|) ≤ C / |u|,

for all s ≥ 1 and |u| ≤ π. Set p := max(4d, −1 + ν), where ν > 0 will be chosen sufficiently small. Then f²(u) ≤ C |u|^{−4d} ≤ C |u|^{−p}, and

    |W| ≤ C n^{−1} ( m^{−1} Σ_{s=1}^{m−1} s² ∫_{−π}^{π} (1 + s|u|)^{−2} |u|^{−p} du + 1 )
       ≤ C n^{−1} ( m^{−1} Σ_{s=1}^{m−1} s^{1+p} ∫_{−∞}^{∞} (1 + |v|)^{−2} |v|^{−p} dv + 1 )
       ≤ C ( (m/n) m^{p} + n^{−1} ) = o(m^{4d}),

for −1/2 < d < 0, when m = o(n^{1/2}) and ν > 0 is sufficiently small.
A.2 Proof of Theorem 3.1

We show first that

    ĉ(d) − c_0 = m^{−1/2} c_0 w_n + O_p((m/n)²) + o_p(m^{−1/2}),    (A.9)

where

    w_n →_d N(0, 1).    (A.10)

Write

    ĉ(d) = c_0 m^{−1} Σ_{j=1}^{m} λ_j^{2d} c_0^{−1} I(λ_j) = c_0 ( Δ_1 + Δ_2 ),

where

    Δ_1 := m^{−1} Σ_{j=1}^{m} ( λ_j^{2d} c_0^{−1} I(λ_j) − 2π I_ε(λ_j) ),  Δ_2 := m^{−1} Σ_{j=1}^{m} 2π I_ε(λ_j),

and I_ε(λ) := (2πn)^{−1} | Σ_{t=1}^{n} ε_t e^{itλ} |² is the periodogram of {ε_t}. Under the assumptions of the theorem, (4.8) of Robinson (1995b) implies that

    Δ_1 = o_p(m^{−1/2}) + O_p((m/n)²).    (A.11)

Note that E(Δ_2) = 1. Write

    m^{1/2} ( Δ_2 − E(Δ_2) ) = m^{1/2} n^{−1} Σ_{t=1}^{n} ( ε_t² − 1 ) + Σ_{t=2}^{n} z_t,

where z_t := ε_t Σ_{s=1}^{t−1} c_{t−s} ε_s and c_s := 2 m^{−1/2} n^{−1} Σ_{j=1}^{m} cos(s λ_j). Note that

    m^{1/2} n^{−1} Σ_{t=1}^{n} ( ε_t² − 1 ) = m^{1/2} n^{−1} O_p(n^{1/2}) = o_p(1).

On the other hand, the variables {z_t}_{t=2}^{n} form a sequence of martingale differences and, using the same argument as checking conditions (4.12) and (4.13) of the martingale central limit theorem in Robinson (1995b), it follows that Σ_{t=2}^{n} z_t →_d N(0, 1). Therefore

    m^{1/2} ( Δ_2 − E(Δ_2) ) →_d N(0, 1),

which, together with (A.11), proves (A.9).

Next, we prove (3.2). By (3.4), d̂ − d = O_p(q^{−1/2}). The mean value theorem implies that

    ŝ²(d̂) = ŝ²(d) + r_n,    (A.12)

where

    r_n := (d̂ − d) (d/dγ) ŝ²(γ)|_{γ=d̃},  d̃ ∈ (d, d̂),  |d̃ − d| ≤ |d̂ − d| = O_p(1/log n).    (A.13)

We have that

    (d/dγ) ŝ²(γ)|_{γ=d̃} = (d/dγ)[ p(γ) ĉ(γ) ]|_{γ=d̃} = p′(d̃) ĉ(d̃) + p(d̃) (d/dγ) ĉ(γ)|_{γ=d̃}.

Note that

    p(d̃) →_p p(d),  p′(d̃) →_p p′(d),

whereas

    (d/dγ) ĉ(γ)|_{γ=d̃} = 2 m^{−1} Σ_{j=1}^{m} λ_j^{2d̃} log(λ_j) I(λ_j) = 2 ( log(λ_m) Δ̃_1 + Δ̃_2 ),

where

    Δ̃_1 := m^{−1} Σ_{j=1}^{m} λ_j^{2d̃} I(λ_j),  Δ̃_2 := m^{−1} Σ_{j=1}^{m} λ_j^{2d̃} log(λ_j / λ_m) I(λ_j).

By (A.13) and Lemma 6.2 of Dalla et al. (2006), it follows that

    Δ̃_1 →_p c_0,  Δ̃_2 →_p c_0 ∫_0^1 log u du = −c_0,  ĉ(d̃) →_p c_0.

Thus,

    r_n = (d̂ − d) [ p′(d) c_0 + 2 log(λ_m) p(d) c_0 − 2 p(d) c_0 ] (1 + o_p(1))
        = 2 (d̂ − d) log(λ_m) s_d² (1 + o_p(1)).    (A.14)

Equation (A.12), together with (A.9) and (A.14), implies (3.2).
A.3 Auxiliary lemmas

Set

    ψ(u) := 2π b(u) f(u),    (A.15)

where b(u) is an even real function and f is the spectral density of {X_j}. Defining

    D_n(u) := Σ_{j=1}^{n} e^{iju} = e^{iu(n+1)/2} sin(nu/2) / sin(u/2),    (A.16)

we have

    |D_n(u)| ≤ C n (1 + n|u|)^{−1},  |u| ≤ 3π/2.    (A.17)

Set ω_n such that

    m ω_n = (m/n)^{1−ν} → 0,    (A.18)

where ν > 0 is a small number.

Lemma A.1. Let d ∈ (−1/2, 1/4) and Ψ(λ) := ∫_{−π}^{π} ψ(u − λ) ψ(u) du. Under the assumptions of Theorem 2.1,

    |Ψ(λ)| ≤ Ψ(0),    (A.19)

    Ψ(0) ∼ 4π² c_0² m ∫_{−∞}^{∞} ( sin(u/2) / (u/2) )⁴ |u|^{−4d} du,  n → ∞,    (A.20)

and, for ω_n satisfying (A.18),

    sup_{|λ| ≤ ω_n} | Ψ(λ) − Ψ(0) | = o(m).    (A.21)

Proof of Lemma A.1. By Cauchy's inequality,

    |Ψ(λ)| ≤ ( ∫_{−π}^{π} ψ²(u − λ) du )^{1/2} ( ∫_{−π}^{π} ψ²(u) du )^{1/2} = ∫_{−π}^{π} ψ²(u) du = Ψ(0),

since ψ(u + 2π) = ψ(u), proving (A.19).

Using the asymptotic approximation f(u) = c_0 |u|^{−2d} (1 + o(1)) together with (A.15) and (A.3), we get, as n → ∞,

    Ψ(0) = ∫_{−π}^{π} ψ²(u) du = 4π² m^{−2−4d} ∫_{−π}^{π} ( sin(mu/2) / sin(u/2) )⁴ f²(u) du ∼ 4π² c_0² m ∫_{−∞}^{∞} ( sin(u/2) / (u/2) )⁴ |u|^{−4d} du,

proving (A.20). By (A.15) and (A.3),

    ψ(u) = 2π b(u) f(u) = 2π m^{−1−2d} ( sin(mu/2) / sin(u/2) )² |u|^{−2d} g(u).

Let λ* maximize |Ψ(λ) − Ψ(0)| in the set {λ : |λ| ≤ ω_n}, and set ρ_n := 2 m ω_n → 0. Then, changing the variable of integration to v = mu,

    m^{−1} sup_{|λ| ≤ ω_n} | Ψ(λ) − Ψ(0) | ≤ ∫_{−π}^{π} m^{−1} | ψ(u − λ*) − ψ(u) | ψ(u) du
     ≤ ∫_{ρ_n ≤ |v| ≤ mπ} m^{−2} | ψ((v − mλ*)/m) − ψ(v/m) | ψ(v/m) dv + ∫_{|v| ≤ ρ_n} m^{−2} | ψ((v − mλ*)/m) − ψ(v/m) | ψ(v/m) dv =: i_1 + i_2.

To prove (A.21), it remains to show that

    i_l → 0,  n → ∞,  l = 1, 2.    (A.22)

We first show (A.22) for l = 1. Observe that, if ρ_n ≤ |v| ≤ mπ, then

    |v − mλ*| ≤ |v| + mω_n ≤ 2|v|  and  |v − mλ*| ≥ |v| − mω_n ≥ |v|/2.

This, together with (1.4), (A.15), and (A.17), implies the bound

    m^{−1} ψ( (v − mλ*)/m ) ≤ C (1 + |v|)^{−2} |v|^{−2d} =: h(v),

which holds for all ρ_n ≤ |v| ≤ mπ. Moreover, for any fixed v ≠ 0, as n → ∞,

    m^{−1} ψ( v/m ) → 2π c_0 ( sin(v/2) / (v/2) )² |v|^{−2d} ≤ C h(v),

since sin²(v/2)/(v/2)² ≤ C(1 + |v|)^{−2}. Therefore, estimating m^{−1} ψ((v − mλ*)/m) ≤ C h(v) and m^{−1} ψ(v/m) ≤ C h(v), it follows that

    m^{−2} | ψ((v − mλ*)/m) − ψ(v/m) | ψ(v/m) ≤ C h²(v),

and, for any fixed v, the left-hand side tends to 0 as n → ∞. Since h²(v) is an integrable function, the dominated convergence theorem implies that i_1 → 0.

To work out i_2, note that in i_2 we integrate over |v| ≤ ρ_n, where ρ_n → 0 as n → ∞. By (A.17) and (1.4), the integrand of i_2 is bounded by C(1 + |v|^{−4d} + |v|^{−2d} |v − mλ*|^{−2d}), so that

    i_2 ≤ C ∫_{|v| ≤ 2ρ_n} ( 1 + |v|^{−4d} ) dv → 0,  n → ∞,

since 4d < 1 and ρ_n → 0.
Lemma A.2. Let d ∈ (1/4, 1/2) and

    Φ(λ) := ∫_{−π}^{π} ψ(u − λ) f(u) du,    (A.23)

where ψ(u) is periodically extended to R. Then, under the assumptions of Theorem 2.1,

    sup_{|λ| ≤ ω_n} | Φ(λ) − Φ(0) | = o((m/n)^{1−2d}) + o(m^{−1−2d})    (A.24)

and

    | Φ(λ) − Φ(0) | ≤ C (m|λ|)^{1−2d},  |λ| ≤ π.    (A.25)
Proof of Lemma A.2. First, we show (A.24). Let |λ| ≤ ω_n, where ω_n is the same as in (A.18). Since Φ(λ) = ∫_{−π}^{π} ψ(v) f(v + λ) dv, writing K(u) := ( sin(mu/2) / sin(u/2) )² we have

    Φ(λ) − Φ(0) = 2π m^{−1−2d} ∫_{−π}^{π} K(u) f(u) ( f(u + λ) − f(u) ) f(u)^{−1} f(u) du
     = ∫_{0 ≤ |u| ≤ 2|λ|} + ∫_{2|λ| ≤ |u| ≤ 1/m} + ∫_{1/m ≤ |u| ≤ π} =: i_1 + i_2 + i_3.    (A.26)

Note the two elementary bounds used repeatedly below: by the mean value theorem and the bounded derivative of g (assumption of Theorem 2.1(b)),

    | f(u + λ) − f(u) | ≤ | |u + λ|^{−2d} − |u|^{−2d} | g(u + λ) + |u|^{−2d} | g(u + λ) − g(u) | ≤ C |λ| |u|^{−2d−1},  2|λ| ≤ |u|,    (A.27)

and

    K(u) ≤ C min( m², |u|^{−2} ).    (A.28)

To work out i_1, note that if |u| < 2|λ| ≤ 2ω_n, then m|u| = o(1) and K(u) = m²(1 + O((m|λ|)²)), so that, using f(u) ≤ C|u|^{−2d},

    |i_1| ≤ C m^{1−2d} ∫_{|u| ≤ 2|λ|} |u|^{−2d} du ≤ C (m|λ|)^{1−2d} = o((m/n)^{1−2d}).

To work out i_2, observe that if |u| ∈ [2|λ|, 1/m], then |u + λ| ≥ |u|/2 as n → ∞, and (A.27) together with K(u) ≤ Cm² gives

    |i_2| ≤ C m^{1−2d} |λ| ∫_{2|λ| ≤ |u| ≤ 1/m} |u|^{−2d−1} du = O((m|λ|)^{1−2d}) = o((m/n)^{1−2d}).

Finally, for 1/m ≤ |u| ≤ π, (A.28) implies that m^{−1−2d} K(u) ≤ C m^{−1−2d} |u|^{−2}, and a direct estimation of the remaining integral gives

    |i_3| = o(m^{−1−2d}).

Thus sup_{|λ| ≤ ω_n} | Φ(λ) − Φ(0) | = o((m/n)^{1−2d}) + o(m^{−1−2d}), completing the proof of (A.24).

Now we show (A.25). Applying in (A.26) the bound (A.28) and, for 2|λ| ≤ |u|, the bound (A.27), we obtain

    | Φ(λ) − Φ(0) | ≤ C m^{1−2d} ( ∫_{|u| ≤ 2|λ|} |u|^{−2d} du + |λ| ∫_{2|λ| ≤ |u| ≤ π} |u|^{−2d−1} du ) ≤ C (m|λ|)^{1−2d},

proving (A.25).
Lemma A.3. Let −1 < α < 1. Then for any δ > 0, μ ∈ R and n ∈ Z,

    r_n(μ) := ∫_{−π}^{π} |u|^{−α} |D_n(u)| |D_n(u + μ)| du ≤ C n^{1+α} (1 + n|μ|)^{−1−min{α,0}+δ}.    (A.29)

Proof of Lemma A.3. It suffices to show that (A.29) holds when δ > 0 is sufficiently small. Write

    r_n(μ) = ∫_{|u| ≤ π/2} + ∫_{π/2 ≤ |u| ≤ π} =: r_1(μ) + r_2(μ).

Using (A.17), we obtain that

    r_1(μ) ≤ C ∫_{−π/2}^{π/2} |u|^{−α} n (1 + n|u|)^{−1} n (1 + n|u + μ|)^{−1} du
           ≤ C n^{1+α} ∫_{−∞}^{∞} |v|^{−α} (1 + |v|)^{−1} (1 + |v + nμ|)^{−1} dv.    (A.30)

First, we prove that (A.29) holds for |nμ| ≥ 1.

a) Let α ∈ [0, 1) and choose δ > 0 such that δ < (1 − α)/2. Then, by (A.30),

    r_1(μ) ≤ C n^{1+α} (1 + |nμ|)^{−1+δ},

which proves (A.29).

b) Let α ∈ (−1, 0) and choose δ > 0 such that α + 1 − δ > 0. Then, by (A.30),

    r_1(μ) ≤ C n^{1+α} (1 + |nμ|)^{−1−α+δ},

which implies (A.29).

If |nμ| ≤ 1, then applying Cauchy's inequality in (A.30) we obtain that

    r_1(μ) ≤ C n^{1+α} ( ∫_{−∞}^{∞} |v|^{−α} (1 + |v|)^{−2} dv )^{1/2} ( ∫_{−∞}^{∞} |v|^{−α} (1 + |v + nμ|)^{−2} dv )^{1/2}
           ≤ C n^{1+α} ≤ C n^{1+α} (1 + |nμ|)^{−1−min{α,0}+δ},

proving (A.29).

If π/2 ≤ |u| ≤ π, then |u|^{−α} |D_n(u)| ≤ C by (A.17), and therefore

    r_2(μ) ≤ C ∫_{−π}^{π} |D_n(u + μ)| du ≤ C ∫_{−π}^{π} n (1 + n|u|)^{−1} du ≤ C log n.

Observing that

    n^{1+α} (1 + |nμ|)^{−1−min{α,0}+δ} ≥ C n^{α−min{α,0}+δ} ≥ C n^{δ} ≥ C log n,

this implies that r_2(μ) satisfies (A.29).
Lemma A.4. Under the assumptions of Theorem 2.1,

    R_n := ∫_{−π}^{π} b(u) ( Ī_n(u) − Ĩ_n(u) ) du

satisfies

    E|R_n| ≤ C [ (m/n)^{1−2d} + (m/n) ].    (A.31)
Proof of Lemma A.4. Denote
() := (2)−12
X
=1
ei(− )
Then,
()= () + ( −¯ )(2)−12()2
= () + ( −¯ )(2)−12[()(−) + (−)()]
+ ( −¯ )2(2)−1()2
Therefore,
 ≤
³
 −¯ −12 + ( −¯ )2−1
Z
−
()()2d
´
(A.32)
where
:=
Z
−
()()(−)d
By (1.2), \(\mathrm{E}\big((\mu - \bar X)^2\big) \le C\,n^{-1+2d}\). We shall show below that
E¡2¢≤
⎧
⎩
⎨
2−42 if 0 ≤ 12
if − 12 0
2−2
(A.33)
To work out the last term in (A.32), observe that, since () ≤ 1−2, then
Z
because
Z
Therefore,
−1
−
()()2d ≤ −11−2
Z
−
()2d ≤ 1−2
−
()2d = 2
E()≤
≤
¡E(( −¯ )2)¢12−12¡E¡2¢¢12+ E(( −¯ )2)(1−2)
−1+¡E¡2¢¢12+ (()1−2)
Applying (A.33), we see that for ≥ 0, E() ≤ ()1−2; whereas for 0,
E() ≤ (() + ()1−2) ≤ (); proving (A.31).
It remains to show (A.33). Note that
E((1)(−2)) = (2)−1
X
Z
=1
ei(1−2)
Z
−
ei(−)()d
= (2)−1
−
(1+ )(−2− )()d
Thus,
E¡2¢
= (2)−1
Z
−
³Z
−
Z
−
(1)(−1)(2)(−2) (A.34)
´
()(−)( + )d
×(1+ )(−2− )d1d2
= (2)−1
−
First, let 0 ≤ d < 1/2. Note that |z(λ)| ≤ Cm^{1−2d} for d ≥ 0, and f(λ) ≤ Cλ^{−2d}
by (1.4). Therefore, applying to (A.34) the result (A.29) with γ = 0 and 0 < ε < 1/2,
()d
Z
¯¯¯
Z
−
¯¯¯
2()d
we obtain that
E¡2¢
≤ 2−4
Z
−
(1 + )−2+2−2d
≤ 2−42
Z∞
−∞
(1 + )−2+2−2d ≤ 2−42
proving (A.33).
Second, let −1/2 < d < 0. Then,
() ≤ −1−2
¯¯¯¯
sin(2)
¯¯¯¯
2+2¯¯¯¯
sin(2)
¯¯¯¯
−2
≤ 2
(A.35)
since |sin(λ/2)| ≤ |λ|/2. Applying to (A.34) the result (A.29) with γ = −2d
and ε > 0 such that 2 + 2d − 2ε > 1, we obtain that
E¡2¢≤ 1−42
to complete the proof of (A.33).
Z
−
(1 + )−2−2+2−2d ≤ −22
A.4 Propositions
Proposition A.1. Let the assumptions of Theorem 2.1 be satisfied and −1/2 < d <
1/4. Then,
\[
(n/m)^{1/2}\Big(\int_{-\pi}^{\pi} z(\lambda)\,I_n(\lambda)\,\mathrm{d}\lambda - \int_{-\pi}^{\pi} z(\lambda)\,f(\lambda)\,\mathrm{d}\lambda\Big) \xrightarrow{\;d\;} N(0, \sigma_d^2), \tag{A.36}
\]
where \(\sigma_d^2\) is given by (2.11).
Proof of Proposition A.1. Define the × matrix E:= (−)=1with
entries :=R
() ≤ 2−
−ei()d and ∈ Z, and denote its Euclidean norm by E :=
−)12. If
(P
=12
 ≤ (A.37)
where 0 ≤ 14, ≥ 0 and, as → ∞,
max{0}log
E
→ 0(A.38)
and
Z
−
()d = (−12E)(A.39)
where () is defined by (A.15), then Corollary 1.2 of Bhansali, Giraitis and
Kokoszka (2007) implies that
√2
E
³Z
−
()()d −
Z
−
()()d
´
→ N(01) (A.40)
Therefore, to verify convergence (A.36), it suffices to check that conditions (A.37)—
(A.39) are satisfied in the case of the function (), defined in equation (A.3).
Suppose that the asymptotic approximation
E2∼ 832
0
Z∞
−∞
µsin(2)
2
¶4
−4d = 222
(A.41)
is valid for d ∈ (−1/2, 1/4). We shall prove this at the end of the proof, but
first we show that it implies the required (A.36).
Assume that d ≥ 0. Then
() ≤ 1−2=: (A.42)
and therefore () has property (A.37) with = 1−2and = 2. Then
max{0}log
E
≤ 1−22log
()12
≤ ()12−2log → 0
since d < 1/4 and m satisfies (2.3). Thus (A.38) holds. On the other hand, from the
definitions of , () and assumption (1.4), it follows that
Z
−
()d≤
Z
Z∞
−
−1−2
µsin(2)
µsin(2)
¶2
sin(2)
¶2
−2d (A.43)
≤
−∞
2
−2d ≤
which, together with (A.41), implies (A.39). Hence assumptions (A.37)—(A.39) are
satisfied and (A.40), together with (A.41), implies (A.36).
Assume now that −1/2 < d < 0. Then, (A.35) shows that (A.37) holds with
= and = 0. To check (A.38), observe that
max{0}log
E
≤ log
()12≤ ()12log → 0
by (A.41) and (2.3), whereas (A.39) follows from (A.43) and (A.41). Hence, the
assumptions (A.37)—(A.39) are satisfied, and (A.36) follows from (A.40) and (A.41).
It remains to show (A.41). Write
\[
\|E_n\|^2 = \sum_{t,s=1}^{n} e_{t-s}^2 = \int_{-\pi}^{\pi}\int_{-\pi}^{\pi}\Big|\sum_{t=1}^{n} e^{\mathrm{i}t(\lambda+\omega)}\Big|^2 z(\lambda)\,z(\omega)\,\mathrm{d}\lambda\,\mathrm{d}\omega = \int_{-\pi}^{\pi}\Big|\sum_{t=1}^{n} e^{\mathrm{i}t\lambda}\Big|^2 \tilde z(\lambda)\,\mathrm{d}\lambda = \int_{-\pi}^{\pi}\Big(\frac{\sin(n\lambda/2)}{\sin(\lambda/2)}\Big)^2 \tilde z(\lambda)\,\mathrm{d}\lambda,
\]
where
\[
\tilde z(\lambda) = \int_{-\pi}^{\pi} z(\lambda-\omega)\,z(\omega)\,\mathrm{d}\omega \tag{A.44}
\]
and \(\tilde z(\lambda)\) is periodically extended to \(\mathbb{R}\). Then
\[
\|E_n\|^2 = \int_{-\pi}^{\pi}\Big|\sum_{t=1}^{n} e^{\mathrm{i}t\lambda}\Big|^2 \tilde z(0)\,\mathrm{d}\lambda + \int_{-\pi}^{\pi}\Big|\sum_{t=1}^{n} e^{\mathrm{i}t\lambda}\Big|^2 \big(\tilde z(\lambda) - \tilde z(0)\big)\,\mathrm{d}\lambda =: i_1 + i_2.
\]
By (A.20),
1= 2(0) ∼ 1632
0
Z∞
0
µsin(2)
2
¶4
−4d
Finally, we need to show that
2= () (A.45)
By (A.18) and (A.21),
Z
≤
¯¯¯
X
=1
ei¯¯¯
2() − (0)d = ()
Z
≤
¯¯¯
X
=1
ei¯¯¯
2d = ()
whereas by (A.19)—(A.20), () − (0) ≤ 2(0) = (), and therefore
Z
=1
≤≤
¯¯¯
X
ei¯¯¯
2() − (0)d = ()
since
Z
≤≤
µsin(2)
sin(2)
¶2
d ≤
Z
≤≤∞
µsin(2)
2
¶2
d = ()
because the lower limit of integration tends to infinity. This completes the proof of (A.45).
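The kernel identity used repeatedly in the proof above, |Σ_{t=1}^n e^{itλ}|² = (sin(nλ/2)/sin(λ/2))², can be verified numerically. The sketch below is a plain-Python check of that identity (it is an illustration, not code from the paper; the function names are ours).

```python
import cmath
import math

def dirichlet_sq(n, lam):
    """|sum_{t=1}^n exp(i*t*lam)|^2, computed directly from the sum."""
    s = sum(cmath.exp(1j * t * lam) for t in range(1, n + 1))
    return abs(s) ** 2

def fejer_form(n, lam):
    """Closed form (sin(n*lam/2)/sin(lam/2))^2; requires lam != 0 mod 2*pi."""
    return (math.sin(n * lam / 2) / math.sin(lam / 2)) ** 2

# Check the identity on a grid of frequencies away from the origin.
n = 25
for k in range(1, 10):
    lam = 0.3 * k
    assert abs(dirichlet_sq(n, lam) - fejer_form(n, lam)) < 1e-8
```

The closed form is why bounds such as (A.17) control the Dirichlet kernel: the numerator is bounded while the denominator behaves like λ/2 near the origin.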
Proposition A.2. Assume that the spectral density has the property
\[
f(\lambda) = b_0\,\lambda^{-2d}\,\big(1 + O(\lambda^{2})\big) \quad \text{as } \lambda \to 0,
\]
with |d| < 1/2. Then, as m → ∞,
\[
\int_{-\pi}^{\pi} \varphi(\lambda)\,f(\lambda)\,\mathrm{d}\lambda = \sigma_d^2 + \beta\,m^{-1-2d} + o(m^{-1-2d}), \tag{A.46}
\]
where \(\beta\) is given in (2.6).
Proof of Proposition A.2. Set
\[
f_1(\lambda) := \frac{b_0\,\lambda^{-2d}}{(\lambda/2)^2}, \qquad f_2(\lambda) := \frac{f(\lambda)}{\sin^2(\lambda/2)} - f_1(\lambda).
\]
Then, we can write
\[
J_m := \int_{-\pi}^{\pi} \varphi(\lambda)\,f(\lambda)\,\mathrm{d}\lambda = m^{-1-2d}\int_{-\pi}^{\pi} \sin^2(m\lambda/2)\,\frac{f(\lambda)}{\sin^2(\lambda/2)}\,\mathrm{d}\lambda = m^{-1-2d}\int_{-\pi}^{\pi} \sin^2(m\lambda/2)\,\big(f_1(\lambda) + f_2(\lambda)\big)\,\mathrm{d}\lambda = m^{-1-2d}\,(i_1 - 2 i_2 + i_3),
\]
where
1=
Z∞
−∞
sin2(2)1()d = 1+2
Z∞
−∞
sin2(2)0−2
(2)2d = 1+22
and
2=
Z∞
sin2(2)1()d3=
Z
−
sin2(2)2()d
Since sin²(λ/2) = (1 − cos λ)/2 and f₁ is an integrable function on [π, ∞),
\[
i_2 = \int_{\pi}^{\infty} \sin^2(m\lambda/2)\, f_1(\lambda)\,\mathrm{d}\lambda = \frac{1}{2}\int_{\pi}^{\infty} f_1(\lambda)\,\mathrm{d}\lambda - \frac{1}{2}\int_{\pi}^{\infty} \cos(m\lambda)\, f_1(\lambda)\,\mathrm{d}\lambda \to \frac{1}{2}\int_{\pi}^{\infty} f_1(\lambda)\,\mathrm{d}\lambda, \qquad m \to \infty,
\]
by the Riemann–Lebesgue lemma. Bearing in mind that, by assumption (1.4), f(λ) = b₀λ^{−2d}(1 + O(λ²)) and sin(λ) = λ(1 + O(λ²)) as λ → 0, it follows that |f₂(λ)| ≤ Cλ^{−2d} for all |λ| ≤ π. Since λ^{−2d} is an integrable function on [−π, π],
\[
i_3 = \int_{-\pi}^{\pi} \sin^2(m\lambda/2)\, f_2(\lambda)\,\mathrm{d}\lambda = \frac{1}{2}\int_{-\pi}^{\pi} f_2(\lambda)\,\mathrm{d}\lambda - \frac{1}{2}\int_{-\pi}^{\pi} \cos(m\lambda)\, f_2(\lambda)\,\mathrm{d}\lambda \to \frac{1}{2}\int_{-\pi}^{\pi} f_2(\lambda)\,\mathrm{d}\lambda, \qquad m \to \infty,
\]
again by the Riemann–Lebesgue lemma. The estimates of \(i_k\), \(k = 1, 2, 3\), imply (A.46).
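Proposition A.2 concerns the expectation of a Bartlett-kernel estimate of the long-run variance. As an illustration only, the following sketch computes a generic time-domain Bartlett estimator, σ̂² = Σ_{|k|<m}(1 − |k|/m)γ̂(k), on simulated i.i.d. data, where the long-run variance equals the ordinary variance. This is the short-memory normalization; the paper's extended HAC estimator additionally rescales by a power of the bandwidth to accommodate d ≠ 0, which this sketch does not do.

```python
import random

def bartlett_lrv(x, m):
    """Bartlett-kernel long-run variance estimate with bandwidth m."""
    n = len(x)
    mean = sum(x) / n
    xc = [v - mean for v in x]

    def acov(k):
        # Sample autocovariance at lag k (divided by n, the usual convention).
        return sum(xc[t] * xc[t + k] for t in range(n - k)) / n

    s2 = acov(0)
    for k in range(1, m):
        s2 += 2.0 * (1.0 - k / m) * acov(k)
    return s2

random.seed(0)
x = [random.gauss(0.0, 1.0) for _ in range(20000)]
est = bartlett_lrv(x, m=50)
# For i.i.d. N(0,1) data the long-run variance is 1.
assert abs(est - 1.0) < 0.3
```

The triangular (Bartlett) weights guarantee a non-negative estimate, which is why this kernel is the one extended in the paper.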
Proposition A.3. Let the assumptions of Theorem 2.1 be satisfied and 1/4 < d <
1/2. Define
\[
J_n := m^{1-2d}\int_{-\pi}^{\pi} \varphi(\lambda)\,I_n(\lambda)\,\mathrm{d}\lambda. \tag{A.47}
\]
Then, as n → ∞,
\[
\mathrm{var}\big(\tilde\sigma^2(m) - J_n\big) = o\big((m/n)^{2-4d}\big), \tag{A.48}
\]
\[
(n/m)^{1-2d}\big(J_n - \mathrm{E}(J_n)\big) \xrightarrow{\;d\;} 2\pi b_0\,Z(d), \tag{A.49}
\]
where Z(d) is given by (2.7), and
\[
\mathrm{E}\big(\tilde\sigma^2(m)\big) = \sigma_d^2 + \beta\,m^{-1-2d} + o\big((m/n)^{1-2d} + m^{-1-2d}\big), \tag{A.50}
\]
where \(\beta\) is given by (2.6).
Proof of Proposition A.3. Proof of (A.48). Set
(12) :=
X
=1
( − )−1−2
where are the coefficients of the linear process ,
Z
and () is defined in (A.16). For simplicity, let = 0 ≤ −1. Then,
Z
() :=
−
ei()d() = ()2−2− 1
e 2
Note that if
() − =
−
(() − 1−2)()d = 1−2(2)−1
X
12∈Z
(12)12
=
X
12∈Z
(12)¡12− E(12)¢
is a quadratic form with real coefficients (12) where {} is a sequence of i.i.d.
random variables with zero mean and finite fourth moment, then
X
Therefore, by (A.51),
var() ≤
12∈Z
2(12) (A.51)
var(e 2
() − ) ≤ 2−4−2
X
12∈Z
(12)2=:
Since the coefficients, indexed over Z, are square summable by Assumption L, there exists a function
b() ≤ , such that =R
(12)=
−
×
−
for 12∈ Z. By Parseval’s equality,
2−4−2
−
¯¯¯¯
−1→ 0. Such exists because satisfies (2.3). Since b()2= (22)() ≤
≤ 2−4−2(1+ 2)
−eib()d, ∈ Z. Therefore,
e−i11e−i12
Z
Z
Z
ei1ei2b(1)b(2)
−
h
()( + 1)(− + 2)d
i
d1d2
=
Z
Z
−
d1d2b(1)2b(2)2
×
Z
−
d()(1+ )(2− )
¯¯¯¯
2
Set 1 := { ≤ −1} and 2 := {−1  ≤ }, where 0 is such that
−2 we obtain that
(A.52)
where
:=
Z
−
Z
−
d1d21−22−2
¯¯¯¯
Z
d()(1+)(2−)
¯¯¯¯
2
= 12
We show that
= (4) → ∞ = 12 (A.53)
which, together with (A.52), proves (A.48). To work out 1, note that if ∈ 1,
then  ≤ −1→ 0, and therefore
¯¯¯
Applying (A.29) of Lemma A.3, we see that
¯¯¯¯
sup
∈1
() = sup
∈1
µsin(2)
sin(2)
¶2
− 1
¯¯¯ = (1)
Z
1
d()(1+ )(2− )
¯¯¯¯
2
= (1)
¯¯¯¯
Z
1
d(1+ )(2− )
= (1)2(1 + (1+ 2))−2+20
¯¯¯¯
2
with some 0 0 14. Then,
1
= (1)
Z
−
Z
Z∞
−
1−22−22(1 + (1+ 2))−2+20d1d2
Z∞
= (1)4
−∞ −∞
1−22−2(1 + 1+ 2)−2+20d1d2= (4)
since the last integral is finite when 4d > 1.
To work out 2, recall (A.16) and write
Z
Since () ≤ , we can bound 2≤ (1+ 2), where
Z
and 1:= {2−1≤  ≤ }, 2:= { ≤ 2−1}.
Now,
Z
≤
1
1
³Z
2=
2
Z
2
d1d2(1)(2)
¯¯¯¯
Z
−
d−2(1+ )(2− )
¯¯¯¯
2
=
2
Z
2
d1d2
¯¯¯¯
Z
d−2(1+ )(2− )
¯¯¯¯
2
= 12
1
≤
−
Z
Z
−
d1d2
¯¯¯¯
Z
1
d−2(1+ )(2− )
¯¯¯¯
2
Z
1−22−2(1+ 2)2d1d2
≤
2−1≤12≤1
1−22−2(1+ 2)2d1d2
+
Z
1≤1≤
Z
−
2−2(1+ 2)2d1d2
´
=: (0
+ 00
)
Using the bound (A.17), we have
Z
≤ 4
0
≤
2−1≤12≤1
Z
1−22−22(1 + 1+ 2)−2d1d2
2≤12∞
1−22−2(1 + 1+ 2)−2d1d2= (4)
since → ∞ and the last integral is finite when 4d > 1.
On the other hand,
Z
since 4d > 1. Thus 1= (4).
It remains to show that 2= (4). By (A.17),
Z
Z
=: 4
00
≤
0≤2≤
2−2d2
Z
0≤≤2
()2d = () = (4)
2≤
−1≤12≤
d1d2
¯¯¯¯
Z
≤2−1d−2(1 + 1+ )−1(1 + 2− )−1
¯¯¯¯
¯¯¯¯
2
≤ 4
≤12∞
d1d2
Z
≤2d−2(1 + 1+ )−1(1 + 2− )−1
¯¯¯¯
2
It suffices to show that → 0. As → ∞, in the integral above we can apply the
bound ±  ≥  −  ≥ 2, = 12. Therefore
Z
≤
×
≤2d−2(1 + 1+ )−1(1 + 2+ )−1
(1 + 1)−12−2(1 + 2)−12−2
Z
(1 + 1)−12−2(1 + 2)−12−2
≤2d−2³
(1 + 1+ )−1++ (1 + 2+ )−1+´
≤
for 12. Thus,
≤
Z
≤12≤∞
(1 + 1)−1−(1 + 2)−1−d1d2→ 0
which completes the proof of (A.53) and (A.48).
Proof of (A.49). By (A.47), = ()1−2−2P
()1−2(− E()) = −2
=12
Therefore,
X
=1
(2
− E¡2
¢)
By Theorem 2.1 in Giraitis, Taqqu and Terrin (1998),
−2
X
=1
(2
− E¡2
¢)
−→ 0
Z00
R2Ψ0(12)1−2−(d1)(d2)
with
Ψ0(12) :=
Z
R
ei(1+)− 1
i(1+ )
ei(2−)− 1
i(1− )
d = 2ei(1+2)− 1
i(1+ 2)
Hence,
()1−2(− E())
−→ 20()
which proves (A.49).
Proof of (A.50). By (2.1),
Z
= (2)−1
E¡e 2
()¢
=
−
()E(())d = (2)−1
µsin(2)
Z
−
Z
−
()()( + )2dd
Z
−
sin(2)
¶2
()d
where () is defined in (A.23). Then,
E¡e 2
()¢
= (2)−1
Z
−
Z
()2(0)d
+(2)−1
−
()2(() − (0))d =: 1+ 2
By Proposition A.2,
1= (0) = 2
+ −1−2 + (−1−2)
To prove (A.50), it remains to show that
\[
i_2 = o\big(m^{-1-2d}\big) + o\big((m/n)^{1-2d}\big). \tag{A.54}
\]
Set := ()where 0 is a small number satisfying (A.18). By (A.24),
Z
= [(()1−2) + (−1−2)]−1
≤
−1
≤
()2() − (0)d
Z
()2d = [(()1−2) + (−1−2)]
whereas by (A.25), () − (0) ≤ ()1−2, and therefore
Z
=1
−1
≤≤
¯¯¯
X
ei¯¯¯
2() − (0)d
≤ 1−2−1
Z
Z
≤≤
µsin(2)
µsin(2)
2
¶2
1−2d
≤ ()1−2
≤≤∞
2
¶2
1−2d = (()1−2)
since the lower limit of integration tends to infinity, which proves (A.54).
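The MAC estimator studied above works in the frequency domain from periodogram ordinates near the origin. In the short-memory special case d = 0, 2π times the average of the first m periodogram ordinates estimates the long-run variance. The sketch below checks that special case numerically (an illustration only, with our own helper names; it is not the MAC estimator itself, which also corrects for d ≠ 0).

```python
import cmath
import math
import random

def periodogram(x, lam):
    """I(lam) = |sum_t x_t exp(i*t*lam)|^2 / (2*pi*n)."""
    n = len(x)
    s = sum(x[t] * cmath.exp(1j * (t + 1) * lam) for t in range(n))
    return abs(s) ** 2 / (2 * math.pi * n)

random.seed(1)
n = 4000
x = [random.gauss(0.0, 1.0) for _ in range(n)]

m = 200  # number of Fourier frequencies used near zero
freqs = [2 * math.pi * j / n for j in range(1, m + 1)]
est = 2 * math.pi * sum(periodogram(x, lam) for lam in freqs) / m

# For i.i.d. N(0,1) data, f(0) = 1/(2*pi), so the long-run variance is 1.
assert abs(est - 1.0) < 0.35
```

At nonzero Fourier frequencies the sample mean drops out of the periodogram, which is why frequency-domain estimators of this type need no explicit demeaning.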
Acknowledgments
We are grateful for the feedback of seminar participants at GREQAM, Imperial,
Liverpool, Oxford, Tilburg, and the 2006 International Vilnius Conference on
Probability Theory and Mathematical Statistics. We also thank two referees and
Peter Robinson for their comments. This research is supported by ESRC grants
R000239538, RES000230176, and RES062230790.
References

Abadir, K. M., Distaso, W., Giraitis, L., 2007, Nonstationarity-extended local Whittle estimation. Journal of Econometrics 141, 1353–1384.

Abadir, K. M., Talmain, G., 2002, Aggregation, persistence and volatility in a macro model. Review of Economic Studies 69, 749–779.

Abadir, K. M., Taylor, A. M. R., 1999, On the definitions of (co)integration. Journal of Time Series Analysis 20, 129–137.

Andrews, D. W. K., Monahan, J. C., 1992, An improved heteroskedasticity and autocorrelation consistent covariance matrix estimator. Econometrica 60, 953–966.

Baillie, R. T., Bollerslev, T., 1994, Cointegration, fractional cointegration, and exchange rate dynamics. Journal of Finance 49, 737–745.

Bhansali, R. J., Giraitis, L., Kokoszka, P., 2007, Decomposition and asymptotic properties of quadratic forms in linear variables. Stochastic Processes and their Applications 117, 71–95.

Cavaliere, G., 2001, Testing the unit root hypothesis using generalized range statistics. Econometrics Journal 4, 70–88.

Chambers, M. J., 1998, Long memory and aggregation in macroeconomic time series. International Economic Review 39, 1053–1072.

Dalla, V., Giraitis, L., Hidalgo, J., 2006, Consistent estimation of the memory parameter for nonlinear time series. Journal of Time Series Analysis 27, 211–251.

Diebold, F. X., Rudebusch, G. D., 1989, Long memory and persistence in aggregate output. Journal of Monetary Economics 24, 189–209.

Gil-Alaña, L. A., Robinson, P. M., 1997, Testing of unit root and other nonstationary hypotheses in macroeconomic time series. Journal of Econometrics 80, 241–268.

Giraitis, L., Kokoszka, P., Leipus, R., Teyssière, G., 2003, Rescaled variance and related tests for long memory in volatility and levels. Journal of Econometrics 112, 265–294.

Giraitis, L., Taqqu, M. S., Terrin, N., 1998, Limit theorems for bivariate Appell polynomials. Part II: Non-central limit theorems. Probability Theory and Related Fields 110, 333–367.

Hannan, E. J., 1957, The variance of the mean of a stationary process. Journal of the Royal Statistical Society B 19, 282–285.

Jowett, G. H., 1955, The comparison of means of sets of observations from sections of independent stochastic series. Journal of the Royal Statistical Society B 17, 208–227.

Newey, W. K., West, K. D., 1987, A simple, positive semi-definite, heteroskedasticity and autocorrelation consistent covariance matrix. Econometrica 55, 703–708.

Robinson, P. M., 1995a, Log-periodogram regression of time series with long range dependence. Annals of Statistics 23, 1048–1072.

Robinson, P. M., 1995b, Gaussian semiparametric estimation of long range dependence. Annals of Statistics 23, 1630–1661.

Robinson, P. M., 1997, Large-sample inference for nonparametric regression with dependent errors. Annals of Statistics 25, 2054–2083.

Robinson, P. M., 2005, Robust covariance matrix estimation: HAC estimates with long memory/antipersistence correction. Econometric Theory 21, 171–180.

Velasco, C., Robinson, P. M., 2001, Edgeworth expansions for spectral density estimates and studentized sample mean. Econometric Theory 17, 497–539.

White, H., 1980, A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica 48, 817–838.
Table 1: MSE of the HAC estimator. Entries are reported for sample sizes n = 250, 500, 1000; memory parameters d = −0.4, −0.2, 0, 0.2, 0.4; a short-memory parameter taking the values −0.5, 0 and 0.5; and bandwidths m = ⌊n^0.2⌋, ⌊n^0.3⌋, …, ⌊n^0.8⌋.
Table 2: MSE of the HAC estimator when the bandwidth is chosen according to (2.14). Entries are reported for n = 250, 500, 1000; d = −0.4, −0.2, 0, 0.2, 0.4; and a short-memory parameter taking the values −0.5, 0 and 0.5.
Table 3: MSE of the MAC estimator. Entries are reported for n = 250, 500, 1000; d = −0.4, −0.2, 0, 0.2, 0.4; a short-memory parameter taking the values −0.5, 0 and 0.5; and bandwidths m = ⌊n^0.5⌋, ⌊n^0.6⌋, ⌊n^0.7⌋, ⌊n^0.8⌋.
Table 4: Coverage probabilities for the mean, based on the t-ratio (4.1) and the HAC estimator with bandwidth chosen according to (2.14). Entries are reported for n = 250, 500, 1000; d = −0.4, −0.2, 0, 0.2, 0.4; and a short-memory parameter taking the values −0.5, 0 and 0.5.
Table 5: Coverage probabilities for the mean, based on the t-ratio (4.1) and the MAC estimator. Entries are reported for n = 250, 500, 1000; d = −0.4, −0.2, 0, 0.2, 0.4; a short-memory parameter taking the values −0.5, 0 and 0.5; and bandwidths m = ⌊n^0.5⌋, ⌊n^0.6⌋, ⌊n^0.7⌋, ⌊n^0.8⌋.