Which Lag Length Selection Criteria Should We Employ?
Venus Khim-Sen Liew
Universiti Putra Malaysia
Abstract
Estimating the lag length of the autoregressive process for a time series is a crucial econometric
exercise in most economic studies. This study attempts to provide helpful guidelines
regarding the use of lag length selection criteria in determining the autoregressive lag length.
The most interesting finding of this study is that Akaike's information criterion (AIC) and the
final prediction error (FPE) are superior to the other criteria under study in the case of
small samples (60 observations and below), in the sense that they minimize the chance of
underestimation while maximizing the chance of recovering the true lag length. One
immediate econometric implication of this study is that, as most economic sample data can
seldom be considered "large" in size, AIC and FPE are recommended for the estimation of the
autoregressive lag length.
Citation: Liew, Venus Khim-Sen, (2004) "Which Lag Length Selection Criteria Should We Employ?" Economics
Bulletin, Vol. 3, No. 33, pp. 1-9
Submitted: May 16, 2004. Accepted: September 17, 2004.
URL: http://www.economicsbulletin.com/2004/volume3/EB−04C20021A.pdf
1. Introduction
It is well known that most economic data are time series in nature and that a popular kind
of time series model known as the autoregressive (AR) model has been directly or indirectly
applied in most economic research. The foremost exercise in the application of an AR
model is none other than the determination of the autoregressive lag length. In this
respect, many lag length selection criteria have been employed in economic studies to
determine the AR lag length of time series variables. Briefly, an AR process of lag length p
refers to a time series whose current value depends on its first p lagged values and is
normally denoted AR(p). Note that the AR lag length p is always unknown and therefore
has to be estimated via various lag length selection criteria such as Akaike's information
criterion (AIC) (Akaike 1973), the Schwarz information criterion (SIC) (Schwarz 1978), the
Hannan-Quinn criterion (HQC) (Hannan and Quinn 1979), the final prediction error (FPE)
(Akaike 1969), and the Bayesian information criterion (BIC) (Akaike 1979); see Liew (2000)
for an overview of these criteria. These criteria, especially the AIC, have been widely
adopted in economic studies; see, for example, the works of Sarantis (1999, 2001), Baum et
al. (2001), Baharumshah et al. (2002), Ng (2002) and Tang (2003), who employed the AIC;
Sarno and Taylor (1998), who employed the AIC and SIC; Ahmed (2000), who used the AIC
and BIC; Yamada (2000), who used the AIC and HQC; Tan and Baharumshah (1999) and
Ibrahim (2001), who deployed the FPE; and Dropsy (1996), Azali et al. (2001) and Xu (2003),
who utilized the SIC in their empirical research. However, no special study has been devoted
to contrasting the performances of these lag length selection criteria, although a few empirical
studies (Taylor and Peel 2000, Baum et al. 2001, Guerra 2001) do note the inconsistency of
these criteria and their tendency to underestimate the autoregressive lag length.¹ This
simulation study is specially conducted to compare the empirical performances of various
lag length selection criteria, with the principal objective of discovering the best choice of
lag length criterion, an issue which has substantial econometric impact on most empirical
economic studies.
The major findings of the current simulation study are previewed as follows. First,
these criteria manage to pick the correct lag length at least half of the time in small
samples. Second, this performance improves substantially as the sample size grows. Third,
with relatively large samples (120 or more observations), HQC is found to outdo the rest
in correctly identifying the true lag length. In contrast, AIC and FPE are a better
choice for smaller samples. Fourth, AIC and FPE are found to produce the lowest
probability of underestimation among all criteria under study. Finally, the problem of
overestimation is negligible in all cases. The findings of this simulation study,
besides providing formal groundwork supportive of the popular choice of AIC in
previous empirical research, may also serve as useful guiding principles for future
economic research in the determination of the autoregressive lag length.
The rest of this paper is organized as follows. Section 2 briefly describes the AR
process, the lag length selection criteria and simulation procedure. Section 3 presents and
discusses the results of this simulation study. Section 4 offers a summary of this study.
¹ A related work by Liew (2000) studies the performance of an individual criterion, namely Akaike's
bias-corrected information criterion (AICC). The current study is more comprehensive than Liew (2000)
in the sense that more criteria are involved for the purpose of comparative study.
2. Methodology of Study
2.1 Autoregressive process
Mathematically, an AR(p) process of a series $y_t$ may be represented by
$$y_t = a_1 y_{t-1} + a_2 y_{t-2} + \dots + a_p y_{t-p} + \varepsilon_t, \qquad (1)$$
where $a_1, a_2, \dots, a_p$ are autoregressive parameters and $\varepsilon_t$ are normally distributed random
error terms with zero mean and finite variance $\sigma^2$.
The estimation of an AR(p) process involves two stages. First, identify the AR lag
length p based on certain rules such as lag length selection criteria. Second, estimate the
numerical values of the intercept and parameters using regression analysis. This study is
confined to the performance of various commonly used lag length selection criteria in
identifying the true lag length p. In particular, this study generates AR processes with p
arbitrarily fixed at a value of 4 and uses these criteria to determine the lag length of each
generated series as if the lag length were unknown. The autoregressive parameters are
independently generated from a uniform distribution with values ranging from -1 to 1
exclusively. Measures are taken to ensure that the sum of these simulated autoregressive
parameters is less than unity in magnitude ($|a_1 + a_2 + a_3 + a_4| < 1$) so as to avoid a
non-stationary AR process. The error term is generated from the standard normal
distribution. We simulate data sets for various usable sample sizes, S: 30, 60, 120, 240,
480 and 960. For each combination of process and sample size, we simulate 1000
independent series for the purpose of lag length estimation. In every case, the initial value
$y_0$ is arbitrarily set to zero. In an effort to minimize the initial-value effect, we simulate
3S observations and discard the first 2S observations, leaving the last S observations for
lag length estimation. The estimated lag length $\hat{p}$ is allowed to take any integer value
from 1 to 20 inclusive. In this respect, we compute the values of each specific criterion for
all 20 candidate lag lengths, and $\hat{p}$ is taken as the one that minimizes that criterion.
Note that each criterion independently selects one $\hat{p}$ for the same simulated series.
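As an illustration only (the paper does not provide code), the following Python sketch reproduces the data-generating step just described: draw $a_1,\dots,a_4$ from a uniform distribution on $(-1, 1)$ subject to $|a_1 + a_2 + a_3 + a_4| < 1$, simulate 3S observations of the AR(4) process in Equation (1) starting from zero initial values, and retain only the last S observations. The function names and the use of NumPy are our own assumptions.

```python
import numpy as np

def draw_ar4_coefficients(rng):
    """Draw a_1,...,a_4 independently from U(-1, 1) until |a_1 + ... + a_4| < 1."""
    while True:
        a = rng.uniform(-1.0, 1.0, size=4)
        if abs(a.sum()) < 1.0:          # condition stated in the text to avoid a non-stationary AR process
            return a

def simulate_ar4(S, rng):
    """Simulate 3S observations of the AR(4) process in Equation (1); keep the last S."""
    a = draw_ar4_coefficients(rng)
    n = 3 * S
    y = np.zeros(n + 4)                 # four leading zeros act as the zero initial values
    eps = rng.standard_normal(n)        # standard normal error terms
    for t in range(n):
        y[t + 4] = a @ y[t:t + 4][::-1] + eps[t]   # a_1*y_{t-1} + ... + a_4*y_{t-4} + eps_t
    return y[4:][-S:]                   # discard the first 2S observations (burn-in)

rng = np.random.default_rng(0)          # seed chosen arbitrarily for reproducibility
sample = simulate_ar4(S=60, rng=rng)
print(sample.shape)                     # (60,)
```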
2.2 Lag length selection criteria
The lag length selection criteria to be evaluated include the following:²
(a) Akaike's information criterion, $\mathrm{AIC}_p = \ln(\hat{\sigma}^2_p) + 2p/T$; (2)
(b) Schwarz information criterion, $\mathrm{SIC}_p = \ln(\hat{\sigma}^2_p) + p\ln(T)/T$; (3)
(c) Hannan-Quinn criterion, $\mathrm{HQC}_p = \ln(\hat{\sigma}^2_p) + 2p\ln[\ln(T)]/T$; (4)
(d) the final prediction error, $\mathrm{FPE}_p = \hat{\sigma}^2_p\,(T + p)/(T - p)$; and (5)
² Other criteria not taken up in this study include the following. First, the Schwert (1987, 1989) criteria,
defined as $[4(S/100)^{0.25}]$ and $[12(S/100)^{0.25}]$ respectively, where S denotes the sample size and $[A]$ stands for
the integer part of the real number A; see, for instance, Habibullah (2001) and Habibullah and Baharumshah
(2001) for their applications. Second, Akaike's corrected information criterion (AICC), which augments the AIC
with a small-sample adjustment to the penalty term; see Liew (2000) for a simulation study of its performance
as well as its application. Last but not least, the partial autocorrelation function, as applied in, among others,
Taylor and Peel (2000), Guerra (2001) and Liew et al. (2003).
(e) Bayesian information criterion,
$$\mathrm{BIC}_p = (T - p)\ln\!\left[\frac{T\hat{\sigma}^2_p}{T - p}\right] + T\left[1 + \ln\sqrt{2\pi}\right] + p\ln\!\left[\frac{\sum_{t=1}^{T} y_t^2 - T\hat{\sigma}^2_p}{p}\right], \qquad (6)$$
where $\hat{\sigma}^2_p = (T - p)^{-1}\sum_{t=p+1}^{T}\hat{\varepsilon}_t^2$, $\hat{\varepsilon}_t$ denotes the model's residuals and T is the sample size.
Note that the cap sign (^) indicates an estimated value. Liew (2000) provides an
overview of these criteria, whereas details are given in, for instance, Brockwell and
Davis (1996) and the references therein.
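To make the selection rule concrete, the sketch below (our illustration, not the author's code) fits an AR(p) model by OLS without an intercept, in line with Equation (1), computes $\hat{\sigma}^2_p$ and the AIC, SIC, HQC and FPE formulas given above (BIC is omitted for brevity), and returns, for each criterion, the $\hat{p}$ that minimizes it over the candidate lags. The cap on the candidate range for very small samples is a numerical safeguard of ours, not part of the paper's design.

```python
import numpy as np

def fit_ar_ols(y, p):
    """OLS fit of an AR(p) model without intercept; returns sigma_hat^2_p = RSS/(T - p)."""
    T = len(y)
    X = np.column_stack([y[p - k - 1: T - k - 1] for k in range(p)])   # lags 1,...,p
    Y = y[p:]
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ coef
    return resid @ resid / (T - p)

def criteria(y, p):
    """Evaluate AIC, SIC, HQC and FPE for candidate lag length p."""
    T = len(y)
    s2 = fit_ar_ols(y, p)
    return {
        "AIC": np.log(s2) + 2 * p / T,
        "SIC": np.log(s2) + p * np.log(T) / T,
        "HQC": np.log(s2) + 2 * p * np.log(np.log(T)) / T,
        "FPE": s2 * (T + p) / (T - p),
    }

def select_lags(y, max_p=20):
    """Return, for each criterion, the lag in 1..max_p that minimises it."""
    T = len(y)
    p_grid = range(1, min(max_p, T // 3) + 1)   # safeguard so each fit keeps ample degrees of freedom
    table = {p: criteria(y, p) for p in p_grid}
    return {c: min(table, key=lambda p: table[p][c]) for c in ("AIC", "SIC", "HQC", "FPE")}
```

Applied to a series produced by the earlier sketch, for example `select_lags(simulate_ar4(S=240, rng=rng))`, this returns one selected lag per criterion; with the true lag length fixed at 4, each returned value can then be classified as a correct estimate, an underestimate or an overestimate.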
The main task of this study is to compute the probability with which each of these criteria
correctly estimates the true autoregressive lag length. Note that this probability takes a
value between zero and one inclusive: a probability of zero means that the criterion fails
to pick the true lag length in any case and is thereby a poor criterion, whereas a probability
of one implies that the criterion manages to correctly select the true lag length in all cases
and hence is an excellent criterion.
Besides, we also inspect the distribution of the estimated lag lengths for the
1000 simulated series of known lag length (that is, p = 4), so as to gain a deeper
understanding of the performance of the various criteria. We will refer to the situation
whereby a criterion selects a lag length lower than the true one as underestimation,
whereas overestimation means the selection of a lag length higher than the true one.
2.3 Simulation procedure
Briefly, the simulation procedure involves three sub-routines: the first sub-routine
generates a series from the AR process, the second sub-routine selects the
autoregressive lag length of the simulated series, and the third sub-routine evaluates the
performance of the lag length selection criteria. The algorithm for the simulation
procedure for each combination of sample size S and AR lag length p is outlined as
follows:
1. Independently generate $a_1$, $a_2$, $a_3$ and $a_4$ from a uniform distribution on the range
(-1, 1), conditional on $|\sum_{i=1}^{4} a_i| < 1$.
2. Generate a series of size 3S from the AR process represented in Equation (1) with
lag length p = 4, using $a_1$, $a_2$, $a_3$ and $a_4$ obtained from Step 1. Initialize the starting
value $y_0 = 0$. Discard the first 2S observations to minimize the effect of the initial
value.
3. Use each selection criterion to determine the autoregressive lag length ($\hat{p}$) for
the last S observations of the series simulated in Step 2. Five selection criteria are
involved.
4. Repeat Step 1 to Step 3 B times, where B is fixed at 1000 in this study.
5. Compute the probabilities of (i) correct estimation, computed as $\#(\hat{p} = p)/B$;
(ii) underestimation, computed as $\#(\hat{p} < p)/B$; and (iii) overestimation, computed as
$\#(\hat{p} > p)/B$, where $\#(\cdot)$ denotes the number of times the event in question happens.
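The following sketch ties the three sub-routines together for one sample size, reusing the `simulate_ar4` and `select_lags` functions from the two sketches above (assumed here to be saved in a module named `ar_lag_sim`; the module name, the seed and the reduced B in the example call are our own illustrative choices, not the paper's).

```python
from collections import Counter

import numpy as np

from ar_lag_sim import simulate_ar4, select_lags     # hypothetical module holding the earlier sketches

def run_experiment(S, B=1000, true_p=4, seed=0):
    """Estimate P(p_hat = p), P(p_hat < p) and P(p_hat > p) for each criterion."""
    rng = np.random.default_rng(seed)
    counts = {}                                       # criterion -> Counter of outcomes
    for _ in range(B):
        y = simulate_ar4(S, rng)                      # Steps 1 and 2: draw coefficients, simulate, burn in
        for crit, p_hat in select_lags(y).items():    # Step 3: one p_hat per criterion
            outcome = ("correct" if p_hat == true_p
                       else "under" if p_hat < true_p
                       else "over")
            counts.setdefault(crit, Counter())[outcome] += 1
    # Step 5: convert the outcome counts into probabilities
    return {crit: {k: n / B for k, n in c.items()} for crit, c in counts.items()}

print(run_experiment(S=240, B=200))                   # smaller B than the paper's 1000, for a quick check
```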
3. Results and discussions
The probability that each criterion correctly estimates the true lag length of the AR
process is tabulated in Table 1. Generally, Table 1 shows that, in all cases, AIC,
SIC, FPE, HQC and BIC correctly estimate the true autoregressive lag length more than
half of the time. For example, for a sample size of 30, the probability of correctly
recovering the true lag length for each of the above criteria is, in that order, 0.554,
0.510, 0.554, 0.542 and 0.515. This means that out of 1000 simulated series of known lag
length, AIC, SIC, FPE, HQC and BIC respectively have correctly identified the true lag
length 554, 510, 554, 542 and 515 times. Table 1 also shows that these criteria perform
better and better as the sample size grows. With a sample size of 960, the probability
concerned for each of the same five criteria reaches 0.765, 0.802,
0.765, 0.818 and 0.807 respectively. This improvement in performance for
each of the five criteria as the sample size grows is clearly depicted in Figure 1. Thus,
with the largest sample, around 80% of the true lag lengths are correctly detected by the five
criteria under study. Summing up these two findings, we may conclude that these criteria perform fairly
well in picking the true lag length, especially when one has a large enough sample size.
Table 1: Probability of correctly estimating the true lag length of the AR process ($\hat{p} = 4$).

Sample Size (Logarithmic Scale)    AIC      SIC      FPE      HQC      BIC
30 (1.48)                          0.554    0.510    0.554    0.542    0.515
60 (1.78)                          0.567    0.537    0.567    0.563    0.537
120 (2.08)                         0.616    0.592    0.616    0.631    0.596
240 (2.38)                         0.703    0.687    0.703    0.715    0.691
480 (2.68)                         0.749    0.750    0.749    0.772    0.755
960 (2.98)                         0.765    0.802    0.765    0.818    0.807
Figure 1: Performance of the various criteria in correctly selecting the true lag length (p = 4); probability (vertical axis) plotted against sample size in logarithmic scale (horizontal axis) for AIC, SIC, FPE, HQC and BIC.
The third finding revealed by Table 1 is that AIC and FPE (both constructed by
Akaike) seem to have identical performance in terms of their ability to correctly locate
the true lag length. In fact, a closer inspection of the selected lag length for each
simulated series (results not shown) reveals that they consistently choose the same lag
length at all times.³ One would expect AIC to improve over FPE, as it was proposed by
Akaike to overcome the inconsistency of the latter (Akaike 1973). However, such
improvement is not observed in this study.
An interesting question is whether we can identify the best criterion for
selecting the AR lag length. However, it is difficult to judge from Table 1 alone, as no
criterion is found to consistently perform better than the rest in all cases.
Nonetheless, it is observed that HQC performs substantially better than the others when
the sample size is equal to or larger than 120. For sample sizes smaller than this
figure, however, AIC and FPE turn out to be the better choice.
Further analysis of the distribution of the selected lag lengths is conducted and the
results are summarized in Tables 2 and 3. Table 2 reveals that for samples containing
up to 120 observations, AIC, SIC, FPE, HQC and BIC underestimate the true lag length
with a probability falling in the range of 0.289 to 0.473 inclusive. The probability of
underestimation falls as the sample size grows, reaching an acceptable level for a sample
size as large as 960, with respective probabilities of 0.128, 0.192, 0.128, 0.151 and 0.182.
This finding may be seen clearly in Figure 2. However, as researchers rarely have large
samples, identifying the criterion that minimizes the probability of underestimation may
be a more practical effort. In this regard, it is observed from Table 2 that AIC and FPE
consistently outdo the rest across all sample sizes. Thus, if our objective is to avoid
selecting too low a lag length, it is advisable to adopt AIC and/or FPE. The gain from
choosing these two criteria is even more pronounced for sample sizes of not more than 60
observations; in such cases, apart from minimizing the chance of underestimation, one
simultaneously maximizes the chance of obtaining the correct lag length. This conclusion
may be taken as formal statistical support for the popular use of the AIC criterion in
previous empirical studies.
Table 2: Probability of underestimating the true lag length of the AR process ($\hat{p} < 4$).

Sample Size (Logarithmic Scale)    AIC      SIC      FPE      HQC      BIC
30 (1.48)                          0.362    0.473    0.362    0.418    0.463
60 (1.78)                          0.353    0.453    0.353    0.402    0.451
120 (2.08)                         0.289    0.399    0.289    0.336    0.387
240 (2.38)                         0.216    0.307    0.216    0.258    0.299
480 (2.68)                         0.168    0.247    0.168    0.201    0.234
960 (2.98)                         0.128    0.192    0.128    0.151    0.182
Regarding overestimation, Table 3 shows that overestimation by AIC, SIC, FPE, HQC and
BIC is negligible in all cases, even for small sample sizes. In fact, the probability of
overestimation is well below 10% for all criteria across most sample sizes. This empirical
finding is in line with the built-in property of these criteria, which are designed in such a
way that a larger lag length is less preferable, in the spirit of parsimony (that is, the simpler
the better).
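This parsimony property can be made explicit with the per-observation forms reconstructed in Section 2.2 (the arithmetic below is ours, not the paper's): each additional lag raises the penalty term by $2/T$ for AIC, $2\ln[\ln(T)]/T$ for HQC and $\ln(T)/T$ for SIC. At T = 120, for instance,
$$\frac{2}{120} \approx 0.017 \;<\; \frac{2\ln[\ln(120)]}{120} \approx 0.026 \;<\; \frac{\ln(120)}{120} \approx 0.040,$$
so SIC (and the similarly behaved BIC) punishes extra lags most heavily, which is consistent with its lowest overestimation rates in Table 3 and its highest underestimation rates in Table 2, whereas the lighter penalty of AIC (and the equivalent behaviour of FPE) explains their higher, yet still modest, overestimation.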
³ Hence, these two criteria also exhibit the same levels of underestimation and overestimation, as shown
in Tables 2 and 3 below.
Figure 2: Performance of the various criteria in underestimating the true lag length (p = 4); probability (vertical axis) plotted against sample size in logarithmic scale (horizontal axis) for AIC, SIC, FPE, HQC and BIC.
Table 3: Probability of overestimating the true lag length of the AR process ($\hat{p} > 4$).

Sample Size (Logarithmic Scale)    AIC      SIC      FPE      HQC      BIC
30 (1.48)                          0.084    0.017    0.084    0.040    0.022
60 (1.78)                          0.080    0.010    0.080    0.035    0.012
120 (2.08)                         0.095    0.009    0.095    0.033    0.017
240 (2.38)                         0.081    0.006    0.081    0.027    0.010
480 (2.68)                         0.083    0.003    0.083    0.027    0.011
960 (2.98)                         0.107    0.006    0.107    0.031    0.011
4. Summary
The determination of the autoregressive lag length for a time series is especially important in
economic studies. Various lag length selection criteria, such as Akaike's information
criterion (AIC), the Schwarz information criterion (SIC), the Hannan-Quinn criterion (HQC),
the final prediction error (FPE) and the Bayesian information criterion (BIC), have long been
employed by researchers in this respect. As the outcomes of these criteria
may influence the ultimate findings of a study, a thorough understanding of the
empirical performance of these criteria is warranted. This simulation study is specially
conducted to shed light on this matter.
The current study independently simulates 1000 series from an autoregressive process
of known lag length (p = 4) for each of various sample sizes ranging from 30 to 960
observations per series. Each lag length selection criterion is then allowed to
independently estimate the autoregressive lag length for each simulated series, yielding
some 1000 selected lag lengths for each criterion. Based on these selected lag lengths, we
compute the probabilities that the true lag length is correctly identified, underestimated
and overestimated. The results, which provide useful insights for empirical
researchers, are summarized as follows.
First, these criteria manage to pick the correct lag length at least half of the
time in small samples. Second, this performance improves substantially as the sample size
grows. Third, with relatively large samples (120 or more observations), HQC is found to
outdo the rest in correctly identifying the true lag length. In contrast, AIC and FPE are
a better choice for smaller samples. Fourth, AIC and FPE are found to produce the lowest
probability of underestimation among all criteria under study. Finally, the problem of
overestimation is negligible in all cases. As many econometric testing
procedures, such as unit root tests, causality tests, cointegration tests and linearity tests,
involve the determination of autoregressive lag lengths, the findings of this simulation
study may be taken as useful guidelines for future economic research.
References
Ahmed, M. (2000) “Money–income and money–price–causality in selected SAARC
countries: some econometric exercises” Indian Economic Journal 48, 55 – 62.
Akaike, H. (1969) “Fitting autoregressive models for prediction” Annals of the Institute
of Statistical Mathematics 21, 243 – 247.
Akaike, H. (1973) “Information theory and an extension of the maximum likelihood
principle” in 2nd International Symposium on Information Theory by B. N. Petrov
and F. Csaki, eds., Akademiai Kiado: Budapest.
Akaike, H. (1979) “A Bayesian extension of the minimum AIC procedure of
Autoregressive model fitting” Biometrika 66, 237 – 242.
Azali, M., A. Z. Baharumshah, and M.S. Habibullah (2001) “Cointegration test for
demand for money in Malaysia: Does exchange rate matter?” in Readings on
Malaysia Economy: Issues in Applied Macroeconomics by M. Azali, ed. Imagepac
Print: Kuala Lumpur.
Baharumshah, A. Z., A.M.M. Masih and M. Azali (2002) “The stock market and the
ringgit exchange rate: A note” Japan and the World Economy 14, 471 – 486.
Brockwell, P. J. and R. A. Davis (1996) Introduction to Time Series and Forecasting,
Springer: New York.
Dropsy, V. (1996) "Macroeconomic determinants of exchange rates: A frequency-
specific analysis" Applied Economics 28, 55 – 63.
Guerra, R. (2001) "Nonlinear adjustment towards Purchasing Power Parity: the Swiss
Franc – German Mark case" Working Paper, Department of Economics, University
of Geneva.
Habibullah, M. S. and A. Z. Baharumshah (2001) “Money, output and stock prices in
Malaysia” in Readings on Malaysia Economy: Issues in Applied Macroeconomics by
M. Azali, ed. Imagepac Print: Kuala Lumpur.
Habibullah, M. S. (2001) "Rational expectations, survey data and cointegration" in
Readings on Malaysia Economy: Issues in Applied Macroeconomics by M. Azali,
ed. Imagepac Print: Kuala Lumpur.
Hannan, E. J. and B. G. Quinn (1979) "The determination of the order of an
autoregression" Journal of the Royal Statistical Society, Series B 41, 190 – 195.
Ibrahim, M. H. (2001) "Financial factors and the empirical behaviour of money demand:
A case study of Malaysia" International Economic Journal 15, 55 – 72.
Liew, K. S. (2000) “The performance of AICC as lag length determination criterion in
the selection of ARMA time series models” Unpublished Thesis, Department of
Mathematics, Universiti Putra Malaysia.
Liew, V. K. S., T. T. L. Chong and K. P. Lim (2003) "The inadequacy of linear
autoregressive model for real exchange rates: Empirical evidence from Asian
economies" Applied Economics 35, 1387 – 1392.
Ng, T. H. (2002) “Stock Market Linkages in South East Asia” ASEAN Economic
Journal 16, 353 – 377.
Sarantis, N. (1999) “Modelling non-linearities in real effective exchange rates” Journal
of International Money and Finance 18, 27 – 45.
Sarantis, N. (2001) “Nonlinearities, cyclical behaviour and predictability in stock
markets: international evidence” International Journal of Forecasting 17, 439 – 482.
Sarno, L. and M.P. Taylor (1998) “Real exchange rates under the recent float:
unequivocal evidence of mean reversion” Economics Letters 60, 131 – 137.
Schwarz, G. (1978) “Estimating the dimension of a model” Annals of Statistics 6, 461 –
464.
Schwert, G. W. (1987) “Effects of model specification on tests for unit roots in
macroeconomic data” Journal of Monetary Economics 20, 73 – 103.
Schwert, G. W. (1989) “Tests for unit roots: A Monte Carlo investigation” Journal of
Business and Economics Statistics 7, 147 – 159.
Tan, H. B. and A. Z. Baharumshah (1999) “Dynamic causal chain of money, output,
interest rate and prices in Malaysia: evidence based on vector error-correction
modeling analysis” International Economic Journal 13, 103 – 120.
Tang, T. C. (2003) "Singapore's aggregate import demand function: Southeast Asian
economies compared" Labuan Bulletin of International Business and Finance 1, 13 –
28.
Taylor, M. P. and D. Peel (2000) "Nonlinear adjustment, long-run equilibrium and
exchange rate fundamentals" Journal of International Money and Finance 19, 33 –
53.
Xu, Z. (2003) “Purchasing power parity, price indices, and exchange rate forecasts”
Journal of International Money and Finance 22, 105 – 130.
Yamada, H. (2000) “M2 demand relation and effective exchange rate in Japan: a
cointegration analysis” Applied Economics Letters 7, 229 – 232.