Book

Asymptotic Theory For Econometricians

Authors: Halbert White
... Proof of Lemma A.7. The proof of this lemma will rely on Theorem 5.20 (Wooldridge–White, p. 30) in White (1999). To prove the first result, consider, for any n and t, the following double array of scalars: z_nt := v′ R_1 Σ_W^{-1} W_t e_{t,h} / s.e.(β̂_{1,h}^{(de-LS)}(v)). ...
... We need to justify that {z_nt} satisfies the hypotheses in the statement of Theorem 5.20 (White, 1999), that is (i) E|z_nt|^r < Δ < ∞ for some r ⩾ 2 and all n, t; ...
... Also W_t is mixing of size −r/(r − 2) by Assumption 3(i). Due to Proposition 3.50 in White (1999) (if two elements are strong mixing of size −a, then their product is also strong mixing of size −a), W_t e_{t,h} is mixing of size −r/(r − 2). Therefore, z_nt is mixing of size −r/(r − 2) as a linear transformation of W_t e_{t,h}. ...
Preprint
This paper presents a Wald test for multi-horizon Granger causality within a high-dimensional sparse Vector Autoregression (VAR) framework. The null hypothesis focuses on the causal coefficients of interest in a local projection (LP) at a given horizon. Nevertheless, the post-double-selection method on LP may not be applicable in this context, as a sparse VAR model does not necessarily imply a sparse LP for horizon h>1. To validate the proposed test, we develop two types of de-biased estimators for the causal coefficients of interest, both relying on first-step machine learning estimators of the VAR slope parameters. The first estimator is derived from the Least Squares method, while the second is obtained through a two-stage approach that offers potential efficiency gains. We further derive heteroskedasticity- and autocorrelation-consistent (HAC) inference for each estimator. Additionally, we propose a robust inference method for the two-stage estimator, eliminating the need to correct for serial correlation in the projection residuals. Monte Carlo simulations show that the two-stage estimator with robust inference outperforms the Least Squares method in terms of the Wald test size, particularly for longer projection horizons. We apply our methodology to analyze the interconnectedness of policy-related economic uncertainty among a large set of countries in both the short and long run. Specifically, we construct a causal network to visualize how economic uncertainty spreads across countries over time. Our empirical findings reveal, among other insights, that in the short run (1 and 3 months), the U.S. influences China, while in the long run (9 and 12 months), China influences the U.S. Identifying these connections can help anticipate a country's potential vulnerabilities and propose proactive solutions to mitigate the transmission of economic uncertainty.
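The abstract above refers to heteroskedasticity- and autocorrelation-consistent (HAC) inference for local-projection estimators. As a generic illustration only (not the paper's estimator), the sketch below computes a Bartlett-kernel (Newey–West) long-run variance for a scalar score series, the kind of quantity such inference typically rests on; the function name, lag choice, and toy MA(1) data are assumptions for illustration.

```python
# Minimal sketch: Newey-West / Bartlett-kernel long-run variance of a score series.
import numpy as np

def hac_variance(scores: np.ndarray, n_lags: int) -> float:
    """Bartlett-kernel (Newey-West) estimate of the long-run variance of `scores`."""
    u = scores - scores.mean()
    T = u.shape[0]
    lrv = u @ u / T                               # lag-0 autocovariance
    for j in range(1, n_lags + 1):
        w = 1.0 - j / (n_lags + 1.0)              # Bartlett weight
        gamma_j = u[j:] @ u[:-j] / T              # autocovariance at lag j
        lrv += 2.0 * w * gamma_j
    return lrv

# toy usage: MA(1) scores have a nonzero lag-1 autocovariance
rng = np.random.default_rng(0)
e = rng.standard_normal(500)
print(hac_variance(e[1:] + 0.5 * e[:-1], n_lags=4))
```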
... The convergence is proved by Assumption 5.1 with Corollary 3.48 (page 49) in White (1999) ...
... can be shown to converge in probability by Assumption 5.1 with Corollary 3.48 (page 49) in White (1999). The convergence of the term ...
... due to the regularity conditions in Assumption 5.1 and Corollary 3.48 (page 49) in White (1999). ...
Preprint
We propose a structural model-free methodology to analyze two types of macroeconomic counterfactuals related to policy path deviation: hypothetical trajectory and policy intervention. Our model-free approach is built on a structural vector moving-average (SVMA) model that relies solely on the identification of policy shocks, thereby eliminating the need to specify an entire structural model. Analytical solutions are derived for the counterfactual parameters, and statistical inference for these parameter estimates is provided using the Delta method. By utilizing external instruments, we introduce a projection-based method for the identification, estimation, and inference of these parameters. This approach connects our counterfactual analysis with the Local Projection literature. A simulation-based approach with a nonlinear model is provided to aid in addressing the Lucas critique. The innovative model-free methodology is applied in three counterfactual studies on U.S. monetary policy: (1) a historical scenario analysis for a hypothetical interest rate path in the post-pandemic era, (2) a future scenario analysis under either hawkish or dovish interest rate policy, and (3) an evaluation of the policy intervention effect of an oil price shock by zeroing out the systematic responses of the interest rate.
... Scaled score function l_t is strictly stationary and ergodic, because l_t is an F-measurable function of (ϵ_1, . . . , ϵ_t), and because ϵ_t is strictly stationary and ergodic (White 2001). (x) Score function z_t is i.i.d., because z_t is a continuous function of ϵ_t, and because ϵ_t is i.i.d. ...
... (x) Score function z_t is i.i.d., because z_t is a continuous function of ϵ_t, and because ϵ_t is i.i.d. (White 2001). (xi) Score function z_t is an F-measurable function of ϵ_t (White 2001), because z_t is a continuous function of ϵ_t (Harvey 2013). ...
... (White 2001). (xi) Score function z_t is an F-measurable function of ϵ_t (White 2001), because z_t is a continuous function of ϵ_t (Harvey 2013). Score function z_t is strictly stationary and ergodic, because z_t is an F-measurable function of ϵ_t, and because ϵ_t is strictly stationary and ergodic (White 2001). ...
Article
Full-text available
We use a score-driven minimum mean-squared error (MSE) signal extraction method and perform inflation smoothing for China and the ASEAN-10 countries. Our focus on China and ASEAN-10 countries is motivated by the significant historical variation in inflation rates, e.g. during the 1997 Asian Financial Crisis, the 2007-2008 Financial Crisis, the COVID-19 Pandemic, and the Russian Invasion of Ukraine. Some advantages of the score-driven signal extraction method are that it uses dynamic mean and volatility filters, it considers stationary or non-stationary mean dynamics, it is computationally fast, it is robust to extreme observations, it uses information-theoretically optimal updating mechanisms for both mean and volatility, it uses closed-form formulas for smoothed signals, and parameters are estimated by using the maximum likelihood (ML) method for which the asymptotic properties of estimates are known. In the empirical application, we present the political and economic conditions for each country and analyze the evolution and determinants of the core inflation rate.
... Scaled score function l_t is strictly stationary and ergodic, because l_t is an F-measurable function of (ϵ_1, . . . , ϵ_t), and because ϵ_t is strictly stationary and ergodic (White 2001). (x) Score function z_t is i.i.d., because z_t is a continuous function of ϵ_t, and because ϵ_t is i.i.d. ...
... (x) Score function z_t is i.i.d., because z_t is a continuous function of ϵ_t, and because ϵ_t is i.i.d. (White 2001). (xi) Score function z_t is an F-measurable function of ϵ_t (White 2001), because z_t is a continuous function of ϵ_t (Harvey 2013). ...
... (White 2001). (xi) Score function z_t is an F-measurable function of ϵ_t (White 2001), because z_t is a continuous function of ϵ_t (Harvey 2013). Score function z_t is strictly stationary and ergodic, because z_t is an F-measurable function of ϵ_t, and because ϵ_t is strictly stationary and ergodic (White 2001). ...
... (viii) ∂l_t/∂s_t and ∂z_t/∂λ_t are bounded functions of ϵ_t (Blazsek et al. 2020). (ix) Scaled score function l_t is an F-measurable function of ϵ_t (White 2001), because l_t is a continuous function of ϵ_t. Scaled score function l_t is strictly stationary and ergodic, because l_t is an F-measurable function of (ϵ_1, . . . ...
... Scaled score function l_t is strictly stationary and ergodic, because l_t is an F-measurable function of (ϵ_1, . . . , ϵ_t), and because ϵ_t is strictly stationary and ergodic (White 2001). (x) Score function z_t is i.i.d., because z_t is a continuous function of ϵ_t, and because ϵ_t is i.i.d. ...
... (x) Score function z_t is i.i.d., because z_t is a continuous function of ϵ_t, and because ϵ_t is i.i.d. (White 2001). (xi) Score function z_t is an F-measurable function of ϵ_t (White 2001), because z_t is a continuous function of ϵ_t. ...
Preprint
Full-text available
In this paper, a novel approach of signal smoothing for score-driven models is suggested, by using results from the literature on minimum mean squared error (MSE) signals. The new smoothing procedure can be applied to more general score-driven models than the state space smoothing recursion procedures from the literature. Score-driven location, trend, and seasonality models with constant and score-driven scale parameters for macroeconomic variables are used. The two-step smoothing procedure is computationally fast, and it uses closed-form formulas for smoothed signals. In the first step, the score-driven models are estimated by using the maximum likelihood (ML) method. In the second step, the ML estimates of the score functions are substituted into the minimum MSE signal extraction filter. Applications for monthly data of the seasonally adjusted and the not seasonally adjusted (NSA) United States (US) inflation rate variables for the period of 1948 to 2020 are presented.
... Thus, the same holds true for {f_t^2 σ_{it}^2}_{t∈Z} and {f_t σ_{it} ϵ_{it}}_{t∈Z} for i = 1, . . . , N by Proposition 3.36 of White (2001), implying that the series {x_{it}}_{t∈Z} is second-order stationary. ...
... → 0 as T → ∞, since N is fixed. Therefore, since Lemma 4.5 entails the fact that √(NT) ∇L_{NT}(θ_0) obeys the CLT for martingales, we obtain that √(NT) ∇L_{NT}(θ_0) has the same asymptotic distribution by the asymptotic equivalence lemma; see Lemma 4.7 in White (2001). ...
... In addition to ∑_{τ=0}^{∞} ρ_τ < ∞, we also know that Var(X_{i,t}(s_t) | s_t = s) < ∞ for all t. Thus, according to Definition 3.55 in [28], we know that {X_{i,t}(s) 1{s_t = s}} is an asymptotically uncorrelated sequence. Regarding the expression of γ_i(r, s), we obtain ...
... Hence, {Y_{i,t}} are asymptotically uncorrelated sequences. Using Theorem 3.57 in [28], we have ... Thus, the consistency of the estimator λ̂_i(s)^{yw} can be proved. ...
Article
Full-text available
The novel circumstance-driven bivariate integer-valued autoregressive (CuBINAR) model for non-stationary count time series is proposed. The non-stationarity of the bivariate count process is defined by a joint categorical sequence, which expresses the current state of the process. Additional cross-dependence can be generated via cross-dependent innovations. The model can also be equipped with a marginal bivariate Poisson distribution to make it suitable for low-count time series. Important stochastic properties of the new model are derived. The Yule–Walker and conditional maximum likelihood method are adopted to estimate the unknown parameters. The consistency of these estimators is established, and their finite-sample performance is investigated by a simulation study. The scope and application of the model are illustrated by a real-world data example on sales counts, where a soap product in different stores with a common circumstance factor is investigated.
... The main contribution of this paper consists in establishing oracle inequalities for the ERM when the data are dependent and heavy-tailed. Our analysis covers both the cases of identically and heterogeneously distributed observations (using the jargon of White, 2001). In particular, our main results establish that the ERM achieves the optimal rate of linear aggregation (up to a log(T) factor) in a dependent data setting. ...
... Fourth, our analysis aims to provide large-dimensional analogs of some of the classic results of White (2001) for fixed-dimensional linear regression with dependent data. We shall point out the differences between those results and the ones established here. ...
Article
Full-text available
This paper establishes bounds on the performance of empirical risk minimization for large-dimensional linear regression. We generalize existing results by allowing the data to be dependent and heavy-tailed. The analysis covers both the cases of identically and heterogeneously distributed observations. Our analysis is nonparametric in the sense that the relationship between the regressand and the regressors is not specified. The main results of this paper show that the empirical risk minimizer achieves the optimal performance (up to a logarithmic factor) in a dependent data setting.
... of White (2000) –, the result follows directly from Exercise 5.21 on page 131 of White (2000), since ...
Preprint
Full-text available
The economic relations between the functional income distribution and the three main personal income distributions --- total income, wealth, and wages --- are not straightforward in the economic literature. This paper presents an attempt to relate the factor shares to these personal income distributions in a more direct and precise manner. Our analytical result proves that the total income Lorenz curve always lies above the weighted average of the capital income and wages Lorenz curves, where the factor shares are the weights. We also establish a method to estimate the personal distribution of income, wages, or wealth when one of them does not exist.
... When average treatment effects for compliers are heterogeneous in X, the DWH test may reject with probability converging to 1 even when there is no unmeasured confounding. To show this, we will show that β̂_OLS and β̂_TSLS can converge to different weighted averages of treatment effects when average treatment effects for compliers are heterogeneous in X. Combining this fact with the fact that, under regularity conditions described in [44], the denominator of the DWH test statistic (16) will converge to 0, shows that T_DWH converges in probability to ∞ and rejects with probability 1 even when there is no unmeasured confounding. ...
Preprint
An important concern in an observational study is whether or not there is unmeasured confounding, i.e., unmeasured ways in which the treatment and control groups differ before treatment that affect the outcome. We develop a test of whether there is unmeasured confounding when an instrumental variable (IV) is available. An IV is a variable that is independent of the unmeasured confounding and encourages a subject to take one treatment level vs. another, while having no effect on the outcome beyond its encouragement of a certain treatment level. We show what types of unmeasured confounding can be tested for with an IV and develop a test for this type of unmeasured confounding that has correct type I error rate. We show that the widely used Durbin-Wu-Hausman (DWH) test can have inflated type I error rates when there is treatment effect heterogeneity. Additionally, we show that our test provides more insight into the nature of the unmeasured confounding than the DWH test. We apply our test to an observational study of the effect of a premature infant being delivered in a high-level neonatal intensive care unit (one with mechanical assisted ventilation and high volume) vs. a lower level unit, using the excess travel time a mother lives from the nearest high-level unit to the nearest lower-level unit as an IV.
... Proof of Theorem 13: Theorem 6.20 in White (2001) implies that under B1-B4, ∥Ω̂_n − Ω_n∥ →_P 0 as n → ∞. Together with Ω_n →_P Ω from B5, this implies that ∥Ω̂_n − Ω∥ →_P 0 as n → ∞. ...
Preprint
Full-text available
This paper lays out a principled approach to compare copula forecasts via strictly consistent scores. We first establish the negative result that, in general, copulas fail to be elicitable, implying that copula predictions cannot sensibly be compared on their own. A notable exception is on Fréchet classes, that is, when the marginal distribution structure is given and fixed, in which case we give suitable scores for the copula forecast comparison. As a remedy for the general non-elicitability of copulas, we establish novel multi-objective scores for copula forecasts along with marginal forecasts. They give rise to two-step tests of equal or superior predictive ability which admit attribution of the forecast ranking to the accuracy of the copulas or the marginals. Simulations show that our two-step tests work well in terms of size and power. We illustrate our new methodology via an empirical example using copula forecasts for international stock market indices.
... Dependence of g_τ on its past history, in terms of mean and variance, is allowed. However, according to Assumptions 4(b) and 4(c), this dependence vanishes to zero as the lag order j → ∞. ♢ Hence, considering Assumptions 1–4, the following theorem can be stated (see Hong, 2020 and White, 2001). ...
Preprint
Full-text available
This paper introduces a flexible framework for the estimation of the conditional tail index of heavy tailed distributions. In this framework, the tail index is computed from an auxiliary linear regression model that facilitates estimation and inference based on established econometric methods, such as ordinary least squares (OLS), least absolute deviations, or M-estimation. We show theoretically and via simulations that OLS provides interesting results. Our Monte Carlo results highlight the adequate finite sample properties of the OLS tail index estimator computed from the proposed new framework and contrast its behavior to that of tail index estimates obtained by maximum likelihood estimation of exponential regression models, which is one of the approaches currently in use in the literature. An empirical analysis of the impact of determinants of the conditional left- and right-tail indexes of commodities' return distributions highlights the empirical relevance of our proposed approach. The novel framework's flexibility allows for extensions and generalizations in various directions, empowering researchers and practitioners to straightforwardly explore a wide range of research questions.
... , N and t = 1, . . . , T (White 1984). ...
Preprint
Full-text available
In this econometric study, we use monthly panel data from the Centers for Disease Control and Prevention (CDC) for the cross-section of all states of the United States (US) from January 2020 to December 2022 on the following variables: (i) the count of COVID-19 (coronavirus disease of 2019) deaths (i.e. COVID-19 mortality); (ii) the count of deaths that medically may be related to COVID-19; (iii) the count of the rest of the causes of death (i.e. non-COVID-19 mortality). We use a novel score-driven panel data model, named the panel quasi-vector autoregressive model for the multivariate t-distribution (t-PQVAR), to measure the interactions among (i), (ii), and (iii) in a robust way. The interaction between (i) and (ii) is clear from a medical perspective and is documented in the literature. Nevertheless, the interaction between (i) and (iii) is less documented and is the focus of the present paper. We compare the performances of the t-PQVAR and classical PVAR and PVAR moving average (PVARMA) models. Several model performance metrics and diagnostic tests support the use of the t-PQVAR model. We provide robust statistical evidence that COVID-19 mortality influences non-COVID-19 mortality, which can be explained by indirect social or economic reasons.
... The fixed number of clusters leads to a non-normal asymptotic distribution, and a discussion of recent contributions can be found in Hansen and Lee (2019). An asymptotic distribution theory for a large number of clusters was first derived by White (1984), and has been investigated by several authors allowing either fixed cluster sizes or heterogeneous clusters. Recent developments include Hansen and Lee (2019), who propose conditions on the relation between the cluster sample sizes and the full sample in a regular asymptotic framework, or Djogbenou et al. (2019), who derive asymptotics with varying cluster sizes and carry out a cluster wild bootstrap. ...
Article
Full-text available
This paper proposes an estimator which combines spatial differencing with a two-step sample selection estimator. We derive identification, estimation, and inference results from ‘site-specific’ unobserved effects. These effects operate at a spatial scale that cannot be captured by administrative borders. Therefore, we use spatial differencing. We show that under justifiable assumptions, the estimator is consistent and asymptotically normal. A Monte Carlo experiment illustrates the small sample properties of our estimator. We apply our procedure to the estimation of a female wage offer equation in the United States and the results show the relevance of spatial differencing to account for ‘site-specific’ unobserved effects.
... In addition, it is possible to approximate an upper bound for the bias of optimal control variates by leveraging the asymptotic properties of the fitted regression coefficients. When the regression residuals at each exercise date (realized continuation payoffs minus fitted values) are identically and independently normally distributed, standard regression theory gives B̂ | G ∼ N(B, Σ_B), where N(μ, Σ) denotes a normal distribution with mean vector μ and variance-covariance matrix Σ [14]. In the LSM algorithm, residuals are not independent, and the quality of a normal distributional approximation based on the Central Limit Theorem may vary with option characteristics such as the moneyness S_0/K. ...
Article
This article discusses the application of optimally sampled control variates in the context of the Least-Squares Monte Carlo algorithm for pricing American options. We demonstrate theoretically that optimal sampling introduces bias when estimated exercise times are not stopping times. Numerical results show that this bias is an accurate proxy for the positive foresight bias of American option price estimates, effectively yielding in-sample lower-bound estimates.
... A standard FCLT for linear regression models is introduced by Theorem 7.17 in White (2001). ...
Preprint
Full-text available
We consider Wald type statistics designed for joint predictability and structural break testing based on the instrumentation method of Phillips and Magdalinos (2009). We show that under the assumption of nonstationary predictors: (i) the tests based on the OLS estimators converge to a nonstandard limiting distribution which depends on the nuisance coefficient of persistence; and (ii) the tests based on the IVX estimators can filter out the persistence under certain parameter restrictions due to the supremum functional. These results contribute to the literature of joint predictability and parameter instability testing by providing analytical tractable asymptotic theory when taking into account nonstationary regressors. We compare the finite-sample size and power performance of the Wald tests under both estimators via extensive Monte Carlo experiments. Critical values are computed using standard bootstrap inference methodologies. We illustrate the usefulness of the proposed framework to test for predictability under the presence of parameter instability by examining the stock market predictability puzzle for the US equity premium.
... So, X_t and X_{t+m+1} are independent but X_t and X_{t+k}, with k ≤ m, are dependent. If this holds for some finite m, then we obtain the CLT (White, 2001). In Appendix A.3 we provide a more detailed explanation of this, but the main practical implication we consider in this paper is that, while weak stationarity alone may not be sufficient for the CLT, if weak stationarity is violated, then the CLT will not hold. ...
Preprint
Full-text available
Time series analysis is increasingly popular across scientific domains. A key concept in time series analysis is stationarity, the stability of statistical properties of a time series. Understanding stationarity is crucial to addressing frequent issues in time series analysis such as the consequences of failing to model non-stationarity, how to determine the mechanisms generating non-stationarity, and consequently how to model those mechanisms (i.e. by differencing or detrending). However, many empirical researchers have a limited understanding of stationarity, which can lead to the use of incorrect research practices and misleading substantive conclusions. In this paper, we address this problem by answering these questions in an accessible way. To this end, we study how researchers can use detrending and differencing to model trends in time series analysis. We show via simulation the consequences of modeling trends inappropriately, and evaluate the performance of one popular approach to distinguish different trend types in empirical data. We present these results in an accessible way, providing an extensive introduction to key concepts in time series analysis, illustrated throughout with simple examples. Finally, we discuss a number of take-home messages and extensions to standard approaches, which directly address more complex time-series analysis problems encountered by empirical researchers.
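Since the abstract contrasts detrending with differencing for deterministic-trend versus stochastic-trend (random-walk) series, a minimal simulation sketch may help fix ideas; it is not the authors' simulation design, and the sample size, trend slope, and random seed are arbitrary assumptions.

```python
# Minimal sketch: detrending vs. differencing a trend-stationary series and a random walk.
import numpy as np

rng = np.random.default_rng(0)
T = 300
t = np.arange(T, dtype=float)

trend_stationary = 0.05 * t + rng.standard_normal(T)   # deterministic trend + noise
random_walk = np.cumsum(rng.standard_normal(T))         # stochastic trend

def detrend(x):
    """Remove an OLS-fitted linear time trend."""
    X = np.column_stack([np.ones_like(t), t])
    coef, *_ = np.linalg.lstsq(X, x, rcond=None)
    return x - X @ coef

# Detrending suits the deterministic trend but leaves the random walk highly
# persistent, whereas first-differencing renders the random walk stationary.
for name, x in [("trend-stationary", trend_stationary), ("random walk", random_walk)]:
    print(name, np.var(detrend(x)), np.var(np.diff(x)))
```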
... In addition, Lin (2013) shows that OLS adjustment with a full set of treatment × covariate interactions improves or does not hurt asymptotic precision, even when the regression model is incorrect. The robust Eicker-Huber-White (Eicker, 1967; Huber, 1967; White, 2014) variance estimator (Freedman, 2006) is consistent or asymptotically conservative (regardless of whether the interactions are included) in estimating the true asymptotic variance. More results on the asymptotic precision of treatment effect estimation can be found in papers by Yang and Tsiatis (2001), Tsiatis et al. (2008), Zhang et al. (2008), Schochet (2010), and Negi and Wooldridge (2021). ... strong finite-sample performance; (4) computational simplicity; and (5) wide applicability. ...
Preprint
Full-text available
The use of smart devices (e.g., smartphones, smartwatches) and other wearables to deliver digital interventions to improve health outcomes has grown significantly in the past few years. Mobile health (mHealth) systems are excellent tools for the delivery of adaptive interventions that aim to provide the right type and amount of support, at the right time, by adapting to an individual's changing context. Micro-randomized trials (MRTs) are an increasingly common experimental design that is the main source for data-driven evidence of mHealth intervention effectiveness. To assess time-varying causal effect moderation in an MRT, individuals are intensively randomized to receive treatment over time. In addition, measurements, including individual characteristics, and context are also collected throughout the study. The effective utilization of covariate information to improve inferences regarding causal effects has been well-established in the context of randomized control trials (RCTs), where covariate adjustment is applied to leverage baseline data to address chance imbalances and improve the asymptotic efficiency of causal effect estimation. However, the application of this approach to longitudinal data, such as MRTs, has not been thoroughly explored. Recognizing the connection to Neyman Orthogonality, we propose a straightforward and intuitive method to improve the efficiency of moderated causal excursion effects by incorporating auxiliary variables. We compare the robust standard errors of our method with those of the benchmark method. The efficiency gain of our approach is demonstrated through simulation studies and an analysis of data from the Intern Health Study (NeCamp et al., 2020).
... The result follows from the central limit theorem for martingale difference sequences of White (2000) ... where N has a K-dimensional normal distribution with mean zero and covariance matrix Γ as in (A.18). ...
Preprint
The spatio-temporal autoregressive moving average (STARMA) model is frequently used in several studies of multivariate time series data, where the assumption of stationarity is important, but it is not always guaranteed in practice. One way to proceed is to consider locally stationary processes. In this paper we propose a time-varying spatio-temporal autoregressive moving average (tvSTARMA) model based on the local stationarity assumption. The time-varying parameters are expanded as linear combinations of wavelet bases and procedures are proposed to estimate the coefficients. Some simulations and an application to historical daily precipitation records of Midwestern states of the USA are illustrated.
... The conditions that bind the extent of dependence are called mixing conditions. Results expressed below follow both White (1984) and Herrndorf (1984). ...
Article
Full-text available
Nowadays, applying different unit root test statistics is standard practice in empirical work. Despite their practical use, these statistics have complex, nonstandard distributions that depend on functionals of certain stochastic processes, and their derivations represent a barrier even for many theoretical econometricians. These derivations are based on fundamental and rigorous statistical tools that are not (very) well known to applied econometricians. This article fills this gap by explaining, in a simple way, one of these fundamental tools: the Functional Central Limit Theorem. Accordingly, this paper analyzes the foundations and applicability of two versions of the Functional Central Limit Theorem within the framework of a unit root with a structural break. Initial attention focuses on the probabilistic structure of time series proposed by Phillips (1987a), which is applied by Perron (1989) to study the effects of an (assumed) exogenous structural break on the power of augmented Dickey-Fuller tests, and by Zivot and Andrews (1992) to criticize the exogeneity assumption and propose a method to estimate an endogenous break point. A systematic method for dealing with efficiency issues is introduced by Perron and Rodríguez (2003), which extends the Generalized Least Squares approach of Elliott et al. (1996) to remove the deterministic components. An empirical application is also presented.
... An interesting characteristic of all of this is that larger samples can more closely approximate the population. This is the Law of Large Numbers (LLN; Rovner, 2006; White, 2001). ...
Article
Full-text available
A standard error (SE) is a measure of variability for a sample statistic. Standard errors are somewhat similar to standard deviations – another measurement of variation. The difference is that a standard deviation describes the variability of the data for a single group (sample or population), while a standard error describes the variability of a statistic, such as the variability of the mean across multiple samples. With a sufficient number of sample groups, the distribution of sampling means will be normally distributed, and the SE allows us to make convenient calculations about unknown population values. Also, SEs allow us to select evidence-based statements and conclusions based on available data, including categorical cut scores that can support practical use and coherent or intuitive meaning of individual test results.
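To make the distinction in the abstract concrete, the following small simulation (with an assumed exponential "population", sample size, and number of replications) compares the standard deviation of many sample means with the analytic standard error σ/√n.

```python
# Minimal sketch: standard error of the mean vs. variability of repeated sample means.
import numpy as np

rng = np.random.default_rng(0)
population = rng.exponential(scale=2.0, size=100_000)   # a skewed "population"
n = 50                                                  # size of each sample

# empirical variability of the sample mean across repeated samples
sample_means = np.array([rng.choice(population, size=n).mean() for _ in range(2000)])

analytic_se = population.std() / np.sqrt(n)             # SE = sigma / sqrt(n)
print(sample_means.std(), analytic_se)                  # the two should be close
```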
... Besides, results based on the PCSEs are also robust and consistent in most panels, especially long panels, i.e., panels with large cross-sectional units and a short time period per cluster (N > T); see, for example, White (1980, 1984) and Liang and Zeger (1986). However, results based on the PCSEs are only for short-run analysis. ...
Article
Full-text available
According to the World Food Programme (WFP), the projected increase in the human population stands at 2 billion people by 2050. At the same time, world food production has been witnessing a declining trend over recent years, and 690 million (8.9%) of the world’s population are already in severe starvation. Climate variability and climate change impacts on food security are very eminent today. Against this backdrop, this study explored the real effects of climate variability and change on food security in Africa by applying the system Generalized Method of Moments (GMM) and the Panel Corrected Standard Errors (PCSEs) estimation techniques on data from 2001 to 2018 for 38 selected African countries. The findings reveal that higher amounts of precipitation positively influence food security along two dimensions: food availability and utilization. Hotter temperatures negatively impact food availability and utilization; however, they aid food accessibility in Africa. Similarly, carbon dioxide emissions improve food availability and are harmful to food accessibility and utilization in Africa. Consequently, the effects of climate variability and change on food security in Africa are undesirable, thereby putting the continent at risk of food insecurity over the long run. These findings provide practical implications for policy change to address the disastrous effects of climate variability and change on food security in Africa.
... To derive the bias approximations, we assume the F-conditional distribution of the regression coefficient estimates follows a multivariate normal distribution (White, 2001). ...
Preprint
Full-text available
We derive an expression for the local bias of an in-sample Least-Squares Monte Carlo (LSM) estimator. Bias is decomposed into positive and negative components, corresponding to foresight bias and sub-optimality bias, respectively. Using regression theory, continuation values are approximately normally distributed, giving local bias estimates that hold for general asset price processes and square-integrable option payoffs. Bias estimates are recursively subtracted from the option price estimator at each exercise opportunity, yielding an in-sample bias-corrected price estimator. Our bias approximation explicitly accounts for the fact that the early-exercise policy depends on future cash flows, mitigating the large positive bias effects of model overfitting in the recursive regressions of future option values. Numerical results confirm that our proposed method efficiently reduces overall estimator bias across a wide range of option characteristics. The relative importance of these improvements increases with the frequency of exercise opportunities and with the number of basis functions, yielding price estimates of much higher accuracy, even with a small number of sample paths.
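For readers unfamiliar with the Least-Squares Monte Carlo (LSM) recursion that this bias analysis targets, here is a minimal textbook-style sketch of plain LSM for an American put under geometric Brownian motion; it omits the paper's bias correction, and the quadratic basis, parameter values, and path count are illustrative assumptions.

```python
# Minimal sketch: plain Longstaff-Schwartz LSM pricing of an American put (no bias correction).
import numpy as np

def lsm_american_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                     n_steps=50, n_paths=20000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    disc = np.exp(-r * dt)
    # simulate GBM paths
    z = rng.standard_normal((n_paths, n_steps))
    increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    S = S0 * np.exp(np.cumsum(increments, axis=1))
    S = np.hstack([np.full((n_paths, 1), S0), S])
    # backward induction: regress discounted cash flows on in-the-money paths
    cash = np.maximum(K - S[:, -1], 0.0)                  # payoff at maturity
    for t in range(n_steps - 1, 0, -1):
        cash *= disc                                      # discount one step back
        itm = K - S[:, t] > 0.0
        if itm.sum() > 0:
            X = S[itm, t]
            basis = np.column_stack([np.ones_like(X), X, X**2])   # quadratic basis
            coef, *_ = np.linalg.lstsq(basis, cash[itm], rcond=None)
            continuation = basis @ coef
            exercise = K - X
            ex_now = exercise > continuation              # exercise where better
            cash[np.where(itm)[0][ex_now]] = exercise[ex_now]
    return disc * cash.mean()

print(lsm_american_put())
```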
... This convergence is achieved almost surely, and therefore it is also satisfied in probability. See White (2001) for further details. ...
Article
Full-text available
In this paper we introduce a “power booster factor” for out-of-sample tests of predictability. The relevant econometric environment is one in which the econometrician wants to compare the population Mean Squared Prediction Errors (MSPE) of two models: one big nesting model, and another smaller nested model. Although our factor can be used to improve finite sample properties of several out-of-sample tests of predictability, in this paper we focus on the widely used test developed by Clark and West (2006, 2007). Our new test multiplies the Clark and West t-statistic by a factor that should be close to one under the null hypothesis that the short nested model is the true model, but that should be greater than one under the alternative hypothesis that the big nesting model is more adequate. We use Monte Carlo simulations to explore the size and power of our approach. Our simulations reveal that the new test is well sized and powerful. In particular, it tends to be less undersized and more powerful than the test by Clark and West (2006, 2007). Although most of the gains in power are associated to size improvements, we also obtain gains in size-adjusted-power. Finally we illustrate the use of our approach when evaluating the ability that an international core inflation factor has to predict core inflation in a sample of 30 OECD economies. With our “power booster factor” more rejections of the null hypothesis are obtained, indicating a strong influence of global inflation in a selected group of these OECD countries.
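Because the abstract builds on the Clark and West (2006, 2007) statistic, a minimal sketch of that adjusted-MSPE t-statistic (without the proposed power booster factor, and using a simple i.i.d. standard error rather than a HAC one) may be a useful reference; the function name and inputs are illustrative.

```python
# Minimal sketch: Clark-West adjusted MSPE-difference t-statistic for nested models.
import numpy as np

def clark_west_tstat(y, f_small, f_big):
    """y: realized values; f_small, f_big: out-of-sample forecasts from the nested
    (small) and nesting (big) models over the evaluation window (numpy arrays)."""
    adj = (y - f_small)**2 - ((y - f_big)**2 - (f_small - f_big)**2)
    P = len(adj)
    return np.sqrt(P) * adj.mean() / adj.std(ddof=1)
```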
... White [48] shows that the fitted value of the regression, F̂_M(ω; t_k) (a value on the fitted line of best fit), converges in mean square and in probability to F_M(ω; t_k) as the number of in-the-money paths goes to infinity. ...
Thesis
Full-text available
In financial mathematics, option pricing is a popular problem in the theory of finance and mathematics. In option pricing theory, the valuation of American options is one of the most important problems. American options are the most traded option styles in all financial markets. In spite of recent developments, the valuation of American options remains a challenging problem. There are no closed-form analytical solutions for American options, so a usual way to deal with this problem is to develop numerical and approximation techniques. In this thesis, we analyse binomial, finite difference and approximation methods for pricing American options. We first consider the binomial approximation, which is very easy to implement and assumes that the asset prices follow a geometric Brownian motion. Then, we present American options as a free boundary value problem based on the Black-Scholes partial differential equation, which leads to a very famous model in finance theory, and formalize it as a linear complementarity problem. We refer to the projected successive over-relaxation (PSOR) method to solve this problem. Although there are no closed-form solutions for American options, we deal with some analytical approximation methods to approach the option values. We demonstrate the applications of each method and compare their solutions.
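As a concrete companion to the binomial approximation described in the abstract, the sketch below prices an American put with a Cox-Ross-Rubinstein tree; the parameter values and step count are arbitrary assumptions, and this is a generic textbook implementation rather than the thesis code.

```python
# Minimal sketch: Cox-Ross-Rubinstein binomial tree for an American put.
import numpy as np

def crr_american_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0, n=500):
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)              # risk-neutral up probability
    disc = np.exp(-r * dt)
    # terminal payoffs
    j = np.arange(n + 1)
    S = S0 * u**j * d**(n - j)
    V = np.maximum(K - S, 0.0)
    # backward induction with early-exercise check at every node
    for i in range(n - 1, -1, -1):
        j = np.arange(i + 1)
        S = S0 * u**j * d**(i - j)
        V = disc * (p * V[1:] + (1 - p) * V[:-1])   # continuation value
        V = np.maximum(V, K - S)                    # exercise if it is worth more
    return V[0]

print(crr_american_put())
```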
Preprint
Full-text available
Inference in linear panel data models is complicated by the presence of fixed effects when (some of) the regressors are not strictly exogenous. Under asymptotics where the number of cross-sectional observations and time periods grow at the same rate, the within-group estimator is consistent but its limit distribution features a bias term. In this paper we show that a panel version of the moving block bootstrap, where blocks of adjacent cross-sections are resampled with replacement, replicates the limit distribution of the within-group estimator. Confidence ellipsoids and hypothesis tests based on the reverse-percentile bootstrap are thus asymptotically valid without the need to take the presence of bias into account.
Article
Time series analysis is increasingly popular across scientific domains. A key concept in time series analysis is stationarity, the stability of statistical properties of a time series. Understanding stationarity is crucial to addressing frequent issues in time series analysis such as the consequences of failing to model non-stationarity, how to determine the mechanisms generating non-stationarity, and consequently how to model those mechanisms (i.e., by differencing or detrending). However, many empirical researchers have a limited understanding of stationarity, which can lead to the use of incorrect research practices and misleading substantive conclusions. In this paper, we address this problem by answering these questions in an accessible way. To this end, we study how researchers can use detrending and differencing to model trends in time series analysis. We show via simulation the consequences of modeling trends inappropriately, and evaluate the performance of one popular approach to distinguish different trend types in empirical data. We present these results in an accessible way, providing an extensive introduction to key concepts in time series analysis, illustrated throughout with simple examples. Finally, we discuss a number of take-home messages and extensions to standard approaches, which directly address more complex time-series analysis problems encountered by empirical researchers.
Article
Full-text available
Financial theory shows that futures markets can stimulate information production, and therefore provide more information for uninformed traders, hence stabilizing the stock market. This paper investigates whether the Chinese CSI300 index futures can stabilize the Chinese stock market. By using the econometric approach of Bai (2003) and Stock and Watson (2011) to estimate the latent global factor from panel data of international stock markets, this paper controls for economic fundamentals and employs the equal variance test of Yang et al. (2014) to compare the volatility of the CSI300 stock index before and after the introduction of CSI300 index futures. We find that CSI300 index futures can help to stabilize the Chinese stock market significantly.
Preprint
The problem of detecting variance breaks in the case of a smooth time-varying variance structure is studied. It is highlighted that tests based on a (piecewise) constant specification of the variance are not able to distinguish between a smooth non-constant variance and the case where an abrupt change is present. Consequently, a new procedure for detecting variance breaks that takes into account smooth changes of the variance is proposed. The finite sample properties of the tests introduced in the paper are investigated by Monte Carlo experiments. The theoretical results are illustrated using U.S. macroeconomic data.
Preprint
This paper establishes the almost sure convergence and asymptotic normality of levels and differenced quasi maximum-likelihood (QML) estimators of dynamic panel data models. The QML estimators are robust with respect to initial conditions, conditional and time-series heteroskedasticity, and misspecification of the log-likelihood. The paper also provides an ECME algorithm for calculating levels QML estimates. Finally, it uses Monte Carlo experiments to compare the finite sample performance of levels and differenced QML estimators, the differenced GMM estimator, and the system GMM estimator. In these experiments the QML estimators usually have smaller --- typically substantially smaller --- bias and root mean squared errors than the panel data GMM estimators.
Preprint
Mixture of autoregressions (MoAR) models provide a model-based approach to the clustering of time series data. The maximum likelihood (ML) estimation of MoAR models requires the evaluation of products of large numbers of densities of normal random variables. In practical scenarios, these products converge to zero as the length of the time series increases, and thus the ML estimation of MoAR models becomes infeasible without the use of numerical tricks. We propose a maximum pseudolikelihood (MPL) estimation approach as an alternative to the use of numerical tricks. The MPL estimator is proved to be consistent and can be computed via an EM (expectation--maximization) algorithm. Simulations are used to assess the performance of the MPL estimator against that of the ML estimator in cases where the latter was able to be calculated. An application to the clustering of time series data arising from a resting-state fMRI experiment is presented as a demonstration of the methodology.
Article
Full-text available
Problem Definition: The purpose of this research is to examine the effect of diversification on interfirm relationships. Given how extensively firms develop key relationships with customers, suppliers, and other stakeholders, understanding how interfirm (relational) strategies are affected by diversification is likely to be quite informative. This is particularly true of small businesses, which are not as frequently studied by strategy scholars. A relational perspective suggests that investments in relationship-specific assets, substantial knowledge exchange, combinations of complementary resources and capabilities, and effective governance structures between supply/buyer firms in a partnership dyad can generate relational rents. Methodology/Results: A foundational prediction within our research is that firm diversification will lead to more advantageous relationships with business partners, a hypothesis that we test through contract performance. In our study, we review 240 Research & Development and New Product Development contracts with supplier firms and the US Department of Defense that incorporated some form of risk-sharing between the buyer and supplier. We find that diversified firms engage in contracting with suppliers in a way that provides an advantage over their single-segment competitors in terms of total contract cost, the number of change proposals by engineers in contract work, and longer durations of government contracts. We also find that diversified small firms receive more of a benefit than their larger counterparts in terms of contracting advantage. Managerial Implications: Based on our findings, it is evident that managers of diversified firms provide advantage to their firms by being more accustomed to complex contractual arrangements than their single-segment firm counterparts. Our findings also suggest that enhanced opportunities for organizational learning are available to diversified firms who engage in contractual relationships. Relational contracts that feature risk-sharing between buyers and suppliers provide space for joint learning, and it is likely that managers of diversified firms have more experience navigating these risk-sharing relationships. This is particularly influential in a dynamic marketplace as firms prioritize innovation and adaptability in order to thrive.
Article
Throughout history, countries have traded with one another and increased their welfare through this trade. For some countries, international trade became a necessity because of geography, lack of resources, or lack of technology, while other countries preferred it because of production-cost and quality advantages. Growing production and foreign trade, together with the expanding transportation infrastructure, increased energy consumption based on fossil fuels, which in turn increased greenhouse gas emissions, one of the main drivers of global warming. For this reason, investigating the environmental effects of international trade, which delivers development and welfare gains, has become important in recent years. The aim of this study is to examine the causes of changes in greenhouse gas emissions for countries in different income groups, particularly within the framework of international trade. To this end, a total of 186 countries are examined for the years 1998-2020, and it is found that exports and imports of goods and services, particularly as welfare levels rise, are significant determinants of greenhouse gas emissions.
Article
Full-text available
This paper studies the problem of simultaneously testing that each of k samples, coming from k count variables, was generated by a Poisson law. The means of those populations may differ. The proposed procedure is designed for large k, which can be bigger than the sample sizes. First, a test is proposed for the case of independent samples, and then the obtained results are extended to dependent data. In each case, the asymptotic distribution of the test statistic is stated under the null hypothesis as well as under alternatives, which allows one to study the consistency of the test. Specifically, it is shown that the test statistic is asymptotically distribution-free under the null hypothesis. The finite sample performance of the test is studied via simulation. A real data set application is included.
Article
Full-text available
The purpose of this paper is to study generalized method of moments (GMM) estimation procedures for the realized stochastic volatility model; we give the moment conditions for this model and then obtain estimates of the parameters. We then apply these moment conditions to the realized stochastic volatility model to improve its volatility prediction. This paper uses the Shanghai Composite Index (SSE) as the data for the model study and carries out volatility prediction under a realized stochastic volatility model. Markov chain Monte Carlo (MCMC) estimation and quasi-maximum likelihood (QML) estimation are applied to the parameter estimation of the realized stochastic volatility model for comparison with the GMM method, and the volatility prediction accuracy of these three different methods is compared. The results of the empirical research show that the prediction performance of the model using the parameters obtained by the GMM method is close to that of the MCMC method, and clearly better than that of the quasi-maximum likelihood estimation method.
Article
Full-text available
Estimating poverty measures for disabled people in developing countries is often difficult, partly because relevant data are not readily available. We extend the small-area estimation developed by Elbers, Lanjouw and Lanjouw (2002, 2003) to estimate poverty by the disability status of the household head, when the disability status is unavailable in the survey. We propose two alternative approaches to this extension: Aggregation and Instrumental Variables Approaches. We apply these approaches to data from Tanzania and show that both approaches work. Our estimation results show that disability is indeed positively associated with poverty in every region of mainland Tanzania.
Preprint
We study robust mean-variance optimization in multiperiod portfolio selection by allowing the true probability measure to be inside a Wasserstein ball centered at the empirical probability measure. Given the confidence level, the radius of the Wasserstein ball is determined by the empirical data. The numerical simulations of the US stock market provide a promising result compared to other popular strategies.
Article
Full-text available
Systemic risk measures such as CoVaR, CoES and MES are widely-used in finance, macroeconomics and by regulatory bodies. Despite their importance, we show that they fail to be elicitable and identifiable. This renders forecast comparison and validation, commonly summarized as ‘backtesting’, impossible. The novel notion of multi-objective elicitability solves this problem by relying on bivariate scores equipped with the lexicographic order. Based on this concept, we propose Diebold–Mariano type tests with suitable bivariate scores to compare systemic risk forecasts. We illustrate the test decisions by an easy-to-apply traffic-light approach. Finally, we apply our traffic-light approach to DAX 30 and S&P 500 returns, and infer some recommendations for regulators.
Article
Full-text available
A two-step estimator of a nonparametric regression function via Kernel regularized least squares (KRLS) with parametric error covariance is proposed. The KRLS, not considering any information in the error covariance, is improved by incorporating a parametric error covariance, allowing for both heteroskedasticity and autocorrelation, in estimating the regression function. A two step procedure is used, where in the first step, a parametric error covariance is estimated by using KRLS residuals and in the second step, a transformed model using the error covariance is estimated by KRLS. Theoretical results including bias, variance, and asymptotics are derived. Simulation results show that the proposed estimator outperforms the KRLS in both heteroskedastic errors and autocorrelated errors cases. An empirical example is illustrated with estimating an airline cost function under a random effects model with heteroskedastic and correlated errors. The derivatives are evaluated, and the average partial effects of the inputs are determined in the application.
Article
Momentum methods have been shown to accelerate the convergence of the standard gradient descent algorithm in practice and theory. In particular, the random partition based minibatch gradient descent methods with momentum (MGDM) are widely used to solve large-scale optimization problems with massive datasets. Despite the great popularity of the MGDM methods in practice, their theoretical properties are still underexplored. To this end, we investigate the theoretical properties of MGDM methods based on the linear regression models. We first study the numerical convergence properties of the MGDM algorithm and derive the conditions for faster numerical convergence rate. In addition, we explore the relationship between the statistical properties of the resulting MGDM estimator and the tuning parameters. Based on these theoretical findings, we give the conditions for the resulting estimator to achieve the optimal statistical efficiency. Finally, extensive numerical experiments are conducted to verify our theoretical results.
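To illustrate the random-partition minibatch gradient descent with momentum (MGDM) scheme that the abstract analyzes for linear regression, here is a minimal heavy-ball sketch; the learning rate, momentum, batch size, and epoch count are illustrative assumptions, not the tuning studied in the paper.

```python
# Minimal sketch: minibatch gradient descent with (heavy-ball) momentum for least squares.
import numpy as np

def mgdm_ols(X, y, lr=0.05, momentum=0.9, batch_size=32, n_epochs=50, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    v = np.zeros(p)
    for _ in range(n_epochs):
        perm = rng.permutation(n)                 # random partition into minibatches
        for start in range(0, n, batch_size):
            idx = perm[start:start + batch_size]
            Xb, yb = X[idx], y[idx]
            grad = Xb.T @ (Xb @ beta - yb) / len(idx)
            v = momentum * v - lr * grad          # momentum (velocity) update
            beta = beta + v
    return beta

# toy usage: recover a known coefficient vector
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(1000)
print(mgdm_ols(X, y))
```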
Article
This paper proposes a model-free test for the strict stationarity of a potentially vector-valued time series using the discrete Fourier transform (DFT) approach. We show that the DFT of a residual process based on the empirical characteristic function weakly converges to a zero spectrum in the frequency domain for a strictly stationary time series and a nonzero spectrum otherwise. The proposed test is powerful against various types of nonstationarity including deterministic trends and smooth or abrupt structural changes. It does not require smoothed nonparametric estimation and, thus, can detect the Pitman sequence of local alternatives at the parametric rate T^{-1/2}, faster than all existing nonparametric tests. We also design a class of derivative tests based on the characteristic function to test the stationarity in various moments. Monte Carlo studies demonstrate that our test has reasonable size and excellent power. Our empirical application of exchange rates strongly suggests that both nominal and real exchange rate returns are nonstationary, which the augmented Dickey–Fuller and Kwiatkowski–Phillips–Schmidt–Shin tests may overlook.
Chapter
The autoregressive conditional heteroscedasticity (ARCH) model and its various generalizations have been widely used to analyze economic and financial data. Although many variables like GDP, inflation, and commodity prices are imprecisely measured, research focusing on the mismeasured response processes in GARCH models is sparse. We study a dynamic model with ARCH error where the underlying process is latent and subject to additive measurement error. We show that, in contrast to the case of covariate measurement error, this model is identifiable by using the observations of the proxy process only and no extra information is needed. We construct GMM estimators for the unknown parameters which are consistent and asymptotically normally distributed under general conditions. We also propose a procedure to test the presence of measurement error, which avoids the usual boundary problem of testing variance parameters. We carry out Monte Carlo simulations to study the impact of measurement error on the naive maximum likelihood estimators and have found interesting patterns of their biases. Moreover, the proposed estimators have fairly good finite sample properties. Keywords: Dynamic ARCH model; Errors in variables; Generalized method of moments; Measurement error; Semiparametric estimation
Article
Full-text available
This study explores the multi-step ahead forecasting performance of a so-called hybrid conditional quantile method, which combines relevant conditional quantile forecasts from parametric and semiparametric methods. The focus is on lower (left) and upper (right) tail quantiles of the conditional distribution of the response variable. First, we evaluate and compare out-of-sample conditional quantile forecasts obtained from a hybrid method and from five non-hybrid methods, employing a large data set of exogenous predictors generated by various GARCH model specifications. Second, we compare the accuracy of these methods by calculating conditional quantile forecasts for the risk premium of the monthly S&P 500 index, using a data set of macroeconomic predictors. Monte Carlo and empirical application results indicate that the hybrid forecasting provides more accurate quantile forecasts than non-hybrid methods. The success of the hybrid method is most prominent when compared with results obtained by a simple equal-weighted combination of quantile forecasts.
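One standard ingredient for the kind of out-of-sample conditional quantile comparison described above is the quantile ("pinball") score; the following sketch is a generic scoring function rather than the paper's full evaluation design, and the function name and array inputs are hypothetical.

```python
# Minimal sketch: average quantile (pinball) score of a tau-quantile forecast; lower is better.
import numpy as np

def pinball_loss(y, q_forecast, tau):
    """y: realized values; q_forecast: forecasts of the tau-quantile; tau in (0, 1)."""
    u = np.asarray(y, dtype=float) - np.asarray(q_forecast, dtype=float)
    return np.mean(np.maximum(tau * u, (tau - 1.0) * u))
```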
Article
Credibility is elusive, but Blinder [(2000) American Economic Review 90, 1421–1431] generated a consensus in the literature by arguing that “A central bank is credible if people believe it will do what it says.” To implement this idea, we first measure people’s beliefs by using survey data on inflation expectations. Second, we compare beliefs with explicit (or tacit) targets, taking into account the uncertainty in our estimate of beliefs (asymptotic 95% robust confidence intervals). Whenever the target falls into this interval we consider the central bank credible. We consider it not credible otherwise. We apply our approach to study the credibility of the Brazilian Central Bank (BCB) by using a world-class database—the Focus Survey of forecasts. Using monthly data from January 2007 until April 2017, we estimate people’s beliefs about inflation 12 months ahead, coupled with a robust estimate of its asymptotic 95% confidence interval. Results show that the BCB was credible 65% of the time, with the exception of a few months at the beginning of 2007 and during the interval from mid-2013 through mid-2016.
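The credibility classification described above can be summarized in a few lines: build a confidence interval around mean survey expectations and check whether the target lies inside it. The sketch below uses a plain i.i.d. standard error for simplicity, whereas the paper relies on robust (HAC-type) intervals; the function name and the toy numbers are illustrative assumptions.

```python
# Minimal sketch: credible if the target falls inside the ~95% CI for mean expectations.
import numpy as np

def is_credible(expectations, target, z=1.96):
    e = np.asarray(expectations, dtype=float)
    m = e.mean()
    se = e.std(ddof=1) / np.sqrt(len(e))          # plain i.i.d. standard error
    return m - z * se <= target <= m + z * se

# toy usage: 12-month-ahead inflation expectations (%) vs. a 4.5% target
print(is_credible([4.9, 5.1, 4.8, 5.2, 5.0], target=4.5))
```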