## No full-text available

To read the full-text of this research, you can request a copy directly from the authors.


... (v) u_t is an MDS and the second moments of u_t exist, hence u_t is a white noise vector [White (2001)]. ...

... Under (A4), u_t is an F-measurable function of (ε_1, . . . , ε_t) [White (2001)]. ...

... (iii) The score function e_{i,t} is i.i.d., because e_{i,t} is a continuous function of ε_t, and ε_t is i.i.d. [White (2001)]. ...

We contribute to the literature on empirical macroeconomic models with time-varying conditional moments by introducing a heteroskedastic score-driven model with Student's t-distributed innovations, named the heteroskedastic score-driven t-QVAR (quasi-vector autoregressive) model. The t-QVAR model is a robust nonlinear extension of the VARMA (VAR moving average) model. As an illustration, we apply the heteroskedastic t-QVAR model to a dynamic stochastic general equilibrium (DSGE) model, for which we estimate Gaussian-ABCD and t-ABCD representations. We use data on economic output, inflation, interest rate, government spending, aggregate productivity, and consumption of the United States (US) for the period 1954 Q3 to 2022 Q1. Due to the robustness of the heteroskedastic t-QVAR model, even when including the period of the coronavirus disease 2019 (COVID-19) pandemic and the start of the Russian invasion of Ukraine, we find superior statistical performance, lower policy-relevant dynamic effects, and higher estimation precision of the impulse response function (IRF) for US gross domestic product (GDP) growth and the US inflation rate for the heteroskedastic score-driven t-ABCD representation than for the homoskedastic Gaussian-ABCD representation.

... In linear regression models, it is common to assume that observations can be sorted into clusters, where observations are independent across clusters but correlated within the same cluster. One method to conduct accurate statistical inference in this setting is to estimate the regression model without controlling for within-cluster error correlation, and then compute the so-called cluster-robust standard errors (White, 1984; Liang and Zeger, 1986; Arellano, 1987; Hansen, 2011). These cluster-robust standard errors do not require a model for the within-cluster error structure for consistency, but they do require additional assumptions, such as the number of clusters or the size of the clusters tending to infinity. Cluster-robust standard errors became popular among applied researchers after Rogers (1993) incorporated the method in Stata. ...

... The consistency of the above estimator is established by White (1984), Liang and Zeger (1986), and Hansen (2007), with varying degrees of restrictiveness. ...

... For the purposes of this section, the reader may assume we follow the asymptotic setting of White (1984) unless otherwise stated. For the proof of White (1984) to go through, we need the estimator û_g û_g' to be an asymptotically unbiased estimator of Ω_g. While this is true under White's regularity conditions, it is not necessarily true under high-dimensional asymptotics, which violate one of the regularity conditions of White (1984): that the probability limit of (1/n) X'X be finite and positive definite. ...

Cluster-robust standard errors (Liang and Zeger, 1986) are widely used by empirical researchers to account for cluster dependence in linear models. It is well known that these standard errors are biased. We show that the bias does not vanish under high-dimensional asymptotics by revisiting Chesher and Jewitt's (1987) approach. An alternative leave-cluster-out crossfit (LCOC) estimator that is unbiased, consistent, and robust to cluster dependence is provided under the high-dimensional setting introduced by Cattaneo, Jansson, and Newey (2018). Since the LCOC estimator nests the leave-one-out crossfit estimator of Kline, Saggio, and Solvsten (2019), the two papers are unified. Monte Carlo comparisons give insights into its finite-sample properties. The LCOC estimator is then applied to Angrist and Lavy's (2009) study of the effects of a high school achievement award and Donohue III and Levitt's (2001) study of the impact of abortion on crime.
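The cluster-robust (sandwich) variance discussed above can be sketched in a few lines. This is the classic Liang-Zeger estimator on hypothetical simulated data, not the paper's leave-cluster-out crossfit correction:

```python
import numpy as np

def cluster_robust_se(X, y, clusters):
    """OLS with Liang-Zeger cluster-robust (CRVE) standard errors.
    A minimal sketch; the LCOC bias correction is not implemented."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    u = y - X @ beta                       # OLS residuals
    # "Meat": sum over clusters g of X_g' u_g u_g' X_g
    k = X.shape[1]
    meat = np.zeros((k, k))
    for g in np.unique(clusters):
        idx = clusters == g
        s = X[idx].T @ u[idx]              # score summed within cluster
        meat += np.outer(s, s)
    V = XtX_inv @ meat @ XtX_inv           # sandwich variance
    return beta, np.sqrt(np.diag(V))
```

The loop makes the within-cluster correlation explicit: only cross-products of scores inside the same cluster enter the "meat" matrix.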

... Only in this case can the pretreatment data be used to construct the SC unit and estimate the treatment effect for the posttreatment periods. If we treat {λ_t} and {δ_t} as stochastic and assume T_1 diverges at rate O(T_0), then using the central limit theorem for dependent observations (see, e.g., Theorems 5.16 and 5.20 in White, 1984), ...

... where we recall that e^(i)_t = e_{0,t} − e_{i,t}. From Condition 7 and Theorem 3.49 in White (1984), ...

... can be uniformly bounded, as a result of Condition 8(i). Using Theorem 5.20 in White (1984), such properties of e ...

This paper provides new insights into the asymptotic properties of the synthetic control method (SCM). We show that the synthetic control (SC) weight converges to a limiting weight that minimizes the mean squared prediction risk of the treatment-effect estimator when the number of pretreatment periods goes to infinity, and we also quantify the rate of convergence. Observing the link between the SCM and model averaging, we further establish the asymptotic optimality of the SC estimator under imperfect pretreatment fit, in the sense that it achieves the lowest possible squared prediction error among all possible treatment effect estimators that are based on an average of control units, such as matching, inverse probability weighting and difference-in-differences. The asymptotic optimality holds regardless of whether the number of control units is fixed or divergent. Thus, our results provide justifications for the SCM in a wide range of applications. The theoretical results are verified via simulations.
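The SC weight described above solves a least-squares problem over the probability simplex. A minimal sketch on simulated data, using a simple Frank-Wolfe loop (an assumption of this illustration, not the estimator's actual implementation):

```python
import numpy as np

def sc_weights(Y0, y1, iters=5000):
    """Minimize ||y1 - Y0 w||^2 over the simplex {w >= 0, sum(w) = 1}
    via Frank-Wolfe. Y0: (T0 x J) control outcomes, y1: (T0,) treated unit."""
    J = Y0.shape[1]
    w = np.full(J, 1.0 / J)                # start at the simplex center
    for k in range(iters):
        grad = 2.0 * Y0.T @ (Y0 @ w - y1)  # gradient of the squared loss
        s = np.zeros(J)
        s[np.argmin(grad)] = 1.0           # best vertex of the simplex
        gamma = 2.0 / (k + 2.0)            # standard Frank-Wolfe step size
        w = (1 - gamma) * w + gamma * s    # convex step keeps w on the simplex
    return w
```

Because every iterate is a convex combination of simplex vertices, the nonnegativity and adding-up constraints hold exactly at every step, with no projection needed.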

... (sandwich) estimation (White, 1980, 1984) with clustered standard errors. Table 1 reports the results. ...

... Data will be made available on request.
1960, 1964, 1968, 1968, 1970, 1991, 1991*, 1996, 1996*
Burkina Faso 1965, 1978, 1978*, 1991, 1998, 2005, 2020
Burundi 1984, 1993, 2020
Cameroon 1965, 1975, 1980, 1984, 1988, 1992, 1997
Cape Verde 2011
Central African Republic 1964, 1981, 1993, 1993*, 1999, 2005, 2005*
Chad 1969, 1996, 1996
Comoros 1978, 1990, 1990*, 2002*
Congo, DR 1977
Congo, R 1961, 1992, 1992*, 2002, 2009
Cote d'Ivoire 1960, 1965, 1970, 1975, 1980, 1985, 1990, 1995, 2000
Djibouti 1987, 1993, 1999, 2005
Equatorial Guinea 1996, 2002, 2009
Gabon 1961, 1967, 1973, 1979, 1986, 1993, 2005, 2009
Gambia 1987, 1992, 1996
Ghana 1992, 1996, 2000, 2000*, 2012
Guinea 1961, 1968, 1974, 1993, 1998
Guinea-Bissau 1994, 1994*, 1999, 2000*, 2005, 2005*, 2009, 2009*, 2012, 2014, 2014*
Kenya 1969, 1974, 1979, 1983, 1988, 1992, 1997, 2002, 2007
Liberia 1985, 1997, 2005, 2005*
Madagascar 1982, 1989, 1992, 1993*, 1996, 1996*
Malawi 1994, 1999, 2009, 2014
Mali 1979, 1992, 1992*, 1997, 2002, 2002*, 2007*
Mauritania 1961, 1966, 1971, 1976, 1992, 1997, 2007, 2007*, 2009, 2014
Mozambique 1994, 1999, 2009, 2014
Namibia 1994, 1999, 2009, 2014
Niger 1965, 1970, 1989, 1993, 1993*, 1996, 1999, 1999*
Nigeria 1979, 1983, 1993, 1999, 2007
Rwanda 1965, 1969
Sao Tome and Principe 1991, 1996, 1996*
Senegal 1963, 1968, 1973, 1978, 1983, 1988, 1993, 2000, 2000*, 2007, 2012, 2012*
Seychelles 1979, 1984, 1989, 1993, 1998*
Sierra Leone 1996, 1996*, 2002, 2007, 2007*, 2012
Sudan 1971, 1996
Tanzania 1962, 1965, 1970, 1975, 1980, 1985, 1990, 1995, 2000, 2005
Togo 1961, 1963, 1979, 1986, 1993, 1998, 2005
Uganda 1996
Zambia 1968, 1973, 1978, 1983, 1988, 1991, 1996
Zimbabwe 1990, 1996, 2002
Notes: Elections in Botswana, Ethiopia, Lesotho, Mauritius, Somalia, South Africa, and Swaziland are excluded due to the parliamentary systems in these countries. Elections marked with an asterisk are second rounds. ...

The academic literature provides two competing hypotheses about the effect of economic downturns on voter turnout: the ‘mobilisation’ hypothesis, according to which people go to the polls to express their discontent with the government's performance; and the ‘withdrawal’ hypothesis, according to which people stay at home on election day, either to attend to more immediate, pressing concerns or to punish the incumbent. In this paper, we test these hypotheses against novel data from 317 presidential elections in 40 African countries over the period from 1960 to 2016. We find that economic growth has a positive effect on voter turnout, consistent with the ‘withdrawal’ hypothesis. The paper contributes to the literature in three ways. First, it provides the most comprehensive macro-level analysis of voter turnout in Africa to date. Second, it shows that African voters respond to changes in aggregate economic measures, thus contributing to the growing literature on economic voting in Africa. Finally, it demonstrates that African voters behave in a way that is consistent with the ‘withdrawal’ hypothesis.

... (viii) ∂l_t/∂s_t and ∂z_t/∂λ_t are bounded functions of ϵ_t (Blazsek et al. [39]). (ix) The scaled score function l_t is an F-measurable function of ϵ_t (White [44]), because l_t is a continuous function of ϵ_t. The scaled score function l_t is strictly stationary and ergodic, because l_t is an F-measurable function of (ϵ_1, . . . ...

... The scaled score function l_t is strictly stationary and ergodic, because l_t is an F-measurable function of (ϵ_1, . . . , ϵ_t), and because ϵ_t is strictly stationary and ergodic (White [44]). (x) The score function z_t is i.i.d., because z_t is a continuous function of ϵ_t, and because ϵ_t is i.i.d. ...

... (White [44]). (xi) The score function z_t is an F-measurable function of ϵ_t (White [44]), because z_t is a continuous function of ϵ_t (Harvey [16]). ...

In the present paper, a linear approach to signal smoothing for nonlinear score-driven models is suggested, using results from the literature on minimum mean squared error (MSE) signals. Score-driven location, trend, and seasonality models with constant and score-driven scale parameters are used, and their parameters are estimated by the maximum likelihood (ML) method. The smoothing procedure is computationally fast, and it uses closed-form formulas for smoothed signals. Applications to monthly data on the seasonally adjusted and the not seasonally adjusted (NSA) United States (US) inflation rate for the period 1948 to 2020 are presented.

... Put simply, Λ_λ depends on the covariances E(X_t X_j). As W_1 and W_2 are assumed to be independent, we must re-center the term ∆²_1; see White (2014) [34], p. 198 for a similar discussion. Notably, our proposed MENC-NEW is simply a re-centered version of the ENC-NEW, corrected by the autocovariances that shift the distribution. ...

... follows from the fact that X_j u_{j+1} is a martingale difference, by Theorem 7.19 in [34], Corollary 29.19 of Davidson (1994) [39], and Theorem 3.1 of Hansen (1992) [40]. The proof is then complete, since T^{−0.5} is o_p(1). ...

Are traditional tests of forecast evaluation well behaved when the competing (nested) model is biased? No, they are not. In this paper, we show analytically and via simulations that, under the null hypothesis of no encompassing, a bias in the nested model may severely distort the size properties of traditional out-of-sample tests in economic forecasting. Not surprisingly, these size distortions depend on the magnitude of the bias and the persistence of the additional predictors. We consider two different cases: (i) there is both in-sample and out-of-sample bias in the nested model; (ii) the bias is present exclusively out-of-sample. To address the former case, we propose a modified encompassing test (MENC-NEW) robust to a bias in the null model. Akin to the ENC-NEW statistic, the asymptotic distribution of our test is a functional of stochastic integrals of quadratic Brownian motions. While this distribution is not pivotal, we can easily estimate the nuisance parameters. To address the second case, we derive the new asymptotic distribution of the ENC-NEW, showing that critical values may differ remarkably. Our Monte Carlo simulations reveal that the MENC-NEW (and the ENC-NEW with adjusted critical values) is reasonably well-sized even when the ENC-NEW (with standard critical values) exhibits rejection rates three times higher than the nominal size.

... Since ε_t is mixing, by Theorem 3.49 in White (2001), ε_t ε_t' is mixing of the same order. Assuming V_{22} is finite and positive definite, by the strong mixing central limit theorem (Theorem A.8 in Lahiri (2003)

... for any ν > 0, and γ_i = i^{−(ν+1)/ν}. By Theorem 3.49 and Lemma 6.16 in White (2001), ...

It is well known that Local Projections (LP) residuals are autocorrelated. Conventional wisdom says that LPs have to be estimated by OLS and that GLS is not possible, because the autocorrelation process is unknown and/or because the GLS estimator would be inconsistent. I show that the autocorrelation process of LPs can be written as a Vector Moving Average (VMA) process of the Wold errors and impulse responses, and that the autocorrelation can be corrected for using a consistent GLS estimator. Monte Carlo simulations show that estimating LPs with GLS can lead to more efficient estimates.
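A plain-OLS local projection of the kind discussed above can be sketched as follows, on simulated AR(1) data (the paper's VMA-based GLS correction is not implemented here):

```python
import numpy as np

def local_projection_irf(y, H):
    """Estimate impulse responses by local projections:
    regress y_{t+h} on a constant and y_t, for h = 0..H, by OLS.
    A plain-OLS sketch; residuals are autocorrelated across horizons."""
    irf = []
    for h in range(H + 1):
        Y = y[h:]                                        # y_{t+h}
        X = np.column_stack([np.ones(len(y) - h), y[:len(y) - h]])
        b = np.linalg.lstsq(X, Y, rcond=None)[0]
        irf.append(b[1])                                 # slope = IRF at horizon h
    return np.array(irf)
```

For an AR(1) process with coefficient ρ, the slope at horizon h consistently estimates ρ^h, which makes the sketch easy to check against a simulation.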

... As observed before, some mixing conditions must be assumed to this end. The definition of the mixing coefficients in Theorem 5 below can be found in Definition 3.42 of White (2001). ...

... Several papers have dealt with the consistency of the HAC estimator for non-stationary sequences (see, e.g., Andrews 1988, Andrews 1991, Newey and West 1987, White 2001). The next proposition shows that, for adequate choices of the weights ω(i, k) and of k, and under some mixing conditions, σ²_{0k} is ratio-consistent when σ_{0k} > 0. Proposition 3. Suppose that (13) holds, that {X_i}_{i∈Z} has φ-mixing coefficients of size −2v/(2v − 1) or α-mixing coefficients of size −2v/(v − 1) for some v > 1, that σ_{0k} > 0 for all k sufficiently large, that H_{0,k} is true with λ_i ∈ [L_1, L_2] for all i, for some fixed 0 < L_1 < L_2 < ∞, that the weights are uniformly bounded, that k → ∞, that ω(i, k) → 1 for each i, and that k /k^{1/4} → 0. ...

This paper studies the problem of simultaneously testing whether each of k samples, coming from k count variables, was generated by a Poisson law. The means of those populations may differ. The proposed procedure is designed for large k, which can be bigger than the sample sizes. First, a test is proposed for the case of independent samples, and then the obtained results are extended to dependent data. In each case, the asymptotic distribution of the test statistic is stated under the null hypothesis as well as under alternatives, which allows one to study the consistency of the test. Specifically, it is shown that the test statistic is asymptotically distribution-free under the null hypothesis. The finite-sample performance of the test is studied via simulation. A real data set application is included.
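A classical single-sample building block for Poissonity testing is the dispersion (variance-to-mean) index. The sketch below is that textbook check on simulated data, not the paper's simultaneous k-sample statistic:

```python
import numpy as np
from scipy import stats

def dispersion_test(x):
    """Poisson dispersion test for a single count sample:
    D = (n - 1) * s^2 / xbar, approximately chi2_{n-1} under H0 (Poisson).
    A classical one-sample check, not the paper's k-sample procedure."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    D = (n - 1) * x.var(ddof=1) / x.mean()   # variance-to-mean dispersion index
    p = stats.chi2.sf(D, df=n - 1)           # upper-tail p-value (overdispersion)
    return D, p
```

Under the Poisson null, the sample variance should be close to the sample mean, so D hovers around its degrees of freedom; overdispersed counts push D far into the right tail.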

... where W_{1,T}(r) := T^{−1/2} Σ_{t=1}^{⌊Tr⌋} (s²_{β,t} − 1). Under Assumption 1, {s²_{β,t} − 1} satisfies the FCLT for mixing processes (Theorem 5.20 of White, 2000), and hence W_{1,T}(r) is O_p(1) uniformly in r. Therefore, the first term of A_{1,T} is O_p(T^{−1/2}). ...

... _{y,t} s_{β,t}|] by the law of large numbers (LLN) for mixing processes (Corollary 3.48 of White, 2000) under Assumption 1. The other terms of B_{1,1,T} and B_{1,2,T} can be shown to be o_p(1) in a similar fashion. ...

This study considers tests for coefficient randomness in predictive regressions. Our focus is on how tests for coefficient randomness are influenced by the persistence of the random coefficient. We find that when the random coefficient is stationary, or I(0), Nyblom's (1989) LM test loses its optimality (in terms of power), which is established against the alternative of an integrated, or I(1), random coefficient. We demonstrate this by constructing tests that are more powerful than the LM test when the random coefficient is stationary, although these tests are dominated in terms of power by the LM test when the random coefficient is integrated. This implies that the best test for coefficient randomness differs from context to context, and practitioners should take into account the persistence of the potentially random coefficient and choose from several tests accordingly. In particular, we show through theoretical and numerical investigations that the product of the LM test and a Wald-type test proposed in this paper is preferable when there is no prior information on the persistence of the potentially random coefficient. This point is illustrated by an empirical application using U.S. stock returns data.

... To address this problem, we used the robust and cluster options in Stata for the regression models. The robust option estimates the standard error for each parameter using the Huber-White estimator (also called the sandwich estimator of variance), replacing the traditional calculation (White, 1984). Table 5 presents the correlations and descriptive statistics for the continuous variables used in the analysis. ...
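Stata's robust option described above corresponds to the Huber-White sandwich estimator. A minimal sketch on hypothetical data, assuming the n/(n−k) small-sample scaling (HC1) that Stata applies:

```python
import numpy as np

def hc1_se(X, y):
    """Huber-White (sandwich) heteroskedasticity-robust standard errors,
    with the n/(n-k) small-sample scaling (HC1). A minimal sketch."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    u = y - X @ beta                            # OLS residuals
    meat = (X * u[:, None] ** 2).T @ X          # sum_i u_i^2 x_i x_i'
    V = n / (n - k) * XtX_inv @ meat @ XtX_inv  # sandwich with HC1 scaling
    return beta, np.sqrt(np.diag(V))
```

The "bread" is the usual (X'X)^{-1}; only the "meat" changes, replacing the homoskedastic s²X'X with a residual-weighted cross-product that is valid under arbitrary heteroskedasticity.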

A project may be conceived as a coalition of absorbed and unabsorbed resources that are allocated by the organisation. As such, a project may be subject to constraints in terms of certain resources combined with excesses (i.e., resource slack) in terms of others. This paper builds on behavioural theory and prospect theory to understand the role of resource slack in projects by exploring: 1) how distinct bundles of absorbed and unabsorbed slack influence project managers' risk perceptions; 2) how the impact of slack on risk perceptions differs according to ongoing project performance. Hypotheses are tested by regression analysis in a sample of 106 project managers. The results show complex effects of project slack on project managers' risk perceptions.

... The original stochastic volatility (SV) model was proposed by Taylor (1986) and others: Taylor (1986) proposed a discrete-time SV model, White (1984) proposed a continuous-time SV model, and Harvey and Shephard (1996) discussed an asymmetric SV model with leverage effects between the return process and the stochastic volatility process, estimated by the quasi-maximum likelihood method. Han et al. (2016) described an asymmetric stochastic volatility model using Gaussian regression with parameter estimation by the sequential Monte Carlo method. ...

The purpose of this paper is to study generalized method of moments (GMM) estimation procedures for the realized stochastic volatility model; we give the moment conditions for this model and then obtain the parameter estimates. We then apply these moment conditions to the realized stochastic volatility model to improve volatility prediction. This paper selects the Shanghai Composite Index (SSE) as the original data for the model research and carries out volatility prediction under a realized stochastic volatility model. Markov chain Monte Carlo (MCMC) estimation and quasi-maximum likelihood (QML) estimation are applied to the parameter estimation of the realized stochastic volatility model for comparison with the GMM method, and the volatility prediction accuracy of these three methods is compared. The results of the empirical research show that the prediction effect of the model using the parameters obtained by the GMM method is close to that of the MCMC method, and the effect is clearly better than that of the quasi-maximum likelihood estimation method.

... See, e.g., Andrews (1991) and White (2014) for details. ...

This paper considers a linear panel model with interactive fixed effects and unobserved individual and time heterogeneities that are captured by some latent group structures and an unknown structural break, respectively. To enhance realism, the model may have different numbers of groups and/or different group memberships before and after the break. With a preliminary nuclear-norm-regularized estimation followed by row- and column-wise linear regressions, we estimate the break point based on the idea of binary segmentation, and we estimate the latent group structures, together with the number of groups before and after the break, by a sequential testing K-means algorithm. It is shown that the break point, the number of groups, and the group memberships can each be estimated correctly with probability approaching one. Asymptotic distributions of the estimators of the slope coefficients are established. Monte Carlo simulations demonstrate excellent finite-sample performance for the proposed estimation algorithm. An empirical application to real house price data across 377 Metropolitan Statistical Areas in the US from 1975 to 2014 suggests the presence both of structural breaks and of changes in group membership.

... To address the possible bias related to heteroscedasticity and autocorrelation, we use clustered (province) robust standard errors, as suggested by the literature (White, 1984; Arellano, 1987; Bertrand et al., 2002, 2004). We also account for the specific influence of time and province by including time (year) and province fixed effects (Pischke, 2005; Wing et al., 2018). ...

While the 1995 Occupational Safety and Health Act (OSH) regulation transformed the outlook on workplaces in Spain, characterized by a lack of preventive protection, public statistics have reported an increasing trend in the postregulation workplace accident rates. This study uses microdata from official national statistics to examine the effect of the OSH regulation on the reported accidents while focusing on its severity. Accordingly, we apply a difference-in-difference assessment method where a comparable group is formed by the contemporaneous in itinere accidents (commuting), which are legally and statistically considered work-related accidents but not directly impacted by the OSH regulation, with a focus on the workplace environment. The results reveal that the nonfatal accident rate decreased after the implementation of the regulation. However, when we isolate the effect of the regulation on accidents that usually provoke hard-to-diagnose injuries (dislocations, back pain, sprains, and strains), we obtain a significant increase in the accident rate. Moral hazard mixed effects seem to have played a crucial role in these dynamics through overreporting and/or Peltzman effects, often offsetting accident reduction intended by the OSH regulation.
JEL CLASSIFICATION
K31, I18, D04, H43, J28

... To deal with heteroscedasticity, all analyses are carried out with robust standard errors (Huber, 1967; White, 1984). The best fits of both the RE and the pooled analysis results are presented in Tables 2 and 3, respectively. ...

The paper looks into agricultural production at the subnational level in Greece, across four regions (Thessaly, north Greece, west Greece, and the rest of Greece), from the EU's 2003-04 CAP reform and the 2004 enlargement by ten member states to the end of the country's long economic recession in 2016. It relies on annual observations from 2004 to 2016, supplied by the EU Commission, and plots the evolution of output, labor, capital, the land area used, energy costs, the respective average productivities, and the output-to-energy-costs ratio. In addition, it econometrically estimates the impact of the said inputs on output, and the magnitude of multifactor productivity (i.e., of entrepreneurship, technology, and the impact of the factors not considered in the regression) in a translog production function framework. Alternative specifications are considered, and all regressors are rendered uncorrelated with each other so as to deal with heteroscedasticity. The results suggest that labor and the cost of energy are the main explanatory factors. However, their impact, along with the size of multifactor productivity, varies across space. This implies that there is room for spatially differentiated interventions.

... One of the causes of volatility is a wrong estimation model setting. White [29,30,31] noted that an exact, correctly specified model form is desired, but sometimes even researchers' best efforts cannot resolve the misspecification of the model form. Misspecification affects the residuals and induces heteroscedasticity, which causes measurement error in the volatilities. ...

This paper aims to estimate the volatility of exchange rates by running numerous mathematical models comprising three estimation components: the expected value, heteroscedasticity, and autocorrelation. Artificial intelligence in the estimation process determines the best model by the criterion of minimum mean squared error. The main results show that the intelligent selection approach over regression models is the best tool for addressing misspecification of the model form and obtaining more accurate exchange-rate volatility, as represented by heteroscedasticity. Second, economic stages can be classified based on the heteroscedastic estimation of USD/TWD. Moreover, the heteroscedasticity shows that the volatilities of the appreciation in economic stages after 2015 differ from those of the past. In sum, heteroscedasticity in the best estimation can automatically show more exact volatility patterns and what happened in the exchange rates across different economic stages.

... Spurred by the development of the Generalized Method of Moments (GMM) by Hansen (1982), econometricians started to tackle this problem. Early contributions (in a more general non-linear context) include White (1984), White and Domowitz (1984), and Newey and West (1987), and a comprehensive treatment was provided by Andrews (1991), who used results from the theory of spectral density estimation developed much earlier. Since then, all the theoretical and empirical work has concentrated on OLS, and a flood of papers has been devoted to delivering improved estimates of the limit variance of OLS so that the confidence intervals have accurate finite sample coverage rates. ...
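The Newey-West HAC estimator referenced in the snippet above, for the long-run variance of a scalar series, can be sketched as:

```python
import numpy as np

def newey_west_lrv(u, L):
    """Newey-West (1987) HAC estimator of the long-run variance of a
    scalar series u:  gamma_0 + 2 * sum_{l=1}^{L} (1 - l/(L+1)) * gamma_l,
    where the Bartlett-kernel weights guarantee a nonnegative estimate."""
    u = np.asarray(u, dtype=float)
    u = u - u.mean()
    T = len(u)
    gamma = lambda l: (u[l:] @ u[:T - l]) / T       # sample autocovariance
    return gamma(0) + 2.0 * sum((1 - l / (L + 1)) * gamma(l)
                                for l in range(1, L + 1))
```

With L = 0 the estimator collapses to the ordinary sample variance; for an AR(1) series with coefficient ρ and unit innovation variance, it should approach the true long-run variance 1/(1−ρ)² as L and T grow.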

We consider a linear regression model with serially correlated errors. It is well known that with exogenous regressors Generalized Least Squares (GLS) is more efficient than Ordinary Least Squares (OLS). However, three main reasons are usually advanced for adopting OLS instead of GLS. The first is that it is generally believed that OLS is valid whether the regressors are exogenous or not, while GLS is only consistent when dealing with predetermined regressors. Second, OLS is more robust than GLS. Third, the gains in accuracy can be minor and the inference can be misleading (e.g., bad coverage rates of the confidence intervals). We show that all three claims are wrong. The first contribution is to dispel the belief that OLS is valid while only requiring predetermined regressors, whereas GLS is valid only with exogenous regressors. We show the opposite to be true. The second contribution is to show that GLS is indeed much more robust than OLS. By that we mean that even a blatantly incorrect GLS correction can achieve a lower MSE than OLS. The third contribution is to devise a feasible GLS (FGLS) procedure, valid whether or not the regressors are exogenous, which achieves an MSE close to that of the correctly specified infeasible GLS. We also consider a further GLS correction for heteroskedastic errors.
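A simple Cochrane-Orcutt-style feasible GLS for AR(1) errors illustrates the kind of correction under discussion (a generic textbook sketch, not the paper's proposed FGLS procedure):

```python
import numpy as np

def fgls_ar1(X, y):
    """Feasible GLS for a linear model with AR(1) errors
    (Cochrane-Orcutt-style sketch): 1) OLS, 2) estimate rho from the
    residuals, 3) quasi-difference the data, 4) OLS on the transformed data."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    u = y - X @ b_ols
    rho = (u[1:] @ u[:-1]) / (u[:-1] @ u[:-1])   # AR(1) coeff. of residuals
    ys = y[1:] - rho * y[:-1]                     # quasi-differenced data
    Xs = X[1:] - rho * X[:-1]
    b_fgls = np.linalg.lstsq(Xs, ys, rcond=None)[0]
    return b_fgls, rho
```

Quasi-differencing turns the AR(1) error into (approximately) white noise, so the second-stage OLS recovers most of the efficiency of infeasible GLS when the AR(1) form is roughly right.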

... Pooling across all 33 countries for which we have life satisfaction data, we next predicted individuals' life satisfaction from all the fundamental social motives simultaneously, using Cluster-Robust Errors (CRE) analyses to account for potential non-independence of participants within countries (White, 1984). We found that before the pandemic, people had higher life satisfaction if they had higher Affiliation (Group) and Affiliation (Independence) motives (β = 0.14, t(937) = 2.18, p = .03; ...

The COVID-19 pandemic caused drastic social changes for many people, including separation from friends and coworkers, enforced close contact with family, and reductions in mobility. Here we assess the extent to which people's evolutionarily-relevant basic motivations and goals—fundamental social motives such as Affiliation and Kin Care—might have been affected. To address this question, we gathered data on fundamental social motives in 42 countries (N = 15,915) across two waves, including 19 countries (N = 10,907) for which data were gathered both before and during the pandemic (pre-pandemic wave: 32 countries, N = 8998; 3302 male, 5585 female; Mage = 24.43, SD = 7.91; mid-pandemic wave: 29 countries, N = 6917; 2249 male, 4218 female; Mage = 28.59, SD = 11.31). Samples include data collected online (e.g., Prolific, MTurk), at universities, and via community sampling. We found that Disease Avoidance motivation was substantially higher during the pandemic, and that most of the other fundamental social motives showed small, yet significant, differences across waves. Most sensibly, concern with caring for one's children was higher during the pandemic, and concerns with Mate Seeking and Status were lower. Earlier findings showing the prioritization of family motives over mating motives (and even over Disease Avoidance motives) were replicated during the pandemic. Finally, well-being remained positively associated with family-related motives and negatively associated with mating motives during the pandemic, as in the pre-pandemic samples. Our results provide further evidence for the robust primacy of family-related motivations even during this unique disruption of social life.

... One of the most important assumptions in time series applications concerns serial correlation in the errors. This situation makes it possible to allow the GMM weighting matrix to account for serial correlation of unknown form, as well as for heteroscedasticity, as discussed in Hansen (1982), White (1984), and Newey & West (1987). ...

The study examines the impacts of financial and macroeconomic factors on financial stability in emerging countries by focusing on Turkey's banking sector. In this context, financial stability is represented by non-performing loans (NPLs); 4 financial and 3 macroeconomic indicators as well as the COVID-19 pandemic are included as explanatory variables; quarterly data from 2005/Q1 to 2020/Q3 are analyzed using the Residual Augmented Least Squares (RALS) unit root test and the generalized method of moments (GMM). The empirical results show that (i) credit volume, which is a financial indicator, has the highest effect on NPLs; (ii) risk-weighted assets, unemployment rate, foreign exchange rate, and economic growth have a statistically significant impact on NPLs, respectively; (iii) the COVID-19 pandemic has an increasing impact on NPLs; (iv) inflation and interest rate have a positive coefficient as expected, although they are not statistically significant. The results highlight the greater importance of financial factors (i.e., credit volume and risk-weighted assets) relative to macroeconomic factors in explaining NPLs. Based on the empirical results of the study, Turkish policymakers should focus first on financial variables (i.e., credit growth and risk-weighted assets) while also considering the effects of the other factors.

... We estimate the models as a system of equations allowing for autocorrelation of the error terms by clustering standard errors at the Hispanic group level. [29][30][31] This procedure also allows us to account for the correlation of mortality rates across age groups and gender within each Hispanic group as well as the autocorrelation of mortality for each group, and to generate estimates of aggregate excess mortality for the population based on the stratum-specific models. ...

Objectives
To determine death occurrences of Puerto Ricans on the mainland USA following the arrival of Hurricane Maria in Puerto Rico in September 2017.
Design
Cross-sectional study.
Participants
Persons of Puerto Rican origin on the mainland USA.
Exposures
Hurricane Maria.
Main outcome
We use an interrupted time series design to analyse all-cause mortality of Puerto Ricans in the USA following the hurricane. Hispanic origin data from the National Vital Statistics System and from the Public Use Microdata Sample of the American Community Survey are used to estimate monthly origin-specific mortality rates for the period 2012–2018. We estimated log-linear regressions of monthly deaths of persons of Puerto Rican origin by age group, gender, and educational attainment.
Results
We found an increase in mortality for persons of Puerto Rican origin during the 6-month period following the hurricane (October 2017 through March 2018), suggesting that deaths among these persons were 3.7% (95% CI 2.5% to 4.9%) higher than would have otherwise been expected. In absolute terms, we estimated 514 excess deaths (95% CI 346 to 681) of persons of Puerto Rican origin that occurred on the mainland USA, concentrated in those aged 65 years or older.
Conclusions
Our findings suggest an undercounting of previous deaths as a result of the hurricane due to the systematic effects on the displaced and resident populations in the mainland USA. Displaced populations are frequently overlooked in disaster relief and subsequent research. Ignoring these populations provides an incomplete understanding of the damages and loss of life.

... However, the results from such a transformation should be interpreted in terms of the mean of the transformed variable and not of the mean of y (Cribari-Neto and Zeileis, 2010). This distinction follows from Jensen's inequality: the conditional mean of the transformed variable may differ from the transformation of the conditional mean of the variable (Grillenzoni, 1998; White, 1984). ...

In this paper, we propose five prediction intervals for the beta autoregressive moving average model. This model is suitable for modeling and forecasting variables that assume values in the interval (0, 1). Two of the proposed prediction intervals are based on approximations considering the normal distribution and the quantile function of the beta distribution. We also consider bootstrap-based prediction intervals, namely: (i) the bootstrap prediction errors (BPE) interval; (ii) the bias-corrected and acceleration (BCa) prediction interval; and (iii) the percentile prediction interval based on the quantiles of the bootstrap-predicted values for two different bootstrapping schemes. The proposed prediction intervals were evaluated in Monte Carlo simulations. The BCa prediction interval offered the best performance among the evaluated intervals, showing lower coverage rate distortion and small average length. We applied our methodology to predicting the water level of the Cantareira water supply system in São Paulo, Brazil.
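
A percentile-type bootstrap prediction interval of the kind evaluated in the abstract can be illustrated on a plain AR(1) with resampled residuals. This is only a generic sketch; the paper's beta ARMA model and its BCa refinement are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative AR(1) series (the paper's model is a beta ARMA).
n, phi = 200, 0.6
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.normal(scale=0.5)

# Fit AR(1) by least squares and collect residuals.
phi_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
resid = y[1:] - phi_hat * y[:-1]

# Percentile interval: resample residuals, form one-step-ahead
# bootstrap predictions, and take empirical quantiles.
B = 2000
preds = phi_hat * y[-1] + rng.choice(resid, size=B, replace=True)
lo, hi = np.quantile(preds, [0.025, 0.975])
```

The BCa variant would additionally adjust the two quantile levels for bias and acceleration before reading them off `preds`.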

... For simplicity, we here make the strict assumption of independence, but it is also possible to assume weaker forms of this assumption under which the sum still approximates a normal distribution (McLeish, 1974; White, 2001). ...

In modern test theory, response variables are a function of a common latent variable that represents the measured attribute, and error variables that are unique to the response variables. While considerable thought goes into the interpretation of latent variables in these models (e.g., validity research), the interpretation of error variables is typically left implicit (e.g., describing error variables as residuals). Yet, many psychometric assumptions are essentially assumptions about error and thus being able to reason about psychometric models requires the ability to reason about errors. We propose a causal theory of error as a framework that enables researchers to reason about errors in terms of the data-generating mechanism. In this framework, the error variable reflects myriad causes that are specific to an item and, together with the latent variable, determine the scores on that item. We distinguish two types of item-specific causes: characteristic variables that differ between people (e.g., familiarity with words used in the item), and circumstance variables that vary over occasions in which the item is administered (e.g., a distracting noise). We show that different assumptions about these unique causes (a) imply different psychometric models; (b) have different implications for the chance experiment that makes these models probabilistic models; and (c) have different consequences for item bias, local homogeneity, and reliability coefficient α and the test-retest correlation. The ability to reason about the causes that produce error variance puts researchers in a better position to motivate modeling choices. (PsycInfo Database Record (c) 2022 APA, all rights reserved).

... However, the Breusch-Pagan test (Breusch and Pagan, 1980) showed heteroscedasticity. To address this problem and ensure consistent and efficient estimates for the significance of the variables, the p-values were corrected by White's method (White, 1980, 1984) in the random effects models. Table 1 presents the descriptive statistics of the main variables used for the study. ...

... As developed by White (1984) with extensions by Liang and Zeger (1986) and Arellano (1987). 14 Data accessed 19 November 2020 from https://www.nber.org/research/data/current-population-survey-cps-data-nb ...

Triple difference has become a widely used estimator in empirical work. A close reading of articles in top economics journals reveals that the use of the estimator to a large extent rests on intuition. The identifying assumptions are neither formally derived nor generally agreed on. We give a complete presentation of the triple difference estimator, and show that even though the estimator can be computed as the difference between two difference-in-differences estimators, it does not require two parallel trend assumptions to have a causal interpretation. The reason is that the difference between two biased difference-in-differences estimators will be unbiased as long as the bias is the same in both estimators. This requires only one parallel trend assumption to hold.
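
The computational point of the abstract, that the triple difference is the difference of two difference-in-differences estimates, can be shown directly with hypothetical cell means:

```python
import numpy as np

# Cell means indexed by (state, group, period); all numbers hypothetical.
y = {
    ("A", "target", 0): 10.0, ("A", "target", 1): 14.0,
    ("A", "other",  0):  9.0, ("A", "other",  1): 11.0,
    ("B", "target", 0): 10.5, ("B", "target", 1): 12.0,
    ("B", "other",  0):  9.5, ("B", "other",  1): 11.0,
}

def did(state, group):
    """Before/after difference for one state-group cell."""
    return y[(state, group, 1)] - y[(state, group, 0)]

# DiD within each state (target vs other group), then across states.
did_A = did("A", "target") - did("A", "other")   # 4.0 - 2.0 = 2.0
did_B = did("B", "target") - did("B", "other")   # 1.5 - 1.5 = 0.0
ddd = did_A - did_B                              # triple difference = 2.0
```

As the abstract stresses, a causal reading of `ddd` needs only that any bias in `did_A` and `did_B` is the same, not that each DiD is unbiased on its own.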

... Notice that condition i) is sufficient for the CLT to hold; see Theorem 5.20 in White (2000). See also Theorem 14.19 in Hansen (2010). ...

In this paper, we propose a correlation-based test for the evaluation of two competing forecasts. Under the null hypothesis of equal correlations with the target variable, we derive the asymptotic distribution of our test using the Delta method. This null hypothesis is not necessarily equivalent to the null of equal Mean Squared Prediction Errors (MSPE). Specifically, it might be the case that the forecast displaying the lowest MSPE also exhibits the lowest correlation with the target variable: this is known as "The MSPE paradox" (Pincheira and Hardy, 2021). In this sense, our approach should be seen as complementary to traditional tests of equality in MSPE. Monte Carlo simulations indicate that our test has good size and power. Finally, we illustrate the use of our test in an empirical exercise in which we compare two different inflation forecasts for a sample of OECD economies. We find more rejections of the null of equal correlations than rejections of the null of equality in MSPE.
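
The logic of comparing two forecasts by their correlation with the target can be sketched as follows. The paper derives the test analytically via the Delta method; the bootstrap standard error below is a stand-in for that derivation, and all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
target = rng.normal(size=n)                   # variable to be forecast
f1 = target + rng.normal(scale=1.0, size=n)   # more accurate forecast
f2 = target + rng.normal(scale=2.0, size=n)   # noisier forecast

r1 = np.corrcoef(target, f1)[0, 1]
r2 = np.corrcoef(target, f2)[0, 1]

# Bootstrap the sampling distribution of the correlation difference.
# (The paper instead obtains the asymptotic distribution analytically.)
B = 1000
diffs = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)
    diffs[b] = (np.corrcoef(target[idx], f1[idx])[0, 1]
                - np.corrcoef(target[idx], f2[idx])[0, 1])
se = diffs.std(ddof=1)
z = (r1 - r2) / se    # large |z| rejects equal correlations
```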

... In Equation (15), the dynamic interaction effects are time-dependent due to D_t. In the empirical applications of the present paper we replace D_t by the sample average (White 2001) for the last 10 observations of the sample, because, as noted in the work of Castle and Hendry (2020, p. 111), humanity has influenced climate for the last 10 thousand years. The use of the sample averages for dynamic models with time-varying IRFs is motivated by the work of Herwartz and Lütkepohl (2000). ...

We extend a recent climate-econometric model that we call benchmark ice-age model, by using score-driven filters of conditional mean and variance which generalize the updating mechanism of the benchmark model. For the period of the last 798 thousand years, we use the climate variables global ice volume (Ice_t), atmospheric carbon dioxide level (CO2_t), and Antarctic land surface temperature (Temp_t). Those variables are cyclical and driven by strictly exogenous orbital variables such as the eccentricity of the Earth's orbit, obliquity, and precession of the equinox. We show that the score-driven ice-age models improve the statistical performance of the benchmark ice-age model, and the results for the benchmark model reported in the literature are robust. One of our objectives is to separate the effects of the orbital variables and the effects of humanity on the Earth's climate. We forecast the climate variables out-of-sample for the last 100 thousand years. For the last 10-15 thousand years of the forecasting period, for which humanity influenced the Earth's climate, we find that: (i) the forecasts of Ice_t are above the observed Ice_t, (ii) the forecasts of the CO2_t level are below the observed CO2_t, and (iii) the forecasts of Temp_t are below the observed Temp_t.

... Noting that the ς_{T,i}(C)'s are independent across i, by verifying the Liapounov condition, we can show that the Lindeberg-Feller central limit theorem (CLT) holds; see, e.g., Theorem 5.10 in White (2001). It follows that (1/√N) Σ_{i=1}^N ς_{T,i}(C) → ...

This paper considers a probit model for panel data in which the individual effects vary over time by interacting with unobserved factors. In estimation we adopt a correlated random effects approach for individual effects to get around the incidental parameter problem. This allows us to construct (asymptotically) unbiased estimators for average marginal effects (AMEs), which are often the ultimate quantities of interest in many empirical studies. We derive the asymptotic distributions for the AME estimators as well as provide the consistent estimators for their asymptotic variances. Next, we design a specification test for detecting whether individual effects are time-varying or not, and establish the asymptotic distribution for the proposed test statistic under the null hypothesis of no time variation of individual effects. Monte Carlo simulations demonstrate satisfactory finite sample performance of our proposed method. An empirical application to study the effect of fertility on labour force participation (LFP) is provided. We find that fertility has a larger impact on female LFP in Germany than in the US during the 1980s. We also provide some new empirical evidence of a even stronger effect of fertility on LFP during the 2010s in Germany, which might call for a reconsideration of relevant policies recently enacted such as the subsidized child care programme.

... It is straightforward to see that Z_i is bounded, and hence any moment of |Z_i| is finite. Then we can apply the Lyapunov central limit theorem (for example, Theorem 5.11 in White (2001)) to obtain that Σ Z_i/√n converges in distribution to N(0, V). The desired result is thus proved. ...

This paper studies the local linear regression (LLR) estimation of the conditional distribution function F(y|x). We derive three uniform convergence results: the uniform bias expansion, the uniform convergence rate, and the uniform asymptotic linear representation. The uniformity of the above results is not only with respect to x but also y, and therefore these results are not covered by the current developments in the literature on local polynomial regressions. Such uniform convergence results are especially useful when the conditional distribution estimator is the first stage of a semiparametric estimator. We demonstrate the usefulness of these uniform results with an example.
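
A minimal sketch of the LLR estimator of F(y|x): run a kernel-weighted linear regression of the indicator 1{Y <= y0} on X centered at x0, and read off the intercept. The Gaussian kernel and the bandwidth below are arbitrary illustrative choices, not the paper's recommendations:

```python
import numpy as np

def llr_cdf(x_data, y_data, x0, y0, h):
    """Local linear estimate of F(y0 | x0): weighted least squares of
    the indicator 1{Y <= y0} on (1, X - x0) with kernel weights."""
    z = (y_data <= y0).astype(float)
    u = (x_data - x0) / h
    w = np.exp(-0.5 * u ** 2)               # Gaussian kernel weights
    X = np.column_stack([np.ones_like(u), x_data - x0])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
    return np.clip(beta[0], 0.0, 1.0)       # intercept = F-hat(y0 | x0)

rng = np.random.default_rng(3)
n = 2000
x = rng.uniform(-1, 1, size=n)
y = x + rng.normal(size=n)                  # Y | X=x ~ N(x, 1)
F_hat = llr_cdf(x, y, x0=0.0, y0=0.0, h=0.2)
# True F(0 | 0) = 0.5 for this design.
```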

Estimating poverty measures for disabled people in developing countries is often difficult, partly because relevant data are not readily available. We extend the small-area estimation developed by Elbers, Lanjouw and Lanjouw (2002, 2003) to estimate poverty by the disability status of the household head, when the disability status is unavailable in the survey. We propose two alternative approaches to this extension: Aggregation and Instrumental Variables Approaches. We apply these approaches to data from Tanzania and show that both approaches work. Our estimation results show that disability is indeed positively associated with poverty in every region of mainland Tanzania.

Systemic risk measures such as CoVaR, CoES and MES are widely-used in finance, macroeconomics and by regulatory bodies. Despite their importance, we show that they fail to be elicitable and identifiable. This renders forecast comparison and validation, commonly summarized as ‘backtesting’, impossible. The novel notion of multi-objective elicitability solves this problem by relying on bivariate scores equipped with the lexicographic order. Based on this concept, we propose Diebold–Mariano type tests with suitable bivariate scores to compare systemic risk forecasts. We illustrate the test decisions by an easy-to-apply traffic-light approach. Finally, we apply our traffic-light approach to DAX 30 and S&P 500 returns, and infer some recommendations for regulators.

A two-step estimator of a nonparametric regression function via Kernel regularized least squares (KRLS) with parametric error covariance is proposed. The KRLS, not considering any information in the error covariance, is improved by incorporating a parametric error covariance, allowing for both heteroskedasticity and autocorrelation, in estimating the regression function. A two step procedure is used, where in the first step, a parametric error covariance is estimated by using KRLS residuals and in the second step, a transformed model using the error covariance is estimated by KRLS. Theoretical results including bias, variance, and asymptotics are derived. Simulation results show that the proposed estimator outperforms the KRLS in both heteroskedastic errors and autocorrelated errors cases. An empirical example is illustrated with estimating an airline cost function under a random effects model with heteroskedastic and correlated errors. The derivatives are evaluated, and the average partial effects of the inputs are determined in the application.

We examine asymptotic properties of the OLS estimator when the values of the regressor of interest are assigned randomly and independently of other regressors. We find that the OLS variance formula in this case is often simplified, sometimes substantially. In particular, when the regressor of interest is independent not only of other regressors but also of the error term, the textbook homoskedastic variance formula is valid even if the error term and auxiliary regressors exhibit a general dependence structure. In the context of randomized controlled trials, this conclusion holds in completely randomized experiments with constant treatment effects. When the error term is heteroskedastic with respect to the regressor of interest, the variance formula has to be adjusted not only for heteroskedasticity but also for the correlation structure of the error term. However, even in the latter case, some simplifications are possible, as only a part of the correlation structure of the error term should be taken into account. In the context of randomized controlled trials, this implies that the textbook homoskedastic variance formula is typically not valid if treatment effects are heterogeneous, but heteroskedasticity-robust variance formulas are valid if treatment effects are independent across units, even if the error term exhibits a general dependence structure. In addition, we extend the results to the case when the regressor of interest is assigned randomly at a group level, such as in randomized controlled trials with treatment assignment determined at a group (e.g., school/village) level.
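
The heteroskedasticity-robust (White) variance referred to in the abstract can be sketched for a randomly assigned binary regressor whose error variance depends on treatment; the simulated design below is hypothetical:

```python
import numpy as np

def hc0_cov(X, u):
    """White's HC0 covariance of bhat:
    (X'X)^-1 (sum_i u_i^2 x_i x_i') (X'X)^-1."""
    XtX_inv = np.linalg.inv(X.T @ X)
    meat = (X * (u ** 2)[:, None]).T @ X
    return XtX_inv @ meat @ XtX_inv

rng = np.random.default_rng(4)
n = 1000
d = rng.integers(0, 2, size=n).astype(float)   # randomly assigned regressor
w = rng.normal(size=n)                          # auxiliary regressor
e = rng.normal(scale=1.0 + d, size=n)           # variance depends on d
X = np.column_stack([np.ones(n), d, w])
y = 0.5 + 1.0 * d + 0.3 * w + e
beta = np.linalg.lstsq(X, y, rcond=None)[0]
u = y - X @ beta
se_robust = np.sqrt(np.diag(hc0_cov(X, u)))     # robust standard errors
```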

Keywords: Severity; Diabetes; Logit Model; Probit Model; Sample Size
Recently, diabetes has become one of the most common and deadly diseases in the world. The severity of the disease may depend on the treatment as well as some other factors. This severity may be classified into different categories according to the levels of its complications in the patients. This study is an attempt to investigate the significant factors in the severity of diabetes using the logit and probit models and to compare results from the two models. The study also seeks to identify the effects of sample size on the estimates of the parameters of the models. The results obtained show that the probit model performed a little better, with age as the only significant factor identified in the two models among the factors investigated. The results also show that the efficiency of the coefficients of the logit and probit models increases as the sample size increases.

This dissertation contributes to the academic research in econometrics and financial risk management. Our research's goal is twofold: (i) to quantify the financial risks incurred by financial institutions and (ii) to assess the validity of the risk measures commonly used in the financial industry or by regulators. We focus on three kinds of financial risk: (i) credit risk, (ii) market risk, and (iii) systemic risk. In chapters 2 and 3, we develop new methods for modeling and backtesting the volatility and the Expected Shortfall (ES), two measures typically used to quantify the risk of incurred losses in investment portfolios. In Chapter 4, we provide new estimation methods for the systemic risk measures that are used to identify the financial institutions contributing the most to the overall risk in the financial system. In Chapter 2, we develop a volatility structure that groups the whole sequence of intraday returns as functional covariates. Contrary to the well-known GARCH model with exogenous variables (GARCH-X), our approach makes it possible to account for the whole information contained in the intraday price movements via functional data analysis. Chapter 3 introduces an econometric methodology to test for the validity of ES forecasts in market portfolios. This measure is now used to calculate the market risk capital requirements following the adoption of the Basel III accords by the Basel Committee on Banking Supervision (BCBS). Our method exploits the existing relationship between ES and Value-at-Risk (VaR) and complies - as a special case - with the BCBS recommendation of verifying the VaR at two specific risk levels. In Chapter 4, we focus on the elicitability property for the market-based systemic risk measures. A risk measure is said to be elicitable if there exists a loss function such that the risk measure itself is the solution to minimizing the expected loss.
We identify a strictly consistent scoring function for the Marginal Expected Shortfall (MES) and the VaR of the market return jointly and we exploit the scoring function to develop a semi-parametric M-estimator for the pair (VaR, MES).

This study examines the effects of government spending shocks on the Italian credit market using NUTS 3 data over the sample period 2011–2018. The empirical methodology is based on a local projection IV and the identification of a public spending shock is achieved by constructing a Bartik instrument. The empirical evidence shows a mild positive effect of a 1% increase in government spending relative to GDP on the growth of the volume of loans relative to GDP. However, the empirical findings show that government spending helps to ameliorate neither the "size bias," that is, the financial constraints which small firms face relative to larger ones, nor the "home bias" in lending related to the process of bank consolidation in Italy.

Momentum methods have been shown to accelerate the convergence of the standard gradient descent algorithm in practice and theory. In particular, the random partition based minibatch gradient descent methods with momentum (MGDM) are widely used to solve large-scale optimization problems with massive datasets. Despite the great popularity of the MGDM methods in practice, their theoretical properties are still underexplored. To this end, we investigate the theoretical properties of MGDM methods based on the linear regression models. We first study the numerical convergence properties of the MGDM algorithm and derive the conditions for faster numerical convergence rate. In addition, we explore the relationship between the statistical properties of the resulting MGDM estimator and the tuning parameters. Based on these theoretical findings, we give the conditions for the resulting estimator to achieve the optimal statistical efficiency. Finally, extensive numerical experiments are conducted to verify our theoretical results.
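
A minimal sketch of random-partition minibatch gradient descent with heavy-ball momentum on a linear regression, the setting studied in the abstract; the step size, momentum weight, and batch size below are illustrative tuning choices:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 2000, 5
X = rng.normal(size=(n, p))
beta_true = np.arange(1.0, p + 1.0)
y = X @ beta_true + rng.normal(size=n)

# Minibatch gradient descent with momentum (MGDM) on squared loss.
lr, gamma, batch, epochs = 0.05, 0.9, 100, 50
beta = np.zeros(p)
v = np.zeros(p)
for _ in range(epochs):
    perm = rng.permutation(n)          # random partition into minibatches
    for start in range(0, n, batch):
        idx = perm[start:start + batch]
        grad = X[idx].T @ (X[idx] @ beta - y[idx]) / batch
        v = gamma * v - lr * grad      # heavy-ball momentum update
        beta = beta + v
```

The iterate hovers around the OLS solution; how close it gets depends on the interplay of `lr`, `gamma`, and `batch`, which is exactly the kind of tuning-parameter question the paper analyzes.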

This paper derives asymptotic theory for Breitung's (2002, Journal of Econometrics 108, 343-363) nonparametric variance ratio unit root test when applied to regression residuals. The test requires neither the specification of the correlation structure in the data nor the choice of tuning parameters. Compared with popular residuals-based no-cointegration tests, the variance ratio test is less prone to size distortions but has smaller local asymptotic power. However, this paper shows that local asymptotic power properties do not serve as a useful indicator for the power of residuals-based no-cointegration tests in finite samples. In terms of size-corrected power, the variance ratio test performs relatively well and, in particular, does not suffer from power reversal problems detected for, e.g., the frequently used augmented Dickey-Fuller type no-cointegration test. An application to daily prices of cryptocurrencies illustrates the usefulness of the variance ratio test in practice.
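
The partial-sum variance ratio underlying Breitung's test can be sketched as follows. The normalization shown and, in particular, any critical values are assumptions to be checked against Breitung (2002) before serious use; the statistic is merely contrasted on simulated I(0) and I(1) series:

```python
import numpy as np

def breitung_vr(u):
    """Partial-sum variance ratio in the spirit of Breitung (2002):
    rho_T = (T^-2 * sum_t U_t^2) / sum_t u_t^2, with U_t the partial
    sums of the demeaned series. Small values point towards
    stationarity; normalization/critical values assumed, not verified."""
    u = u - u.mean()
    T = len(u)
    U = np.cumsum(u)
    return (np.sum(U ** 2) / T ** 2) / np.sum(u ** 2)

rng = np.random.default_rng(6)
T = 1000
stationary = rng.normal(size=T)              # I(0) residuals
unit_root = np.cumsum(rng.normal(size=T))    # I(1) residuals
vr_stat = breitung_vr(stationary)
vr_ur = breitung_vr(unit_root)
# The statistic is orders of magnitude smaller for the stationary series.
```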

This paper revisits and provides an alternative derivation of the asymptotic results for the Principal Components estimator of a large approximate factor model as considered in Stock and Watson (2002), Bai (2003), and Forni et al. (2009). Results are derived under a minimal set of assumptions with a special focus on the time series setting, which is usually considered in almost all recent empirical applications. Hence, n and T are not treated symmetrically: the former is the dimension of the considered vector of time series, while the latter is the sample size and is therefore relevant only for estimation purposes, but not when it comes to just studying the properties of the model at a population level. As a consequence, following Stock and Watson (2002) and Forni et al. (2009), estimation is based on the classical n × n sample covariance matrix. As expected, all asymptotic results we derive are equivalent to those stated in Bai (2003), where, however, a T × T covariance matrix is considered as a starting point. A series of useful complementary results is also given. In particular, we give some alternative sets of primitive conditions for mean-squared consistency of the sample covariance matrix of the factors, of the idiosyncratic components, and of the observed time series. We also give more intuitive asymptotic expansions for the estimators showing that PCA is equivalent to OLS as long as √T/n → 0 and √n/T → 0, that is, loadings are estimated in a time series regression as if the factors were known, while factors are estimated in a cross-sectional regression as if the loadings were known. The issue of testing multiple restrictions on the loadings as well as building joint confidence intervals for the factors is discussed.
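
The estimation scheme described above, principal components from the n × n sample covariance matrix with loadings taken from the leading eigenvectors and factors from a cross-sectional regression, can be sketched on simulated data. The normalization (loadings scaled by √n) follows the convention the abstract attributes to Stock and Watson (2002); dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
T, n, r = 300, 100, 2
F = rng.normal(size=(T, r))                 # latent factors
L = rng.normal(size=(n, r))                 # loadings
X = F @ L.T + rng.normal(scale=0.5, size=(T, n))

# PCA on the n x n sample covariance matrix.
Xc = X - X.mean(axis=0)
S = Xc.T @ Xc / T
eigval, eigvec = np.linalg.eigh(S)          # ascending eigenvalues
V = eigvec[:, ::-1][:, :r]                  # leading r eigenvectors
L_hat = np.sqrt(n) * V                      # estimated loadings
F_hat = Xc @ L_hat / n                      # cross-sectional regression

# Recovery of the common component (identified up to rotation).
C = (F - F.mean(axis=0)) @ L.T
C_hat = F_hat @ L_hat.T
r2 = 1 - np.sum((C - C_hat) ** 2) / np.sum(C ** 2)
```

Factors and loadings are only identified up to rotation, so accuracy is checked on the common component rather than on F and L directly.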

The autoregressive conditional heteroscedasticity (ARCH) model and its various generalizations have been widely used to analyze economic and financial data. Although many variables like GDP, inflation, and commodity prices are imprecisely measured, research focusing on the mismeasured response processes in GARCH models is sparse. We study a dynamic model with ARCH error where the underlying process is latent and subject to additive measurement error. We show that, in contrast to the case of covariate measurement error, this model is identifiable by using the observations of the proxy process only and no extra information is needed. We construct GMM estimators for the unknown parameters which are consistent and asymptotically normally distributed under general conditions. We also propose a procedure to test the presence of measurement error, which avoids the usual boundary problem of testing variance parameters. We carry out Monte Carlo simulations to study the impact of measurement error on the naive maximum likelihood estimators and have found interesting patterns of their biases. Moreover, the proposed estimators have fairly good finite sample properties.
Keywords: Dynamic ARCH model; Errors in variables; Generalized method of moments; Measurement error; Semiparametric estimation

The study investigated public debt sustainability in sub-Saharan Africa (SSA) by testing the reaction of the primary balance to positive and negative shocks in public debts in a panel of 45 SSA countries. The study found that public debts in SSA are weakly sustainable and there is a highly procyclical fiscal policy bias in SSA countries, particularly in resource-rich countries, indicating that governments' fiscal policy responses are expansionary during economic upturns and contractionary during recessions, which may aggravate recessions and worsen debt situations across SSA. For robustness, the study compares the results with emerging and developed economies. The results indicate that in advanced economies, public debt is sustainable and fiscal policy response is countercyclical. The research and policy implications are discussed.

Next-generation communication systems aim for providing pervasive services, including the high-mobility scenarios routinely encountered in mission-critical applications. Hence we harness the recently-developed reconfigurable intelligent surfaces (RIS) to assist the high-mobility cell-edge users. More explicitly, the passive elements of RISs generate beneficial phase rotations for the reflected signals, so that the signal power received by the high-mobility users is enhanced. However, in the face of high Doppler frequencies, the existing RIS channel estimation techniques that assume block fading generally result in irreducible error floors. In order to mitigate this problem, we propose a new RIS channel estimation technique, which is the first one that performs minimum mean square error (MMSE) based interpolation for the sake of taking into account the time-varying nature of fading even within the coherence time. The RIS modelling invokes only passive elements without relying on RF chains, where both the direct link and RIS-reflected links as well as both the line-of-sight (LoS) and non-LoS (NLoS) paths are taken into account. As a result, the cascaded base station (BS)-RIS-user links involve the multiplicative concatenation of the channel coefficients in the LoS and NLoS paths across the two segments of the BS-RIS and RIS-user links. Against this background, we model the multiplicative RIS fading correlation functions for the first time in the literature, which facilitates MMSE interpolation for estimating the high-dimensional and high-Doppler RIS-reflected fading channels. Our simulation results demonstrate that for a vehicle travelling at a speed as high as 90 mph, employing a low-complexity RIS at the cell-edge using as few as 16 RIS elements is sufficient for achieving substantial power-efficiency gains, where the Doppler-induced error floor is mitigated by the proposed channel estimation technique.

The empirical likelihood ratio (ELR) test is proposed for uncovering a structural change in integer-valued autoregressive (INAR) processes. The limiting distribution is derived under the null hypothesis that the parameter did not change at the anticipated change points. To evaluate the finite-sample performance of the proposed ELR test, the empirical sizes and powers are investigated in a simulation study. The ELR test is also applied to real data on infectious disease and crime counts.

We refine the approximate factor model of asset returns by distinguishing between strong factors, whose sum of squared factor betas grow at the same rate as the number of assets, and semi-strong factors, whose sum of squared factor betas grow to infinity, but at a slower rate. We develop a test statistic for strength of factors based on the cross-sectional mean-square of regression-estimated betas. We also describe an adjusted version of the test statistic to differentiate semi-strong factors from strong factors. We apply the methodology to daily equity returns to characterize some pre-specified factors as strong or semi-strong.

Very few books on econometrics are available in Arabic. To fill this gap, I have produced my book on econometrics in the Arabic language. The file includes a copy of the first introductory chapter and a listing of the references in Arabic and English.

Credibility is elusive, but Blinder [(2000) American Economic Review 90, 1421–1431.] generated a consensus in the literature by arguing that “A central bank is credible if people believe it will do what it says.” To implement this idea, we first measure people’s beliefs by using survey data on inflation’s expectations. Second, we compare beliefs with explicit (or tacit) targets, taking into account the uncertainty in our estimate of beliefs (asymptotic 95% robust confidence intervals). Whenever the target falls into this interval we consider the central bank credible. We consider it not credible otherwise. We apply our approach to study the credibility of the Brazilian Central Bank (BCB) by using a world-class database—the Focus Survey of forecasts. Using monthly data from January 2007 until April 2017, we estimate people’s beliefs of inflation 12 months ahead, coupled with a robust estimate of its asymptotic 95% confidence interval. Results show that the BCB was credible 65% of the time, with the exception of a few months in the beginning of 2007 and during the interval between mid-2013 throughout mid-2016.
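
The credibility criterion described above reduces to an interval check: estimate mean beliefs, form a 95% confidence interval, and ask whether the target lies inside. The survey draws below are simulated placeholders, not the Focus Survey, and the plain i.i.d. standard error stands in for the paper's robust one:

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical 12-month-ahead inflation expectations (in percent).
beliefs = rng.normal(loc=4.6, scale=0.8, size=200)
target = 4.5                                  # announced inflation target

mean = beliefs.mean()
se = beliefs.std(ddof=1) / np.sqrt(len(beliefs))
ci = (mean - 1.96 * se, mean + 1.96 * se)     # asymptotic 95% interval

# Credible in a given month if the target falls inside the interval.
credible = ci[0] <= target <= ci[1]
```

Repeating this month by month yields the kind of credibility time series summarized in the abstract (credible 65% of the time).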

Cropping management practices is an agronomic notion capturing the interdependence between targeted yield and input-use levels. Consequently, one can legitimately assume that different cropping management practices are associated with different production functions. To better understand pesticide dependence, a key point in encouraging more sustainable practices, one has to model production functions specific to each cropping management practice. Because of the inherent interdependence between those practices and their associated yield and input-use levels, we need to consider endogenous regime-switching models. When unobserved, the sequence of cropping management practice choices is treated as a Markovian process. From this modelling framework we can derive the cropping management choices, their dynamics, and their associated yield and input-use levels. When observed, we consider primal production functions to see how yield responds differently to input use under the different cropping management practices. We can thus jointly assess the effect of a public policy on input-use and yield levels. In a nutshell, this PhD aims to provide tools for evaluating the differentiated effects of agri-environmental public policies on production choices and on the associated yield and input-use levels.

Computing a correlation between a pair of time series is a routine task in disciplines from biology to climate science. How do we test whether such a correlation is statistically significant (i.e. unlikely under the null hypothesis that the time series are independent)? This problem is made especially challenging by two factors. First, time series typically exhibit autocorrelation, which renders standard statistical tests invalid. Second, researchers are increasingly turning to nonlinear correlation statistics with no known analytical null distribution, thus rendering parametric tests infeasible. Several statistical tests attempt to address these two challenges, but none is perfect: A few are valid only under restrictive data conditions, others have differing degrees of correctness depending on user-supplied parameters, and some are simply inexact. Here we describe the truncated time-shift procedure, which can be used with any correlation statistic to test for dependence between time series. We show that this test is exactly valid as long as one time series is stationary, a minimally restrictive requirement among nonparametric tests. Using synthetic data, we demonstrate that our test performs correctly even while other tests suffer high false positive rates. We apply the test to data sets from climatology and microbiome science, verifying previously discovered dependence relationships.
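The general shape of a time-shift test can be sketched as follows: compare the observed correlation with correlations computed after shifting one series by every sufficiently large offset, truncating the non-overlapping ends. This is a hedged illustration of the family of procedures, not the authors' exact truncated time-shift test; the minimum shift, the statistic, and the simulated data are assumptions for the example.

```python
import numpy as np

def time_shift_pvalue(x, y, min_shift=20,
                      stat=lambda a, b: abs(np.corrcoef(a, b)[0, 1])):
    """Time-shift significance test (sketch): the null distribution is the
    set of correlations between one series and time-shifted copies of the
    other, with shifts of at least `min_shift`; overhanging ends are
    truncated rather than wrapped around. Works with any statistic `stat`.
    """
    n = len(x)
    observed = stat(x, y)
    null = []
    for s in range(min_shift, n - min_shift):
        null.append(stat(x[s:], y[:n - s]))   # y shifted forward, ends truncated
        null.append(stat(x[:n - s], y[s:]))   # y shifted backward, ends truncated
    null = np.array(null)
    # add-one p-value: fraction of null correlations at least as extreme
    return (1 + np.sum(null >= observed)) / (1 + len(null))

# Two series sharing a common signal should yield a small p-value.
rng = np.random.default_rng(1)
z = rng.standard_normal(300)
x = z + 0.3 * rng.standard_normal(300)
y = z + 0.3 * rng.standard_normal(300)
print(time_shift_pvalue(x, y) < 0.05)
```

Because the null distribution is built from the data themselves, the procedure needs no analytical null distribution, which is what lets it wrap any nonlinear correlation statistic.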

This thesis theoretically and empirically explores firm performance, mostly when firms conduct international operations, by linking internal, external, direct, and indirect factors: five country components (human capital, institutions, market competition, cluster development, and market size), export behaviors based on a productivity cut-off, horizontal, backward, and forward FDI agglomerations, the dynamic effect between FDI and trade, financial constraints in R&D investment, and the 'black box' of firm efficiency. This thesis consists of six core chapters (Chapters 2-7) spanning micro and macro studies. The study samples used in this thesis include cross-sectional data obtained from 234 previous empirical studies on firm performance and internationalization over the period 1969-2017 in Chapter 2, panel data on 208,424 Chinese firms over 2005-2007 in Chapter 3, panel data on 12,240 Chinese firms over 2010-2013 in Chapter 4, panel data on China's 151 target countries over 2007-2017 in Chapter 5, panel data on 414 British manufacturing firms over 2009-2018 in Chapter 6, and panel data on 123 British listed manufacturing firms over 2006-2018 in Chapter 7. Because of the different study purposes of the six chapters, the data come from various sources and cover different timelines. Even where the same database is used, such as China's Annual Industry Survey Database in Chapters 3 and 4, different timelines apply because data for a consistent panel of firms are not available.
Furthermore, the analytic methods used in this thesis mainly involve comparative MAR with VWLS, the Heckman two-stage procedure with Probit and Tobit models, the analytical method of ROC, a spatial econometric model, global PCA with system GMM, heterogeneous one-stage SFA, and two-stage (DEA and Tobit) and three-stage (DEA-SFA) models. The thesis yields multiple insightful findings. In Chapter 2, the five country components (human capital, institutions, market competition, cluster development, and market size) substantially mediate the I-P relationship; they exert a positive mediating effect on the I-P relationship in developed countries while mediating it negatively in developing countries. In Chapter 3, productivity is likely to positively affect a firm's export propensity and irregular exporting; in contrast, it negatively affects export intensity and does not affect regular exporting. Raising the productivity cut-off is not conducive to a firm's export propensity and regular exporting, while it does not affect export intensity and irregular exporting. The productivity cut-off tends to affect a firm's export decision rather than its export scale. Moreover, regular exporters are more sensitive to the productivity cut-off than irregular exporters. In Chapter 4, the congestion effect (an inverted U-shaped relationship) dominates in all three types of FDI agglomeration effects on local firms' productivity. The congestion effect also dominates the spatial backward and forward FDI agglomeration effects on neighboring firms' productivity, while a U-shaped relationship dominates the spatial horizontal FDI agglomeration effect on neighboring firms' productivity. Furthermore, the interaction intensity of backward and forward FDI agglomerations with local and neighboring firms is similar and much stronger than that of horizontal FDI agglomeration.
In Chapter 5, China is more likely to conduct horizontal FDI in developed countries and vertical FDI in developing countries. There is a dynamic change from a substitution effect of FDI on exports to a complementary effect, along with a decreasing trend in horizontal motivation and an increasing trend in vertical motivation. In Chapter 6, a firm's financial constraints in R&D have a vital impact on its productivity and future financing uncertainty, with differences across low-, middle-, and high-tech industries. Furthermore, the loss of firm productivity due to financial constraints in R&D investment is about 20%. In Chapter 7, firms' technical, pure technical, and scale efficiencies reach a relatively high level (between 0.93 and 1), while operational efficiency (between 0.65 and 0.7) and profitability efficiency (between 0.85 and 0.9) leave considerable room for improvement. Moreover, the overall environment has a more significant impact on firm efficiency in middle- and high-tech industries than in the low-tech sector. The various findings of the six core chapters therefore carry important implications for government policies and for firms' domestic and foreign operations.
