We use detailed income, balance sheet, and cash flow statements constructed for households in a long monthly panel in an emerging-market economy, together with some recent contributions in economic theory, to document and better understand the factors underlying success in achieving upward mobility in the distribution of net worth. Wealth inequality is decreasing over time, and many households work their way out of poverty and low wealth over the seven-year period. The accounts establish that, mechanically, this is largely due to savings rather than to incoming gifts and remittances. In turn, the growth of net worth can be decomposed household by household into the savings rate and how productively those savings are used, the return on assets (ROA). The latter plays the larger role. ROA is, in turn, positively correlated with higher education of household members, younger age of the head, a higher debt/asset ratio, and lower initial wealth, so it appears from cross-sections that the financial system is imperfectly channeling resources to productive and poor households. Household fixed effects account for the larger part of ROA, and this success is largely persistent, undercutting the story that successful entrepreneurs are simply those that get lucky. Persistence does vary across households, and in at least one province with much change and increasing opportunities, ROA changes as households move over time to higher-return occupations. But for those households with high and persistent ROA, the savings rate is higher, consistent with some micro-founded macro models with imperfect credit markets. Indeed, high-ROA households save by investing in their own enterprises and adopt consistent financial strategies for smoothing fluctuations. More generally, growth of wealth and savings levels and/or rates are correlated with TFP and with the household fixed effects that make up the larger part of ROA.
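The household-by-household decomposition described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual accounting: the figures and the leverage term (assets over net worth) are our own hypothetical choices, used only to show how net worth growth factors into a savings rate times ROA.

```python
# Hedged sketch: decomposing household net-worth growth (ignoring gifts
# and remittances) into savings rate x ROA x leverage. All numbers are
# hypothetical illustrations, not data from the paper.

def decompose_growth(net_income, consumption, assets, net_worth):
    savings = net_income - consumption          # flow saved this period
    savings_rate = savings / net_income         # s = S / Y
    roa = net_income / assets                   # ROA = Y / A
    leverage = assets / net_worth               # A / NW (> 1 when indebted)
    growth = savings / net_worth                # dNW / NW, absent transfers
    # identity: growth == savings_rate * roa * leverage
    return savings_rate, roa, leverage, growth

s, roa, lev, g = decompose_growth(net_income=100.0, consumption=80.0,
                                  assets=500.0, net_worth=400.0)
```

The identity makes clear why, for a given savings rate, a higher ROA translates directly into faster wealth accumulation.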
This paper contains a Monte Carlo evaluation of estimators used to control for endogeneity of dummy explanatory variables in continuous outcome regression models. When the true model has bivariate normal disturbances, estimators using discrete factor approximations compare favorably to efficient estimators in terms of precision and bias; these approximation estimators dominate all the other estimators examined when the disturbances are non-normal. The experiments also indicate that one should liberally add points of support to the discrete factor distribution. The paper concludes with an application of the discrete factor approximation to the estimation of the impact of marriage on wages.
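The endogeneity problem the Monte Carlo targets can be shown in a small simulation of our own design (the data-generating process and parameter values below are illustrative assumptions, not the paper's experimental design): when the latent error driving the dummy is correlated with the outcome error, OLS on the dummy is badly biased.

```python
# Hedged illustration: OLS bias with an endogenous dummy regressor.
# Design and numbers are ours, chosen only to make the bias visible.
import random

random.seed(0)
n = 20000
true_beta = 2.0
ys_1 = ys_0 = 0.0
n1 = n0 = 0
for _ in range(n):
    v = random.gauss(0, 1)
    u = 0.8 * v + 0.6 * random.gauss(0, 1)   # corr(u, v) = 0.8
    z = random.gauss(0, 1)
    d = 1 if z + v > 0 else 0                # dummy is endogenous via v
    y = 1.0 + true_beta * d + u
    if d:
        ys_1 += y; n1 += 1
    else:
        ys_0 += y; n0 += 1
# OLS slope on a lone dummy equals the group mean difference;
# here it is pushed well above the true value of 2.0.
ols_beta = ys_1 / n1 - ys_0 / n0
```

A consistent estimator would need to model or approximate the joint distribution of (u, v), which is exactly where the discrete factor approximation enters.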
Over the last few decades, spatial-interaction models have been increasingly used in economics. However, the development of a sufficiently general asymptotic theory for nonlinear spatial models has been hampered by a lack of relevant central limit theorems (CLTs), uniform laws of large numbers (ULLNs) and pointwise laws of large numbers (LLNs). These limit theorems form the essential building blocks towards developing the asymptotic theory of M-estimators, including maximum likelihood and generalized method of moments estimators. The paper establishes a CLT, ULLN, and LLN for spatial processes or random fields that should be applicable to a broad range of data processes.
The development of a general inferential theory for nonlinear models with cross-sectionally or spatially dependent data has been hampered by a lack of appropriate limit theorems. To facilitate a general asymptotic inference theory relevant to economic applications, this paper first extends the notion of near-epoch dependent (NED) processes used in the time series literature to random fields. The class of processes that is NED on, say, an α-mixing process, is shown to be closed under infinite transformations, and thus accommodates models with spatial dynamics. This would generally not be the case for the smaller class of α-mixing processes. The paper then derives a central limit theorem and law of large numbers for NED random fields. These limit theorems allow for fairly general forms of heterogeneity including asymptotically unbounded moments, and accommodate arrays of random fields on unevenly spaced lattices. The limit theorems are employed to establish consistency and asymptotic normality of GMM estimators. These results provide a basis for inference in a wide range of models with spatial dependence.
This study develops a methodology of inference for a widely used Cliff-Ord type spatial model containing spatial lags in the dependent variable, exogenous variables, and the disturbance terms, while allowing for unknown heteroskedasticity in the innovations. We first generalize the GMM estimator suggested in Kelejian and Prucha (1998, 1999) for the spatial autoregressive parameter in the disturbance process. We also define IV estimators for the regression parameters of the model and give results concerning the joint asymptotic distribution of those estimators and the GMM estimator. Much of the theory is kept general to cover a wide range of settings.
Differences in economic opportunities give rise to strong migration incentives, across regions within countries, and across countries. In this paper we focus on responses to differences in welfare benefits across States. We apply the model developed in Kennan and Walker (2008), which emphasizes that migration decisions are often reversed, and that many alternative locations must be considered. We model individual decisions to migrate as a job search problem. A worker starts the life-cycle in some home location and must determine the optimal sequence of moves before settling down. The model is sparsely parameterized. We estimate the model using data from the National Longitudinal Survey of Youth (1979). Our main finding is that income differences do help explain the migration decisions of young welfare-eligible women, but large differences in benefit levels provide surprisingly weak migration incentives.
This paper compares the economic questions addressed by instrumental variables estimators with those addressed by structural approaches. We discuss Marschak's Maxim: estimators should be selected on the basis of their ability to answer well-posed economic problems with minimal assumptions. A key identifying assumption that allows structural methods to be more informative than IV can be tested with data and does not have to be imposed.
We analyze the roles of and interrelationships among school inputs and parental inputs in affecting child development through the specification and estimation of a behavioral model of household migration and maternal employment decisions. We integrate information on these decisions with observations on child outcomes over a 13-year period from the NLSY. We find that the impact of our school quality measures diminishes by factors of 2 to 4 after accounting for the fact that families may choose where to live in part based on school characteristics and labor market opportunities. The positive statistical relationship between child outcomes and maternal employment reverses sign and remains statistically significant after controlling for its possible endogeneity. Our estimates imply that when parental responses are taken into account, policy changes in school quality end up having only minor impacts on child test scores.
The recent literature on instrumental variables (IV) features models in which agents sort into treatment status on the basis of gains from treatment as well as on baseline pretreatment levels. Components of the gains known to the agents and acted on by them may not be known by the observing economist. Such models are called correlated random coefficient models. Sorting on unobserved components of gains complicates the interpretation of what IV estimates. This paper examines testable implications of the hypothesis that agents do not sort into treatment based on gains. In it, we develop new tests to gauge the empirical relevance of the correlated random coefficient model to examine whether the additional complications associated with it are required. We examine the power of the proposed tests. We derive a new representation of the variance of the instrumental variable estimator for the correlated random coefficient model. We apply the methods in this paper to the prototypical empirical problem of estimating the return to schooling and find evidence of sorting into schooling based on unobserved components of gains.
This paper extends the ordered response polytomous probit model to a possibly more realistic framework in which the coefficients are allowed to be random. The new method is then used to analyze family migration behavior. It is seen that the new method allows us to make inferences that were not possible in the fixed coefficient model.
This paper applies the waiting-time regression methods of Olsen and Wolpin (1983) to an analysis of fertility. A utility-maximizing model is set up and used to provide some guidance for an empirical analysis. The data are from an experimental guaranteed-job program, the Youth Incentive Entitlement Pilot Project, aimed at young women 16 to 20 years old, from poverty-level families, and not yet high school graduates. The waiting-time regression method of estimation permits each youth to serve as her own control, revealing how eligibility for the jobs program changes the durations of periods between live-birth conceptions. Of the women surveyed, 3,890 had 1 birth, 429 had 2, 112 had 3, 26 had 4, and 7 had 5. Without the person-specific control described here, the most important factors affecting fertility are the number of siblings (negative effect), labor market attachment of the parents, especially the father, and the presence of the natural father. With the person-specific control, the results predicted by economic theory do emerge: even adolescent and young women weigh the economic consequences of fertility, with high wages working against fertility and lower wages in its favor. Post-program effects (taking place after youths lose eligibility for the program) show a rather rapid making up of foregone fertility, making net reductions in total fertility unlikely.
"This paper generalizes previous results on income distribution dominance in the case where the population of income recipients is broken down into groups with distinct utility functions. The example taken here is that of income redistribution across families of different sizes. The paper first investigates the simplest assumptions that can be made about family utility functions. A simple dominance criterion is then derived under the only assumptions that family functions are increasing and concave with income and the marginal utility of income increases with family size."
The Quarterly Workforce Indicators (QWI) are local labor market data produced and released every quarter by the United States Census Bureau. Unlike any other local labor market series produced in the U.S. or the rest of the world, QWI measure employment flows for workers (accessions and separations), jobs (creations and destructions) and earnings for demographic subgroups (age and gender), economic industry (NAICS industry groups), detailed geography (block (experimental), county, Core-Based Statistical Area, and Workforce Investment Area), and ownership (private, all) with fully interacted publication tables. The current QWI data cover 47 states, about 98% of the private workforce in those states, and about 92% of all private employment in the entire economy. State participation is sufficiently extensive to permit us to present the first national estimates constructed from these data. We focus on worker, job, and excess (churning) reallocation rates, rather than on levels of the basic variables. This permits comparison to existing series from the Job Openings and Labor Turnover Survey and the Business Employment Dynamics Series from the Bureau of Labor Statistics (BLS). The national estimates from the QWI are an important enhancement to existing series because they include demographic and industry detail for both worker and job flow data compiled from underlying micro-data that have been integrated at the job and establishment levels by the Longitudinal Employer-Household Dynamics Program at the Census Bureau. The estimates presented herein were compiled exclusively from public-use data series and are available for download.
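The three rates the abstract focuses on are related by a simple identity, sketched below. The counts are made-up illustrations, not QWI figures; the definitions (worker reallocation as accessions plus separations over employment, job reallocation as creations plus destructions over employment, churning as their gap) follow the standard usage described above.

```python
# Hedged sketch of worker, job, and excess ("churning") reallocation
# rates. Input counts are hypothetical, for illustration only.

def flow_rates(accessions, separations, creations, destructions, employment):
    worker_realloc = (accessions + separations) / employment
    job_realloc = (creations + destructions) / employment
    # Churning: hires and exits over and above what job creation and
    # destruction alone would require.
    churning = worker_realloc - job_realloc
    return worker_realloc, job_realloc, churning

wr, jr, ch = flow_rates(accessions=120, separations=100,
                        creations=40, destructions=20, employment=1000)
```

In this toy example a worker reallocation rate of 22% sits on top of only 6% job reallocation, so 16% of employment is churning: workers swapping across continuing jobs.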
The paper derives a general Central Limit Theorem (CLT) and asymptotic distributions for sample moments related to panel data models with large n. The results allow for the data to be cross sectionally dependent, while at the same time allowing the regressors to be only sequentially rather than strictly exogenous. The setup is sufficiently general to accommodate situations where cross sectional dependence stems from spatial interactions and/or from the presence of common factors. The latter leads to the need for random norming. The limit theorem for sample moments is derived by showing that the moment conditions can be recast such that a martingale difference array central limit theorem can be applied. We prove such a central limit theorem by first extending results for stable convergence in Hall and Heyde (1980) to non-nested martingale arrays relevant for our applications. We illustrate our result by establishing a generalized estimation theory for GMM estimators of a fixed effect panel model without imposing i.i.d. or strict exogeneity conditions. We also discuss a class of Maximum Likelihood (ML) estimators that can be analyzed using our CLT.
"This paper develops an approach to simultaneity among hazard equations which is similar in spirit to simultaneous Tobit models. It introduces a class of continuous time models which incorporates two forms of simultaneity across related processes--when the hazard rate of one process depends (1) on the hazard rate of another process or (2) on the actual current state of or prior outcomes of a related multi-episode process. This paper also develops an approach to modeling the notion of 'multiple clocks' in which one process may depend on the duration of a related process, in addition to its own. Maximum likelihood estimation is proposed based on specific parametric assumptions. The model is developed in the context of and empirically applied to the joint determination of marital duration and timing of marital conceptions."
We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for the transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable to continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise.
This paper considers the linear model with endogenous regressors and multiple changes in the parameters at unknown times. It is shown that minimization of a Generalized Method of Moments criterion yields inconsistent estimators of the break fractions, but minimization of the Two Stage Least Squares (2SLS) criterion yields consistent estimators of these parameters. We develop a methodology for estimation and inference of the parameters of the model based on 2SLS. The analysis covers the cases where the reduced form is either stable or unstable. The methodology is illustrated via an application to the New Keynesian Phillips Curve for the US.
When multiple durations are generated by a single unit, they may be related in a way that is not fully captured by the regressors. The omitted unit-specific variables might vary over the durations. They might also be correlated with the variables in the regression component. The authors propose an estimator that responds to these concerns and develop a specification test for detecting unobserved unit-specific effects. Data from Malaysia reveal that concentration of child mortality in some families is imperfectly explained by observed explanatory variables, and that failure to control for unobserved heterogeneity seriously biases the parameter estimates.
We study estimation of the date of change in persistence, from I(0) to I(1) or vice versa. Contrary to statements in the original papers, our analytical results establish that the ratio-based break point estimators of Kim [Kim, J.Y., 2000. Detection of change in persistence of a linear time series. Journal of Econometrics 95, 97-116], Kim et al. [Kim, J.Y., Belaire-Franch, J., Badillo Amador, R., 2002. Corrigendum to "Detection of change in persistence of a linear time series". Journal of Econometrics 109, 389-392] and Busetti and Taylor [Busetti, F., Taylor, A.M.R., 2004. Tests of stationarity against a change in persistence. Journal of Econometrics 123, 33-66] are inconsistent when a mean (or other deterministic component) is estimated for the process. In such cases, the estimators converge to random variables with upper bound given by the true break date when persistence changes from I(0) to I(1). A Monte Carlo study confirms the large sample downward bias and also finds substantial biases in moderate sized samples, partly due to properties at the end points of the search interval.
This paper studies inference in a continuous time game where an agent's decision to quit an activity depends on the participation of other players. In equilibrium, similar actions can be explained not only by direct influences but also by correlated factors. Our model can be seen as a simultaneous duration model with multiple decision makers and interdependent durations. We study the problem of determining the existence and uniqueness of equilibrium stopping strategies in this setting. This paper provides results and conditions for the detection of these endogenous effects. First, we show that the presence of such effects is a necessary and sufficient condition for simultaneous exits. This allows us to set up a nonparametric test for the presence of such influences which is robust to multiple equilibria. Second, we provide conditions under which parameters in the game are identified. Finally, we apply the model to data on desertion in the Union Army during the American Civil War and find evidence of endogenous influences.
In this paper, we develop and estimate a model of retirement and savings incorporating limited borrowing, stochastic wage offers, health status and survival, social security benefits, Medicare and employer-provided health insurance coverage, and intentional bequests. The model is estimated on a sample of relatively poor households from the first three waves of the Health and Retirement Study (HRS), for whom we would expect social security income to be of particular importance. The estimated model is used to simulate the responses to changes in social security rules, including changes in benefit levels, in the payroll tax, in the social security earnings tax and in early and normal retirement ages. Welfare and budget consequences are estimated.
"A state-space model is developed which provides estimates of decrements in a dynamic environment. The model integrates the actual unfolding experience and a priori or Bayesian views of the rates. The estimates of present rates and predicted future rates are continually updated and associated standard errors have simple expressions. The model is described and applied in the context of mortality estimation but it should prove useful in other actuarial applications. The approach is particularly suitable for dynamic environments where data are scarce and updated parameter estimates are required on a regular basis. To illustrate the method it is used to monitor the unfolding mortality experience of the retired lives under an actual pension plan."
In this paper a class of nonparametric transfer function models is proposed to model nonlinear relationships between 'input' and 'output' time series. The transfer function is smooth with unknown functional forms, and the noise is assumed to be a stationary autoregressive-moving average (ARMA) process. The nonparametric transfer function is estimated jointly with the ARMA parameters. By modeling the correlation in the noise, the transfer function can be estimated more efficiently. The parsimonious ARMA structure improves the estimation efficiency in finite samples. The asymptotic properties of the estimators are investigated. The finite-sample properties are illustrated through simulations and one empirical example.
In this paper we discuss sensitivity of forecasts with respect to the information set considered in prediction; a sensitivity measure called impact factor, IF, is defined. This notion is specialized to the case of VAR processes integrated of order 0, 1 and 2. For stationary VARs this measure corresponds to the sum of the impulse response coefficients. For integrated VAR systems, the IF has a direct interpretation in terms of long-run forecasts. Various applications of this concept are reviewed; they include questions of policy effectiveness and of forecast uncertainty due to data revisions. A unified approach to inference on the IF is given, showing under what circumstances standard asymptotic inference can be conducted also in systems integrated of order 1 and 2. It is shown how the results reported here can be used to calculate similar sensitivity measures for models with a simultaneity structure.
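For the stationary case, the statement that the impact factor equals the sum of impulse response coefficients can be checked numerically in the simplest setting. The sketch below uses a univariate AR(1) as a stand-in for a stationary VAR; the parameter value is an arbitrary illustration, and the closed form 1/(1-a) is the geometric-series limit of the impulse responses a^i.

```python
# Hedged numeric check: for a stationary AR(1), y_t = a*y_{t-1} + e_t,
# the impulse response to a unit shock at horizon i is a**i, so the
# sum of impulse responses (the IF in the stationary case) is 1/(1-a).
a = 0.5
irf_sum = sum(a ** i for i in range(200))  # truncated sum of a**i
closed_form = 1.0 / (1.0 - a)              # geometric-series limit
```

With a = 0.5 the truncated sum is already indistinguishable from 2, the long-run cumulative effect of a unit innovation on the forecast.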
This paper reconsiders a block bootstrap procedure for Quasi Maximum Likelihood estimation of GARCH models, based on the resampling of the likelihood function, as proposed by Gonçalves and White [2004. Maximum likelihood and the bootstrap for nonlinear dynamic models. Journal of Econometrics 119, 199–219]. First, we provide necessary conditions and sufficient conditions, in terms of moments of the innovation process, for the existence of the Edgeworth expansion of the GARCH(1,1) estimator, up to the k-th term. Second, we provide sufficient conditions for higher order refinements for equally tailed and symmetric test statistics. In particular, the bootstrap estimator based on resampling the likelihood has the same higher order improvements in terms of error in the rejection probabilities as those in Andrews [2002. Higher-order improvements of a computationally attractive k-step bootstrap for extremum estimators. Econometrica 70, 119–162].
This paper derives the limiting distribution of the Lagrange Multiplier (LM) test for threshold nonlinearity in a TAR model with GARCH errors when one of the regimes contains a unit root. It is shown that the asymptotic distribution is nonstandard and depends on nuisance parameters that capture the degree of conditional heteroskedasticity and non-Gaussian nature of the process. We propose a bootstrap procedure for approximating the exact finite-sample distribution of the test for linearity and establish its asymptotic validity.
In this note we reconsider the continuous time limit of the GARCH(1,1) process. Let Y_k and σ_k² denote, respectively, the cumulative returns and the volatility processes. We consider the continuous time approximation of the couple (Y_k, σ_k²). We show that, by choosing different parameterizations, as a function of the discrete interval h, we can obtain either a degenerate or a non-degenerate diffusion limit. We then show that GARCH(1,1) processes can be obtained as Euler approximations of degenerate diffusions, while any Euler approximation of a non-degenerate diffusion is a stochastic volatility process.
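The discrete-time recursion underlying the limit results above can be simulated directly. This is a generic GARCH(1,1) sketch with illustrative parameter values of our own choosing, not the note's parameterization in h; it only verifies that the simulated volatility path averages out to the unconditional variance ω/(1-α-β).

```python
# Hedged sketch: simulate a stationary GARCH(1,1) volatility recursion
# and check the long-run average of sigma^2 against omega/(1-alpha-beta).
# Parameter values are illustrative assumptions.
import random

random.seed(1)
omega, alpha, beta = 0.05, 0.05, 0.90     # alpha + beta < 1: stationary
sigma2 = omega / (1 - alpha - beta)       # start at the unconditional variance
total, T = 0.0, 200000
for _ in range(T):
    eps = random.gauss(0, 1) * sigma2 ** 0.5      # return innovation
    sigma2 = omega + alpha * eps * eps + beta * sigma2
    total += sigma2
avg_var = total / T    # should be near omega/(1-alpha-beta) = 1.0
```

Shrinking the time step and rescaling (ω, α, β) with h is exactly the parameterization choice that determines whether the limit is a degenerate or non-degenerate diffusion.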
It is shown that an abrupt change in the innovation variance of an integrated process can generate spurious rejections of the unit root null hypothesis in routine applications of Dickey-Fuller tests. We develop and investigate modified test statistics, based on unit root tests of P. Perron [see Econometrica 57, 1361–1401 (1989; Zbl 0683.62066)] for a time series with a changing level, or changing intercept and slope, which are applicable when there is a change in innovation variance of an unknown magnitude at an unknown location.
This paper studies the identifying power of conditional quantile restrictions in short panels with fixed effects. In contrast to classical fixed effects models with conditional mean restrictions, conditional quantile restrictions are not preserved by taking differences in the regression equation over time. This paper shows however that a conditional quantile restriction, in conjunction with a weak conditional independence restriction, provides bounds on quantiles of differences in time-varying unobservables across periods. These bounds carry observable implications for model parameters which generally result in set identification. The analysis of these bounds includes conditions for point identification of the parameter vector, as well as weaker conditions that result in identification of individual parameter components.
This paper shows that increases in the minimum wage rate can have ambiguous effects on the working hours and welfare of employed workers in competitive labor markets. The reason is that employers may not comply with the minimum wage legislation and instead pay a lower subminimum wage rate. If workers are risk neutral, we prove that working hours and welfare are invariant to the minimum wage rate. If workers are risk averse and imprudent (which is the empirically likely case), then working hours decrease with the minimum wage rate, while their welfare may increase.
I study inverse probability weighted M-estimation under a general missing data scheme. Examples include M-estimation with missing data due to a censored survival time, propensity score estimation of the average treatment effect in the linear exponential family, and variable probability sampling with observed retention frequencies. I extend an important result known to hold in special cases: estimating the selection probabilities is generally more efficient than if the known selection probabilities could be used in estimation. For the treatment effect case, the setup allows a general characterization of a “double robustness” result due to Scharfstein et al. [1999. Rejoinder. Journal of the American Statistical Association 94, 1135–1146].
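The simplest member of this class is IPW estimation of a mean when the outcome is missing at random. The sketch below is our own toy design, not the paper's setup: it uses the true selection probabilities, whereas the efficiency result above says that plugging in estimated probabilities would typically do even better.

```python
# Hedged sketch: inverse-probability-weighted estimate of E[y] when y is
# observed only with probability p(x). Design is an illustrative assumption.
import math
import random

random.seed(2)
n = 50000
total_w = 0.0
for _ in range(n):
    x = random.gauss(0, 1)
    y = 1.0 + x + random.gauss(0, 1)        # E[y] = 1
    p = 1.0 / (1.0 + math.exp(-x))          # true selection probability
    if random.random() < p:                 # y observed with probability p(x)
        total_w += y / p                    # reweight observed outcomes by 1/p
ipw_mean = total_w / n                      # IPW estimate of E[y]
```

Because selection depends on x and y depends on x, the naive mean of the observed y's would be biased upward here; the 1/p weights undo that selection.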
Aggregated time series variables can be forecasted in different ways. For example, they may be forecasted on the basis of the aggregate series, or forecasts of disaggregated variables may be obtained first and then these forecasts may be aggregated. A number of forecasts are presented and compared. Classical theoretical results on the relative efficiencies of different forecasts are reviewed and some complications are discussed which invalidate the theoretical results. Contemporaneous as well as temporal aggregation are considered.
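The two contemporaneous-aggregation routes above can be made concrete with a toy example of our own (two AR(1) components, no-intercept OLS fits; all choices are illustrative assumptions, not the survey's framework): forecast each component and sum, or fit and forecast the aggregate directly.

```python
# Hedged sketch: one-step forecasts of an aggregate y1 + y2, either by
# aggregating component forecasts or by modeling the aggregate itself.
# Data-generating process and fitting choices are ours.
import random

random.seed(3)
T = 2000
y1, y2 = [0.0], [0.0]
for _ in range(T):
    y1.append(0.8 * y1[-1] + random.gauss(0, 1))
    y2.append(-0.3 * y2[-1] + random.gauss(0, 1))
agg = [u + v for u, v in zip(y1, y2)]

def ar1_ols(series):
    """No-intercept OLS slope of a series on its own first lag."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

a1, a2 = ar1_ols(y1), ar1_ols(y2)
f_disagg = a1 * y1[-1] + a2 * y2[-1]   # route 1: sum of component forecasts
f_agg = ar1_ols(agg) * agg[-1]          # route 2: forecast the aggregate directly
```

The two routes generally disagree: the aggregate of two AR(1) processes is an ARMA process, so an AR(1) fit to the aggregate is misspecified, which is one source of the efficiency differences the classical results address.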
In this paper the usual product rule of probability theory is generalized by relaxing the assumption that elements of sets are equally likely to be drawn. The need for such a generalization has been noted by
We consider processes with second order long range dependence resulting from heavy tailed durations. We refer to this phenomenon as duration-driven long range dependence (DDLRD), as opposed to the more widely studied linear long range dependence based on fractional differencing of an i.i.d. process. We consider in detail two specific processes having DDLRD, originally presented in Taqqu and Levy [1986. Using renewal processes to generate long-range dependence and high variability. Dependence in Probability and Statistics. Birkhauser, Boston, pp. 73–89], and Parke [1999. What is fractional integration? Review of Economics and Statistics 81, 632–638]. For these processes, we obtain the limiting distribution of suitably standardized discrete Fourier transforms (DFTs) and sample autocovariances. At low frequencies, the standardized DFTs converge to a stable law, as do the standardized sample autocovariances at fixed lags. Finite collections of standardized sample autocovariances at a fixed set of lags converge to a degenerate distribution. The standardized DFTs at high frequencies converge to a Gaussian law. Our asymptotic results are strikingly similar for the two DDLRD processes studied. We calibrate our asymptotic results with a simulation study which also investigates the properties of the semiparametric log periodogram regression estimator of the memory parameter.
This paper studies a quantile regression dynamic panel model with fixed effects. Panel data fixed effects estimators are typically biased in the presence of lagged dependent variables as regressors. To reduce the dynamic bias, we suggest the use of the instrumental variables quantile regression method of Chernozhukov and Hansen (2006) along with lagged regressors as instruments. In addition, we describe how to employ the estimated models for prediction. Monte Carlo simulations show evidence that the instrumental variables approach sharply reduces the dynamic bias, and the empirical levels for prediction intervals are very close to nominal levels. Finally, we illustrate the procedures with an application to forecasting output growth rates for 18 OECD countries.
This paper develops new methods for determining the cointegration rank in a nonstationary fractionally integrated system, extending univariate optimal methods for testing the degree of integration. We propose a simple Wald test based on the singular value decomposition of the unrestricted estimate of the long run multiplier matrix. When the “strength” of the cointegrating relationship is less than 1/2, the test statistic has a standard asymptotic distribution, like Lagrange Multiplier tests exploiting local properties. We consider the behavior of our test under estimation of short run parameters and local alternatives. We compare our procedure with other cointegration tests based on different principles and find that the new method has better properties in a range of situations by using information on the alternative obtained through a preliminary estimate of the cointegration strength.