
Quantitative Economics

Published by Wiley and the Econometric Society

Online ISSN: 1759-7331 · Print ISSN: 1759-7323

Journal website · Author guidelines

Top-read articles

15 reads in the past 30 days

Figures: RIPW estimator with derandomized cross-fitting; effect weights for the unweighted TWFE estimator and for the RIPW estimator; boxplots of bias across 10,000 replicates for the unweighted, IPW, and RIPW estimators under violation of parallel trends (σ_m = 1, σ_τ = 0), heterogeneous treatment effects with limited heterogeneity (σ_m = 0, σ_τ = 1, a_i = 1), and heterogeneous treatment effects with full heterogeneity (σ_m = 0, σ_τ = 1, a_i ∼ Unif([0,1])); treatment paths of each state, with darker color marking treated days.

Design‐robust two‐way‐fixed‐effects regression for panel data

November 2024 · 70 Reads · 10 Citations

Guido W. Imbens · Lihua Lei · Xiaoman Luo

Recent articles


Figures: world caloric production and consumption and their trend, 1961–2017 (y-axis: the number of people who could hypothetically be fed a 2000-kilocalorie-per-day diet from consumption of only the four commodities); real caloric prices at delivery (y-axis: the annual cost of 2000 kilocalories per day).
The role of storage in commodity markets: Indirect inference based on grain data
• Article · Full-text available

June 2025 · 8 Reads

We develop an indirect inference approach that uses a linear supply and demand model as an auxiliary model to provide the first full empirical test of the rational expectations commodity storage model. We build a rich storage model that incorporates a supply response and four structural shocks, and we show that exploiting information on both prices and quantities is critical for relaxing previous restrictive identifying assumptions and assessing the empirical consistency of the model's features. Finally, we carry out a structural estimation on the aggregate index of the world's most important staple food products. Our estimates show that supply shocks are the main drivers of food market dynamics and that our storage model is consistent with most of the moments in the data, including the high price persistence that has long been a puzzle.
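
The approach described here follows the generic indirect-inference recipe: simulate the structural model, fit the auxiliary model to both observed and simulated data, and choose the structural parameter that aligns the two sets of auxiliary estimates. Below is a minimal Python sketch of that loop. The toy storage process, the auxiliary statistics (price mean and first-order autocorrelation rather than the paper's linear supply-demand system), and all numerical values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def simulate(theta, n=2000, seed=1):
    """Toy storage-like price process: theta governs stock persistence.
    Purely illustrative; not the paper's structural model."""
    rng = np.random.default_rng(seed)
    p = np.empty(n)
    stock = 0.0
    for t in range(n):
        stock = max(theta * stock + rng.normal(), 0.0)  # stocks cannot go negative
        p[t] = 1.0 - 0.5 * stock + 0.1 * rng.normal()   # inverse demand for price
    return p

def auxiliary(p):
    """Auxiliary statistics: mean and first-order autocorrelation of price."""
    return np.array([p.mean(), np.corrcoef(p[:-1], p[1:])[0, 1]])

beta_data = auxiliary(simulate(0.7, seed=42))           # stands in for observed data

def ii_loss(theta):
    # Common random numbers (fixed seed) keep the objective smooth in theta.
    d = auxiliary(simulate(theta, seed=1)) - beta_data
    return d @ d

theta_hat = minimize_scalar(ii_loss, bounds=(0.0, 0.99), method="bounded").x
print(f"indirect-inference estimate of theta: {theta_hat:.3f}")
```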


Economic consequences of vertical mismatch

June 2025 · 3 Reads · 1 Citation

We study two first-order economic consequences of vertical mismatch, using a simple (neoclassical) model of under- and overemployment. Individuals of high type can perform both skilled and unskilled jobs, but only a fraction of low-type workers can perform skilled jobs. People differ in their costs of taking these jobs. First, we calibrate the model to match U.S. CPS time series since the 1980s. To control for unobserved heterogeneity, we compute wages based on workers who have switched between skilled and unskilled jobs. We show that changes in educational mismatch have contributed one-sixth as much as skill-biased technological progress to the rise in the college premium. Second, we calibrate the model to match moments of the 50 United States, to measure the output costs of the frictions generating mismatch. The cost of frictions is 0.26% of output on average but varies between 0.06% and 0.77% across states. The key variable that explains the output cost of vertical mismatch is not the percentage of mismatched workers but their wage relative to well-matched workers.


Figures: price path samples of the efficient price X and the observed price Y simulated by the Persistent Noise (PN) model; histograms of N^u(ξ), N^I_w(ξ), and N^II_w(ξ) based on simulated 1-minute data and 1000 replications (detection threshold ξ = 3.4, jump truncation parameter ζ = 4σ̂^med_{t-1}); price, detection points, and PN-regions together with rolling-window truncated realized volatility (TV) and spot volatility (SV), standardized by the day's average TV, for August 7, 2007 and August 30, 2019, with detection based on N^II_w(ξ), window lengths (w_n, r_n) = (30, 4) minutes, ξ = 4, and ζ = 4σ̂^med_{t-1}; histograms of PN-region durations under the same settings.
Real‐time detection of local no‐arbitrage violations

June 2025 · 4 Reads

This paper focuses on detecting, in real time, local episodes in which financial asset prices violate the standard Itô semimartingale assumption and might thereby induce arbitrage opportunities. Our proposed detectors, defined as stopping rules, are applied sequentially to continually incoming high-frequency data. We show that they are asymptotically exponentially distributed in the absence of Itô semimartingale violations. On the other hand, when a violation occurs, we can achieve immediate detection under infill asymptotics. A Monte Carlo study demonstrates that the asymptotic results provide a good approximation to the finite-sample behavior of the sequential detectors. An empirical application to S&P 500 index futures data corroborates the effectiveness of our detectors in swiftly identifying the emergence of an extreme return persistence episode in real time.
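
To convey how a stopping rule of this kind operates on streaming returns, here is a schematic Python sketch: it flags the first time the sum of the last w returns, standardized by a volatility estimate from the preceding window, exceeds a threshold ξ. The statistic is a simplified stand-in for the paper's detectors (whose exact form involves truncated volatility estimators), and the window length, threshold, and injected drift episode are all assumptions.

```python
import numpy as np

def detect(returns, w=30, xi=4.0):
    """Schematic sequential detector: stop at the first t where the sum of the
    last w returns, standardized by a scale estimated from the window before,
    exceeds xi. A stand-in for the paper's detectors, not their exact form."""
    for t in range(2 * w, len(returns)):
        scale = returns[t - 2 * w:t - w].std() + 1e-12   # pre-window volatility
        stat = abs(returns[t - w:t].sum()) / (np.sqrt(w) * scale)
        if stat > xi:
            return t                                     # detection time
    return None

rng = np.random.default_rng(0)
r = rng.normal(0.0, 1e-3, 2000)
r[1200:1280] += 1e-3            # injected episode of persistent drift
print("flagged at observation:", detect(r))
```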


Figures: absolute bias and standard deviation of OLS and IV with T = 200 and with heteroskedasticity; size of nominal 5% two-sided tests using OLS and IV with T = 200; difference between IV and OLS along the estimated feedback direction, T = 200.
Linear regression with weak exogeneity

June 2025 · 7 Reads

This paper studies linear time-series regressions with many regressors. Weak exogeneity is the most commonly used identifying assumption in time series: it requires the structural error to have zero conditional expectation given present and past regressor values, while allowing errors to correlate with future regressor realizations. We show that weak exogeneity in time-series regressions with many controls may produce substantial biases and render the least squares (OLS) estimator inconsistent. The bias arises in settings with many regressors because the normalized OLS design matrix remains asymptotically random and correlates with the regression error when only weak (but not strict) exogeneity holds. The magnitude of the bias increases with the number of regressors and their average autocorrelation. We propose an innovative approach to bias correction that yields a new estimator with improved properties relative to OLS. We establish consistency and conditional asymptotic Gaussianity of this new estimator and provide a method for inference.
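
The mechanism lends itself to a small simulation: make the error mean-independent of current and past regressors but let it feed into future regressor values, then increase the number of regressors. The sketch below, whose design is illustrative rather than the paper's, is set up so that the OLS bias on the first coefficient should grow with the number of regressors K, in line with the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_ols_bias(K, T=200, rho=0.8, gamma=0.5, reps=200):
    """Average OLS bias for the first coefficient when the error e_t feeds
    into next period's regressors: weak exogeneity holds (e_t is independent
    of current and past X) but strict exogeneity fails. Illustrative design."""
    biases = []
    for _ in range(reps):
        e = rng.standard_normal(T)
        X = np.zeros((T, K))
        for t in range(1, T):
            X[t] = rho * X[t - 1] + gamma * e[t - 1] + rng.standard_normal(K)
        beta = np.zeros(K)
        beta[0] = 1.0
        y = X @ beta + e
        b = np.linalg.lstsq(X, y, rcond=None)[0]
        biases.append(b[0] - 1.0)
    return float(np.mean(biases))

for K in (2, 20, 60):
    print(f"K = {K:2d}: mean bias on beta_1 = {mean_ols_bias(K):+.3f}")
```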


Figures: lifecycle smoking behavior in PSID data (smoking expenditure share as a ratio to labor income, ps/y, and cigarettes smoked per day, as population averages net of cohort and year effects); lifecycle profiles of income, wealth, health investment and smoking expenditure shares, health capital, and death hazard, from age 19 until death, contrasting nonsmokers and smokers (quitters excluded from smokers after quitting, as are those no longer alive); lifecycle profiles of smoking and health expenditure shares of total income, health capital, and labor income when quitting at 37, for nonquitters and quitters in two genotype-demographic groups.
Power of personalized smoking cessation: A quantitative lifecycle framework for policy evaluation

Evidence suggests that smokers' responsiveness to cessation medication depends on genotypes. Whether personalized treatment based on genotypes is cost effective compared to standard treatments, however, has been unexplored. We thus construct a lifecycle model with endogenous health evolution and life expectancy and with heterogeneities in genotypes, demographics, and adolescent smoking. We examine the cost effectiveness of three intervention policies: (i) a standard policy where all smokers receive counseling and medication, (ii) a standard policy where some smokers receive counseling and others receive counseling and medication, and (iii) a personalized policy based on genotypes. The personalized policy proves the most cost effective: every dollar of program cost generates about $29 and $40 in value measured over the lifecycle for smokers treated at age 37 and 52, respectively, about 16–22% higher than the two standard policies.


The effects of monetary policy through housing and mortgage choices on aggregate demand

Housing and mortgage choices are among the largest financial decisions households make and they substantially impact households' liquidity. This paper explores how monetary policy affects aggregate demand by influencing these portfolio choices. To quantify this channel, I build a heterogeneous‐agent life‐cycle model with long‐term mortgages and endogenous house prices. I find that, although only a small fraction of households adjust their housing and mortgage holdings in response to an expansionary monetary policy shock, these households account for over 50% of the increase in aggregate demand. Mortgage refinancing explains approximately four‐fifths of the contribution, whereas adjusted housing choices account for one‐fifth—uncovering a new transmission channel. I also show that the different pass‐through of the policy rate to short and long mortgage rates drives the difference in the house‐price and aggregate demand response between economies with adjustable‐rate as compared to fixed‐rate mortgages.


Insurance, redistribution, and the inequality of lifetime income

June 2025 · 7 Reads

Individuals vary considerably in how much they earn during their lifetimes. This study examines the role of the tax‐and‐transfer system in mitigating such inequalities, which could otherwise lead to disparities in living standards. Utilizing a life‐cycle model, we determine that the tax‐and‐transfer system offsets 45% of lifetime earnings inequality attributed to differences in productive abilities and education. Additionally, the system insures against 47% of lifetime earnings risk. Implementing a lifetime tax reform that links annual taxes to prior employment could enhance the system's insurance function, though it may involve tradeoffs in terms of employment and overall welfare.


Understanding regressions with observations collected at high frequency over long span

June 2025 · 6 Reads

In this paper, we analyze regressions with observations collected at small time intervals over a long period of time. For the formal asymptotic analysis, we assume that samples are obtained from continuous time stochastic processes, and let the sampling interval δ shrink down to zero and the sample span T increase up to infinity. In this setup, we show that the standard Wald statistic diverges to infinity and the regression becomes spurious as long as δ → 0 sufficiently fast relative to T → ∞. Such a phenomenon is indeed what is frequently observed in practice for the type of regressions considered in the paper. In contrast, our asymptotic theory predicts that the spuriousness disappears if we use the robust version of the Wald test with an appropriate long‐run variance estimate. This is supported, strongly and unambiguously, by our empirical illustration using the regression of long‐term on short‐term interest rates.
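
The contrast the abstract draws can be illustrated with any long-run variance correction; the sketch below uses statsmodels with a Newey-West (HAC) covariance whose bandwidth grows with the sample size. The data-generating process for the two persistent series is an assumption made for illustration, not the paper's interest-rate data, and the HAC choice is a textbook stand-in for the paper's specific long-run variance estimator.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Persistent regressor and error, mimicking finely sampled series (small
# sampling interval, long span): a near-integrated "short rate" proxy and a
# strongly autocorrelated regression error.
n = 5000
x = np.cumsum(rng.normal(size=n)) * 0.01
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.98 * e[t - 1] + 0.1 * rng.normal()
y = 0.5 * x + e

X = sm.add_constant(x)
naive = sm.OLS(y, X).fit()                      # iid-based Wald: oversized
robust = sm.OLS(y, X).fit(cov_type="HAC",
                          cov_kwds={"maxlags": int(n ** (1 / 3))})
print("naive t-stat on slope:", round(naive.tvalues[1], 2))
print("HAC   t-stat on slope:", round(robust.tvalues[1], 2))
```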


Figures: distance to stability in unstable markets; distance to stable matching over time; progression of final, stable, and median stable matches over time.
An experimental study of decentralized matching

June 2025 · 2 Reads

We present an experimental study of decentralized two‐sided matching markets with no transfers. Experimental participants are informed of everyone's preferences and can make arbitrary nonbinding match offers that get finalized when a period of market inactivity has elapsed. Several insights emerge. First, stable outcomes are prevalent. Second, while centralized clearinghouses commonly aim at implementing extremal stable matchings, our decentralized markets most frequently culminate in the median stable matching. Third, preferences' cardinal representations impact the stable partners with whom participants match. Last, the dynamics underlying our results exhibit strategic sophistication, with agents successfully avoiding cycles of blocking pairs.
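
For context, the extremal stable matchings that centralized clearinghouses typically implement are the outputs of Gale-Shapley deferred acceptance run from each side of the market. The sketch below is the textbook algorithm on a toy three-by-three market with multiple stable matchings; it is not the experimental software or the paper's markets, and the preference lists are invented for illustration.

```python
def deferred_acceptance(prop_prefs, recv_prefs):
    """Textbook Gale-Shapley: returns the proposer-optimal stable matching.
    Preferences are dicts mapping each agent to a list of partners, best first."""
    rank = {r: {p: i for i, p in enumerate(lst)} for r, lst in recv_prefs.items()}
    free = list(prop_prefs)
    nxt = {p: 0 for p in prop_prefs}      # index of next partner to propose to
    match = {}                            # receiver -> current proposer
    while free:
        p = free.pop()
        r = prop_prefs[p][nxt[p]]
        nxt[p] += 1
        if r not in match:
            match[r] = p
        elif rank[r][p] < rank[r][match[r]]:   # r prefers the new proposer
            free.append(match[r])
            match[r] = p
        else:
            free.append(p)                     # rejected; p proposes again later
    return {p: r for r, p in match.items()}

# Toy market with multiple stable matchings (illustrative, not from the paper)
men = {"m1": ["w1", "w2", "w3"], "m2": ["w2", "w3", "w1"], "m3": ["w3", "w1", "w2"]}
women = {"w1": ["m2", "m3", "m1"], "w2": ["m3", "m1", "m2"], "w3": ["m1", "m2", "m3"]}

print("man-optimal:  ", deferred_acceptance(men, women))
print("woman-optimal:", deferred_acceptance(women, men))
```

Running the algorithm with each side proposing brackets the lattice of stable matchings within which the median stable matching, the modal outcome in the experiments above, lies.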


Demographic transition, industrial policies, and Chinese economic growth

We build a unified framework to quantitatively examine how demographic transition and industrial policies have contributed to China's economic growth in the past five decades. On the demographic side, we consider evolutions in government population-control policies, life expectancy, and pension income replacement. Industrial policies include changes in the speed of growth of entrepreneurship, industry-specific interest subsidies, and financial intermediation costs. Our analyses suggest that the demographic transition alone hardly affects the aggregate savings rate, mainly due to general equilibrium feedback effects from prices. However, demographics account for a considerable fraction of the increase in per capita output growth since 1970. By comparison, industrial policy changes contribute significantly to the rise in both the aggregate savings rate and per capita output growth during the period. Notably, the interactions between the demographic transition and industrial policy changes cause aggregate savings to rise, but have little effect on per capita output growth. A novel feature of the model is endogenous human capital accumulation, a driver of per capita output growth. Our results are robust to the endogenization of fertility decisions.


Econometrics of insurance with multidimensional types

February 2025 · 33 Reads · 1 Citation

In this paper, we address the identification and estimation of insurance models where insurees have private information about their risk and risk aversion. The model includes random damages and allows for several claims, while insurees choose from a finite number of coverages. We show that the joint distribution of risk and risk aversion is nonparametrically identified despite bunching due to multidimensional types and a finite number of coverages. Our identification strategy exploits the observed number of claims as well as an exclusion restriction, and a full support assumption. Furthermore, our results apply to any form of competition. We propose a novel estimation procedure combining nonparametric estimators and GMM estimation that we illustrate in a Monte Carlo study.


Programming FPGAs for economics: An introduction to electrical engineering economics

February 2025 · 47 Reads

We show how to use field-programmable gate arrays (FPGAs) and their associated high-level synthesis (HLS) compilers to solve heterogeneous agent models with incomplete markets and aggregate uncertainty (Krusell and Smith (1998)). We document that the acceleration delivered by a single FPGA is comparable to that provided by 69 CPU cores in a conventional cluster. The time to solve 1200 versions of the model drops from 8 hours to 7 minutes, illustrating great potential for structural estimation. We describe how to achieve multiple acceleration opportunities (pipelining, data-level parallelism, and data precision) with minimal modification of the C/C++ code written for a traditional sequential processor, which we then deploy on FPGAs readily available on Amazon Web Services. We quantify the speedup and cost of these accelerations. Our paper is the first step toward a new field, electrical engineering economics, focused on designing computational accelerators for economics to tackle challenging quantitative models. Replication code is available on GitHub.


Prospering through Prospera: A dynamic model of CCT impacts on educational attainment and achievement in Mexico

February 2025 · 47 Reads · 5 Citations

This paper develops and estimates a dynamic model, which integrates value‐added and school‐choice models, to evaluate grade‐by‐grade and cumulative impacts of the Mexican Prospera conditional cash transfer (CCT) program on educational achievement. The empirical application advances the previous literature by estimating policy impacts on learning, accounting for dynamic selective school attendance, and incorporating both observed and unobserved heterogeneity. A dynamic framework is critical for estimating cumulative learning effects because lagged achievements are important determinants of current achievements. The model is estimated using rich nationwide Mexican administrative data on schooling progression and mathematics and Spanish test scores in grades 4–9 along with student and family survey data. The estimates show significant CCT impacts on learning and educational attainment, particularly for students from poorer households. Results show that telesecondary schools (distance learning) play a crucial role in facilitating school attendance and in fostering skill accumulation.
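
To see why lagged achievement matters for cumulative impacts, consider a generic linear value-added equation; the form below is a schematic approximation for exposition, not the paper's estimated system.

```latex
% Schematic value-added equation (generic form, not the paper's exact model):
% achievement A of student i in grade g depends on lagged achievement,
% program receipt, observables X, and unobserved heterogeneity mu_i.
\[
A_{ig} \;=\; \lambda\,A_{i,g-1} \;+\; \beta\,\mathrm{CCT}_{ig}
        \;+\; X_{ig}^{\top}\gamma \;+\; \mu_i \;+\; \varepsilon_{ig},
\]
% so with program receipt in every grade from g_0 through g, the cumulative
% impact is beta*(1 + lambda + ... + lambda^(g - g_0)), which exceeds the
% per-grade effect beta whenever lambda > 0.
```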


Integrated epi‐econ assessment: Quantitative theory

February 2025 · 22 Reads · 3 Citations

Karl Harmenberg · [...] · Jonna Olsson

Aimed at pandemic preparedness, we construct a framework for integrated epi‐econ assessment that we believe would be useful for policymakers, especially at the early stages of a pandemic outbreak. We offer theory, calibration to micro‐, macro‐, and epi‐data, and numerical methods for quantitative policy evaluation. The model has an explicit microeconomic, market‐based structure. It highlights trade‐offs, within period and over time, associated with activities that involve both valuable social interaction and harmful disease transmission. We compare market solutions with socially optimal allocations. Our calibration to Covid‐19 implies that households shift their leisure and work activities away from social interactions. This is especially true for older individuals, who are more vulnerable to disease. The optimal allocation may or may not involve lockdown and changes the time allocations significantly across age groups. In this trade‐off, people's social leisure time becomes an important factor, aside from deaths and GDP. We finally compare optimal responses to different viruses (SARS, seasonal flu) and argue that, going forward, economic analysis ought to be an integral element behind epidemiological policy.


Double robust inference for continuous updating GMM

February 2025 · 17 Reads

We propose the double robust Lagrange multiplier (DRLM) statistic for testing hypotheses specified on the minimizer of the population continuous updating objective function. The (bounding) χ² limiting distribution of the DRLM statistic is robust to both misspecification and weak identification, hence its name. The minimizer is the so-called pseudo-true value, which equals the true value of the structural parameter under correct specification. To emphasize its importance for applied work, where misspecification and weak identification are common, we use the DRLM test to analyze the risk premia in Adrian et al. (2014) and He et al. (2017), as well as the structural parameters in a nonlinear asset pricing model with constant relative risk aversion.
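
For readers unfamiliar with continuous updating, the criterion whose population minimizer (the pseudo-true value) the DRLM statistic targets is the standard CUE objective, in which the weighting matrix is recomputed at each candidate parameter value; the notation below is generic.

```latex
% Continuous-updating GMM objective (standard form); the DRLM statistic tests
% hypotheses on the population minimizer, i.e., the pseudo-true value.
\[
\hat\theta_{\mathrm{CUE}}
  \;=\; \arg\min_{\theta}\;
  \bar g_n(\theta)^{\top}\,\hat\Omega_n(\theta)^{-1}\,\bar g_n(\theta),
\qquad
\bar g_n(\theta) \;=\; \frac{1}{n}\sum_{i=1}^{n} g(x_i,\theta),
\]
\[
\hat\Omega_n(\theta)
  \;=\; \frac{1}{n}\sum_{i=1}^{n}
  \bigl(g(x_i,\theta)-\bar g_n(\theta)\bigr)
  \bigl(g(x_i,\theta)-\bar g_n(\theta)\bigr)^{\top}.
\]
```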


An ordinal approach to the empirical analysis of games with monotone best responses

February 2025

·

2 Reads

·

1 Citation

We develop a nonparametric and ordinal approach for testing pure strategy Nash equilibrium play in games with monotone best responses, such as those with strategic complements or substitutes. The approach makes minimal assumptions on unobserved heterogeneity, requires no parametric assumptions on payoff functions, and imposes no restriction on equilibrium selection from multiple equilibria. The approach can also be extended to make inferences and predictions. Both model testing and inference can be implemented by a tractable computational procedure based on column generation. To illustrate how our approach works, we include an application to an IO entry game.


Figures: samples from random assignment to APM and non-APM schools; balance in teacher characteristics (Sample 1, 2016) and in school characteristics for the original and observed evaluation samples (all regressions include UGEL fixed effects, standard errors clustered at the school level, thick and thin lines marking 90% and 95% confidence intervals); treatment effects on the composition of teacher characteristics in randomized pedagogical-skill sample schools (Sample 2, 2017); quantile regression coefficients for the program's effect on standardized test scores after three years of implementation (2018) by decile of student test scores, with school district fixed effects, school-size controls, and standard errors clustered by school.
Can teaching be taught? Improving teachers' pedagogical skills at scale in rural Peru

February 2025 · 29 Reads · 2 Citations

We evaluate the impact of a large-scale teacher coaching program in Peru, a context with high teacher turnover, on teachers' pedagogical skills and student learning. Previous studies find that small-scale coaching programs can improve the teaching of reading and science in developing countries. However, scaling up can reduce programs' effectiveness, and teacher turnover can erode compliance and cause spillovers onto non-program schools. We develop a framework that defines different treatment effects when teacher turnover is present and explains which effects can be estimated. We evaluate this teacher coaching program by exploiting random assignment of the program's expansion to 3797 rural schools in 2016. After two years, teachers assigned to the program increased their aggregate pedagogical skills by 0.20 standard deviations. The program also increased student learning: after one year, Grade 2 students' mathematics and reading scores increased by 0.106 and 0.075 standard deviations (of the distributions of those test scores), respectively. After three years, the cumulative effect increases slightly, to 0.114 and 0.100, respectively. One reason these impacts are modest is that some uncoached teachers moved into treated schools in years 2 and 3. Following our framework, we estimate that the impacts on students of having a “fully” coached teacher for all three years are 0.18 and 0.16 standard deviations for mathematics and reading comprehension, respectively.


Estimating macroeconomic models of financial crises: An endogenous regime‐switching approach

February 2025 · 111 Reads · 1 Citation

We develop a new model of cycles and crises in emerging markets, featuring an occasionally binding borrowing constraint and stochastic volatility, and estimate it with quarterly data for Mexico since 1981. We propose an endogenous regime‐switching formulation of the occasionally binding borrowing constraint, develop a general perturbation method to solve the model, and estimate it using Bayesian methods. We find that the model fits the Mexican data well without systematically relying on large shocks, matching the typical stylized facts of emerging market business cycles and Mexico's history of sudden stops in capital flows. We also find that interest rate shocks play a smaller role in driving both cycles and crises than previously found in the literature.
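
The general flavor of an endogenous regime-switching formulation of an occasionally binding constraint can be sketched with a logistic transition probability that depends on the endogenous state. This is a common device in the literature and only a schematic stand-in for the authors' formulation; κ and the threshold function b̄(·) below are assumed ingredients, not the paper's specification.

```latex
% Schematic endogenous switching probability: the economy moves to the
% "binding" regime with a probability that rises as debt b_t approaches
% the state-dependent borrowing limit \bar b(x_t).
\[
\Pr\bigl(s_{t+1}=\text{binding} \,\big|\, b_t, x_t\bigr)
  \;=\; \frac{1}{1+\exp\bigl(-\kappa\,[\,b_t-\bar b(x_t)\,]\bigr)},
\]
% with kappa indexing how sharply the regime change turns on; the strictly
% occasionally binding constraint is recovered as kappa grows large.
```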


How much do we learn? Measuring symmetric and asymmetric deviations from Bayesian updating through choices

February 2025 · 24 Reads

Belief‐updating biases hinder the correction of inaccurate beliefs and lead to suboptimal decisions. We complement Rabin and Schrag's (1999) portable extension of the Bayesian model by including conservatism in addition to confirmatory bias. Additionally, we show how to identify these two forms of biases from choices. In an experiment, we found that the subjects exhibited confirmatory bias by misreading 19% of the signals that contradicted their priors. They were also conservative and acted as if they missed 28% of the signals.
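
The two biases can be written as simple distortions of Bayes' rule, which is how the sketch below proceeds: with some probability a signal is ignored (conservatism), and with some probability a signal contradicting the current belief is misread as confirming it (confirmatory bias). The 19% and 28% rates are the point estimates quoted in the abstract; the signal accuracy p = 0.7 and everything else is an illustrative assumption, not the authors' estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def biased_posterior(signals, p=0.7, prior=0.5, misread=0.19, miss=0.28):
    """Posterior on state A after binary signals of accuracy p, distorted by
    conservatism (signals ignored with prob `miss`) and confirmatory bias
    (contradicting signals misread with prob `misread`), in the spirit of
    Rabin and Schrag (1999). Stylized illustration only."""
    belief = prior
    for s in signals:                      # s is 1 (favors A) or 0 (favors B)
        if rng.random() < miss:
            continue                       # conservatism: signal not processed
        favored = 1 if belief >= 0.5 else 0
        if s != favored and rng.random() < misread:
            s = favored                    # confirmatory bias: signal misread
        like_A = p if s == 1 else 1 - p
        like_B = 1 - p if s == 1 else p
        belief = belief * like_A / (belief * like_A + (1 - belief) * like_B)
    return belief

true_signals = (rng.random(20) < 0.7).astype(int)   # state A is true
print("posterior on A:", round(biased_posterior(true_signals), 3))
```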


Figures: average admission cutoffs of schools, survey versus ROLs, across four student groups categorized by exam-score percentile (the threshold for public high school admission is 535, the 60.95th percentile, in 2014); histogram of communities' 2014 housing prices in units of 10,000 yuan/m² (2500 yuan/m² bins, with open-ended bins below 5000 and above 25,000 yuan/m²); welfare change when DA is replaced by another mechanism, measured by the welfare-equalizing tuition adjustment against the ZX quota, where a positive value indicates additional tuition (loss) and a negative value a tuition deduction (gain), for students from high- (HHP), moderate- (MHP), and low- (LHP) housing-price communities.
Purchasing seats in school choice and inequality

November 2024 · 8 Reads

We study a mechanism that gives students the option of paying higher tuition to attend their preferred schools. This seat-purchasing mechanism is neither strategyproof nor stable. Our paper combines administrative and survey data to estimate students' preferences and conducts welfare analysis. We find that changing from a deferred acceptance mechanism to the cadet-optimal stable mechanism reduces students' welfare but that adopting the observed seat-purchasing mechanism alleviates this welfare loss. Moreover, students from affluent communities prefer to pay higher tuition to stay at preferred schools, while those from less affluent communities are more likely to be priced out to lower-quality schools.


Figures: investment and saving functions in three economies, using parameter values from the calibration in Section 4.1; histograms of 1- and 5-year log earnings changes from the data of Guvenen et al. (2021) and from the simulated model, superimposed on Gaussian densities with the same standard deviations; wealth distribution tail, plotting the log counter-CDF of wealth against log wealth, with an estimated power-law parameter of 1.58 for the model-simulated wealth data; vector fields; LHS and RHS of (30).
Capital income jumps and wealth distribution

November 2024 · 68 Reads · 1 Citation

Compared to the distributions of earnings, the distributions of wealth in the US and many other countries are strikingly concentrated at the top and skewed to the right. To explain income and wealth inequality, we provide a tractable heterogeneous-agent model with incomplete markets in continuous time. We separate illiquid capital assets from liquid bond assets and introduce jump risks to capital income, which are crucial for generating a thicker tail of the wealth distribution than that of the labor income distribution. Under recursive utility, we derive optimal consumption and wealth in closed form and show that the stationary wealth distribution has an exponential right tail that closely approximates a power-law distribution. Our calibrated model can match the income and wealth distributions in US data, including the extreme right tail of the wealth distribution.
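
For the tail in the wealth-distribution figure, a power-law exponent such as the reported 1.58 is conventionally estimated from the largest order statistics. Below is a Hill-estimator sketch on assumed Pareto-tailed data; the sample, the number of tail observations k, and the use of the Hill estimator itself are illustrative choices, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def hill(w, k=500):
    """Hill estimator of the power-law tail exponent: k divided by the sum of
    log-exceedances of the k largest observations over the next order
    statistic. Illustrative of how a tail index might be estimated."""
    x = np.sort(w)
    tail, threshold = x[-k:], x[-k - 1]
    return k / np.sum(np.log(tail / threshold))

# Assumed Pareto-tailed sample with true exponent 1.58 (minimum value 1)
wealth = rng.pareto(1.58, 200_000) + 1.0
print("estimated tail exponent:", round(hill(wealth), 2))
```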


Figures: Monte Carlo simulation results for F_ξ and F_ε across tuning parameters κ = 1, 3.14, and 5 and sample sizes N = 1000, 2000, and 4000; an illustration that the ratio of ch.f.s of the second and first Rossberg order statistics departs from that of the measurement (observed) order statistics, while the ratio for the true measurement-error (exponential) order statistics aligns with it; an illustration that the probability distributions of cross-sums of two exponential and Rossberg order statistics differ, while, consistent with Rossberg (1972), the distributions of exponential and Rossberg spacings align.
Deconvolution from two order statistics

November 2024 · 12 Reads · 1 Citation

Economic data are often contaminated by measurement errors and truncated by ranking. This paper shows that the classical measurement error model with independent and additive measurement errors is identified nonparametrically using only two order statistics of repeated measurements. The identification result confirms a hypothesis by Athey and Haile (2002) for a symmetric ascending auction model with unobserved heterogeneity. Extensions allow for heterogeneous measurement errors, broadening the applicability to additional empirical settings, including asymmetric auctions and wage offer models. We adapt an existing simulated sieve estimator and illustrate its performance in finite samples.


Figure: the MMS procedure.
Estimation and inference in games of incomplete information with unobserved heterogeneity and large state space

November 2024 · 13 Reads · 1 Citation

Building on the sequential identification result of Aguirregabiria and Mira (2019), this paper develops estimation and inference procedures for static games of incomplete information with payoff‐relevant unobserved heterogeneity and multiple equilibria. With payoff‐relevant unobserved heterogeneity, sequential estimation and inference face two main challenges: the matching‐types problem and a large number of matchings. We tackle the matching‐types problem by constructing a new minimum‐distance criterion for the correct matching and the payoff function with both correct and incorrect “moments.” To handle large numbers of matchings, we propose a novel and computationally fast multistep moment selection procedure. We show that asymptotically, it achieves a time complexity that is linear in the number of “moments” when the occurrence of multiple equilibria does not depend on the number of “moments.” Based on this procedure, we construct a consistent estimator of the payoff function, an asymptotically uniformly valid and easy‐to‐implement test for linear hypotheses on the payoff function, and a consistent method to group payoff functions according to the unobserved heterogeneity. Extensive simulations demonstrate the finite sample efficacy of our procedures.


Covariate adjustment in stratified experiments

November 2024 · 10 Reads · 8 Citations

This paper studies covariate adjusted estimation of the average treatment effect in stratified experiments. We work in a general framework that includes matched tuples designs, coarse stratification, and complete randomization as special cases. Regression adjustment with treatment‐covariate interactions is known to weakly improve efficiency for completely randomized designs. By contrast, we show that for stratified designs such regression estimators are generically inefficient, potentially even increasing estimator variance relative to the unadjusted benchmark. Motivated by this result, we derive the asymptotically optimal linear covariate adjustment for a given stratification. We construct several feasible estimators that implement this efficient adjustment in large samples. In the special case of matched pairs, for example, the regression including treatment, covariates, and pair fixed effects is asymptotically optimal. We also provide novel asymptotically exact inference methods that allow researchers to report smaller confidence intervals, fully reflecting the efficiency gains from both stratification and adjustment. Simulations and an empirical application demonstrate the value of our proposed methods.
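
In the matched-pairs case singled out in the abstract, the asymptotically optimal adjustment is the regression of the outcome on treatment, covariates, and pair fixed effects. A minimal statsmodels sketch of that specification on simulated matched-pairs data follows; the data-generating design and sample sizes are assumptions, and the paper's inference procedures are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Matched-pairs design: units paired on x, one unit per pair treated at random
n_pairs = 200
x = np.sort(rng.normal(size=2 * n_pairs))     # adjacent sorted units form pairs
pair = np.repeat(np.arange(n_pairs), 2)
d = np.zeros(2 * n_pairs, dtype=int)
flip = rng.integers(0, 2, n_pairs)
d[0::2], d[1::2] = flip, 1 - flip             # randomize treatment within pair
y = 1.0 * d + 2.0 * x + rng.normal(size=2 * n_pairs)   # true ATE = 1.0

df = pd.DataFrame({"y": y, "d": d, "x": x, "pair": pair})

unadjusted = smf.ols("y ~ d", data=df).fit()
pair_fe = smf.ols("y ~ d + x + C(pair)", data=df).fit()
print("unadjusted ATE estimate:", round(unadjusted.params["d"], 3))
print("pair-FE    ATE estimate:", round(pair_fe.params["d"], 3))
```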


Differences in euro‐area household finances and their relevance for monetary‐policy transmission

November 2024 · 23 Reads · 1 Citation

This paper quantifies mechanisms through which heterogeneity in household finances affects the transmission of monetary policy, considering housing tenure choices over the life cycle. Our analysis also identifies challenges for monetary policy related to housing busts. It focuses on the four largest economies in the euro area: France, Germany, Italy, and Spain. Through the lens of our model, we find that home ownership and endogenous transitions from renting to owning are key elements for the extent of cross‐country asymmetries in aggregate consumption responses to changes in the real interest rate. Across groups with different housing tenure, we find that the consumption response of homeowners to interest rate changes tends to be larger than the response of renters, particularly if these homeowners are indebted and do not adjust their illiquid housing wealth.


Journal metrics


Journal Impact Factor™: 1.9 (2023)

CiteScore™: 4.1 (2023)

SNIP: 1.480 (2023)

Article processing charge: £0.00 / €0.00