# Christopher F. Parmeter

University of Miami · Department of Economics

## About

180 Publications · 29,852 Reads · 3,387 Citations

## Publications (180)

Stochastic frontier models for cross‐sectional data typically assume that the one‐sided distribution of firm‐level inefficiency is continuous. However, it may be reasonable to hypothesize that inefficiency is continuous except for a discrete mass at zero capturing fully efficient firms (zero‐inefficiency). We propose a sieve‐type density estimator...

We provide a review of the literature related to the “wrong skewness problem” in stochastic frontier analysis. We identify two distinct approaches, one treating the phenomenon as a signal from the data that the underlying structure has some special characteristics that allow inefficiency to co-exist with “wrong” skewness, the other treating it as a...

At various stages during the initial onset of the COVID-19 pandemic, US states and local municipalities enacted eviction moratoria. One of the main aims of these moratoria was to slow the spread of COVID-19 infections. We deploy a semiparametric difference-in-differences approach with an event study specification to test whether the lifting...

This paper assesses the terminology of modified and corrected ordinary least squares (MOLS/COLS) in efficiency analysis. These two approaches, while different, are often conflated. Beyond this, several remarks on the practicality and utility of the methods are provided.

The literature on semi- and nonparametric methods to estimate the stochastic frontier model has expanded rapidly in recent years. This chapter provides a critical eye toward this burgeoning and important literature, highlighting the various approaches to achieving near-nonparametric identification. From here, the importance of the relaxation of...

Quantile regression has become common in applied economic research. Recently, these methods have been adapted for use with the stochastic frontier model. However, the composed nature of the error term is ignored, calling into question whether a “stochastic” quantile frontier is actually estimated. Here we demonstrate that a particular distributional pai...

The corrected ordinary least squares (COLS) estimator of the stochastic frontier model exploits the higher order moments of the OLS residuals to estimate the parameters of the composed error. However, both “Type I” and “Type II” failures in COLS can result from finite sample bias that arises in the estimation of these higher order moments, especial...
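As an illustrative sketch of the moment-based logic this abstract describes (the classical normal–half-normal COLS case, not the paper's finite-sample-corrected estimator), the third central moment of the OLS residuals identifies the inefficiency scale:

```python
import numpy as np

def cols_normal_halfnormal(residuals):
    """Method-of-moments (COLS-style) recovery of the composed-error
    parameters from OLS residuals, assuming normal noise v and
    half-normal inefficiency u (epsilon = v - u).

    Returns (sigma_u, sigma_v); raises ValueError on a "wrong skewness"
    sample (Type I failure) or a negative implied noise variance
    (Type II failure)."""
    e = np.asarray(residuals, dtype=float)
    m2 = np.mean((e - e.mean()) ** 2)  # second central moment
    m3 = np.mean((e - e.mean()) ** 3)  # third central moment
    if m3 > 0:
        raise ValueError("wrong skewness: m3 > 0, sigma_u not identified")
    # Third central moment of epsilon: sqrt(2/pi) * (1 - 4/pi) * sigma_u^3
    c = np.sqrt(2 / np.pi) * (1 - 4 / np.pi)  # negative constant
    sigma_u = (m3 / c) ** (1 / 3)
    sigma_v2 = m2 - (1 - 2 / np.pi) * sigma_u ** 2
    if sigma_v2 < 0:
        raise ValueError("implied sigma_v^2 negative (Type II failure)")
    return sigma_u, np.sqrt(sigma_v2)

# Simulated composed errors: v ~ N(0, 0.5^2), u ~ |N(0, 1)|
rng = np.random.default_rng(0)
n = 200_000
eps = 0.5 * rng.standard_normal(n) - np.abs(rng.standard_normal(n))
sigma_u, sigma_v = cols_normal_halfnormal(eps - eps.mean())
```

Because `sigma_u` enters through a cube root of a third moment, small-sample noise in `m3` is amplified, which is exactly the fragility the abstract points to.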

In this article, we introduce the sftt command, which fits two-tier stochastic frontier (2TSF) models with cross-sectional data. Like most frontier models, a 2TSF model consists of a linear frontier model and a composite error term. The error term is assumed to be a mixture of three components: two one-sided inefficiency terms—strictly nonnegative a...

Applied researchers using kernel density estimation have worked with optimal bandwidth rules that invariably assumed that the reference density is Normal (optimal only if the true underlying density is Normal). We offer four new optimal bandwidth rules-of-thumb based on other infinitely supported distributions: Logistic, Laplace, Student's t and As...
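For context, the normal-reference rule the abstract refers to can be sketched as follows. The constant 1.06 is specific to a Gaussian reference density with a Gaussian kernel; the alternative reference densities the paper proposes change only this leading constant (their exact values are not reproduced here):

```python
import numpy as np

def normal_reference_bandwidth(x):
    """Normal-reference rule of thumb for a Gaussian kernel:
    h = 1.06 * sigma * n^(-1/5).  AMISE-optimal only when the true
    density is normal; heavier-tailed reference densities alter the
    multiplicative constant."""
    x = np.asarray(x, dtype=float)
    return 1.06 * x.std(ddof=1) * x.size ** (-0.2)

def kde(x, grid, h):
    """Gaussian kernel density estimate evaluated on a grid."""
    x = np.asarray(x, dtype=float)[:, None]
    u = (np.asarray(grid, dtype=float)[None, :] - x) / h
    return np.exp(-0.5 * u ** 2).mean(axis=0) / (h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
sample = rng.standard_normal(5000)
h = normal_reference_bandwidth(sample)      # about 0.19 for n = 5000
grid = np.linspace(-3, 3, 61)
fhat = kde(sample, grid, h)
```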

Quantile regression has become one of the standard tools of econometrics. We examine its compatibility with the special goals of stochastic frontier analysis. We document several conflicts between quantile regression and stochastic frontier analysis. From there we review what has been done up to now, we propose ways to overcome the conflicts that e...

Understanding the drivers of productivity remains one of the most sought-after phenomena in economics. The ability to produce more from fewer resources is undoubtedly appealing. Using recently updated Penn World Table data, we investigate to what degree previous results using a popular productivity decomposition are maintained. We find that,...

Wrong skewness in the stochastic frontier model leads to many empirical and conceptual problems. A variety of approaches have been proposed to deal with this issue. Here we offer a solution based on moment constraints of the assumed density.

Relying upon an original (country-sector-year) measure of robotic capital, we study the extent of complementarity/substitutability between robots and workers at different skill levels. Relying on nonparametric analysis, we estimate country-sector elasticity of substitution patterns, over the 1995-2009 period, between robots and skilled/unskilled la...

We demonstrate how earlier approaches to model the impact that corporate social responsibility (CSR) has on investment inefficiency are likely to be incorrect and propose use of the stochastic frontier methodology to model this relationship. We apply the approach to a sample of European listed companies, providing robust evidence that CSR performan...

We study the impacts of individual fishing quota programs on overcapacity and the technical efficiency of the Gulf of Mexico red snapper and grouper-tilefish fisheries. We deploy generalized panel data stochastic frontier methods, which allow us to decompose time invariant heterogeneity into both vessel specific heterogeneity and persistent ineffic...

Despite colossal economic and human losses caused by conflict and violence, designing effective policies to avoid conflict remains challenging. While the literature has proposed a voluminous set of candidate predictors, their robustness is questionable and model uncertainty masks the true drivers of conflicts and wars. Considering a comprehensive s...

This review covers several of the core methodological and empirical developments surrounding stochastic frontier models that incorporate various new forms of dependence. Such models apply naturally to panels where cross-sectional observations on firm productivity correlate over time, but also in situations where various components of the error stru...

The economic literature has so far produced limited (country-level) evidence on the magnitude of skill-biased technical change (SBTC), and has not yet investigated the extent to which, coupled with labor market imperfections, SBTC is associated with inefficient use of labor. We present a novel approach to estimate SBTC allowing for the presence of...

As the COVID‐19 pandemic has progressed, so too has the recognition that cases and deaths have been underreported, perhaps vastly so. Here, we present an econometric strategy to estimate the true number of COVID‐19 cases and deaths for 61 and 56 countries, respectively, from 1 January 2020 to 3 November 2020. Specifically, we estimate a ‘structural...

The two-tier stochastic frontier model has seen widespread application across a range of social science domains. It is particularly useful in examining bilateral exchanges where unobserved side-specific information exists on both sides of the transaction. These buyer and seller specific informational aspects offer opportunities to extract surplus f...

We consider density deconvolution with zero-mean Laplace noise in the context of an error component regression model. We adapt the minimax deconvolution methods of Meister (2006) to allow estimation of the unknown noise variance. We propose a semi-uniformly consistent estimator for an ordinary-smooth target density and a modified "variance truncati...

The rise of artificial intelligence and automation is fueling anxiety about the replacement of workers with robots and digital technologies. Relying upon a (country-sector-year) constructed measure of robotic capital (RK), we study the extent of complementarity/substitutability between robots and workers at different skill levels (i.e., high-, medi...

There has been increased interest in estimation of the stochastic frontier model via quantile regression. Two main approaches currently exist, one that ignores distributional assumptions and selects arbitrary quantiles and another that attempts to estimate the frontier by recognizing that it aligns with a specific quantile of the conditional distri...

The limited number of existing papers that link competition among microfinance institutions (MFIs) and microcredit interest rates, provide inconclusive and counterintuitive results. This paper uses data from 1,997 MFIs operating in 109 countries between the years 2003 and 2016 to construct three measures of competition and evaluate their impacts on...

While classical measurement error in the dependent variable in a linear regression framework results only in a loss of precision, nonclassical measurement error can lead to estimates that are biased and inference that lacks power. Here, we consider a particular type of nonclassical measurement error: skewed errors. Unfortunately, skewed measurem...

The distributional specifications for the composite regression error term most often used in stochastic frontier analysis are inherently bounded as regards their skewness and excess kurtosis coefficients. We derive general expressions for the skewness and excess kurtosis of the composed error term in the stochastic frontier model based on the ratio...

The papers in this collection of works, presented at the 2018 North American Productivity Workshop X hosted at the University of Miami, represent contributed, peer-reviewed chapters across all areas of efficiency and productivity analysis. They offer new insights and perspectives into the modeling, identification, and estimation of productivity and...

The volume examines the state-of-the-art of productivity and efficiency analysis. It brings together a selection of the best papers from the 10th North American Productivity Workshop. By analyzing world-wide perspectives on challenges that local economies and institutions may face when changes in productivity are observed, readers can quickly asses...

This paper proposes two alternative estimators for the semiparametric smooth coefficient stochastic frontier model which do not require parametric specification of the parameters of the distribution of inefficiency to identify all of the model primitives. These new estimators offer avenues for testing for correct specification. A small Monte Carlo...

In this reply, we comment on Goel and Saunoris’ (GS) replication of Jetter and Parmeter (World Development, 2018; JP). We note that GS’ analysis is useful in extending the scope of studying corruption across countries. Our comment centers on two points: (i) the underlying databases used to measure corruption across countries and (ii) the countries...

Do all aspects of social capital improve repayment behavior in group lending programs? The group lending literature typically uses one or few measures of social capital in a linear form, and systematically understates the uncertainty of results and model specifications. As a result, many papers conclude that specific measures of social capital do n...

We consider the application of the profile least-squares method to estimate the impact of the determinants of inefficiency in the presence of panel data and unobserved individual specific heterogeneity for the stochastic frontier model. This method has the advantage over previous approaches in that the effect of the determinants of inefficiency on...

The matrix that transforms the response variable in a regression to its predicted value is commonly referred to as the hat matrix. The trace of the hat matrix is a standard metric for calculating degrees of freedom. The two prominent theoretical frameworks for studying hat matrices to calculate degrees of freedom in local polynomial regressions – A...
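The hat-matrix identity the abstract builds on is easy to verify numerically; a minimal sketch of the parametric OLS case (the nonparametric smoother matrices the paper studies are not reproduced here):

```python
import numpy as np

def hat_matrix(X):
    """Projection ("hat") matrix H = X (X'X)^{-1} X' mapping the
    response vector y to its OLS fitted values."""
    X = np.asarray(X, dtype=float)
    return X @ np.linalg.solve(X.T @ X, X.T)

# With k linearly independent columns, trace(H) = k: the parametric
# degrees of freedom.  Local polynomial regression generalizes this by
# taking the trace of its smoother matrix.
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(50), rng.standard_normal((50, 3))])
H = hat_matrix(X)  # symmetric, idempotent, trace = 4
```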

This work describes a versatile and readily-deployable sensitivity analysis of an ordinary least squares (OLS) inference with respect to possible endogeneity in the explanatory variables of the usual k-variate linear multiple regression model. This sensitivity analysis is based on a derivation of the sampling distribution of the OLS parameter estim...
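A toy Monte Carlo (illustrative only, not the paper's sensitivity analysis) shows the inconsistency that motivates such an exercise: with an endogenous regressor, the OLS slope converges to the true coefficient plus cov(x, ε)/var(x):

```python
import numpy as np

# Illustrative simulation: when a regressor is correlated with the
# error, OLS is inconsistent, with asymptotic bias cov(x, eps)/var(x).
# All numbers here are made up for the example.
rng = np.random.default_rng(3)
n, beta, gamma = 100_000, 2.0, 0.5
eps = rng.standard_normal(n)
x = rng.standard_normal(n) + gamma * eps   # endogenous regressor
y = beta * x + eps
slope = np.cov(x, y)[0, 1] / np.var(x, ddof=1)  # OLS slope estimate
theoretical_bias = gamma / (1 + gamma ** 2)     # cov(x,eps)/var(x) = 0.4
```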

We provide a general overview of Bayesian model averaging (BMA) along with the concept of jointness. We then describe the relative merits and attractiveness of the newest BMA software package, BMS, available in the statistical language R to implement a BMA exercise. BMS provides the user a wide range of customizable priors for conducting a BMA exer...

When making decisions, policymakers often rely on benefit transfer estimates of environmental amenities when primary valuation studies are infeasible. However, various forms of uncertainty appear in the application of meta-analysis. We discuss several visual devices which applied analysts can deploy to clearly present select types of uncertainty in...

A recent spate of research has attempted to develop estimators for stochastic frontier models that embrace semi- and nonparametric insights to enjoy the advantages inherent in the more traditional operations research method of data envelopment analysis. These newer methods explicitly allow statistical noise in the model, the absence of which is a c...

Model uncertainty is a prominent feature in many applied settings. This is certainly true in the efficiency analysis realm, where the proper distributional specification of the error components of a stochastic frontier model is, generally, still an open question, along with which variables influence inefficiency. Given the concern over the impact t...

The distributional specifications for the composite regression error term most often used in Stochastic Frontier Analysis (SFA) are inherently bounded as regards their skewness and excess kurtosis coefficients. These bounds provide simple diagnostic tools and model selection/rejection criteria for empirical studies which appear to have been overloo...

Data envelopment analysis (DEA) and stochastic frontier analysis (SFA), as well as combinations thereof, are widely applied in incentive regulation practice, where the assessment of efficiency plays a major role in regulation design and benchmarking. Using a Monte Carlo simulation experiment, this paper compares the performance of six alternative m...

Identifying the robust determinants of corruption among cultural, economic, institutional, and geographical factors has proven difficult. From a policy perspective, it is important to know whether inherent, largely unchangeable attributes are responsible or if institutional and economic attributes are at work. Accounting for model uncertainty, we u...

The hat matrix maps the vector of response values in a regression to its predicted counterpart. The trace of this hat matrix is the workhorse for calculating the effective number of parameters and the degrees of freedom in both parametric and nonparametric regression settings. The nonparametric literature is, however, silent on the number of parame...

How much of the convergence in labor productivity that we observe in manufacturing is due to convergence in technology versus convergence in capital-labor ratios? To shed light on this question, we introduce a nonparametric counterfactual decomposition of labor productivity growth into growth of the capital-labor ratio (K/L), technological producti...

The two-tiered stochastic frontier model has enjoyed success across a range of application domains where it is believed that incomplete information on both sides of the market leads to surplus which buyers and sellers can extract. Currently, this model is hindered by the fact that estimation relies on very restrictive distributional assumptions on...

The matrix that transforms the response variable in a regression to its predicted value is commonly referred to as the hat matrix. The trace of the hat matrix is a standard metric for calculating degrees of freedom. Nonparametric-based hat matrices do not enjoy all properties of their parametric counterpart, in part owing to the fact that the forme...

We replicate the findings of two influential studies on returns to scale in the United States electricity generation market. The main results are contrasted using both local-linear nonparametric regression, a technique robust to parametric functional form assumptions, as well as an updated data set. While the quantitative findings across all of the...

This paper introduces urbanization as an important driver of government size. Using panel data of 175 countries from 1960–2010, urbanization is closely linked to a larger public sector, especially related to education, health care, and social issues. Various robustness checks confirm this finding. Analyzing state‐level public spending in Colombia a...

We review Bayesian and classical approaches to nonparametric density and regression estimation and illustrate how these techniques can be used in economic applications. On the Bayesian side, density estimation is illustrated via finite Gaussian mixtures and a Dirichlet Process Mixture Model, while nonparametric regression is handled using priors th...

Least-squares cross-validation is commonly used for selection of smoothing parameters in the discrete data setting; however, in many applied situations, it tends to select relatively small bandwidths. This tendency to undersmooth is due in part to the geometric weighting scheme that many discrete kernels possess. This problem may be avoided by usin...

We consider the benchmark stochastic frontier model where inefficiency is directly influenced by observable determinants. In this setting, we estimate the stochastic frontier and the conditional mean of inefficiency without imposing any distributional assumptions. To do so we cast this model in the partly linear regression framework for the conditi...

Stochastic frontier analysis is a popular tool to assess firm performance. Almost universally it has been applied using maximum likelihood (ML) estimation. An alternative approach, pseudolikelihood (PL) estimation, which decouples estimation of the error component structure and the production frontier, has been adopted in both the non-parametric an...

Nearly all journal ranking analyses assume that rank statistics of journal quality are deterministic, yet they are clearly random. The only study to recognize ranking uncertainty is Stern (2013), which calculates standard errors for a ranking of five-year impact factors for 232 economics journals and performs inference using a series of univariate...

Policymakers and advocates often use benefit transfers to estimate the economic value of environmental amenities when primary valuation studies are infeasible. Benefit transfers based on meta-analyses, which synthesize site and methodological characteristics from valuation studies of similar underlying amenities, generally outperform traditional si...

A well established fact in the growth empirics literature is the increasing (unconditional) variation in output per capita across countries. We propose a nonparametric decomposition of the conditional variation of output per capita across countries to capture different channels over which the variation might be increasing. We find that OECD countri...

Purpose: Evaluate health care access and experiences with care among long-term survivors of adolescent and young adult (AYA) cancer relative to a comparison group in the USA. Methods: The 2008 to 2012 Medical Expenditure Panel Surveys identified 1163 survivors of cancer, diagnosed ages 15-39, current ages 20-64, who were at least 5 years after d...

It is known that model averaging estimators are useful when there is uncertainty governing which covariates should enter the model. We argue that in applied research there is also uncertainty as to which method one should deploy, prompting model averaging over user-defined choices. Specifically, we propose, and detail, a nonparametric regression es...

Background: To evaluate perceived health care quality among a national sample of survivors of adolescent and young adult (AYA) cancer relative to individuals from the general population. Methods: Using the Medical Expenditure Panel Surveys from 2008-2012, we identified 1,163 survivors diagnosed with cancer ages 15-39 who were at least five years...

Stochastic frontier analysis is a popular tool to assess firm performance. Almost universally it has been applied using maximum likelihood estimation. An alternative approach, pseudolikelihood estimation, which decouples estimation of the error component structure and the production frontier, has been adopted in several advanced settings. To date,...

The point of empirical work is commonly to test a very small number of crucial null hypotheses in a linear multiple regression setting. Endogeneity in one or more model explanatory variables is well known to invalidate such testing using OLS estimation. But attempting to identify credibly valid (and usefully strong) instruments for such variables i...

This paper proposes semiparametric estimation of a system of equations using a smooth coefficient model. The framework of the model makes system estimation in the semiparametric case relatively simple and computationally tractable. We discuss the imposition of cross-equation restrictions which are motivated by economic theory. Further, we discuss m...

We propose a Laplace stochastic frontier model as an alternative to the traditional model with normal errors. An interesting feature of the Laplace model is that the distribution of inefficiency conditional on the composed error is constant for positive values of the composed error, but varies for negative values. A simulation study suggests that t...

This chapter outlines statistical and econometric procedures that can be applied to the analysis of meta-data. Particular attention is paid to ensuring robustness of the insights from a meta-regression, and to the fine econometric details that sharpen the insights of practitioners when deciding which tools to use for a meta-anal...

This paper proposes plug-in bandwidth selection for kernel density estimation with discrete data via minimization of mean summed square error. Simulation results show that the plug-in bandwidths perform well, relative to cross-validated bandwidths, in non-uniform designs. We further find that plug-in bandwidths are relatively small. Several empiric...

When comparing two competing approximate models, the one having smallest 'expected true error' is closest to the data generating process (according to the specified loss function) and is therefore to be preferred. In this paper we consider a data-driven method of testing whether two competing approximate models, for instance a parametric and a non...

This paper proposes a bootstrap algorithm for testing symmetry of a univariate density. Validity of the bootstrap procedure is shown theoretically as well as via simulations. Three empirical examples demonstrate the versatility of the test in practice.
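A generic bootstrap symmetry test along these lines (a hedged sketch, not necessarily the paper's exact algorithm) can be built by resampling from the symmetrized sample so that the null of symmetry holds in the bootstrap world:

```python
import numpy as np

def bootstrap_symmetry_test(x, n_boot=499, seed=0):
    """Bootstrap test of symmetry about the mean, using the absolute
    sample skewness as the test statistic.  The null is imposed by
    resampling from the symmetrized sample {c, -c}, c = x - mean(x).
    Illustrative sketch only.  Returns a bootstrap p-value."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)

    def abs_skew(z):
        c = z - z.mean()
        return abs(np.mean(c ** 3) / np.mean(c ** 2) ** 1.5)

    stat = abs_skew(x)
    sym = np.concatenate([x - x.mean(), -(x - x.mean())])
    boot = np.array([abs_skew(rng.choice(sym, size=x.size, replace=True))
                     for _ in range(n_boot)])
    return (1 + np.sum(boot >= stat)) / (1 + n_boot)

rng = np.random.default_rng(4)
p_normal = bootstrap_symmetry_test(rng.standard_normal(500))  # symmetric
p_skewed = bootstrap_symmetry_test(rng.exponential(size=500))  # skewed
```

For the symmetric normal sample the test should not reject; for the strongly right-skewed exponential sample it should.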

Credible inference requires attention to the possible fragility of the results (\(p\) values for key hypothesis tests) to flaws in the model assumptions, notably accounting for the validity of the instruments used. Past sensitivity analysis has mainly consisted of experimentation with alternative model specifications and with tests of over-identif...

The majority of empirical research in economics ignores the potential benefits of nonparametric methods, while the majority of advances in nonparametric theory ignores the problems faced in applied econometrics. This book helps bridge this gap between applied economists and theoretical nonparametric econometricians. It discusses in depth, and in te...

This paper details implementation of the recently proposed root-n kernel density estimator of Escanciano and Jacho-Chávez (2012, “n-uniformly consistent density estimation in nonparametric regression models,” Journal of Econometrics 167: 305–316), which circumvents the slow rate of convergence of traditional nonparametric kernel densi...

Given the popularity of nonparametric methods in applied econometric research, it is beneficial if students have exposure to these methods. We provide a simple, heuristic overview that can be used to discuss smoothing and nonparametric density and regression estimation suitable for an undergraduate econometrics class. We make connections to existin...

In 2008 Industry Canada auctioned 105 MHz of spectrum to a group of bidders that included incumbents and potential new entrants into the Canadian mobile phone market, raising $4.25 billion. In an effort to promote new entry, 40 MHz of spectrum was set aside for new entrants. We adapt the methodology of Bajari and Fox (2009) to the Canadian auction se...