Richard J. Smith’s research while affiliated with Victoria University and other places


Publications (78)


Quasi‐Maximum Likelihood and The Kernel Block Bootstrap for Nonlinear Dynamic Models
  • Article

November 2020 · 35 Reads · 4 Citations · Journal of Time Series Analysis · Richard J. Smith

This paper applies a novel bootstrap method, the kernel block bootstrap, to quasi-maximum likelihood estimation of dynamic models with stationary strong mixing data. The method first kernel weights the components comprising the quasi-log likelihood function in an appropriate way and then samples the resultant transformed components using the standard “m out of n” bootstrap. We investigate the first-order asymptotic properties of the kernel block bootstrap method for quasi-maximum likelihood demonstrating, in particular, its consistency and the first-order asymptotic validity of the bootstrap approximation to the distribution of the quasi-maximum likelihood estimator. A set of simulation experiments for the mean regression model illustrates the efficacy of the kernel block bootstrap for quasi-maximum likelihood estimation.
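As a rough numerical illustration of the two-step idea described in the abstract (kernel weight the per-observation contributions, then resample the transformed components i.i.d.), the sketch below applies a Bartlett-kernel weighting followed by an ordinary with-replacement bootstrap to the sample mean of an AR(1) series. The kernel choice, bandwidth, and normalisation are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def kernel_block_bootstrap(contributions, bandwidth, n_boot, rng=None):
    """Illustrative sketch of the kernel block bootstrap idea.

    Each per-observation contribution is replaced by a kernel-weighted
    moving average of its neighbours; the transformed components are
    then resampled i.i.d. with replacement ("m out of n" with m = n here).
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(contributions, dtype=float)
    n = x.size
    t = np.arange(n)
    # Bartlett (triangular) kernel weights, a common choice for mixing data
    lags = (t[:, None] - t[None, :]) / bandwidth
    w = np.clip(1.0 - np.abs(lags), 0.0, None)
    w /= w.sum(axis=1, keepdims=True)   # normalise each row of weights
    smoothed = w @ x                    # kernel-weighted components
    # standard i.i.d. bootstrap on the transformed components
    idx = rng.integers(0, n, size=(n_boot, n))
    return smoothed[idx].mean(axis=1)   # bootstrap distribution of the mean

# usage: bootstrap the sample mean of a simulated AR(1) series
rng = np.random.default_rng(0)
e = rng.standard_normal(200)
y = np.empty(200)
y[0] = e[0]
for s in range(1, 200):
    y[s] = 0.5 * y[s - 1] + e[s]
boot_means = kernel_block_bootstrap(y, bandwidth=8, n_boot=500, rng=1)
```

The kernel weighting is what carries the serial dependence into the resampling scheme; with i.i.d. data the same code with bandwidth 1 reduces to the ordinary bootstrap.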




Generalised empirical likelihood-based kernel density estimation

November 2017 · 23 Reads · 2 Citations

If additional information about the distribution of a random variable is available in the form of moment conditions, the weighted kernel density estimator constructed by replacing the uniform weights with the generalised empirical likelihood probabilities provides an improved approximation to the moment constraints. More importantly, a reduction in variance is achieved due to the systematic use of the extra information. The same approach can be used to estimate a density or distribution of certain functions of data and, possibly, of the unknown parameters, the latter being replaced by their generalised empirical likelihood estimates. A special case of interest is estimation of densities or distributions of (generalised) residuals in semi-parametric models defined by a finite number of moment restrictions. Such estimates are of great practical interest and can be used for diagnostic purposes, including testing parametric assumptions about the error distribution, goodness-of-fit, or overidentifying moment restrictions. We give conditions under which such estimators are consistent, and describe their asymptotic mean squared error properties. Analytic examples illustrate the situations where re-weighting provides a reduction in variance, and a simulation study is conducted to evaluate the small-sample performance of the proposed estimators.
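A minimal sketch of the re-weighting idea: empirical likelihood probabilities that impose a known mean replace the uniform 1/n weights in a Gaussian kernel density estimator. The Newton solver, bandwidth, and single moment condition E[X] = 0 are illustrative assumptions; the generalised empirical likelihood framework in the abstract is broader.

```python
import numpy as np

def el_weights(g):
    """Empirical likelihood probabilities p_i = 1 / (n (1 + lam * g_i)),
    with lam chosen by Newton's method so that sum_i p_i g_i = 0.
    g: moment function evaluated at the data, e.g. g_i = x_i - mu."""
    g = np.asarray(g, dtype=float)
    n = g.size
    lam = 0.0
    for _ in range(100):
        d = 1.0 + lam * g
        f = np.sum(g / d)               # proportional to sum_i p_i g_i
        fp = -np.sum(g**2 / d**2)       # derivative of f in lam
        step = f / fp
        lam -= step
        if abs(step) < 1e-12:
            break
    return 1.0 / (n * (1.0 + lam * g))

def weighted_kde(x, grid, h, w):
    """Gaussian kernel density estimate with weights w in place of 1/n."""
    z = (grid[:, None] - x[None, :]) / h
    k = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
    return (k * w[None, :]).sum(axis=1) / h

# usage: impose the (assumed known) mean E[X] = 0 on a normal sample
rng = np.random.default_rng(0)
x = rng.standard_normal(300)
p = el_weights(x - 0.0)                 # EL probabilities under E[X] = 0
grid = np.linspace(-3.0, 3.0, 61)
fhat = weighted_kde(x, grid, h=0.4, w=p)
```

At the empirical likelihood solution the probabilities sum to one automatically, and the weighted sample exactly satisfies the imposed moment condition.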



Tests of additional conditional moment restrictions

March 2017 · 16 Reads · Journal of Econometrics

The primary focus of this article is the provision of tests for the validity of a set of conditional moment constraints additional to those defining the maintained hypothesis that are relevant for independent cross-sectional data contexts. The point of departure and principal contribution of the paper is the explicit and full incorporation of the conditional moment information defining the maintained hypothesis in the design of the test statistics. Thus, the approach mirrors that of the classical parametric likelihood setting by defining restricted tests in contradistinction to unrestricted tests that partially or completely fail to incorporate the maintained information in their formulation. The framework is quite general, allowing the parameters defining the additional and maintained conditional moment restrictions to differ and permitting the conditioning variates to differ likewise. GMM and generalized empirical likelihood test statistics are suggested. The asymptotic properties of the statistics are described under both the null hypothesis and a suitable sequence of local alternatives. An extensive set of simulation experiments explores the practical efficacy of the various test statistics in terms of empirical size and size-adjusted power, confirming the superiority of restricted over unrestricted tests. A number of restricted tests possess both sufficiently satisfactory empirical size and power characteristics to allow their recommendation for econometric practice.
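The incremental-testing idea in the abstract can be illustrated with its simplest unconditional analogue: compare Hansen's J statistic with and without the additional moments, the difference being asymptotically chi-square under the null with degrees of freedom equal to the number of added restrictions. This is only the textbook GMM version, not the paper's restricted conditional-moment statistics; the moments and data below are invented for illustration.

```python
import numpy as np

def j_stat(g):
    """Hansen's J statistic for an (n x q) array of moment functions g:
    n * gbar' S^{-1} gbar, with S the sample variance of the moments."""
    n = g.shape[0]
    gbar = g.mean(axis=0)
    S = np.atleast_2d(np.cov(g, rowvar=False))
    return n * gbar @ np.linalg.solve(S, gbar)

# maintained moments g1 vs. maintained-plus-additional moments g2;
# under the null, j_stat(g2) - j_stat(g1) is asymptotically chi-square
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
g1 = np.column_stack([x])               # maintained: E[x] = 0
g2 = np.column_stack([x, x**2 - 1.0])   # additional: E[x^2] = 1
incremental = j_stat(g2) - j_stat(g1)
```

In finite samples the incremental statistic can occasionally be negative when the two weight matrices are estimated separately; restricted formulations of the kind the paper studies are one way to avoid that.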






Citations (57)


... Our idea for building confidence regions for the parameter of interest is to realize that smoothing the moment indicators as in the Generalized Empirical Likelihood (GEL) literature permits to bootstrap them as if they were i.i.d. Parente and Smith (2018a) study the first-order validity of GEL test statistics based on a similar bootstrapping scheme, the Kernel Block Bootstrap (henceforth KBB; see Parente and Smith (2018b) and Parente and Smith (2019)). Our approach differs from KBB in two significant aspects. ...

Reference:

A Higher-Order Correct Fast Moving-Average Bootstrap for Dependent Data
Quasi‐Maximum Likelihood and The Kernel Block Bootstrap for Nonlinear Dynamic Models
  • Citing Article
  • November 2020

Journal of Time Series Analysis

... The most widely employed survey quantification method is the probability method originally proposed by Theil (1952) and later formalized by Carlson and Parkin (1975), Pesaran (1987), and Cunningham, Smith, and Weale (1998). The probability method assumes that the categorical responses are derived from a continuous distribution of expectations. ...
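The probability method the snippet describes can be sketched in a few lines: assuming expectations follow a normal distribution and responses are censored by an indifference interval (−δ, δ), the reported shares of "up" and "down" answers pin down the mean and standard deviation by inverting the two tail probabilities. The function below is a stylised textbook version (symmetric threshold, known δ), not a reconstruction of any specific paper's implementation.

```python
from statistics import NormalDist

def carlson_parkin(up, down, delta=1.0):
    """Probability-method quantification of categorical survey responses.

    Respondents hold expectations X ~ N(mu, sigma); a share `up` expects
    a change above +delta and a share `down` a change below -delta.
    Inverting P(X > delta) = up and P(X < -delta) = down recovers
    mu and sigma.
    """
    z = NormalDist().inv_cdf
    zu = z(1.0 - up)    # standardised upper threshold: (delta - mu) / sigma
    zd = z(down)        # standardised lower threshold: (-delta - mu) / sigma
    sigma = 2.0 * delta / (zu - zd)
    mu = -delta * (zu + zd) / (zu - zd)
    return mu, sigma

# usage: 40% expect a rise, 20% a fall, indifference threshold 0.5
mu, sigma = carlson_parkin(up=0.4, down=0.2, delta=0.5)
```

In the symmetric case (equal "up" and "down" shares) the implied mean is zero, as the method requires.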

Measurement errors and data estimation: the quantification of survey data
  • Citing Chapter
  • March 1998

... Table 1 presents an overview of recent studies grouped into broad topics on how empirical likelihood and other proposed related methods have been employed in the financial economics literature. Empirical likelihood methods have been incorporated into this field over time, and a few papers explored this family of estimators focusing on this audience [10,11]. This family of estimators was employed in applications in specific contexts in asset pricing, such as for valuing risk and option pricing [12][13][14][15][16], and specifically in portfolio theory [17][18][19]. ...

Recent Developments in Empirical Likelihood and Related Methods
  • Citing Article
  • August 2014

Annual Review of Economics

... The principle of this procedure is based on the fact that the matrix S will be SNR(1) if and only if the antisymmetric matrix (S − S′) is of rank 2 at most. The procedure is then to estimate a system of demands, to calculate the matrix (S − S′), and to test the rank of this matrix by appealing to existing techniques; see Robin and Smith (2000) for example. This test was carried out on Canadian data by Browning and Chiappori (1998). ...
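The rank condition in the snippet is easy to check numerically: an antisymmetric matrix of rank at most 2 can be written as ab′ − ba′, so all but its two largest singular values vanish. The sketch below merely verifies this algebraic fact on a constructed example; it is not the Robin and Smith (2000) test statistic, which accounts for estimation error in the matrix.

```python
import numpy as np

# An antisymmetric matrix of the form a b' - b a' has rank 2
# (when a and b are linearly independent), so its singular values
# beyond the second are zero up to rounding error.
rng = np.random.default_rng(0)
a = rng.standard_normal(5)
b = rng.standard_normal(5)
M = np.outer(a, b) - np.outer(b, a)       # antisymmetric by construction
sv = np.linalg.svd(M, compute_uv=False)   # singular values, descending
rank = int(np.sum(sv > 1e-10 * sv[0]))    # numerical rank
```

A rank test applied to an estimated matrix replaces this hard threshold with a statistic whose null distribution reflects the sampling variability of the estimate.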

Test of rank
  • Citing Article
  • January 2000

Econometric Theory

... This specification has been useful in analyzing the power of unit root tests (Elliott, Rothenberg and Stock, 1996) and in developing estimation and inference theory for models involving persistent variables (see, inter alia, Phillips, 1987; Stock, 1991; Campbell and Yogo, 2006; Phillips, 2014). The local-to-zero variance ω²_T = c²/T^(3/2) has been employed under ρ_T = 1 by McCabe and Smith (1998) and Nishi and Kurozumi (2022) to derive and compare the local asymptotic power functions of unit root tests of ω² = 0 against ω² > 0. Introducing the local-to-zero variance provides us with a convenient framework in which we can evaluate power functions of several tests against alternatives that are close to the null of ω² = 0 and seem relevant for empirical applications. Model (2) integrates these two local-to-unit-root parametrizations, thereby rendering itself an empirically relevant random-coefficient model. ...

The Power of Some Tests for Difference Stationarity under Local Heteroscedastic Integration
  • Citing Article
  • June 1998

... Finally, one can test the strength of identification of the model parameters or even conduct identification-robust inference (e.g., Stock and Wright, 2000;Kleibergen, 2005;Guggenberger and Smith, 2005;Guggenberger, Ramalho, and Smith, 2012;Andrews and Mikusheva, 2016;Andrews, 2016;Andrews and Guggenberger, 2019). ...

GEL Statistics Under Weak Identification
  • Citing Article
  • October 2012

Journal of Econometrics

... In most choice models, records with missing data are removed prior to analysis, a practice that causes the parameter estimates of the models to be biased when the percentage of missing data is significant. The main body of literature on the non-response problem concerns imputation (Ramalho and Smith, 2002), but latent variable models to address non-response to attitudinal items have been applied in some social science studies (Knott et al., 1991;Albanese and Knott, 1992;Muircheartaigh and Moustaki, 1999). Notably, these studies were unable to handle more than two latent variables due to computational difficulties. ...

Discrete choice models for nonignorable missing data
  • Citing Article
  • January 2002

... This approach is labelled "Imputation". Second, we adapt a methodology based on Ramalho and Smith (2013), which allows models to be estimated in contexts in which missing data are abundant and nonrandom. This methodology involves maximum likelihood (ML) estimation by using all observations and allowing the missing values to be endogenous. ...

Discrete Choice Non-Response
  • Citing Article
  • April 2012

Review of Economic Studies

... where w_(S/2)(r) is the standard Brownian motion of (34). The use of circulant matrices simplifies the derivation of (34) and hence the result in (38) compared with that of early studies of seasonal unit roots. Both the zero and Nyquist frequency tests retain their asymptotic Dickey-Fuller distributions in the presence of serial correlation in the SI process, provided that the regression (24) is appropriately augmented. ...

Likelihood Ratio Tests for Seasonal Unit Roots
  • Citing Article
  • January 2002

Journal of Time Series Analysis

... While surveys are traditionally seen as a quantitative data collection method, descriptive surveys can be suitable for a qualitative study and are useful in triangulating the data (Glik et al., 2005;Mitchell et al., 2013). In this study, the survey questions helped to compile the participant profiles, and helped to reinforce findings from the interviews. ...

Efficient Aggregation of Panel Qualitative Survey Data
  • Citing Article
  • June 2013

Journal of Applied Econometrics