Model checking in errors-in-variables regression. J Multivar Anal

Journal of Multivariate Analysis (Impact Factor: 0.93). 11/2008; 99(10):2406-2443. DOI: 10.1016/j.jmva.2008.02.034
Source: RePEc


This paper discusses a class of minimum distance tests for fitting a parametric regression model to a class of regression functions in the errors-in-variables model. The tests are based on certain minimized distances between a nonparametric regression function estimator and a deconvolution kernel estimator of the conditional expectation of the parametric model being fitted. The paper establishes the asymptotic normality of the proposed test statistics under the null hypothesis and of the corresponding minimum distance estimators. We also prove the consistency of the proposed tests against a fixed alternative and derive the asymptotic distributions under general local alternatives. Simulation studies show that the testing procedures preserve the finite-sample level well and compare favorably in power.

  • ABSTRACT: A score-type test procedure is proposed for checking the adequacy of the errors-in-variables regression model when validation data are available. Under mild conditions, the score-type test statistic is proven to be asymptotically normal. The test procedure is shown to be consistent against general fixed alternatives, and it can detect local alternatives that approach the null model at the parametric rate. Monte Carlo simulations are conducted to evaluate the finite sample performance of the proposed test.
    Preview · Article · Mar 2009 · Statistics & Probability Letters
  • ABSTRACT: This paper investigates the scaled prediction variances in the errors-in-variables model and compares their performance with those in the classical model for response surface designs with three factors. The ordinary least squares estimators of the regression coefficients are derived from a second-order response surface model with errors in the variables. Three performance criteria are proposed. The first is the difference between the empirical mean of the maximum scaled prediction variance with errors and the maximum scaled prediction variance without errors. The second is the mean squared deviation from the mean of the simulated maximum scaled prediction variance with errors. The last is the mean squared change in scaled prediction variance with and without errors. In the simulations, 1000 random samples were drawn for three factors, with 20 experimental runs for the central composite designs and 15 for the Box-Behnken design. The independent variables are coded variables in these designs. Comparative results show that for low-level errors in the variables the central composite face-centered design is optimal; otherwise, the Box-Behnken design performs relatively better.
    Keywords: response surface modeling; errors in variables; scaled prediction variance
    No preview · Article · Apr 2011 · Transactions of Tianjin University
  • ABSTRACT: This survey collects the developments in goodness-of-fit testing for regression models over the last 20 years, from the first proposals based on ideas from density and distribution testing to the most recent advances for complex data and models. Far from being exhaustive, the paper focuses on two main classes of test statistics: smoothing-based (kernel-based) tests and tests based on empirical regression processes, although tests based on maximum likelihood ideas are also considered. Starting from the simplest case of testing a parametric family for the regression curves, the contributions in this field also provide testing procedures for semiparametric, nonparametric, and functional models, as well as for more complex settings such as those involving dependent or incomplete data.
    No preview · Article · Sep 2013 · Test
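The scaled prediction variance compared in the response-surface study above is, in the error-free case, SPV(x) = N·f(x)ᵀ(FᵀF)⁻¹f(x), where f(x) expands a point into the second-order model terms and F is the N-run design matrix. A small sketch (an illustration under these standard definitions, not that paper's code) for the 20-run face-centered central composite design it mentions:

```python
import numpy as np
from itertools import product

def second_order_model(x):
    """Second-order model terms for 3 factors:
    intercept, 3 linear, 3 two-way interactions, 3 quadratics (10 terms)."""
    x1, x2, x3 = x
    return np.array([1.0, x1, x2, x3,
                     x1 * x2, x1 * x3, x2 * x3,
                     x1 * x1, x2 * x2, x3 * x3])

def ccd_face_centered(n_center=6):
    """Face-centered central composite design for 3 factors (alpha = 1):
    8 factorial + 6 axial + n_center center runs, in coded variables."""
    factorial = list(product([-1.0, 1.0], repeat=3))
    axial = [[a if i == j else 0.0 for j in range(3)]
             for i in range(3) for a in (-1.0, 1.0)]
    center = [[0.0, 0.0, 0.0]] * n_center
    return np.array(factorial + axial + center)

def scaled_prediction_variance(design, x):
    """SPV(x) = N * f(x)' (F'F)^{-1} f(x) for the error-free model."""
    F = np.array([second_order_model(row) for row in design])
    fx = second_order_model(x)
    return len(design) * fx @ np.linalg.solve(F.T @ F, fx)

design = ccd_face_centered()                      # 20 runs, as in the abstract
spv_center = scaled_prediction_variance(design, np.zeros(3))
```

The paper's criteria then compare the maxima of this quantity computed with and without measurement error in the design variables; a useful sanity check is that the average of SPV over the design points always equals the number of model parameters (here 10).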