The correction for attenuation due to measurement error: Clarifying concepts and creating confidence sets

Department of Psychology, University of California, Davis, CA 95616, USA.
Psychological Methods (Impact Factor: 4.45). 07/2005; 10(2):206-26. DOI: 10.1037/1082-989X.10.2.206
Source: PubMed

ABSTRACT The correction for attenuation due to measurement error (CAME) has received many historical criticisms, most of which can be traced to the limited ability to use CAME inferentially. Past attempts to determine confidence intervals for CAME are summarized and their limitations discussed. The author suggests that inference requires confidence sets that demarcate those population parameters likely to have produced an obtained value--rather than indicating the samples likely to be produced by a given population--and that most researchers tend to confuse these 2 types of confidence sets. Three different Monte Carlo methods are presented, each offering a different way of examining confidence sets under the new conceptualization. Exploring the implications of these approaches for CAME suggests potential consequences for other statistics.
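For context, the classical correction discussed here (Spearman's formula) disattenuates an observed correlation by the reliabilities of the two measures. A minimal sketch in Python (the function name and example values are illustrative, not taken from the article):

```python
import math

def correct_for_attenuation(r_xy, r_xx, r_yy):
    """Classical correction for attenuation (Spearman, 1904).

    r_xy       : observed correlation between measures x and y
    r_xx, r_yy : reliability coefficients of x and y
    Returns the estimated correlation between the underlying true scores.
    """
    return r_xy / math.sqrt(r_xx * r_yy)

# Example: an observed r of .30 with reliabilities .80 and .90
r_true = correct_for_attenuation(0.30, 0.80, 0.90)  # ~0.354
```

Nothing in this point estimate conveys sampling uncertainty, which is the gap the article's confidence-set methods are meant to address.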

Available from: Eric Phillip Charles, Feb 19, 2014
    • "The problem is that the two sources of bias are not guaranteed to exactly cancel out the impact of attenuation (except by chance) and, as shown by Rönkkö (2014), will often lead to positively biased and inefficient estimates. Considering that we have more than a hundred years of research showing how the effects of measurement error can be adjusted in regression analysis with composites through the well-known correction for attenuation (cf. Charles, 2005; Muchinsky, 1996), or using errors-in-variables regression (Fuller, 1987), relying on a capitalization on chance in small samples is hardly the optimal approach for dealing with measurement error attenuation (Rönkkö, 2014, pp. 176–177)."
    ABSTRACT: The partial least squares technique (PLS) has been touted as a viable alternative to latent variable structural equation modeling (SEM) for evaluating theoretical models in the differential psychology domain. We bring some balance to the discussion by reviewing the broader methodological literature to highlight: (1) the misleading characterization of PLS as an SEM method; (2) limitations of PLS for global model testing; (3) problems in testing the significance of path coefficients; (4) extremely high false positive rates when using empirical confidence intervals in conjunction with a new “sign change correction” for path coefficients; (5) misconceptions surrounding the supposedly superior ability of PLS to handle small sample sizes and non-normality; and (6) conceptual and statistical problems with formative measurement and the application of PLS to such models. Additionally, we also reanalyze the dataset provided by Willaby et al. (2015; doi:10.1016/j.paid.2014.09.008) to highlight the limitations of PLS. Our broader review and analysis of the available evidence makes it clear that PLS is not useful for statistical estimation and testing.
    Personality and Individual Differences 07/2015; DOI:10.1016/j.paid.2015.07.019 · 1.86 Impact Factor
    • "Allowing unreliability in ratings of job performance to affect the conclusions that can be drawn about direct and indirect determinants of job performance is irresponsible. 'This would amount to preferring a systematically biased measure over a more variable unbiased measure, encouraging a misrepresentation of data' (Charles, 2005, p. 222). If LeBreton et al.'s emotional preferences were to be embraced, they would lead to disastrous scientific consequences for our field."
    Industrial and Organizational Psychology 12/2014; 7(4). DOI:10.1111/iops.12186 · 0.65 Impact Factor
    • "If the correlation between true scores is zero, however, there is no room for attenuation and estimates will be unbiased and Type I error rates will be correct. If the true score correlation is not zero, significance testing becomes more complicated and measurement error can lead to biased estimates and incorrect inferences (Charles, 2005; Schmidt & Hunter, 1999; Zimmerman, Zumbo, & Williams, 2003). The impact is similar for simple regression (i.e., with a single predictor)."
    ABSTRACT: Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new insights into the causes of this problem. Computer simulations and an illustrative example are used to demonstrate that when the predictor variables in a multiple regression model are correlated and one or more of them contains random measurement error, Type I error rates can approach 1.00, even for a nominal level of 0.05. The most important factors causing the problem are summarized and the implications are discussed. The authors use Zumbo’s Draper–Lindley–de Finetti framework to show that the inflation in Type I error rates results from a mismatch between the data researchers have, the assumptions of the statistical model, and the inferences they hope to make.
    Educational and Psychological Measurement 12/2013; 73:733-756. DOI:10.1177/0013164413487738 · 1.17 Impact Factor
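The inflation mechanism that abstract describes is easy to reproduce. In the hedged sketch below (not the authors' own simulation; all parameter values are illustrative), y depends only on true score t1; predictor x1 is t1 observed with error, while x2 is an error-free predictor correlated with t1 but unrelated to y. Because measurement error attenuates x1's coefficient, x2 absorbs part of t1's effect and is flagged significant far more often than the nominal 5%:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 100, 500
rho = 0.7        # correlation between the true scores t1 and t2

false_positives = 0
for _ in range(reps):
    t1 = rng.standard_normal(n)
    t2 = rho * t1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    y = 0.5 * t1 + rng.standard_normal(n)   # y truly unrelated to t2
    x1 = t1 + rng.standard_normal(n)        # t1 measured with error (reliability .5)
    X = np.column_stack([np.ones(n), x1, t2])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    s2 = resid @ resid / (n - X.shape[1])   # residual variance estimate
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X).diagonal())
    t_stat = b[2] / se[2]                   # test H0: coefficient of t2 is zero
    if abs(t_stat) > 1.984:                 # two-sided t critical value, df = 97
        false_positives += 1

print(false_positives / reps)               # well above the nominal 0.05
```

Setting rho to 0 (or removing the measurement error on x1) brings the rejection rate back near the nominal level, matching the condition described in the excerpt above.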