The correction for attenuation due to measurement error: clarifying concepts and creating confidence sets.

Department of Psychology, University of California, Davis, CA 95616, USA.
Psychological Methods (Impact Factor: 4.45). 07/2005; 10(2):206-26. DOI: 10.1037/1082-989X.10.2.206
Source: PubMed


The correction for attenuation due to measurement error (CAME) has received many historical criticisms, most of which can be traced to the limited ability to use CAME inferentially. Past attempts to determine confidence intervals for CAME are summarized and their limitations discussed. The author suggests that inference requires confidence sets that demarcate the population parameters likely to have produced an obtained value, rather than sets indicating the samples likely to be produced by a given population, and that most researchers tend to confuse these two types of confidence sets. Three different Monte Carlo methods are presented, each offering a different way of examining confidence sets under the new conceptualization. Exploring the implications of these approaches for CAME suggests potential consequences for other statistics.
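As a concrete point of reference, the classic (Spearman) correction divides the observed correlation by the square root of the product of the two measures' reliabilities. The sketch below illustrates that formula and, with purely hypothetical values, the forward-looking question of which samples a given population tends to produce; the paper's three Monte Carlo confidence-set procedures address the reverse, inferential question and are not reproduced here.

    # A minimal sketch, not the paper's procedure: the classic correction for
    # attenuation plus a small forward simulation. All numbers are hypothetical,
    # and the reliabilities are treated as known rather than estimated.
    import numpy as np

    def disattenuate(r_xy, r_xx, r_yy):
        """Correct an observed correlation r_xy for unreliability, given the
        reliabilities r_xx and r_yy of the two measures."""
        return r_xy / np.sqrt(r_xx * r_yy)

    # Point estimate: observed r = .40 with reliabilities .70 and .80.
    print(round(disattenuate(0.40, 0.70, 0.80), 3))  # about 0.535

    # Forward simulation: the corrected estimates one assumed population produces.
    # This answers "what samples does this population yield?", which the abstract
    # distinguishes from the confidence sets needed for inference.
    rng = np.random.default_rng(1)
    rho_true, r_xx, r_yy, n, reps = 0.50, 0.70, 0.80, 100, 5000
    rho_obs = rho_true * np.sqrt(r_xx * r_yy)        # attenuated population correlation
    cov = [[1.0, rho_obs], [rho_obs, 1.0]]
    corrected = []
    for _ in range(reps):
        x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
        corrected.append(disattenuate(np.corrcoef(x, y)[0, 1], r_xx, r_yy))
    print(np.percentile(corrected, [2.5, 97.5]))     # spread due to sampling error alone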

  • Source
    • "The problem is that the two sources of bias are not guaranteed to exactly cancel out the impact of attenuation (except by chance) and, as shown by Rönkkö (2014), will often lead to positively biased and inefficient estimates. Considering that we have more than a hundred years of research showing how the effects of measurement error can be adjusted in regression analysis with composites through the well-known correction for attenuation (cf., Charles, 2005; Muchinsky, 1996), or using errors-in-variables regression (Fuller, 1987), relying on a capitalization on chance in small samples is hardly the optimal approach for dealing with measurement error attenuation (Rönkkö, 2014, pp. 176–177). "
    ABSTRACT: The partial least squares technique (PLS) has been touted as a viable alternative to latent variable structural equation modeling (SEM) for evaluating theoretical models in the differential psychology domain. We bring some balance to the discussion by reviewing the broader methodological literature to highlight: (1) the misleading characterization of PLS as an SEM method; (2) limitations of PLS for global model testing; (3) problems in testing the significance of path coefficients; (4) extremely high false positive rates when using empirical confidence intervals in conjunction with a new “sign change correction” for path coefficients; (5) misconceptions surrounding the supposedly superior ability of PLS to handle small sample sizes and non-normality; and (6) conceptual and statistical problems with formative measurement and the application of PLS to such models. Additionally, we reanalyze the dataset provided by Willaby et al. (2015; doi:10.1016/j.paid.2014.09.008) to highlight the limitations of PLS. Our broader review and analysis of the available evidence makes it clear that PLS is not useful for statistical estimation and testing.
    Personality and Individual Differences 07/2015; DOI:10.1016/j.paid.2015.07.019 · 1.95 Impact Factor
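    The excerpt above appeals to attenuation corrections in regression with composites. The single-predictor sketch below (hypothetical values, reliability treated as known) shows the basic mechanics: error in the predictor shrinks the ordinary least squares slope by roughly the predictor's reliability, and dividing the naive slope by that reliability is the simplest errors-in-variables style adjustment; the general multi-predictor case requires the full machinery in Fuller (1987).

        # A minimal sketch, assuming one predictor and a known reliability r_xx.
        # All numeric values are hypothetical.
        import numpy as np

        rng = np.random.default_rng(0)
        n, beta_true, r_xx = 10_000, 0.50, 0.70

        t = rng.normal(size=n)                                        # true scores, variance 1
        x = t + rng.normal(scale=np.sqrt((1 - r_xx) / r_xx), size=n)  # observed predictor, reliability ~ r_xx
        y = beta_true * t + rng.normal(size=n)                        # outcome driven by true scores

        beta_naive = np.cov(x, y)[0, 1] / np.var(x, ddof=1)           # attenuated OLS slope, ~ beta_true * r_xx
        beta_adjusted = beta_naive / r_xx                             # simplest correction for error in x
        print(round(beta_naive, 3), round(beta_adjusted, 3))          # roughly 0.35 vs 0.50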
  • Source
    • "Allowing unreliability in ratings of job performance to affect the conclusions that can be drawn about direct and indirect determinants of job performance is irresponsible. " This would amount to preferring a systematically biased measure over a more variable unbiased measure, encouraging a misrepresentation of data " (Charles, 2005, p. 222). If LeBreton et al.'s emotional preferences were to be embraced, they would lead to disastrous scientific consequences for our field. "

    Industrial and Organizational Psychology 12/2014; 7(4). DOI:10.1111/iops.12186 · 0.65 Impact Factor
  • Source
    • "In real-world scenarios, the correlation between random errors and true scores is not zero. These nonzero correlations between true scores and random errors are referred to as nuisance correlations (Charles, 2005; Zimmerman, 2007). Our Supplemental Material provides a more in-depth description of nuisance correlations. "
    ABSTRACT: Failures to replicate published psychological research findings have contributed to a "crisis of confidence." Several reasons for these failures have been proposed, the most notable being questionable research practices and data fraud. We examine replication from a different perspective and illustrate that current intuitive expectations for replication are unreasonable. We used computer simulations to create thousands of ideal replications, with the same participants, wherein the only difference across replications was random measurement error. In the first set of simulations, study results differed substantially across replications as a result of measurement error alone. This raises questions about how researchers should interpret failed replication attempts, given the large impact that even modest amounts of measurement error can have on observed associations. In the second set of simulations, we illustrated the difficulties that researchers face when trying to interpret and replicate a published finding. We also assessed the relative importance of both sampling error and measurement error in producing variability in replications. Conventionally, replication attempts are viewed through the lens of verifying or falsifying published findings. We suggest that this is a flawed perspective and that researchers should adjust their expectations concerning replications and shift to a meta-analytic mind-set. © The Author(s) 2014.
    Perspectives on Psychological Science 05/2014; 9(3):305-318. DOI:10.1177/1745691614528518 · 4.89 Impact Factor
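    The excerpt and abstract above describe simulations in which the same participants are "retested" and only the random measurement error changes. The exact design is not recoverable from this page, so the sketch below is only a minimal illustration of that idea under assumed values (n = 100, true correlation .50, reliability .80): true scores are generated once, error is redrawn on each replication, and the observed correlation is recomputed.

        # A minimal sketch of a measurement-error-only replication, not the
        # published simulation; sample size, reliability, and true correlation
        # are assumed for illustration.
        import numpy as np

        rng = np.random.default_rng(42)
        n, rho_true, reliability, reps = 100, 0.50, 0.80, 1000

        # One fixed set of participants (true scores) with correlation rho_true.
        cov = [[1.0, rho_true], [rho_true, 1.0]]
        t_x, t_y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

        err_sd = np.sqrt((1 - reliability) / reliability)   # error SD implied by the reliability
        observed = []
        for _ in range(reps):
            x = t_x + rng.normal(scale=err_sd, size=n)      # same true scores, fresh error draw
            y = t_y + rng.normal(scale=err_sd, size=n)
            observed.append(np.corrcoef(x, y)[0, 1])

        print(round(min(observed), 3), round(max(observed), 3))  # spread from measurement error alone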