Article

The impact of job complexity and performance measurement on the temporal consistency, stability, and test-retest reliability of employee job performance ratings.

School of Hotel Administration, Cornell University, Ithaca, NY 14853, USA.
Journal of Applied Psychology (Impact Factor: 4.31). 04/2005; 90(2):269-83. DOI: 10.1037/0021-9010.90.2.269
Source: PubMed

ABSTRACT Although research has shown that individual job performance changes over time, the extent of such changes is unknown. In this article, the authors define and distinguish between the concepts of temporal consistency, stability, and test-retest reliability when considering individual job performance ratings over time. Furthermore, the authors examine measurement type (i.e., subjective and objective measures) and job complexity in relation to temporal consistency, stability, and test-retest reliability. On the basis of meta-analytic results, the authors found that the test-retest reliability of these ratings ranged from .83 for subjective measures in low-complexity jobs to .50 for objective measures in high-complexity jobs. The stability of these ratings over a 1-year time lag ranged from .85 to .67. The analyses also revealed that correlations between performance measures decreased as the time interval between performance measurements increased, but the correlations appeared to level off at values greater than zero rather than declining to zero.
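The abstract's distinction between test-retest reliability (consistency of the observed ratings) and stability (consistency of underlying true performance) can be made concrete with a short simulation. The sketch below is illustrative only; the data are simulated and the per-occasion reliabilities are assumed values, not figures from the paper. It estimates stability by disattenuating the observed test-retest correlation for measurement error.

    # Illustrative sketch (simulated data, assumed reliabilities; not the paper's data).
    # Test-retest reliability: correlation between observed ratings at two occasions.
    # Stability: that correlation disattenuated for measurement error at each occasion.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    true_t1 = rng.normal(size=n)                              # latent performance at time 1
    true_t2 = 0.8 * true_t1 + rng.normal(scale=0.6, size=n)   # true performance drifts over the lag
    obs_t1 = true_t1 + rng.normal(scale=0.5, size=n)          # ratings add measurement error
    obs_t2 = true_t2 + rng.normal(scale=0.5, size=n)

    r_test_retest = np.corrcoef(obs_t1, obs_t2)[0, 1]         # observed test-retest correlation

    rel_t1, rel_t2 = 0.80, 0.80                               # assumed reliability at each occasion
    stability = r_test_retest / np.sqrt(rel_t1 * rel_t2)      # true-score (stability) estimate

    print(f"observed test-retest r = {r_test_retest:.2f}")
    print(f"estimated stability    = {stability:.2f}")

With these simulation parameters the true-score correlation is .80 and each occasion's reliability is .80, so the observed test-retest correlation comes out near .64 while the stability estimate recovers roughly .80, mirroring the paper's pattern of stability estimates exceeding test-retest estimates.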

Related publications:

  • ABSTRACT: This paper develops synthetic validity estimates based on a meta-analytic weighted least squares (WLS) approach to job component validity (JCV), using Position Analysis Questionnaire (PAQ) estimates of job characteristics and the Data, People, & Things ratings from the Dictionary of Occupational Titles as indices of job complexity. For the General Aptitude Test Battery database of 40,487 employees, nine validity coefficients were estimated for 192 positions. The predicted validities from the WLS approach had lower estimated variability than would be obtained from either the classic JCV approach or local criterion-related validity studies. The Data, People, & Things summary ratings did not consistently moderate validity coefficients, whereas the PAQ data did. In sum, these results suggest that synthetic validity procedures should incorporate a WLS regression approach. Moreover, researchers should consider a comprehensive set of job characteristics, rather than a single aggregated index, when assessing job complexity.
    Personnel Psychology 08/2009; 62(3):533-552. DOI: 10.1111/j.1744-6570.2009.01147.x (Impact Factor: 2.93). (A toy WLS sketch of this approach appears after the list below.)
  • ABSTRACT: We review developments in personnel selection since the previous review by Hough & Oswald (2000) in the Annual Review of Psychology. We organize the review around a taxonomic structure of possible bases for improved selection: (a) better understanding of the criterion domain and criterion measurement; (b) improved measurement of existing predictor methods or constructs; (c) identification and measurement of new predictor methods or constructs; (d) improved identification of features that moderate or mediate predictor-criterion relationships; (e) clearer understanding of the relationships between predictors, and between predictors and criteria (e.g., via meta-analytic synthesis); (f) identification and prediction of new outcome variables; (g) improved ability to determine how well we predict the outcomes of interest; (h) improved understanding of subgroup differences, fairness, bias, and legal defensibility; (i) improved administrative ease with which selection systems can be used; (j) improved insight into applicant reactions; and (k) improved decision-maker acceptance of selection systems.
    Annual Review of Psychology 02/2008; 59:419-50. DOI: 10.1146/annurev.psych.59.103006.093716 (Impact Factor: 20.53).
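The synthetic-validity item above turns on a single computational idea: regress observed validity coefficients on job characteristics, weighting each position by how precisely its validity is estimated. The sketch below is a minimal illustration of that WLS step; the variable names, job dimensions, and sample sizes are all assumptions made for the example, not the paper's GATB or PAQ data.

    # Toy WLS sketch of the synthetic-validity idea (simulated data, not the
    # paper's GATB/PAQ dataset): regress observed validity coefficients on job
    # characteristics, weighting each position by its sample size so that
    # precisely estimated validities count more.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    k = 192                                   # number of positions, as in the abstract
    job_chars = rng.normal(size=(k, 3))       # stand-ins for PAQ-style job dimensions
    n_i = rng.integers(50, 500, size=k)       # assumed per-position sample sizes
    validity = (0.30 + job_chars @ np.array([0.05, -0.03, 0.02])
                + rng.normal(scale=1 / np.sqrt(n_i)))  # noisier when n_i is small

    X = sm.add_constant(job_chars)
    fit = sm.WLS(validity, X, weights=n_i).fit()       # weight = sample size
    print(fit.params)                         # fitted job-component regression weights

    # Predicted (synthetic) validity for a new position's job-characteristic profile
    new_job = sm.add_constant(np.array([[0.2, -0.1, 0.4]]), has_constant="add")
    print(fit.predict(new_job))

Weighting by sample size is the standard meta-analytic choice; the paper's actual estimator may weight differently, so treat this only as the general shape of the computation.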