Christopher M Berry

Texas A&M University, College Station, Texas, United States

Publications (25) · 70.09 Total Impact

  • ABSTRACT: Range restriction is a common problem in personnel selection and other contexts in applied psychology. For many years researchers have used corrections that assume range restriction was direct, even when it was known that range restriction was indirect. Hunter, Schmidt, and Le (2006) proposed a new correction for cases of indirect range restriction whose reduced information requirements, relative to alternatives, greatly increase its potential usefulness. The current study examines the applicability of Hunter et al.'s correction to settings where its assumed structural model is violated because the measures to be corrected were themselves included in the original selection composite. We conclude that Hunter et al.'s correction should generally be preferred over its common alternative, Thorndike's Case II correction for direct range restriction. However, this preference is due to the likely violation of one of the other assumptions of the Hunter et al. correction in most applied settings. Correction mechanisms and practical implications are discussed.
    Journal of Applied Psychology 07/2014; 99(4):587-598. · 4.31 Impact Factor
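    The Thorndike Case II formula referenced in the abstract above is simple enough to show directly. The sketch below uses hypothetical values and shows only the standard textbook formula for direct range restriction; it is not the Hunter, Schmidt, and Le procedure evaluated in the article.

import math

def thorndike_case_ii(r_restricted: float, u: float) -> float:
    """Case II correction; u = unrestricted SD / restricted SD of the predictor."""
    return (u * r_restricted) / math.sqrt(1 + r_restricted**2 * (u**2 - 1))

print(round(thorndike_case_ii(r_restricted=0.25, u=1.5), 3))  # ~0.36 (hypothetical values)
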
  • Nichelle C. Carpenter, Christopher M. Berry, Lawrence Houston
    ABSTRACT: Given the common use of self-ratings and other-ratings (e.g., supervisor or coworker) of organizational citizenship behavior (OCB), the purpose of this meta-analysis was to evaluate the extent to which these rating sources provide comparable information. The current study's results provided three important lines of evidence supporting the use and construct-related validity of self-rated OCB. The meta-analysis of mean differences demonstrated that the mean difference in OCB ratings is actually quite small between self- and other-raters. Importantly, the difference between self- and other-raters was influenced by neither the response scale (i.e., agreement vs. frequency) nor the use of antithetical/reverse-worded items on OCB scales. The meta-analysis of correlations showed that self- and other-ratings are moderately correlated but that self–other convergence is higher when antithetical items are not used and when agreement response scales are used. In addition, self-ratings and supervisor-ratings showed significantly more convergence than self-ratings and coworker-ratings. Finally, an evaluation of self-rated and other-rated OCB nomological networks showed that although self-rated and other-rated OCBs have similar patterns of relationships with common correlates, other-rated OCB generally contributed negligible incremental variance to correlates and only contributed appreciable incremental variance to other-rated behavioral variables (e.g., task performance and counterproductive work behavior). Implications and future research directions are discussed, particularly regarding the need to establish a nomological network for other-rated OCB.
    Journal of Organizational Behavior 12/2013; · 3.85 Impact Factor
  • Anita Kim, Christopher M Berry
    ABSTRACT: This research investigated the personality processes involved in the debate surrounding the use of cognitive ability tests in college admissions. In Study 1, 108 undergraduates (mean age 18.88 years; 60 women; 80 Whites) completed measures of Social Dominance Orientation (SDO), testing self-efficacy, and attitudes regarding the use of cognitive ability tests in college admissions; SAT/ACT scores were collected from the Registrar. In Study 2, 67 undergraduates (mean age 19.06 years; 39 women; 49 Whites) completed the same measures, along with measures of endorsement of commonly presented arguments about test use. In Study 3, 321 American adults (mean age 35.58 years; 180 women; 251 Whites) completed the same measures used in Study 2; half were provided with facts about race and validity issues surrounding cognitive ability tests. Individual differences in SDO significantly predicted support for the use of cognitive ability tests in all samples, after controlling for SAT/ACT scores and test self-efficacy, and also among participants who read facts about cognitive ability tests. Moreover, arguments for and against test use mediated this effect. The present study sheds new light on an old debate by demonstrating that individual differences in beliefs about hierarchy play a key role in attitudes toward cognitive ability test use.
    Journal of Personality 11/2013; · 2.44 Impact Factor
  • ABSTRACT: Differential validity and differential prediction analyses have come to conflicting conclusions regarding whether the relationship between cognitive ability tests and performance is the same across racial/ethnic subgroups. A prominent criticism of differential validity analyses is that they are confounded by subgroup differences in the ratio of criterion-to-test standard deviations (SDs). We investigated whether subgroup differences in these ratios can account for this conflicting evidence. Drawing on data from over 1 million participants, we find that subgroup differences in criterion-to-test SD ratios in general account for only a relatively small portion of subgroup differences in test-criterion correlations. Practitioner points: Cognitive ability tests exhibit differential validity for Asian, Black, Hispanic, and White subgroups. In most domains, subgroup differences in criterion-to-test SD ratios account for only a relatively small portion of these validity differences and thus do not explain why validities differ across subgroups. Still, because a portion of the already-small validity differences is accounted for by SD ratios, the magnitude of any subgroup regression slope differences is small and is unlikely to result in underprediction of criterion performance for Asian, Black, and Hispanic subgroups.
    Journal of Occupational and Organizational Psychology 09/2013; 87:208-220.
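    The SD-ratio confound discussed above comes down to the identity b = r * (SD_criterion / SD_test): the unstandardized regression slope depends on both the test-criterion correlation and the criterion-to-test SD ratio. A minimal sketch with made-up subgroup numbers (not the study's estimates):

def slope(r: float, sd_criterion: float, sd_test: float) -> float:
    """b = r * (SD_criterion / SD_test) for a simple bivariate regression."""
    return r * (sd_criterion / sd_test)

# Two hypothetical subgroups: a lower correlation paired with a larger criterion-to-test
# SD ratio yields a slope closer to (but here still below) the other group's slope.
print(slope(r=0.33, sd_criterion=1.00, sd_test=1.00))            # 0.33
print(round(slope(r=0.24, sd_criterion=1.10, sd_test=1.00), 3))  # 0.264
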
  • ABSTRACT: This study examines whether and how self-monitoring moderates the relationships between two personality traits (agreeableness and conscientiousness) and counterproductive work behavior directed toward the organization (CWB-O) and toward other employees (CWB-I). High self-monitors strive to attain personal goals related to status and prestige enhancement by adjusting their behavior to what the situation requires or allows for. We propose that the status enhancement motive can take on two different yet related forms—impression management (interpersonal potency) and opportunism (win-at-all-costs)—depending on relevant situational cues. We hypothesize that in public, interpersonal settings where their behavior is visible to others, high self-monitors' desire to enhance their status by looking good to others suppresses the natural expression of low agreeableness via increased engagement in CWB-I. Conversely, we hypothesize that in private, non-interpersonal settings where their behavior is rarely visible to others, high self-monitors' desire to enhance their status by doing whatever it takes to get what they want intensifies the natural expression of low conscientiousness via increased engagement in CWB-O. On the basis of two independent samples of participants, results of moderated multiple regression analyses provided support for the hypotheses.
    Journal of Organizational Behavior 08/2013; · 3.85 Impact Factor
  • Society for Industrial and Organizational Psychology; 04/2013
  • Yan Liu, Christopher M. Berry
    ABSTRACT: Time theft is a costly burden on organizations. However, there is limited knowledge about why time theft occurs. To advance this line of research, this conceptual paper looks at the association between organizational injustice and time theft from identity, moral, and equity perspectives. This paper proposes that organizational injustice triggers time theft through decreased organizational identification. It also proposes that moral disengagement and equity sensitivity moderate this process such that organizational identification is less likely to mediate among employees with high moral disengagement and more likely to mediate among employees who are equity sensitives and entitleds.
    Journal of Business Ethics 01/2013; · 0.96 Impact Factor
  • ABSTRACT: Despite mean differences between sexes, virtually no research has investigated sex-based differential prediction of personality tests in civilian employment samples. The present study investigated the degree to which personality test scores differentially predicted job performance ratings in two managerial samples. In both samples, participants completed a Five-Factor Model personality test, and the participants' supervisors, peers, and subordinates provided ratings of participants' task and contextual performance. The current study found sex-based differential prediction in 6.7 per cent of differential prediction analyses in Sample 1, but found no sex-based differential prediction in Sample 2. Across the two samples, sex-based differential prediction of performance occurred only 3.3 per cent of the time, which is less than would be expected by chance alone, given alpha = .05. Thus, based on the present study and the extant literature to date, sex-based differential prediction studies have identified no evidence of personality test bias.
    Applied Psychology 01/2013; 62(1):13-43. · 1.52 Impact Factor
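    Differential prediction analyses of the kind reported above are typically run as moderated multiple regressions with a group indicator and a group-by-test interaction; the group term speaks to intercept differences and the interaction term to slope differences. The sketch below uses simulated data and is only a generic illustration of that model, not the study's analysis.

import numpy as np

rng = np.random.default_rng(0)
n = 500
group = rng.integers(0, 2, n)        # hypothetical 0/1 subgroup indicator
test = rng.normal(0.0, 1.0, n)       # standardized predictor (personality) scores
# No true group or interaction effect is simulated, so those coefficients should be near zero.
perf = 0.3 * test + rng.normal(0.0, 1.0, n)

X = np.column_stack([np.ones(n), test, group, group * test])
coefs, *_ = np.linalg.lstsq(X, perf, rcond=None)
print(dict(zip(["intercept", "test", "group", "group_x_test"], coefs.round(3))))
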
  • Christopher M Berry, Nichelle C Carpenter, Clare L Barratt
    ABSTRACT: Much of the recent research on counterproductive work behaviors (CWBs) has used multi-item self-report measures of CWB. Because of concerns over self-report measurement, there have been recent calls to collect ratings of employees' CWB from their supervisors or coworkers (i.e., other-raters) as alternatives or supplements to self-ratings. However, little is still known about the degree to which other-ratings of CWB capture unique and valid incremental variance beyond self-report CWB. The present meta-analysis investigates a number of key issues regarding the incremental contribution of other-reports of CWB. First, self- and other-ratings of CWB were moderately to strongly correlated with each other. Second, with some notable exceptions, self- and other-report CWB exhibited very similar patterns and magnitudes of relationships with a set of common correlates. Third, self-raters reported engaging in more CWB than other-raters reported them engaging in, suggesting other-ratings capture a narrower subset of CWBs. Fourth, other-report CWB generally accounted for little incremental variance in the common correlates beyond self-report CWB. Although many have viewed self-reports of CWB with skepticism, the results of this meta-analysis support their use in most CWB research as a viable alternative to other-reports.
    Journal of Applied Psychology 12/2011; 97(3):613-36. · 4.31 Impact Factor
  • ABSTRACT: Using meta-analytic tests based on 87 statistically independent samples, we investigated the relationships between the five-factor model (FFM) of personality traits and organizational citizenship behaviors in both the aggregate and specific forms, including individual-directed, organization-directed, and change-oriented citizenship. We found that Emotional Stability, Extraversion, and Openness/Intellect have incremental validity for citizenship over and above Conscientiousness and Agreeableness, 2 well-established FFM predictors of citizenship. In addition, FFM personality traits predict citizenship over and above job satisfaction. Finally, we compared the effect sizes obtained in the current meta-analysis with the comparable effect sizes predicting task performance from previous meta-analyses. As a result, we found that Conscientiousness, Emotional Stability, and Extraversion have similar magnitudes of relationships with citizenship and task performance, whereas Openness and Agreeableness have stronger relationships with citizenship than with task performance. This lends some support to the idea that personality traits are (slightly) more important determinants of citizenship than of task performance. We conclude with proposed directions for future research on the relationships between FFM personality traits and specific forms of citizenship, based on the current findings.
    Journal of Applied Psychology 06/2011; 96(6):1140-66. · 4.31 Impact Factor
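    Incremental validity of the kind reported above can be computed from a meta-analytic correlation matrix as R^2 = r' R^-1 r for nested predictor sets. The sketch below uses an entirely hypothetical matrix (the article's estimates are not reproduced here) just to show the mechanics.

import numpy as np

R = np.full((5, 5), 0.20)                # hypothetical FFM trait intercorrelations (C, A, ES, E, O)
np.fill_diagonal(R, 1.0)
r_ocb = np.array([0.20, 0.15, 0.12, 0.10, 0.10])   # hypothetical trait-citizenship correlations

def r_squared(idx):
    """Squared multiple correlation from the correlation matrix for predictors in idx."""
    sub_R = R[np.ix_(idx, idx)]
    sub_r = r_ocb[idx]
    return float(sub_r @ np.linalg.inv(sub_R) @ sub_r)

r2_base = r_squared([0, 1])              # Conscientiousness and Agreeableness only
r2_full = r_squared([0, 1, 2, 3, 4])     # all five traits
print(round(r2_base, 3), round(r2_full, 3), round(r2_full - r2_base, 3))
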
  • Christopher M Berry, Malissa A Clark, Tara K McClure
    ABSTRACT: The correlation between cognitive ability test scores and performance was separately meta-analyzed for Asian, Black, Hispanic, and White racial/ethnic subgroups. Compared to the average White observed correlation (r = .33, N = 903,779), average correlations were lower for Black samples (r = .24, N = 112,194) and Hispanic samples (r = .30, N = 51,205) and approximately equal for Asian samples (r = .33, N = 80,705). Despite some moderating effects (e.g., type of performance criterion, decade of data collection, job complexity), validity favored White over Black and Hispanic test takers in almost all conditions that included a sizable number of studies. Black-White validity comparisons were possible both across and within the 3 broad domains that use cognitive ability tests for high-stakes selection and placement: civilian employment, educational admissions, and the military. The trend of lower Black validity was repeated in each domain; however, average Black-White validity differences were largest in military studies and smallest in educational and employment studies. Further investigation of the reasons for these validity differences is warranted.
    Journal of Applied Psychology 03/2011; 96(5):881-906. · 4.31 Impact Factor
  • Christopher M. Berry, Paul R. Sackett, Vanessa Tobares
    ABSTRACT: James et al. (2005) reported an estimate of criterion-related validity (corrected only for dichotomization of criteria) of r = .44 across 11 conditional reasoning test of aggression (CRT-Aggression) validity studies. This meta-analysis incorporated a total sample size more than twice that of James et al. Our comparable validity estimate for CRT-Aggression scales predicting counterproductive work behaviors was r = .16. Validity for the current, commercially marketed test version (CRT-A) was lower (r = .10). These validity estimates increased somewhat (into the .24–.26 range) if studies using dichotomous criteria with low base rates were excluded from the meta-analysis. CRT-Aggression scales were correlated r = .14 with measures of job performance. As we differed with James et al. in some of our coding decisions, we reran all analyses using James et al.'s coding decisions and arrived at extremely similar results.
    Personnel Psychology 05/2010; 63(2):361 - 384. · 2.93 Impact Factor
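    The "corrected only for dichotomization of criteria" phrase above refers to the standard point-biserial-to-biserial conversion. A hedged sketch with hypothetical numbers (requires scipy); none of the values are taken from the article.

from math import sqrt
from scipy.stats import norm

def biserial_from_point_biserial(r_pb: float, p: float) -> float:
    """r_bis = r_pb * sqrt(p * (1 - p)) / phi(z_p), where z_p cuts off proportion p."""
    z = norm.ppf(p)
    return r_pb * sqrt(p * (1 - p)) / norm.pdf(z)

# With a low base rate the correction is large; at p = .50 the factor is only ~1.25.
print(round(biserial_from_point_biserial(r_pb=0.16, p=0.10), 3))  # ~0.27
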
  • Christopher M. Berry, Paul R. Sackett
    ABSTRACT: Most faking research has examined the use of personality measures when using top-down selection. We used simulation to examine the use of personality measures in selection systems using cut scores and outlined a number of issues unique to these situations. In particular, we compared the use of 2 methods of setting cut scores on personality measures: applicant-data-derived (ADD) and nonapplicant-data-derived (NADD) cut-score strategies. We demonstrated that the ADD strategy maximized mean performance resulting from the selection system in the face of applicant faking but that this strategy also resulted in the displacement of deserving applicants by fakers (which has fairness implications). On the other hand, the NADD strategy minimized displacement of deserving applicants but at the cost of some mean performance. Therefore, the use of the ADD versus NADD strategies can be viewed as a strategic decision to be made by the organization, as there is a tradeoff between the 2 strategies in effects on performance versus fairness to applicants. We quantitatively outlined these tradeoffs at various selection ratios, levels of validity, and amounts of faking in the applicant pool.
    Personnel Psychology 11/2009; 62(4):833 - 863. · 2.93 Impact Factor
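    The ADD/NADD tradeoff described above can be illustrated with a toy Monte Carlo simulation. Everything below (distributions, faking rate and magnitude, pass rate, and the use of the same pool's honest scores as stand-in non-applicant norms) is a made-up simplification, not the article's simulation design.

import numpy as np

rng = np.random.default_rng(42)
n, faking_rate, faking_boost, pass_rate = 10_000, 0.25, 1.0, 0.50

true_trait = rng.normal(0.0, 1.0, n)                        # honest personality scores
fakers = rng.random(n) < faking_rate                        # who inflates their score
observed = true_trait + faking_boost * fakers               # applicant (possibly faked) scores
performance = 0.25 * true_trait + rng.normal(0.0, 1.0, n)   # criterion depends on the true trait

cut_add = np.quantile(observed, 1 - pass_rate)       # ADD: cut set from (faked) applicant data
cut_nadd = np.quantile(true_trait, 1 - pass_rate)    # NADD: cut set from honest, non-applicant-like norms

for label, cut in [("ADD", cut_add), ("NADD", cut_nadd)]:
    selected = observed >= cut
    # "Displaced": honest applicants who meet the honest cut but are not selected.
    displaced = np.mean(~selected & ~fakers & (true_trait >= cut_nadd))
    print(label, "mean performance of selectees:", round(float(performance[selected].mean()), 3),
          "proportion displaced:", round(float(displaced), 3))
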
  • In-Sue Oh, Christopher M Berry
    ABSTRACT: This study investigated the usefulness of the five-factor model (FFM) of personality in predicting two aspects of managerial performance (task vs. contextual) assessed by utilizing the 360 degree performance rating system. The authors speculated that one reason for the low validity of the FFM might be the failure of single-source (e.g., supervisor) ratings to comprehensively capture the construct of managerial performance. The operational validity of personality was found to increase substantially (50%-74%) across all of the FFM personality traits when both peer and subordinate ratings were added to supervisor ratings according to the multitrait-multimethod approach. Furthermore, the authors responded to the recent calls to validate tests via a multivariate (e.g., multitrait-multimethod) approach by decomposing overall managerial performance into task and contextual performance criteria and by using multiple rating perspectives (sources). Overall, this study contributes to the evidence that personality may be even more useful in predicting managerial performance if the performance criteria are less deficient.
    Journal of Applied Psychology 11/2009; 94(6):1498-513. · 4.31 Impact Factor
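    One mechanism behind the validity gains reported above is the composite formula for correlating a predictor with a unit-weighted sum of k rating sources. The values below are hypothetical, chosen only to show how modest inter-source correlations let a multi-source criterion raise operational validity.

from math import sqrt

def composite_validity(r_xy_bar: float, r_yy_bar: float, k: int) -> float:
    """Correlation of a predictor with the unit-weighted sum of k rating sources."""
    return (k * r_xy_bar) / sqrt(k + k * (k - 1) * r_yy_bar)

single = composite_validity(0.15, 0.30, k=1)    # one rating source: 0.15
combined = composite_validity(0.15, 0.30, k=3)  # supervisor + peer + subordinate: ~0.21
print(round(single, 3), round(combined, 3))
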
  • Christopher M Berry, Paul R Sackett
    ABSTRACT: We demonstrate that the validity of SAT scores and high school grade point averages (GPAs) as predictors of academic performance has been underestimated because of previous studies' reliance on flawed performance indicators (i.e., college GPA) that are contaminated by the effects of individual differences in course choice. We controlled for this contamination by predicting individual course grades, instead of GPAs, in a data set containing more than 5 million college grades for 167,816 students. Percentage of variance accounted for by SAT scores and high school GPAs was 30 to 40% lower when the criteria were freshman and cumulative GPAs than when the criteria were individual course grades. SAT scores and high school GPAs together accounted for between 44 and 62% of the variance in college grades. This study provides new estimates of the criterion-related validity of SAT scores and high school GPAs, and highlights the care that must be taken in choosing appropriate criteria in validity studies.
    Psychological Science 06/2009; 20(7):822-30. · 4.43 Impact Factor
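    The course-choice contamination argument above can be made concrete with a toy simulation: if abler students choose harder-graded courses, regressing GPA on test scores understates validity relative to predicting grades with grading difficulty held constant. The single "difficulty" covariate below is a simplified stand-in for the article's use of individual course grades; all numbers are simulated, not the study's.

import numpy as np

rng = np.random.default_rng(7)
n = 20_000
ability = rng.normal(0.0, 1.0, n)
sat = ability + rng.normal(0.0, 1.0, n)                      # noisy test score
difficulty = 0.5 * ability + rng.normal(0.0, 1.0, n)         # abler students choose harder grading
gpa = ability - 0.5 * difficulty + rng.normal(0.0, 1.0, n)   # grades are penalized by difficulty

def r_squared(predictors, y):
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    y_hat = X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return 1 - np.var(y - y_hat) / np.var(y)

print(round(r_squared([sat], gpa), 3))              # contaminated criterion: lower R^2
print(round(r_squared([sat, difficulty], gpa), 3))  # grading difficulty controlled: higher R^2
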
  • Christopher M. Berry, Paul R. Sackett, Richard N. Landers
    ABSTRACT: This study revisits the relationship between interviews and cognitive ability tests, finding lower magnitudes of correlation than previous meta-analyses have, a finding that has implications for both the construct and incremental validity of the interview. Our estimates of this relationship were lower than those of previous meta-analyses mainly due to (a) an updated set of studies, (b) exclusion of samples in which interviewers potentially had access to applicants' cognitive test scores, and (c) attention to specific range restriction mechanisms that allowed us to identify a sizable subset of studies for which range restriction could be accurately accounted for. Moderator analysis results were similar to those of previous meta-analyses, but magnitudes of correlation were generally lower. Findings have implications for the construct and incremental validity of interviews, and for meta-analytic methodology in general.
    Personnel Psychology 11/2007; 60(4):837 - 874. · 2.93 Impact Factor
  • Christopher M. Berry, Paul R. Sackett, Shelly Wiemann
    ABSTRACT: A sizable body of new literature on integrity tests has appeared since the last review of this literature by Sackett and Wanek (1996). Understanding of the constructs underlying integrity tests continues to grow, aided by new work at the item level. Validation work against a growing variety of criteria continues to be carried out. Work on documenting fakability and coachability continues, as do efforts to increase resistance to faking. New test types continue to be developed. Examination of subgroup differences continues, both at the test and facet level. Research addressing applicant reactions and cross-cultural issues is also reviewed.
    Personnel Psychology 05/2007; 60(2):271 - 301. · 2.93 Impact Factor
  • Christopher M Berry, Deniz S Ones, Paul R Sackett
    ABSTRACT: Interpersonal deviance (ID) and organizational deviance (OD) are highly correlated (R. S. Dalal, 2005). This, together with other empirical and theoretical evidence, calls into question the separability of ID and OD. As a further investigation into their separability, relationships among ID, OD, and their common correlates were meta-analyzed. ID and OD were highly correlated (rho = .62) but had differential relationships with key Big Five variables and organizational citizenship behaviors, which lends support to the separability of ID and OD. Whether the R. J. Bennett and S. L. Robinson (2000) instrument was used moderated some relationships. ID and OD exhibited their strongest (negative) relationships with organizational citizenship, Agreeableness, Conscientiousness, and Emotional Stability. Correlations with organizational justice were small to moderate, and correlations with demographic variables were generally negligible.
    Journal of Applied Psychology 04/2007; 92(2):410-24. · 4.31 Impact Factor
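    The corrected rho = .62 above reflects the standard correction for attenuation due to measurement error, rho = r_xy / sqrt(r_xx * r_yy). The reliability values below are hypothetical and only illustrate the arithmetic.

from math import sqrt

def correct_for_attenuation(r_xy: float, rel_x: float, rel_y: float) -> float:
    """rho = r_xy / sqrt(rel_x * rel_y)."""
    return r_xy / sqrt(rel_x * rel_y)

print(round(correct_for_attenuation(r_xy=0.48, rel_x=0.78, rel_y=0.78), 3))  # ~0.62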

Publication Stats

277 Citations
70.09 Total Impact Points

Institutions

  • 2009–2013
    • Texas A&M University
      • Department of Psychology
      • Department of Management
      College Station, Texas, United States
    • Wayne State University
      • Department of Psychology
      Detroit, MI, United States
    • University of Alberta
      • Department of Strategic Management and Organization
      Edmonton, Alberta, Canada
  • 2006–2007
    • University of Minnesota Twin Cities
      • Department of Psychology
      Minneapolis, MN, United States