Christopher M Berry

Texas A&M University, College Station, Texas, United States

Publications (28) · 84.19 Total Impact

  • Christopher M Berry · Peng Zhao
    ABSTRACT: Predictive bias studies have generally suggested that cognitive ability test scores overpredict job performance of African Americans, meaning these tests are not predictively biased against African Americans. However, at least 2 issues call into question existing over-/underprediction evidence: (a) a bias identified by Aguinis, Culpepper, and Pierce (2010) in the intercept test typically used to assess over-/underprediction and (b) a focus on the level of observed validity instead of operational validity. The present study developed and utilized a method of assessing over-/underprediction that draws on the math of subgroup regression intercept differences, does not rely on the biased intercept test, allows for analysis at the level of operational validity, and can use meta-analytic estimates as input values. Therefore, existing meta-analytic estimates of key parameters, corrected for relevant statistical artifacts, were used to determine whether African American job performance remains overpredicted at the level of operational validity. African American job performance was typically overpredicted by cognitive ability tests across levels of job complexity and across conditions wherein African American and White regression slopes did and did not differ. Because the present study does not rely on the biased intercept test and because appropriate statistical artifact corrections were carried out, the present study's results are not affected by the 2 issues mentioned above. The present study represents strong evidence that cognitive ability tests generally overpredict job performance of African Americans. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
    Journal of Applied Psychology 08/2014; 100(1). DOI:10.1037/a0037615 · 4.31 Impact Factor
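The abstract above turns on the distinction between observed and operational validity. As a generic psychometric illustration (not code from the paper, and with purely illustrative input values), operational validity is conventionally obtained by correcting the observed test-criterion correlation for unreliability in the criterion only:

```python
import math

def operational_validity(r_observed: float, criterion_reliability: float) -> float:
    """Correct an observed validity coefficient for criterion unreliability.
    Predictor unreliability is deliberately NOT corrected, because selection
    decisions are made with observed (fallible) test scores."""
    return r_observed / math.sqrt(criterion_reliability)

# Illustrative values only: observed validity .30, criterion reliability .52
print(round(operational_validity(0.30, 0.52), 3))  # 0.416
```

With a perfectly reliable criterion the correction leaves the observed correlation unchanged, which is why analyses at the observed and operational levels can diverge only to the extent that criteria are measured with error.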
  • Adam S Beatty · Clare L Barratt · Christopher M Berry · Paul R Sackett
    ABSTRACT: Range restriction is a common problem in personnel selection and other contexts in applied psychology. For many years researchers have used corrections that assume range restriction was direct, even when it was known that range restriction was indirect. Hunter, Schmidt, and Le (2006) proposed a new correction for cases of indirect range restriction that greatly increases its potential usefulness due to its reduced information requirements compared to alternatives. The current study examines the applicability of Hunter et al.'s correction to settings where its assumed structural model is violated by including the measures that are to be involved in corrections in the original selection composite. We conclude that Hunter et al.'s correction should generally be preferred when compared to its common alternative, Thorndike's Case II correction for direct range restriction. However, this is due to the likely violation of one of the other assumptions of the Hunter et al. correction in most applied settings. Correction mechanisms and practical implications are discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
    Journal of Applied Psychology 07/2014; 99(4):587-598. DOI:10.1037/a0036361 · 4.31 Impact Factor
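Thorndike's Case II correction, the common alternative the abstract compares against, is a standard textbook formula; a minimal sketch with illustrative inputs (in practice the SD ratio u would be estimated from applicant- versus incumbent-pool data):

```python
import math

def case_ii_correction(r_restricted: float, u: float) -> float:
    """Thorndike's Case II correction for direct range restriction.

    r_restricted: observed correlation in the restricted (selected) sample.
    u: ratio of unrestricted to restricted predictor SDs (u = SD_pop / SD_sel, u >= 1).
    """
    return (u * r_restricted) / math.sqrt((u**2 - 1) * r_restricted**2 + 1)

# Illustrative: r = .30 in the selected sample, predictor SD ratio u = 1.5
print(round(case_ii_correction(0.30, 1.5), 3))  # 0.427
```

When u = 1 (no restriction) the formula returns the observed correlation unchanged. Applying this direct-restriction correction when restriction was actually indirect is precisely the practice the study above evaluates against the Hunter, Schmidt, and Le (2006) procedure.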
  • Nichelle C. Carpenter · Christopher M. Berry · Lawrence Houston
    ABSTRACT: Given the common use of self-ratings and other-ratings (e.g., supervisor or coworker) of organizational citizenship behavior (OCB), the purpose of this meta-analysis was to evaluate the extent to which these rating sources provide comparable information. The current study's results provided three important lines of evidence supporting the use and construct-related validity of self-rated OCB. The meta-analysis of mean differences demonstrated that the mean difference in OCB ratings is actually quite small between self- and other-raters. Importantly, the difference between self- and other-raters was influenced by neither the response scale (i.e., agreement vs. frequency) nor the use of antithetical/reverse-worded items on OCB scales. The meta-analysis of correlations showed that self- and other-ratings are moderately correlated but that self–other convergence is higher when antithetical items are not used and when agreement response scales are used. In addition, self-ratings and supervisor-ratings showed significantly more convergence than self-ratings and coworker-ratings. Finally, an evaluation of self-rated and other-rated OCB nomological networks showed that although self-rated and other-rated OCBs have similar patterns of relationships with common correlates, other-rated OCB generally contributed negligible incremental variance to correlates and only contributed appreciable incremental variance to other-rated behavioral variables (e.g., task performance and counterproductive work behavior). Implications and future research directions are discussed, particularly regarding the need to establish a nomological network for other-rated OCB. Copyright © 2013 John Wiley & Sons, Ltd.
    Journal of Organizational Behavior 05/2014; 35(4). DOI:10.1002/job.1909 · 3.85 Impact Factor
  • Source
    In-Sue Oh · Steven D Charlier · Michael K Mount · Christopher M Berry
    ABSTRACT: This study examines whether and how self-monitoring moderates the relationships between two personality traits (agreeableness and conscientiousness) and counterproductive work behavior directed toward the organization (CWB-O) and toward other employees (CWB-I). High self-monitors strive to attain personal goals related to status and prestige enhancement by adjusting their behavior to what the situation requires or allows for. We propose that the status enhancement motive can take on two different yet related forms—impression management (interpersonal potency) and opportunism (win-at-all-costs)—depending on relevant situational cues. We hypothesize that in public, interpersonal settings where their behavior is visible to others, high self-monitors' desire to enhance their status by looking good to others suppresses the natural expression of low agreeableness via increased engagement in CWB-I. Conversely, we hypothesize that in private, non-interpersonal settings where their behavior is rarely visible to others, high self-monitors' desire to enhance their status by doing whatever it takes to get what they want intensifies the natural expression of low conscientiousness via increased engagement in CWB-O. On the basis of two independent samples of participants, results of moderated multiple regression analyses provided support for the hypotheses.
    Journal of Organizational Behavior 01/2014; 35(1). DOI:10.1002/job.1856 · 3.85 Impact Factor
  • Anita Kim · Christopher M Berry
    ABSTRACT: This research investigated the personality processes involved in the debate surrounding the use of cognitive ability tests in college admissions. In Study 1, 108 undergraduates (mean age 18.88 years; 60 women; 80 Whites) completed measures of Social Dominance Orientation (SDO), testing self-efficacy, and attitudes regarding the use of cognitive ability tests in college admissions; SAT/ACT scores were collected from the Registrar. In Study 2, 67 undergraduates (mean age 19.06 years; 39 women; 49 Whites) completed the same measures along with measures of endorsement of commonly presented arguments about test use. In Study 3, 321 American adults (mean age 35.58 years; 180 women; 251 Whites) completed the same measures used in Study 2; half were provided with facts about race and validity issues surrounding cognitive ability tests. Individual differences in SDO significantly predicted support for the use of cognitive ability tests in all samples, after controlling for SAT/ACT scores and test self-efficacy, and also among participants who read facts about cognitive ability tests. Moreover, arguments for and against test use mediated this effect. The present study sheds new light on an old debate by demonstrating that individual differences in beliefs about hierarchy play a key role in attitudes toward cognitive ability test use.
    Journal of Personality 11/2013; 83(1). DOI:10.1111/jopy.12078 · 2.44 Impact Factor
  • Christopher M Berry · Michael J Cullen · Jolene M Meyer
    ABSTRACT: Recent meta-analyses demonstrated that the observed correlation between cognitive ability test scores and performance criteria was lower for Black and Hispanic subgroups than for Asian and White subgroups in college admissions, civilian employment, and military domains (i.e., differential validity). Given mean score differences between racial/ethnic subgroups, these observed validities may have been confounded by subgroup differences in range restriction. The present study draws on data from hundreds of cognitive ability test validity studies including more than 1 million persons to investigate whether Asian, Black, Hispanic, and White subgroups have differed in amounts of range restriction. We first replicated observed differential validity results and also extended them by presenting the first meta-analytic evidence that observed cognitive ability test validity is lower for the Hispanic subgroup in civilian employment settings. All subgroups were approximately equivalently restricted in range in college admissions and civilian employment domains, but the Black subgroup was more restricted in range than the White subgroup in military studies. In all 3 domains, any differences in range restriction could not account for observed validity differences between subgroups. We also provide estimates of range-restriction-corrected validities; Black and Hispanic subgroups' corrected validities were 11.3-18.0% lower than White corrected validities across domains. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
    Journal of Applied Psychology 11/2013; 99(1). DOI:10.1037/a0034376 · 4.31 Impact Factor
  • Christopher M. Berry · Clare L. Barratt · Christen L. Dovalina · Peng Zhao
    ABSTRACT: Differential validity and differential prediction analyses have come to conflicting conclusions regarding whether the relationship between cognitive ability tests and performance is the same across racial/ethnic subgroups. A prominent criticism of differential validity analyses is that they are confounded by subgroup differences in the ratio of criterion-to-test standard deviations (SDs). We investigated whether subgroup differences in these ratios can account for this conflicting evidence. Drawing on data from over 1 million participants, we find that subgroup differences in criterion-to-test SD ratios in general account for only a relatively small portion of subgroup differences in test-criterion correlations. Practitioner points: (1) Cognitive ability tests exhibit differential validity for Asian, Black, Hispanic, and White subgroups. (2) In most domains, subgroup differences in criterion-to-test SD ratios account for only a relatively small portion of these validity differences and thus do not explain why validities differ across subgroups. (3) Still, because a portion of the already-small validity differences is accounted for by SD ratios, the magnitude of any subgroup regression slope differences is small and is unlikely to result in underprediction of criterion performance for Asian, Black, and Hispanic subgroups.
    09/2013; 87:208-220. DOI:10.1111/joop.12036
  • Society for Industrial and Organizational Psychology; 04/2013
  • Source
    Yan Liu · Christopher M. Berry
    ABSTRACT: Time theft is a costly burden on organizations. However, there is limited knowledge about why time theft occurs. To advance this line of research, this conceptual paper looks at the association between organizational injustice and time theft from identity, moral, and equity perspectives. This paper proposes that organizational injustice triggers time theft through decreased organizational identification. It also proposes that moral disengagement and equity sensitivity moderate this process such that organizational identification is less likely to mediate among employees with high moral disengagement and more likely to mediate among employees who are equity sensitives and entitleds.
    Journal of Business Ethics 01/2013; 118(1). DOI:10.1007/s10551-012-1554-5 · 1.33 Impact Factor
  • Source
    Christopher M. Berry · Anita Kim · Ying Wang · Rebecca Thompson · William H. Mobley
    ABSTRACT: Despite mean differences between sexes, virtually no research has investigated sex-based differential prediction of personality tests in civilian employment samples. The present study investigated the degree to which personality test scores differentially predicted job performance ratings in two managerial samples. In both samples, participants completed a Five-Factor Model personality test and the participants' supervisors, peers, and subordinates provided ratings of participants' task and contextual performance. The current study found sex-based differential prediction in 6.7 per cent of differential prediction analyses in Sample 1, but found no sex-based differential prediction in Sample 2. Across the two samples sex-based differential prediction of performance only occurred 3.3 per cent of the time, which is less than would be expected by chance alone, given alpha = .05. Thus, based on the present study and the extant literature to date, no sex-based differential prediction studies have identified evidence of personality test bias.
    Applied Psychology 01/2013; 62(1):13-43. DOI:10.1111/j.1464-0597.2012.00493.x · 1.52 Impact Factor
  • Christopher M. Berry · Paul R. Sackett · Amy Sund
    ABSTRACT: Purpose: Berry et al.'s (J Appl Psychol 96:881–906, 2011) meta-analysis of cognitive ability test validity data across employment, college admissions, and military domains demonstrated that validity is lower for Black and Hispanic subgroups than for Asian and White subgroups. However, Berry et al. relied on observed test-criterion correlations, so it is not clear whether validity differences generalize beyond observed validities. The present study investigates the roles that range restriction and criterion contamination play in differential validity. Design/Methodology/Approach: A large dataset (N > 140,000) containing SAT scores and college grades of Asian, Black, Hispanic, and White test takers was used. Within-race corrections for multivariate range restriction were applied. Differential validity analyses were carried out using freshman GPA versus individual course grades as criteria to control for the contaminating influence of individual differences between students in course choice. Findings: Observed validities underestimated the magnitude of validity differences between subgroups relative to when range restriction and criterion contamination were controlled. Analyses also demonstrated that validity differences would translate to larger regression slope differences (i.e., differential prediction). Implications: Subgroup differences in range restriction and/or individual differences in course choice cannot account for the lower validity of the SAT for Black and Hispanic subgroups; controlling for these factors increased subgroup validity differences. Future research must look to other explanations for subgroup validity differences. Originality: The present study is the first differential validity study to simultaneously control for range restriction and individual differences in course choice, and it answers a call to investigate potential causes of differential validity.
    Journal of Business and Psychology 09/2012; 28(3). DOI:10.1007/s10869-012-9284-3 · 1.25 Impact Factor
  • Source
    Christopher M. Berry · Ariel M. Lelchook · Malissa A. Clark
    ABSTRACT: We meta-analyzed the correlations between voluntary employee lateness, absenteeism, and turnover to (i) provide the most comprehensive estimates to date of the interrelationships between these withdrawal behaviors; (ii) test the viability of a withdrawal construct; and (iii) evaluate the evidence for competing models of the relationships between withdrawal behaviors (i.e., alternate forms, compensatory forms, independent forms, progression of withdrawal, and spillover model). Corrected correlations were .26 between lateness and absenteeism, .25 between absenteeism and turnover, and .01 between lateness and turnover. These correlations were even smaller in recent studies that had been carried out since the previous meta-analyses of these relationships 15–20 years ago. The small-to-moderate intercorrelations are not supportive of a withdrawal construct that includes lateness, absenteeism, and turnover. These intercorrelations also rule out many of the competing models of the relationships between withdrawal behaviors, as many of the models assume all relationships will be positive, null, or negative. On the basis of path analyses using meta-analytic data, the progression of withdrawal model garnered the most support. This suggests that lateness may moderately predict absenteeism and absenteeism may moderately predict turnover. Copyright © 2011 John Wiley & Sons, Ltd.
    Journal of Organizational Behavior 07/2012; 33(5). DOI:10.1002/job.778 · 3.85 Impact Factor
  • Christopher M Berry · Nichelle C Carpenter · Clare L Barratt
    ABSTRACT: Much of the recent research on counterproductive work behaviors (CWBs) has used multi-item self-report measures of CWB. Because of concerns over self-report measurement, there have been recent calls to collect ratings of employees' CWB from their supervisors or coworkers (i.e., other-raters) as alternatives or supplements to self-ratings. However, little is still known about the degree to which other-ratings of CWB capture unique and valid incremental variance beyond self-report CWB. The present meta-analysis investigates a number of key issues regarding the incremental contribution of other-reports of CWB. First, self- and other-ratings of CWB were moderately to strongly correlated with each other. Second, with some notable exceptions, self- and other-report CWB exhibited very similar patterns and magnitudes of relationships with a set of common correlates. Third, self-raters reported engaging in more CWB than other-raters reported them engaging in, suggesting other-ratings capture a narrower subset of CWBs. Fourth, other-report CWB generally accounted for little incremental variance in the common correlates beyond self-report CWB. Although many have viewed self-reports of CWB with skepticism, the results of this meta-analysis support their use in most CWB research as a viable alternative to other-reports.
    Journal of Applied Psychology 12/2011; 97(3):613-36. DOI:10.1037/a0026739 · 4.31 Impact Factor
  • Source
    Dan S Chiaburu · In-Sue Oh · Christopher M Berry · Ning Li · Richard G Gardner
    ABSTRACT: Using meta-analytic tests based on 87 statistically independent samples, we investigated the relationships between the five-factor model (FFM) of personality traits and organizational citizenship behaviors in both the aggregate and specific forms, including individual-directed, organization-directed, and change-oriented citizenship. We found that Emotional Stability, Extraversion, and Openness/Intellect have incremental validity for citizenship over and above Conscientiousness and Agreeableness, 2 well-established FFM predictors of citizenship. In addition, FFM personality traits predict citizenship over and above job satisfaction. Finally, we compared the effect sizes obtained in the current meta-analysis with the comparable effect sizes predicting task performance from previous meta-analyses. As a result, we found that Conscientiousness, Emotional Stability, and Extraversion have similar magnitudes of relationships with citizenship and task performance, whereas Openness and Agreeableness have stronger relationships with citizenship than with task performance. This lends some support to the idea that personality traits are (slightly) more important determinants of citizenship than of task performance. We conclude with proposed directions for future research on the relationships between FFM personality traits and specific forms of citizenship, based on the current findings.
    Journal of Applied Psychology 06/2011; 96(6):1140-66. DOI:10.1037/a0024004 · 4.31 Impact Factor
  • Source
    Christopher M Berry · Malissa A Clark · Tara K McClure
    ABSTRACT: The correlation between cognitive ability test scores and performance was separately meta-analyzed for Asian, Black, Hispanic, and White racial/ethnic subgroups. Compared to the average White observed correlation (r = .33, N = 903,779), average correlations were lower for Black samples (r = .24, N = 112,194) and Hispanic samples (r = .30, N = 51,205) and approximately equal for Asian samples (r = .33, N = 80,705). Despite some moderating effects (e.g., type of performance criterion, decade of data collection, job complexity), validity favored White over Black and Hispanic test takers in almost all conditions that included a sizable number of studies. Black-White validity comparisons were possible both across and within the 3 broad domains that use cognitive ability tests for high-stakes selection and placement: civilian employment, educational admissions, and the military. The trend of lower Black validity was repeated in each domain; however, average Black-White validity differences were largest in military studies and smallest in educational and employment studies. Further investigation of the reasons for these validity differences is warranted.
    Journal of Applied Psychology 03/2011; 96(5):881-906. DOI:10.1037/a0023222 · 4.31 Impact Factor
  • Source
    CHRISTOPHER M. BERRY · PAUL R. SACKETT · VANESSA TOBARES
    ABSTRACT: James et al. (2005) reported an estimate of criterion-related validity (corrected only for dichotomization of criteria) of r = .44 across 11 conditional reasoning test of aggression (CRT-Aggression) validity studies. This meta-analysis incorporated a total sample size more than twice that of James et al. Our comparable validity estimate for CRT-Aggression scales predicting counterproductive work behaviors was r = .16. Validity for the current, commercially marketed test version (CRT-A) was lower (r = .10). These validity estimates increased somewhat (into the .24–.26 range) if studies using dichotomous criteria with low base rates were excluded from the meta-analysis. CRT-Aggression scales were correlated r = .14 with measures of job performance. As we differed with James et al. in some of our coding decisions, we reran all analyses using James et al.'s coding decisions and arrived at extremely similar results.
    Personnel Psychology 05/2010; 63(2):361 - 384. DOI:10.1111/j.1744-6570.2010.01173.x · 2.93 Impact Factor
  • CHRISTOPHER M. BERRY · PAUL R. SACKETT
    ABSTRACT: Most faking research has examined the use of personality measures when using top-down selection. We used simulation to examine the use of personality measures in selection systems using cut scores and outlined a number of issues unique to these situations. In particular, we compared the use of 2 methods of setting cut scores on personality measures: applicant-data-derived (ADD) and nonapplicant-data-derived (NADD) cut-score strategies. We demonstrated that the ADD strategy maximized mean performance resulting from the selection system in the face of applicant faking but that this strategy also resulted in the displacement of deserving applicants by fakers (which has fairness implications). On the other hand, the NADD strategy minimized displacement of deserving applicants but at the cost of some mean performance. Therefore, the use of the ADD versus NADD strategies can be viewed as a strategic decision to be made by the organization, as there is a tradeoff between the 2 strategies in effects on performance versus fairness to applicants. We quantitatively outlined these tradeoffs at various selection ratios, levels of validity, and amounts of faking in the applicant pool.
    Personnel Psychology 11/2009; 62(4):833 - 863. DOI:10.1111/j.1744-6570.2009.01159.x · 2.93 Impact Factor
  • Source
    In-Sue Oh · Christopher M Berry
    ABSTRACT: This study investigated the usefulness of the five-factor model (FFM) of personality in predicting two aspects of managerial performance (task vs. contextual) assessed by utilizing the 360 degree performance rating system. The authors speculated that one reason for the low validity of the FFM might be the failure of single-source (e.g., supervisor) ratings to comprehensively capture the construct of managerial performance. The operational validity of personality was found to increase substantially (50%-74%) across all of the FFM personality traits when both peer and subordinate ratings were added to supervisor ratings according to the multitrait-multimethod approach. Furthermore, the authors responded to the recent calls to validate tests via a multivariate (e.g., multitrait-multimethod) approach by decomposing overall managerial performance into task and contextual performance criteria and by using multiple rating perspectives (sources). Overall, this study contributes to the evidence that personality may be even more useful in predicting managerial performance if the performance criteria are less deficient.
    Journal of Applied Psychology 11/2009; 94(6):1498-513. DOI:10.1037/a0017221 · 4.31 Impact Factor

Publication Stats

739 Citations
84.19 Total Impact Points

Institutions

  • 2009–2014
    • Texas A&M University
      • Department of Psychology
      • Department of Management
      College Station, Texas, United States
    • Wayne State University
      • Department of Psychology
      Detroit, Michigan, United States
  • 2011
    • Auburn University
      Auburn, Alabama, United States
  • 2006–2011
    • University of Minnesota Duluth
      • Department of Psychology
      • Department of Civil Engineering
      Duluth, Minnesota, United States
  • 2007
    • University of Minnesota Twin Cities
      • Department of Psychology
      Minneapolis, Minnesota, United States