Racial groups and test fairness, considering history and construct validity

Department of Psychology, Texas A&M University, College Station, TX 77843-4235, USA.
American Psychologist 01/2008; 62(9):1082-1083. DOI: 10.1037/0003-066X.62.9.1082
Source: PubMed

ABSTRACT: According to Helms, "test fairness" is defined as "removal from test scores of systematic variance attributable to experiences of racial or cultural socialization." Some of Helms's reasoning is based on earlier work, which recommended that racial group or category variables be replaced entirely with individual-level constructs to reflect racial socialization experiences that vary within racial groups. Treatment of the test fairness issue, a social and political issue, will benefit from explicitly considering historical events that contributed to group-level race differences. In light of this history, D. A. Newman et al. suggest (a) retaining a group-level conceptualization of race/racial socialization and (b) focusing on criterion-irrelevant variance in test scores that is attributable to race.

  • ABSTRACT: In Ricci v. DeStefano, 129 S. Ct. 2658 (2009), the Supreme Court recently reaffirmed the doctrine, first articulated by the Court in Griggs v. Duke Power Company, 401 U.S. 424 (1971), that employers can be held liable under Title VII of the 1964 Civil Rights Act for neutral personnel practices with a disparate impact on minority workers. The Griggs Court further held that employers can escape liability by showing that their staffing practices are job related or consistent with business necessity. In the interim since Griggs, social scientists have generated evidence undermining two key assumptions behind that decision and its progeny. First, the case law on disparate impact rests on the implicit assumption, reflected in the so-called 4/5 rule, that fair and valid hiring criteria will result in a workplace that roughly reflects the representation of each group in the background population. Work in labor economics shows that this assumption is unjustified. Blacks, and to a lesser extent Hispanics, currently lag significantly behind whites in the abilities needed to succeed in most jobs. Therefore, screening criteria that best predict worker productivity will routinely screen out minorities and violate the 4/5 rule. Second, the Court in Griggs noted the absence of evidence that the selection criteria in that case (a high school diploma and an aptitude test) were related to subsequent performance of the service jobs at issue, and expressed doubt about the existence of such a link. But research in industrial and organizational psychology has repeatedly documented a “validity-diversity tradeoff”: job selection devices that best predict future job performance generate the smallest number of minority hires in a broad range of positions. The reason is that cognitive ability remains the best predictor of performance for jobs at all levels of complexity. Because blacks lag significantly behind whites on that measure, valid job screens will have a substantial adverse impact on this group. Because legitimately meritocratic (that is, job-related) job selection practices will routinely trigger prima facie violations of the disparate impact rule, employers who adopt such practices run the risk of being required to justify them, a costly and difficult task that encourages undesirable, self-protective behaviors and may result in unwarranted liability. To alleviate this burden, the article proposes to adopt a new regime of “disparate impact realism” that abandons the 4/5 rule in favor of sliding scale ratios pegged to measured disparities in group performance and the selectivity of particular positions. Alternatively, the disparate impact rule should be repealed altogether. The data indicate that pronounced differences in the background distribution of skill and human capital, not arbitrary hurdles imposed by employers, are the principal factor behind racial imbalances in most jobs. Moreover, blacks lag behind whites in actual on-the-job performance, which indicates that employers are not unfairly excluding minorities from the workforce but rather bending over backwards to include them. Disparate impact litigation, which does nothing to correct existing disparities and distracts from the task of addressing them, represents a cumbersome, misplaced effort that could better be directed at the root causes of workforce racial imbalance.
  • ABSTRACT: Replies to comments by R. J. Griffore and D. A. Newman et al. on the author's original article on test validity and cultural bias in racial-group assessment. Helms notes that, given that within-group variance exceeds between-groups variance, racial groups are probably simulating a psychological construct that is more strongly related to individuals' test scores than to their respective racial group's mean test scores. Therefore, models of individual differences, such as her Helms individual-differences (HID) model, that remove construct-irrelevant racial variance are needed to make the testing process fair at the level of individual African American, Latino/Latina American, and Native American test takers. Her HID model is intended to focus attention on identifying the factors responsible for the racial-group-level differences and thereby assist test users in looking beyond presumed physical appearance (e.g., racial-group designations) for explanations of individuals' scores on tests of cognitive abilities, knowledge, or skills.
    American Psychologist 01/2008; 62(9):1083-1085. DOI: 10.1037/0003-066X.62.9.1083
  • ABSTRACT: The authors review criticisms commonly leveled against cognitively loaded tests used for employment and higher education admissions decisions, with a focus on large-scale databases and meta-analytic evidence. They conclude that (a) tests of developed abilities are generally valid for their intended uses in predicting a wide variety of aspects of short-term and long-term academic and job performance, (b) validity is not an artifact of socioeconomic status, (c) coaching is not a major determinant of test performance, (d) tests do not generally exhibit bias by underpredicting the performance of minority group members, and (e) test-taking motivational mechanisms are not major determinants of test performance in these high-stakes settings.
    American Psychologist 05/2008; 63(4):215-227. DOI: 10.1037/0003-066X.63.4.215