Racial Bias in Motor Vehicle Searches

University of Pennsylvania, Philadelphia, Pennsylvania, United States
Journal of Political Economy (Impact Factor: 2.9). 02/2001; 109(1). DOI: 10.1086/318603
Source: RePEc


African-American motorists in the United States are much more likely than white motorists to have their cars searched by police checking for illegal drugs and other contraband. The courts are faced with the task of deciding, on the basis of traffic-search data, whether police behavior reflects a racial bias. We discuss why a simple test for racial bias commonly applied by the courts is inadequate and develop a model of law enforcement that suggests an alternative test. The model assumes a population with two racial types who also differ along other dimensions relevant to criminal behavior. Using the model, we construct a test for whether racial disparities in motor vehicle searches reflect racial prejudice or are instead consistent with the behavior of non-prejudiced police maximizing drug interdiction. The test is valid even when the set of characteristics observed by the police is only partially observable by the econometrician. We apply the test to traffic-search data from Maryland and find the observed black-white disparities in search rates to be consistent with the hypothesis of no racial prejudice. Finally, we present a simple analysis of the trade-off between the efficiency of drug interdiction and racial fairness in policing. We show that in some circumstances there is no trade-off: constraining the police to be color-blind may achieve greater efficiency in drug interdiction.
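The test sketched in the abstract turns on comparing the success rate of searches ("hit rates") across racial groups: if non-prejudiced police maximize drug interdiction, hit rates should be equal across groups in equilibrium. A minimal sketch of such a comparison as a two-proportion z-test follows; the counts are hypothetical, not the Maryland data used in the paper:

```python
import math

def hit_rate_test(hits_a, searches_a, hits_b, searches_b):
    """Two-proportion z-test for equality of search 'hit rates'.

    Under the null of no racial prejudice by interdiction-maximizing
    police, the fraction of searches that find contraband should be
    equal across groups; a significantly lower hit rate for one group
    suggests it is being over-searched.
    """
    p_a = hits_a / searches_a
    p_b = hits_b / searches_b
    pooled = (hits_a + hits_b) / (searches_a + searches_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / searches_a + 1 / searches_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF, Phi(x) = 0.5*(1 + erf(x/sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical counts for two groups (hits, searches):
p_a, p_b, z, p = hit_rate_test(120, 350, 60, 180)
print(z, p)  # similar hit rates give a small |z| and a large p-value
```

Equal average hit rates are the testable implication here; the paper's contribution is showing that, in its model, this comparison remains valid even when the police condition on characteristics the econometrician cannot observe.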

  • Source
    • "Although the idea of favoring a group is mentioned in the early literature, it comes up only in connection with favoring black motorists from fear of future litigation when searching them (Knowles, Persico, and Todd 2001, 227). "
    ABSTRACT: In their article “An Alternative Test of Racial Prejudice in Motor Vehicle Searches: Theory and Evidence,” published in the American Economic Review in 2006, Shamena Anwar and Hanming Fang study racial prejudice in motor vehicle searches by Florida Highway Patrol officers (“troopers”). Their data include the race and ethnicity of the trooper and of the motorist stopped and possibly searched. A search is deemed successful if the trooper finds contraband in the vehicle. Using data on troopers and motorists of three race-ethnicity groups (white non-Hispanic, black, and white Hispanic, with others being dropped), Anwar and Fang compute nine trooper-on-motorist search rates and nine search-success rates. They present a model that exploits this information to test whether troopers go beyond statistical discrimination to racial prejudice. Irrespective of whether troopers exhibit racial prejudice, the model has a crucial testable implication, an implication that concerns the rank-order of the search and search-success rates. Anwar and Fang report that their data neatly fit this predicted rank-order implication with high statistical significance across the board, strongly supporting the soundness of the model. In turn, the model is applied to address the question of racial prejudice. They do not find evidence of racial prejudice, and neither do I—so the present critique does not arrive at results about prejudice contrary to their results. The present critique starts by reporting on my effort to replicate Anwar and Fang’s preliminary rank-order findings. I am unable to replicate two of their nine reported search-success rates, nor can I replicate the reported statistical significance of four of the six Z-statistics and one of the three χ2 test statistics for the rankings of the search-success rates. My new results imply that the empirical support for the model’s soundness is not what Anwar and Fang claim it to be. 
This problem of irreplicability is my primary point, but I then move on to another matter: My replications draw attention to a neglected statistical caveat in Anwar and Fang’s implementation of the empirical tests of racial prejudice. It turns out that the novel resampling procedure they employ does not provide robust results. I pinpoint the empirical source of this issue and, in an appendix, show how a simple extension to their method improves robustness. In another appendix I put forth an alternative randomization test that seems more appropriate when testing such resampled data.
    Full-text · Article · Sep 2014 · Econ Journal Watch
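Anwar and Fang's rank-order test, as described above, starts from the nine trooper-on-motorist search rates and nine search-success rates. A minimal sketch of how such matrices could be tabulated from stop-level records; the tuple layout and field names are illustrative assumptions, not the actual Florida Highway Patrol data format:

```python
from collections import defaultdict

# The three race-ethnicity groups retained in the analysis (others dropped)
GROUPS = ("white", "black", "hispanic")

def rate_matrices(records):
    """Tabulate the 3x3 search-rate and search-success-rate matrices.

    `records` is an iterable of tuples
    (trooper_race, motorist_race, searched, found_contraband);
    records whose trooper or motorist race falls outside GROUPS are
    dropped, mirroring the paper's sample restriction.
    """
    stops = defaultdict(int)
    searches = defaultdict(int)
    hits = defaultdict(int)
    for trooper, motorist, searched, found in records:
        if trooper not in GROUPS or motorist not in GROUPS:
            continue
        stops[(trooper, motorist)] += 1
        if searched:
            searches[(trooper, motorist)] += 1
            if found:
                hits[(trooper, motorist)] += 1
    search_rate = {k: searches[k] / stops[k] for k in stops}
    success_rate = {k: hits[k] / searches[k] for k in searches if searches[k]}
    return search_rate, success_rate
```

The test then asks whether, for each motorist group, the nine rates line up in the rank order the model predicts, irrespective of whether troopers are prejudiced; the replication dispute in the article above concerns the statistical significance attached to those rankings.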
  • Source
    • "An example of post-stop outcome analysis consists of checking whether the search for drugs among stopped vehicles is biased against the driver's race. In this respect, starting from the influential paper proposed in (Knowles et al., 2001), several extensions and critiques have been presented (Antonovics & Knight, 2009; Anwar & Fang, 2006; Gardner, 2009; Rowe, 2009; Sanga, 2009). We refer to the surveys (Tillyer et al., 2010; Engel, 2008) for extensive references. "
    ABSTRACT: Discrimination data analysis has been investigated for the last fifty years in a large body of social, legal, and economic studies. Recently, discrimination discovery and prevention has become a blooming research topic in the knowledge discovery community. This chapter provides a multi-disciplinary annotated bibliography of the literature on discrimination data analysis, with the intended objective to provide a common basis to researchers from a multi-disciplinary perspective. We cover legal, sociological, economic and computer science references.
    Preview · Article · Jan 2013
  • Source
    • "For meta-analyses, see notably Gordon (1996) and Higgins et al. (2003) and Appelbaum and Hughes (1998) for a survey. 7 Empirical tests of favoritism focus on demographic characteristics (Goldin and Rouse, 2000; Knowles et al., 2001; Fershtman and Gneezy, 2001) or on home bias in sports (Kocher and Sutter, 2004; Garicano et al., 2005) and editing (Laband and Piette, 1994). In addition to Bandiera et al. (2009), recent experimental studies have "
    ABSTRACT: We provide experimental evidence of workers' ingratiation by opinion conformity and of managers' discrimination in favor of workers with whom they share similar opinions. In our Baseline, managers can observe both workers' performance at a task and opinions before assigning unequal payoffs. In the Ingratiation treatment, workers can change their opinion after learning the opinion held by the manager. In the Random treatment, workers can also change opinion but payoffs are assigned randomly, which gives a measure of non-strategic opinion conformism. We find evidence of high ingratiation indices, as overall, ingratiation is effective. Indeed, managers reward opinion conformity, and even more so when opinions cannot be manipulated. Additional treatments reveal that ingratiation is cost sensitive and that the introduction of performance pay for managers as well as a less noisy measure of performance increase the role of relative performance in the assignment of payoffs, without eliminating the reward of opinion conformity.
    Full-text · Article · May 2012 · SSRN Electronic Journal