MARGARET MOOR’s research while affiliated with Yale University and other places

Publications (1)


[Figures from the article]
  • Figure 2. For a given sample size, whether the list experiment is preferred to direct questions depends on the expected level of sensitivity bias
  • Observed list experiment responses by treatment status, for whether a bribe was received and whether a bribe influenced the respondent's vote, from the 2007 Kenya postelection survey reported in Kramon (2016)
  • Meta-analysis estimates of sensitivity bias
When to Worry about Sensitivity Bias: A Social Reference Theory and Evidence from 30 Years of List Experiments
  • Article
  • Full-text available

August 2020 · 539 Reads · 187 Citations

American Political Science Review

GRAEME BLAIR · … · MARGARET MOOR

Eliciting honest answers to sensitive questions is frustrated if subjects withhold the truth for fear that others will judge or punish them. The resulting bias is commonly referred to as social desirability bias, a subset of what we label sensitivity bias. We make three contributions. First, we propose a social reference theory of sensitivity bias to structure expectations about survey responses on sensitive topics. Second, we explore the bias-variance trade-off inherent in the choice between direct and indirect measurement technologies. Third, to estimate the extent of sensitivity bias, we meta-analyze the set of published and unpublished list experiments (a.k.a., the item count technique) conducted to date and compare the results with direct questions. We find that sensitivity biases are typically smaller than 10 percentage points and in some domains are approximately zero.
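
The abstract's bias-variance trade-off can be made concrete with a small simulation: the direct question gives a low-variance but downward-biased prevalence estimate, while the list experiment's difference-in-means estimator is unbiased but noisier, because respondents report only a count over several items. The Python sketch below is illustrative only; the prevalence, bias, sample size, and number of control items are assumed values, not figures from the article, and this is not the authors' replication code.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative assumptions (not values taken from the article):
    true_prevalence = 0.30   # share of respondents with the sensitive trait
    sensitivity_bias = 0.10  # downward bias of the direct question (10 points)
    n = 1_000                # respondents per simulated survey
    k_control = 3            # non-sensitive items on the list
    n_sims = 5_000

    direct_est = np.empty(n_sims)
    list_est = np.empty(n_sims)

    for s in range(n_sims):
        trait = rng.random(n) < true_prevalence

        # Direct question: enough trait-holders falsely deny to shift the
        # estimate down by `sensitivity_bias` in expectation.
        deny = trait & (rng.random(n) < sensitivity_bias / true_prevalence)
        direct_est[s] = np.mean(trait & ~deny)

        # List experiment: a random half sees the sensitive item added to
        # k_control innocuous items; respondents report only a total count,
        # so (under the design's assumptions) they answer truthfully.
        treated = rng.random(n) < 0.5
        control_count = rng.binomial(k_control, 0.5, size=n)
        y = control_count + (treated & trait)

        # Difference-in-means estimator of prevalence.
        list_est[s] = y[treated].mean() - y[~treated].mean()

    def rmse(est):
        return np.sqrt(np.mean((est - true_prevalence) ** 2))

    print(f"direct question: mean={direct_est.mean():.3f}  RMSE={rmse(direct_est):.3f}")
    print(f"list experiment: mean={list_est.mean():.3f}  RMSE={rmse(list_est):.3f}")

Under these assumed numbers the list experiment's extra sampling noise is outweighed by removing the 10-point bias, so it wins on root-mean-square error; shrink sensitivity_bias toward zero and the direct question wins instead, which is the trade-off the Figure 2 caption describes.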

Citations (1)


... Survey researchers go to great lengths to reduce social desirability bias and avoid traumatizing respondents when asking "sensitive" questions, in order to ensure accurate responses. Despite recognition that sensitivity bias varies considerably across topics and contexts (Blair et al. 2020; Andreenkova and Javeline 2018; Rosenfeld et al. 2016), much of the focus in comparative politics remains on a subset of known sensitive questions, including corruption (Brierley 2020), support for (Kramon and Weghorst 2019) and exposure to political violence (Jaffe et al. 2015; Brück et al. 2016), as well as support for democracy (Panel 2019) and evaluations of authoritarian regimes (Tannenberg 2022). ...

Reference:

Citing article: Understanding the Sensitivity of Party Identification Questions in Polarized African Contexts
Cited article: When to Worry about Sensitivity Bias: A Social Reference Theory and Evidence from 30 Years of List Experiments (American Political Science Review)