RESEARCH ARTICLE
The autocratic bias: self-censorship of regime support
Marcus Tannenberg
Department of Political Science, V-Dem Institute, University of Gothenburg, Gothenburg, Sweden
ABSTRACT
Because of a perceived (and real) risk of repressive action, some survey questions are
sensitive in more autocratic countries while less so in more democratic countries. Yet,
survey data on potentially sensitive topics are frequently used in comparative research
despite concerns about comparability. To examine the comparability of politically
sensitive questions, I employ a multilevel analysis with more than 228,000
respondents in 37 African countries to test for systematic bias when the survey
respondents believe (fear) that the government, rather than an independent
research institute, has commissioned the survey. The ndings indicate that fear of
the government induces a substantial and signicant bias on questions regarding
trust, approval and corruption perceptions in more autocratic countries, but not in
more democratic countries. In contrast, innocuous, apolitical questions are not
systematically inuenced by regime type.
ARTICLE HISTORY Received 8 April 2021; Accepted 13 September 2021
KEYWORDS self-censorship; regime support; preference falsification; autocracy; legitimacy
Introduction
When Zimbabweans were asked in 2018 how much they trusted their President
Emmerson Mnangagwa, on average 68% said "a lot" or "somewhat". This is considered
strong approval by most accounts, but is it true? Dividing respondents into two groups,
we get a different picture. In the one that believes that the interviewer was sent by the
government, 77% indicated trust in Mnangagwa, and in the one that does not, some
57% shared this sentiment. In contrast, in democratic Ghana, the difference between
these two groups of respondents was only 4 percentage points, compared to 20
points in considerably less democratic Zimbabwe.[1] Does autocracy bias certain
survey questions?
Given that much of our current knowledge about politics and everyday life in autocratic
countries is informed by public opinion surveys,[2] it begs the question: are we
misinformed? What we know about the effects and causes of, for example, trust in
government, democratic attitudes, corruption perceptions, regime support, and political
legitimacy relies to a large extent on survey research comparing countries with
varying regime types,[3] where data is derived through direct questions on these (in
some countries, but not in others) sensitive topics. Such studies all make the assumption
that survey respondents across the sampled countries are somewhat equally
willing to express their true opinions with respect to political topics. In this article, I
show that this assumption is not safe to make, as bias from self-censorship does not
operate uniformly across regime type.
In autocratic countries, survey questions can be sensitive for reasons beyond
privacy and social adaptation: in particular, questions regarding citizens' attitudes
towards and evaluations of the authorities. Respondents subjected to autocratic rule
may practice "preference falsification" to align their answers with the perceived
wishes of the regime.[4] Given that authoritarian regimes often pay close attention to
what their citizens do and say in order to sanction those who challenge the official
discourse,[5] there is a real risk that respondents will associate public opinion surveys with
government intelligence gathering. Respondents can therefore be expected to appease
the regime with their responses out of fear that failure to do so may result in repression,
physical or otherwise.
Existing empirical evidence of self-censorship in authoritarian regimes is primarily
drawn from single-country studies, and the findings are mixed. A handful
of studies show self-censorship to be problematic for measuring public opinion,[6]
while others find self-censorship to be less of a concern[7] or even non-existent.[8] In
a recent study, Shen and Truex[9] estimate self-censorship across a large number of
countries by utilizing differences in non-response rates between sensitive and
non-sensitive items. They find self-censorship to be highly variable across autocratic
regimes but conclude that it is oftentimes not an obstacle to capturing public
opinion in autocratic settings. With an approach utilizing actual responses (as
opposed to non-responses), this article finds self-censorship to be a severe issue in
most autocracies.
I test the variation in levels of self-censorship in countries where respondents
experience different perceived (and real) risks of repressive action by employing a
simple research design that utilizes data on whether respondents think the government
sent the survey enumerator to interview them, or that the enumerator works for an
independent research organization. I then analyse whether these two groups of respondents
answer systematically differently to potentially sensitive questions and whether this
difference is a function of the climate of political repression. Drawing upon data
from more than 228,000 respondents across 37 African countries over six survey
rounds, the results show that there is indeed an autocratic bias. Responses to questions
related to the citizen-state relationship, such as whether respondents trust the president
or prime minister and corruption perceptions of key state institutions, are systematically
biased with the level of democracy in the country, while apolitical questions,
such as "How much do you trust your neighbours?", are not. Thus, caution is warranted
in employing the former types of survey items, but not the latter, in comparative
studies across different regime types.
Lastly, I engage with the literature on response bias induced by the ethnic
(mis)match of the enumerator and the respondent.[10] I show that the bias introduced by
enumerators who are coethnic with the country's key leader but non-coethnic with
the respondent operates along the logic of the autocratic bias: the ethnic
(mis)match prompts regime-friendly responses in autocratic countries but not in more
democratic countries.
Self-censorship and how to estimate it
Survey respondents can feel the need to censor themselves if a question elicits
responses that can be socially undesirable or politically incorrect, or if the respondents
fear that their responses can have consequences if disclosed.[11] Questions related to, for
example, income, voter turnout, prejudice against ethnic or religious groups, and drug
abuse can cause the respondent to hide the truth because of concerns about their prestige,
fear of social sanctioning from peers, or fear of punishment. This can lead to high
rates of systematic non-responses and/or biased answers, resulting in poor data. Concerns
about prestige and social sanctioning can induce social desirability bias in
surveys conducted in both democratic and autocratic regimes, while fear of punishment
is of greater concern in autocratic and semi-autocratic contexts, where the perceived
risk of repressive action is likely to be higher. Kuran[12] argues that citizens
subjected to authoritarian rule have strong incentives to practice "preference falsification",
and Schedler[13] raises concerns about the possibility of obtaining reliable
measures of regime legitimacy through representative public-opinion surveys or qualitative
interviews in autocracies because of the opaque and repressive features of those
regimes. Fear of repercussions for failing to give the officially desired answer is
expected to have an effect on responses, especially when respondents are uncertain
about their anonymity.
What does this mean for cross-country comparative studies if we are interested in,
for example, approval ratings, regime support, or the legitimacy of the regime attributed
by citizens? If the levels of self-censorship are more or less equal across countries
on proxies for or components of said questions or indexes, the issue is less severe. We
would simply have to deal with either inflated or deflated numbers across the board.
However, if the propensity to self-censor depends on some traits that are heterogeneous
across countries, such as the level of democracy or political repression, the
size of the bias differs systematically between countries, and thus restricts the possibility
of comparative analysis.
Recent findings warrant caution regarding the reliability of survey responses in
repressive and non-democratic settings. In Zimbabwe, where government repression
was commonplace, Garcia-Ponce and Pasquale[14] find reported levels of trust in the
president and the ruling party to be affected by recent experiences of state-led repression.
Kalinin[15] employs a series of list experiments and finds that Russians' electoral
support for Vladimir Putin is inflated by about 20 percentage points. In contrast,
also using list experiments, Frye et al.[16] estimate Putin's approval ratings at about 10
percentage points below those received through direct questioning but conclude that
direct survey questions largely reflect the attitudes of Russian citizens. Shockley
et al.[17] show score inflation of elite governance evaluations among executive survey
informants in autocratic regimes, and in particular that executives of firms with headquarters
in autocratic Qatar inflated scores vis-à-vis those with out-of-country headquarters.
In the Chinese context, Jiang and Yang[18] show an increase in preference
falsification in the aftermath of a major political purge in Shanghai, and using list
experiments Robinson and Tannenberg[19] find respondents to falsify regime support
by up to 25 percentage points. These studies support concerns that individual respondents
inflate their approval in autocratic settings.
Does this mean that we cannot trust surveys to measure citizens' trust in government,
or their political preferences in general? To answer this question, we need to
complement single-country studies and test for systematic bias across a larger sample
of countries. To test for response bias due to perceived fear of the government, I turn to
Afrobarometer data, and specifically the last item of the Afrobarometer battery, which
asks, "Just one more question: who do you think sent us to do this interview?" Even
though the enumerators conducting the survey introduce themselves as affiliated
with "an independent research organization" that "does not represent the government
or any political party", over the second to seventh rounds of the Afrobarometer survey,
49% of respondents believed that the survey was sponsored by the government, while
35% considered it to be independent and 16% stated that they did not know. With the
help of this survey question, I divide respondents into three groups: non-suspecting
(those who believe the survey to be independent), suspecting (those who believe the
government was sponsoring the survey), and a "Don't know" group.

This item has been featured as a proxy for the "costliness of dissent" or "fear of the
government" in models predicting vote choice in 16 African countries and voting
intentions in Zimbabwe.[20] In contrast to these authors, I argue that the propensity
to suspect the government as sponsor of the survey is not informative of political
fear in that country. This is illustrated by the fact that in the most democratic and
the most autocratic countries in the most recent round, Cape Verde and Cameroon,
virtually the same percentage (41%) of respondents believed the government to be sponsoring
the survey. In the full sample of 157 country years, suspecting the government as
survey sponsor is only correlated with the country's level of democracy at a level of 0.1.
Government sponsorship and self-censorship
Existing findings on the effect of perceived and real survey sponsorship on politically
sensitive items are mixed. In a study of 20 African countries, Zimbalist[21] uses this perceived
survey sponsor to show that a fear of the state, on average, biases survey
responses, and illustrates with case studies why this bias is more pronounced in autocratic
Mozambique than in democratic Cape Verde. In the context of Communist
Poland, Sulek[22] compares responses to political items from surveys run by a government
opinion pollster to responses in independent academic surveys in the mid-1980s.
He finds more critique of the government in the independently run surveys, and
shows that this difference disappears in the late 1980s with the fall of the Communist
regime. In 1970s communist Bulgaria, Welsh[23] finds substantial effects on items of
regime support of being interviewed by a Party cadre instead of a university-affiliated
interviewer. In contrast, Calvo et al.[24] do not find beliefs about the survey sponsor to
induce bias when comparing government-sponsored household surveys to independently
run Afrobarometer surveys in eight African countries. In the opposite direction,
Lei and Lu[25] find that Chinese respondents in fact are slightly more critical of the
regime when the survey enumerator conveys cues of Chinese Communist Party
membership.
To move this debate forward, I expand the scope of previous studies by analysing
responses to more than 40 survey items across 37 countries over 6 survey rounds.
Taking stock of Zimbalist's[26] case studies, I argue that beliefs about survey sponsorship
should have an impact on sensitive questions only to the extent that the respondents
also fear punishment from the authorities. In sum, you should be more likely to falsify
your preferences when you believe the regime will learn what you say and care about it.
Suspicion of the survey sponsor should lead to preference falsification on potentially
sensitive topics, but not on questions that are apolitical in nature. From this I derive the
following hypotheses:
• H1: Respondents who believe the government has commissioned the survey will
give responses that are more favourable to the regime compared to citizens who
believe the survey to be independent if they live in more autocratic countries, but
not if they live in more democratic countries.

• H2: Respondents who believe the government has commissioned the survey will not
answer differently on non-sensitive questions, compared to citizens who believe the
survey to be independent, irrespective of regime type.
Research design
To test if political survey items suffer from greater sensitivity bias in authoritarian contexts,
I employ a simple research design that compares respondents who think the government
sent the survey enumerator to interview them to those who believe the
enumerator works for an independent research organization. I then analyse if these
two groups of respondents answer systematically differently to a set of 41 sensitive
(H1) and non-sensitive (H2) survey questions, and test if this difference is conditioned
by the climate of political repression.

For example, if respondents who believe the government has sent the enumerator
are significantly and substantively more likely to indicate that they trust the President
than those who believe the enumerator works for an independent research organization
when they live in a more autocratic country, but not when they live in a
more democratic country, this would be indicative of H1. If, in contrast, there is no
difference between the suspecting and non-suspecting respondents for trust in their
relatives regardless of regime type, this would be indicative of H2. The level of democracy
functions as a proxy for the "fear-of-the-government" mechanism theorized to
induce self-censorship.
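To fix ideas, the following is a purely illustrative sketch of the group comparison in Python; the file name (merged_afrobarometer_vdem.csv), the column names (trust_president, spons, dem), and the 0.5 cut-point on the democracy score are assumptions made for illustration, not the article's actual data or coding.

```python
import pandas as pd

# Hypothetical merged respondent-level file: trust_president (0-3),
# spons (1 = suspects government sponsor, 0 = believes independent),
# dem (V-Dem electoral democracy score).
df = pd.read_csv("merged_afrobarometer_vdem.csv")

# Split countries at an arbitrary cut-point, for illustration only.
df["regime"] = df["dem"].ge(0.5).map(
    {True: "more democratic", False: "more autocratic"}
)

# Mean trust by regime type and sponsorship belief, and the gap between
# suspecting and non-suspecting respondents within each regime type.
gap = (
    df.dropna(subset=["trust_president", "spons"])
      .groupby(["regime", "spons"])["trust_president"]
      .mean()
      .unstack("spons")
)
gap["difference"] = gap[1] - gap[0]  # suspecting minus non-suspecting
print(gap)
```

Under H1, the "difference" column would be positive for more autocratic countries and near zero for more democratic ones; the multilevel models below formalize this comparison.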
In assessing the potential sensitivity of question items, I rely on Blair, Coppock, and
Moor's[27] social referent theory of sensitivity bias, which states that we should expect
sensitivity bias when all four of the following are present: the respondent (1) has a
social referent in mind when responding; (2) believes the referent can infer his/her
response; (3) has a perception of the referent's preferred response; and (4) has a perception
that failure to provide the preferred response would entail a cost. In this application,
the state is the social referent (1). Perceived survey sponsor indicates if the
respondent believes that the referent can know the respondent's answer (2). Respondents
are assumed to know the referent's preferred response (3). Lastly, the level of
democracy works as a proxy for whether or not failing to provide the preferred
response would entail costs to the respondent (4). This setup predicts all
items probing for an evaluation of the state or state institutions to be sensitive,
whereas evaluations of private or non-state actors and institutions are not (see Table
1). Most items are clear cut, but a brief discussion is warranted on the potential sensitivity
of questions regarding preference for democracy, army rule, and traditional
leaders, where it may be more difficult for respondents to know the referent's preference.

First, it is possible that respondents do not perceive it sensitive to report that they
prefer democracy if they live in autocratic countries that frequently (ab)use the term
democracy to describe their rule or even incorporate it in their name, such as in the
People's Democratic Republic of Algeria. Second, given that the vast majority of autocratic
regimes in the sample are not ruled by military juntas, disagreement with the
proposition of Army rule should generally be non-sensitive. This should not be the
case in Egypt in 2015, where I expect the question to be particularly sensitive in the
aftermath of the 2013 military coup. This is also what the data show: in Algeria perceived
survey sponsorship has no bearing on reported preference for democracy,
and in Egypt perceived survey sponsorship is a particularly strong predictor of acceptance
of military rule. Lastly, the degree to which traditional chiefs are included in the
governing structure varies substantially between and within states,[28] which makes a
general prediction of the referent's preferred answer difficult.
Table 1. List of dependent variables by category and potential sensitivity.

Category     Variable                     Sensitive
TRUST        President/Prime minister    Yes
             Member of Parliament (MP)   Yes
             Local government            Yes
             Ruling party                Yes
             Opposition party            Yes
             Electoral commission        Yes
             Police                      Yes
             Courts                      Yes
             Army                        Yes
             Tax officials               Yes
             Government news             Yes
             Government broadcasting     Yes
             Traditional leaders         Possibly
             Independent news            No
             Independent broadcasting    No
             Religious institutions      No
             Neighbours                  No
             Relatives                   No
             Vendors                     No
             Most people                 No
CORRUPTION   President/Prime minister    Yes
             Member of Parliament (MP)   Yes
             Local government            Yes
             Police                      Yes
             Courts                      Yes
             Bureaucracy                 Yes
             Tax authority               Yes
             Increase in past year       Yes
             Traditional leaders         Possibly
             Businesses                  No
             Religious institutions      No
             NGOs                        No
APPROVAL     President/Prime minister    Yes
             Member of Parliament        Yes
             Local Government            Yes
             Traditional leaders         Possibly
VALUES       Prefer democracy            Yes
             One-party rule              Yes
             Strong-man rule             Yes
             Army rule                   Possibly

This design can inform us about variation in the level of self-censorship, but it does
not allow estimates of the absolute level of self-censorship at hand. Even among
respondents who believe that the survey truly is independent, self-censorship may
be taking place. They may still be wary that the authorities can use the survey to trace
unsanctioned opinions to an individual, a neighbourhood, or a village. To the extent
that respondents adopt a better-safe-than-sorry approach, the overall response bias
will be larger, as this would reduce between-group differences. The results reported
in this article include this potential built-in downward bias of the estimates. Bearing
in mind that the absolute levels of bias cannot be established, the findings do show
that between-group differences are clear and meaningful.
One assumption of the research design is that the suspecting respondents in autocracies
and democracies do not differ on any dimensions other than those I can
account for in the analysis. This would be violated if, for example, regime supporters
in autocracies (but not in democracies) are more likely to believe the regime is powerful
and therefore also more likely to sponsor the survey. I cannot test this assumption,
but the fact that state capacity does not exhibit a relationship with sponsorship belief
offers an indication that the assumption is not violated.
Another assumption is that respondents' belief about survey sponsorship is stable
throughout the survey, and not formed towards the end of the survey after having
answered the potentially sensitive items. Given that the question is always asked last,
this assumption is not testable.
Lastly, the research design does not allow me to determine whether the effects stem
from bias caused by believing that the government has sent the enumerator or by
believing that the enumerator is from an independent organization, or from a combination
of the two. There are, however, clear theoretical reasons to suspect that the bias
stems from the former.
Data and modelling strategy
Individual-level data are taken from the second, third, fourth, fifth, sixth and seventh
rounds of the Afrobarometer.[29] I match the various survey rounds with country-level
data for the corresponding year from the Varieties of Democracy data set.[30] This
pooled dataset provides more than 228,000 respondents nested in 157 country years,
nested in 37 countries.
Dependent variables
To test the autocratic bias hypothesis, I employ all variables probing for trust, corruption
perception and approval rating available in the data sets, as well as a handful of questions
on democratic values. In total I have 41 dependent variables (DVs), most of which are
theoretically sensitive and a smaller share that are not. The guiding premise is that questions
evaluating the state or state institutions are potentially sensitive whereas evaluations
of private or non-state actors and institutions are not. The DVs fall under four
categories: Trust, Corruption, Approval, and Values, and are listed in Table 1.
For the Trust DVs, respondents are asked "How much do you trust each of the following,
or haven't you heard enough about them to say?" and are provided the following
answer options: "Not at all"; "Just a little"; "I trust them somewhat"; "I
trust them a lot"; and "Don't know/Haven't heard enough". In the main analysis I drop
respondents who chose "Don't know/Haven't heard enough" (DK), resulting in a
4-point scale. The only exception is trust in Most people, with the binary choices
"Most people can be trusted" and "Must be very careful".
The Corruption questions ask respondents "How many of the following people do
you think are involved in corruption, or haven't you heard enough about them to say?",
with the answer options: "None"; "Some of them"; "Most of them"; "All of them"; and "Don't
know". Dropping DKs in the main analysis gives a 4-point scale. The one exception is
Increase in past year, which is on a 5-point scale ranging from "Increased a lot" to
"Decreased a lot".
For the Approval DVs, respondents are asked "Do you approve or disapprove of the
way the following people have performed their jobs over the past twelve months, or
haven't you heard enough about them to say?", with four answer options ranging from
"Strongly disapprove" to "Strongly approve", plus a "Don't know" option.
For the Values DV Prefer democracy, respondents are asked which statement is
closest to their own opinion: "For someone like me, it doesn't matter what kind of government
we have"; "In some circumstances, a non-democratic government can be preferable";
and "Democracy is preferable to any other kind of government", resulting in a
3-point scale with higher values indicating a preference for democracy. For the three
other DVs, One-party, Army, and Strong-man rule, respondents are asked "There are
many ways to govern a country. Would you disapprove or approve of the following
alternatives?", with the options ranging from "Strongly disapprove" to "Strongly
approve", including a neutral middle option, resulting in a 5-point scale.
Note that not all DVs are available for all countries and all rounds. The number of
respondents and country years available for each of the 41 DVs is included in the
regression tables in Appendix A.
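As a concrete illustration of the recoding described above, a minimal Python sketch follows, mapping a Trust item to the 4-point scale and dropping DK responses. The column name and value labels are hypothetical stand-ins, not the actual Afrobarometer codebook labels.

```python
import pandas as pd

# Toy data standing in for a raw Afrobarometer trust item.
df = pd.DataFrame({
    "q_trust_president": [
        "I trust them a lot",
        "Don't know/Haven't heard enough",
        "Just a little",
        "Not at all",
    ]
})

trust_scale = {
    "not at all": 0,
    "just a little": 1,
    "i trust them somewhat": 2,
    "i trust them a lot": 3,
}

# Map labels to 0-3; "Don't know/Haven't heard enough" falls through to NaN.
df["trust_president"] = (
    df["q_trust_president"].str.strip().str.lower().map(trust_scale)
)
df = df.dropna(subset=["trust_president"])  # drop DK respondents for this DV
print(df)
```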
Independent variables
The main independent variable, Survey sponsor, is generated from the question item
"Just one more question: Who do you think sent us to do this interview?" Respondents
who indicate that they believe the enumerator was sent by the local, regional, or
national government, or any of its agencies, are coded as 1, while those who believed
the survey to be commissioned by a non-governmental organization (NGO), a university,
a research company, etc. are coded as 0. Of the complete sample, 49% of respondents
reported that they believed that the government was behind the survey, while
36% believed it to be independent and 15% said that they did not know. The share
suspecting the government as sponsor varies at the country-year level, from 6.5% in
Liberia in 2015 to 82% in Madagascar in 2005. Figure 6 in Appendix F displays the
share of suspecting respondents in each country over time. Regressing the share of suspecting
respondents on a number of plausible predictors shows only corruption to be
associated with a larger share of suspicion. State capacity, human rights abuses and the
level of democracy are not associated with suspicion at the country level. The latter
is important for the research design, and it is reassuring that the between-country variation
of suspicion only correlates with the level of democracy at a level of 0.11. The "Don't
know" group of respondents is excluded in the main analysis but is coded as 1
together with the "suspecting" group in robustness checks of all model specifications,
the rationale being that not knowing who the sponsor is, is likely to also induce preference
falsification in repressive settings, albeit to a lesser degree than suspecting the
government.
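A minimal sketch of this coding rule in Python might look as follows; the category labels and column name are illustrative assumptions rather than the actual Afrobarometer response codes.

```python
import numpy as np
import pandas as pd

# Hypothetical response labels; the actual codebook categories differ.
GOVERNMENT = {"national government", "regional government", "local government"}
INDEPENDENT = {"ngo", "university", "research company", "international organization"}

def code_sponsor(answer: str) -> float:
    """1 = believes a government body sent the interviewer,
    0 = believes the survey is independent, NaN = 'Don't know'."""
    a = str(answer).strip().lower()
    if a in GOVERNMENT:
        return 1.0
    if a in INDEPENDENT:
        return 0.0
    return np.nan  # "Don't know" is excluded from the main analysis

df = pd.DataFrame({"sent_by": ["National government", "University", "Don't know"]})
df["spons"] = df["sent_by"].map(code_sponsor)
print(df)
```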
A note is warranted on the issue of item non-response: are people who believe that
the enumerator is asking a question on behalf of the government more likely to refuse
to answer or simply state that they "Don't know" when asked political questions in
autocracies? I find no evidence for that. This is consistent with theories of preference
falsification, which propose that fearful respondents will resort to the safest possible
answer.[31] Saying that you do not know or refusing to answer whether you, for
example, trust the president can raise suspicion and risk signalling dissent (see Appendix
E, Tables 21, 22 and 23 for details).
While there is no correlation between the country-level averages of suspecting that
the government sponsored the survey and countries' levels of democracy, the next
issue is to examine whether there is something particular about those individuals
who suspect the survey sponsor that also makes them more likely to state high trust
in the key leader, etc. than the population at large. Note that it would need to be a
peculiarity that alters their behaviour only in more autocratic countries and not in
more democratic countries. So, who suspects the government? The balance of individual-level
covariates for the different groups of respondents shows minor differences. If
we split the sample into autocratic and democratic countries using V-Dem's categorical
Regimes of the World index,[32] and look at the covariate balance between the two
groups of respondents, the pattern of differences is the same. For example, in the
democratic sample, the mean level of poverty for those who suspect the government
as survey sponsor is 0.14 higher than for those who believe in the survey's independence;
in the autocratic sample, this difference is 0.11 (see Appendix D, Table 18).
The differences in means between the two groups are similar in both samples, and
in the same direction for all demographic variables. There are no observable individual
characteristics that can explain why only those who live in authoritarian countries and
who suspect the government to be behind the survey are more likely to provide regime-friendly
responses to politically sensitive questions. As a robustness test, I follow Calvo
et al.[33] and use propensity score matching to account for selection into treatment (see
Appendix G, Tables 25 and 26 for details). Matching on pre-treatment covariates does
not change the results.
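As a rough sketch of what such a propensity-score matching check can look like (using scikit-learn, with the same hypothetical file and column names as the earlier sketches, and not necessarily the exact procedure of Calvo et al.):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("merged_afrobarometer_vdem.csv")  # hypothetical merged file
covariates = ["age", "female", "education", "urban", "lived_poverty"]
d = df.dropna(subset=covariates + ["spons"]).reset_index(drop=True)

# 1. Estimate the propensity to suspect the government as survey sponsor.
ps = LogisticRegression(max_iter=1000).fit(d[covariates], d["spons"])
d["pscore"] = ps.predict_proba(d[covariates])[:, 1]

# 2. Match each suspecting respondent to the nearest non-suspecting one
#    on the propensity score (one-to-one nearest-neighbour matching).
treated = d[d["spons"] == 1]
control = d[d["spons"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched = pd.concat([treated, control.iloc[idx.ravel()]])

# The outcome models are then re-estimated on `matched`.
```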
In all model specifications, I control for the following set of individual-level control
variables: age, gender, education level, urban/rural residency, and for an index of lived
poverty.[34] To avoid post-treatment bias, I do not include variables that are themselves
sensitive items and hence might be affected by sponsorship belief. For example,
whether you believe the country is moving in "the right or wrong direction" may
predict whether you trust the key leadership figure, but it may also be affected by sponsorship
belief.
Country level
As a proxy for the perceived risk of repressive actions at the country level, I employ the
Varieties of Democracy's Electoral Democracy Index[35] (see Coppedge et al., and Marquardt
and Pemstein[36] for detail on aggregation rules and methodology). The rationale
for using a highly aggregated index of the level of democracy is to be able to test
whether existing studies that draw conclusions from comparative survey data from
countries at vastly different levels of democracy are suffering from biases. To get
closer to the mechanism of fear that the mediating variable is theorized to induce, I substitute
the Electoral Democracy Index with a sub-index of Freedom of Expression for
all models (see Appendix B). The two indexes are highly correlated and it is no surprise
that the results hold to this substitution. I note that the theorized patterns are in fact
even more pronounced using freedom of expression as the mediating variable. This
makes sense, as states that have a low score on the democracy index partly due to
low state capacity may not invoke the same fear as states that have capacity and choose
to use it to suppress political opponents.
Estimation
Because respondents are not randomly distributed but clustered within countries, I
employ multilevel models that take these data hierarchies into account and allow
testing for the effect of a two-level interaction between perceived survey sponsor (individual
level) and level of democracy (country-year level). The model is a linear random
slope model. The specification of the baseline multilevel model is as follows:
$$y_{ic} = \gamma_{00} + \gamma_{1}\, dem_{c} + \gamma_{2}\, spons_{ic} + X_{ic}\lambda + Z_{c}\delta + U_{0c} + R_{1c} + \eta_{ic} \qquad (1)$$
Adding the two-level interaction term between individual-level suspicion of survey
sponsor and country-level democracy, we get the full multilevel model specification
(see Aguinis et al.[37]):
$$y_{ic} = \gamma_{00} + \gamma_{1}\, dem_{c} + \gamma_{2}\, spons_{ic} + \gamma_{3}\,(dem_{c} \times spons_{ic}) + X_{ic}\lambda + Z_{c}\delta + U_{0c} + R_{1c} + \eta_{ic} \qquad (2)$$
where $y_{ic}$ is the dependent variable for individual $i$ in country $c$, $\gamma_{00}$ is the average
individual-level intercept, $dem_{c}$ is the country-level democracy, $spons_{ic}$ is an individual's
perception of survey sponsor, $X_{ic}$ and $Z_{c}$ are vectors of individual- and country-level
controls, $U_{0c}$ is the intercept variance, $R_{1c}$ is the slope variance (for $spons_{ic}$), and $\eta_{ic}$ is
the individual-level error term. I do not discuss nor present model building, the slope,
and intercept variance in the main text and tables. In short, the intercept variance is
reduced in each step of building up the models and the slope variance is reduced when
introducing the two-level interaction, i.e. it explains some of the between-country variance
(for details, see replication code). To facilitate easy interpretation, I proceed by graphing
the interaction effects and visualizing the effects of perceived sponsor on each dependent
variable, conditioned on the level of democracy in order to contrast more autocratic and
more democratic countries. I de-mean and standardize all outcome variables by each
country and survey round so that effect sizes are in country-round-specific standard deviation
units. Thus, a coefficient of 0.25 indicates that believing you are interviewed by a
government agent, rather than an independent researcher, is associated with a quarter
of a standard deviation change in the dependent variable.
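For concreteness, a sketch of this pipeline in Python follows, using statsmodels' MixedLM as a stand-in for whatever mixed-model software the replication code actually uses; the file and column names are hypothetical. Note that under equation (2) the marginal effect of sponsorship belief at a given democracy score is $\gamma_{2} + \gamma_{3}\, dem_{c}$, which is what the interaction graphs trace out.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("merged_afrobarometer_vdem.csv")  # hypothetical merged file

# De-mean and standardize the outcome within each country-round so that
# effect sizes are in country-round-specific standard deviation units.
g = df.groupby(["country", "round"])["trust_president"]
df["y"] = (df["trust_president"] - g.transform("mean")) / g.transform("std")

d = df.dropna(subset=["y", "spons", "dem"])

# Linear random-slope model: random intercept and a random slope for spons
# across country-years, plus the cross-level interaction spons x dem (eq. 2).
model = smf.mixedlm(
    "y ~ spons * dem + age + female + education + urban + lived_poverty",
    data=d,
    groups="country_year",
    re_formula="~spons",
)
fit = model.fit()

# Marginal effect of sponsorship belief at a given democracy score:
# gamma_2 + gamma_3 * dem.
for dem in (0.2, 0.5, 0.8):
    me = fit.params["spons"] + fit.params["spons:dem"] * dem
    print(f"dem={dem}: marginal effect of spons = {me:.3f}")
```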
Empirical findings
I first look at the Trust variables. Figure 1 shows the marginal effects plots for all 20
trust variables. The bar charts at the bottom of each graph display the distribution
of country-years (level-2 units). For simplicity, detailed regression output is not
displayed here, but can be found in the regression tables in Appendix A: Tables
2-8. All of the following DVs have substantial and significant interaction effects in
the expected direction: President; Members of Parliament; Local government; Ruling
party; Opposition party (opposite direction from the others); Electoral commission;
Police; Courts; Army; Traditional leaders; Government News; and Government Broadcasts.
In contrast, the following trust variables are not biased along regime type:
Independent News; Independent Broadcasts; Tax officials; Religious institutions; Neighbours;
Relatives; Vendors; Most people.
The results are in line with H1: sponsorship belief biases responses to sensitive
questions in favour of the regime in more autocratic countries but not in more democratic
countries. To take an example, respondents who believe it is the government who
is asking are estimated to say that they trust their President 0.2 standard deviations more
than those who believe an independent institute is asking, when they live in autocratic
countries. An effect of that magnitude is expected in countries with an electoral
democracy score in the range of 0.2-0.4, such as Burundi, Eswatini, Egypt and Zimbabwe.
The estimated difference is low or indistinguishable from zero in more democratic
countries at the top of the scale, such as Ghana, Cape Verde and South Africa. Only
one sensitive variable does not exhibit the hypothesized pattern: reported trust in Tax
officials is only marginally affected by sponsorship belief and the effect does not depend
on the level of democracy.

Figure 1. Estimated effect of sponsor perception (government) on trust across level of democracy.
The results are largely in support of H2: respondents who believe the government
has commissioned the survey do not answer differently to non-sensitive questions,
compared to citizens who believe the survey to be independent, irrespective of
regime type. Save for trust in Traditional leaders, responses to non-sensitive questions,
such as trust in Neighbours, Relatives, Vendors, Most people, etc. are unaffected by sponsorship
beliefs and regime type.
Moving on to the Corruption variables, all of the following have substantial and significant
interaction effects in the expected direction: President; Members of Parliament; Local government;
Police; Courts; Army; Increase in past year; and Businesses. In contrast, the following
variables are not biased along regime type: Bureaucracy; Tax officials; Traditional
leaders; Religious institutions; and NGOs (see Appendix A, Tables 5 and 6 for details).
Figure 2 shows how respondents' reported corruption perceptions are systematically
biased across the level of democracy. All else equal, respondents who believe it is the government
that is asking are less likely to answer that corruption is widespread among
various state institutions, compared to those who believe the question is coming from a
research organization. With the exception of Bureaucracy and Tax authority, the patterns
for all sensitive variables support H1. Similarly, the absence of an effect with regards to perceptions
of Traditional leaders, Religious institutions and NGOs is in line with H2. Yet, the
apparent interaction effect for Businesses goes against the expectation of H2.

Figure 2. Estimated effect of sponsor perception (government) on corruption perceptions across level of democracy.
The significant effects of the four sensitive Approval variables offer additional
support for H1 (see Figure 3). Believing the government to sponsor the survey substantially
increases reported approval of the President, Member of parliament, and Local
government in more autocratic countries, while having an insignificant effect as the
level of democracy surpasses values around 0.7. Thus, countries at a level of democracy
on par with Benin and Tunisia are not expected to see an upward bias. While the interaction
effect of the supposedly non-sensitive item Traditional leader is insignificant, it
does exhibit a pattern counter to H2. The relationship is consistent with how respondents
react to Traditional leaders with regards to trust and corruption and may indicate
that it is in fact a sensitive topic.

Figure 3. Estimated effect of sponsor perception (government) on approval across level of democracy.
Moving on to democratic values, Figure 4 shows how respondents' preference for
democracy is systematically biased across the level of democracy. All else equal, respondents
who believe it is the government that is asking are around one-tenth of a standard
deviation less likely to answer that democracy is a preferable system, compared to those
who believe the question is coming from a research organization. Studies concerned with
public demand for democracy[38] likely underestimate the true demand due to this downward
bias in more autocratic countries. We see a similar interaction effect with regards to
acceptance of One-party rule, with over-reporting in more autocratic countries when the
government is the perceived sponsor. The effect of survey sponsor is uniform across
regime types for acceptance of Army rule and Strong-man rule. The first is unsurprising
given that most autocratic regimes in the sample are not military regimes, but the latter is
counter to my expectation.

Figure 4. Estimated effect of sponsor perception (government) on values across level of democracy.
(How) does ethnicity matter?
Given that ethnicity is politically salient in many of the countries in the sample, and
that this article is concerned with political questions, I test the influence of the respondent's
and enumerator's ethnic identities. This is especially relevant since Adida et al.[39]
show how a large number of the Afrobarometer survey items suffer from response bias
stemming from the ethnic (mis)match of the enumerator and the respondent. In particular,
the authors show that being interviewed by a non-coethnic generates more
socially desirable responses, and, pertinent to this study, that respondents interviewed
by an enumerator who is a coethnic of the country's key leader are more
likely to give regime-friendly responses, such as approving of the government's handling
of the economy and expressing lower trust in opposition parties. An alternative explanation
is thus that the autocratic bias put forward in this article is driven by an ethnically
induced social desirability bias.
This is not the case: the models are robust to controlling for the ethnic match
between the enumerator and respondent, as well as for the enumerator being a coethnic
with the key leader. Moreover, by replicating a number of Adida et al. models and
adding an interaction between the variable on leader-interviewer coethnicity and level
of democracy, I show in Figure 5 that the induced bias appears to operate through the
logic of the autocratic bias (for full regression tables see Appendix C, Tables 16 and 17):
i.e. the bias is primarily a concern in more autocratic countries and generally not
in democratic countries. These interaction graphs display the marginal effect of being
interviewed by a non-coethnic enumerator who is coethnic with the country's key
leader (potential ethnic threat) for: trust in the president; the ruling party; opposition
parties; vendors; corruption perceptions of the office of the president; approval of the president;
evaluation of the government's handling of the economy; and approval for strong-man
rule, conditioned on the level of democracy. The models include all controls used
throughout the article and serve as a replication of four models included in Adida
et al.'s study (see figure 7B in their paper).

Figure 5. Estimated effect of ethnic threat across level of democracy.
Discussion
The empirical findings suggest that more repressive regimes indeed enjoy the autocratic
bias. Autocratic countries receive inflated evaluations of trust, corruption,
approval and regime-friendly values. This becomes particularly clear looking at the
stark contrast between the otherwise identical questions on trust in government newspapers
and independent newspapers, as well as government broadcasting services and
independent broadcasting services (see Figure 1, row 4). Sponsorship belief impacts
reported trust in the government's information channels in autocratic countries but
not in more democratic countries. Trust in the independent information channels
remains unaffected by sponsorship belief irrespective of regime type.
These results have implications for the comparative use of survey data. As the bias is
consistent across a large number of survey items commonly used in comparative politics,
previous studies that rely on survey research comparing countries with varying
regime types may need re-evaluation.[40] Future work would do well to consider the
obstacle self-censorship poses to the ability to compare public opinion data across
regimes, and whenever possible test for its presence. Indirect questioning techniques,
such as the list experiment or the randomized response technique, are powerful tools to
estimate sensitive opinions but are not without cost. The indirect approaches require a
larger sample,[41] are cognitively taxing, and sacrifice individual-level data for aggregated
estimates. Because of these drawbacks, researchers will often have to rely on
direct questions for sensitive topics. When direct questioning is necessitated and
when information on perceived sponsorship is unavailable, researchers need to be
explicit about the assumptions they are making with regard to self-censorship and
how a violation of these may alter their inferences. This is especially prudent when
the potential bias works in favor of the hypothesis being tested.
In addition to across-regime comparative work, the autocratic bias can distort
results of longitudinal studies of approval ratings of leaders and ruling parties,[42] and
demand for democracy[43] within the same autocratic regime. Even in the absence of
significant political changes, the share of respondents that perceive the state to
sponsor the survey can differ between survey rounds, leading to seemingly dramatic
public opinion shifts. Mozambique provides an illustrative example of this. Between
2012 and 2015, the share of respondents that reported trust in the ruling party
dropped 18 percentage points (from 74 to 56). Disregarding the autocratic bias, one
conclusion might be that support is rapidly withering for the dominant one-party
regime. However, the share of respondents that perceived the interviewer to be sent by
the government was a full 28 percentage points lower in 2015 than in 2012. Given that in
2012, all else equal, that group of respondents was 2.4 times more likely to indicate
trust in the ruling party, the high levels of trust in FRELIMO in 2012 are likely a
product of the autocratic bias. This illustrates that it is important, and also possible,
to increase the perceived independence of a survey. In the Mozambique
case, two different firms were responsible for the field work in 2012 and 2015. It is
possible that the reputation of firms, names, logos, attire, etc. influence perceived independence.
Given the large variation in sponsorship perception between and within
countries (see Appendix F, Figure 6), it is clear that sponsorship belief is not static,
and future research into how to minimize respondents' suspicion will help to
advance survey research in autocracies.
Conclusion
This article shows that respondents' belief about who has commissioned the survey
influences answers on politically sensitive questions in more autocratic countries
while having no impact on responses in very democratic countries. In more politically
repressive environments, respondents who believe (fear) that the government has sent
the interviewer are more likely to state that they trust the country's leader or ruling
party, and less likely to state that they believe rulers and state institutions to be
corrupt, as compared to respondents who think that the interviewer works on
behalf of an independent research organization.
This study provides a significant contribution to comparative public opinion
research in general and to the study of political behaviour and public opinion in
authoritarian regimes in particular. I provide evidence that a large set of commonly
used survey items, ranging from regime legitimacy and popular support for incumbents
to corruption perceptions and preferences for democracy, suffer from systematic
bias. Sensitive questions evaluating rulers suffer from a larger bias than do questions
evaluating those exercising public power. This is evident from the larger effect on trust
in, or corruption perceptions of, the key leadership figure or ruling party compared
to the effect vis-à-vis the bureaucracy and tax authorities. Innocuous apolitical questions,
such as trust in your neighbours or relatives, provide perfect tests for the autocratic
bias hypothesis, as one would expect no difference between the two groups of
respondents' answers, no matter the level of political repression. Indeed, there is none.
The autocratic bias is not only of methodological concern. Insofar as good-governance
or democracy-promotion initiatives are informed by survey items measuring corruption
perceptions or demand for democracy, this bias is also of direct political
relevance. The usefulness of a survey item probing respondents' sense of anonymity
should be evident. Given the low implementation cost, such an item should be
added to surveys with the ambition to be comparable across countries where the
doubt of anonymity is likely to produce different response behaviour. Save for the
Afrobarometer, large-scale public opinion surveys do not include a similar item,[44]
and thus fail to provide data users with an easy and effective method of estimating
the varying sensitivity of survey items across countries. Studies using data where
such a variable is available should ideally include an interaction term in the analysis
to determine the presence of systematic bias. In the presence of bias, running the analysis
using only the sub-sample of respondents who believe the survey to be independent
provides a first robustness check for the problem of preference falsification. Having
estimated the bias for a set of sensitive items, another avenue forward would be to construct
reliability weights to enable the researcher to account for biases in the analysis
while retaining the full sample.
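As an illustration of that first robustness check, a minimal sketch follows; the outcome model, file name, and column names are hypothetical stand-ins, and the point is simply to re-estimate the analysis on the non-suspecting sub-sample.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("merged_afrobarometer_vdem.csv")  # hypothetical merged file

# Keep only respondents who believe the survey to be independent.
cols = ["trust_president", "dem", "age", "female", "education", "urban",
        "lived_poverty", "country_year"]
sub = df[df["spons"] == 0].dropna(subset=cols)

# Re-run a (stand-in) outcome model on the non-suspecting sub-sample,
# clustering standard errors by country-year.
fit = smf.ols(
    "trust_president ~ dem + age + female + education + urban + lived_poverty",
    data=sub,
).fit(cov_type="cluster", cov_kwds={"groups": sub["country_year"]})
print(fit.summary())
```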
Notes
1. The results summarized in this paragraph are drawn from the Afrobarometer survey round 7 (www.afrobarometer.org).
2. E.g. Treisman, "Presidential Popularity in a Hybrid Regime"; Geddes and Zaller, "Sources of Popular Support"; Stockmann and Gallagher, "Remote Control: How the Media Sustain Authoritarian Rule"; Weyland, "A Paradox of Success? Determinants of Political Support".
3. E.g. Rose and Mishler, "Comparing Regime Support in Non-Democratic and Democratic Countries"; Gilley, "The Determinants of State Legitimacy"; Booth and Seligson, The Legitimacy Puzzle in Latin America; Mattes and Bratton, "Learning About Democracy in Africa"; Magalhães, "Government Effectiveness and Support for Democracy"; Chang and Kerr, "An Insider-Outsider Theory of Popular Tolerance".
4. Kuran, Private Truths, Public Lies.
5. Linz, Totalitarian and Authoritarian Regimes.
6. Jiang and Yang, "Lying or Believing? Measuring Preference Falsification"; Kalinin, "The Social Desirability Bias in Autocrat's Electoral Ratings"; Robinson and Tannenberg, "Self-Censorship of Regime Support in Authoritarian States".
7. Frye et al., "Is Putin's Popularity Real?".
8. Lei and Lu, "Revisiting Political Wariness in China's Public Opinion Surveys"; Tang, Populist Authoritarianism: Chinese Political Culture.
9. Shen and Truex, "In Search of Self-Censorship".
10. See Adida et al., "Who's Asking? Interviewer Coethnicity Effects".
11. Tourangeau and Yan, "Sensitive Questions in Surveys".
12. Kuran, Private Truths, Public Lies.
13. Schedler, The Self-Restraining State: Power and Accountability.
14. Garcia-Ponce and Pasquale, "How Political Repression Shapes Attitudes Toward the State".
15. Kalinin, "The Social Desirability Bias in Autocrat's Electoral Ratings".
16. Frye et al., "Is Putin's Popularity Real?".
17. Shockley et al., "Exaggerating Good Governance".
18. Jiang and Yang, "Lying or Believing? Measuring Preference Falsification".
19. Robinson and Tannenberg, "Self-Censorship of Regime Support in Authoritarian States".
20. Bratton, Bhavnani, and Chen, "Voting Intentions in Africa: Ethnic, Economic or Partisan?"; Bratton and Masunungure, "Voting Intentions in Zimbabwe".
21. Zimbalist, "'Fear-of-the-State Bias' in Survey Data".
22. Sulek, "O rzetelnosci i nierzetelnosci badan ankietowych w Polsce: Proba analizy empirycznej" [On the reliability and unreliability of survey research in Poland: An attempt at empirical analysis].
23. Welsh, "Survey Research and Public Attitudes in Eastern Europe and the Soviet Union".
24. In Calvo, Razafindrakoto, and Roubaud, "Fear of the State in Governance Surveys? Empirical Evidence", the authors address bias from survey sponsor in general, and not in autocratic regimes in particular. Out of the 8 countries included in the study, 3 are clearly autocratic.
25. Lei and Lu, "Revisiting Political Wariness in China's Public Opinion Surveys".
26. Zimbalist, "'Fear-of-the-State Bias' in Survey Data".
27. Blair, Coppock, and Moor, "When to Worry About Sensitivity Bias".
28. Baldwin, "When Politicians Cede Control of Resources".
29. Afrobarometer, All Countries, Rounds 2, 3, 4, 5, 6, 7.
30. Coppedge et al., V-Dem Dataset v10.
31. Kuran, Private Truths, Public Lies.
32. Lührmann, Tannenberg, and Lindberg, "Regimes of the World (RoW): Opening New Avenues".
33. Calvo, Razafindrakoto, and Roubaud, "Fear of the State in Governance Surveys? Empirical Evidence".
34. For a discussion of the Lived Poverty Index see Mattes, "The Material and Political Bases of Lived Poverty in Africa".
35. Coppedge et al., V-Dem Dataset v10; Teorell et al., "Measuring Polyarchy Across the Globe, 1900-2017".
36. Marquardt and Pemstein, "IRT Models for Expert-Coded Panel Data".
37. Aguinis, Gottfredson, and Culpepper, "Best-Practice Recommendations for Estimating Cross-Level Interaction Effects Using Multilevel Modelling".
38. E.g. Mattes and Bratton, "Learning About Democracy in Africa: Awareness, Performance, and Experience"; Magalhães, "Government Effectiveness and Support for Democracy"; and Claassen, "Does Public Support Help Democracy Survive?".
39. Adida et al., "Who's Asking? Interviewer Coethnicity Effects".
40. E.g. Rose and Mishler, "Comparing Regime Support in Non-Democratic and Democratic Countries"; Gilley, "The Determinants of State Legitimacy"; Booth and Seligson, The Legitimacy Puzzle in Latin America; Mattes and Bratton, "Learning About Democracy in Africa"; Magalhães, "Government Effectiveness and Support for Democracy"; Chang and Kerr, "An Insider-Outsider Theory of Popular Tolerance for Corrupt Politicians".
41. See Blair, Coppock, and Moor, "When to Worry About Sensitivity Bias" for a thorough review of the trade-off between direct and indirect measurement approaches.
42. Treisman, "Presidential Popularity in a Hybrid Regime"; Geddes and Zaller, "Sources of Popular Support for Authoritarian Regimes".
43. Robbins and Tessler, "The Effect of Elections on Public Opinion Toward Democracy".
44. An exception is the 2016 AmericasBarometer in Ecuador, which asked about the perceived survey sponsor.
Acknowledgements
I am grateful to the anonymous reviewers, Staffan I. Lindberg, Ellen Lust, Rachel Sigman, Anja Neundorf,
Sofia Axelsson, Anna Persson and participants in the 2017 V-Dem Annual Research Conference,
the 2018 Quality of Government Research Conference and the 2017 MPSA Annual Meeting for valuable
comments and suggestions.
Disclosure statement
No potential conflict of interest was reported by the author.
Funding
This work was supported by Vetenskapsrådet [439-2014-38].
Notes on contributor
Marcus Tannenberg is a PhD candidate at the V-Dem Institute at the Department of Political Science
at the University of Gothenburg. His research on list experiments, self-censorship, regime classification
and legitimacy has been published in Research & Politics, Politics and Governance, and the
European Political Science Review.
ORCID
Marcus Tannenberg http://orcid.org/0000-0003-0077-4711
Bibliography
Adida, Claire L., Karen E. Ferree, Daniel N. Posner, and Amanda Lea Robinson. "Who's Asking? Interviewer Coethnicity Effects in African Survey Data." Comparative Political Studies 49, no. 12 (2016): 1630-1660.
Afrobarometer. All Countries, Rounds 2, 3, 4, 5, 6, 7. 2019. http://www.afrobarometer.org.
Aguinis, Herman, Ryan K. Gottfredson, and Steven Andrew Culpepper. "Best-Practice Recommendations for Estimating Cross-Level Interaction Effects Using Multilevel Modeling." Journal of Management 39, no. 6 (2013): 1490-1528.
Baldwin, Kate. "When Politicians Cede Control of Resources: Land, Chiefs, and Coalition-Building in Africa." Comparative Politics 46, no. 3 (2014): 253-271.
Blair, Graeme, Alexander Coppock, and Margaret Moor. "When to Worry About Sensitivity Bias: A Social Reference Theory and Evidence from 30 Years of List Experiments." American Political Science Review 114, no. 4 (2020): 1297-1315.
Booth, John A., and Mitchell A. Seligson. The Legitimacy Puzzle in Latin America: Political Support and Democracy in Eight Nations. Cambridge: Cambridge University Press, 2009.
Bratton, Michael, Ravi Bhavnani, and Tse-Hsin Chen. "Voting Intentions in Africa: Ethnic, Economic or Partisan?" Commonwealth & Comparative Politics 50, no. 1 (2012): 27-52.
Bratton, Michael, and Eldred Masunungure. "Voting Intentions in Zimbabwe: A Margin of Terror." Afrobarometer Briefing Paper 103, 2012.
Calvo, Thomas, Mireille Razafindrakoto, and François Roubaud. "Fear of the State in Governance Surveys? Empirical Evidence from African Countries." World Development 123 (2019): 104609.
Chang, Eric C. C., and Nicholas N. Kerr. "An Insider-Outsider Theory of Popular Tolerance for Corrupt Politicians." Governance 30, no. 1 (2017): 67-84.
Claassen, Christopher. "Does Public Support Help Democracy Survive?" American Journal of Political Science 64, no. 1 (2020): 118-134.
Coppedge, Michael, John Gerring, Carl Henrik Knutsen, Staffan I. Lindberg, Jan Teorell, David Altman, Michael Bernhard, et al. V-Dem Dataset v9. Varieties of Democracy (V-Dem) Project, 2019. doi:10.23696/vdemcy19.
Frye, Timothy, Scott Gehlbach, Kyle L. Marquardt, and Ora John Reuter. "Is Putin's Popularity Real?" Post-Soviet Affairs 33, no. 1 (2017): 1-15.
Garcia-Ponce, Omar, and Benjamin Pasquale. How Political Repression Shapes Attitudes Toward the
State: Evidence from Zimbabwe.Working Paper, 2015.
Geddes, Barbara, and John Zaller. Sources of Popular Support for Authoritarian Regimes.American
Journal of Political Science 33, no. 3 (1989): 319347.
Gilley, Bruce. The Determinants of State Legitimacy: Results for 72 Countries.International Political
Science Review 27, no. 1 (2006): 4771.
Jiang, Junyan, and Dali L Yang. Lying or Believing? Measuring Preference Falsication from a
Political Purge in China.Comparative Political Studies 49, no. 5 (2016): 600634.
Kalinin, Kirill. The Social Desirability Bias in Autocrats Electoral Ratings: Evidence from the 2012
Russian Presidential Elections.Journal of Elections, Public Opinion and Parties 26, no. 2 (2016):
191211.
Kuran, Timur. Private Truths, Public Lies: The Social Consequences of Preference Falsication.
Cambridge, MA: Harvard University Press, 1997.
Lei, Xuchuan, and Jie Lu. Revisiting Political Wariness in Chinas Public Opinion Surveys:
Experimental Evidence on Responses to Politically Sensitive Questions.Journal of
Contemporary China 26, no. 104 (2017): 213232.
Linz, Juan José. Totalitarian and Authoritarian Regimes. Boulder, CO: Lynne Rienner Publishers,
2000.
Lührmann, Anna, Marcus Tannenberg, and Staan I. Lindberg. Regimes of the World (RoW):
Opening New Avenues for the Comparative Study of Political Regimes.Politics & Governance
6, no. 1 (2018): 6075.
Magalhães, Pedro C. Government Eectiveness and Support for Democracy.European Journal of
Political Research 53, no. 1 (2014): 7797.
Marquardt, Kyle L., and Daniel Pemstein. IRT Models for Expert-Coded Panel Data.Political
Analysis 26, no. 4 (2018): 431456.
Mattes, Robert. The Material and Political Bases of Lived Poverty in Africa: Insights from the
Afrobarometer.In Barometers of Quality of Life Around the Globe, edited by Valerie Møller,
Denis Huschka, and Alex C. Michalos, 161185. Heidelberg, Germany: Springer, 2008.
Mattes, Robert, and Michael Bratton. Learning About Democracy in Africa: Awareness,
Performance, and Experience.American Journal of Political Science 51, no. 1 (2007): 192217.
Moehler, Devra C. "Critical Citizens and Submissive Subjects: Election Losers and Winners in Africa." British Journal of Political Science 39, no. 2 (2009): 345–366.
Robbins, Michael D. H., and Mark Tessler. "The Effect of Elections on Public Opinion Toward Democracy: Evidence from Longitudinal Survey Research in Algeria." Comparative Political Studies 45, no. 10 (2012): 1255–1276.
Robinson, Darrel, and Marcus Tannenberg. "Self-Censorship of Regime Support in Authoritarian States: Evidence from List Experiments in China." Research & Politics 6, no. 3 (2019): 1–9.
Rose, Richard, and William Mishler. "Comparing Regime Support in Non-Democratic and Democratic Countries." Democratization 9, no. 2 (2002): 1–20.
Schedler, Andreas. The Self-Restraining State: Power and Accountability in New Democracies. Boulder, CO: Lynne Rienner Publishers, 1999.
Shen, Xiaoxiao, and Rory Truex. "In Search of Self-Censorship." British Journal of Political Science (2020): 1–13. doi:10.1017/S0007123419000735.
Shockley, Bethany, Michael Ewers, Yioryos Nardis, and Justin Gengler. "Exaggerating Good Governance: Regime Type and Score Inflation among Executive Survey Informants." Governance 31, no. 4 (2018): 643–664.
Stockmann, Daniela, and Mary E. Gallagher. "Remote Control: How the Media Sustain Authoritarian Rule in China." Comparative Political Studies 44, no. 4 (2011): 436–467.
Sułek, Antoni. "O rzetelności i nierzetelności badań ankietowych w Polsce: Próba analizy empirycznej" [On the reliability and unreliability of survey research in Poland: An attempt at an empirical analysis]. Kultura i Społeczeństwo 33, no. 1 (1989): 23–49.
Tang, Wenfang. Populist Authoritarianism: Chinese Political Culture and Regime Sustainability.
Oxford, UK: Oxford University Press, 2016.
Teorell, Jan, Michael Coppedge, Staffan Lindberg, and Svend-Erik Skaaning. "Measuring Polyarchy Across the Globe, 1900–2017." Studies in Comparative International Development 54, no. 1 (2019): 71–95.
Tourangeau, Roger, and Ting Yan. "Sensitive Questions in Surveys." Psychological Bulletin 133, no. 5 (2007): 859.
Treisman, Daniel. "Presidential Popularity in a Hybrid Regime: Russia Under Yeltsin and Putin." American Journal of Political Science 55, no. 3 (2011): 590–609.
Welsh, William A. Survey Research and Public Attitudes in Eastern Europe and the Soviet Union. Oxford, UK: Pergamon Press, 1981.
Weyland, Kurt. "A Paradox of Success? Determinants of Political Support for President Fujimori." International Studies Quarterly 44, no. 3 (2000): 481–502.
Zimbalist, Zack. "'Fear-of-the-State' Bias in Survey Data." International Journal of Public Opinion Research 30, no. 4 (2018): 631–651.
Data sets quantifying phenomena of social-scientific interest often use multiple experts to code latent concepts. While it remains standard practice to report the average score across experts, experts likely vary in both their expertise and their interpretation of question scales. As a result, the mean may be an inaccurate statistic. Item-response theory (IRT) models provide an intuitive method for taking these forms of expert disagreement into account when aggregating ordinal ratings produced by experts, but they have rarely been applied to cross-national expert-coded panel data. We investigate the utility of IRT models for aggregating expert-coded data by comparing the performance of various IRT models to the standard practice of reporting average expert codes, using both data from the V-Dem data set and ecologically motivated simulated data. We find that IRT approaches outperform simple averages when experts vary in reliability and exhibit differential item functioning (DIF). IRT models are also generally robust even in the absence of simulated DIF or varying expert reliability. Our findings suggest that producers of cross-national data sets should adopt IRT techniques to aggregate expert-coded data measuring latent concepts.