RESEARCH ARTICLE
The autocratic bias: self-censorship of regime support
Marcus Tannenberg
Department of Political Science, V-Dem Institute, University of Gothenburg, Gothenburg, Sweden
ABSTRACT
Because of a perceived (and real) risk of repressive action, some survey questions are
sensitive in more autocratic countries while less so in more democratic countries. Yet,
survey data on potentially sensitive topics are frequently used in comparative research
despite concerns about comparability. To examine the comparability of politically
sensitive questions, I employ a multilevel analysis with more than 228,000
respondents in 37 African countries to test for systematic bias when the survey
respondents believe (fear) that the government, rather than an independent
research institute, has commissioned the survey. The findings indicate that fear of
the government induces a substantial and significant bias on questions regarding
trust, approval and corruption perceptions in more autocratic countries, but not in
more democratic countries. In contrast, innocuous, apolitical questions are not
systematically influenced by regime type.
ARTICLE HISTORY Received 8 April 2021; Accepted 13 September 2021
KEYWORDS self-censorship; regime support; preference falsification; autocracy; legitimacy
Introduction
When Zimbabweans were asked in 2018 how much they trusted their President
Emmerson Mnangagwa, on average 68% said “a lot” or “somewhat.” This is considered
strong approval by most accounts, but is it true? Dividing respondents into two groups,
we get a different picture. In the group that believes the interviewer was sent by the
government, 77% indicated trust in Mnangagwa; in the group that does not, some
57% shared this sentiment. In contrast, in democratic Ghana, the difference between
these two groups of respondents was only 4 percentage points, compared to 20
points in considerably less democratic Zimbabwe.1 Does autocracy bias certain
survey questions?
Given that much of our current knowledge about politics and everyday life in autocratic
countries is informed by public opinion surveys,2 it begs the question: are we
misinformed? What we know about the effects and causes of, for example, trust in
government, democratic attitudes, corruption perceptions, regime support, and political
legitimacy relies to a large extent on survey research comparing countries with
varying regime types,3 where data are derived through direct questions on these (in
some countries, but not in others) sensitive topics. Such studies all make the assumption
that survey respondents across the sampled countries are – somewhat equally –
willing to express their true opinions on political topics. In this article, I
show that this assumption is not safe to make, as bias from self-censorship does not
operate uniformly across regime types.
In autocratic countries, survey questions can be sensitive for reasons beyond
privacy and social adaptation – in particular, questions regarding citizens’ attitudes
towards and evaluations of the authorities. Respondents subjected to autocratic rule
may practice “preference falsification” to align their answers with the perceived
wishes of the regime.4 Given that authoritarian regimes often pay close attention to
what their citizens do and say in order to sanction those who challenge the official
discourse,5 there is a real risk that respondents will associate public opinion surveys with
government intelligence gathering. Respondents can therefore be expected to appease
the regime with their responses out of fear that failure to do so may result in repression,
physical or otherwise.
Existing empirical evidence of self-censorship in authoritarian regimes is primarily
drawn from single-country studies, and the findings are mixed. A handful
of studies show self-censorship to be problematic for measuring public opinion,6
while others find self-censorship to be less of a concern7 or even non-existent.8 In
a recent study, Shen and Truex9 estimate self-censorship across a large number of
countries by utilizing differences in non-response rates between sensitive and
non-sensitive items. They find self-censorship to be highly variable across autocratic
regimes but conclude that it is oftentimes not an obstacle to capturing public
opinion in autocratic settings. With an approach utilizing actual responses (as
opposed to non-responses), this article finds self-censorship to be a severe issue in
most autocracies.
I test the variation in levels of self-censorship in countries where respondents
experience different perceived (and real) risks of repressive action, employing a
simple research design that utilizes data on whether respondents think the government
sent the survey enumerator to interview them or that the enumerator works for an
independent research organization. I then analyse whether these two groups of respondents
answer potentially sensitive questions systematically differently and whether this
difference is a function of the climate of political repression. Drawing upon data
from more than 228,000 respondents across 37 African countries over six survey
rounds, the results show that there is indeed an autocratic bias. Responses to questions
related to the citizen–state relationship, such as whether respondents trust the president
or prime minister and whether they perceive key state institutions as corrupt, are
systematically biased along the level of democracy in the country, while apolitical questions,
such as “How much do you trust your neighbours?” are not. Thus, caution is warranted
in employing the former types of survey items, but not the latter, in comparative
studies across different regime types.
Lastly, I engage with the literature on response bias induced by the ethnic
(mis)match of the enumerator and the respondent.10 I show that the bias introduced by
enumerators who are coethnic with the country’s key leader but non-coethnic with
the respondent operates along the logic of the autocratic bias: the ethnic
(mis)match prompts regime-friendly responses in autocratic countries but not in more
democratic countries.
Self-censorship and how to estimate it
Survey respondents can feel the need to censor themselves if a question elicits
responses that may be socially undesirable or politically incorrect, or if they
fear that their responses could have consequences if disclosed.11 Questions related to, for
example, income, voter turnout, prejudice against ethnic or religious groups, and drug
abuse can cause respondents to hide the truth because of concerns about their prestige,
fear of social sanctioning from peers, or fear of punishment. This can lead to high
rates of systematic non-response and/or biased answers, resulting in poor data.
Concerns about prestige and social sanctioning can induce social desirability bias in
surveys conducted in both democratic and autocratic regimes, while fear of punishment
is of greater concern in autocratic and semi-autocratic contexts, where the perceived
risk of repressive action is likely to be higher. Kuran12 argues that citizens
subjected to authoritarian rule have strong incentives to practice “preference
falsification,” and Schedler13 raises concerns about the possibility of obtaining reliable
measures of regime legitimacy through representative public opinion surveys or qualitative
interviews in autocracies because of the opaque and repressive features of those
regimes. Fear of repercussions for failing to give the officially desired answer is
expected to affect responses, especially when respondents are uncertain
about their anonymity.
What does this mean for cross-country comparative studies if we are interested in,
for example, approval ratings, regime support, or the legitimacy that citizens attribute
to the regime? If levels of self-censorship are more or less equal across countries
on the proxies for, or components of, the measures in question, the issue is less severe. We
would simply have to deal with either inflated or deflated numbers across the board.
However, if the propensity to self-censor depends on traits that are heterogeneous
across countries, such as the level of democracy or political repression, the
size of the bias differs systematically between countries and thus restricts the
possibility of comparative analysis.
Recent findings warrant caution regarding the reliability of survey responses in
repressive and non-democratic settings. In Zimbabwe – where government repression
was commonplace – Garcia-Ponce and Pasquale14 find reported levels of trust in the
president and the ruling party to be affected by recent experiences of state-led repression.
Kalinin15 employs a series of list experiments and finds that Russians’ electoral
support for Vladimir Putin is inflated by about 20 percentage points. In contrast,
also using list experiments, Frye et al.16 estimate Putin’s approval ratings at about 10
percentage points below those obtained through direct questioning but conclude that
direct survey questions largely reflect the attitudes of Russian citizens. Shockley
et al.17 show score inflation in elite governance evaluations among executive survey
informants in autocratic regimes, and in particular that executives of firms headquartered
in autocratic Qatar inflated scores vis-à-vis those with out-of-country headquarters.
In the Chinese context, Jiang and Yang18 show an increase in preference
falsification in the aftermath of a major political purge in Shanghai, and, using list
experiments, Robinson and Tannenberg19 find that respondents falsify regime support
by up to 25 percentage points. These studies support concerns that individual respondents
inflate their approval in autocratic settings.
Does this mean that we cannot trust surveys to measure citizens’ trust in government,
or their political preferences in general? To answer this question, we need to
complement single-country studies and test for systematic bias across a larger sample
of countries. To test for response bias due to perceived fear of the government, I turn to
Afrobarometer data, and specifically to the last item of the Afrobarometer battery, which
asks: “Just one more question: who do you think sent us to do this interview?” Even
though the enumerators conducting the survey introduce themselves as affiliated
with “an independent research organization” that does not “represent the government
or any political party,” over the second to seventh rounds of the Afrobarometer survey
49% of respondents believed that the survey was sponsored by the government, while
35% considered it to be independent and 16% stated that they did not know. With the
help of this survey question, I divide respondents into three groups: non-suspecting
(those who believe the survey to be independent), suspecting (those who believe the
government was sponsoring the survey), and a “Don’t know” group.
This item has been featured as a proxy for the “costliness of dissent” or “fear of the
government” in models predicting vote choice in 16 African countries and voting
intentions in Zimbabwe.20 In contrast to these authors, I argue that the propensity
to suspect the government as the sponsor of the survey is not in itself informative of
political fear in a country. This is illustrated by the fact that in the most democratic and
the most autocratic countries in the most recent round, Cape Verde and Cameroon,
virtually the same percentage (41%) of respondents believed the government to be
sponsoring the survey. In the full sample of 157 country years, suspecting the government as
survey sponsor correlates with the country’s level of democracy at only 0.1.
Government sponsorship and self-censorship
Existing findings on the effect of perceived and real survey sponsorship on politically
sensitive items are mixed. In a study of 20 African countries, Zimbalist21 uses this
perceived survey sponsor to show that fear of the state, on average, biases survey
responses, and illustrates with case studies why this bias is more pronounced in autocratic
Mozambique than in democratic Cape Verde. In the context of Communist
Poland, Sulek22 compares responses to political items from surveys run by a government
opinion pollster to responses in independent academic surveys in the mid-1980s.
He finds more critique of the government in the independently run surveys, and
shows that this difference disappears in the late 1980s with the fall of the Communist
regime. In 1970s communist Bulgaria, Welsh23 finds substantial effects on items of
regime support of being interviewed by a Party cadre instead of a university-affiliated
interviewer. In contrast, Calvo et al.24 do not find beliefs about the survey sponsor to
induce bias when comparing government-sponsored household surveys to independently
run Afrobarometer surveys in eight African countries. In the opposite direction,
Lei and Lu25 find that Chinese respondents are in fact slightly more critical of the
regime when the survey enumerator conveys cues of Chinese Communist Party
membership.
To move this debate forward, I expand the scope of previous studies by analysing
responses to more than 40 survey items across 37 countries over six survey rounds.
Taking stock of Zimbalist’s26 case studies, I argue that beliefs about survey sponsorship
should have an impact on sensitive questions only to the extent that respondents
also fear punishment from the authorities. In sum, you should be more likely to falsify
your preferences when you believe the regime will learn what you say and care about it.
Suspicion of the survey sponsor should lead to preference falsification on potentially
sensitive topics, but not on questions that are apolitical in nature. From this I derive the
following hypotheses:
• H1: Respondents who believe the government has commissioned the survey will
give responses that are more favourable to the regime compared to citizens who
believe the survey to be independent if they live in more autocratic countries, but
not if they live in more democratic countries.

• H2: Respondents who believe the government has commissioned the survey will not
answer differently on non-sensitive questions, compared to citizens who believe the
survey to be independent, irrespective of regime type.
Research design
To test whether political survey items suffer from greater sensitivity bias in authoritarian
contexts, I employ a simple research design that compares respondents who think the
government sent the survey enumerator to interview them to those who believe the
enumerator works for an independent research organization. I then analyse whether these
two groups of respondents answer a set of 41 sensitive (H1) and non-sensitive (H2)
survey questions systematically differently, and test whether this difference is conditioned
by the climate of political repression.
For example, if respondents who believe the government has sent the enumerator
are significantly and substantively more likely to indicate that they trust the President
than those who believe the enumerator works for an independent research organization
when they live in a more autocratic country, but not when they live in a
more democratic country, this would be indicative of H1. If, in contrast, there is no
difference between the suspecting and non-suspecting respondents for trust in their
relatives regardless of regime type, this would be indicative of H2. The level of democracy
functions as a proxy for the “fear-of-the-government” mechanism theorized to
induce self-censorship.
In assessing the potential sensitivity of question items, I rely on Blair, Coppock and
Moor’s27 Social Referent Theory of Sensitivity Bias, which states that we should expect
sensitivity bias when all four of the following are present: the respondent (1) has a
social referent in mind when responding; (2) believes the referent can infer his/her
response; (3) has a perception of the referent’s preferred response; and (4) has a perception
that failure to provide the preferred response would entail a cost. In this application,
the state is the social referent (1). The perceived survey sponsor indicates whether the
respondent believes that the referent can know the respondent’s answer (2). Respondents
are assumed to know the referent’s preferred response (3). Lastly, the level of
democracy works as a proxy for whether or not failing to provide the preferred
response would entail costs to the respondent (4). This setup predicts all
items probing for an evaluation of the state or state institutions to be sensitive,
whereas evaluations of private or non-state actors and institutions are not (see Table
1). Most items are clear cut, but a brief discussion is warranted on the potential
sensitivity of questions regarding preference for democracy, army rule, and traditional
leaders, where it may be more difficult for respondents to know the referent’s
preference. First, it is possible that respondents do not perceive it as sensitive to report that
they prefer democracy if they live in autocratic countries that frequently (ab)use the term
democracy to describe their rule or even incorporate it in their name, such as in the
People’s Democratic Republic of Algeria. Second, given that the vast majority of autocratic
regimes in the sample are not ruled by military juntas, disagreement with the
proposition of army rule should generally be non-sensitive. This should not be the
case in Egypt in 2015, where I expect the question to be particularly sensitive in the
aftermath of the 2013 military coup. This is also what the data show: in Algeria,
perceived survey sponsorship has no bearing on reported preference for democracy,
and in Egypt, perceived survey sponsorship is a particularly strong predictor of
acceptance of military rule. Lastly, the degree to which traditional chiefs are included in the
governing structure varies substantially between and within states,28 which makes a
general prediction of the referent’s preferred answer difficult.
Table 1. List of dependent variables by category and potential sensitivity.

Category     Variable                      Sensitive
TRUST        President/Prime minister      Yes
             Member of Parliament (MP)     Yes
             Local government              Yes
             Ruling party                  Yes
             Opposition party              Yes
             Electoral commission          Yes
             Police                        Yes
             Courts                        Yes
             Army                          Yes
             Tax officials                 Yes
             Government news               Yes
             Government broadcasting       Yes
             Traditional leaders           Possibly
             Independent news              No
             Independent broadcasting      No
             Religious institutions        No
             Neighbours                    No
             Relatives                     No
             Vendors                       No
             Most people                   No
CORRUPTION   President/Prime minister      Yes
             Member of Parliament (MP)     Yes
             Local government              Yes
             Police                        Yes
             Courts                        Yes
             Bureaucracy                   Yes
             Tax authority                 Yes
             Increase in past year         Yes
             Traditional leaders           Possibly
             Businesses                    No
             Religious institutions        No
             NGOs                          No
APPROVAL     President/Prime minister      Yes
             Member of Parliament          Yes
             Local government              Yes
             Traditional leaders           Possibly
VALUES       Prefer democracy              Yes
             One-party rule                Yes
             Strong-man rule               Yes
             Army rule                     Possibly

This design can inform us about variation in the level of self-censorship, but it does
not allow estimates of the absolute level of self-censorship at hand. Even among
respondents who believe that the survey truly is independent, self-censorship may
be taking place. They may still be wary that the authorities can use the survey to trace
unsanctioned opinions to an individual, a neighbourhood, or a village. To the extent
that respondents adopt a better-safe-than-sorry approach, the overall response bias
will be larger than estimated here, as such behaviour reduces between-group differences.
The results reported in this article thus carry a potential built-in downward bias. Bearing
in mind that the absolute levels of bias cannot be established, the findings do show
that between-group differences are clear and meaningful.
One assumption of the research design is that suspecting respondents in autocracies
and democracies do not differ on any dimensions other than those I can
account for in the analysis. This would be violated if, for example, regime supporters
in autocracies (but not in democracies) are more likely to believe that the regime is
powerful and therefore also more likely to believe it sponsors the survey. I cannot test
this assumption, but the fact that state capacity does not exhibit a relationship with
sponsorship belief offers an indication that the assumption is not violated.
Another assumption is that respondents’ beliefs about survey sponsorship are stable
throughout the survey, and not formed towards the end of the survey after having
answered the potentially sensitive items. Given that the question is always asked last,
this assumption is not testable.
Lastly, the research design does not allow me to determine whether the effects stem
from bias caused by believing that the government has sent the enumerator or by
believing that the enumerator is from an independent organization, or from a
combination of the two. There are, however, clear theoretical reasons to suspect that the bias
stems from the former.
Data and modelling strategy
Individual-level data are taken from the second, third, fourth, fifth, sixth, and seventh
rounds of the Afrobarometer.29 I match the various survey rounds with country-level
data for the corresponding year from the Varieties of Democracy data set.30 This
pooled dataset provides more than 228,000 respondents nested in 157 country years,
nested in 37 countries.
Dependent variables
To test the autocratic bias hypothesis, I employ all variables probing for trust, corruption
perceptions, and approval ratings available in the data sets, as well as a handful of questions
on democratic values. In total, I have 41 dependent variables (DVs), most of which are
theoretically sensitive and a smaller share of which are not. The guiding premise is that
questions evaluating the state or state institutions are potentially sensitive, whereas
evaluations of private or non-state actors and institutions are not. The DVs fall under four
categories – Trust, Corruption, Approval, and Values – and are listed in Table 1.
For the Trust DVs, respondents are asked “How much do you trust each of the following,
or haven’t you heard enough about them to say?” and are provided the following
answer options: “Not at all”; “Just a little”; “I trust them somewhat”; “I trust them a lot”;
and “Don’t know/Haven’t heard enough.” In the main analysis I drop
respondents who chose “Don’t know/Haven’t heard enough” (DK), resulting in a
4-point scale. The only exception is trust in Most people, with the binary choices
“Most people can be trusted” and “Must be very careful.”
The Corruption questions ask respondents “How many of the following people do
you think are involved in corruption, or haven’t you heard enough about them to say?”
with the answer options “None”; “Some of them”; “Most of them”; “All of them”; and
“Don’t know.” Dropping DKs in the main analysis gives a 4-point scale. The one exception is
Increase in past year, which is on a 5-point scale ranging from “Increased a lot” to
“Decreased a lot.”
For the Approval DVs, respondents are asked “Do you approve or disapprove of the
way the following people have performed their jobs over the past twelve months, or
haven’t you heard enough about them to say?” with four answer options ranging from
“Strongly disapprove” to “Strongly approve,” plus a “Don’t know” option.
For the Values DV Prefer democracy, respondents are asked which statement is
closest to their own opinion: “For someone like me, it doesn’t matter what kind of
government we have”; “In some circumstances, a non-democratic government can be
preferable”; and “Democracy is preferable to any other kind of government,” resulting in a
3-point scale with higher values indicating a preference for democracy. For the three
other DVs – One-party rule, Army rule, and Strong-man rule – respondents are asked “There are
many ways to govern a country. Would you disapprove or approve of the following
alternatives?” with options ranging from “Strongly disapprove” to “Strongly
approve,” including a neutral middle option, resulting in a 5-point scale.
Note that not all DVs are available for all countries and all rounds. The number of
respondents and country years available for each of the 41 DVs is included in the
regression tables in Appendix A.
Independent variables
The main independent variable, Survey sponsor, is generated from the question item
“Just one more question: Who do you think sent us to do this interview?” Respondents
who indicate that they believe the enumerator was sent by the local, regional, or
national government, or any of its agencies, are coded as 1, while those who believed
the survey to be commissioned by a non-governmental organization (NGO), a university,
a research company, etc. are coded as 0. Of the complete sample, 49% of respondents
reported that they believed that the government was behind the survey, while
36% believed it to be independent and 15% said that they did not know. The share
suspecting the government as sponsor at the country-year level varies between 6.5% in
Liberia in 2015 and 82% in Madagascar in 2005. Figure 6 in Appendix F displays the
share of suspecting respondents in each country over time. Regressing the share of
suspecting respondents on a number of plausible predictors shows only corruption to be
associated with a larger share of suspicion. State capacity, human rights abuses, and the
level of democracy are not associated with suspicion at the country level. The latter
is important for the research design, and it is reassuring that the between-country
variation in suspicion correlates with the level of democracy at only 0.11. The “Don’t
know” group of respondents is excluded from the main analysis but is coded as 1
together with the suspecting group in robustness checks of all model specifications,
the rationale being that not knowing who the sponsor is will likely also induce preference
falsification in repressive settings, albeit to a lesser degree than suspecting the
government.
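To make the coding concrete, the following is a minimal sketch in Python of how such a dummy could be constructed. The column name who_sent and the response strings are hypothetical stand-ins for the Afrobarometer item, not the study’s actual recode.

```python
import numpy as np
import pandas as pd

# Hypothetical response strings standing in for the Afrobarometer categories.
GOVERNMENT = {"national government", "local government", "government agency"}
INDEPENDENT = {"ngo", "university", "research company", "private company"}

def code_sponsor(answer: str) -> float:
    """Return 1 for a suspected government sponsor, 0 for a believed
    independent sponsor, and NaN for "Don't know" (dropped in the main
    analysis; coded 1 in the robustness checks)."""
    a = answer.strip().lower()
    if a in GOVERNMENT:
        return 1.0
    if a in INDEPENDENT:
        return 0.0
    return np.nan

df = pd.DataFrame({"who_sent": ["National government", "University", "Don't know"]})
df["spons"] = df["who_sent"].map(code_sponsor)
print(df)
```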
A note is warranted on the issue of item non-response: are people who believe that
the enumerator is asking questions on behalf of the government more likely to refuse
to answer, or to simply state that they “Don’t know,” when asked political questions in
autocracies? I find no evidence of this. This is consistent with theories of preference
falsification, which propose that fearful respondents will resort to the safest possible
answer.31 Saying that you do not know, or refusing to answer, whether you, for
example, trust the president can raise suspicion and risk signalling dissent (see
Appendix E, Tables 21, 22, and 23 for details).
While there is no correlation between country-level averages of suspecting that
the government sponsored the survey and countries’ levels of democracy, the next
issue is to examine whether there is something particular about those individuals
who suspect the government to be the survey sponsor that also makes them more likely
to state high trust in the key leader, etc., than the population at large. Note that it would
need to be a peculiarity that alters their behaviour only in more autocratic countries and
not in more democratic countries. So, who suspects the government? The balance of
individual-level covariates for the different groups of respondents shows minor differences. If
we split the sample into autocratic and democratic countries using V-Dem’s categorical
Regimes of the World index,32 and look at the covariate balance between the two
groups of respondents, the pattern of differences is the same. For example, in the
democratic sample, the mean level of poverty for those who suspect the government
as survey sponsor is 0.14 higher than for those who believe in the survey’s independence;
in the autocratic sample, this difference is 0.11 (see Appendix D, Table 18).
The differences in means between the two groups are similar in both samples, and
in the same direction for all demographic variables. There are no observable individual
characteristics that can explain why only those who live in authoritarian countries and
who suspect the government to be behind the survey are more likely to provide regime-friendly
responses to politically sensitive questions. As a robustness test, I follow Calvo
et al.33 and use propensity score matching to account for selection into treatment (see
Appendix G, Tables 25 and 26 for details). Matching on pre-treatment covariates does
not change the results.
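As an illustration of this robustness test, here is a minimal propensity-score-matching sketch on synthetic data. It shows the general technique (logistic propensity scores, nearest-neighbour matching, average treatment effect on the treated), not the paper’s exact specification, which is in Appendix G and the replication code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Synthetic stand-ins: X holds pre-treatment covariates (e.g. age, gender,
# education, urban residency, lived poverty); t indicates suspecting the
# government; y is an outcome such as standardized trust in the president.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
t = rng.integers(0, 2, size=1000)
y = rng.normal(size=1000)

# 1. Estimate propensity scores with a logistic regression.
ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]

# 2. Match each treated unit to its nearest control on the propensity score.
treated, control = np.where(t == 1)[0], np.where(t == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_control = control[idx.ravel()]

# 3. Average treatment effect on the treated: mean outcome difference
#    between treated units and their matched controls.
att = (y[treated] - y[matched_control]).mean()
print(f"ATT estimate: {att:.3f}")
```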
In all model specifications, I control for the following set of individual-level control
variables: age, gender, education level, urban/rural residency, and an index of lived
poverty.34 To avoid post-treatment bias, I do not include variables that are themselves
sensitive items and hence might be affected by sponsorship belief. For example,
whether you believe the country is “moving in the right or wrong direction” may
predict whether you trust the key leadership figure, but it may also be affected by
sponsorship belief.
Country level
As a proxy for the perceived risk of repressive action at the country level, I employ the
Varieties of Democracy’s Electoral Democracy Index35 (see Coppedge et al. and
Marquardt and Pemstein36 for details on aggregation rules and methodology). The rationale
for using a highly aggregated index of the level of democracy is to be able to test
whether existing studies that draw conclusions from comparative survey data from
countries at vastly different levels of democracy suffer from biases. To get
closer to the mechanism of fear that the mediating variable is theorized to induce, I
substitute the Electoral Democracy Index with a sub-index of Freedom of Expression for
all models (see Appendix B). The two indexes are highly correlated, and it is no surprise
that the results hold under this substitution. I note that the theorized patterns are in fact
even more pronounced using freedom of expression as the mediating variable. This
makes sense, as states that score low on the democracy index partly due to
low state capacity may not invoke the same fear as states that have capacity and choose
to use it to suppress political opponents.
Estimation
Because respondents are not randomly distributed but clustered within countries, I
employ multilevel models that take these data hierarchies into account and allow
testing for the effect of a two-level interaction between perceived survey sponsor
(individual level) and level of democracy (country-year level). The model is a linear random
slope model. The specification of the baseline multilevel model is as follows:
$$y_{ic} = \gamma_{00} + \gamma_1 \mathit{dem}_c + \gamma_2 \mathit{spons}_{ic} + X'_{ic}\lambda + Z'_c\delta + U_{0c} + R_{1c} + \eta_{ic} \tag{1}$$

Adding the two-level interaction term between individual-level suspicion of the survey
sponsor and country-level democracy, we get the full multilevel model specification
(see Aguinis et al.37):

$$y_{ic} = \gamma_{00} + \gamma_1 \mathit{dem}_c + \gamma_2 \mathit{spons}_{ic} + \gamma_3 (\mathit{dem}_c \times \mathit{spons}_{ic}) + X'_{ic}\lambda + Z'_c\delta + U_{0c} + R_{1c} + \eta_{ic} \tag{2}$$
where $y_{ic}$ is the dependent variable for individual $i$ in country $c$, $\gamma_{00}$ is the average
individual-level intercept, $\mathit{dem}_c$ is country-level democracy, $\mathit{spons}_{ic}$ is an individual’s
perception of the survey sponsor, $X'_{ic}$ and $Z'_c$ are vectors of individual- and country-level
controls, $U_{0c}$ is the intercept variance, $R_{1c}$ is the slope variance (for $\mathit{spons}_{ic}$), and $\eta_{ic}$ is
the individual-level error term. I do not discuss or present model building, the slope
variance, or the intercept variance in the main text and tables. In short, the intercept variance is
reduced in each step of building up the models, and the slope variance is reduced when
introducing the two-level interaction, i.e. it explains some of the between-country variance
(for details, see the replication code).
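For concreteness, a model of the form in equation (2) can be estimated along the following lines. This is a minimal sketch using Python’s statsmodels on synthetic stand-in data; all column names are hypothetical, and the paper’s actual estimation is in its replication code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: 20 country years (level-2 units) of 100 respondents.
rng = np.random.default_rng(1)
n, groups = 2000, 20
df = pd.DataFrame({
    "trust_pres": rng.normal(size=n),                          # standardized outcome
    "dem": np.repeat(rng.uniform(0, 1, groups), n // groups),  # country-year democracy
    "spons": rng.integers(0, 2, n),                            # 1 = suspects government
    "age": rng.integers(18, 80, n),
    "country_year": np.repeat(np.arange(groups), n // groups),
})

# Random-intercept, random-slope model with the cross-level interaction
# dem * spons, mirroring equation (2); re_formula adds a random slope for spons.
model = smf.mixedlm("trust_pres ~ dem * spons + age",
                    data=df,
                    groups=df["country_year"],
                    re_formula="~spons")
result = model.fit()
print(result.summary())
```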
To facilitate easy interpretation, I proceed by graphing the interaction effects and
visualizing the effect of perceived sponsor on each dependent variable, conditioned on
the level of democracy, in order to contrast more autocratic and more democratic
countries. I de-mean and standardize all outcome variables by each country and survey
round so that effect sizes are in country-round-specific standard deviation units. Thus, a
coefficient of 0.25 indicates that believing you are interviewed by a government agent,
rather than an independent researcher, is associated with a quarter of a standard
deviation change in the dependent variable.
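The within-country-round standardization can be expressed as a simple grouped transform; the sketch below uses hypothetical column names, not the study’s actual variable labels.

```python
import pandas as pd

# De-mean and scale each DV within country and survey round, so that effects
# are expressed in country-round-specific standard deviation units.
df = pd.DataFrame({
    "country": ["GH", "GH", "GH", "ZW", "ZW", "ZW"],
    "round":   [7, 7, 7, 7, 7, 7],
    "trust_pres": [3, 2, 4, 4, 4, 3],
})
df["trust_pres_std"] = (
    df.groupby(["country", "round"])["trust_pres"]
      .transform(lambda s: (s - s.mean()) / s.std())
)
print(df)
```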
Empirical findings
I first look at the Trust variables. Figure 1 shows the marginal effects plots for all 20
trust variables. The bar charts at the bottom of each graph display the distribution
of country years (level-2 units). For simplicity, detailed regression output is not
displayed here, but can be found in the regression tables in Appendix A, Tables
2–8. All of the following DVs have substantial and significant interaction effects in
the expected direction: President; Members of Parliament; Local government; Ruling
party; Opposition party (opposite direction from the others); Electoral commission;
Police; Courts; Army; Traditional leaders; Government News; and Government Broadcasts.
In contrast, the following trust variables are not biased along regime type:
Independent News; Independent Broadcasts; Tax officials; Religious institutions;
Neighbours; Relatives; Vendors; Most people.
Figure 1. Estimated effect of sponsor perception (government) on trust across level of democracy.

The results are in line with H1: sponsorship belief biases responses to sensitive
questions in favour of the regime in more autocratic countries but not in more democratic
countries. To take an example, respondents who believe it is the government that
is asking are estimated to say that they trust their President 0.2 standard deviations more
than those who believe an independent institute is asking, when they live in autocratic
countries. An effect of that magnitude is expected in countries with an electoral
democracy score in the range of 0.2–0.4, such as Burundi, Eswatini, Egypt, and
Zimbabwe. The estimated difference is low or indistinguishable from zero in more
democratic countries at the top of the scale, such as Ghana, Cape Verde, and South Africa. Only
one sensitive variable does not exhibit the hypothesized pattern: reported trust in Tax
officials is only marginally affected by sponsorship belief, and the effect does not depend
on the level of democracy.
The results are largely in support of H2: respondents who believe the government
has commissioned the survey do not answer differently to non-sensitive questions,
compared to citizens who believe the survey to be independent, irrespective of
regime type. Save for trust in Traditional leaders, responses to non-sensitive questions,
such as trust in Neighbours, Relatives, Vendors, Most people, etc., are unaffected by
sponsorship beliefs and regime type.
Moving on to the Corruption variables, all of the following have substantial and significant
interaction effects in the expected direction: President; Members of Parliament; Local
government; Police; Courts; Army; Increase in past year; and Businesses. In contrast, the
following variables are not biased along regime type: Bureaucracy; Tax officials; Traditional
leaders; Religious institutions; and NGOs (see Appendix A, Tables 5 and 6 for details).

Figure 2. Estimated effect of sponsor perception (government) on corruption perceptions across level of democracy.

Figure 2 shows how respondents’ reported corruption perceptions are systematically
biased across the level of democracy. All else equal, respondents who believe it is the
government that is asking are less likely to answer that corruption is widespread among
various state institutions, compared to those who believe the question is coming from a
research organization. With the exception of Bureaucracy and Tax authority, the patterns
for all sensitive variables support H1. Similarly, the absence of an effect with regard to
perceptions of Traditional leaders, Religious institutions, and NGOs is in line with H2. Yet, the
apparent interaction effect for Businesses goes against the expectation of H2.
The significant effects of the sensitive Approval variables offer additional
support for H1 (see Figure 3). Believing the government to sponsor the survey substantially
increases reported approval of the President, Member of Parliament, and Local
government in more autocratic countries, while having an insignificant effect as the
level of democracy surpasses values around 0.7. Thus, countries at a level of democracy
on par with Benin and Tunisia are not expected to see an upward bias. While the
interaction effect of the supposedly non-sensitive item Traditional leaders is insignificant, it
does exhibit a pattern counter to H2. The relationship is consistent with how respondents
react to Traditional leaders with regard to trust and corruption and may indicate
that the item is in fact sensitive.

Figure 3. Estimated effect of sponsor perception (government) on approval across level of democracy.
Moving on to democratic values, Figure 4 shows how respondents’ preference for
democracy is systematically biased across the level of democracy. All else equal,
respondents who believe it is the government that is asking are around one-tenth of a standard
deviation less likely to answer that democracy is a preferable system, compared to those
who believe the question is coming from a research organization. Studies concerned with
public demand for democracy38 likely underestimate the true demand due to this downward
bias in more autocratic countries. We see a similar interaction effect with regard to
acceptance of One-party rule, with over-reporting in more autocratic countries when the
government is the perceived sponsor. The effect of survey sponsor is uniform across
regime types for acceptance of Army rule and Strong-man rule. The first is unsurprising
given that most autocratic regimes in the sample are not military regimes, but the latter is
counter to my expectation.

Figure 4. Estimated effect of sponsor perception (government) on values across level of democracy.
(How) does ethnicity matter?
Given that ethnicity is politically salient in many of the countries in the sample, and
that this article is concerned with political questions, I test the influence of respondents’
and enumerators’ ethnic identities. This is especially relevant since Adida et al.39
show how a large number of the Afrobarometer survey items suffer from response bias
stemming from the ethnic (mis)match of the enumerator and the respondent. In particular,
the authors show that being interviewed by a non-coethnic generates more
socially desirable responses, and – pertinent to this study – that respondents interviewed
by an enumerator who is a coethnic of the country’s key leader are more
likely to give regime-friendly responses, such as approving of the government’s handling
of the economy and expressing lower trust in opposition parties. An alternative
explanation is thus that the autocratic bias put forward in this article is driven by an
ethnically induced social desirability bias.
This is not the case: the models are robust to controlling for the ethnic match
between the enumerator and respondent, as well as for the enumerator being a coethnic
of the key leader. Moreover, by replicating a number of Adida et al.’s models and
adding an interaction between the variable on leader-interviewer coethnicity and the level
of democracy, I show in Figure 5 that the induced bias appears to operate through the
logic of the autocratic bias (for full regression tables, see Appendix C, Tables 16 and 17):
that is, the bias is primarily a concern in more autocratic countries and generally not
in democratic countries. These interaction graphs display the marginal effect of being
interviewed by a non-coethnic enumerator who is coethnic with the country’s key
leader (potential ethnic threat) for: trust in the president; the ruling party; opposition
parties; vendors; corruption perceptions of the office of the president; approval of the
president; evaluation of the government’s handling of the economy; and approval of strong-man
rule, conditioned on the level of democracy. The models include all controls used
throughout the article and serve as a replication of four models included in Adida
et al.’s study (see figure 7B in their paper).

Figure 5. Estimated effect of ethnic threat across level of democracy.
Discussion
The empirical findings suggest that more repressive regimes do indeed enjoy an autocratic
bias. Autocratic countries receive favourably biased evaluations of trust, corruption, and
approval, as well as regime-friendly values. This becomes particularly clear in the
stark contrast between the otherwise identical questions on trust in government
newspapers and independent newspapers, as well as government broadcasting services and
independent broadcasting services (see Figure 1, row 4). Sponsorship belief impacts
reported trust in the government’s information channels in autocratic countries but
not in more democratic countries. Trust in the independent information channels
remains unaffected by sponsorship belief irrespective of regime type.
These results have implications for the comparative use of survey data. As the bias is
consistent across a large number of survey items commonly used in comparative politics,
previous studies that rely on survey research comparing countries with varying
regime types may need re-evaluation.40 Future work would do well to consider the
obstacle self-censorship poses to the ability to compare public opinion data across
regimes, and whenever possible to test for its presence. Indirect questioning techniques,
such as the list experiment or the randomized response technique, are powerful tools for
estimating sensitive opinions but are not without cost. The indirect approaches require a
larger sample,41 are cognitively taxing, and sacrifice individual-level data for aggregated
estimates. Because of these drawbacks, researchers will often have to rely on
direct questions for sensitive topics. When direct questioning is necessary and
information on perceived sponsorship is unavailable, researchers need to be
explicit about the assumptions they are making with regard to self-censorship and
how a violation of these may alter their inferences. This is especially prudent when
the potential bias works “in favor” of the hypothesis being tested.
In addition to across-regime comparative work, the autocratic bias can distort the
results of longitudinal studies of approval ratings of leaders and ruling parties,42 and of
demand for democracy,43 within the same autocratic regime. Even in the absence of
significant political changes, the share of respondents who perceive the state to
sponsor the survey can differ between survey rounds, leading to seemingly dramatic
public opinion shifts. Mozambique provides an illustrative example. Between
2012 and 2015, the share of respondents who reported trust in the ruling party
dropped 18 percentage points (from 74 to 56). Disregarding the autocratic bias, one
conclusion might be that support was rapidly withering for the dominant one-party
regime. However, the share of respondents who perceived the interviewer to be sent by
the government was a full 28 percentage points lower in 2015 than in 2012. Given that in
2012, all else equal, that group of respondents was 2.4 times more likely to indicate
trust in the ruling party, the high level of trust in FRELIMO in 2012 is likely a
product of the autocratic bias. This illustrates both that it is important and that it is
possible to increase the perceived independence of a survey. In the Mozambique
case, two different firms were responsible for the fieldwork in 2012 and 2015. It is
possible that the reputation of firms, names, logos, attire, etc. influence perceived
independence. Given the large variation in sponsorship perception between and within
countries (see Appendix F, Figure 6), it is clear that sponsorship belief is not static,
and future research into how to minimize respondents’ suspicion will help to
advance survey research in autocracies.
Conclusion
This article shows that respondents’ beliefs about who has commissioned the survey
influence answers to politically sensitive questions in more autocratic countries
while having no impact on responses in very democratic countries. In more politically
repressive environments, respondents who believe (fear) that the government has sent
the interviewer are more likely to state that they trust the country’s leader or ruling
party, and less likely to state that they believe rulers and state institutions to be
corrupt, compared to respondents who think that the interviewer works on
behalf of an independent research organization.
This study provides a significant contribution to comparative public opinion
research in general and to the study of political behaviour and public opinion in
authoritarian regimes in particular. I provide evidence that a large set of commonly
used survey items – ranging from regime legitimacy and popular support for incumbents
to corruption perceptions and preferences for democracy – suffer from systematic
bias. Sensitive questions evaluating rulers suffer from a larger bias than do questions
evaluating those who exercise public power on their behalf. This is evident in the larger
effect on trust in, and corruption perceptions of, the key leadership figure or ruling party
compared to the effect vis-à-vis the bureaucracy and tax authorities. Innocuous apolitical
questions, such as trust in your neighbours or relatives, provide perfect tests of the autocratic
bias hypothesis, as one would expect no difference between the two groups of
respondents’ answers, no matter the level of political repression. Indeed, there is none.
The autocratic bias is not only of methodological concern. Insofar as good-governance
or democracy-promotion initiatives are informed by survey items measuring corruption
perceptions or demand for democracy, the bias is also of direct political
relevance. The usefulness of a survey item probing respondents’ sense of anonymity
should be evident. Given the low implementation cost, such an item should be
added to surveys that aspire to be comparable across countries, where doubts
about anonymity are likely to produce different response behaviour. Save for the
Afrobarometer, large-scale public opinion surveys do not include a similar item,44
and thus fail to provide data users with an easy and effective method of estimating
the varying sensitivity of survey items across countries. Studies using data where
such a variable is available should ideally include an interaction term in the analysis
to determine the presence of systematic bias. In the presence of bias, running the
analysis using only the sub-sample of respondents who believe the survey to be independent
provides a first robustness check against preference falsification. Having
estimated the bias for a set of sensitive items, another avenue forward would be to
construct reliability weights that enable the researcher to account for biases in the analysis
while retaining the full sample.
Notes
1. The results summarized in this paragraph are drawn from the Afrobarometer survey round 7
(www.afrobarometer.org).
2. E.g. Treisman, “Presidential Popularity in a Hybrid Regime”; Geddes and Zaller, “Sources of
Popular Support”; Stockmann and Gallagher, “Remote Control: How the Media Sustain
Authoritarian Rule”; Weyland, “A Paradox of Success? Determinants of political support.”
3. E.g. Rose and Mishler, “Comparing Regime Support in Non-Democratic and Democratic
Countries”; Gilley, “The Determinants of State Legitimacy”; Booth and Seligson, The Legitimacy
Puzzle in Latin America; Mattes and Bratton, “Learning About Democracy in Africa”;
Magalhães, “Government Effectiveness and Support for Democracy”; Chang and Kerr, “An
Insider–Outsider Theory of Popular Tolerance.”
4. Kuran, Private Truths, Public Lies.
5. Linz, Totalitarian and Authoritarian Regimes.
6. Jiang and Yang, “Lying or Believing? Measuring Preference Falsification”; Kalinin, “The Social
Desirability Bias in Autocrat’s Electoral Ratings”; Robinson and Tannenberg, “Self-Censorship
of Regime Support in Authoritarian States.”
7. Frye et al., “Is Putin’s Popularity Real?”
8. Lei and Lu, “Revisiting Political Wariness in China’s Public Opinion Surveys”; Tang, Populist
Authoritarianism: Chinese Political Culture.
9. Shen and Truex, “In Search of Self-Censorship.”
10. See Adida et al., “Who’s Asking? Interviewer Coethnicity Effects.”
11. Tourangeau and Yan, “Sensitive Questions in Surveys.”
12. Kuran, Private Truths, Public Lies.
13. Schedler, The Self-Restraining State: Power and Accountability.
14. Garcia-Ponce and Pasquale, “How Political Repression Shapes Attitudes Toward the State.”
15. Kalinin, “The Social Desirability Bias in Autocrat’s Electoral Ratings.”
16. Frye et al., “Is Putin’s Popularity Real?”
17. Shockley et al., “Exaggerating Good Governance.”
18. Jiang and Yang, “Lying or Believing? Measuring Preference Falsification.”
19. Robinson and Tannenberg, “Self-Censorship of Regime Support in Authoritarian States.”
20. Bratton, Bhavnani, and Chen, “Voting Intentions in Africa: Ethnic, Economic or Partisan?”;
Bratton and Masunungure, “Voting Intentions in Zimbabwe.”
21. Zimbalist, “‘Fear-of-the-State Bias’ in Survey Data.”
22. Sulek, “O rzetelnosci i nierzetelnosci badan ankietowych w Polsce, Proba analizy empirycznej.”
23. Welsh, Survey Research and Public Attitudes in Eastern Europe and the Soviet Union.
24. In Calvo, Razafindrakoto and Roubaud, “Fear of the State in Governance Surveys? Empirical
Evidence,” the authors address bias from survey sponsor in general, and not in autocratic
regimes in particular. Out of the 8 countries included in the study, 3 are clearly autocratic.
25. Lei and Lu, “Revisiting Political Wariness in China’s Public Opinion Surveys.”
26. Zimbalist, “‘Fear-of-the-State Bias’ in Survey Data.”
27. Blair, Coppock, and Moor, “When to Worry About Sensitivity Bias.”
28. Baldwin, “When Politicians Cede Control of Resources.”
29. Afrobarometer, “All Countries, Rounds 2, 3, 4, 5, 6, 7.”
30. Coppedge et al., “V-Dem Dataset v10.”
31. Kuran, Private Truths, Public Lies.
32. Lührmann, Tannenberg, and Lindberg, “Regimes of the World (RoW): Opening New Avenues.”
33. Calvo, Razafindrakoto and Roubaud, “Fear of the State in Governance Surveys? Empirical
Evidence.”
34. For a discussion of the Lived Poverty Index see Mattes, “The Material and Political Bases of
Lived Poverty in Africa.”
DEMOCRATIZATION 607
35. Coppedge et al., “V-Dem Dataset v10”; Teorell et al., “Measuring Polyarchy Across the Globe,
1900–2017.”
36. Marquardt and Pemstein, “IRT Models for Expert-Coded Panel Data.”
37. Aguinis, Gottfredson, and Culpepper, “Best-Practice Recommendations for Estimating Cross-
Level Interaction Effects Using Multilevel Modelling.”
38. E.g. Mattes and Bratton, “Learning About Democracy in Africa: Awareness, Performance, and
Experience”; Magalhães, “Government Effectiveness and Support for Democracy”; and
Claassen, “Does Public Support Help Democracy Survive?”
39. Adida et al., “Who’s Asking? Interviewer Coethnicity Effects.”
40. E.g. Rose and Mishler, “Comparing Regime Support in Non-Democratic and Democratic
Countries”; Gilley, “The Determinants of State Legitimacy”; Booth and Seligson, The Legitimacy
Puzzle in Latin America; Mattes and Bratton, “Learning About Democracy in Africa”;
Magalhães, “Government Effectiveness and Support for Democracy”; Chang and Kerr, “An
Insider–Outsider Theory of Popular Tolerance for Corrupt Politicians.”
41. See Blair, Coppock, and Moor, “When to Worry About Sensitivity Bias” for a thorough review
of the trade-off between direct and indirect measurement approaches.
42. Treisman, “Presidential Popularity in a Hybrid Regime”; Geddes and Zaller, “Sources of
Popular Support for Authoritarian Regimes.”
43. Robbins and Tessler, “The Effect of Elections on Public Opinion Toward Democracy.”
44. An exception is the 2016 AmericasBarometer in Ecuador, which asked about the perceived
survey sponsor.
Acknowledgements
I am grateful to the anonymous reviewers, Staffan I. Lindberg, Ellen Lust, Rachel Sigman, Anja
Neundorf, Sofia Axelsson, Anna Persson and participants in the 2017 V-Dem Annual Research Conference,
the 2018 Quality of Government Research Conference and the 2017 MPSA Annual Meeting for
valuable comments and suggestions.
Disclosure statement
No potential conflict of interest was reported by the author.
Funding
This work was supported by Vetenskapsrådet [439-2014-38].
Notes on contributor
Marcus Tannenberg is a PhD candidate at the V-Dem Institute at the Department of Political Science
at the University of Gothenburg. His research on list experiments, self-censorship, regime
classification and legitimacy has been published in Research & Politics, Politics and Governance, and the
European Political Science Review.
ORCID
Marcus Tannenberg http://orcid.org/0000-0003-0077-4711
Bibliography
Adida, Claire L., Karen E. Ferree, Daniel N. Posner, and Amanda Lea Robinson. “Who’s Asking?
Interviewer Coethnicity Effects in African Survey Data.” Comparative Political Studies 49, no. 12
(2016): 1630–1660.
Afrobarometer. “All Countries, Rounds 2, 3, 4, 5, 6, 7.” 2019. http://www.afrobarometer.org.
Aguinis, Herman, Ryan K. Gottfredson, and Steven Andrew Culpepper. “Best-Practice
Recommendations for Estimating Cross-Level Interaction Effects Using Multilevel Modeling.”
Journal of Management 39, no. 6 (2013): 1490–1528.
Baldwin, Kate. “When Politicians Cede Control of Resources: Land, Chiefs, and Coalition-Building in
Africa.” Comparative Politics 46, no. 3 (2014): 253–271.
Blair, Graeme, Alexander Coppock, and Margaret Moor. “When to Worry About Sensitivity Bias: A
Social Reference Theory and Evidence from 30 Years of List Experiments.” American Political
Science Review 114, no. 4 (2020): 1297–1315.
Booth, John A., and Mitchell A. Seligson. The Legitimacy Puzzle in Latin America: Political Support
and Democracy in Eight Nations. Cambridge: Cambridge University Press, 2009.
Bratton, Michael, Ravi Bhavnani, and Tse-Hsin Chen. “Voting Intentions in Africa: Ethnic, Economic
or Partisan?” Commonwealth & Comparative Politics 50, no. 1 (2012): 27–52.
Bratton, Michael, and Eldred Masunungure. “Voting Intentions in Zimbabwe: A Margin of Terror.”
Afrobarometer Briefing Paper 103, 2012.
Calvo, Thomas, Mireille Razafindrakoto, and François Roubaud. “Fear of the State in
Governance Surveys? Empirical Evidence from African Countries.” World Development 123
(2019): 104609.
Chang, Eric C. C., and Nicholas N. Kerr. “An Insider–Outsider Theory of Popular Tolerance for
Corrupt Politicians.” Governance 30, no. 1 (2017): 67–84.
Claassen, Christopher. “Does Public Support Help Democracy Survive?” American Journal of Political
Science 64, no. 1 (2020): 118–134.
Coppedge, Michael, John Gerring, Carl Henrik Knutsen, Staffan I. Lindberg, Jan Teorell, David
Altman, Michael Bernhard, et al. “V-Dem Dataset v9.” Varieties of Democracy (V-Dem) Project,
2019. doi:10.23696/vdemcy19.
Frye, Timothy, Scott Gehlbach, Kyle L. Marquardt, and Ora John Reuter. “Is Putin’s Popularity Real?”
Post-Soviet Affairs 33, no. 1 (2017): 1–15.
Garcia-Ponce, Omar, and Benjamin Pasquale. “How Political Repression Shapes Attitudes Toward the
State: Evidence from Zimbabwe.” Working Paper, 2015.
Geddes, Barbara, and John Zaller. “Sources of Popular Support for Authoritarian Regimes.” American
Journal of Political Science 33, no. 3 (1989): 319–347.
Gilley, Bruce. “The Determinants of State Legitimacy: Results for 72 Countries.” International Political
Science Review 27, no. 1 (2006): 47–71.
Jiang, Junyan, and Dali L. Yang. “Lying or Believing? Measuring Preference Falsification from a
Political Purge in China.” Comparative Political Studies 49, no. 5 (2016): 600–634.
Kalinin, Kirill. “The Social Desirability Bias in Autocrat’s Electoral Ratings: Evidence from the 2012
Russian Presidential Elections.” Journal of Elections, Public Opinion and Parties 26, no. 2 (2016):
191–211.
Kuran, Timur. Private Truths, Public Lies: The Social Consequences of Preference Falsification.
Cambridge, MA: Harvard University Press, 1997.
Lei, Xuchuan, and Jie Lu. “Revisiting Political Wariness in China’s Public Opinion Surveys:
Experimental Evidence on Responses to Politically Sensitive Questions.” Journal of
Contemporary China 26, no. 104 (2017): 213–232.
Linz, Juan José. Totalitarian and Authoritarian Regimes. Boulder, CO: Lynne Rienner Publishers,
2000.
Lührmann, Anna, Marcus Tannenberg, and Staffan I. Lindberg. “Regimes of the World (RoW):
Opening New Avenues for the Comparative Study of Political Regimes.” Politics & Governance
6, no. 1 (2018): 60–75.
Magalhães, Pedro C. “Government Effectiveness and Support for Democracy.” European Journal of
Political Research 53, no. 1 (2014): 77–97.
Marquardt, Kyle L., and Daniel Pemstein. “IRT Models for Expert-Coded Panel Data.” Political
Analysis 26, no. 4 (2018): 431–456.
Mattes, Robert. “The Material and Political Bases of Lived Poverty in Africa: Insights from the
Afrobarometer.” In Barometers of Quality of Life Around the Globe, edited by Valerie Møller,
Denis Huschka, and Alex C. Michalos, 161–185. Heidelberg, Germany: Springer, 2008.
Mattes, Robert, and Michael Bratton. “Learning About Democracy in Africa: Awareness,
Performance, and Experience.” American Journal of Political Science 51, no. 1 (2007): 192–217.
Moehler, Devra C. “Critical Citizens and Submissive Subjects: Election Losers and Winners in Africa.”
British Journal of Political Science 39, no. 2 (2009): 345–366.
Robbins, Michael D. H., and Mark Tessler. “The Effect of Elections on Public Opinion Toward
Democracy: Evidence from Longitudinal Survey Research in Algeria.” Comparative Political
Studies 45, no. 10 (2012): 1255–1276.
Robinson, Darrel, and Marcus Tannenberg. “Self-Censorship of Regime Support in Authoritarian
States: Evidence from List Experiments in China.” Research & Politics 6, no. 3 (2019): 1–9.
Rose, Richard, and William Mishler. “Comparing Regime Support in Non-Democratic and
Democratic Countries.” Democratization 9, no. 2 (2002): 1–20.
Schedler, Andreas. The Self-Restraining State: Power and Accountability in New Democracies. Boulder,
CO: Lynne Rienner Publishers, 1999.
Shen, Xiaoxiao, and Rory Truex. “In Search of Self-Censorship.” British Journal of Political Science
(2020): 1–13. doi:10.1017/S0007123419000735.
Shockley, Bethany, Michael Ewers, Yioryos Nardis, and Justin Gengler. “Exaggerating Good
Governance: Regime Type and Score Inflation among Executive Survey Informants.”
Governance 31, no. 4 (2018): 643–664.
Stockmann, Daniela, and Mary E. Gallagher. “Remote Control: How the Media Sustain Authoritarian
Rule in China.” Comparative Political Studies 44, no. 4 (2011): 436–467.
Sulek, Antoni. “O rzetelnosci i nierzetelnosci badan ankietowych w Polsce, Proba analizy
empirycznej” [On the reliability and unreliability of survey research in Poland: An attempt at
empirical analysis]. Kultura i Społeczenstwo 33, no. 1 (1989): 23–49.
Tang, Wenfang. Populist Authoritarianism: Chinese Political Culture and Regime Sustainability.
Oxford, UK: Oxford University Press, 2016.
Teorell, Jan, Michael Coppedge, Staffan Lindberg, and Svend-Erik Skaaning. “Measuring Polyarchy
Across the Globe, 1900–2017.” Studies in Comparative International Development 54, no. 1
(2019): 71–95.
Tourangeau, Roger, and Ting Yan. “Sensitive Questions in Surveys.” Psychological Bulletin 133, no. 5
(2007): 859.
Treisman, Daniel. “Presidential Popularity in a Hybrid Regime: Russia Under Yeltsin and Putin.”
American Journal of Political Science 55, no. 3 (2011): 590–609.
Welsh, William A. Survey Research and Public Attitudes in Eastern Europe and the Soviet Union.
Oxford, UK: Pergamon Press, 1981.
Weyland, Kurt. “A Paradox of Success? Determinants of Political Support for President Fujimori.”
International Studies Quarterly 44, no. 3 (2000): 481–502.
Zimbalist, Zack. “‘Fear-of-the-State Bias’ in Survey Data.” International Journal of Public Opinion
Research 30, no. 4 (2018): 631–651.