Sensitive Questions and Trust: Explaining Respondents’ Behavior in Randomized Response Surveys

Authors: Ivar Krumpal and Thomas Voss

Abstract

The randomized response technique (RRT) is an indirect question method that uses stochastic noise to increase anonymity in surveys containing sensitive items. Former studies often implicitly assumed that the respondents trust and comply with the RRT procedure and, therefore, are motivated to give truthful responses. However, validation studies demonstrated that RRT may not always be successful in eliciting truthful answering—even when compared with direct questioning. The article theoretically explores and discusses the conditions under which this assumption is consistent (or inconsistent) with the survey respondents’ rational behavior. First, because P(A| Yes) > P(A| No), both types of respondents, A (with sensitive trait) and non-A (without sensitive trait), have an incentive to disregard the instructions in the RRT mode. In contrast, respondents type non-A have no incentive to lie in the direct questioning mode. Thus, the potential for social desirability bias is (theoretically) higher in the RRT mode. Second, a basic game theoretic approach conceptualizes the survey interview as a social interaction between the respondent and the interviewer within the context of norms and mutual expectations. It is argued that the respondent’s choice to answer truthfully depends on (a) the respondents’ estimated likelihood that the interviewer honors trust and (b) a relative comparison of the utility from conforming to “the norm of truthfulness” versus its costs. Finally, we review previous empirical evidence and show that our theoretical model can explain both successes and failures of the RRT.
https://doi.org/10.1177/2158244020936223
SAGE Open
July-September 2020: 1–17
© The Author(s) 2020
journals.sagepub.com/home/sgo

Creative Commons CC BY: This article is distributed under the terms of the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/), which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access pages (https://us.sagepub.com/en-us/nam/open-access-at-sage).
Original Research
Introduction
Sociological research often collects data on private, illegal,
and unsocial behavior or extreme attitudes via survey inter-
views. For example, the German General Social Survey
(ALLBUS) asks respondents to self-report on several
offenses such as dodging the fare, drunk driving, tax evasion,
and shoplifting. In the United States, the National Survey on
Drug Use and Health (NSDUH) and the General Social
Survey (GSS) regularly ask respondents to self-report on
sensitive topics such as drug use or sexual habits. The GSS
also asks about very sensitive topics such as prostitution
(“Thinking about the time since your 18th birthday, have you
ever had sex with a person you paid or who paid you for
sex?”). Some survey studies also investigate the incidence of
socially undesirable opinions such as xenophobia, racism,
and anti-Semitism (Krumpal, 2012; Ostapczuk et al., 2009;
Stocké, 2007b).
Cumulative evidence in survey methodologists’ research
literature indicates that self-reports on sensitive topics often
do not reflect the truth (Jann et al., 2019; Krumpal, 2013;
Tourangeau & Yan, 2007). Sensitive questions pose a trust
problem for the respondent. Besides the trust problem, there
could be other factors explaining why self-reports on sensi-
tive topics do not reflect the truth, for example, self-decep-
tion, rationalization, or the fact that recalling information
and reporting about unpleasant events can have a subjective
cost in itself for the respondent (see Näher & Krumpal, 2012;
Tourangeau & Yan, 2007). In this article, however, we focus
on the trust problem.
Ivar Krumpal1 and Thomas Voss1

1Universität Leipzig, Germany

Corresponding Author:
Ivar Krumpal, Department of Sociology, Universität Leipzig,
Beethovenstraße 15, 04107 Leipzig, Germany.
Email: krumpal@sozio.uni-leipzig.de

Keywords
survey design, randomized response technique, sensitive questions, social norms, social desirability, rational choice, game theory, trust, privacy protection

Due to fear of negative consequences, respondents are
unwilling to reveal deviant and norm-violating behaviors.
They misreport in a survey (systematically underreporting
socially undesirable behaviors and overreporting socially
desirable ones) to avoid subjective costs such as embarrassment
in the interview situation or sanctions from third parties
beyond the interview setting (Rasinski et al., 1999). Such
misreporting leads to invalid survey estimates, which are dis-
torted by social desirability bias. To combat misreporting
and to obtain more valid answers to sensitive questions, sur-
vey researchers have developed different data collection
approaches designed to reduce social influence in the data
collection process, to guarantee anonymity of the respon-
dent’s answers and to reduce the respondent’s self-presenta-
tion concerns (Lee, 1993).
The Randomized Response Technique
(RRT)
The RRT is a method to elicit more honest answers in sensi-
tive surveys (Warner, 1965). Warner’s original method relies
on the pairing of two statements, both relating to the sensi-
tive attribute (statement and negation of the statement). The
respondent uses a randomization device (e.g., cards, coins,
dice) to select which of the two statements he or she will
answer. For example:
1. I sometimes smoke marijuana (selected with probability p)
2. I never smoke marijuana (selected with probability 1 − p)
Without telling the interviewer which statement was cho-
sen, respondents answer “Yes” or “No” according to their
marijuana smoking habits. Because only the respondent
knows the outcome of the randomization device, a specific
answer is always ambiguous to the interviewer. The inter-
viewer cannot infer the respondent’s true status from a given
answer and, under idealized assumptions, the respondent
trusts in his or her data protection. Probability theory is used
to derive an unbiased estimator π̂ of the sensitive behavior in
the population of interest. The expected value ϕ of observing
a “Yes” answer can be written as

ϕ = pπ + (1 − p)(1 − π),
where π is the unknown population prevalence of the sensi-
tive behavior. Because the observed sample proportion of
“Yes” answers ϕ̂ is an estimate of ϕ, and the selection prob-
ability p is given by design, the population prevalence π can
be estimated:

π̂_Warner = (ϕ̂ + p − 1) / (2p − 1)
Furthermore, the sampling variance of π̂_Warner can be esti-
mated by

Var(π̂_Warner) = ϕ̂(1 − ϕ̂) / [n(2p − 1)²]
Different modifications of Warner’s original method have
been developed and empirically applied (overviews of
designs and estimators for different RRT schemes can be
found in Blair et al., 2015; Chaudhuri et al., 2016; Fox &
Tracy, 1986; Krumpal et al., 2015; Lensvelt-Mulders, Hox,
& van der Heijden, 2005). For example, the “forced-choice
design,” which is one of the most widely applied RRT
schemes, works as follows (Boruch, 1971): A randomization
device determines whether the respondent is supposed to
answer the sensitive question truthfully (with probability p)
or to give a surrogate answer “Yes” (with probability λ) or
“No” (with probability 1 − p − λ). Before answering a sensi-
tive question (e.g., do you sometimes smoke marijuana?),
the respondents could be requested to toss three coins (out-
come of the coin toss is private information of the respon-
dent). The randomization provides a known probability
distribution:
probability p of being directed to answer the sensitive
question truthfully (mixture of heads and tails) = 1 −
.5³ − .5³ = .75,
probability λ of being directed to give an automatic
“Yes” answer (three tails) = .5³ = .125,
probability 1 − p − λ of being directed to give an auto-
matic “No” answer (three heads) = .5³ = .125.
The expected value ϕ of observing a “Yes” answer can be
written as ϕ = λ + pπ, where π is the unknown population
prevalence of the sensitive behavior. Because the observed
sample proportion of “Yes” answers ϕ̂ is an estimate of ϕ,
and the probability distribution of the coin toss is known, the
population prevalence π can be estimated:

π̂_FC = (ϕ̂ − λ) / p
Furthermore, the sampling variance of π̂_FC can be esti-
mated by

Var(π̂_FC) = ϕ̂(1 − ϕ̂) / (np²)
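The three-coin forced-choice scheme can likewise be sketched in a few lines. The prevalence value and function names below are illustrative, and compliance with the RRT instructions is assumed:

```python
import random

def forced_choice_estimate(answers, p=0.75, lam=0.125):
    """Forced-choice estimator pi_hat = (phi_hat - lambda) / p and its
    estimated sampling variance phi_hat(1 - phi_hat) / (n p^2)."""
    n = len(answers)
    phi_hat = sum(answers) / n
    pi_hat = (phi_hat - lam) / p
    var_hat = phi_hat * (1 - phi_hat) / (n * p ** 2)
    return pi_hat, var_hat

def simulate_forced_choice(pi=0.3, n=100_000, seed=2):
    """Compliant respondents each tossing three fair coins."""
    rng = random.Random(seed)
    answers = []
    for _ in range(n):
        heads = sum(rng.random() < 0.5 for _ in range(3))
        if heads == 0:                         # three tails (prob .125): forced "Yes"
            answers.append(True)
        elif heads == 3:                       # three heads (prob .125): forced "No"
            answers.append(False)
        else:                                  # mixture (prob .75): truthful answer;
            answers.append(rng.random() < pi)  # respondent has trait with prob pi
    return forced_choice_estimate(answers)

pi_hat, var_hat = simulate_forced_choice()
print(pi_hat)   # should be close to the true prevalence 0.3
```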
In general, all variants of the RRT share the common fea-
ture that by deliberately introducing a random element in the
question-and-answer process, respondents’ answers do not
reveal anything definite to the interviewer (see Nayak, 1994,
for a generalized approach for integrating and comparing dif-
ferent RRT designs). The advantage of “protection via ran-
domization” is faced with different drawbacks: Compared
with direct questioning, RRT imposes a higher cognitive bur-
den on the respondent. Landsheer et al. (1999) show empiri-
cally that respondents with a low degree of understanding of
the RRT procedure also have less trust in the method com-
pared with respondents who have a higher degree of under-
standing of the instructions. However, Landsheer et al.’s
results seem incompatible with results of a more recent study
by Hoffmann et al. (2017), who did not find a correlation
between comprehension of the RRT and perceived privacy
protection.
Empirical evidence indicates that a substantial proportion
of respondents do not comply with the RRT instructions
(Ostapczuk et al., 2009). They give self-protective “No”
answers even if the outcome of the randomization device
instructs them to answer “Yes.” Statistical models have been
developed to account for such self-protective response
behavior (Cruyff et al., 2007). Note that some designs,
including Warner’s original RRT as well as the crosswise
model (Yu et al., 2008), do not feature a safe, self-protective
response option. Thus, in these specific (non-)RRT designs,
noncompliance is not clearly associated with a specific
response option, which makes a cheating correction more
difficult compared with RRT schemes with an unambiguous
self-protective response option (such as the typical forced
choice design or the unrelated question model; see Krumpal
et al., 2015, for an overview of different RRT schemes).
A meta-analysis conducted by Lensvelt-Mulders, Hox,
van der Heijden, and Maas (2005) suggests that self-reports
of self-stigmatizing behavior are overall more accurate with
RRT than with direct questioning. However, several other
studies indicate that there are serious difficulties of using the
RRT (such as higher item nonresponse, negative prevalence
estimates, or increased break-off rates) and that the superior-
ity of the RRT should not be taken for granted in any case
(Coutts et al., 2011; Coutts & Jann, 2011; Höglinger et al.,
2016; Höglinger & Jann, 2018; Holbrook & Krosnick, 2010;
Kirchner, 2015; Stem & Steinhorst, 1984; Weissman et al.,
1986; Wolter & Preisendörfer, 2013).
John et al. (2018) give a useful overview of previous vali-
dation studies demonstrating at best mixed evidence on the
performance of RRT versus direct questioning. Based on
ideas from cognitive psychology and on experimental evi-
dence, the authors conjecture that RRT may fail because of
respondents’ concern over response misinterpretation. In
particular, innocent respondents may be concerned that com-
plying with the RRT instructions (e.g., to answer “Yes”) will be
misinterpreted as indicating that one belongs to the group of
people with sensitive trait A. We argue that even perfectly
rational and self-regarding respondents will be (rationally)
concerned over misinterpretation.
From a sociological perspective, one fundamental ques-
tion of the research on sensitive topics is still unresolved:
Why do survey respondents answer truthfully to sensitive
questions? Esser (1986, 1990) argues that respondent reac-
tions to the measurement process (e.g., truthful vs. socially
desirable answering) could be explained by general behav-
ioral regularities, by habits, and by norms that are activated
in social interactions in secondary relations (e.g., presenta-
tion and deference).
Respondents’ Behavior as a Rational
Choice
Former research often assumed that the RRT procedure guar-
antees complete privacy of answers. The respondent is
expected to self-report sensitive information truthfully with-
out fear of negative consequences and, thus, social desirabil-
ity bias in survey estimates should decrease. However, this
expectation is questionable as will be demonstrated. In the
following, we present an attempt to model the interview situ-
ation as a social interaction via a simple game theoretic anal-
ysis. Comments on the RRT research indeed suggest that
game theoretic thinking may “be a valuable contribution to
the field” (Rao & Rao, 2016, p. 7). However, research along
these lines is extremely rare. Because we do not yet have a
comprehensive and empirically valid psychological theory
of respondent behavior in various interview situations, the
purpose of this analysis is to work out the conditions for
truthful answers by using an idealized model of rational
behavior. There is some previous research in this field within
the framework of a rational choice analysis of respondents’
behavior that assumes (expected) utility maximization
(Ljungqvist, 1993). Ljungqvist alludes to the possi-
bility of using theoretical tools from game theory in this area.
However, this work implicitly assumes that respondents per-
ceive the interview as a parametric (nonstrategic) situation
rather than as a social interaction.
Behavioral Assumptions
In addition to consistency assumptions about desires (prefer-
ences), game theory postulates that expectations (beliefs) are
rational in the sense of objective or of Bayesian (subjective)
probabilities. In this way, one can analyze games with com-
plete information and also games with incomplete informa-
tion. The rationality assumption will be used throughout the
article. In game theory, rationality assumptions do not imply
that agents are self-interested. Altruism, fairness, or other
kinds of other-regarding “social preferences” and normative
orientations may well be represented by consistent prefer-
ences. In the following, we first use the motivational assump-
tion that agents (respondents) are completely self-regarding.
In other words, we first use a kind of rational egoism (or
“homo economicus”) model. The motivational assumption
of complete self-interestedness will be relaxed in a second
step, in that, we consider respondents who are endowed with
social preferences. That is, they are not merely motivated by
their own material payoffs but consider fairness or reciproc-
ity criteria or they are intrinsically motivated to act in accor-
dance with certain social norms.
Why Do Respondents Participate in Surveys?
There are useful applications of rational choice concepts in
previous survey research such as leverage–salience (Groves
et al., 2000), risk-of-disclosure (Couper et al., 2008), or ben-
efit–cost theories of survey participation (Singer, 2011).
These contributions explain the respondents’ choice of
whether to participate in a survey or not. The following theo-
retical ideas advance these contributions.
Our analysis of respondents’ behavior obviously depends
on their willingness to participate in a survey. We assume
that the survey contains questions about sensitive items. Any
participation in a survey yields costs to the respondent in
terms of opportunity costs (e.g., costs related to alternative
usage of interview time). In addition, there may be costs that
are related to expected external sanctions. If there is a certain
risk of being detected to have a sensitive trait A and if the
interviewer (or the organization that administers the survey)
is not trustworthy to guarantee privacy, these costs may be
substantial. Consider as a case in point a survey on the usage
of illegal drugs among professional athletes or among prison
inmates that is administered by an organization affiliated
with drug control agencies or the prison administration.
Then, the cost of being detected may not be negligible as
perceived by the respondent.
Given these costs, it is tempting to ask whether a rational
egoist would ever participate. Even agents who are com-
pletely self-regarding, however, may consider rewards when
participating: In some institutional contexts (e.g., inmates of
total institutions), there can be forced participation or defect-
ing may be interpreted as a negative sign triggering a suspi-
cion among officials that the person in fact has sensitive trait
A. As another kind of incentive, academic or commercial
survey organizations often provide participants with material
rewards (e.g., money, participation in a lottery, shopping
vouchers) for participating in the survey. Furthermore,
rewards may be related to the “fun” a participant expects
from participating.
There may thus be conditions (rewards compensate
expected costs) such that rational egoists are willing to par-
ticipate. Given that an agent has social preferences, there are
additional rewards and additional costs. As to the costs, there
are expected informal sanctions and psychological costs of
being detected as someone with the sensitive trait A. With
regard to the rewards, there are some further commodities,
which may motivate participation: Survey participation can
be due to “warm-glow” altruism (in the sense of Andreoni,
1990). It may also be that the participant perceives a moral or
other normative obligation to cooperate. Survey participa-
tion can also stem from “positive reciprocity” (Fehr &
Gächter, 2000; Gouldner, 1960), in particular in face-to-face
interviews, if the respondent reciprocates the interviewer’s
kindness.
Our presentation rests on certain assumptions, which will
be introduced in each of the following paragraphs and which
will be modified step by step subsequently. Our contribution
is based on the idea that surveys that include sensitive items
generate trust problems. There can be trust problems on both
sides of the survey relation: The interviewer has a trust prob-
lem that arises because the respondent may not give truthful
answers, in particular with respect to sensitive items. In this
article, however, we focus on the respondents’ perspective:
Respondents may distrust whether the interviewer (or the
organization that administers the interview and controls the
collected data) in fact is willing to protect the respondent’s
privacy. We also develop our argument by comparing incen-
tives to answer truthfully in RRT surveys with surveys that
employ the direct mode of questioning. We furthermore
demonstrate the impact of several motivational assumptions
in these survey modes. In contrast to prior contributions to
the field (e.g., Ljungqvist, 1993), we argue that respondents’
behavior depends not only on preferences and beliefs with
respect to the stigmatizing trait but also on subjective esti-
mates with respect to the interviewer’s trustworthiness. We
share the assumptions that participants indeed (a) are willing
to participate in the survey, (b) are able to act as if they could
calculate posterior probabilities (based on estimates of the
unknown parameter π), and (c) perceive expected costs if
there is a positive probability that the interviewer suspects
that the participant belongs to the stigmatized group.
Analysis of Respondents’ Behavior in the Direct
Mode (Rational Egoism)
Assumption 1: The respondent participates in the
interview.
Assumption 2: Whether or not the respondent has sensi-
tive trait A is the respondent’s private knowledge.
Assumptions 1 and 2 are constant across all presented
situations. To reduce repetitiveness, they will not be repeated
in the following different situations under consideration.
Assumption 3: Respondents with trait A will incur costs
C > 0 if privacy is not protected. Respondents of type
non-A, however, will have costs C = 0.
Assumption 4: The interviewer (or the organization that
employs the interviewer) is interested to know whether
the respondent has trait A.
Assumption 5: The interviewer avoids efforts or has no
interest in the protection of privacy. Thus, the interviewer
receives a payoff of R if she protects privacy and T if she
does not. We assume that T > R.
Note that these assumptions refer to the respondent’s sub-
jective beliefs about the interview situation. It is not neces-
sary that these assumptions are veridical representations of
the “true” properties of the interviewer’s preferences.
Assumptions 4 and, in particular, 5 represent extremely pes-
simistic beliefs of the respondent with regard to the inter-
viewer’s type (these assumptions will be relaxed in the
“Relaxing Pessimistic Assumptions About the Interviewer’s
Trustworthiness: The Incomplete Information Game” sec-
tion). We propose to represent the interview situation as a
trust relation. In sociology, trust has been seminally analyzed
by Coleman (1990, chapter 5), who models the investment of
trust as a rational decision under risk. Coleman’s account has
been subject to the criticism of neglecting the strategic nature
of the investment decision. Both agents, interviewer and
respondent, must instead be modeled as being rational
agents, which can be accomplished by using game theory.
The most elementary game theoretic model of the interview
situation is depicted in the game tree of Figure 1. The social
interaction between an interviewer and a respondent type A
in a sensitive survey can be conceived as a game that is akin
to a trust game. Although our game is slightly different from
the standard trust game (as described, for instance, in
Buskens & Raub, 2002; Tutic & Voss, 2020; Voss, 1998), our
modified trust game in the following for convenience will be
labeled “trust game.” In this game, as in the standard trust
game, the unique subgame perfect Nash equilibrium is not to
give a truthful answer and not to protect privacy.1 Thus, the
model predicts that a respondent type A will not answer
truthfully in the direct mode. A respondent type non-A (with-
out sensitive trait) obviously (due to the assumption C = 0)
has no incentive to lie in the direct mode (not shown in
Figure 1). In fact, respondents of this type are indifferent
between answering truthfully or lying as long as giving no
truthful answer does not give a sign to the interviewer that
the respondent is suspect of having a sensitive trait A. In the
latter case, the respondent has a positive incentive to answer
truthfully.
Proposition 1: In the direct mode, respondents who have
sensitive trait A do not answer truthfully whereas respon-
dents with trait non-A have no incentive to lie.
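The backward induction behind Proposition 1 is compact enough to spell out in code. The numeric payoffs below are placeholders that merely satisfy Assumptions 3 and 5 (T > R, C > 0); the function name is illustrative:

```python
def best_response_direct_mode(C, T, R):
    """Backward induction in the simple trust game of Figure 1.

    A type-A respondent moves first ('truthful' or 'lie'); after a
    truthful answer the interviewer chooses between protecting privacy
    (payoff R) and not protecting it (payoff T), with T > R.
    """
    assert T > R and C > 0                  # Assumptions 3 and 5
    interviewer_protects = R > T            # False: the opportunist discloses
    payoff_truthful = 0 if interviewer_protects else -C
    payoff_lie = 0                          # a protective answer costs nothing
    return "truthful" if payoff_truthful > payoff_lie else "lie"

print(best_response_direct_mode(C=10, T=2, R=1))   # -> lie
```

Because the interviewer's last move is never to protect, anticipating it makes lying the respondent's unique best response, whatever the exact payoff values.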
The Degree of Privacy Disclosure in the RRT
Mode
To give an analysis of respondents’ behavior in the RRT
mode, it is useful to specify a measure for the degree
of privacy disclosure in RRT surveys. Remember that RRT
surveys are designed to increase the degree of privacy pro-
tection and decrease the degree of privacy disclosure, respec-
tively. If the respondent is convinced that there is perfect
privacy protection, there will, in principle, be no positive
incentive to lie.
Note that the following analysis and the proof (see the
appendix) hold for RRT designs offering an unambiguous
self-protective response option, that is, the forced-choice
design or related designs (such as the unrelated question
model). The following statements do not hold for symmetric
RRT designs, in which noncompliance is not clearly associ-
ated with a specific response option (e.g., Warner’s original
RRT or the crosswise model; see Yu et al., 2008).
For simplicity, but without loss of generality, only dichot-
omous items with possible answers “Yes” or “No” in a typi-
cal “forced-choice” RRT design will be considered: Although
many alternative privacy measures have been discussed or
used in the literature for the purposes of our analysis, we
assume that the degree of privacy disclosure depends on the
difference between the conditional probabilities of being
perceived as belonging to a sensitive group A given a spe-
cific answer.2 Thus, the difference
PP
(A |Yes)(A|No) is
interpreted as the “degree of privacy disclosure” in the RRT
mode. Because it holds
PP
(A |Yes)> (A |No) for a com-
prehensive set of conditions, which are fulfilled in the type of
RRT survey that is covered here (for proof, see the appen-
dix), both types of respondents A (with sensitive trait) and
non-A (without sensitive trait) have an incentive to disregard
the instructions in the RRT mode under certain conditions
(i.e., for non-A respondents, who are instructed to use the
“Yes” answer). In contrast, respondents type non-A have no
incentive to lie in the direct questioning mode. Thus, the
Figure 1. Respondent type A in direct mode (simple trust game).
potential for social desirability bias is higher in the RRT
mode.
In the next section, some elementary game theoretic argu-
ments to explain a respondent’s tendency to answer truth-
fully and/or to follow the RRT instructions are presented.
Analysis of Respondents’ Behavior in the RRT
Mode (Rational Egoism)
Truthful answers of a respondent with trait A reveal trait A
with P(A| Yes) > P(A| No). If A is detected, the respondent
will incur cost C > 0 of becoming known to be an A.
However, because the RRT design implies that detecting an
A is not perfect but depends on the degree of privacy disclo-
sure P(A| Yes) − P(A| No), the expected cost of answering
truthfully is C′ := [P(A| Yes) − P(A| No)] · C. Given that P(A|
Yes) > P(A| No), even respondents with trait non-A will
become suspects of belonging to the stigmatized group if
they follow RRT instructions in the case that the survey
requires them (with probability λ) to give the “Yes” answer.
Respondents with trait non-A are assumed to prefer not to be
associated with the stigmatized group and incur costs if the
interviewer does not protect privacy in this case. For conve-
nience, but without loss of generality, we assume that, in this
case, a non-A similarly incurs costs C′.3
Assumption 3’: Respondents with trait A will, therefore,
incur costs C ≫ C′ > 0 if privacy is not protected.
Respondents with trait non-A who follow the instruction
to give an automatic “Yes” answer will similarly incur
costs C′ > 0 if privacy is not protected.
Assumption 4: The interviewer (or the organization that
employs the interviewer) is interested to know whether
the respondent has trait A.
Assumption 5: The interviewer avoids efforts or has no
interest in the protection of privacy. Thus, the interviewer
receives a payoff of R if she protects privacy and T if she
does not. We assume that T > R.
Let us now examine the case of asking a sensitive ques-
tion in the RRT mode. The interview in this case is repre-
sented by a simple trust game as depicted in Figure 2. Let us
first look at the situation of respondents with sensitive trait
A. Because 0 > −C′, the respondent’s unique Nash equilib-
rium strategy is to disregard the RRT instructions and give a
protective answer.
In addition, rational non-As (who do not have the sensitive
trait) may be reluctant to follow the RRT instructions. They
are tempted to give a protective “No” answer even if the result
of the randomizing device instructs them to answer “Yes.”
This is so because only the protective answer will secure that
respondents do not become suspect of having sensitive trait
A. In other words, both types of respondents, As and non-As,
have an incentive to lie or to disregard the RRT instructions,
respectively. Assuming rationality, both types of respondents
will recognize that “Yes” answers (which would be stigmatiz-
ing in the case of direct questioning) reveal trait A with prob-
abilities P(A| Yes) > P(A| No).
Note that the modified structure of the trust game in
Figure 2 predicts that even respondents type non-A have an
incentive to disregard the RRT instructions and to give eva-
sive “No” answers even if the result of the randomizing
device instructs them to answer “Yes.” This corresponds to
qualitative observations in former RRT surveys. Some exem-
plary respondents’ statements were “I only said ‘Yes’ because
I tossed 3 times head” or “what I tossed does not reflect my
true opinion.” Especially with items reflecting xenophobic
and anti-Semitic attitudes, respondents were reluctant to give
Figure 2. Respondent types A and non-A in RRT mode (simple trust game).
Note. RRT = randomized response technique.
a surrogate “Yes” answer independent of their personal opin-
ions (Krumpal, 2010). The unique Nash equilibrium is not to
give a truthful answer (and not to follow the RRT instruc-
tions, respectively) and not to protect privacy. Because bias
is introduced by both types of respondents, As and non-As,
the potential for overall social desirability bias is higher in
the RRT mode. It is important to notice that in the case that a
proportion of type non-A respondents does not follow RRT
instructions, there will, ceteris paribus and even if—counter-
factually—all A types answer truthfully, be an underestima-
tion of ϕ and, therefore, also of the true population prevalence
π of the sensitive trait. If there is a considerable fraction of
rational egoists among respondents, there will be many false
negatives and even negative prevalence estimates (as
reported, on the basis of experimental data, in Coutts & Jann,
2011). In contrast, only respondents of type A introduce
social desirability bias into prevalence estimates in the direct
questioning mode.
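The underestimation argument can be made concrete: the expected value of the forced-choice estimator under partial noncompliance by non-As follows directly from the design probabilities. The function and the parameter values are illustrative:

```python
def fc_estimate_with_cheating(pi, q_cheat, p=0.75, lam=0.125):
    """Expected value of the forced-choice estimator when a fraction
    q_cheat of non-A respondents ignores the forced "Yes" and gives a
    self-protective "No"; type-A respondents are (counterfactually)
    assumed to comply and answer truthfully."""
    # Expected "Yes" proportion: As say yes when truthful (p) or forced
    # (lam); non-As say yes only when forced AND compliant.
    phi = pi * (p + lam) + (1 - pi) * lam * (1 - q_cheat)
    return (phi - lam) / p

print(fc_estimate_with_cheating(pi=0.3, q_cheat=0.0))   # full compliance recovers pi
print(fc_estimate_with_cheating(pi=0.02, q_cheat=0.8))  # negative prevalence estimate
```

With a small true prevalence and widespread self-protective answering, the expected estimate drops below zero, consistent with the negative prevalence estimates reported in the experimental literature cited above.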
Proposition 2: In the RRT mode both types of respon-
dents, As and non-As, have no positive incentive to
answer truthfully or to follow the RRT instructions.
In conclusion, our analysis predicts that rational and self-
regarding respondents (under standard “homo economicus”
rationality assumptions) will in general not participate in
sensitive surveys and, if they do, will not answer truthfully.
To derive conditions under which respondents answer
truthfully (and comply with the RRT instructions) in sensitive
surveys, different motivational assumptions have to be
introduced into the model.
Relaxing Pessimistic Assumptions About the
Interviewer’s Trustworthiness: The Incomplete
Information Game
Our results about behavior in the direct and in the RRT
modes critically depend on respondents’ extremely pessimis-
tic beliefs about the type of the interviewer. However,
respondents may be more optimistic, in that, they know (in
game theoretic terms: have a common prior probability esti-
mate) that a fraction µ (1 > µ > 0) of interviewers is trust-
worthy. Thus, we employ the following (modified) behavioral
assumptions in the direct mode:
Assumption 3: Respondents with trait A will incur costs
C > 0 if privacy is not protected. Respondents of type
non-A, however, will have costs C = 0.
Assumption 4: The interviewer (or the organization that
employs the interviewer) is interested to know whether
the respondent has trait A.
Assumption 5’: There are two types of interviewers. One
type is trustworthy and is willing to protect the respon-
dent’s privacy. The payoffs are R* if privacy is protected
and T* if not. It holds R* > T* for this type. The other
type behaves opportunistically and avoids efforts or has
no interest in the protection of privacy. Thus, the inter-
viewer receives a payoff of R if she protects privacy and T
if she does not. We assume that T > R.
Assumption 6: Respondents (and interviewers) have a
common prior probability estimate as to the distribution
of both types of interviewers such that there is a fraction µ
of trustworthy interviewers and a fraction (1 − µ) of inter-
viewers who are opportunists.
Assumption 7: Interviewers know their type, but respon-
dents do not know the type of an interviewer who is part-
ner in a particular interview situation. The respondent
only is informed about parameter µ.
The rationale for Assumptions 5’ and 6 is that some
proportion of interviewers, or of the organizations that
administer the surveys, are intrinsically motivated to behave
in a trustworthy manner or want to acquire a good reputation.
However, according to Assumption 7, respondents are not
able to evaluate the trustworthiness of individual
interviewers. Figure 3 depicts the basic structure of the
incomplete information game representing the direct mode.
Examining the incomplete information game for the direct
mode is straightforward. Because lying is weakly dominant
whenever (as has been assumed) µ < 1, there is no incentive
to give a truthful answer, irrespective of how large the prior
µ is. For µ = 1, however, the game is equivalent to the situation
with complete information and with an interviewer who is
considered as being perfectly trustworthy.
Proposition 3: Type A respondents will only answer
truthfully in the direct mode if µ = 1, that is, if they are
perfectly certain that the interviewer is trustworthy.
A respondent type non-A (without sensitive trait) obvi-
ously (due to the assumption C = 0) has no incentive to lie in
the direct mode (not shown in Figure 3). Our analysis of the
incomplete information game easily extends to the RRT
interview. In this case, all the assumptions except Assumption
3 are kept as follows:
Assumption 3’: Respondents with trait A will, therefore,
incur costs C >> C′ > 0 if privacy is not protected.
Respondents with trait non-A who follow the instruction
to give an automatic “Yes” answer will similarly incur
costs C′ > 0 if privacy is not protected.
Assumption 4: The interviewer (or the organization that
employs the interviewer) is interested to know whether
the respondent has trait A.
Assumption 5’: There are two types of interviewers. One
type is trustworthy and is willing to protect the respon-
dent’s privacy. The payoffs are R* if privacy is protected
and T* if not. It holds: R* > T* for this type. The other type
behaves opportunistically and avoids efforts or has no
interest in the protection of privacy. Thus, the interviewer
receives a payoff of R if she protects privacy and T if she
does not. We assume that T > R.
Assumption 6: Respondents (and interviewers) have a
common prior probability estimate as to the distribution
of both types of interviewers such that there is fraction µ
of trustworthy interviewers and a fraction (1 − µ) of inter-
viewers who are opportunists.
Assumption 7: Interviewers know their type, but respon-
dents do not know the type of an interviewer who is part-
ner in a particular interview situation. The respondent
only knows parameter µ.
Figure 4 shows the game tree of this incomplete informa-
tion game in the RRT mode. Because the game is structurally
identical to the game in Figure 3, analogous results apply.
Proposition 4: Rational egoists in general will not answer
truthfully (or follow RRT instructions) in RRT surveys.
This even holds for “optimistic” beliefs 0 < µ < 1 and
irrespective of how large the cost C′ is. It furthermore holds
for both types of respondents, As and non-As.
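Proposition 4 follows from a direct expected-payoff comparison; a minimal sketch (the cost value is illustrative, the payoff structure follows the extended trust game in Figure 4):

```python
# Expected payoffs for a rational egoist in the incomplete-information RRT
# game: a lie always yields 0, whereas a truthful answer yields 0 with a
# trustworthy interviewer (probability mu) and -c_prime with an opportunist
# (probability 1 - mu). The cost value used below is illustrative.

def eu_truthful(mu, c_prime):
    """Expected utility of answering truthfully / following the RRT rules."""
    return mu * 0.0 + (1 - mu) * (-c_prime)

EU_LIE = 0.0  # lying is detection-proof, so its payoff is certain

for mu in (0.5, 0.9, 0.99, 1.0):
    print(f"mu = {mu}: EU(truthful) = {eu_truthful(mu, 2.0):+.2f} "
          f"vs EU(lie) = {EU_LIE:+.2f}")
```

Lying weakly dominates for every µ < 1; only at µ = 1 are the two payoffs equal, which reproduces Propositions 3 and 4.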
Introducing Social Preferences and Norms
There is by now a comprehensive literature in behavioral
game theory indicating the effects of social preferences and
of norms on cooperative behavior (see, for example, Camerer,
2003; Diekmann, 2004). In the survey methodology litera-
ture, a great deal of work assumes that participating in an
interview may depend on rewards (such as approval from the
interviewer) such that a motive of positive reciprocity is
Figure 3. Incomplete information in direct mode for respondent type A (extended trust game).
Note. Game tree: Nature draws a trustworthy interviewer with probability µ (payoffs R* > T*) or an opportunist with probability 1 − µ (payoffs T > R); the respondent chooses between a truthful answer and a lie; the interviewer then protects privacy or not. The respondent’s payoff is 0 for a lie, 0 for a truthful answer with protected privacy, and −C (C > 0) otherwise.
Figure 4. Incomplete information in RRT mode for respondent types A and non-A (extended trust game).
Note. RRT = randomized response technique. The game tree is identical to Figure 3 except that both types of respondents move and the respondent’s payoff from an unprotected truthful answer is −C′ (C′ > 0).
elicited on the part of the respondent. Sometimes this reciprocity
is associated with the activation of a “norm of truthful
answering” (Esser, 1990) prescribing that someone should
be honest and cooperative in a social interaction (e.g., in a
survey interview). This norm may possibly interfere with
another norm, which is relevant in this realm, namely, the
“norm of social desirability,” specifying that certain kinds of
behavior are negatively valued by society. Given this norm,
respondents with sensitive trait A will incur costs of embar-
rassment if they answer truthfully in the direct mode or if
there is a positive probability that the trait will be detected by
the interviewer in the RRT mode. This may in particular be
the case in face-to-face interview situations.4
There is of course a plethora of possible ways to model
social preferences and internalized norms in a game theoretic
context. Because, in this article, we only want to use the
most elementary modeling tools, we can represent these
ideas by the following assumptions, which apply to the inter-
view situation in the direct mode:
Assumption 3a: The respondent with trait A will incur
costs C > 0 if privacy is not protected. Respondents of
type non-A, however, will have costs C = 0. The cost may
(in addition to material sanctions) be related to the cost of
violating “the norm of social desirability.”
Assumption 3b: Respondents who answer truthfully and,
therefore, conform to “the norm of truthfulness” receive a
utility U > 0.
Assumption 4: The interviewer (or the organization that
employs the interviewer) is interested to know whether
the respondent has trait A.
Assumption 5: The interviewer avoids efforts or has no
interest in the protection of privacy. Thus, the interviewer
receives a payoff of R if she protects privacy and T if she
does not. We assume that T > R.
First, consider the direct mode under complete informa-
tion conditions including social norms, which is represented
in Figure 5. The figure covers two types of respondents:
Either U − C < 0 or U − C > 0. It is obvious that a strong
internalized norm to be honest (i.e., to answer truthfully in a
survey) or a weak norm of social desirability (a situation
covered by U − C > 0) is necessary as an incentive to answer
truthfully even under pessimistic assumptions about the
trustworthiness of the interviewer. If U − C > 0, the unique
subgame perfect Nash equilibrium is giving a truthful answer
and not protecting privacy.
Proposition 5: Rational respondents with internalized
norms of answering truthfully or of social desirability will
answer truthfully if and only if U − C > 0 in the direct
mode.
Introducing more optimistic beliefs as before to the direct
mode situation leads to our next result. We assume that there
is a nonzero probability of a trustworthy interviewer as
before.
Assumption 3a: The respondent with trait A will incur
costs C > 0 if privacy is not protected. Respondents of
type non-A, however, will have costs C = 0. The cost may
(in addition to material sanctions) be related to the cost of
violating “the norm of social desirability.”
Assumption 3b: Respondents who answer truthfully and,
therefore, conform to “the norm of truthfulness” receive a
utility U > 0.
Figure 5. Respondent type A in direct mode (simple trust game including social norms).
Note. Game tree: The respondent chooses between no truthful answer (payoff pair (0, 0)) and a truthful answer, after which the interviewer protects privacy (payoffs (U, R)) or not (payoffs (U − C, T)). Two respondent types are covered, (1) U > C > 0 and (2) C > U > 0, and the interviewer’s payoffs satisfy 0 < R < T.
Assumption 4: The interviewer (or the organization that
employs the interviewer) is interested to know whether
the respondent has trait A.
Assumption 5’: There are two types of interviewers. One
type is trustworthy and is willing to protect the respon-
dent’s privacy. The payoffs are R* if privacy is protected
and T* if not. It holds: R* > T* for this type. The other
type behaves opportunistically and avoids efforts or has
no interest in the protection of privacy. Thus, the inter-
viewer receives a payoff of R if she protects privacy and T
if she does not. We assume that T > R.
Assumption 6: Respondents (and interviewers) have a
common prior probability estimate as to the distribution
of both types of interviewers such that there is fraction µ
of trustworthy interviewers and a fraction (1 − µ) of inter-
viewers who are opportunists.
Assumption 7: Interviewers know their type, but respon-
dents do not know the type of an interviewer who is part-
ner in a particular interview situation. The respondent
only knows parameter µ.
The game model for the direct mode under incomplete
information conditions including social norms is depicted in
Figure 6.
For a respondent of type A, the following prediction can be
derived with respect to the direct mode: If µ exceeds the
critical probability µ* := 1 − (U / C), the respondent will
give a truthful “Yes” answer in the interview.
Proposition 6: Type A respondents will only answer
truthfully in the direct mode if the probability for the
interviewer’s trustworthiness µ exceeds the critical value
µ* := 1 − (U / C).
This result can again be applied to two types of respondents:
Either U − C < 0 or U − C > 0. If U − C > 0, the respondent
will always give a truthful answer. If U − C < 0, the
prediction for the respondent’s behavior will become more
sophisticated (see below).
If C is a positively increasing function of the item’s sensi-
tivity (i.e., the strength of the underlying “norm of social
desirability”) and U is independent of the item’s sensitivity
(we assume that U is a characteristic of the respondent), the
following hypothesis would result:
The larger the strength of the intrinsic motivation to
tell the truth U and the lower the sensitivity of the item
C (i.e., the weaker the underlying “norm of social
desirability”), the higher the tendency to answer
truthfully.
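This hypothesis can be illustrated by computing the critical trust level µ* = 1 − (U / C) for a few combinations of U and C; the parameter values below are hypothetical.

```python
# Critical trust level mu* = 1 - U / C above which a type-A respondent with
# internalized norms answers truthfully in the direct mode (Proposition 6).
# If U >= C, the respondent answers truthfully for any mu, so the threshold
# is clipped to 0. All parameter values are hypothetical.

def mu_star(u, c):
    return max(0.0, 1.0 - u / c)

# A stronger truthfulness norm U or a less sensitive item (lower C) lowers
# the trust level required for a truthful answer.
for u, c in [(0.5, 2.0), (1.0, 2.0), (1.0, 4.0), (2.5, 2.0)]:
    print(f"U = {u}, C = {c}: mu* = {mu_star(u, c):.2f}")
```

The last row (U > C, i.e., U − C > 0) yields a threshold of 0: the respondent answers truthfully regardless of the interviewer's trustworthiness.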
Conformity to the “norm of truthfulness” is immediately
recognized by the interviewer if the respondent gives a
self-stigmatizing “Yes” answer in the direct mode.
Furthermore, a “Yes” answer in the direct mode can be
interpreted as a strong signal to the interviewer that the
respondent values the “norm of truthfulness” highly. In
contrast, a respondent type non-A will always give a truth-
ful “No” answer in the direct mode.
Let us finally examine the RRT situation under condi-
tions of more optimistic beliefs about the interviewer and
for respondents with internalized norms. Now the follow-
ing assumptions apply:
Figure 6. Incomplete information in direct mode for respondent type A (extended trust game including social norms).
Note. Game tree: Nature draws a trustworthy interviewer with probability µ (payoffs R* > T*) or an opportunist with probability 1 − µ (payoffs T > R); the respondent’s payoff is 0 for a lie, U for a truthful answer with protected privacy, and U − C otherwise.

Assumption 3a’: Truthful answers of a respondent with
trait A reveal trait A with P(A | Yes) > P(A | No). If A is
detected, the respondent will incur a cost C > 0 of
becoming known to be an A. However, because the RRT
design implies that detecting an A is not perfect but
depends on the degree of privacy disclosure
P(A | Yes) − P(A | No), the expected cost of answering
truthfully is C′ := [P(A | Yes) − P(A | No)] · C. The
respondent with trait A will, therefore, incur costs
C >> C′ > 0 if privacy is not protected. The cost may (in
addition to material sanctions) be related to the cost of
violating “the norm of social desirability.”
Assumption 3b: Respondents who answer truthfully and,
therefore, conform to “the norm of truthfulness” receive a
utility U > 0.
Assumption 4: The interviewer (or the organization that
employs the interviewer) is interested to know whether
the respondent has trait A.
Assumption 5’: There are two types of interviewers. One
type is trustworthy and is willing to protect the respon-
dent’s privacy. The payoffs are R* if privacy is protected
and T* if not. It holds: R* > T* for this type. The other
type behaves opportunistically and avoids efforts or has
no interest in the protection of privacy. Thus, the inter-
viewer receives a payoff of R if she protects privacy and
T if she does not. We assume that T > R.
Assumption 6: Respondents (and interviewers) have a
common prior probability estimate as to the distribution
of both types of interviewers such that there is fraction µ
of trustworthy interviewers and a fraction (1 − µ) of inter-
viewers who are opportunists.
Assumption 7: Interviewers know their type, but respon-
dents do not know the type of an interviewer who is part-
ner in a particular interview situation. The respondent
only knows parameter µ.
The game model for the RRT mode under incomplete
information conditions including social norms is depicted in
Figure 7:
With respect to the RRT mode, the following predictions
could be derived for both types of respondents A and non-A:
If µ exceeds the critical probability µ** := 1 − (U / C′),
with C′ := [P(A | Yes) − P(A | No)] · C and µ* > µ**
(assumption: U and C are held constant across modes), both
types of respondents will follow the RRT procedure and are
expected to give an incriminating “Yes” answer in the
interview.
Conformity to the “norm of truthfulness” is not directly
recognized by the interviewer if the respondent answers
“Yes” in the RRT mode. For respondent type A, a truthful
answer is less costly in terms of subjective risks of being
punished (if interviewer is opportunistic) compared with the
direct mode. Furthermore, a “Yes” answer in the RRT mode
may be interpreted as a weak signal to the interviewer that
the respondent values the “norm of truthfulness” highly.
Proposition 7: In the RRT mode, both types of respon-
dents (with internalized norms) A and non-A will answer
truthfully and comply with the RRT instructions,
respectively, if the probability for the interviewer’s
trustworthiness µ exceeds the critical value µ** := 1 −
(U / C′).
Comparing Propositions 6 and 7 yields the following
proposition with respect to the probability to answer truth-
fully: The probability for As to answer truthfully is (holding
constant U and C across modes) higher in the RRT mode
than in the direct mode. In contrast, the probability for non-
As to answer truthfully and comply with the RRT instruc-
tions, respectively (holding constant U and C across modes),
is lower in the RRT mode than in the direct mode or equal in
both modes (depending on the respondent’s preferences,
either U − C < 0 or U − C > 0).

Figure 7. Incomplete information in RRT mode for respondent types A and non-A (extended trust game including social norms).
Note. RRT = randomized response technique. The game tree parallels Figure 6: Nature draws a trustworthy interviewer with probability µ (payoffs R* > T*) or an opportunist with probability 1 − µ (payoffs T > R); the respondent’s payoff is 0 for a lie, U for a truthful answer with protected privacy, and U − C′ otherwise.
Summary
In summary, Table 1 gives an overview of the conditions for
giving truthful answers under incomplete information for
respondent type and interview mode.
Our approach reveals that rational egoists will not answer
truthfully even in RRT surveys, if there is a positive proba-
bility that privacy is not protected, that is, the interviewers
are perceived as being not perfectly trustworthy: (1 − µ) > 0.
Introducing nonstandard preferences and norms is neces-
sary to explain truthful answering in sensitive surveys. For
type A respondents with normative orientations, the proba-
bility to answer truthfully is higher in the RRT mode than in
the direct mode, because µ** = 1 − (U / C′) < µ* = 1 − (U
/ C). For type non-A respondents with normative orientations,
in contrast, the probability to answer truthfully and
comply with the RRT instructions, respectively, is lower in
the RRT mode than in the direct mode or equal in both modes
(depending on the respondent’s preferences, either U – C <
0 or U – C > 0).
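The mode comparison can be sketched numerically. The snippet below computes C′ = [P(A | Yes) − P(A | No)] · C via Bayes' rule for the forced-response design used later in the Discussion (p = .75, λ = .125); the values of U, C, and the base rate are hypothetical.

```python
# Compare the critical trust levels mu* (direct mode) and mu** (RRT mode)
# for a type-A respondent with normative orientations. U, C, and the base
# rate are hypothetical; the design probabilities follow the article's
# forced-response example (p = .75, lam = .125).

def posteriors(base_rate, p_yes_a, p_yes_non_a):
    """Bayes' rule: P(A|Yes) and P(A|No) from base rate and design probs."""
    p_a_yes = base_rate * p_yes_a / (
        base_rate * p_yes_a + (1 - base_rate) * p_yes_non_a)
    p_a_no = base_rate * (1 - p_yes_a) / (
        base_rate * (1 - p_yes_a) + (1 - base_rate) * (1 - p_yes_non_a))
    return p_a_yes, p_a_no

p, lam = 0.75, 0.125              # P(Yes|A) = p + lam, P(Yes|non-A) = lam
u, c, base_rate = 1.0, 3.0, 0.05  # hypothetical preferences and prevalence

p_a_yes, p_a_no = posteriors(base_rate, p + lam, lam)
c_prime = (p_a_yes - p_a_no) * c  # expected cost of a truthful RRT answer

mu_star = max(0.0, 1 - u / c)               # direct-mode threshold
mu_double_star = max(0.0, 1 - u / c_prime)  # RRT-mode threshold

print(f"C' = {c_prime:.3f} (vs C = {c})")
print(f"mu* = {mu_star:.3f}, mu** = {mu_double_star:.3f}")
```

Because C′ < C, the RRT threshold never exceeds the direct-mode threshold; with these particular values it is even 0, that is, the type A respondent answers truthfully for any µ.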
Discussion
In this article, a simple game theoretic approach to the sur-
vey interview has been presented. Our analysis is based on
certain assumptions, which may be targets of critical com-
ments. With regard to the assumption, implied in most
theoretical work on this subject, that respondents are able to
act as if they could calculate posterior probabilities (based
on estimates of the unknown parameter π), it can be argued
that research from cognitive psychology has empirically
demonstrated that humans systematically deviate from rules
of Bayesian reasoning (cf. the seminal contributions by
Kahneman and Tversky, e.g., Kahneman, 2011). It seems
indeed cognitively quite demanding for naive and also for
experienced participants (“experts”) to correctly apply
Bayes’ rule. In fact, posterior probabilities are often severely
overestimated. To illustrate, this may be due to the so-called
inverse fallacy (Mandel, 2014). This heuristic confounds
P(A | Yes) with P(Yes | A). By design, P(Yes | A) equals
p + λ = .75 + .125 = .875 (assuming that respondents follow
the RRT instructions). If in addition P(A | No) and P(No | A)
are confounded too (P(No | A) equals 1 − p − λ = .125 by
design), the degree of privacy disclosure will erroneously be
estimated as .75. Another intuitive heuristic, which seems to
be quite common (see Mandel, 2014), namely, neglecting the
base rate in calculating or estimating posterior probabilities,
yields the same incorrect estimate for the degree of privacy
disclosure of .75. The correct value depends on the base rate
P(A), which is unknown but may be guessed by the
respondent if she is a member of, or if she is familiar with,
the stigmatized group. To illustrate, assuming a base rate
P(A) = .05 (and λ = q(1 − p) = .125 and that every subject
follows the RRT instructions), an application of Bayes’ rule
will result in P(A | Yes) = .269 and in a degree of privacy
disclosure of .262.5 In general, the degree of bias due to
these heuristics is minimal for values of the base rate in the
vicinity of P(A) = .5. It increases as P(A) → 0 and as
P(A) → 1 (see Figure 8). Thus, one might say that these
heuristics will not be adaptive with extreme values of P(A).
In these cases, the degree of privacy disclosure will be
severely overestimated. It is of course an empirical question
whether or not and to which degree respondents use
heuristics that generate biased subjective estimates for the
posterior P(A | Yes). We suggest that such biases make our
arguments stronger because the perceived expected costs of
answering truthfully become larger.
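The numbers in this paragraph can be reproduced in a few lines; a sketch contrasting the correct Bayesian degree of privacy disclosure with the heuristic estimate (design values p = .75 and λ = .125 as in the text):

```python
# Degree of privacy disclosure P(A|Yes) - P(A|No): the correct Bayesian
# value as a function of the base rate P(A), contrasted with the estimate
# produced by the inverse fallacy / base-rate neglect (design: p = .75,
# lam = .125, so P(Yes|A) = .875 and P(Yes|non-A) = .125).

p, lam = 0.75, 0.125
p_yes_a, p_yes_non_a = p + lam, lam

def disclosure(base_rate):
    """P(A|Yes) - P(A|No) via Bayes' rule."""
    p_a_yes = base_rate * p_yes_a / (
        base_rate * p_yes_a + (1 - base_rate) * p_yes_non_a)
    p_a_no = base_rate * (1 - p_yes_a) / (
        base_rate * (1 - p_yes_a) + (1 - base_rate) * (1 - p_yes_non_a))
    return p_a_yes - p_a_no

# The inverse fallacy confounds P(A|Yes) with P(Yes|A) and P(A|No) with
# P(No|A), yielding .875 - .125 = .75 independent of the base rate.
heuristic = p_yes_a - (1 - p_yes_a)

print(f"heuristic estimate: {heuristic:.3f}")
for base in (0.05, 0.5, 0.95):
    print(f"P(A) = {base:.2f}: Bayesian disclosure = {disclosure(base):.3f}")
```

For P(A) = .05 this reproduces the disclosure of .262 from the text; at P(A) = .5 the Bayesian value coincides with the heuristic's .75, the maximum shown in Figure 8.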
In the following, some further ideas are outlined: One
possible model extension is relaxing the assumption that U
and C are constant across modes. Instead, one could conjec-
ture that the “norm of truthfulness” may be less relevant to
Table 1. Conditions for Giving Truthful Answers Under
Incomplete Information for Respondent Type and Interview Mode.

                                           Direct questioning    RRT
Rational respondents with egoistic orientations
  Respondent is an A                       µ = 1                 µ = 1
  Respondent is a non-A                    Answers truthfully    µ = 1
Rational respondents with normative orientations
  Respondent is an A                       µ* = 1 − (U / C)      µ** = 1 − (U / C′)
  Respondent is a non-A                    Answers truthfully    µ** = 1 − (U / C′)

Note. µ, µ*, and µ** denote the critical values for the fraction of trustworthy
interviewers with µ* > µ**. RRT = randomized response technique.
Figure 8. The degree of privacy disclosure as a function of the
base rate P(A).
Note. The solid curve shows the degree of privacy disclosure as a function
of the base rate P(A) using Bayes’ rule; the maximum value is .75 for
P(A) = .5. The dotted line represents the erroneous estimate (.75) using
one of the cognitive heuristics (see text).
respondents in RRT surveys compared with respondents in
the direct mode (U and C may vary across modes). Because
a specific “Yes” or “No” answer is always ambiguous and
does not reveal anything definite about the respondent, con-
formity to the “norm of truthfulness” is not directly recog-
nized by the interviewer (i.e., a “Yes” answer in the RRT
mode cannot be interpreted as a strong signal that the respon-
dent values the “norm of truthfulness” highly).
Thus, one can think of a “crowding out” effect of the
“norm of truthfulness” in the RRT mode. Such “crowding
out” effects could be triggered by the unusual and complex
RRT procedure that would not activate habits and social
norms. More specifically, an anonymous interview situation
could reduce the intrinsic motivation to tell the truth in the
RRT mode (compared with the direct mode). In this case, one
could conjecture that µ* = 1 − (U / C) is not necessarily
larger than µ** = 1 − (U / C′).
A rational choice analysis of the social interaction in sen-
sitive surveys shows that modelling normative orientations
is necessary to explain the occurrence of self-stigmatizing
self-reports in sensitive surveys. More complex modeling
could be useful with regard to the relative strengths of (a) the
“crowding out” effect of the “norm of truthfulness” and (b)
the effect of a reduced cost of violations of privacy (C′ < C) in
the RRT mode to predict whether respondents of type A will
show a higher tendency for truthful answers in the RRT
mode. Focusing on type A respondents, some previous indi-
vidual validation studies comparing RRT with direct ques-
tioning yielded more valid results in the RRT condition
(Lensvelt-Mulders, Hox, van der Heijden, & Maas, 2005).
Thus, it could be speculated that the effect of a reduced cost
of privacy violations would outperform the assumed “crowd-
ing out” effect of the “norm of truthfulness” for respondents
of type A, that is, it would still hold that µ* = 1 − (U / C) >
µ** = 1 − (U / C′).
Furthermore, future theoretical and empirical studies
could focus on the impact of the RRT scheme on the innocu-
ous (type non-A) respondents’ tendency to answer truthfully
and comply with the RRT instructions, respectively. Whereas
respondents of type A might benefit from the RRT mode,
respondents type non-A might not: In our discussion of pre-
liminary research, we reviewed empirical studies document-
ing noncompliance with the RRT rules, self-protective “No”
answers, and negative prevalence estimates (Coutts & Jann,
2011; Holbrook & Krosnick, 2010). It is likely that these
problems are primarily driven by respondents type non-A.
This result is in accordance with our game theoretic model
predicting that the probability for non-As to answer truth-
fully and comply with the RRT instructions, respectively, is
lower in the RRT mode than in the direct mode or equal in
both modes (depending on the respondent’s preferences,
either U − C < 0 or U − C > 0; see the last paragraph of
section “Introducing Social Preferences and Norms”).
In regard to prevalence estimation using different data
collection methods, one could hypothesize that RRT failures
are more likely to occur with sensitive characteristics that are
less prevalent (e.g., heroin use) compared with ones that are
highly prevalent (e.g., alcohol use). This is because, in the
former case, a higher share of respondents type non-A exists,
for which the use of the RRT mode might be less beneficial
as our theoretical model suggests. In future empirical studies
focusing on different sensitive characteristics with varying
prevalence rates, this prediction could be directly tested.
However, note that the suggested manipulations will, in
many cases, affect not only the prevalence rates (and thus the
influence of self-protective answer behavior by respondents
type non-A) but also the costs: Attributes with low preva-
lence rates are also often very sensitive (e.g., heroin use),
whereas attributes with high prevalence rates tend to be less
sensitive (e.g., alcohol use). With increasing item sensitivity,
the extent of self-protective answer behavior (and also
the risk of RRT failure) is expected to increase. Researchers
designing an experimental test of our model’s prediction
should be aware of the potential of confounding between the
prevalence rate (i.e., the share of respondents type non-A)
and the item’s sensitivity.
Finally, possibilities and limits of game theoretic analyses
of the survey response process in sensitive surveys could be
further explored. In our article, we explicate and discuss the
theoretical foundation of the research on sensitive topics and
social desirability bias in the context of a general theory of
social interactions. Taking into account the interactive nature
of the interview situation in sensitive surveys, our work
advances former theoretical contributions (i.e., parametric
models of decision making; see Esser, 1986; Stocké, 2007b),
which conceptualized the choice whether or not to answer
truthfully as a parametric decision problem of the respondent
and not as a strategic situation. We think that our game theo-
retic model contributes to a better understanding of the psy-
chological processes and social interactions between the
actors (respondents, interviewers, and data collection institu-
tions) that are involved in the collection of sensitive data.
Empirical researchers could also benefit from our insights
providing them with a substantiated theoretical basis for
optimizing the survey design to achieve high-quality data:
Former theoretical papers assumed that all respondents give
truthful answers and follow the RRT procedure (e.g.,
Nayak, 1994). In contrast, our theoretical model argues
that these assumptions are questionable and predicts that
truthful responding is less likely for innocuous (type non-A)
respondents in the RRT mode than in the direct mode. To
increase the respondents’ motivation to comply with the RRT
instructions, careful design and pretesting of the concrete
RRT implementation as well as thorough interviewer
training seem reasonable strategies to generate better data.
RRT surveys should always be pretested very carefully. If the
pretests of a specific study indicate severe problems in regard
to the implementation of the RRT, alternative methods of pri-
vacy protection might be considered (e.g., self-administered
data collection, mixed mode designs, sealed envelope
techniques, or special wording approaches; for an overview,
see Krumpal, 2013; Tourangeau & Yan, 2007).
In regard to prevalence estimation, statistical methods
using a cheating extension of the RRT (e.g., Ostapczuk et al.,
2011; Reiber et al., 2020) should be used to account for self-
protective response behavior, especially in surveys in which
the characteristic under investigation is very sensitive or has
a low prevalence rate (i.e., in populations in which the share
of respondents type non-A is high). These considerations
regarding survey design and analysis are quite general in
nature. They are based on predictions of the proposed theory
that should be tested empirically in future research studies.
Appendix

Proof of P(A | Yes) > P(A | No)

There are two types of respondents, A (with sensitive trait)
and non-A (without sensitive trait), with the probabilities
P(A) = 1 − P(non-A). Note that P(A) is equal to the
unknown population proportion πA with the sensitive trait
A. For simplicity, only dichotomous items with possible
answers “Yes” or “No” are considered. Furthermore,
0 < P(A) = πA < 1 is a necessary condition for
P(A | Yes) > P(A | No) (see below).
First, the conditions are identified under which
P(A | Yes) > P(A | No). The randomized response technique
(RRT) format determines the design probabilities:

P(Yes | A) = 1 − P(No | A) and
P(Yes | non-A) = 1 − P(No | non-A)

Under the assumption that all respondents follow the RRT
instructions, Bayes’ theorem can be used to calculate the
conditional probabilities of being a member of the stigmatized
group A given a specific response:

P(A | Response) = P(A) P(Response | A) /
[P(A) P(Response | A) + P(non-A) P(Response | non-A)]

Substituting a specific response results in the two conditional
probabilities:

P(A | Yes) = P(A) P(Yes | A) /
[P(A) P(Yes | A) + P(non-A) P(Yes | non-A)]

and

P(A | No) = P(A) P(No | A) /
[P(A) P(No | A) + P(non-A) P(No | non-A)]

Next, the condition has to be identified under which
P(A | Yes) / P(A | No) > 1:

P(A | Yes) / P(A | No)
= [P(A) P(Yes | A)] / [P(A) P(Yes | A) + P(non-A) P(Yes | non-A)]
· [P(A) P(No | A) + P(non-A) P(No | non-A)] / [P(A) P(No | A)]
= [P(A) P(Yes | A) P(No | A) + P(non-A) P(Yes | A) P(No | non-A)] /
[P(A) P(Yes | A) P(No | A) + P(non-A) P(Yes | non-A) P(No | A)] > 1

With a := P(A) P(Yes | A) P(No | A), this reads

[a + P(non-A) P(Yes | A) P(No | non-A)] /
[a + P(non-A) P(Yes | non-A) P(No | A)] > 1

⇔ P(non-A) P(Yes | A) P(No | non-A)
> P(non-A) P(Yes | non-A) P(No | A)

⇔ P(Yes | A) P(No | non-A) > P(Yes | non-A) P(No | A)

From P(No | non-A) = 1 − P(Yes | non-A), it follows that

P(Yes | A) (1 − P(Yes | non-A)) > P(Yes | non-A) P(No | A)

⇔ P(Yes | A) − P(Yes | A) P(Yes | non-A)
> P(Yes | non-A) P(No | A)

⇔ P(Yes | A) > P(Yes | non-A) P(No | A) + P(Yes | A) P(Yes | non-A)

⇔ P(Yes | A) > P(Yes | non-A) (P(No | A) + P(Yes | A))

Because P(No | A) + P(Yes | A) = 1, the condition for
P(A | Yes) > P(A | No) to be true is

P(Yes | A) > P(Yes | non-A)
In the RRT design, the provision of a surrogate “Yes” or
“No” answer contains no information and is independent of
the respondent’s true status. Let us denote a truthful answer
to a sensitive question Q and a surrogate answer Q’, with
selection probability p = P(Q) = 1 − P(Q’). Let us further
assume that all respondents follow the RRT procedure. In the
case that the randomization device determines the provision
of a surrogate answer (with probability 1 − p), we obtain
PP(Yes |A Q’)= (Yes | non -A Q’)
∩∩
= q
Krumpal and Voss 15
In contrast, in the case that the randomization device
determines the provision of a truthful answer (with probabil-
ity p) we obtain
P(Yes |A Q) =1 and P(Yes | non -A Q)=0,
from which follows that
PP P
PP
(Yes |A)= (Yes |A Q) (Q)
+(Yes|AQ’) (Q’)
∩⋅
∩⋅ = p + q (1 − p) and
PP P
P
(Yes | non -A)= (Yes | non -A Q) (Q)
+(Yes|non -A Q’)P(Q’)
∩⋅
∩⋅
= q (1 − p)
Therefore, PP(Yes |A)> (Yes | non -A)
⇔+ −> pq pq p()
()11
⇔>
p0
Thus, P(A | Yes) > P(A | No) is true for any sensitive question selection probability p > 0 and any prevalence 0 < P(A) = πA < 1. In the RRT design, the difference P(A | Yes) − P(A | No) can be interpreted as the “degree of privacy disclosure.” Survey designers can maximize the degree of privacy protection, that is, minimize the degree of privacy disclosure, by choosing P(A | Yes) and P(A | No) reasonably close to each other (see Ljungqvist, 1993, for similar recommendations).
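The degree of privacy disclosure can be computed directly from Bayes’ rule. The following Python sketch (function names and parameter values are ours, for illustration only) shows how lowering the selection probability p pulls P(A | Yes) and P(A | No) closer together:

```python
def posterior_a(prior_a, p_yes_a, p_yes_non_a, answer):
    """P(A | answer) by Bayes' rule, given the prior P(A) and the
    answer probabilities of both respondent types."""
    if answer == "Yes":
        lik_a, lik_non_a = p_yes_a, p_yes_non_a
    else:
        lik_a, lik_non_a = 1 - p_yes_a, 1 - p_yes_non_a
    return prior_a * lik_a / (prior_a * lik_a + (1 - prior_a) * lik_non_a)

def disclosure(prior_a, p, q):
    """Degree of privacy disclosure P(A | Yes) - P(A | No) in the RRT design."""
    p_yes_a, p_yes_non_a = p + q * (1 - p), q * (1 - p)
    return (posterior_a(prior_a, p_yes_a, p_yes_non_a, "Yes")
            - posterior_a(prior_a, p_yes_a, p_yes_non_a, "No"))

# Illustration with prevalence pi_A = 0.2 and surrogate 'Yes' probability q = 0.5:
# a smaller p yields posteriors that lie closer together, i.e., more privacy.
assert disclosure(0.2, 0.1, 0.5) < disclosure(0.2, 0.7, 0.5)
```

The disclosure measure stays positive for any p > 0, consistent with the derivation, but a survey designer can shrink it toward zero by reducing p, at the familiar cost of statistical efficiency.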
Authors’ Note
The ordering of authorship is alphabetic.
Acknowledgments
Previous versions have been presented at seminars at Venice
International University and University of Leipzig. In addition to par-
ticipants of these seminars we thank three anonymous referees for
very helpful comments. Philipp Voss helped in providing Figure 8.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect
to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research and/or
authorship of this article.
ORCID iD
Ivar Krumpal https://orcid.org/0000-0003-2149-2203
Notes
1. A Nash equilibrium can be defined as “a set of strategies such
that each player has correct beliefs about the others’ strategies
and strategies are best for each player given beliefs about the
other’s strategies” (Dixit et al., 2009, p. 120). In addition, sub-
game perfectness reduces the number of potential equilibria
by using the criteria of credibility requiring players “to use
strategies that constitute a Nash equilibrium in every subgame
of the larger game” (Dixit et al., 2009, p. 198). In sections
“Relaxing Pessimistic Assumptions About the Interviewer’s
Trustworthiness: The Incomplete Information Game” and
“Introducing Social Preferences and Norms,” refinements of
equilibrium concepts for games with incomplete information
will be used. An easily accessible introduction can, for exam-
ple, be found in Osborne (2004).
2. Ljungqvist (1993) introduces a similar measure. Notice that
this measure in our context is not intended as a contribution to
the vast statistical literature on privacy measures, which may
help to optimize the design of RRT surveys with respect to
statistical criteria (e.g., efficiency).
3. Costs of survey participants belonging to type non-A respon-
dents may in fact be lower than C’ because they may actually
be able to provide credible evidence to the interviewer agency
that they are innocuous. This, however, would not affect our
argument provided that these costs are positive. For conve-
nience we, therefore, label these costs as C’.
4. “Social desirability” is a judgment about how highly valued a
particular trait is in the society (Groves, 1989). Some behav-
iors are positively valued by social norms (e.g., blood dona-
tion); other behaviors are negatively valued (e.g., illicit drug
use). Social norms are the basis of social desirability beliefs
(SD beliefs, in the literature often referred to as “trait desir-
ability,” for a discussion, see Krumpal & Näher, 2012; Stocké,
2007a). In a survey interview, respondents are reluctant to
reveal the possession of certain traits, which they believe will
be judged undesirable by society because they violate social
norms. In the following, we use the term “norm of social
desirability” to label social norms concerning which traits are
desirable in society.
5. Ignoring the base rate, one would (incorrectly) calculate P(A | Yes) by using

P(A | Yes) = P(Yes | A) / [P(Yes | A) + P(Yes | non-A)]

instead of (correctly) applying Bayes’ rule:

P(A | Yes) = P(A)⋅P(Yes | A) / [P(A)⋅P(Yes | A) + P(non-A)⋅P(Yes | non-A)]
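A brief numeric illustration of this note (Python; the probability values are arbitrary, not taken from the article): for a rare trait, dropping the base rate grossly overstates the posterior.

```python
def naive_posterior(p_yes_a, p_yes_non_a):
    # Incorrect: compares likelihoods only, ignoring the base rate P(A).
    return p_yes_a / (p_yes_a + p_yes_non_a)

def bayes_posterior(prior_a, p_yes_a, p_yes_non_a):
    # Correct: Bayes' rule weights each likelihood by the prior.
    return (prior_a * p_yes_a
            / (prior_a * p_yes_a + (1 - prior_a) * p_yes_non_a))

# With P(Yes|A) = 0.85, P(Yes|non-A) = 0.15, and a rare trait, P(A) = 0.05:
assert abs(naive_posterior(0.85, 0.15) - 0.85) < 1e-12   # base-rate neglect
assert bayes_posterior(0.05, 0.85, 0.15) < 0.25          # correct value is far lower
```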
References
Andreoni, J. (1990). Impure altruism and donations to public goods:
A theory of warm-glow giving. The Economic Journal, 100,
464–477.
Blair, G., Imai, K., & Zhou, Y.-Y. (2015). Design and analysis of
the randomized response technique. Journal of the American
Statistical Association, 110, 1304–1319.
Boruch, R. F. (1971). Assuring confidentiality of responses in social
research: A systematic analysis. The American Psychologist,
26, 413–430.
Buskens, V., & Raub, W. (2002). Embedded trust: Control and
learning. In E. J. Lawler & S. R. Thye (Eds.), Advances in
group processes (Vol. 19, pp. 167–202). Emerald Group.
Camerer, C. F. (2003). Behavioral game theory: Experiments in
strategic interaction. Russell Sage Foundation.
Chaudhuri, A., Christofides, T. C., & Rao, C. R. (Eds.). (2016).
Handbook of statistics (Vol. 34): Data gathering, analysis
and protection of privacy through randomized response tech-
niques: Qualitative and quantitative human traits. Elsevier.
Coleman, J. S. (1990). Foundations of social theory. The Belknap
Press of Harvard University Press.
Couper, M. P., Singer, E., Conrad, F., & Groves, R. (2008). Risk
of disclosure, perceptions of risk, and concerns about privacy
and confidentiality as factors in survey participation. Journal
of Official Statistics, 24, 255–275.
Coutts, E., & Jann, B. (2011). Sensitive questions in online surveys.
Experimental results for the randomized response technique
(RRT) and the unmatched count technique (UCT). Sociological
Methods & Research, 40, 169–193.
Coutts, E., Jann, B., Krumpal, I., & Näher, A. F. (2011). Plagiarism
in student papers: Prevalence estimates using special techniques
for sensitive questions. Jahrbücher für Nationalökonomie und
Statistik, 231, 749–760.
Cruyff, M., van den Hout, A., van der Heijden, P. G. M., &
Böckenholt, U. (2007). Log-linear randomized-response
models taking self-protective response behavior into account.
Sociological Methods & Research, 36, 266–282.
Diekmann, A. (2004). The power of reciprocity. Journal of Conflict
Resolution, 48, 487–505.
Dixit, A., Skeath, S., & Reiley, D., Jr. (2009). Games of strategy
(3rd ed.). W.W. Norton.
Esser, H. (1986). Können Befragte lügen? Zum Konzept des
“wahren Wertes” im Rahmen der handlungstheoretischen
Erklärung von Situationseinflüssen bei der Befragung [Can
interviewees lie? On the concept of "truth value" within the
framework of the theory-of-action explanation of situation
influences in interviews]. Kölner Zeitschrift für Soziologie und
Sozialpsychologie, 38, 314–336.
Esser, H. (1990). “Habits,” “Frames” und “Rational Choice”: Die
Reichweite von Theorien der rationalen Wahl (am Beispiel der
Erklärung des Befragtenverhaltens) [“Habits,” “frames,” and
“rational choice”: The scope of rational choice theories, using the
explanation of respondent behavior as an example]. Zeitschrift für
Soziologie, 19, 231–247.
Fehr, E., & Gächter, S. (2000). Fairness and retaliation: The eco-
nomics of reciprocity. Journal of Economic Perspectives, 14,
159–181.
Fox, J. A., & Tracy, P. E. (1986). Randomized response: A method
for sensitive surveys. SAGE.
Gouldner, A. (1960). The norm of reciprocity: A preliminary state-
ment. American Sociological Review, 25, 161–178.
Groves, R. M. (1989). Survey errors and survey costs. Wiley.
Groves, R. M., Singer, E., & Corning, A. (2000). Leverage-saliency
theory of survey participation: Description and an illustration.
Public Opinion Quarterly, 64, 299–308.
Hoffman, A., Waubert de Puiseau, B., Schmidt, A. F., & Musch, J.
(2017). On the comprehensibility and perceived privacy pro-
tection of indirect questioning techniques. Behavior Research
Methods, 49, 1470–1483.
Höglinger, M., & Jann, B. (2018). More is not always better: An
experimental individual-level validation of the randomized
response technique and the crosswise model. PLOS ONE,
13(8), Article e0201770. https://doi.org/10.1371/journal.pone.0201770
Höglinger, M., Jann, B., & Diekmann, A. (2016). Sensitive ques-
tions in online surveys: An experimental evaluation of different
implementations of the randomized response technique and the
crosswise model. Survey Research Methods, 10, 171–187.
Holbrook, A. L., & Krosnick, J. A. (2010). Measuring voter turnout
by using the randomized response technique: Evidence calling
into question the method’s validity. Public Opinion Quarterly,
74, 328–343.
Jann, B., Krumpal, I., & Wolter, F. (Eds.). (2019). Social desirabil-
ity bias in surveys—Collecting and analyzing sensitive data
[Special Issue of Methods, Data, Analyses (MDA)]. GESIS.
John, L., Loewenstein, G., Acquisti, A., & Vosgerau, J. (2018).
When and why randomized response techniques (fail to) elicit
the truth. Organizational Behavior and Human Decision
Processes, 148, 101–123.
Kahneman, D. (2011). Thinking, fast and slow. Allen Lane
(Penguin).
Kirchner, A. (2015). Validating sensitive questions: A comparison of
survey and register data. Journal of Official Statistics, 31, 31–59.
Krumpal, I. (2010). Sensitive questions and measurement error:
Using the randomized response technique to reduce social
desirability bias in CATI surveys [Doctoral dissertation].
University of Leipzig.
Krumpal, I. (2012). Estimating the prevalence of xenophobia and anti-
Semitism in Germany: A comparison of randomized response
and direct questioning. Social Science Research, 41, 1387–1403.
Krumpal, I. (2013). Determinants of social desirability bias in sen-
sitive surveys: A literature review. Quality & Quantity, 47,
2025–2047.
Krumpal, I., Jann, B., Auspurg, K., & von Hermanni, H. (2015).
Asking sensitive questions: A critical account of the random-
ized response technique and related methods. In U. Engel, B.
Jann, P. Lynn, A. Scherpenzeel, & P. Sturgis (Eds.), Improving
survey methods: Lessons from recent research (pp. 122–136).
Routledge.
Krumpal, I., & Näher, A. F. (2012). Entstehungsbedingungen sozial
erwünschten Antwortverhaltens: Eine experimentelle Studie
zum Einfluss des Wordings und des Kontexts bei unangenehmen
Fragen [Determinants of Social Desirability Bias: An Experimental
Online Study on the Impact of Forgiving Wording and Question
Context in Sensitive Surveys]. Soziale Welt, 63, 65–89.
Landsheer, J. A., van der Heijden, P. G. M., & Van Gils, G. (1999).
Trust and understanding, two psychological aspects of random-
ized response. Quality & Quantity, 33, 1–12.
Lee, R. M. (1993). Doing research on sensitive topics. SAGE.
Lensvelt-Mulders, G. J. L. M., Hox, J. J., & van der Heijden, P.
G. M. (2005). How to improve the efficiency of randomized
response designs. Quality & Quantity, 39, 253–265.
Lensvelt-Mulders, G. J. L. M., Hox, J. J., van der Heijden, P. G.
M., & Maas, C. J. M. (2005). Meta-analysis of randomized
response research: Thirty-five years of validation. Sociological
Methods & Research, 33, 319–348.
Ljungqvist, L. (1993). A unified approach to measures of privacy
in randomized response models: A utilitarian perspective.
Journal of the American Statistical Association, 88, 97–103.
Mandel, D. R. (2014). The psychology of Bayesian reasoning.
Frontiers in Psychology, 5, 1–4.
Näher, A. F., & Krumpal, I. (2012). Asking sensitive questions: The
impact of forgiving wording and question context on social
desirability bias. Quality & Quantity, 46, 1601–1616.
Nayak, T. K. (1994). On randomized response surveys for estimat-
ing a proportion. Communications in Statistics-Theory and
Methods, 23, 3303–3321.
Osborne, M. J. (2004). An introduction to game theory. Oxford
University Press.
Ostapczuk, M., Musch, J., & Moshagen, M. (2009). A random-
ized-response investigation of the education effect in attitudes
towards foreigners. European Journal of Social Psychology,
39, 920–931.
Ostapczuk, M., Musch, J., & Moshagen, M. (2011). Improving self-
report measures of medication non-adherence using a cheat-
ing detection extension of the randomised-response-technique.
Statistical Methods in Medical Research, 20, 489–503.
Rao, T. J., & Rao, C. R. (2016). Advances in randomized response
techniques. In A. Chaudhuri, T. C. Christofides, & C. R. Rao
(Eds.), Data gathering, analysis and protection of privacy
through randomized response techniques: Qualitative and
quantitative human traits (pp. 1–11). Elsevier.
Rasinski, K. A., Willis, G. B., Baldwin, A. K., Yeh, W. C., & Lee,
L. (1999). Methods of data collection, perceptions of risks and
losses, and motivation to give truthful answers to sensitive sur-
vey questions. Applied Cognitive Psychology, 13, 465–484.
Reiber, F., Pope, H., & Ulrich, R. (2020). Cheater detection
using the unrelated question model. Sociological Methods
& Research. Advance online publication. https://doi.org/10.1177/0049124120914919
Singer, E. (2011). Toward a benefit-cost theory of survey partici-
pation: Evidence, further tests, and implications. Journal of
Official Statistics, 27, 379–392.
Stem, D. E., & Steinhorst, R. K. (1984). Telephone interview and
mail questionnaire applications of the randomized response
model. Journal of the American Statistical Association, 79,
555–564.
Stocké, V. (2007a). Determinants and consequences of survey
respondents’ social desirability beliefs about racial attitudes.
Methodology, 3, 125–138.
Stocké, V. (2007b). The interdependence of determinants for the
strength and direction of social desirability bias in racial atti-
tude surveys. Journal of Official Statistics, 23, 493–514.
Tourangeau, R., & Yan, T. (2007). Sensitive questions in surveys.
Psychological Bulletin, 133, 859–883.
Tutic, A., & Voss, T. (2020). Trust and game theory. In J. Simon
(Ed.), The Routledge handbook of trust and philosophy (pp.
175–188). Taylor & Francis.
Voss, T. (1998). Vertrauen in modernen Gesellschaften—Eine
spieltheoretische Analyse [Trust in modern societies—A
game theoretic analysis]. In R. Metze, K. Mühler, & K.-
D. Opp (Eds.), Der Transformationsprozess (pp. 91–129).
Universitätsverlag.
Warner, S. L. (1965). Randomized response: A survey technique
for eliminating evasive answer bias. Journal of the American
Statistical Association, 60, 63–69.
Weissman, A. N., Steer, R. A., & Lipton, D. S. (1986). Estimating
illicit drug use through telephone interviews and the random-
ized response technique. Drug and Alcohol Dependence, 18,
225–233.
Wolter, F., & Preisendörfer, P. (2013). Asking sensitive ques-
tions: An evaluation of the randomized response technique
versus direct questioning using individual validation data.
Sociological Methods & Research, 42, 321–353.
Yu, J.-W., Tian, G.-L., & Tang, M.-L. (2008). Two new models
for survey sampling with sensitive characteristic: Design and
analysis. Metrika, 67, 251–263.