Money, Time, and Political Knowledge:
Distinguishing Quick Recall and Political Learning Skills
Markus Prior Princeton University
Arthur Lupia University of Michigan
Surveys provide widely cited measures of political knowledge. Do seemingly arbitrary features of survey interviews affect
their validity? Our answer comes from experiments embedded in a representative survey of over 1,200 Americans. A control
group answered political knowledge questions in typical survey contexts. One treatment group received a monetary incentive for answering the questions correctly. Another was given extra time. The
treatments increase the number of correct answers by 11–24%. Our findings imply that conventional knowledge measures
confound respondents’ recall of political facts with variation in their motivation to exert effort during survey interviews.
Our work also suggests that existing measures fail to capture relevant political search skills and, hence, provide unreliable
assessments of what many citizens know when they make political decisions. As a result, existing knowledge measures likely
underestimate people’s capacities for informed decision making.
A basic premise of democratic governance is that citizens use information about politics and policy when they make political choices. How such knowledge is distributed across the population affects who has political power. For these reasons, scholars measure political knowledge by asking factual questions in political surveys (e.g., “How long is the term of office for a U.S. Senator?”). These data yield a focal point for claims about citizens’ competence.
The surveys from which such conclusions are drawn have two noteworthy features. First, they do not offer respondents an explicit incentive to consider a question carefully or answer it thoughtfully. In the absence of such an incentive, observed frequencies of incorrect answers to
Markus Prior is assistant professor of politics and public affairs, Woodrow Wilson School of Public and International Affairs, Princeton University. Arthur Lupia is professor of political science and research professor, Institute for Social Research, University of Michigan, 426 Thompson Street, 4252 ISR, Ann Arbor, MI 48104 (email@example.com).
We thank the Center for Political Studies at the University of Michigan and the University Committee on Research in the Humanities and Social Sciences at Princeton University for funding this research. We thank Rick Li, Mike Dennis, Bill McCready, and Vicki Huggins at Knowledge Networks for their work on the survey, and John Bullock, Will Bullock, Michael Delli Carpini, James Druckman, Elisabeth Gerber, Martin Gilens, Jennifer Jerit, Orit Kedar, Jon Krosnick, and audiences at the Midwest Political Science Association meeting, the American Political Science Association meeting, and Princeton University for helpful advice.
political knowledge questions may reflect a lack of ef-
fort by the survey respondent rather than a true lack of
knowledge. Survey respondents may perform poorly on political knowledge tests not because they are incapable of answering correctly but because they are unwilling to exert the necessary effort. Two otherwise identical respondents may not be equally likely to answer a knowledge question correctly if one is more motivated than the other to consider the question and search her memory for an answer during a survey interview. Differential motivation within a survey interview can distort political knowledge measures.
Second, when political knowledge questions appear in surveys, respondents typically encounter them without warning. While some survey firms give respondents advance warning (e.g., a letter in the mail) that a survey is coming, many others give no details about the questions they will ask.

American Journal of Political Science, Vol. 52, No. 1, January 2008, Pp. 169–183
© 2008, Midwest Political Science Association. ISSN 0092-5853

In surveys that
contain political knowledge questions, the conventional practice is to treat immediate responses as valid measures of ability. While it is often reasonable to draw conclusions about a person’s ability from his or her immediate responses to unexpected survey questions, in other cases this kind of inference can backfire. To see how, consider a question such as “What percentage of the vote did John Kerry receive in Kansas in the 2004 general election?” Such questions from an eager undergraduate can
strike fear into the heart of many lecturers. Few political
scientists can answer such questions when they are asked
without warning. Although many scholars know where and how to find the answers, and would do so quickly if given the chance, the lecture setting usually precludes halting the interaction to consult trusted references. In such cases, mumbling something about “a book on my shelf” or “a website that has the answer” satisfies no one. Yet most lecturers would consider it unfair for students to base broad judgments about their competence on their responses in such circumstances. And yet evaluations of citizens’ capabilities whose evidentiary basis is poor performance on survey-based political knowledge measures rest on just this kind of inference.
This article uses survey experiments to examine the implications of these survey attributes for scholarly claims about political knowledge. The experiments show that standard survey measures of political knowledge do not reflect respondents’ abilities as well as they could because they underestimate important aspects of what citizens know. This study illustrates how greater attention to interview context and respondent psychology can clarify the meaning of existing data and help scholars draw more accurate inferences about what citizens know.
The article continues as follows. In the next section,
we motivate and explain the experimental design in de-
tail. Then, we describe the survey in which the experi-
ments were included. Next, we present the results of our
experiments. In the conclusion, we spell out further implications for how to better collect and interpret political knowledge data.
Political Knowledge (in Two Kinds of Memory)
To measure political knowledge in surveys, researchers typically use a set of factual questions about politics. Previous research has examined variation in how people answer these questions as it affects how scholars characterize what citizens know (e.g., Delli Carpini and Keeter 1996; Luskin 1987). However, survey research suggests that factors of a different kind can also influence response quality.
The pace of a survey interview is established in part by conversational norms (Schwarz 1996, chap. 5) and in part by the incentives of interviewers and respondents (Blair and Burton 1987; Krosnick and Alwin 1987). Interviewers often have an incentive to complete a certain number of interviews within a short period of time. Simultaneously, respondents often want to finish surveys quickly. Such dynamics can lead interviewers to move briskly from one question to the next and respondents to satisfice by offering answers without much thought.
Moreover, many scholars treat political knowledge questions as if respondents necessarily exert sufficient effort when answering them. This treatment is manifest in claims that incorrect responses to knowledge questions constitute prima facie evidence that citizens are ignorant of the queried facts. The idea that satisficing during the survey interview contributes to bad performance is not considered. It should be.
When asked to recall a fact in surveys, respondents draw upon a kind of memory known as “declarative memory.” With declarative memory, there is a correspondence between what a person can consciously state and what she has stored (Kandel, Schwartz, and Jessell 1995, 656–64; National Research Council 1994, 28–29). With minimal effort, a relatively small set of facts from declarative memory can be recalled. Therefore, respondents may fail to answer a question correctly even when the relevant fact is stored in memory. To the extent that existing political knowledge measures reflect such minimal-effort searches, they are likely to be biased downward.
Appropriate incentives can induce respondents to draw on a more extensive search of declarative memory. One kind of incentive, commonly used in experimental economics, is a monetary incentive:
“The presence and amount of financial incentive
does seem to affect average performance in many
tasks, particularly...where increased effort improves performance (and intrinsic motivation helps)...which are so mundane that monetary reward induces persistent diligence when intrinsic motivation wanes.” (Camerer and Hogarth 1999, 8)
In our first experiment, we use a monetary incentive to motivate more thorough searches of declarative memory. Respondents in a randomly assigned treatment group receive a monetary reward ($1) every time they answer a politically relevant knowledge question correctly. A control group answers the same questions under standard survey conditions—no payment for a correct answer.
This experiment allows us to evaluate an important null hypothesis:
Null Hypothesis #1: Knowledge questions in typical surveys elicit respondents’ best efforts to retrieve political facts in memory. Providing an incentive for correctly answering knowledge questions will not affect the likelihood of offering a correct answer.
Regardless of the precise nature or magnitude of the incentive, rejecting our first null hypothesis would demonstrate that typical survey procedures do not elicit all of the politically relevant facts that respondents have stored. It would imply that people acquire and store more political information than previous research suggests.
A second attribute of memory also affects how we should interpret political knowledge data. Cognitive psychologists distinguish fact-based declarative memory from “procedural memory.” Procedural memory is the long-term memory of skills and procedures.1 It “accumulates slowly through repetition over many trials...[and can]not ordinarily be expressed in words” (Kandel, Schwartz, and Jessell 1995, 658). Knowing where and how to find things is an important form of this kind of memory. To figure out which candidate they prefer or how they feel about an issue, citizens can draw on procedural memories of how to gather information that might help their decision.
There are circumstances when using procedural
memory to find information is at least as important a
performance criterion as the ability to recall facts instan-
taneously. In many aspects of life, people expand their
capabilities by using file drawers or computers to orga-
nize large amounts of information in ways that permit
quick retrieval when they need it. But most political sur-
veys offer people no opportunity to draw on analogous
sources of political knowledge, even though they can do
so when making political decisions.
1Many scholars use the terms “declarative” and “procedural” to distinguish the two kinds of memory. Kandel et al. (1995, 656) refer to declarative memory as “explicit” memory and to procedural memory as “implicit” memory.
Because many surveys inhibit respondents from using procedural memories, our second experiment allows some of our respondents to utilize that resource. In it, we give (a randomly selected) half of our respondents only one minute to answer each question, whereas the other half can take 24 hours to respond. This variation changes a knowledge quiz into a knowledge hunt. Conceptually, it transforms a measure of quick recall into a measure of the ability to find the correct answers to political knowledge questions—an ability we call political learning skills. This experimental design allows us to evaluate our second null hypothesis:
Null Hypothesis #2: Quick recall and the ability to find correct answers when given an opportunity (political learning skills) are not sufficiently different to require separate measurement. Even if giving respondents extra time increases the number of correct answers, the change will be uninteresting because it will simply amplify differences between strong and weak quick-recall performers.
In what follows, we present experimental evidence sufficient to reject both null hypotheses. On average, offering a small monetary incentive led to an 11% increase in the number of questions answered correctly. Offering extra time had an even larger effect. Simply offering people a little money for responding correctly or extra time to respond does not turn them into walking encyclopedias, but it does affect how they answer knowledge questions. A substantial share of those who appear to be “know-nothings” in existing research on political knowledge can answer questions correctly when given a small incentive or extra time to do so.
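The percentage increases reported here compare mean numbers of correct answers across randomly assigned groups: the difference between the treatment and control means, expressed relative to the control mean. A minimal sketch of that arithmetic, using hypothetical scores rather than the study’s data:

```python
# Illustrative only: hypothetical scores, not the study's data.
# Percent increase = (treatment mean - control mean) / control mean * 100.

def percent_increase(control_scores, treatment_scores):
    """Percent change in mean correct answers, treatment vs. control."""
    control_mean = sum(control_scores) / len(control_scores)
    treatment_mean = sum(treatment_scores) / len(treatment_scores)
    return 100 * (treatment_mean - control_mean) / control_mean

# Hypothetical counts of correct answers per respondent:
control = [7, 8, 6, 9, 7, 8]   # standard survey conditions
paid = [8, 9, 7, 10, 8, 8]     # $1 per correct answer

print(round(percent_increase(control, paid), 1))  # prints 11.1
```

With these made-up scores, the paid group answers about 11% more questions correctly than the control group, matching the scale of the effect described above.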
The Experimental Design
Our experiments vary factors that affect respondents’ ability to answer knowledge questions correctly and the time offered to complete knowledge questions.
APPENDIX TABLE 1 Continued
Question ID — Question Wording (Correct Response Marked “[correct]”; open-ended items show the correct value)

Unemployment rate — The U.S. Bureau of Labor Statistics counts a person as unemployed if they are not employed at any job and are looking for work. By this definition, what percentage of Americans was unemployed in August 2004?
• around 11 percent
• around 9 percent
• around 7 percent
• around 5 percent [correct]
• around 3 percent

Estate tax — There is a federal estate tax—that is, a tax on the money people leave to others when they die. What percentage of Americans leaves enough money to others for the federal estate tax to kick in?
• About 95 percent of all Americans
• About 70 percent of all Americans
• About 50 percent of all Americans
• About 25 percent of all Americans
• Less than 5 percent of all Americans [correct]

Health insurance — In August 2004, the United States Census Bureau reported an estimate of the number of Americans without health insurance. The Census Bureau classified people as uninsured if they were not covered by any type of health insurance at any time in 2003. By this definition, what percentage of Americans did not have health insurance in 2003? (open-ended; correct: 15.6 percent)

Public debt — The outstanding public debt of the United States is the total amount of money owed by the federal government. Every year the government runs a deficit, the size of the public debt grows. Every year the government runs a surplus, the size of the public debt shrinks. In January of 2001, when President Bush took office, the outstanding public debt of the United States was approximately 5.7 trillion dollars. Which of the following responses is closest to the outstanding public debt today?
• Less than 3.5 trillion dollars
• 4.5 trillion dollars
• 5.5 trillion dollars
• 6.5 trillion dollars
• 7.5 trillion dollars [correct]
• 8.5 trillion dollars
• More than 9.5 trillion dollars

Tax cuts — John Kerry says that he would eliminate the Bush tax cuts on families making how much money?
• Over $50,000 a year
• Over $100,000 a year
• Over $150,000 a year
• Over $200,000 a year [correct]
• Over $500,000 a year

Poverty rate — In August 2004, the Census Bureau reported how many Americans live in poverty. The poverty threshold depends on the size of the household. For example, a person under age 65 is considered to live in poverty if his or her 2003 income was below $9,573 and a family of four is considered to live in poverty if its 2003 income was below $18,810. By this definition, what percentage of Americans lived in poverty in 2003? (open-ended; correct: 12.5 percent, with a range of answers accepted)
References

Bassi, Anna, Rebecca Morton, and Kenneth Williams. 2006. “Incentives, Complexity, and Motivations in Experiments.” Presented at the annual summer meeting of the Society for Political Methodology, University of California.
Blair, Edward A., and Scot Burton. 1987. “Cognitive Processes Used by Survey Respondents to Answer Behavioral Frequency Questions.” Journal of Consumer Research 14(2): 280–88.
Camerer, Colin F., and Robin M. Hogarth. 1999. “The Effects of Financial Incentives in Experiments: A Review and Capital-Labor-Production Framework.” Journal of Risk and Uncertainty 19(1–3): 7–42.
Converse, Philip E. 1964. “The Nature of Belief Systems in Mass Publics.” In Ideology and Discontent, ed. David E. Apter. New York: Free Press, 206–61.
Delli Carpini, Michael X., and Scott Keeter. 1996. What Amer-
icans Know About Politics and Why It Matters. New Haven,
CT: Yale University Press.
Johnston, Richard, Michael G. Hagen, and Kathleen Hall Jamieson. 2004. The 2000 Presidential Election and the Foundations of Party Politics. Cambridge: Cambridge University Press.
Kandel, Eric R., James H. Schwartz, and Thomas M. Jessell.
1995. Essentials of Neural Science and Behavior. Norwalk,
CT: Appleton and Lange.
Kinder, Donald R., and David O. Sears. 1985. “Public Opinion
and Political Action.” In The Handbook of Social Psychology, 3rd ed., ed. Gardner Lindzey and Elliot Aronson. New York:
Random House, 659–741.
Krosnick, Jon A., and Duane F. Alwin. 1987. “An Evaluation of a Cognitive Theory of Response-Order Effects in Survey Measurement.” Public Opinion Quarterly 51(2): 201–19.
Krosnick, Jon A., Allyson L. Holbrook, Matthew K. Berent, Richard T. Carson, W. Michael Hanemann, Raymond J. Kopp, Robert Cameron Mitchell, Stanley Presser, Paul A. Ruud, V. Kerry Smith, Wendy R. Moody, Melanie C. Green, and Michael Conaway. 2002. “The Impact of ‘No Opinion’ Response Options on Data Quality: Non-Attitude Reduction or an Invitation to Satisfice?” Public Opinion Quarterly 66(3): 371–403.
Lodge, Milton, Marco R. Steenbergen, and Shawn Brau. 1995. “The Responsive Voter: Campaign Information and the Dynamics of Candidate Evaluation.” American Political Science Review 89(2): 309–26.
Luskin, Robert C. 1987. “Measuring Political Sophistication.”
American Journal of Political Science 31(4): 856–99.
Mondak, Jeffery J. 2001. “Developing Valid Knowledge Scales.”
American Journal of Political Science 45(1): 224–38.
Mondak, Jeffery J., and Belinda Creel Davis. 2001. “Asked
and Answered: Knowledge Levels When We Will Not Take
‘Don’t Know’ for an Answer.” Political Behavior 23(3): 199–224.
National Research Council. 1994. “Transfer: Training for Per-
formance.” In Learning, Remembering, Believing: Enhancing
Human Performance, ed. Daniel Druckman and Robert A.
Bjork. Washington, DC: National Academy Press, 25–56.
Rahn, Wendy M., John H. Aldrich, and Eugene Borgida. 1994. “Individual and Contextual Variations in Political Candidate Appraisal.” American Political Science Review 88(1): 193–99.
Schwarz, Norbert. 1996. Cognition and Communication: Judg-
mental Biases, Research Methods, and the Logic of Conversa-
tion. Mahwah, NJ: Lawrence Erlbaum.