
Pretesting with Multiple-choice Questions Facilitates Learning


Jeri L. Little
Department of Psychology, 1285 Franz Hall, Box 951563
Los Angeles, CA 90024 USA
Elizabeth Ligon Bjork
Department of Psychology, 1285 Franz Hall, Box 951563
Los Angeles, CA 90024 USA
Abstract

Taking a test before study can improve subsequent learning of
that pretested information. How the pretest affects
subsequent learning of other information in the passage is less
clear, however. In three experiments, we examined the
consequences of taking a multiple-choice (MC) pretest on the
later recall of both pretested and non-pretested related
information, finding that pretesting improved recall of
pretested information without impairing recall of non-
pretested information. In addition, we compared a pretest
condition to conditions in which subjects were told to
memorize the questions and in which subjects studied facts
prior to reading. Although taking a pretest was not
significantly more effective than memorizing questions or
studying facts for the pre-exposed information, it did not
impair the learning of related information, whereas studying
facts did. Thus, even when an MC pretest takes time away
from study, that pretest appears to make subsequent study
more effective than other types of activities that pre-expose
students to to-be-tested information.
Keywords: pretesting, testing effects, multiple-choice
In addition to assessing learning, tests can enhance learning.
Testing information after study improves later recall more
than additional study (see, e.g., Roediger & Karpicke,
2006). Likewise, testing information before study (i.e.,
pretesting) has been shown to improve subsequent learning
of the pretested information (e.g., Kornell, Hays, & Bjork,
2009; Richland, Kornell, & Kao; 2009; Rothkopf, 1966),
although some evidence suggests that pretesting can have
negative consequences (e.g., persistence of errors, Fritz,
Morris, Bjork, Gelman, & Wickens, 2000). How pretest-taking
affects subsequent learning of information not tested, in
particular related information, is less clear, but
of concern for both practical and theoretical reasons. In the
present work, we examine the effects of pretesting on the
later recall of both tested and nontested related information.
The Effect of Pretesting on Pretested Information
Testing information after study may improve its later recall
because the act of retrieving information from memory
modifies its representation in such a way as to make it more
recallable in the future than it would have been otherwise
(e.g., Bjork, 1975; Bjork & Bjork, 1992). When given a
pretest, however, correct retrieval of answers is unlikely
given that students have not yet been exposed to the to-be-
learned material. Thus, the observed improved recall of
pretested information should reflect the consequence of
processes other than successful retrieval.
Pretesting may be beneficial because it encourages more
active involvement in learning, perhaps by increasing
general interest in the topic. Additionally, the pretest may
help students to discern what information is most important
or what type of information the teacher is likely to test later.
Thus, a pretest may lead to better recall for the previously
tested information because it directs attention to the need to
encode that information when encountered again during
subsequent study (Hamaker, 1986).
Possible Negative Consequences of Pretesting with
Multiple-choice (MC) Tests Answering a question
incorrectly may strengthen the erroneous response and
decrease one’s ability to learn the correct information later
(e.g., Fritz et al., 2000). A negative characteristic of MC
tests, in particular, is that they expose students to incorrect,
but often attractive, alternatives, which can lead students to
intrude those incorrect alternatives on later cued-recall tests
(e.g., Roediger & Marsh, 2005). Of particular concern
when using MC pretests is the finding by Butler and
Roediger (2008) that intrusions increase when participants
take a test without having studied the relevant information
beforehand. On the other hand, Butler and Roediger also
found that students were likely to change erroneous
responses made on the pretest to correct answers on a later
test when feedback was given. Therefore, the intrusion of
errors on a later test might be reduced or eliminated as a
consequence of being able to read the passage after taking
the MC pretest.
The Effect of Pretesting on Related Information
The effect of pretesting on the subsequent learning of
related information not tested on the pretest has been
investigated in a variety of studies (e.g., see Anderson &
Biddle, 1975; and Hamaker, 1986 for meta-analyses).
Evidence suggests that pretesting directs attention towards
the processing of pretested information. Pretesting appears
to improve subsequent learning of information that is related
to pretested information, but perhaps only when the tested
and nontested information are related in certain ways (e.g.,
the related information aids in searching for the pretested
information during subsequent study). To the extent that
non-pretested information does not aid in the search task,
however, such non-pretested information may not be better
learned during subsequent study, even if the pretested and
non-pretested information have a systematic relationship.
For example, non-pretested information that has a
competitive relationship with the pretested information
(e.g., the answer to one question in a pair of related
questions would be a plausible, although incorrect answer,
to the other question in the pair) might not be useful in the
search task. A pretest might thereby impair the learning of
competitive related information. To our knowledge, past
research has not examined the effect of pretesting on the
later learning of competitive non-pretested information.
Benefits for Related Information with Multiple-Choice
Tests Little and Bjork (2010) demonstrated that taking an
MC test following the study of a text passage improves the
later recall of both tested and competitive nontested
information, if the answer to a competitive question
appears as an incorrect alternative in the initial MC test.
They argued that MC alternatives with competitive incorrect
alternatives encourage students to recall information that not
only confirms why the right answer is correct, but also why
the other alternatives are incorrect. In a pretesting situation,
students would not have access to information that would
allow them to reject incorrect alternatives. The MC pretest,
however, would still encourage participants to examine the
alternatives thoroughly (perhaps in hope that they have
some background information that would enable them to
reject one or more alternatives to make a better guess). In
this process, students may encode not only the question, but
also the alternatives. Consequently, when reading the
passage after the pretest, students may direct their attention
not only to the processing of information that would answer
the pretested questions, but also to the processing of
information having to do with the alternatives, information
that might otherwise interfere with the search for pretested
information. From this perspective, it seems likely that
pretest MC questions could be effective in improving
learning of information that has a competitive relationship
with the pretested information.
The Present Work
In the present research, we aimed to assess whether MC
pretests provide benefits for tested information that
outweigh the potential costs of errors made on the initial test
and whether such tests can improve learning of nontested
competitive information. In addition, we explored whether
benefits or costs occur simply as a consequence of being
exposed to to-be-tested information before study or whether
the act of trying to answer questions engages processing that
leads to these effects. Specifically, in Experiments 2 and 3,
we investigated whether a benefit of test-taking (although
mostly unsuccessful) exceeds that of exposure to material,
by comparing pretesting to a condition in which questions
are memorized, but not answered, before reading the
passage (Experiment 2) and a condition in which
participants study facts that could have been tested prior to
reading (Experiment 3). Our expectation was that testing
would provide benefits (for both tested and related
information) not afforded by spending the full time
studying, and that these benefits would be larger than those
obtained with other pre-exposure activities.
Experiment 1
Participants and design Twenty-five students at the
University of California, Los Angeles, participated for
partial course credit. Condition of study (pretested vs.
extended-study) was manipulated within-subjects. On the
final cued-recall test, all participants answered pretested and
non-pretested related questions from the passage that was
preceded by a pretest and questions from the passage that
was not preceded by a pretest (baseline control).
Materials Two passages were constructed, one about
Saturn and one about Yellowstone National Park (~800
words), and ten pairs of MC questions were created for each
passage. The two questions in each pair tested the same
topic (e.g., geysers) and had the same four alternatives (e.g.,
Old Faithful, Steamboat Geyser, Castle Geyser, and Daisy
Geyser), but different correct answers (e.g., What is the
tallest geyser in Yellowstone National Park? Answer:
Steamboat Geyser; and, What is the oldest geyser in
Yellowstone National Park? Answer: Castle Geyser). The
questions from each passage were randomly divided into
two 10-item sets (for each pair, one question was in Set A
and one question was in Set B), such that one set would be
tested on the pretest and the other set would serve as related
questions on the final test (counterbalanced). Passage order
and condition order were also counterbalanced across participants.
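The set assignment and counterbalancing described above can be sketched as follows. This is a hypothetical reconstruction, not the authors' actual materials code: the item names, cell labels, and rotation scheme are illustrative assumptions. Each of the ten question pairs contributes one member to Set A and one to Set B, and the 2 x 2 x 2 cells (pretested passage x passage order x pretested set) rotate across participants.

```python
from itertools import product

# Hypothetical item names standing in for the real question pairs.
pairs = [(f"pair{i}_q1", f"pair{i}_q2") for i in range(1, 11)]
set_a = [q1 for q1, _ in pairs]   # one member of each pair
set_b = [q2 for _, q2 in pairs]   # the other member

passages = ["Saturn", "Yellowstone"]
# 2 passages x 2 passage orders x 2 set assignments = 8 counterbalancing cells.
cells = list(product(passages, ["Saturn-first", "Yellowstone-first"], ["A", "B"]))

def assign(participant_id):
    """Rotate one participant through the 2 x 2 x 2 counterbalancing cells."""
    pretested_passage, order, pretest_set = cells[participant_id % len(cells)]
    pretest_items = set_a if pretest_set == "A" else set_b
    related_items = set_b if pretest_set == "A" else set_a
    return pretested_passage, order, pretest_items, related_items
```

With this scheme, every eighth participant repeats a cell, so each combination of passage, order, and set occurs equally often across a multiple-of-eight sample.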
Procedure Each participant read two passages: one that
would be preceded by an initial MC test (4 min test; 10 min
study) and one that would be studied for the full time (14
min study). For the pretested condition, the ten MC
questions (i.e., all the items in one of the question sets for
that passage) were presented one at a time on the computer
screen for 22 s. Participants were told that although they
had not yet read the passage, the experimenters wanted to
assess how much they already knew about the topic. In
addition, participants were told that they would take a later
test on the topic and that the questions on the pretest would
provide them with an idea of the type of information that
would be tested later. No corrective feedback was given
during the test. For the passage that was not tested
(control), participants were given the full 14 min to read the
passage, and they were told that if they finished reading
early, they should spend the remainder of the time studying.
Finally, after a 5-min retention interval during which they
played Tetris (a spatial-reasoning puzzle game), participants
received a 40-item final cued-recall test, with the questions
presented one at a time on the computer screen. For the
pretested condition, except for the absence of alternatives,
half of the questions were identical to the MC questions
(i.e., pretested) and half were the non-pretested related items
(i.e., the 10 questions from the set that had not been tested).
Related questions were always tested in the first half of the
test, along with half of the control questions, to which their
performance would be compared. Similarly, previously
tested questions were tested in the second half of the test,
along with half of the control questions, to which their
performance would be compared.
Results and Discussion
Pretest Performance Performance on the MC pretest (M =
27%, SD = 12%) was not significantly different from chance
performance (25%), t(24) = 0.89, p > .05.
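The chance-level comparison above is a one-sample t test against the four-alternative guessing rate. A minimal sketch of that computation from the reported summary statistics (the small gap from the reported t(24) = 0.89 presumably reflects rounding of M and SD in the text):

```python
import math

def one_sample_t(mean, sd, n, mu0):
    """t statistic for a one-sample test against a fixed chance level."""
    return (mean - mu0) / (sd / math.sqrt(n))

# Experiment 1 summary statistics from the text (M = .27, SD = .12, n = 25),
# tested against the four-alternative chance level of .25.
t = one_sample_t(0.27, 0.12, 25, 0.25)  # roughly 0.83
```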
Final-test Performance We found that taking an MC
pretest in lieu of additional time spent studying improved
recall of that pretested information, but not recall of related
information.
Performance in the pretested condition was compared to
that in the corresponding extended-study control condition
via planned paired-samples t tests. Specifically, these
comparisons revealed that pretested questions (M = 61%, SE
= 4%) were answered correctly more often than were
control questions from the topic that did not receive a
pretest (M = 43%, SE = 4%), t(24) = 5.26, p < .05. No such
benefit occurred for questions of non-tested related
information (M = 48%, SE = 4%) as compared to control
questions (M = 46%, SE = 5%), t(24) = 0.35, p > .05.
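The planned comparisons above are paired-samples t tests, which reduce to a one-sample t test on each subject's within-condition difference score. A stdlib-only sketch with invented per-subject proportions (the data are illustrative, not the actual scores):

```python
import math
from statistics import mean, stdev

def paired_t(x, y):
    """Paired-samples t: a one-sample t on the within-subject differences."""
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))

# Illustrative per-subject proportions correct (made up for this sketch):
# pretested items vs. matched extended-study control items.
pretested = [0.7, 0.6, 0.5, 0.8, 0.6]
control   = [0.5, 0.4, 0.4, 0.6, 0.5]
t = paired_t(pretested, control)
```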
Our results suggest that taking a pretest is beneficial for
learning. Previous work, however, has suggested that
spending part of one’s study time taking a test, for which
one is unlikely to know the correct answers might not lead
to the pattern of results that we obtained: Rather, that errors
would persist (e.g., Fritz et al., 2000).
Indeed, participants did sometimes recall previously
incorrect alternatives on the final test. Because all of the
alternatives were contained in the text, however, participants
intruded these responses in the extended study condition as
well. Of interest regarding the costs of MC testing is
whether intrusions are greater after taking a pretest than
after extended study. Table 1 shows recall (intrusion) rates
for incorrect alternatives (or items that could have been
incorrect alternatives) on the final test for the experimental
and control conditions in Experiments 1, 2, and 3. As
shown in Table 1, taking an MC test did not increase
intrusion of incorrect information.
Table 1: Intrusion Percentages for Experiments 1, 2, & 3
[Table values not recoverable from this copy; columns compared
previously tested, memorized, or studied items with extended-study
controls.]
Additionally, participants were not, as a consequence of
spending some study time taking a test, impaired in their
ability to answer correctly related, but initially non-pretested
questions. Students were told that the pretest would “give
them an idea of the questions to expect,” in order to steer
them away from focusing on the pretested information
during subsequent study. It was clear from the high
performance on pretested items, however, that,
purposefully or not, they paid significant attention to
pretested information. In fact, it is more likely that
participants directed attention towards the processing of
pretested information than towards the processing of non-
pretested information. For this reason, we believe that a
numerical benefit for related information given significantly
less study time for that tested passage (10 min compared to
14 min, respectively) is worth consideration.
Experiment 2
In Experiment 1, we found that taking a pretest improved
the effectiveness of a subsequent reading session, even
though the pretest took time away from reading and
participants answered a majority of the questions incorrectly
on the pretest. Specifically, recall of pretested information
was improved. Moreover, recall of related information was
not hurt, suggesting that the 10 min of studying after a test
was as effective as studying for the full 14 min without a
pretest.
It is uncertain, however, whether trying to answer
questions would increase the effectiveness of study beyond
what would occur from simply being exposed to those
questions beforehand. Previous work has demonstrated that
testing (with cued-recall questions) improves retention more
so than does reading those questions before study (Richland,
Kornell, & Kao, 2009). In Experiment 2, we explored
whether trying to answer an MC question leads to improved
performance as compared to memorizing that question and
what impact this difference in processing would have on the
learning of competitive related information. On the one
hand, if trying to answer a question engages a deeper level
of processing than memorizing a question, then answering
questions should provide a benefit that outweighs that of
memorizing questions. On the other hand, perhaps
answering questions would increase the encoding of
misinformation as compared to memorizing questions.
Participants and design Sixty-four students at the
University of California, Los Angeles, participated for
partial course credit. Condition of study (question-study vs.
extended-study) was manipulated within-subjects. Type of
instructions for the question-study condition (pretest vs.
memorize) was manipulated between subjects. On the final
cued-recall test, all participants answered pretested or
memorized questions and non-pretested or non-memorized
related questions from the passage that was preceded by
questions, in addition to questions from the passage that was
not preceded by questions (baseline control).
Materials and procedure The materials and procedure
were the same as those used in Experiment 1, with one
exception: the addition of a between-subjects variable (type
of instructions). Instead of answering the questions, half of
the participants were told to memorize them. Specifically,
they were told that they would see questions that could be
asked about the to-be-read passage, and that they should
memorize the questions and the answer choices. They were
given this instruction to ensure that they
processed the questions. The pretesting group was not told
anything about a later test, just that the experimenters
wanted to assess their knowledge of the to-be-learned topic.
Results and Discussion
Pretest Performance Performance on the MC test (M =
27%, SD = 18%) was not significantly different from chance
performance (25%), t(31) = 0.68, p > .05.
Final-test Performance Correct recall performance on the
final test is presented in Figure 1, and, as indicated there,
performance on previously exposed questions (in both the
pretest and memorize conditions) was improved, as
compared to spending the full time studying (i.e., control
condition). The recall of related information, however,
appears only to have been improved when participants
answered questions on an initial MC pretest, not when they
simply memorized those questions.
Figure 1: Correct recall performance percentages as a
function of instruction type and item type in Experiment 2.
The white bars show the average performance for extended
study control questions tested in the first half and second
half of the test for each condition. Error bars represent +/-1
SE.
Correct performance for the pretested and memorized
items was compared to that for the corresponding items in
the extended study control condition via planned paired-
samples t tests. Benefits were found for both pretested
items (M = 54%, SE = 5%) as compared to control (M =
37%, SE = 4%), as well as for memorized items (M = 46%,
SE = 4%) as compared to control (M = 36%, SE = 5%),
t(31) = 3.64, p < .01 and t(31) = 2.54, p < .05, respectively.
A 2 × 2 repeated-measures ANOVA did not reveal a
significant interaction between type of instructions
(pretested vs. memorized) and condition of study (question-
study vs. extended study control), F(1,62) = 1.55, p > .05.
Additionally, a 2 × 2 repeated-measures ANOVA also did
not reveal a significant interaction between type of
instructions (pretested vs. memorized) and condition of
study (question-study vs. extended study control) for related
information, F(1,62) = 2.32, p > .05. In both cases,
however, the interaction trended towards testing being better
than memorizing.
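Because instruction type was between-subjects and study condition within-subjects, each 2 x 2 mixed-design interaction above is equivalent to an independent-samples t test on per-subject benefit scores (question-study minus control), with F equal to t squared. A sketch under that assumption, with invented benefit scores rather than the actual data:

```python
import math
from statistics import mean, stdev

def independent_t(x, y):
    """Pooled-variance independent-samples t statistic."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2) / (nx + ny - 2)
    return (mean(x) - mean(y)) / math.sqrt(sp2 * (1 / nx + 1 / ny))

# Hypothetical per-subject benefit scores (question-study minus control),
# invented for this sketch; one list per instruction group.
pretest_benefit  = [0.20, 0.10, 0.15, 0.25]
memorize_benefit = [0.10, 0.05, 0.10, 0.15]
t = independent_t(pretest_benefit, memorize_benefit)
F = t ** 2  # the mixed-design interaction F(1, nx + ny - 2) equals t squared
```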
In Experiment 2, we found that although taking a pretest
and memorizing information both led to improved recall of
correct answers to those pre-exposed questions, the benefit
afforded by actually trying to answer questions was
numerically greater than that afforded by a comparable pre-
exposure activity (i.e., memorizing questions). We found
that this trend occurred for related information as well. In
neither case, however, was the benefit of taking a test
significantly better than the benefit afforded by memorizing
the questions, a result that we believe may have occurred as
a consequence of participants in the memorize condition
spontaneously trying to answer the questions. In
Experiment 3, we attempted to control for this possible
problem with a modified method that would remove the
propensity to endorse an answer, with the expectation that
testing would then reveal a greater overall benefit.
Experiment 3
In Experiment 3, our aim was similar to that in Experiment
2: to compare the effects of pretesting with another task that
also exposes participants to to-be-tested information before
reading. In this case, however, we wanted to ensure that the
non-test activity did not encourage the type of processing
occurring when answering questions, as might have been
the case in Experiment 2. Thus, in Experiment 3, we
compared taking a pretest to being exposed to comparable
facts that contained the correct answer as well as
competitors (e.g., The oldest geyser in Yellowstone National
Park is Castle Geyser, not Old Faithful, Steamboat Geyser,
or Daisy Geyser). Because the facts contained the correct
answer to the matched question, we also gave feedback in
the pretest condition. In addition, we manipulated all
variables within subjects. To do so, we developed a third
passage so that all participants would take a pretest before
one passage, study facts before another passage, and receive
extra time to study a third passage.
Participants and design Seventy-two students at the
University of California, Los Angeles, participated for
partial course credit. Activity type (pretest vs. facts vs.
extended-study) was manipulated within subjects. On the
final cued-recall test, all participants answered questions
pertaining to the pretest or studied facts and questions
related to the pretest or studied facts, in addition to
questions from the passage that received extended study
time (baseline control).
Materials The materials were variations of those used in
Experiments 1 and 2. We shortened the passages about
Saturn and Yellowstone (~600 words), without removing
any information contained in the questions. In addition, we
added a passage about stimulant drugs. For each passage,
we used ten pairs of MC questions. Questions for Saturn
and Yellowstone were the same as those used in Experiment
1 and 2. Questions for the passage about stimulants were
constructed in the same manner.
Finally, Experiment 3 differed from Experiment 2 in that
the question-memorization condition from Experiment 2
was replaced with a fact-study condition. For example, in
the fact-study condition, participants would see: The oldest
geyser in Yellowstone National Park is Castle Geyser, not
Steamboat Geyser, Daisy Geyser, or Old Faithful. The facts
contained all of the information contained in the matched
question, including the competitors.
Procedure The procedure for Experiment 3 was the same
as that of Experiment 1, with the following exceptions. All
participants learned about three topics. For one topic, the
reading of the passage (6 min) was preceded by a 10-item
MC test (4 min); for a second topic, the reading of the
passage (6 min) was preceded by a 10-fact study session (4
min); for a third topic, participants were given the full 10
min to read the passage.
On the pretest, participants were given 20 s to answer
each question. After typing in their response, they
continued to view the question until the 20 s elapsed. After
each question, participants received feedback (i.e., the
correct answer presented below the question) for 4 s. For
the fact-study condition, participants were presented with 10
facts for 24 s each and were told to think about the fact for
the full time that it was presented.
After a 5-min non-verbal distractor task (i.e., playing
Tetris), participants received a final cued-recall test with
sixty questions, ten questions for each of the following
(previously tested, related to tested, previously studied as
facts, related to studied facts) and twenty questions from the
extended study control condition.
Results and Discussion
Pretest Performance Performance on the MC test (M =
30%, SD = 15%) was higher than chance performance, t(71)
= 2.69, p < .01, a result driven by high performance on the
pretest for the stimulants topic (M = 38%). Performance for
Saturn (M = 25%) and Yellowstone (M = 26%) was
comparable to that found in Experiments 1
and 2.
Final-test Performance Correct recall performance on the
final test is presented in Figure 2, and, as indicated there,
participants’ ability to answer questions for information that
was exposed during the pretest and fact-study conditions
was improved, as compared to spending the full time
studying (i.e., control condition). The recall of related
information appeared not to be hurt as a consequence of
taking an initial pretest, but appeared to be hurt as a
consequence of studying facts before reading.
Correct performance for the information pretested or
studied as facts was compared to that for the corresponding
items in the extended study control condition via planned
paired-samples t tests. Benefits were found for both
pretested items (M = 69%, SE = 3%) and items studied as
facts (M = 66%, SE = 2%) as compared to control (M =
39%, SE = 3%), t(71) = 10.67, p < .01 and t(71) = 9.21, p <
.01, respectively. Pretested information was not recalled
correctly more often than was information studied as facts.
Figure 2: Correct recall performance percentages as a
function of activity type and item type in Experiment 3.
The white bar shows the average performance for extended
study control questions tested in the first half and second
half of the test. Error bars represent +/-1 SE.
Correct performance on questions pertaining to
information that was related to the pre-exposed information
was not better than that for the corresponding information in
the extended-study control condition. Interestingly,
however, although correct performance on items related to
pretested information (M = 40%, SE = 2%) was not
significantly different than that for control items (M = 41%,
SE = 3%), correct performance on items related to studied
facts (M = 34%, SE = 2%) was significantly worse than that
for control items, t(71) = 2.57, p < .05, and, in fact, was
significantly worse than that for questions related to
pretested information, t(71) = 2.81, p < .01. Thus, although
answering questions before study did not impair one’s
ability to learn related information, studying comparable
facts before study dideven when the facts also provided
participants with the answers to the related questions.
When assessing the total effect of taking a pretest (as
compared to studying facts beforehand or spending the full
time studying) across both tested and related information (M
= 56%, SE = 2%), there was a clear benefit for pretesting
over studying facts (M = 50%, SE = 2%), t(71) = 2.67, p <
.01 and for pretesting compared to extended study (M =
40%, SE = 2%), t(71) = 6.91, p < .05.
In Experiment 3, we found that taking a test with
feedback prior to the reading of a passage provided a
learning benefit that outweighed that of studying facts
beforehand or of spending additional time studying.
General Discussion
Although incorrect answers were endorsed for a majority of
the questions on the MC pretests, we found that taking a
pretest made subsequent study more effective, as
demonstrated by improved recall of that pretested
information as compared to spending the full time studying;
additionally, the pretest appeared not to lead to increased
intrusion of incorrect information.
This result cannot be the sole consequence of pre-
exposure to to-be-tested information. The trend towards a
benefit for pretesting over studying facts in Experiment 3 is
intriguing because the fact-study condition provided
participants with access to the correct answer for each
question, in the form of a fact, for 24 s (as opposed to
receiving the correct answer for only 4 s after spending 20 s
choosing an incorrect answer for the majority of questions).
This pattern suggests that testing can serve as an effective
learning event before study, even when retrieval fails.
Additionally, although we did not find that related
information was learned better in the pretest condition than
in the extended-study condition, we never found it to be
impaired, even though the extended-study condition actually
provides a very conservative control for total time on task.
In the pretesting condition, participants received
substantially less time to read the passage than they did for
the extended-study condition. We would thus contend that
comparable recall across the two conditions is likely to be
under-representative of relative learning (i.e., correct recall
per minute of reading time). Evidence from Experiment 3
would further support this contention because reduced time
to read the passage in the fact-study condition led to
impaired recall of related information. Finally, compared to
this fact-study condition (which also offers a valid control
for time-on-task), testing did improve recall of related
information.
The present pattern of results speaks to the benefit of
pretests as learning events. The present experiments,
however, do not reveal what specific mechanisms might
lead to testing being more beneficial than studying facts
(Experiment 3). The results are consistent with the idea that
an MC pretest directs attention broadly, such that students
search not only for the correct answer to the pretested
question, but also for information pertaining to the other
choices. Additionally, it is the act of trying to answer a
question that leads to enhanced recall of related
information, not simply pre-exposure, as facts containing
the competitors do not seem to direct attention so broadly.
That is, testing might lead to a deeper level of processing
than trying to memorize questions (or studying facts), thus
making it more likely that students will be reminded of
those questions and alternatives when reading the passage
during subsequent study. Although the present research
provides evidence that MC pretests can serve as effective
study events, future research should further investigate the
specific underlying processes that lead to such benefits of
pretesting.

Acknowledgments

We thank Robert Bjork, Nate Kornell, and Matt Hays for
inspiring this work. Grant 29192G from the McDonnell
Foundation supported this research.
References

Anderson, R. C., & Biddle, W. B. (1975). On asking people
questions about what they are reading. In G. H. Bower
(Ed.), The Psychology of Learning and Motivation, Vol. 9
(pp. 89-132). New York, NY: Academic Press.
Bjork, R. A. (1975). Retrieval as a memory modifier. In R.
Solso (Ed.), Information processing and cognition: The
Loyola Symposium (pp. 123-144). Hillsdale, NJ:
Lawrence Erlbaum Associates.
Bjork, R. A., & Bjork, E. L. (1992). A new theory of disuse
and an old theory of stimulus fluctuation. In A. F. Healy,
S. M. Kosslyn, & R. M. Shiffrin (Eds.), From Learning
Processes to Cognitive Processes (pp. 35-67). Hillsdale, NJ:
Lawrence Erlbaum Associates.
Butler, A. C., & Roediger, H. L. (2008). Feedback enhances
the positive effects and reduces the negative effects of
multiple-choice testing. Memory & Cognition, 36, 604-616.
Fritz, C. O., Morris, P. E., Bjork, R. A., Gelman, R., &
Wickens, T. D. (2000). When further learning fails:
Stability and change following repeated presentation of
text. British Journal of Psychology, 91, 493-511.
Hamaker, C. (1986). The effects of adjunct questions on
prose learning. Review of Educational Research, 56, 212-242.
Kornell, N., Hays, M. J., & Bjork, R. A. (2009).
Unsuccessful retrieval attempts enhance subsequent
learning. Journal of Experimental Psychology: Learning,
Memory, and Cognition, 35, 989-998.
Little, J. L., & Bjork, E. L. (2010). Multiple-choice testing
can improve the retention of non-tested related
information. In S. Ohlsson & R. Catrambone (Eds.),
Proceedings of the 32nd Annual Conference of the
Cognitive Science Society (pp. 1535-1540). Austin, TX:
Cognitive Science Society.
Richland, L. E., Kornell, N., & Kao, L. S. (2009). The
pretesting effect: Do unsuccessful retrieval attempts
enhance learning? Journal of Experimental Psychology:
Applied, 15, 243-257.
Roediger, H. L., & Karpicke, J. D. (2006). The power of
testing memory: Basic research and implications for
educational practice. Perspectives on Psychological
Science, 1, 181-210.
Roediger, H. L., & Marsh, E. J. (2005). The positive and
negative consequences of multiple-choice testing. Journal
of Experimental Psychology: Learning, Memory, and
Cognition, 31, 1155-1159.
Rothkopf, E. Z. (1966). Learning from written instructive
materials: An exploration of the control of inspection
behavior by test-like events. American Educational
Research Journal, 3, 241-249.
Kay (1955) presented a text passage to participants on a weekly basis and found that most errors and omissions in recall persisted despite repeated re-presentation of the text. Experiment 1 replicated and extended Kay s original research, demonstrating that after a first recall attempt there was very little evidence of further learning, whether measured in terms of further acquisition or error correction, over three more presentations of the text passages. Varying the schedule of presentations and tests had little effect, although performance was better when intermediate trials included both presentation and test than when only presentations or tests occurred. Experiment 2 explored whether this ‘failure of further learning’ effect could be overcome by (a) warning participants against basing their recall on their previous recall efforts and specifically directing them to base their recall upon the passages, (b) making each presentation more distinctive, or (c) drawing participants’ attention to areas that would benefit from further learning by requiring them to tally their omissions and errors. The effect persisted in all cases. The findings have serious implications for the learning of text material.