Faculty Forum
A Self-Correcting Approach to
Multiple-Choice Exams Improves
Students’ Learning
Daniel Grühn¹ and Yanhua Cheng¹
Abstract
Montepare suggested the use of a self-correcting approach to multiple-choice tests: Students first take the exam as usual, but are
allowed to hand in a self-corrected version afterwards. The idea of this approach is that the additional interaction with the material
may foster further learning. To examine whether such an approach actually improves learning, we compared two large sections in
psychology: one section used traditional exams and the other used self-correcting midterm exams. Indeed, compared to the
traditional approach, students using the self-correcting approach performed better on the final exam. Moreover, students who
self-corrected more items performed better on the final exam, above and beyond their original performance. As a tool to foster
students’ engagement and learning, the self-correcting approach might be especially useful in large classroom settings.
Keywords
multiple-choice exams, self-correcting, active learning, large classrooms
Multiple-choice exams have a long history in teaching and are
an integral part of college courses. Researchers have criticized
traditional multiple-choice exams as insensitive to complex
knowledge (Frederiksen, 1984) and as ill-suited to facilitating
learning processes (Epstein et al., 2002).
However, multiple-choice exams may be particularly cost-efficient
in large classrooms. To improve their usefulness as a
learning tool, Montepare (2005, 2007) suggested a self-
correcting approach to multiple-choice tests consisting of two
steps: First, students complete the test during a course session
and hand in their answer sheets (original version). Then, stu-
dents take the questions home and hand in a self-revised ver-
sion of their responses during the next class period. Students
receive full credit for questions answered correctly in both the
original and the self-corrected version. To motivate students’
completion of the self-corrected version, students receive par-
tial credit for every item changed from wrong in the original to
correct in the self-corrected version. Note that students do not
receive feedback on their original responses before handing in
the self-revised version. Students have to figure out on their
own whether they answered a question correctly or not for the
self-revised version.
The idea behind self-correcting exams is that the additional
interaction with the material fosters deeper learning. Students are
challenged to discover the correct answer, to study the material
in their way, and to experience some degree of mastery. This
idea is consistent with the proposition of active learning strate-
gies, which encourage students’ engagement in the learning pro-
cess (Bonwell & Sutherland, 1996). Active learning has received
some empirical support in improving retention of information
(e.g., Ciarocco, Lewandowski, & Van Volkom, 2013; Prince,
2004). However, it may be difficult to implement active learning
strategies within a large classroom setting. Self-correcting
multiple-choice exams may be a way to incorporate some degree
of active engagement in large classrooms.
Francis and Barnett (2012) used self-correcting quizzes and
a self-correcting midterm exam in a section of general psychol-
ogy with 46 students. Compared to a control section without a
self-correcting option (n = 52), students with the self-correcting
approach performed better on the final exam after
controlling for the average initial quiz score. Although this
study demonstrated some limited effects of the self-
correcting approach, it did not address two central ques-
tions—(a) does the self-correcting approach predict subsequent
learning over and above students’ original performance on the
exams? and (b) can the self-correcting approach be generalized
to a large classroom setting (e.g., over 150 students)? Given the
inherent difficulties of engaging students in large classrooms,
the self-correcting approach may be a particularly useful and
cost-efficient tool for instructors of large-size classes. To
address these two research questions, we examined exam
¹ North Carolina State University, Raleigh, NC, USA
Corresponding Author:
Daniel Grühn, North Carolina State University, Department of Psychology,
Campus Box 7650, Raleigh, NC 27695, USA.
Email: daniel_gruehn@ncsu.edu
Teaching of Psychology
2014, Vol. 41(4) 335-339
© The Author(s) 2014
Reprints and permission:
sagepub.com/journalsPermissions.nav
DOI: 10.1177/0098628314549706
top.sagepub.com
grades in two large sections of a developmental psychology
course (a self-correcting sample and a control sample). We
expected that students with the self-correcting approach would
perform better on a cumulative final exam than students in the
control sample. Moreover, we expected that students would ben-
efit from the self-correcting approach on a cumulative final over
and above their original performance; specifically, we expected
that students would show benefits only for material that was used
under the self-correcting approach but not for new material, as
students did not engage in additional interaction with the new
material. We also expected that lower performing students
would benefit more from the self-correcting approach than
higher performing students; that is, the self-correcting approach
should effectively shrink the grade dispersion.
Method
Participants
Students were enrolled in two large sections of Developmental
Psychology (300-level) at a public 4-year university in the
southeastern United States. To investigate the effects of the
self-correcting approach on exam scores, we assessed a control
sample of students, who were enrolled in Developmental
Psychology in Spring 2010, and a self-correcting sample of
students, who were given the option of self-correcting exams
in Developmental Psychology during Fall
2010. The control sample comprised 173 students (73.2%
females) ranging from 18 to 50 years (M = 21.2, SD = 3.1).
Students’ academic level included 25 first-year students,
64 sophomores, 40 juniors, and 44 seniors. The self-correcting
sample comprised 175 students (78.4% females) ranging from
18 to 37 years (M = 20.3, SD = 2.2). The self-correcting sample
included 16 first-year students, 64 sophomores, 61 juniors, and
34 seniors. Both sections met for two 75-min classes per week
in the early afternoon. Both sections were taught by the same
instructor and received identical exams.
Procedure
For both sections, the semester was divided into three parts
with three exams in total. Each of the first two parts ended with
a 50-item multiple-choice exam on the material of the corre-
sponding part. For the self-correcting sample, these first two
exams had the option of self-revising. For both samples, the
third exam was a traditional (nonself-correcting) final exam
with 100 multiple-choice questions. The final exam was cumu-
lative with 25 questions for Part 1, 25 questions for Part 2, and
50 questions for Part 3. The items on the final exam for Part 1
and Part 2 of the semester were different from the items used in
the first two exams. All multiple-choice items had four options
with one correct answer. Items were generally written to assess
higher-level thinking, such as conceptual thinking and knowl-
edge application, rather than simple factual knowledge.
We followed Montepare’s (2005, 2007) recommendations
for the self-correcting exams: First, students completed the
exam as usual in one class period and handed in their answers,
which is considered the ‘‘original’’ exam. Second, students
were given the option of taking the questions home and handing
in a self-corrected answer sheet during the next class period
(2 days later), which is the ‘‘self-corrected’’ exam. Students
recorded their answers for both versions on optical answer sheets
(bubble sheets). Self-correcting the exam was optional; how-
ever, all students actually completed a revised version.
Answers that were correct in both versions received 2 points,
and answers that were wrong in both versions received 0 points.
Answers changed from wrong to correct, as well as answers
falsely changed from correct to wrong, received 1 point. We
created a simple spreadsheet program to automate the scoring
process.
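This scoring rule can be sketched compactly; the following is an illustrative sketch only (the authors used a spreadsheet, and the function names here are our own):

```python
def score_item(original_correct: bool, revised_correct: bool) -> int:
    """Score one multiple-choice item: 2 points if correct in both
    versions, 1 point if the answer changed in either direction
    (wrong -> correct or correct -> wrong), 0 points if wrong in both."""
    if original_correct and revised_correct:
        return 2
    if original_correct != revised_correct:
        return 1
    return 0


def score_exam(key, original, revised):
    """Total score for one student: compare each original and revised
    answer against the answer key and sum the per-item scores."""
    return sum(score_item(o == k, r == k)
               for k, o, r in zip(key, original, revised))
```

For example, with key "ABCD", original answers "ABDD", and revised answers "ABCA", a student earns 2 + 2 + 1 + 1 = 6 of 8 possible points: two items correct in both versions, one corrected from wrong to correct, and one falsely changed from correct to wrong.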
Results
To investigate the effects of the self-revising approach on
learning, we investigated four questions: First, did the perfor-
mance differ between the control sample and the self-
correcting sample on the original exams? Second, how large
was a potential gain in the exam scores through the self-
correcting approach? Third, did the gain differ by students’
original performance? Finally, did self-correcting on the first
and second exams improve performance on the final exam?
First, we compared the performance of the control group
with the performance on the original exams of the self-
correcting sample. We ran a 2 × 3 (sample × exam) repeated-measures
analysis of variance (ANOVA) with section (control sample vs.
self-correcting sample) as a between-subjects factor and exam
(Exam 1 vs. Exam 2 vs. Exam 3) as a within-subject factor.
The analysis revealed a significant main effect of exam,
F(2, 355) = 173.48, p < .01, η² = .52, indicating that students
in both sections performed significantly worse on the first exam
than on the second and third. The main effect of section was not
significant, F(1, 355) = 3.11, p = .08, η² < .01. However, there
was a significant interaction, F(2, 355) = 10.32, p < .01,
η² = .06. Compared to the control group, students in the self-correcting
sample performed significantly worse on Exam 1,
t(346) = 2.15, p = .03, similarly on Exam 2, t(346) = 1.42,
p = .15, and significantly better on Exam 3, t(346) = 2.70,
p < .01. Thus, students’ performance under the self-
Table 1. Exam Scores in Percentage in the Control Sample (Spring
2010, N = 173) and the Self-Correcting Sample (Fall 2010, N = 175).

                                Self-Correcting Sample
          Control Sample   Original       Self-Corrected   Received
          M      SD        M      SD      M      SD        M      SD
Exam 1    63.4a  11.5      60.6b  12.7    82.7c  10.2      71.6d   9.8
Exam 2    71.4a  10.5      73.2a  12.9    91.5b   9.1      82.4c   9.1
Exam 3    71.2a   8.8      73.9b   9.8

Note. The scores under the self-correcting sample represent the performance
on the original exam, the self-corrected exam (if counted in full), and the
actually received score (with partial credit for changed items). Scores with
different subscripts within one row differed significantly, p < .05.
correcting approach increased significantly more over the
course of the semester than students’ performance under the
traditional approach. Table 1 displays the exam scores for the
control and self-correcting sample in percentages.
Second, to investigate the actual improvement from revising
the exams, we ran a 2 × 2 (exam × version) repeated-measures ANOVA
with exam (Exam 1 vs. Exam 2) and version (original exam vs.
self-corrected exam) as within-subject factors for the self-correcting
sample. Similar to the findings mentioned earlier,
the analysis revealed a significant main effect of exam,
F(1, 172) = 208.57, p < .01, η² = .55, documenting that students
did better on the second than on the first exam. As expected,
there was a significant main effect of version,
F(1, 172) = 828.56, p < .01, η² = .83, indicating that students
did much better on the self-corrected version than on the original.
There was also a significant interaction between exam and
version, F(1, 172) = 13.00, p < .01, η² = .07. Students gained
more points on the first exam than on the second exam.
Third, we were interested to see whether the gain differed
significantly by students’ performance on the original exams.
To simplify the presentation, we used the letter grade received
on the original version as between-subject factor (A to F) for
investigating the received gain (the partial credit) through the
self-correcting approach. The analyses revealed significant
main effects for Exam 1, F(4, 173) = 38.33, p < .01, η² = .46,
and Exam 2, F(4, 173) = 98.02, p < .01, η² = .70.
As expected, students with weak performance on the original
exam gained most from the self-correcting approach. Figure
1 displays the gains by initial performance for both exams.
Finally, to investigate whether self-correcting actually
improved learning for the material, we examined whether
the number of items that changed correctly from the original
to the self-corrected version had an impact on the performance
on the final exam for the corresponding parts. We ran two
regressions via Mplus (Muthén & Muthén, 2007), one for
Exam 1 scores predicting the performance on Part 1 of the final
exam and one for Exam 2 scores predicting the performance on
Part 2 of the final. In both analyses, we included the number of
correct items on the original exam as well as the number of
items correctly changed from wrong to correct. Figure 2
displays the path diagram with unstandardized regression
weights. In both analyses, the number of self-corrected items was indeed
positively related to the performance on the corresponding part
of the final. Thus, over and above the performance on the orig-
inal exam, the more students correctly changed their answers,
the better was their performance on the final. We also exam-
ined whether self-correcting had an impact on the performance
for the new material in Part 3 of the final that did not undergo a
self-correcting procedure. We ran a regression including num-
ber of correct items on the original exams as well as the number of
corrected items from the first two exams as predictors and the
number of correct items in the third part of the final as the out-
come. The results revealed that only the number of correct
items in the second original exam was a significant predictor
(B = .37, p < .01). No other effect reached significance (all ps
> .10). Thus, the self-correcting procedure had no impact on
new material that did not undergo the self-correcting approach.
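The two-predictor regression described above can be sketched as follows. This is an illustrative reconstruction with randomly generated placeholder data, not the authors' analysis (they used Mplus); the variable names and value ranges are our own assumptions:

```python
import numpy as np

# Placeholder per-student counts (hypothetical, for illustration only)
rng = np.random.default_rng(0)
n_students = 175
original_correct = rng.integers(20, 51, n_students)  # correct items on the original Exam 1 (of 50)
self_corrected = rng.integers(0, 21, n_students)     # items changed from wrong to correct
final_part1 = rng.integers(10, 26, n_students)       # correct items on Part 1 of the final (of 25)

# Design matrix: intercept plus both predictors, mirroring the model
# "original performance + number of self-corrected items -> final score"
X = np.column_stack([np.ones(n_students), original_correct, self_corrected])
coef, *_ = np.linalg.lstsq(X, final_part1, rcond=None)
intercept, b_original, b_corrected = coef
# b_corrected is the unstandardized weight for self-corrected items,
# i.e., their association with the final over and above original performance
```

With the real data, a positive b_corrected is what indicates that self-correcting predicted final-exam performance beyond the original score.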
Discussion
The purpose of the study was to investigate the utility of self-
correcting exams for fostering learning in college students in
large classrooms. The study revealed three major findings:
First, students’ grades obviously improved through the self-
correcting procedure, especially for the low performing stu-
dents. Second, and more importantly, students’ knowledge
about the subject area tested under the self-correcting approach
improved. Third, the self-correcting approach was useful for
engaging students’ learning in large classroom settings.
Consistent with Montepare’s (2005, 2007) suggestions, stu-
dents’ knowledge seemed to improve by using self-correcting
exams. First, compared to the control sample, students who got
the self-correcting option improved significantly more over the
course of the semester on the three original exams. More specif-
ically, the self-correcting sample performed better on the final
exam than the control sample. Second, providing more compelling
evidence for the learning benefits of the self-correcting approach,
self-correcting improved retention for the self-corrected material.
Over and above the performance on the original exam, the
more items were corrected, the better was the performance on
the corresponding part of the final exam. This finding documents
learning benefits through the self-correcting procedure.
Figure 1. Actual received gains by students’ original exam scores in
letter grades.

Figure 2. Number of correct items on the original exam and number of
self-corrected items on Exam 1 (top) and Exam 2 (bottom) predicted
performance on the corresponding part of the final exam. Values are
unstandardized regression weights.
However, as we expected, the effects were limited to the subject
area tested under the self-correcting approach, that is, there were
no apparent transfer benefits of the self-correcting approach to
new and non-self-revised material. This finding is not surprising,
given that the hypothesized mechanism behind self-correcting
exams is students’ additional engagement with the material. Thus,
the self-correcting approach might work best if used frequently
over the semester to foster continuous engagement with the
material. Note that both samples of students were given the cor-
rect answers for Exam 1 and Exam 2 upon completion of final
scoring. Thus, the self-correcting approach showed knowledge
gains in the final exam over and above simply knowing the cor-
rect answers for the first two exams.
In terms of interindividual differences, the self-correcting
approach seems to be specifically beneficial for lower perform-
ing students. Although the higher performing students also
gained points, students who performed poorly on the original
exam were able to improve the most. This is probably because
lower performing students had more room for improvement
than higher performing students. The self-correcting approach practically reduces
grade point differences between students. From an instructor’s
perspective, this might be desirable, as lower performing students
are at greater risk of dropping out of a course or of disengaging
from it mentally. The self-correcting procedure provides
a second chance for low performing students.
One potential criticism might be that the self-correcting
approach fosters cheating. If cheating involves studying the
textbook or discussing answers actively with peers, these
forms of ‘‘cheating’’ are actually desirable under the self-
correcting approach. Students should engage actively with the
material. If cheating means just copying answers from peers,
that would be passive and undesirable. We believe that our
exams were difficult enough to create doubts that peers actu-
ally had the correct answers. Moreover, the beneficial effects
of the self-correcting approach on the final exam cannot be
explained by simple cheating. Another criticism is that the
self-correcting approach leads to grade inflation—lower per-
forming students could gain a substantial number of points
through the self-correcting approach that they would not get
otherwise. We believe that this is again an issue of adequate
test difficulty. Exams for the self-correcting approach
should be rather difficult. On one hand, exams should be chal-
lenging for students to provide opportunities to explore and to
master the material during the revision process. Students
should actively think about the material rather than find the
answers in the textbook. On the other hand, difficult exams
will reduce the grade inflation after the revision process. In
the current study, the average letter grade for Exam 1 went
from D to C after the revision process, which seems to be a
reasonable course average. Thus, grade inflation under the
self-correcting approach is not an issue when tests have an
appropriate difficulty level.
The self-correcting approach may be particularly useful for
large classrooms that limit instructors’ options to foster active
learning. The self-correcting approach provides a relatively
cost-efficient and simple way of implementing an active learning
component. Given the growing availability of course manage-
ment systems, such as Blackboard, Sakai, and Moodle, the self-
correcting approach could be easily implemented online with
very little additional effort for the instructor. For instructors, even a
slight improvement in students’ learning might be a worthwhile
endeavor. Moreover, in personal communications, students
expressed their positive attitudes toward the self-correcting
approach. The positive attitude may motivate students to engage
in class, enhance class climate, and even reduce test anxiety.
Future research might benefit from two avenues: First,
future research may try to examine the learning processes
and strategies triggered by a self-correcting approach in
more detail. The potential mechanisms of the learning ben-
efits through the self-correcting approach may include
(a) the additional time spent on mastering the material,
(b) enhanced student motivation to be actively involved in
the exam process, or (c) identifying incorrect alternatives
and the reason why they are incorrect (Little, Bjork, Bjork,
& Angello, 2012). Future research may examine day-to-day
study behavior to provide a detailed account of potential
mechanisms behind the learning benefits in self-correcting
exams. Second, future research may benefit from investigat-
ing interindividual differences in the learning gains of the
self-correcting approach more systematically. The self-
correcting approach may benefit certain types of students
more so than others. Obviously, the self-correcting approach
benefits primarily low performing students in improving
their grades. However, high performing students may also
obtain substantial learning gains that might not be evident
when only looking at exam scores. Identifying interindividual
differences in the learning gains of the self-correcting approach
may help instructors determine the goodness of fit
between this method and their students.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to
the research, authorship, and/or publication of this article.
Funding
The authors received no financial support for the research, authorship,
and/or publication of this article.
References

Bonwell, C. C., & Sutherland, T. E. (1996). The active learning continuum: Choosing activities to engage students in the classroom. New Directions for Teaching and Learning, 1996, 3–16.

Ciarocco, N. J., Lewandowski, G. W., Jr., & Van Volkom, M. (2013). The impact of a multifaceted approach to teaching research methods on students’ attitudes. Teaching of Psychology, 40, 20–25. doi:10.1177/0098628312465859

Epstein, M. L., Lazarus, A. D., Calvano, T. B., Matthews, K. A., Hendel, R. A., Epstein, B. B., & Brosvic, G. M. (2002). Immediate feedback assessment technique promotes learning and corrects inaccurate first responses. The Psychological Record, 52, 187–201.

Francis, A. L., & Barnett, J. (2012). The effect and implications of a “self-correcting” assessment procedure. Teaching of Psychology, 39, 38–41. doi:10.1177/0098628311430171

Frederiksen, N. (1984). The real test bias: Influences of testing on teaching and learning. American Psychologist, 39, 193–202. doi:10.1037/0003-066x.39.3.193

Little, J. L., Bjork, E. L., Bjork, R. A., & Angello, G. (2012). Multiple-choice tests exonerated, at least of some charges: Fostering test-induced learning and avoiding test-induced forgetting. Psychological Science, 23, 1337–1344. doi:10.1177/0956797612443370

Montepare, J. M. (2005, October). A self-correcting approach to multiple choice tests. APS Observer, 18, 35–36.

Montepare, J. M. (2007). A self-correcting approach to multiple-choice tests. In B. Perlman, L. I. McCann, & S. H. McFadden (Eds.), Lessons learned (Vol. 3, pp. 143–154). Washington, DC: Association for Psychological Science.

Muthén, L. K., & Muthén, B. O. (2007). Mplus user’s guide (5th ed.). Los Angeles, CA: Muthén & Muthén.

Prince, M. (2004). Does active learning work? A review of the research. Journal of Engineering Education, 93, 223–231.