Faculty Forum
A Self-Correcting Approach to Multiple-Choice Exams Improves Students' Learning

Daniel Grühn and Yanhua Cheng
Abstract

Montepare suggested the use of a self-correcting approach to multiple-choice tests: Students first take the exam as usual but are allowed to hand in a self-corrected version afterwards. The idea of this approach is that the additional interaction with the material may foster further learning. To examine whether such an approach actually improves learning, we compared two large sections in psychology: One section used traditional exams and the other section used self-correcting midterm exams. Indeed, compared to the traditional approach, students using the self-correcting approach performed better on the final exam. Moreover, students who self-corrected more items performed better on the final exam above and beyond students' original performance. As a tool to foster students' engagement and learning, the self-correcting approach might be especially useful in large classroom settings.
Keywords
multiple-choice exams, self-correcting, active learning, large classrooms
Multiple-choice exams have a long history in teaching and are an integral part of college courses. Researchers have criticized traditional multiple-choice exams as insensitive for assessing complex knowledge (Frederiksen, 1984) and as doing little to facilitate learning processes (Epstein et al., 2002). However, multiple-choice exams may be particularly cost-efficient in large classrooms. To improve their usefulness as a learning tool, Montepare (2005, 2007) suggested a self-correcting approach to multiple-choice tests consisting of two steps: First, students complete the test during a course session and hand in their answer sheets (original version). Then, students take the questions home and hand in a self-revised version of their responses during the next class period. Students receive full credit for questions answered correctly in both the original and the self-corrected version. To motivate students' completion of the self-corrected version, students receive partial credit for every item changed from wrong in the original to correct in the self-corrected version. Note that students do not receive feedback on their original responses before handing in the self-revised version; students have to figure out on their own whether they answered a question correctly.
The idea behind self-correcting exams is that the additional interaction with the material fosters deeper learning. Students are challenged to discover the correct answer, to study the material in their own way, and to experience some degree of mastery. This idea is consistent with active learning strategies, which encourage students' engagement in the learning process (Bonwell & Sutherland, 1996). Active learning has received some empirical support for improving retention of information (e.g., Ciarocco, Lewandowski, & Van Volkom, 2013; Prince, 2004). However, it may be difficult to implement active learning strategies within a large classroom setting. Self-correcting multiple-choice exams may be a way to incorporate some degree of active engagement in large classrooms.
Francis and Barnett (2012) used self-correcting quizzes and a self-correcting midterm exam in a section of general psychology with 46 students. Compared to a control section without a self-correcting option (n = 52), students with the self-correcting approach performed better on the final exam after controlling for the average initial quiz score. Although this study demonstrated some limited effects of the self-correcting approach, it did not address two central questions: (a) Does the self-correcting approach predict subsequent learning over and above students' original performance on the exams? and (b) Can the self-correcting approach be generalized to a large classroom setting (e.g., over 150 students)? Given the inherent difficulties of engaging students in large classrooms, the self-correcting approach may be a particularly useful and cost-efficient tool for instructors of large classes. To address these two research questions, we examined exam
North Carolina State University, Raleigh, NC, USA

Corresponding Author:
Daniel Grühn, North Carolina State University, Department of Psychology, Campus Box 7650, Raleigh, NC 27695, USA.
Email: daniel_gruehn@ncsu.edu
Teaching of Psychology, 2014, Vol. 41(4), 335–339
© The Author(s) 2014
DOI: 10.1177/0098628314549706
grades in two large sections of a developmental psychology course (a self-correcting sample and a control sample). We expected that students with the self-correcting approach would perform better on a cumulative final exam than students in the control sample. Moreover, we expected that students would benefit from the self-correcting approach on the cumulative final over and above their original performance; specifically, we expected that students would show benefits only for material that was tested under the self-correcting approach but not for new material, as students did not engage in additional interaction with the new material. We also expected that lower performing students would benefit more from the self-correcting approach than higher performing students, that is, that the self-correcting approach would effectively shrink the grade dispersion.
Method
Participants
Students were enrolled in two large sections of Developmental Psychology (300-level) at a public 4-year university in the southeastern United States. To investigate the effects of the self-correcting approach on exam scores, we assessed a control sample of students, who were enrolled in Developmental Psychology in Spring 2010, and a self-correcting sample of students, who were given the option of self-correcting exams in Developmental Psychology in Fall 2010. The control sample comprised 173 students (73.2% female) ranging from 18 to 50 years (M = 21.2, SD = 3.1); it included 25 first-year students, 64 sophomores, 40 juniors, and 44 seniors. The self-correcting sample comprised 175 students (78.4% female) ranging from 18 to 37 years (M = 20.3, SD = 2.2); it included 16 first-year students, 64 sophomores, 61 juniors, and 34 seniors. Both sections met for two 75-min classes per week in the early afternoon, were taught by the same instructor, and received the very same exams.
Procedure
For both sections, the semester was divided into three parts with three exams in total. Each of the first two parts ended with a 50-item multiple-choice exam on the material of the corresponding part. For the self-correcting sample, these first two exams had the option of self-revising. For both samples, the third exam was a traditional (non-self-correcting) final exam with 100 multiple-choice questions. The final exam was cumulative, with 25 questions for Part 1, 25 questions for Part 2, and 50 questions for Part 3. The items on the final exam for Part 1 and Part 2 of the semester were different from the items used on the first two exams. All multiple-choice items had four options with one correct answer. Items were generally written to assess higher-level thinking, such as conceptual thinking and knowledge application, rather than simple factual knowledge.
We followed Montepare's (2005, 2007) recommendations for the self-correcting exams: First, students completed the exam as usual in one class period and handed in their answers, which is considered the "original" exam. Second, students were given the option of taking the questions home and handing in a self-corrected answer sheet during the next class period (2 days later), which is the "self-corrected" exam. Students marked their answers for both versions on optical answer sheets (bubble sheets). Self-correcting the exam was optional; however, all students actually completed a revised version. Answers that were correct in both versions received 2 points, and answers that were wrong in both versions received 0 points. Answers changed from wrong to correct, as well as answers changed from correct to wrong, received 1 point. We created a simple spreadsheet program to automate the scoring process.
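The partial-credit rule can be sketched in a few lines. The function names and data layout below are illustrative only, not the authors' actual spreadsheet program: 2 points for an item correct on both versions, 1 point for an answer changed in either direction, 0 points for an item wrong on both.

```python
# Sketch of the partial-credit scoring rule described above.
# Illustrative only; not the authors' actual spreadsheet program.

def score_item(original_correct: bool, revised_correct: bool) -> int:
    """Credit for one item under the self-correcting rule."""
    if original_correct and revised_correct:
        return 2  # correct in both versions: full credit
    if original_correct != revised_correct:
        return 1  # changed in either direction: partial credit
    return 0      # wrong in both versions: no credit

def score_exam(original, revised, key):
    """Total received score for one student across all items."""
    return sum(score_item(o == k, r == k)
               for o, r, k in zip(original, revised, key))

# A 4-item exam with answer key A B C D:
key      = ["A", "B", "C", "D"]
original = ["A", "C", "C", "A"]  # items 1 and 3 correct originally
revised  = ["A", "B", "C", "B"]  # item 2 corrected; item 4 still wrong
print(score_exam(original, revised, key))  # -> 5 (2 + 1 + 2 + 0)
```

Note that an answer changed from correct to wrong also receives 1 point under this rule, so revising cannot lower an item's credit below the partial-credit level.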
Results
To investigate the effects of the self-revising approach on learning, we addressed four questions: First, did performance on the original exams differ between the control sample and the self-correcting sample? Second, how large was the potential gain in exam scores through the self-correcting approach? Third, did the gain differ by students' original performance? Finally, did self-correcting on the first and second exams improve performance on the final exam?
First, we compared the performance of the control group with the performance on the original exams of the self-correcting sample. We ran a 2 × 3 (Section × Exam) repeated-measures analysis of variance (ANOVA) with section (control sample vs. self-correcting sample) as a between-subjects factor and exam (Exam 1 vs. Exam 2 vs. Exam 3) as a within-subjects factor. The analysis revealed a significant main effect of exam, F(2, 355) = 173.48, p < .01, η² = .52, indicating that students in both sections performed significantly worse on the first exam than on the second and third. The main effect of section was not significant, F(1, 355) = 3.11, p = .08, η² < .01. However, there was a significant interaction, F(2, 355) = 10.32, p < .01, η² = .06. Compared to the control group, students in the self-correcting sample performed significantly worse on Exam 1, t(346) = 2.15, p = .03, similarly on Exam 2, t(346) = 1.42, p = .15, and significantly better on Exam 3, t(346) = 2.70, p < .01. Thus, students' performance under the self-correcting approach increased significantly more over the course of the semester than students' performance under the traditional approach. Table 1 displays the exam scores for the control and self-correcting samples in percentages.

Table 1. Exam Scores in Percentage in the Control Sample (Spring 2010, N = 173) and the Self-Correcting Sample (Fall 2010, N = 175).

                                Self-Correcting Sample
          Control Sample   Original       Self-Corrected   Received
          M      SD        M      SD      M      SD        M      SD
Exam 1    63.4a  11.5      60.6b  12.7    82.7c  10.2      71.6d  9.8
Exam 2    71.4a  10.5      73.2a  12.9    91.5b  9.1       82.4c  9.1
Exam 3    71.2a  8.8       73.9b  9.8     —      —         —      —

Note. The scores under the self-correcting sample represent the performance on the original exam, the self-corrected exam (if counted in full), as well as the actually received score (with partial credit for changed items). Scores with different subscripts within one row differed significantly, p < .05.
Second, to investigate the actual improvement from revising the exams, we ran a 2 × 2 (Exam × Version) repeated-measures ANOVA with exam (Exam 1 vs. Exam 2) and version (original exam vs. self-corrected exam) as within-subjects factors for the self-correcting sample. Similar to the findings mentioned earlier, the analysis revealed a significant main effect of exam, F(1, 172) = 208.57, p < .01, η² = .55, documenting that students did better on the second exam than on the first. As expected, there was a significant main effect of version, F(1, 172) = 828.56, p < .01, η² = .83, indicating that students did much better on the self-corrected version than on the original. There was also a significant interaction between exam and version, F(1, 172) = 13.00, p < .01, η² = .07: Students gained more points on the first exam than on the second exam.
Third, we were interested in whether the gain differed significantly by students' performance on the original exams. To simplify the presentation, we used the letter grade received on the original version as a between-subjects factor (A to F) for investigating the received gain (the partial credit) through the self-correcting approach. The analyses revealed significant main effects for Exam 1, F(4, 173) = 38.33, p < .01, η² = .46, and Exam 2, F(4, 173) = 98.02, p < .01, η² = .70. As expected, students with weak performance on the original exam gained most from the self-correcting approach. Figure 1 displays the gains by initial performance for both exams.
Finally, to investigate whether self-correcting actually improved learning of the material, we examined whether the number of items changed correctly from the original to the self-corrected version had an impact on performance on the final exam for the corresponding parts. We ran two regressions in Mplus (Muthén & Muthén, 2007): one with Exam 1 scores predicting performance on Part 1 of the final exam and one with Exam 2 scores predicting performance on Part 2 of the final. In both analyses, we included the number of correct items on the original exam as well as the number of items correctly changed from wrong to correct. Figure 2 displays the path diagram with unstandardized regression weights. In both analyses, the number of self-corrected items was indeed positively related to performance on the corresponding part of the final. Thus, over and above performance on the original exam, the more students correctly changed their answers, the better their performance on the final. We also examined whether self-correcting had an impact on performance for the new material in Part 3 of the final, which did not undergo a self-correcting procedure. We ran a regression including the number of correct items on the original exams and the number of corrected items from the first two exams as predictors and the number of correct items on the third part of the final as the outcome. The results revealed that only the number of correct items on the second original exam was a significant predictor (B = .37, p < .01). No other effect reached significance (all ps > .10). Thus, the self-correcting procedure had no impact on new material that did not undergo the self-correcting approach.
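The regression structure just described (final-part score predicted jointly from original-exam correctness and number of corrected items) can be sketched with ordinary least squares. The data below are synthetic, since the study's raw scores are not available, so the recovered weights are illustrative only and are not the study's estimates.

```python
import numpy as np

# Sketch of the regression structure described above: performance on
# the corresponding part of the final is predicted jointly from the
# number of items correct on the original exam and the number of items
# correctly changed from wrong to correct. Synthetic data; weights
# are illustrative only.

rng = np.random.default_rng(0)
n = 175                                    # class size, as in the study
original  = rng.integers(20, 50, size=n)   # correct items on a 50-item exam
corrected = rng.integers(0, 15, size=n)    # items changed wrong -> correct

# Simulated final-part scores in which both predictors contribute
final_part = 0.3 * original + 0.2 * corrected + rng.normal(0, 2, size=n)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), original, corrected])
coef, *_ = np.linalg.lstsq(X, final_part, rcond=None)
print(coef)  # [intercept, weight for original, weight for corrected]
```

A nonzero weight on the `corrected` predictor, with `original` held in the model, is the pattern the study reports: correcting more items predicts final-exam performance over and above original performance.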
Discussion
The purpose of the study was to investigate the utility of self-correcting exams for fostering learning in college students in large classrooms. The study revealed three major findings: First, students' grades clearly improved through the self-correcting procedure, especially for the low performing students. Second, and more importantly, students' knowledge of the subject area tested under the self-correcting approach improved. Third, the self-correcting approach was useful for engaging students' learning in large classroom settings.
Consistent with Montepare's (2005, 2007) suggestions, students' knowledge seemed to improve through the use of self-correcting exams. First, compared to the control sample, students who were given the self-correcting option improved significantly more over the course of the semester on the three original exams. More specifically, the self-correcting sample performed better on the final exam than the control sample. Second, and providing more compelling evidence for the learning benefits of the self-correcting approach, self-correcting improved retention of the self-corrected material: Over and above performance on the original exam, the more items were corrected, the better the performance on the corresponding part of the final exam. This finding documents learning benefits through the self-correcting procedure.
Figure 1. Actual received gains by students' original exam scores in letter grades.

Figure 2. Number of correct items in the original and number of self-corrected items of Exam 1 (top) and Exam 2 (bottom) predicted performance on the corresponding part of the final exam. Values are unstandardized regression weights.
However, as we expected, the effects were limited to the subject area tested under the self-correcting approach; that is, there were no apparent transfer benefits of the self-correcting approach to new, non-self-revised material. This finding is not surprising, given that the hypothesized mechanism behind self-correcting exams is students' additional engagement with the material. Thus, the self-correcting approach might work best if used frequently over the semester to foster continuous engagement with the material. Note that both samples of students were given the correct answers for Exam 1 and Exam 2 upon completion of final scoring. Thus, the self-correcting approach showed knowledge gains on the final exam over and above simply knowing the correct answers for the first two exams.
In terms of interindividual differences, the self-correcting approach seems to be especially beneficial for lower performing students. Although the higher performing students also gained points, students who performed poorly on the original exam improved the most from the procedure. This is probably due to the fact that there was more room for improvement for lower performing students than for higher performing students. The self-correcting approach practically reduces grade point differences between students. From an instructor's perspective, this might be desirable, as lower performing students are at greater risk of dropping out physically or withdrawing mentally from a course. The self-correcting procedure provides a second chance for low performing students.
One potential criticism might be that the self-correcting approach fosters cheating. If cheating involves studying the textbook or actively discussing answers with peers, these forms of "cheating" are actually desirable under the self-correcting approach: Students should engage actively with the material. If cheating means just copying answers from peers, that would be passive and undesirable. We believe that our exams were difficult enough to create doubts that peers actually had the correct answers. Moreover, the beneficial effects of the self-correcting approach on the final exam cannot be explained by simple cheating. Another criticism is that the self-correcting approach leads to grade inflation: Lower performing students could gain a substantial number of points through the self-correcting approach that they would not get otherwise. We believe that this is again an issue of adequate test difficulty. Exams for the self-correcting approach should be rather difficult. On one hand, exams should be challenging for students to provide opportunities to explore and to master the material during the revision process; students should actively think about the material rather than simply find the answers in the textbook. On the other hand, difficult exams will reduce grade inflation after the revision process. In the current study, the average letter grade for Exam 1 went from D to C after the revision process, which seems to be a reasonable course average. Thus, grade inflation under the self-correcting approach is not an issue when tests have an appropriate difficulty level.
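The grade shift just mentioned can be reproduced from the Exam 1 means in Table 1, assuming a standard 10-point grading scale; the article does not state the course's actual cutoffs, so the mapping below is hypothetical.

```python
# Map a percent score to a letter grade on a hypothetical 10-point
# scale; the article does not specify the course's actual cutoffs.
def letter_grade(percent: float) -> str:
    for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if percent >= cutoff:
            return letter
    return "F"

# Exam 1 means from Table 1: 60.6% on the original version,
# 71.6% actually received after partial credit for corrections.
print(letter_grade(60.6), letter_grade(71.6))  # D C
```

Under these cutoffs the class average moves exactly one letter grade, from D to C, matching the shift the authors describe as a reasonable course average.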
The self-correcting approach may be particularly useful for large classrooms that limit instructors' options to foster active learning. It provides a relatively cost-efficient and simple way of implementing an active learning component. Given the growing availability of course management systems, such as Blackboard, Sakai, and Moodle, the self-correcting approach could easily be implemented online with very little additional effort for the instructor. For an instructor, even a slight improvement in students' learning might be a worthwhile endeavor. Moreover, in personal communications, students expressed positive attitudes toward the self-correcting approach. This positive attitude may motivate students to engage in class, enhance class climate, and even reduce test anxiety.
Future research might benefit from two avenues: First, future research may examine in more detail the learning processes and strategies triggered by a self-correcting approach. The potential mechanisms of the learning benefits through the self-correcting approach may include (a) the additional time spent on mastering the material, (b) enhanced student motivation to be actively involved in the exam process, or (c) identifying incorrect alternatives and the reasons why they are incorrect (Little, Bjork, Bjork, & Angello, 2012). Future research may examine day-to-day study behavior to provide a detailed account of potential mechanisms behind the learning benefits of self-correcting exams. Second, future research may benefit from investigating interindividual differences in the learning gains of the self-correcting approach more systematically. The self-correcting approach may benefit certain types of students more than others. Obviously, the self-correcting approach primarily benefits low performing students in improving their grades. However, high performing students may also obtain substantial learning gains that might not be evident when only looking at exam scores. Identifying interindividual differences in the learning gains of the self-correcting approach may help instructors determine the goodness of fit between this method and their students.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to
the research, authorship, and/or publication of this article.
Funding
The authors received no financial support for the research, authorship,
and/or publication of this article.
References

Bonwell, C. C., & Sutherland, T. E. (1996). The active learning continuum: Choosing activities to engage students in the classroom. New Directions for Teaching and Learning, 1996, 3–16.

Ciarocco, N. J., Lewandowski, G. W., Jr., & Van Volkom, M. (2013). The impact of a multifaceted approach to teaching research methods on students' attitudes. Teaching of Psychology, 40, 20–25. doi:10.1177/0098628312465859

Epstein, M. L., Lazarus, A. D., Calvano, T. B., Matthews, K. A., Hendel, R. A., Epstein, B. B., & Brosvic, G. M. (2002). Immediate feedback assessment technique promotes learning and corrects inaccurate first responses. The Psychological Record, 52, 187–201.

Francis, A. L., & Barnett, J. (2012). The effect and implications of a "self-correcting" assessment procedure. Teaching of Psychology, 39, 38–41. doi:10.1177/0098628311430171

Frederiksen, N. (1984). The real test bias: Influences of testing on teaching and learning. American Psychologist, 39, 193–202. doi:10.1037/0003-066x.39.3.193

Little, J. L., Bjork, E. L., Bjork, R. A., & Angello, G. (2012). Multiple-choice tests exonerated, at least of some charges: Fostering test-induced learning and avoiding test-induced forgetting. Psychological Science, 23, 1337–1344. doi:10.1177/0956797612443370

Montepare, J. M. (2005, October). A self-correcting approach to multiple choice tests. APS Observer, 18, 35–36.

Montepare, J. M. (2007). A self-correcting approach to multiple-choice tests. In B. Perlman, L. I. McCann, & S. H. McFadden (Eds.), Lessons learned (Vol. 3, pp. 143–154). Washington, DC: Association for Psychological Science.

Muthén, L. K., & Muthén, B. O. (2007). Mplus user's guide (5th ed.). Los Angeles, CA: Muthén & Muthén.

Prince, M. (2004). Does active learning work? A review of the research. Journal of Engineering Education, 93, 223–231.
Gru
¨hn and Cheng 339
... (Nugroho et al., 2018) states that the character of someone who does not easily believe the information he receives is called skepticism. (Cheng, 2014) states that students who are used to doing self-correction have better performance. ...
Article
Full-text available
The teacher's self-regulation in solving problems with contradictory information needs to be investigated because this certainly has an impact on students' self-regulation abilities. However, research related to this is still limited. Problem with Contradiction Information (PWCI) is appropriate to view self-regulation. This research is a case study which involved teachers in East Java, Indonesia and already have an educator certificate. There are 24 teachers as participants of this research, 14 females and 10 males. The objectives of this study describe how the teacher's response when completing PWCI and how the teacher's self-regulation when solving PWCI. Data were collected through tests and interviews. The results show that (1) There are two types of teacher responses in completing PWCI, the first type is the teacher who answers the questions directly without checking the provided information, the second type is the teacher who is thorough and cross-checks before working on the questions, (2) The emergence of self-assessment teacher regulation when completing PWCI is divided into four, namely, teacher self-regulation appears at the stage of understanding, implementing, re-checking and does not appear when completing PWCI. Most of the teachers are not aware of the contradictions in the questions given.
... Menurut (McCoubrie, 2004), tipe soal pilihan ganda merupakan tipe soal yang paling efisien dari jenis penilaian tertulis. Hal ini juga selaras dengan pernyataan (Gruhn & Cheng, 2014) bahwa tipe soal pilihan ganda merupakan jenis tes yang efisien. Tes pilihan ganda terdiri dari pilihan ganda biasa, hubungan antar hal (sebab akibat), analisa kasus, membaca diagram/tabel/grafik, dan pilihan ganda asosiasi (Widiyanto, 2018). ...
Article
Full-text available
Penelitian ini bertujuan untuk mengetahui ada atau tidak adanya pengaruh tipe soal pilihan ganda terhadap skor hasil belajar biologi siswa, pengaruh jenis kelamin terhadap skor hasil belajar biologi siswa, dan interaksi antara tipe soal pilihan ganda dengan jenis kelamin terhadap skor hasil belajar biologi siswa. Metode penelitian yang digunakan adalah quasi-eksperimental (percobaan semu) dan menyerupai model faktorial 2x2. Lokasi penelitian yaitu di Sekolah Menengah Atas (SMA) Negeri 1 Kota Tangerang Selatan, Provinsi Banten. Pengujian persyaratan analisis data menggunakan uji normalitas dan uji homogenitas, sedangkan pengujian hipotesis dilakukan melalui teknik analisis varians (ANOVA) dua jalur menggunakan program SPSS versi 24. Hasil penelitian membuktikan bahwa (1) terdapat pengaruh tipe soal pilihan ganda terhadap skor hasil belajar biologi, (2) terdapat pengaruh jenis kelamin terhadap skor hasil belajar biologi, dan (3) tidak terdapat interaksi antara tipe soal pilihan ganda dengan jenis kelamin terhadap skor hasil belajar biologi pada tingkat signifikansi α = 5%.
... The student's highest score is recorded. I allow students to take quizzes as frequently as they like to capitalize on the testing effect, or the fact that people tend to remember material after retrieval practice (Bastell et al., 2017;Grühn & Cheng, 2014;Roediger & Karpicke, 2006). Students can begin each quiz from any location that has access to Canvas, and students have 30 minutes to complete each quiz. ...
Chapter
In this chapter, I recall and reflect on teaching virtually an undergraduate group processes class during the COVID-19 pandemic. I explain the parameters and expectations I had for the class, in addition to reflecting on their effectiveness. I end the chapter with teaching tips based on knowledge attained from teaching the course in a new format.
... The student's highest score is recorded. I allow students to take quizzes as frequently as they like to capitalize on the testing effect, or the fact that people tend to remember material after retrieval practice (Bastell et al., 2017;Grühn & Cheng, 2014;Roediger & Karpicke, 2006). Students can begin each quiz from any location that has access to Canvas, and students have 30 minutes to complete each quiz. ...
Chapter
Teaching Cognitive Psychology online represents an exceptional opportunity to interact and connect with students in diverse and relevant ways. A course in Cognitive Psychology can be strengthened by considering the context of the current COVID-19 pandemic. In a world immersed by several sources of information, it is crucial undergraduate students both acquire tools for the analysis and processing of information and develop the ability to examine cognitive and decision-making processes. In this chapter, we present the development of the Cognitive Psychology online course and its implications to address the problems derived from the global health problem. We present the generalities of the course, the different interactions between real phenomena (e.g. beliefs about the fair distribution of income, information processing in reading acquisition, children’s learning of basic mathematical concepts and procedures, cognitive processing in decision-making, COVID-19 pandemic), and the content of the course. We discuss the different dynamics, activities, and assessments that we followed during the online course. Furthermore, we emphasize the importance of research to obtain and strengthen knowledge. Finally, we reflect on the advantages and implications of taking the online course and comment on two fundamental ideas for the consolidation of the concepts: 1) the conception and development of an advanced course in cognitive process modeling, and 2) the formulation of an international research seminar in cognitive sciences.
... [4][5] General education concepts show that practice tests can contribute to improved performance on standardized tests via comprehension calibration, study plan development, and application of metacognitive strategy. [6][7][8][9][10][11][12][13] fmCASES are online case-based modules originally created to meet the Society of Teachers of Family Medicine's National Clerkship Curriculum Objectives, 14 and were found to "foster self-directed and independent study" and "emphasize and model clinical problem-solving." 15 During the time of this initiative, 146 medical schools in the United States utilized fmCASES to teach or assess student learners. ...
Article
Background and objectives: Pretests have been shown to contribute to improved performance on standardized tests by serving to facilitate development of individualized study plans. fmCASES is an existing validated examination used widely in family medicine clerkships throughout the country. Our study aimed to determine if implementation of the fmCASES National Examination as a pretest decreased overall failure rates on the end-of-clerkship National Board of Medical Examiners (NBME) subject examination, and to assess if fmCASES pretest scores correlate with student NBME scores. Methods: One hundred seventy-one and 160 clerkship medical students in different class years at a single institution served as the control and intervention groups, respectively. The intervention group took the fmCASES National Examination as a pretest at the beginning of the clerkship and received educational prescriptions based on the results. Chi-square analysis, Pearson correlation, and receiver operating curve analysis were used to evaluate the effectiveness and correlations for the intervention. Results: Students completing an fmCASES National Examination as a pretest failed the end-of-clerkship NBME exam at significantly lower rates than those students not taking the pretest. The overall failure rate for the intervention group was 8.1% compared to 17.5% for the control group (P=0.01). Higher pretest scores correlated with higher NBME examination scores (r=0.55, P<0.001). Conclusions: fmCASES National Examination is helpful as a formative assessment tool for students beginning their family medicine clerkship. This tool introduces students to course learning objectives, assists them in identifying content areas most in need of study, and can be used to help students design individualized study plans.
... One method of providing higher-quality feedback while minimizing instructor burden is to offer students opportunities to self-correct. Grühn and Cheng (2014) found that students who were allowed to hand in a self-corrected midterm performed better on the final exam compared to students who took a traditional midterm (and were not allowed to self-correct). Grühn and Cheng utilized procedures for self-correction as laid out by Montepare (2005, 2007). ...
Article
Full-text available
Multiple-choice questions are frequently used in college classrooms as part of student assessment. While multiple-choice assessments (compared to other formats such as constructed response) seem to be the preferred method of testing by instructors and students, their effectiveness in assessing comprehension in college courses is sometimes called into question. Research has shown that there are ways to optimize the construction and use of multiple-choice testing to benefit college classroom instruction and assessment, student learning, and performance, and to more efficiently utilize instructor’s time and energy. This teacher-ready research review provides an overview of the research on utilizing multiple-choice questions as well as some tips on using, writing, and administering multiple-choice questions during assessments. We also summarize the benefits and potential issues with using multiple-choice questions including concerns about cheating, ways to detect and deter cheating, and testing issues and strategies unique to online formats. We hope that this short review will be helpful to instructors as they prepare courses and assessments and further encourage the use of empirical data in pedagogy related decision-making.
Article
Full-text available
BIOLOGY LEARNING BY DOING MULTIPLE CHOICE QUESTIONS FOR SENIOR HIGH SCHOOL STUDENTS. Learning with multiple-choice questions can support understanding in the classroom. To maximize their use as a learning tool, students can complete the task in two ways: they can finish the multiple-choice question (MCQ) task during class hours and submit it when class is over, or it can be assigned as homework and handed in at the next meeting. The research showed that instructors prefer multiple-choice questions not only for ease of administration, time savings, and simplicity, but also for their objectivity and consistency. Students likewise preferred multiple-choice assignments because they can eliminate wrong answers, and they considered this question format more objective than others. In addition, with this method students can participate actively in the learning process.
Article
Full-text available
A multifaceted approach to teaching five experimental designs in a research methodology course was tested. Participants included 70 students enrolled in an experimental research methods course in the semester both before and after the implementation of instructional change. When using a multifaceted approach to teaching research methods that included both active learning and a form of scaffolding, students reported a greater efficacy in American Psychological Association style writing, a higher perceived utility of research and statistics, better attitudes toward statistics, and higher perceived skills/abilities in statistics. This approach benefitted students' perception of an often disliked but required course in psychology.
Article
Full-text available
Notes that there is evidence that tests influence teacher and student performance and that multiple-choice tests tend not to measure the more complex cognitive abilities. The more economical multiple-choice tests have nearly driven out other testing procedures that might be used in school evaluation. It is suggested that the greater cost of tests in other formats might be justified by their value for instruction (i.e., to encourage the teaching of higher level cognitive skills and to provide practice with feedback). (56 ref)
Article
Full-text available
Among the criticisms of multiple-choice tests is that, by exposing the correct answer as one of the alternatives, such tests engage recognition processes rather than the productive retrieval processes known to enhance later recall. We tested whether multiple-choice tests could trigger productive retrieval processes, provided the alternatives were made plausible enough to enable test takers to retrieve both why the correct alternatives were correct and why the incorrect alternatives were incorrect. In two experiments, we found not only that properly constructed multiple-choice tests can indeed trigger productive retrieval processes, but also that they had one potentially important advantage over cued-recall tests. Both testing formats fostered retention of previously tested information, but multiple-choice tests also facilitated recall of information pertaining to incorrect alternatives, whereas cued-recall tests did not. Thus, multiple-choice tests can be constructed so that they exercise the very retrieval processes they have been accused of bypassing.
Article
We investigated Montepare's (2005, 2007) self-correcting procedure for multiple-choice exams. Findings related to memory suggest this procedure should lead to improved retention by encouraging students to distribute the time spent reviewing the material. Results from a general psychology class (n = 98) indicate that the benefits are not as definitive as expected. Students' initial performance moderated the benefits of the procedure such that comprehensive final exam scores were significantly higher for the self-correcting condition when controlling for initial quiz performance, with a marginally significant interaction (p = .06) between initial quiz scores and condition. The findings underscore the importance of using the scientist-educator model to evaluate pedagogical decisions while considering practical implications and return on investment.
Article
Multiple-choice testing procedures that do not provide corrective feedback facilitate neither learning nor retention. In Studies 1 and 2, the performance of participants evaluated with the Immediate Feedback Assessment Technique (IF-AT), a testing method providing immediate feedback and enabling participants to answer until correct, was compared to that of participants responding to identical tests with Scantron answer sheets. Performance on initial tests did not differ, but when retested after delays of 1 day or 1 week, participants evaluated with the IF-AT demonstrated higher scores and correctly answered more questions that had been initially answered incorrectly than did participants evaluated with Scantron forms. In Study 3, immediate feedback and answering until correct was available to all participants using either the IF-AT or a computerized testing system on initial tests, with the final test completed by all participants using Scantron forms. Participants initially evaluated with the IF-AT demonstrated increased retention and correctly responded to more items that had initially been answered incorrectly. Active involvement in the assessment process plays a crucial role in the acquisition of information, the incorporation of accurate information into cognitive processing mechanisms, and the retrieval of correct answers during retention tests. Results of Studies 1-3 converge to indicate that the IF-AT method actively engages learners in the discovery process and that this engagement promotes retention and the correction of initially inaccurate response strategies.
Article
We believe that all instructors should find ways to include meaningful active learning approaches in their classes. This chapter provides a framework for choosing active learning strategies that take into account course objectives, teaching styles, and students' level of experience.
Article
This study examines the evidence for the effectiveness of active learning. It defines the common forms of active learning most relevant for engineering faculty and critically examines the core element of each method. It is found that there is broad but uneven support for the core elements of active, collaborative, cooperative and problem-based learning.
Montepare, J. M. (2007). A self-correcting approach to multiple-choice tests. In B. Perlman, L. I. McCann, & S. H. McFadden (Eds.), Lessons learned (Vol. 3, pp. 143–154). Washington, DC: Association for Psychological Science.