Measuring actual learning versus feeling of learning in
response to being actively engaged in the classroom
Louis Deslauriers, Logan S. McCarty, Kelly Miller, Kristina Callaghan, and Greg Kestin

Department of Physics, Harvard University, Cambridge, MA 02138; Department of Chemistry and Chemical Biology, Harvard University, Cambridge, MA 02138; and School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138
Edited by Kenneth W. Wachter, University of California, Berkeley, CA, and approved August 13, 2019 (received for review December 24, 2018)
We compared students' self-reported perception of learning with their actual learning under controlled conditions in large-enrollment introductory college physics courses taught using 1) active instruction (following best practices in the discipline) and 2) passive instruction (lectures by experienced and highly rated instructors). Both groups received identical class content and handouts, students were randomly assigned, and the instructor made no effort to persuade students of the benefit of either method. Students in active classrooms learned more (as would be expected based on prior research), but their perception of learning, while positive, was lower than that of their peers in passive environments. This suggests that attempts to evaluate instruction based on students' perceptions of learning could inadvertently promote inferior (passive) pedagogical methods. For instance, a superstar lecturer could create such a positive feeling of learning that students would choose those lectures over active learning. Most importantly, these results suggest that when students experience the increased cognitive effort associated with active learning, they initially take that effort to signify poorer learning. That disconnect may have a detrimental effect on students' motivation, engagement, and ability to self-regulate their own learning. Although students can, on their own, discover the increased value of being actively engaged during a semester-long course, their learning may be impaired during the initial part of the course. We discuss strategies that instructors can use, early in the semester, to improve students' response to being actively engaged in the classroom.
scientific teaching | undergraduate education | evidence-based teaching
Students learn more when they are actively engaged in the classroom than they do in a passive lecture environment. Extensive research supports this observation, especially in college-level science courses (1–6). Research also shows that active teaching strategies increase lecture attendance, engagement, and students' acquisition of expert attitudes toward the discipline (3, 7–9). Despite this overwhelming evidence, most instructors still use traditional methods, at least in large-enrollment college courses (10–12).
Why do these inferior methods of instruction persist? Instructors cite many obstacles preventing them from adopting active teaching strategies, such as insufficient time, limited resources, a lack of departmental support, concerns about content coverage, and concerns about evaluations of their teaching (13–18). They also perceive that students resist active teaching strategies and prefer traditional methods (10, 14, 17, 19–22). Indeed, one-third of instructors who try active teaching eventually revert to passive lectures, many citing student complaints as the reason (23). Instructors report that students dislike being forced to interact with one another (15, 17, 24), that they resent the increase in responsibility for their own learning (21, 22), and that they complain that "the blind can't lead the blind" (19). More recent literature shows that if instructors explain and facilitate active learning, student attitudes toward it can improve over the course of a semester (25, 26). However, these studies do not measure students' inherent, unbiased response to being actively engaged with the material. Little is known about how students naturally react to active learning without any promotion from the instructor. In addition, previous studies used different course materials for active versus passive instruction, potentially confounding the effect of pedagogy with that of course materials.
In this report, we identify an inherent student bias against active learning that can limit its effectiveness and may hinder the wide adoption of these methods. Compared with students in traditional lectures, students in active classes perceived that they learned less, while in reality they learned more. Students rated the quality of instruction in passive lectures more highly, and they expressed a preference to have all of their physics classes "taught this way," even though their scores on independent tests of learning were lower than those in actively taught classrooms. These findings are consistent with the observations that novices in a subject are poor judges of their own competence (27–29), and the cognitive fluency of lectures can be misleading (30, 31). Our findings also suggest that novice students may not accurately assess the changes in their own learning that follow from their experience in a class. These misperceptions must be understood and addressed in order for research-based active instructional strategies to be more effective and to become widespread.
Significance

Despite active learning being recognized as a superior method of instruction in the classroom, a major recent survey found that most college STEM instructors still choose traditional teaching methods. This article addresses the long-standing question of why students and faculty remain resistant to active learning. Comparing passive lectures with active learning using a randomized experimental approach and identical course materials, we find that students in the active classroom learn more, but they feel like they learn less. We show that this negative correlation is caused in part by the increased cognitive effort required during active learning. Faculty who adopt active learning are encouraged to intervene and address this misperception, and we describe a successful example of such an intervention.

Author contributions: L.D., L.S.M., and K.C. designed research; L.D., L.S.M., K.M., K.C., and G.K. performed research; L.D., L.S.M., and K.M. analyzed data; and L.D., L.S.M., K.M., and G.K. wrote the paper.

The authors declare no conflict of interest.

This article is a PNAS Direct Submission.

This open access article is distributed under Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND).

To whom correspondence may be addressed. Email:

This article contains supporting information online at 1073/pnas.1821936116/-/DCSupplemental.

Materials and Methods

Our study sought to measure students' perception of learning when active learning alone is toggled on and off. This contrasts with typical educational interventions that include active engagement as one component of many changes to a course. We compared actual learning to students' feeling of
learning (FOL) following each of 2 contrasting instructional methods: active
learning (experimental treatment) and passive lecture (control). The entire
protocol was repeated twice in physics courses taught during fall and spring
at Harvard University. These calculus-based introductory courses cover topics
in mechanics at a level appropriate for physics majors. Classes meet for
90 min twice each week, during a semester lasting 15 wk. The regular instructor
for these courses had used a consistent syllabus and instructional approach
for a number of years prior to the study and continued the same approach in
both courses described here. Typical class meetings consisted of chalkboard
lectures enhanced with frequent physics demonstrations, along with occa-
sional interactive quizzes or conceptual questions. In the instructional tax-
onomy of Stains (12) this approach would likely be classified as interactive
lecture, with lecturing as the primary mode, supplemented by student in-
class activities. Consequently, while active learning was already a part of the
instructional style during the semester, students in the experimental group
had to adjust to an increase in the amount of active learning, while those
in the control group had to adjust to a complete elimination of any active learning.
Although most of the students in these courses were considering majoring
in physics, fewer than one-third actually did so; the others majored in life
sciences, math, engineering, computer science, economics, or other fields.
Harvard also offers an alternative introductory mechanics course that in-
cludes advanced topics like Lagrangian mechanics, and this honors-level
course tends to attract the most well-prepared physics students, leaving a
more diverse range of students in the courses studied here. Indeed, although
the students in the more advanced course are often quite exceptional, the
students in this study have backgrounds comparable to those of physics
majors at other major research universities. For instance, the students who
took part in this study completed the Force Concept Inventory (FCI), which
measures basic conceptual knowledge about mechanics (32), and the Colo-
rado Learning Attitudes about Science Survey (CLASS), which measures the
extent to which students' perceptions about physics are similar to those of
experts (7, 8). The pretest FCI scores in this study (Table 1) are similar to those
clustered near the high end of the distribution of university scores in the
metaanalysis published by Hake (1), which confirms that the students in our
study have high school preparation comparable to that at other top univer-
sities. The CLASS survey is perhaps more relevant as it measures expert thinking
in physics instead of specific background knowledge. The pretest CLASS scores
in this study (Table 1) are comparable to those of first-year physics majors (or
intended physics majors) at the University of Colorado (33), the University of
California San Diego (34), or the University of Edinburgh (35).
The experimental intervention took place during 2 consecutive class
meetings in week 12 of each course. Students were randomly assigned to 2
groups and told to report to 2 different classrooms: room A with instructor A
and room B with instructor B. For the first class meeting, on the topic of static
equilibrium, instructor A used active learning, while instructor B taught the
same topic using a passive lecture. For the second class meeting, on the topic
of fluids, instructor A used a passive lecture while instructor B used active
learning. At the end of each class period, students completed a brief survey
about their perceptions of the class and their FOL, followed by a multiple-
choice test of learning (TOL). Table 2 summarizes the experimental design.
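The random assignment and crossover schedule can be made concrete with a short sketch (the roster size and group split below are hypothetical; only the structure mirrors the design described above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical roster of student IDs (the actual courses enrolled 65 and 92).
students = np.arange(150)

# Randomly split the roster into two groups, A and B.
shuffled = rng.permutation(students)
group_a, group_b = shuffled[:75], shuffled[75:]

# Crossover schedule: each group experiences both conditions, with the
# condition-topic pairing swapped between groups (cf. Table 2).
schedule = {
    "static equilibrium": {"A": "active", "B": "passive"},
    "fluids":             {"A": "passive", "B": "active"},
}
```

Because the pairing is swapped across the two topics, every student serves as his or her own control for the active versus passive contrast.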
As this study involved classroom-based research using normal educational
practices, it was exempt from Institutional Review Board oversight. We in-
formed students that they would be learning the same material in both
groups with different instructional methods, that they would all experience
both instructional approaches, and that their scores on the TOL would not
have any impact on their course grades. Nearly all students consented to
participate, so attrition was negligible: only 8 out of 157 opted out or failed
to complete the study.
The study design featured a number of controls to ensure consistency and
avoid bias: 1) Both instructors had extensive, identical training in active
learning, using best practices as detailed in prior research (3, 6, 36). 2) Both
instructors also had comparable experience in delivering fluent, traditional
lectures. 3) The lecture slides, handouts, and written feedback provided
during each class were identical for active instruction and for passive lecture.
4) Students were individually randomly assigned to 2 groups, and these
groups were indistinguishable on several measures of physics background
and proficiency (Table 1). 5) Each student experienced both types of instruc-
tion in a crossover study design that controls for other possible variation be-
tween students. 6) Students had no exposure to either of the instructors
before the experimental intervention. 7) The entire protocol was repeated in
2 different courses with the same results; a total of 149 students partici-
pated. 8) The instructors did not see the TOLs, which were prepared in-
dependently by another author. 9) The author of the TOLs did not have
access to the course materials or lecture slides and wrote the tests based only
on a list of detailed learning objectives for each topic.
Students in both groups received identical paper handouts with key
concepts and equations along with example problems targeting specific
learning objectives. The handouts had blank space for students to take notes
and fill in answers to these sample problems. (All materials are provided in SI
Appendix.) In the control group, the instructor presented slides based on the
handouts, gave explanations and demonstrations, and solved the example
problems while students listened and filled in the answers along with the
instructor. Emphasis was placed on maximizing the fluency with which the
information was delivered. The use of handouts and focus on problem-
solving was different from the usual lectures in these courses. Using the
taxonomy of Stains (12), these classes in the control group were strictly di-
dactic in approach, with none of the supplemental group activities found in
the usual class meetings. In the experimental group, the instructor actively
engaged the students using the principles of deliberate practice (3, 36, 37):
students were instructed to solve the sample problems by working together
in small groups while the instructor roamed the room asking questions and
offering assistance. After the students had attempted each problem, the
instructor provided a full solution that was identical to the solution given to
the control group. Students were actively engaged throughout the class
period, making the experimental group fully student-centered (12). The
crucial difference between the 2 groups was whether students were told
directly how to solve each problem or were asked to try to solve the prob-
lems themselves in small groups before being given the solution. In other
words, students in both groups received the exact same information from
the handouts and the instructor, and only active engagement with the
material was toggled on and off. Previous well-controlled studies that
compared active versus passive learning, such as the studies included in ref.
4, used distinctly different class materials with each group, potentially con-
founding active engagement with changes in class content (3). Likewise,
studies that compared students' responses to active versus passive learning
typically did not use precisely the same class content. Students who claimed
to prefer one mode of instruction over the other might have been
responding to differences in content or class materials in addition to dif-
ferences in the amount of active engagement.
Results and Discussion
At the end of each class period, students completed a brief
survey to measure their FOL followed by a multiple-choice TOL.
Table 1. Descriptive statistics for the randomized groups used in the study

                                          Group A         Group B
Measure of background and proficiency     Mean    SD      Mean    SD       t      df*    p

Spring semester (enrollment: 65 students)
FCI pretest score (0–30)                  23.3    6.57    24.8    4.35     0.96   49     0.34
CLASS pretest score (%)                   75.7    10.04   73.4    11.93    0.82   58     0.41
Average of first 2 midterms (%)           77.9    13.4    78.1    14.4     0.07   63     0.94

Fall semester (enrollment: 92 students)
FCI pretest score (0–30)                  23.6    4.19    24.6    3.40     1.05   70     0.30
CLASS pretest score (%)                   77.7    9.84    75.5    14.04    0.75   78     0.46
Average of first 2 midterms (%)           70.5    13.0    72.02   13.9     0.54   84     0.59

*Some study participants did not have pretest data; all 149 participants had midterm scores.
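The t statistics in Table 1 come from two-sample comparisons of the randomized groups. A numpy sketch of Welch's unequal-variance t statistic follows (the paper does not state which t-test variant was used, so this is one standard choice, and the scores below are synthetic, not the study data):

```python
import numpy as np

def welch_t(x, y):
    """Welch's two-sample t statistic and approximate degrees of freedom."""
    nx, ny = len(x), len(y)
    vx, vy = x.var(ddof=1), y.var(ddof=1)
    se2 = vx / nx + vy / ny                      # squared SE of the mean difference
    t = (x.mean() - y.mean()) / np.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom.
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

rng = np.random.default_rng(1)
# Illustrative FCI-like pretest scores for two randomized groups.
t, df = welch_t(rng.normal(23.3, 6.6, 31), rng.normal(24.8, 4.4, 30))
```

Note that the df values in Table 1 are smaller than the enrollments would suggest because, per the table footnote, some participants lacked pretest data.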
Students rated their level of agreement on a 5-point Likert scale, with 1 representing strongly disagree and 5 representing strongly agree. Students first evaluated the statement "This class mostly involved me as a listener while the instructor presented information." As expected, the students in the passive lecture agreed more strongly (mean = 3.9) than those in the active classroom (mean = 2.9, P < 0.001). Note that even in the experimental group, about 50% of the class time featured the instructor giving concise, targeted feedback as minilectures following each group activity (3, 6, 36). The students then assessed their own FOL by rating their level of agreement with 4 additional statements, each of which probed some aspect of their perceived learning from the class. The primary FOL item asked students to evaluate the statement "I feel like I learned a great deal from this class." The remaining FOL questions were highly correlated with this primary question, so we could use either this question alone or a composite of all 4 survey items to measure students' overall FOL. Fig. 1 lists the 4 FOL questions asked in the survey.
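The claim that the 4 FOL items are correlated enough to be collapsed into one measure is an internal-consistency property; one standard way to quantify it is Cronbach's alpha, sketched below on synthetic Likert responses (the paper itself reports high inter-item correlation and uses a principal-components composite, so alpha here is an illustrative stand-in):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array of Likert responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed scale
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
# Synthetic 5-point Likert data: 4 items driven by one shared latent factor.
latent = rng.normal(0.0, 1.0, 149)
items = np.clip(np.round(3 + latent[:, None] + rng.normal(0.0, 0.7, (149, 4))), 1, 5)
alpha = cronbach_alpha(items)
```

High alpha (close to 1) would justify using either a single item or a weighted composite, as the study does.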
The subsequent tests of learning (1 on statics and 1 on fluids)
each consisted of 12 multiple-choice questions. The students were
encouraged to try their best on each TOL and were told that they
would be good practice for the final examination but were
reminded that their score on the TOL would not directly affect
their course grade. Students were also told that they would receive
participation points toward their final grade for completing the
TOL and the FOL surveys. (The FOL and TOL questions are
provided in SI Appendix.)
The bar graphs shown in Figs. 1 and 2 highlight several aspects
of these FOL and TOL results. We note, in particular, the fol-
lowing observations (all of which are confirmed by a more detailed
statistical analysis): 1) All of the FOL responses show a consistent
student preference for the passive lecture environment. 2) Scores
on the TOL, by contrast, are significantly higher in the active
classroom. 3) These trends are similar for both the statics and
fluids topics. Given the crossover study design (Table 2), it appears
that the shift in TOL and FOL scores between passive and active
learning was not strongly affected by the choice of topic, instructor,
or classroom.
We constructed linear regression models (fixed-effects mod-
els) to identify the factors contributing to these observed dif-
ferences in TOL and FOL scores. To control for student-level
variation, we included 3 measures of students' individual background and proficiency in physics: the FCI (32), the CLASS (7),
and the average scores on 2 midterm examinations that took
place prior to the study. The descriptive statistics summarized in
Table 1 confirm successful randomization at the student level for
these measures.
Table 3 summarizes these statistical models. Model 1 predicts
students' overall FOL, which is a composite of the FOL survey
responses weighted according to a principal components analy-
sis. (The entire analysis is virtually identical if the primary FOL
question 2 is used alone in place of this composite variable.) The
students in active classrooms reported more than half an SD
(0.56) lower FOL compared with those in passive lectures.
Model 2 predicts students' performance on the TOL. In this
case, students in active classrooms scored almost half an SD
(0.46) higher on the examination. These results are highly sig-
nificant (P<0.001). In addition, the crossover study design al-
lows us to control for any additional person-level variation by
adding a categorical variable for each individual student (treat-
ing each student as his or her own control); we find no mean-
ingful change using these additional covariates. Conversely, as
expected for a randomized experiment, if we remove from the
statistical model all student-level covariates (CLASS score, FCI
score, midterm average, and gender) the point estimates of the
effects of active learning also show no meaningful change (less
than half the SE).
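The fixed-effects models are ordinary least-squares regressions with the outcome standardized to z scores, so coefficients read in SD units. A minimal numpy sketch of that kind of fit follows (the data, effect sizes, and reduced two-covariate design are synthetic illustrations, not the study's full covariate set from Table 3):

```python
import numpy as np

def ols(y, X):
    """Ordinary least-squares fit with an intercept; returns the coefficient vector."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return beta

rng = np.random.default_rng(3)
n = 149
active = rng.integers(0, 2, n).astype(float)   # passive lecture = 0, active = 1
fci = rng.normal(0.0, 1.0, n)                  # z-scored pretest covariate
tol = 0.46 * active + 0.25 * fci + rng.normal(0.0, 0.8, n)  # synthetic TOL scores

# Pool and z-score the outcome first, so the 'active' coefficient is in SD units.
tol_z = (tol - tol.mean()) / tol.std()
beta = ols(tol_z, np.column_stack([active, fci]))  # beta[1]: active-learning effect
```

With a binary treatment indicator, the standardized coefficient on `active` is directly comparable to the roughly half-SD shifts reported for FOL and TOL.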
In educational research, a question often arises whether to
analyze the data at the individual student level or at the group
level (typically by classroom or by school). The convention in re-
cent research on higher education, e.g., ref. 4, is that if preexisting
Fig. 1. A comparison of performance on the TOL and FOL responses between students taught with a traditional lecture (passive) and students taught actively
for the statics class. Error bars show 1 SE.
Table 2. Randomized experimental design for the study

Class topic           Group A: instructor A in classroom A    Group B: instructor B in classroom B
Static equilibrium    Active (treatment)                      Passive (control)
Fluids                Passive (control)                       Active (treatment)
groups are exposed to treatment versus control conditions, the
statistical analysis should account for these clusters, since both
randomization and treatment are applied at the group level. Many
studies of college science courses do not correctly account for
clustering, and indeed Freeman et al. (4) had to correct for this
oversight in their metaanalysis. On the other hand, if students are
individually randomized, or the experiment is a crossover study in
which each student receives both conditions, then an individual-
level analysis is appropriate, even if the treatment is (inevitably)
delivered at the class level. This convention is rigorously justified
(39) as long as peer effects are negligible. In our study, the
crossover design controls for peer effects at the linear level since
students have the same peer group under both active and passive
conditions. A remaining concern could be a nonlinear interaction
between peer effects and the 2 styles of teaching; for instance, if
students openly expressed disdain for the pedagogy only in the
active classroom. The physics courses used in this study are rou-
tinely video-recorded, and videos of the experiment show no overt
peer interactions that could affect the outcomes in active versus
passive classrooms. Students took the FOL and TOL surveys
immediately at the end of each class period, so there could be no
peer effects outside the classroom. Moreover, as shown in SI
Appendix, even if we postulate an extremely large unobserved peer
effect on active versus passive learning, our results would still
remain highly significant (P<0.001).
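For the group-level designs discussed above, where randomization is applied at the class level, the standard correction is a cluster-robust "sandwich" variance estimator. A minimal numpy sketch on synthetic clustered data follows (illustrative only; as argued above, the crossover design of this study justifies the simpler individual-level analysis):

```python
import numpy as np

def ols_cluster_se(y, X, clusters):
    """OLS coefficients with cluster-robust (sandwich) standard errors."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    # "Meat" of the sandwich: sum of per-cluster score outer products.
    k = X.shape[1]
    meat = np.zeros((k, k))
    for g in np.unique(clusters):
        s = X[clusters == g].T @ resid[clusters == g]
        meat += np.outer(s, s)
    cov = XtX_inv @ meat @ XtX_inv
    return beta, np.sqrt(np.diag(cov))

rng = np.random.default_rng(4)
n_clusters, per = 8, 20
clusters = np.repeat(np.arange(n_clusters), per)
treat = np.repeat(np.tile([0.0, 1.0], n_clusters // 2), per)   # cluster-level treatment
# Outcome with a shared within-cluster error component, which is what
# inflates the variance of cluster-level treatment estimates.
y = 0.4 * treat + np.repeat(rng.normal(0, 0.5, n_clusters), per) \
    + rng.normal(0, 1.0, n_clusters * per)
X = np.column_stack([np.ones(n_clusters * per), treat])
beta, se = ols_cluster_se(y, X, clusters)
```

When errors are correlated within clusters, these standard errors are typically larger than the naive OLS ones, which is exactly the correction Freeman et al. had to apply in their metaanalysis.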
Having observed this negative correlation between students' FOL and their actual learning, we sought to understand the
causal factors behind this observation. A survey of the existing
literature suggests 2 likely factors: 1) the cognitive fluency of
lectures can mislead students into thinking that they are learning
more than they actually are (30, 31) and 2) novices in a subject
have poor metacognition and thus are ill-equipped to judge how
much they have learned (27–29). We also propose a third factor:
3) students who are unfamiliar with intense active learning in the
college classroom may not appreciate that the increased cognitive
struggle accompanying active learning is actually a sign that the
learning is effective. We describe below some evidence suggesting
that all 3 factors are involved and propose some specific strategies
to improve students' engagement with active learning.
Fig. 2. A comparison of performance on the TOL and FOL responses between students taught with a traditional lecture (passive) and students taught actively
for the fluids class. Error bars show 1 SE.
Table 3. Standardized coefficients for linear regression models predicting students' overall FOL (model 1) and performance on the TOL (model 2)

Regression parameter                    Model 1: FOL (standardized z score)   Model 2: TOL (standardized z score)
Constant                                0.34                                  0.46
Passive (0) versus active (1)           −0.56***                              0.46***
Topic (fluids = 0; statics = 1)         0.44**                                0.44***
Semester (spring = 0; fall = 1)         0.37*                                 0.29*
Instructor (A = 0; B = 1)               0.03                                  0.12
CLASS pretest (z score)                 0.01                                  0.00
FCI pretest (z score)                   0.07                                  0.25***
Average of first 2 midterms (z score)   0.20*                                 0.26***
Gender (female = 0; male = 1)           0.05                                  0.33*
R²                                      0.17                                  0.39
RMSE                                    0.97                                  0.77

Both models control for class content (fluids versus statics), semester, instructor, and student data (CLASS score, FCI score, midterm performance, and gender). ***P < 0.001, **P < 0.01, *P < 0.05. Results are unaffected by the choice of ordinary least-squares or robust SEs (38). The raw FOL and TOL scores were pooled before standardization, which accounts for the effect of the "topic" covariate (fluids vs. statics).
One of the most important metacognitive cues is the apparent
fluency of cognitive tasks. Perceived fluency has broad impacts
on judgment and perception (31). In the laboratory context, previous research has compared students' perceived ability to recall facts from a 5-min video from a fluent versus a disfluent lecturer (30). The disfluent lecturer, who avoided eye contact, did not speak clearly, and lacked flow, led to lower perceived retention even though students' actual recall was the same as it was with the fluent lecturer. Research has also shown that when students are forced to struggle through something that is difficult, the consequent disfluency leads to deeper cognitive processing (31, 40). In our study, students in the actively taught groups had to struggle with their peers through difficult physics problems that they initially did not know how to solve. The cognitive effort involved in this type of instruction may make students frustrated and painfully aware of their lack of understanding, in contrast with fluent lectures that may serve to confirm students' inaccurately inflated perceptions of their own competence.
To learn more about our students' perceptions, we conducted
follow-up one-on-one, structured interviews with a subset of
students from the study (17 students total). The students were
drawn from both semesters and provided a representative sam-
ple of the entire population as measured by their CLASS scores,
FCI scores, and final course grades. Consistent with the litera-
ture, most students (15 of 17) found the instruction in the active
classrooms disjointed and lacking in flow when compared with
the more fluent passive lecture. Students also cited the frequent
interruptions that accompanied each transition from group ac-
tivities to instructor feedback (14 responses), a concern that their
errors made during class would not be corrected (10 responses),
and a general feeling of frustration and confusion (14 responses)
when discussing their concerns about the actively taught classes.
In addition, although conventional wisdom suggests that students
do not always enjoy working in groups, none of the students
raised group work as an issue during interviews. In contrast, all
but 1 of the students found the passive lecture more enjoyable
and easier to follow. At the end of each interview, students were
shown the results of the study. After commenting on the results,
each student was asked if seeing these results "will impact the way you study," and 14 out of 17 students said that it would.
In addition, we investigated the connection between FOL and perceived fluency with a linear regression model predicting students' FOL, given by FOL question 2: "I feel like I learned a great deal from this lecture." Students who perceived the instructor to be highly fluent, as measured by agreement with the statement "The instructor was effective at teaching," reported more than half an SD (0.51) higher FOL compared with those who perceived the instructor as disfluent (P < 0.001). Notably,
the type of instruction (active vs. passive) was not significant in
predicting FOL; only the perceived fluency of the instructor was
relevant. We conducted additional one-on-one, structured in-
terviews to validate that students interpret the question about
teaching effectiveness as a measure for fluency of instruction.
These interviews revealed that students interpret this question
primarily as 1) clarity of explanations, 2) organization of presentation, and 3) smooth flow of instruction. In addition, students presented several scenarios in which they could imagine reporting that a teacher was highly effective even if they personally did not feel they learned very much; for instance, if they were not sufficiently prepared for a class or too tired to pay close
attention. The strong correlation between students' FOL and the effectiveness/fluency of instruction suggests that greater perceived fluency is related to higher perceived FOL.
A second factor that could account for our observed results is
that novices (such as the students in our study) generally have
poor metacognition and are not good at judging their own
learning. "The same knowledge that underlies the ability to produce correct judgment is also the knowledge that underlies the ability to recognize correct judgment. To lack the former is to be deficient in the latter" (27). Although this well-known effect predicts that students' FOL may be unreliable, it does not predict whether these feelings should be biased in favor of active versus passive styles of teaching. We investigated this hypothesis by adding a nonlinear interaction term to model 2, described above, that predicts students' performance on the TOL. We found a moderately significant (P < 0.05) interaction between students' background physics knowledge as measured by the FCI and their FOL as measured by question 2: "I feel like I learned a great deal from this lecture." The sign of this interaction was positive, which
means that students with more prior expertise had a stronger
(more positive) correlation between FOL and actual perfor-
mance on the test. Combining this observation with that in the
previous paragraph, we propose that novice students are poor at
judging their actual learning and thus rely on inaccurate meta-
cognitive cues such as fluency of instruction when they attempt to
assess their own learning. These 2 factors together could explain
the strong, overall negative correlation we observed in this study.
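The interaction analysis described above amounts to adding a product term to the design matrix of model 2. A minimal numpy sketch with synthetic data follows (variable names and coefficient sizes are illustrative only, not the study's fitted values):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 149
fci = rng.normal(0.0, 1.0, n)   # prior physics knowledge (z-scored)
fol = rng.normal(0.0, 1.0, n)   # FOL question 2 response (z-scored)
# Synthetic TOL: a positive fci*fol term means FOL tracks actual test
# performance more closely for students with stronger backgrounds.
tol = 0.25 * fci + 0.05 * fol + 0.15 * fci * fol + rng.normal(0.0, 0.8, n)

# Design matrix: intercept, main effects, and the nonlinear interaction term.
X = np.column_stack([np.ones(n), fci, fol, fci * fol])
beta, *_ = np.linalg.lstsq(X, tol, rcond=None)
interaction = beta[3]   # positive: FOL is a better guide for more expert students
```

A positive fitted interaction coefficient is the pattern the study reports: students with more prior expertise show a stronger positive relation between FOL and actual performance.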
A final factor could be that the students in this study had little
prior experience with fully student-centered classrooms in a col-
lege environment (12). As suggested by the interviews described
above, when students experienced confusion and increased cog-
nitive effort associated with active learning, they perceived this
disfluency as a signal of poor learning, while in fact the opposite is
true. It is unlikely that the sheer novelty of student-centered active
learning alone can account for students' negative response to this
mode of instruction. First, as mentioned above, both the experi-
mental (active) and control (passive) groups experienced a change
from the usual instructional approach in these courses: in the
passive group, students experienced none of the small-group ac-
tivities that were interspersed in the usual course lectures. Second,
one can imagine a thought experiment in which students are given
one-on-one tutoring with an expert tutor for 1 wk of a course. This
would constitute a dramatic change from their usual classroom
experience, but nearly all students would likely prefer this style of
instruction, which is demonstrably superior (41, 42), to their
familiar lectures.
Based on the 3 factors discussed above, it is likely that a
significant part of students' comparably negative response to this
intense style of active learning is a result of the disfluency they
experience in this cognitively demanding environment. We carried
out a semester-long intervention to see if these attitudes could be
changed. Near the beginning of a physics course that used the
same active learning strategy described here, the instructor gave a
20-min presentation that started with a brief description of active
learning and evidence for its effectiveness. He then presented
additional detail about the connections between perceived flu-
ency, FOL, and actual learning, including a discussion of the
negative correlations we observed in this study. (The transcript for
this presentation can be found in SI Appendix.) Studentsquestions
and discussion following the presentation indicated that they were
most interested in the idea that fluency and FOL can often be
misleading. Students indicated that this knowledge would be
useful for understanding how to approach active learning. At the
end of the semester, over 65% of students reported on a survey
that their feelings about the effectiveness of active learning sig-
nificantly improved over the course of the semester. A similar
proportion (75%) of students reported that the intervention at the
beginning of the semester helped them feel more favorably toward
active learning during lectures.
As the success of active learning crucially depends on student
motivation and engagement, it is of paramount importance that
students appreciate, early in the semester, the benefits of struggling
with the material during active learning. If students are misled by
their inherent response into thinking that they are not learning,
they will not be able to self-regulate, and they will not learn as successfully.
Deslauriers et al. | PNAS Latest Articles
In addition, during group work, poor attitudes or low
engagement of a few students can have negative effects on other
students in their groups. Thus, although students may eventually,
on their own, discover the value of active learning during a semester-
long course, their learning will be impaired during the first part of
the course while they still feel the inherent disfluency associated with
in-class activities.
We recommend that instructors intervene early on by explicitly
presenting the value of increased cognitive efforts associated with
active learning. Instructors should also give an examination (or
other assessment) as early as possible so students can gauge their
actual learning. These strategies can help students get on board
with active learning as quickly as possible. Then, throughout the
semester, instructors should adopt research-based explanation and
facilitation strategies (26), should encourage students to work
hard during activities, and should remind them of the value of
increased cognitive effort. Instructors should also solicit frequent
feedback, such as "one-minute papers," throughout the course (43),
and respond to students' concerns. The success of active learning
will be greatly enhanced if students accept that it leads to deeper
learning, and acknowledge that it may sometimes feel like exactly
the opposite is true.
These recommendations should apply to other student populations
and to other disciplines, as the cognitive principles underlying
these effects are not specific to physics or to the well-prepared
students in this course. To illustrate this point, imagine a course
with a different group of students, or in a different subject, that
uses a highly effective interactive pedagogy with course materials
tailored to its own student audience. Now bring in a fluent and
charismatic lecturer with special knowledge of student thinking
who uses the same materials but eliminates all interactive en-
gagement from the course, consistent with the design of this
study in which active learning alone is toggled on and off. As a
specific example, consider Peer Instruction (2) with well-honed
clicker questions that target common student difficulties and
misconceptions. Instead of allowing students to answer and discuss
these questions, the lecturer would describe and explain each of
the answers. From the research reviewed in ref. 4, it is clear that
students would learn less in the passive lecture environment. For
instance, students deprived of active engagement with clicker
questions could not discover their own misconceptions or con-
struct their own correct explanations. Yet based on the cognitive
principles discussed above, the fluent lecturer could address stu-
dent difficulties and misconceptions in such a way as to make
students feel like they learned a lot from the lecture. Indeed, given
our observation that highly proficient students are better able to
judge their own learning, it is reasonable to expect that students
who are less well prepared than those in our study would show
even larger discrepancies between actual learning and FOL.
In conclusion, we find that students' perception of their own
learning can be anticorrelated with their actual learning under
well-controlled implementations of active learning versus passive
lectures. These results point to the importance of preparing and
coaching students early in the semester for active instruction and
suggest that instructors should persuade students that they are
benefitting from active instruction. Without this preparation,
students can be misled by the inherent disfluency associated with
the sustained cognitive effort required for active learning, which
in turn can have a negative impact on their actual learning. This
is especially important for students who are new to fully student-
centered active learning (12), as were the students in this study.
These results also suggest that student evaluations of teaching
should be used with caution, as they rely on students' perceptions of
learning and could inadvertently favor inferior passive teaching
methods over research-based active pedagogical approaches (44, 45):
a superstar lecturer could create such a positive FOL that students
would choose those lectures over active learning. In addition, given
the powerful general influence of fluency on metacognitive
judgments (31), we expect that these results are likely to
generalize to a variety of college-level subjects.
ACKNOWLEDGMENTS. We acknowledge significant contributions from Eric
Mazur and David J. Morin, along with valuable discussions with Gary King,
Erin Driver-Linn, Andrew Ho, Edward J. Kim, Jon R. Star, Federico Capasso,
Dustin Tingley, Philip M. Sadler, and Melissa Franklin.
1. R. R. Hake, Interactive-engagement vs. traditional methods: A six-thousand-student
survey of mechanics test data for introductory physics courses. Am. J. Phys. 66, 64–74 (1998).
2. C. H. Crouch, E. Mazur, Peer instruction: Ten years of experience and results. Am. J.
Phys. 69, 970–977 (2001).
3. L. Deslauriers, E. Schelew, C. Wieman, Improved learning in a large-enrollment physics
class. Science 332, 862–864 (2011).
4. S. Freeman et al., Active learning increases student performance in science, engineering,
and mathematics. Proc. Natl. Acad. Sci. U.S.A. 111, 8410–8415 (2014).
5. J. M. Fraser et al., Teaching and physics education research: Bridging the gap. Rep.
Prog. Phys. 77, 032401 (2014).
6. L. Deslauriers, C. Wieman, Learning and retention of quantum concepts with different
teaching methods. Phys. Rev. ST Phys. Educ. 7, 010101 (2011).
7. W. K. Adams et al., New instrument for measuring student beliefs about physics and
learning physics: The Colorado Learning Attitudes about Science Survey. Phys. Rev. ST
Phys. Educ. 2, 010101 (2006).
8. E. Brewe, L. H. Kramer, G. O'Brien, Modeling instruction: Positive attitudinal shifts in
introductory physics measured with CLASS. Phys. Rev. ST Phys. Educ. 5, 013102 (2009).
9. J. Watkins, E. Mazur, Retaining students in science, technology, engineering, and
mathematics (STEM) majors. J. Coll. Sci. Teach. 42, 36–41 (2013).
10. C. Henderson, M. H. Dancy, Barriers to the use of research-based instructional strat-
egies: The influence of both individual and situational characteristics. Phys. Rev. ST
Phys. Educ. 3, 020102 (2007).
11. J. Handelsman et al., Education. Scientific teaching. Science 304, 521–522 (2004).
12. M. Stains et al., Anatomy of STEM teaching in North American universities. Science
359, 1468–1470 (2018).
13. C. Henderson, T. Stelzer, L. Hsu, D. Meredith, Maximizing the benefits of physics edu-
cation research: Building productive relationships and promoting institutional change.
American Physical Society Forum on Education Newsletter, Fall 2005, pp. 11–14. https:// Accessed 20 June 2019.
14. M. Dancy, C. Henderson, Framework for articulating instructional practices and con-
ceptions. Phys. Rev. ST Phys. Educ. 3, 010103 (2007).
15. R. M. Felder, R. Brent, Navigating the bumpy road to student-centered instruction.
Coll. Teach. 44, 43–47 (1996).
16. D. U. Silverthorn, P. M. Thorn, M. D. Svinicki, It's difficult to change the way we teach:
Lessons from the Integrative Themes in Physiology curriculum module project. Adv.
Physiol. Educ. 30, 204–214 (2006).
17. A. P. Fagen, C. H. Crouch, E. Mazur, Peer instruction: Results from a range of classrooms.
Phys. Teach. 40, 206–209 (2002).
18. C. Turpen, M. Dancy, C. Henderson, Faculty perspectives on using peer instruction: A
national study. AIP Conf. Proc. 1289, 325–328 (2010).
19. J. W. Belcher, Improving Student Understanding with TEAL [TEAL = Technology
Enhanced Active Learning], The MIT Faculty Newsletter, vol. XVI, no. 2, 2003. http:// Accessed 20 June 2019.
20. M. H. Dancy, C. Henderson, Beyond the individual instructor: Systemic constraints in the
implementation of research-informed practices. AIP Conf. Proc. 790, 113–116 (2005).
21. R. M. Felder, Random thoughts: Sermons for grumpy campers. Chem. Eng. Educ. 41,
183–184 (2007).
22. R. M. Felder, Random thoughts: The link between teaching and research. 2. How to
strengthen each without weakening the other. Chem. Eng. Educ. 44, 213–214 (2010).
23. C. Henderson, M. Dancy, M. Niewiadomska-Bugaj, Use of research-based instructional
strategies in introductory physics: Where do faculty leave the innovation-decision
process? Phys. Rev. ST Phys. Educ. 8, 020104 (2012).
24. M. Vuorela, L. Nummenmaa, How undergraduate students meet a new learning
environment? Comput. Human Behav. 20, 763–777 (2004).
25. K. Nguyen et al., Students' expectations, types of instruction, and instructor strategies
predicting student response to active learning. Int. J. Eng. Educ. 33, 2–18 (2017).
26. S. Tharayil et al., Strategies to mitigate student resistance to active learning. Int. J.
STEM Educ. 5, 7 (2018).
27. J. Kruger, D. Dunning, Unskilled and unaware of it: How difficulties in recognizing
one's own incompetence lead to inflated self-assessments. J. Pers. Soc. Psychol. 77,
1121–1134 (1999).
28. J. D. Bransford, A. L. Brown, R. R. Cocking, Eds., How People Learn: Brain, Mind,
Experience, and School (National Academy Press, 1999).
29. S. R. Porter, Self-reported learning gains: A theory and test of college student survey
response. Res. High. Educ. 54, 201–226 (2013).
30. S. K. Carpenter, M. M. Wilford, N. Kornell, K. M. Mullaney, Appearances can be de-
ceiving: Instructor fluency increases perceptions of learning without increasing actual
learning. Psychon. Bull. Rev. 20, 1350–1356 (2013).
31. D. M. Oppenheimer, The secret life of fluency. Trends Cogn. Sci. 12, 237–241 (2008).
32. D. Hestenes, M. Wells, G. Swackhamer, Force concept inventory. Phys. Teach. 30,
141–158 (1992).
33. K. K. Perkins, M. Gratny, Who becomes a physics major? A long-term longitudinal
study examining the roles of pre-college beliefs about physics and learning physics,
interest, and academic achievement. AIP Conf. Proc. 1289, 253–256 (2010).
34. E. Gire, B. Jones, E. Price, Characterizing the epistemological development of physics
majors. Phys. Rev. ST Phys. Educ. 5, 010103 (2009).
35. S. P. Bates, R. K. Galloway, C. Loptson, K. A. Slaughter, How attitudes and beliefs about
physics change from high school to faculty. Phys. Rev. ST Phys. Educ. 7, 020114 (2011).
36. D. J. Jones, K. W. Madison, C. E. Wieman, Transforming a fourth-year modern optics
course using a deliberate practice framework. Phys. Rev. ST Phys. Educ. 11, 020108 (2015).
37. K. A. Ericsson, R. Th. Krampe, C. Tesch-Römer, The role of deliberate practice in the
acquisition of expert performance. Psychol. Rev. 100, 363–406 (1993).
38. H. White, A heteroskedasticity-consistent covariance matrix estimator and a direct
test for heteroskedasticity. Econometrica 48, 817–838 (1980).
39. A. Abadie, S. Athey, G. W. Imbens, J. Wooldridge, When should you adjust standard
errors for clustering? (NBER Working Paper 24003, National Bureau of Economic
Research, Cambridge, MA) (November 2017).
40. C. Diemand-Yauman, D. M. Oppenheimer, B. E. Vaughan, Fortune favors the bold (and
the italicized): Effects of disfluency on educational outcomes. Cognition 118, 111–115 (2011).
41. M. R. Lepper, M. Woolverton, "The wisdom of practice: Lessons learned from the
study of highly effective tutors" in Improving Academic Achievement, J. Aronson, Ed.
(Academic Press, 2002), pp. 135–158.
42. W. B. Wood, K. D. Tanner, The role of the lecturer as tutor: Doing what effective
tutors do in a large lecture class. CBE Life Sci. Educ. 11, 3–9 (2012).
43. D. R. Stead, A review of the one-minute paper. Active Learn. High. Educ. 6, 118–131
(2005).
44. B. Uttl, C. A. White, D. W. Gonzalez, Meta-analysis of faculty's teaching effectiveness:
Student evaluation of teaching ratings and student learning are not related. Stud.
Educ. Eval. 57, 22–42 (2017).
45. N. Kornell, H. Hausman, Do the best teachers get the best ratings? Front. Psychol. 7,
570 (2016).