The Reading League Journal
SEPTEMBER/OCTOBER 2024

Guidelines for Determining if a Reading Intervention Is Consistent With the Science of Reading

by Matthew K. Burns and Valentina A. Contesse

How do you know if an intervention is consistent with research? Many of the posts on social media pages dedicated to the science of reading ask whether a particular curriculum, instructional approach, or intervention is aligned with the science. Given that the science of reading is a vast, interdisciplinary body of research about reading, the consequences of using interventions that are not aligned are potentially tragic. For example, reading interventions for children with learning difficulties in the 1960s, 1970s, and early 1980s focused on various approaches to perceptual training, improving sequential memory, and matching the modality of instruction with a supposed preferred learning style, all of which led to small effects and poor student outcomes (Kavale, 2001).

The Reading League has provided a set of guidelines to examine the extent to which reading curricula are inconsistent with research or may include practices that can result in zero or even a negative effect (The Reading League, 2023), but those guidelines focus on curricula rather than specific interventions. An intervention is "a planned modification of the environment made for the purpose of changing behavior in a prespecified way" (Tilly, 2008, p. 21). An intervention is fundamentally different from instruction because it is a planned modification to the environment and needs criteria to support practical decision making. An intervention can be considered evidence based if multiple rigorous studies have shown a reliably positive effect for a given population of students. Reading teachers and other groups can check whether there is an evidence base for any practice by searching Google Scholar (www.scholar.google.com) to determine if there are published studies about the effectiveness of the practice with at least moderate effect sizes (e.g., d or g = 0.30 to 0.50). There may be interventions for which studies have yet to be conducted, but they still should be consistent with previous findings from the science of reading.

Summaries of Research

The first step to determine if a specific intervention is aligned with the science of reading is to examine the relevant research literature. There are multiple sources of summaries of research regarding specific interventions, including practice guides from the Institute of Education Sciences of the United States Department of Education (https://ies.ed.gov/ncee/wwc/practiceguides). Practitioners can also go to the National Center on Intensive Intervention (NCII; www.intensiveintervention.org) to find summaries that evaluate both the quality of the research and the size of the effects of 36 reading interventions.

Individual meta-analyses are comprehensive reviews of the literature that provide an estimate of the effect to support general conclusions about the research literature, and they can be especially useful in identifying effective interventions. Burns et al. (2014) found 22 meta-analyses about reading interventions, which resulted in a list of interventions that led to moderate to large effects (e.g., repeated reading, reciprocal teaching, and concept maps) or small effects (e.g., Whole Language approaches, working memory training, and computer-assisted instruction).

Components of an Effective Reading Intervention

Burns et al. (2014) also identified five essential attributes that all of the interventions with large effects had in common and that seemed to differentiate them from interventions with small effects. Those attributes (a) included explicit instruction in the skill, (b) were targeted to students' needs, (c) provided high opportunities to respond (OTR), (d) offered an appropriate level of challenge, and (e) delivered immediate corrective feedback. Next, we review the research for the five research-supported components of effective reading interventions.
Explicit Instruction
Explicit instruction is a systematic method of
teaching through small obtainable steps that
include frequent checks for understanding
and active and successful participation by all
students (Archer & Hughes, 2011). Rosenshine
(1986) outlined six steps for explicit instruction:
(a) review, (b) model, (c) provide guided prac-
tice, (d) give corrective feedback, (e) provide
independent practice, and (f) monitor student
progress. Effective interventions must present
information in a meaningful sequence of skills
and must include some aspect of modeling (I
do), guided practice (we do), and independent
practice (you do).
Explicit instruction is perhaps the most re-
searched approach to teaching reading that
exists. Stockard et al. (2018) reviewed 328 stud-
ies of explicit instruction and found an effect
size of 0.51 (226 studies) for reading and 0.66 (52
studies) for spelling. Training teachers how to explicitly teach reading comprehension strategies led to more use of modeling and of guided and independent practice, both of which had significantly positive effects on reading comprehension among low-achieving students (g = 0.85 and g = 1.43, respectively; Okkinga et al., 2018). Modeling is a crucial aspect of explicit instruction.
Targeted to Student Needs
Selecting an evidence-based intervention that
targets a specific skill (e.g., phonemic aware-
ness, phonics, fluency, or comprehension) is
the starting point for building an effective in-
tervention system (NCII, 2018). The first step in
delivering a reading intervention should be to
identify gaps in a student’s core reading skills
and to focus intervention efforts on those skills.
Burns and colleagues (2023) found positive ef-
fects for analyzing data from the areas outlined
by the National Reading Panel (NRP, 2000) to
identify the most fundamental skill that a stu-
dent has yet to master and to target interven-
tion efforts on that skill. For example, students
who lack proficiency in both comprehension
and fluency, but possess adequate decoding
skills, should receive fluency interventions. In
contrast, those with low skills in comprehen-
sion, fluency, and decoding, but have sufficient
phonemic awareness, should receive an inter-
vention that focuses on decoding.
Students who receive reading interven-
tions targeted in this manner demonstrate
significantly more progress when compared to
those receiving comprehensive multicompo-
nent interventions or to students without ini-
tial reading difficulties (Burns et al., 2023). Hall
and Burns (2018) examined 26 studies on small-
group reading interventions for elementary
and middle school students and found a mod-
erate overall effect size of g = 0.54, but interven-
tions that targeted specific skills (g = 0.65) were
more effective than multicomponent interven-
tions addressing multiple skills (g = 0.35).
Targeting an intervention does not imply
exclusive concentration on one area. Rather, it
signifies that the primary emphasis is on the
specified target, while other facets of reading
may also be incorporated. For instance, a read-
ing fluency intervention may include asking the
student comprehension questions, a decoding
intervention should include reading connected
text and could begin with a phonemic aware-
ness warm-up, and phonemic awareness in-
terventions often incorporate letter manipula-
tion techniques to enhance effectiveness (NRP,
2000).
High Opportunities to Respond
Providing practice while learning a new skill
is one of the most important components of
effective instruction and intervention, but not
all practice is good practice. In order to lead to
retention and generalization, practice must in-
volve repetition, generation, and interleaving of
review items.
Repetition
The first criterion for effective practice is repeti-
tion. Simply reading a word or grapheme-pho-
neme correspondence only once does little for
building retention. We have known conclusively for 40 years that increasing the number
of presentations while rehearsing new items,
called opportunities to respond (OTR), leads
to improved academic engagement, learning,
and retention of the newly learned items. A comparison of various interventions found that increased OTR was the causal mechanism, with a large correlation of r = .82 between the number of repetitions and retention (Szadokierski & Burns, 2008).
Generation
Students remember something better if they
have to self-generate a response rather than
simply read an answer. For example, students
will remember the sound for /ch/ better if they
generate three words from memory that start
with the sound than if they say the words repre-
sented by three pictures of objects that all start
with the /ch/ sound. Retention is also better if
students generate a verbal or written response
than if they just think the answer to themselves
(McCurdy et al., 2020). Generation (d = 1.31) was at least as important as the number of repetitions (d = 0.66) while practicing (Zaslofsky et al., 2016).
Interleaving
Interleaving practice is practicing several relat-
ed skills together (A,B,C,A,B,C,A,B,C as opposed
to A,A,A,B,B,B,C,C,C) after initially learning each.
Interleaving has led to positive effects on reten-
tion (g = 0.65) and generalization (g = 0.66), and
the more similar the items were, the better they
were retained (Firth et al., 2021). The effects are stronger for smaller discrete stimuli, such as letters, than for longer tasks, such as words or sentences (Brunmair & Richter, 2019).
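The contrast between blocked and interleaved ordering described above can be sketched in a few lines. This is an illustrative sketch only; the three hypothetical grapheme-phoneme targets are our own examples, not items from the cited studies.

```python
def blocked(skills, reps=3):
    """Blocked practice: all repetitions of one skill before the next
    (A,A,A,B,B,B,C,C,C)."""
    return [s for s in skills for _ in range(reps)]

def interleaved(skills, reps=3):
    """Interleaved practice: cycle through all the skills on each pass
    (A,B,C,A,B,C,A,B,C)."""
    return [s for _ in range(reps) for s in skills]

skills = ["sh", "ch", "th"]  # hypothetical grapheme-phoneme targets
print(blocked(skills))      # ['sh', 'sh', 'sh', 'ch', 'ch', 'ch', 'th', 'th', 'th']
print(interleaved(skills))  # ['sh', 'ch', 'th', 'sh', 'ch', 'th', 'sh', 'ch', 'th']
```

Both orderings give each skill the same number of repetitions; only the sequencing differs, which is the variable the interleaving research manipulates.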
Appropriate Level of Challenge
To be successful, the intervention should pro-
vide an appropriate level of challenge. Unfor-
tunately, the appropriate level of challenge is
often determined by using reading books with
a gradient of difficulty, organizing the books
according to difficulty level, and placing stu-
dents into a specific level of the book series
based on data from a standardized measure
of reading level. However, an appropriate level
of challenge is better defined in relation to the
percentage of words read correctly in any given
passage or book and the amount of informa-
tion taught. Next, we discuss both aspects of
an appropriate level of challenge.
Percentage of Known
Teaching new skills so that they are sequenced
to build on each other increases student learn-
ing. Students who read 93% to 97% of the words
correctly had better reading outcomes than
those who read at higher or lower percentages
(Burns, 2024). Reading 93% to 97% of the words
correctly is a research-based definition of an
appropriate level of challenge during interven-
tion. In fact, the correlation between student
growth during reading fluency intervention
and the frequency with which each student
reads 93% to 97% of the words correctly was
r = .80 (Burns, 2007).
The guidelines for an appropriate level of
challenge for other tasks (e.g., grapheme-pho-
neme correspondences, learning irregular
words in isolation, spelling) are less clear than
for reading connected text. Meta-analytic re-
search found large effects for interventions
focused on isolated skills (e.g., spelling, letter
sounds, math facts) if they contained 50% to
60% known items (d = 1.00), 70% to 85% known
items (d = 1.17), or 90% known items (d = 1.22),
which compared to a small effect if the inter-
vention contained less than 50% known items
(d = 0.39; Burns, 2004). Although the specific
criteria may not be clear, including a high per-
centage of known items seems important.
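The accuracy criteria above lend themselves to a simple check. The sketch below is our own illustration, not a published tool; the band boundaries come from the studies cited in this section (93% to 97% words correct in connected text).

```python
def challenge_level(words_read, words_correct):
    """Classify the difficulty of a connected-text passage by the
    percentage of words read correctly. Research summarized above
    suggests 93%-97% correct is an appropriate level of challenge."""
    pct = 100 * words_correct / words_read
    if pct < 93:
        return "below the 93%-97% band (likely too difficult)"
    if pct <= 97:
        return "within the 93%-97% band (an appropriate challenge)"
    return "above the 93%-97% band (likely too easy)"

print(challenge_level(100, 95))  # within the 93%-97% band (an appropriate challenge)
print(challenge_level(100, 90))  # below the 93%-97% band (likely too difficult)
```

A teacher listening to a student read a 100-word passage can tally errors and apply the same band by hand; the function only makes the cutoffs explicit.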
Amount of Information
The amount of new learning that students can
process, retain, and use is limited and is referred
to as an acquisition rate (AR). Ceraso (1967) famously found that teaching students a number of words that exceeded their individual limit resulted in significantly fewer words being retained and also reduced retention of previously learned words. Ceraso called this phenomenon retroactive cognitive interference, in which pieces of information competed with each other as they overlapped in memory systems. Retroactive interference has been supported by subsequent research (Darby & Sloutsky, 2015).
Interventions should only teach the num-
ber of new items that match the student’s AR,
which can be found by noting when the stu-
dent starts making frequent mistakes while
learning any new item. Teaching a number of
items that exceeded a student’s AR resulted in
learning fewer words (Haegele & Burns, 2015)
and increased time off task during reading in-
struction (Burns et al., 2021). Thus, interventions
should purposefully control the size of the in-
structional set, which is often inversely related
to learning (i.e., smaller sets were associated
with more learning; Poncy et al., 2015).
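The guidance to note "when the student starts making frequent mistakes" can be operationalized in different ways. The sketch below is our own hypothetical operationalization, assuming an error count over the most recent responses; the window size and error threshold are illustrative values, not figures from the cited studies.

```python
def items_within_ar(trial_results, window=5, max_errors=2):
    """Estimate how many new items fit a student's acquisition rate (AR).

    trial_results: list of (item, correct) tuples in practice order.
    Returns the number of distinct new items introduced before mistakes
    became frequent (more than max_errors in the last window responses).
    """
    introduced = []
    recent = []
    for item, correct in trial_results:
        if item not in introduced:
            introduced.append(item)
        recent.append(correct)
        if len(recent) > window:
            recent.pop(0)
        if recent.count(False) > max_errors:
            # Frequent mistakes: the most recent item likely exceeded the AR.
            return max(len(introduced) - 1, 0)
    return len(introduced)
```

For example, if a student practices items a, b, and c accurately but errors pile up once a fourth item is added, the estimate would be an AR of three items for that session.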
Feedback
Learning is enhanced during intervention when
students receive feedback based on their responses. Feedback within an instructional
process can be broadly defined as sharing
and clarifying criteria for student success and
providing precise and honest information
about progress toward the criteria (Panadero
& Lipnevich, 2022). Feedback should focus on
the correct response rather than the incorrect
student response and should include an op-
portunity for the student to present the correct
response (Barbetta et al., 1993).
The meta-analysis conducted by Wisniewski and colleagues (2020) found an effect size of d = 0.46 (238 study effects) for corrective feedback. The effectiveness of feedback
is influenced by the frequency, immediacy, and
content or accuracy. For example, during initial
learning, a student’s incorrect response should
be followed by immediate corrective feedback,
but feedback can be delayed when engaged in
independent practice after the initial acquisi-
tion of the skill.
Using the Criteria
The five components of an effective interven-
tion outlined by Burns et al. (2014) can provide
a framework for examining how consistent a
reading intervention is with previous research.
As shown in Table 1, each criterion can be rated
on a scale of 1 to 4, with 4 being highest and
suggesting consistency with research on that
component. Next, we will demonstrate how to
use the criteria to evaluate two interventions.
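The rating scheme can also be expressed as a small script. The decision rule below (every component rated 3 or 4 suggests consistency with research) is the one the article applies when evaluating incremental rehearsal; the function itself is our sketch, and the component names come from Burns et al. (2014).

```python
COMPONENTS = [
    "Explicit Instruction",
    "Correctly Targeted",
    "High Opportunities to Respond",
    "Appropriate Level of Challenge",
    "Feedback",
]

def likely_consistent(ratings):
    """ratings: dict mapping each of the five components to a 1-4 rating.
    Returns True when every component is rated 3 or 4."""
    assert set(ratings) == set(COMPONENTS), "rate all five components"
    assert all(1 <= r <= 4 for r in ratings.values())
    return all(r >= 3 for r in ratings.values())

# Example ratings for incremental rehearsal, taken from Table 2:
ir = dict(zip(COMPONENTS, [3, 3, 4, 4, 4]))
print(likely_consistent(ir))  # True
```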
Incremental Rehearsal
Incremental rehearsal (IR) is an approach used
to practice newly learned isolated stimuli to
improve retention of them. It is most effec-
tive for students who have difficulty remem-
bering stimuli such as grapheme-phoneme
correspondences, high-frequency words, or
vocabulary words, and it usually begins by as-
sessing items that the student is responding
to incorrectly and those that are mastered.
The unknown items are then written on index cards or another medium along with seven to nine known items and practiced incrementally (Tucker & Burns, 2016).
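One common description of the incremental sequence (e.g., Tucker & Burns, 2016) rehearses the unknown item with a growing set of known items. The sketch below is our illustration of that ordering, reduced to three known items for readability; it is not a verbatim rendering of any published protocol, and the example stimuli are hypothetical.

```python
def incremental_sequence(unknown, knowns):
    """Rehearse one unknown item with an incrementally growing set of
    known items: U, K1, U, K1, K2, U, K1, K2, K3, ..."""
    seq = []
    for i in range(1, len(knowns) + 1):
        seq.append(unknown)
        seq.extend(knowns[:i])
    return seq

print(incremental_sequence("ch", ["a", "m", "s"]))
# ['ch', 'a', 'ch', 'a', 'm', 'ch', 'a', 'm', 's']
```

Note how the ordering bakes in several of the components discussed above: the unknown item gets considerable repetition, review items are interleaved, and known items keep the overall percentage of correct responses high.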
A review of Google Scholar found 19 pub-
lished studies of IR effects since 2013. IR was also
favorably rated by the NCII (2021; i.e., three of five
studies were rated as “convincing evidence” for
study design, two were rated as “partially con-
vincing evidence,” and all had either large ef-
fects or “convincing evidence” for an effect from
visual analysis of single-case designs).
IR includes all five components of an ef-
fective intervention. As shown in Table 2, the
intervention addresses multiple criteria for the
Explicit Instruction and Correctly Targeted com-
ponents, but it does not inherently involve prog-
ress monitoring or reinforce other areas of read-
ing and would need to be contextualized with
applied practice of the skill. The remaining three
components (High OTR, Appropriate Level of
Challenge, and Feedback) are all inherent to IR
and would all receive a rating of 4, including for
Appropriate Level of Challenge because set size
considerations are included in the procedures
(Tucker & Burns, 2016). Because all components received a rating of 3 or 4, the intervention is likely consistent with previous research.
Cover, Copy, and Compare
Much like IR, Cover, Copy, and Compare (CCC) is
a well-researched intervention that helps stu-
dents learn new stimuli such as math facts or
spelling words. CCC for spelling involves pro-
viding each student with a sheet of paper with
at least three columns and a series of spelling
words in the first column. Each student then
studies the first word in the column, folds the
paper so that the first column is covered but
the second column is exposed, spells the word
from memory in the second column, and then
unfolds the paper to see if their spelling is cor-
rect. If it is not correct, the student corrects the
word in the third column. If it is correct, then the
student proceeds to the next word and starts
the process over again (Cieslar et al., 2008).
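The logic of a single trial in the paper procedure just described can be captured in a short routine. This is only a sketch of the spelling variant of CCC; the words are hypothetical examples.

```python
def ccc_trial(model_word, student_spelling):
    """One Cover, Copy, and Compare trial for spelling: the student studies
    the model, covers it, writes the word from memory, then compares.
    Returns (correct, correction), where correction is what the student
    writes in the third column after an error (None if correct)."""
    correct = student_spelling.strip().lower() == model_word.strip().lower()
    correction = None if correct else model_word
    return correct, correction

print(ccc_trial("because", "because"))  # (True, None)
print(ccc_trial("because", "becuase"))  # (False, 'because')
```

Seen this way, the procedure's limits discussed below are visible in the code: each word gets only one retrieval attempt, and nothing in the trial models pronunciation.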
The NCII (2021) reviewed six studies of CCC
(five math and one spelling), of which only one
was rated as “convincing evidence” for study
design, two had “partially convincing evidence”
for the study design, and three had “uncon-
vincing evidence” for the study design. The
NCII also rated two single-case design studies
as providing “partially convincing evidence” of
the effectiveness of the intervention, three had
“unconvincing evidence” for the effectiveness,
and one study reported an effect size of -0.27
on targeted research measures.
Using the components of an effective intervention shown in Table 2, CCC does not rate as favorably as IR. The intervention provides little modeling because the students are shown a model of correct spelling, but the word is not read aloud to the student, and the student could practice the word while mispronouncing it. There is also no review built into the procedure, and CCC would need to be contextualized with the applied practice of the skill to address other areas in reading. CCC would likely receive a 1 for High OTR because it only includes one repetition, and attempts to increase the number of repetitions to three did not enhance the effects (Erion et al., 2009), suggesting that even three repetitions were insufficient.

The intervention does result in a permanent written product that teachers can use to evaluate student progress. The intervention would receive a rating of 2 for Explicit Instruction, Correctly Targeted, and Feedback, but only a 1 for High OTR and Appropriate Level of Challenge. Thus, it seems that CCC does not address two components and likely does not adequately address the other three.

Table 1
Guidelines for Evaluating if a Reading Intervention Aligns With Research

Explicit Instruction
Criteria: (1) uses modeling, guided practice, and independent practice; (2) consistently and regularly monitors student progress; (3) includes frequent reviews.
Rating 1: Students engage in the skill without being provided a model; no approach to monitor progress; no opportunities to review.
Rating 2: One of the criteria is present.
Rating 3: Two of the criteria are present.
Rating 4: All three of the criteria are present.

Correctly Targeted
Criteria: (1) focuses on one area of reading; (2) focus area is selected based on student need; (3) reinforces other relevant areas of reading.
Rating 1: Attempts to cover all areas of reading OR no clear intervention focus.
Rating 2: Intervention is focused but not based on student need, and does not reinforce other relevant areas of reading.
Rating 3: Two of the criteria are present.
Rating 4: All three of the criteria are present.

High Opportunities to Respond
Criteria: (1) includes sufficient practice (repetition) to learn the new skill; (2) students have to generate the response; (3) practice intersperses review items while learning new ones.
Rating 1: Little to no practice provided.
Rating 2: Sufficient practice provided, but no generation of a response and no interspersing of review items.
Rating 3: Two of the criteria are present.
Rating 4: All three of the criteria are present.

Appropriate Level of Challenge
Criteria: (1) new skills are sequenced so that they build on each other; (2) students read connected text that contains 93% to 97% known words OR can complete isolated skill tasks with at least 85% accuracy; (3) intervention considers set size based on student skill and age.
Rating 1: New skills are not well sequenced; students read connected text that contains less than 93% known words OR complete most tasks with less than 85% accuracy; uses too large of a set size.
Rating 2: One of the criteria is present.
Rating 3: Two of the criteria are present.
Rating 4: All three of the criteria are present.

Feedback
Criteria: (1) interventionist immediately states that each response is correct or incorrect; (2) interventionist asks the student to give the correct response; (3) interventionist's response includes specific information.
Rating 1: Not apparent that feedback is provided.
Rating 2: Feedback may be provided, but only one of the criteria is present.
Rating 3: Two of the criteria are present.
Rating 4: All three of the criteria are present.

Table 2
Examples of Ratings for Two Common Reading Interventions

Incremental Rehearsal
Explicit Instruction (Rating = 3): New stimulus is modeled; provides guided practice; frequent review of previously learned items; no inherent progress monitoring.
Correctly Targeted (Rating = 3): Focuses on one area of reading; assesses known and unknown items; does not reinforce other areas of reading.
High Opportunities to Respond (Rating = 4): Considerable repetition; students generate the response; includes a high percentage of known items interleaved.
Appropriate Level of Challenge (Rating = 4): High percentage of known items; carefully selected items for teaching and review (known items); set size is addressed.
Feedback (Rating = 4): Errors are corrected immediately; correct response is modeled; students required to state the correct response after errors.

Cover, Copy, and Compare
Explicit Instruction (Rating = 2): Minimal modeling and no modeling of pronunciation; no review built into the procedure; resulting worksheets can be used to monitor progress.
Correctly Targeted (Rating = 2): Focuses on one area of reading (spelling); not matched to student needs; does not reinforce other areas of reading.
High Opportunities to Respond (Rating = 1): Provides only one repetition; students generate a response; no interspersal of review and new items.
Appropriate Level of Challenge (Rating = 1): Sequence depends on the sequence of the spelling curriculum; no control over difficulty (could be 100% new).
Feedback (Rating = 2): Students immediately correct errors; correct response is modeled; dependent on the student correctly understanding the modeled correct response.
Conclusion
It is important for new interventions to align
with previous research, and the criteria pre-
sented here could help practitioners evaluate
how well individual reading interventions align
with the science. The two examples shown
here, IR and CCC, are well-researched, and
practitioners could simply rely on the previous
research. However, the guidelines should also
be applied and studied when evaluating inter-
ventions that have not been frequently stud-
ied. Additional research is needed to examine
the reliability of decisions made with these
guidelines across multiple raters and the valid-
ity of the resulting decisions. Moreover, the ex-
amples presented here used letter sounds and
spelling, which are only two potential targets
for reading interventions.
Practitioners should select the intervention
with the strongest research base, but in the ab-
sence of such data, selecting interventions that
are most aligned with previous research could
increase student outcomes. Given that school
personnel have previously attempted reading
interventions that are based on fads rather
than science (Kavale, 2001), a method to eval-
uate consistency with research may be a key
component in selecting interventions.
References

Archer, A., & Hughes, C. (2011). Explicit instruction: Effective and efficient teaching. Guilford.

Barbetta, P. M., Heron, T. E., & Heward, W. L. (1993). Effects of active student response during error correction on the acquisition, maintenance, and generalization of sight words by students with developmental disabilities. Journal of Applied Behavior Analysis, 26(1), 111-119.

Brunmair, M., & Richter, T. (2019). Similarity matters: A meta-analysis of interleaved learning and its moderators. Psychological Bulletin, 145(11), 1029-1052.

Burns, M. K. (2004). Empirical analysis of drill ratio research: Refining the instructional level for drill tasks. Remedial and Special Education, 25, 167-175.

Burns, M. K. (2007). Reading at the instructional level with children identified as learning disabled: Potential implications for response-to-intervention. School Psychology Quarterly, 22, 297-313.

Burns, M. K. (2024). Assessing an instructional level during reading fluency interventions: A meta-analysis of the effects on reading. Assessment for Effective Intervention.

Burns, M. K., Aguilar, L. N., Warmbold-Brann, K., Preast, J. L., & Taylor, C. N. (2021). Effect of acquisition rates on off-task behavior of kindergarten students while learning sight words. Psychology in the Schools, 58(1), 5-17.

Burns, M., Duesenberg-Marshall, M., Sussman-Dawson, K., Romero, M., Wilson, D., & Felten, M. (2023). Effects of targeting reading interventions: Testing a skill-by-treatment interaction in an applied setting. Preventing School Failure: Alternative Education for Children and Youth, 68(4), 1-9.

Burns, M. K., VanDerHeyden, A. M., & Zaslofsky, A. F. (2014). Best practices in delivering intensive academic interventions with a skill-by-treatment interaction. In P. L. Harrison & A. Thomas (Eds.), Best practices in school psychology: Student-level services (6th ed., pp. 129-141). National Association of School Psychologists.

Ceraso, J. (1967). The interference theory of forgetting. Scientific American, 217(4), 117-124.

Cieslar, W., McLaughlin, T. F., & Derby, K. M. (2008). Effects of the copy, cover, and compare procedure on the math and spelling performance of a high school student with behavioral disorder: A case report. Preventing School Failure, 52(4), 45-51.

Darby, K. P., & Sloutsky, V. M. (2015). The cost of learning: Interference effects in memory development. Journal of Experimental Psychology: General, 144(2), 410-431.

Erion, J., Davenport, C., Rodax, N., Scholl, B., & Hardy, J. (2009). Cover-copy-compare and spelling: One versus three repetitions. Journal of Behavioral Education, 18, 319-330. https://doi.org/10.1007/s10864-009-9095-4

Firth, J., Rivers, I., & Boyle, J. (2021). A systematic review of interleaving as a concept learning strategy. Review of Education, 9(2), 642-684.

Haegele, K., & Burns, M. K. (2015). Effect of modifying intervention set size with acquisition rate data among students identified with a learning disability. Journal of Behavioral Education, 24, 33-50.

Hall, M. S., & Burns, M. K. (2018). Meta-analysis of targeted small-group reading interventions. Journal of School Psychology, 66, 54-66.

Kavale, K. A. (2001). Decision making in special education: The function of meta-analysis. Exceptionality, 9, 245-268.

McCurdy, M. P., Viechtbauer, W., Sklenar, A. M., Frankenstein, A. N., & Leshikar, E. D. (2020). Theories of the generation effect and the impact of generation constraint: A meta-analytic review. Psychonomic Bulletin & Review, 27, 1139-1165.

National Center on Intensive Intervention. (2018). Breaking down the DBI process: Questions & considerations. Office of Special Education Programs, U.S. Department of Education.

National Center on Intensive Intervention. (2018). What is data-based individualization. https://intensiveintervention.org/data-based-individualization

National Center on Intensive Intervention. (2021). Academic intervention tools chart. https://charts.intensiveintervention.org/aintervention

National Reading Panel. (2000). Report of the National Reading Panel: Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction: Reports of the subgroups. National Institute of Child Health and Human Development, National Institutes of Health.

Okkinga, M., van Steensel, R., van Gelderen, A. J., & Sleegers, P. J. (2018). Effects of reciprocal teaching on reading comprehension of low-achieving adolescents: The importance of specific teacher skills. Journal of Research in Reading, 41(1), 20-41.

Panadero, E., & Lipnevich, A. A. (2022). A review of feedback models and typologies: Towards an integrative model of feedback elements. Educational Research Review, 35, 100416.

Poncy, B. C., Solomon, G. E., Duhon, G. J., Skinner, C. H., Moore, K., & Simons, S. (2015). An analysis of learning rate and curricular scope: Caution when choosing academic interventions based on aggregated outcomes. School Psychology Review, 44(3), 289-305.

Rosenshine, B. V. (1986). Synthesis of research on explicit teaching. Educational Leadership, 43, 60-69.

Stockard, J., Wood, T. W., Coughlin, C., & Rasplica Khoury, C. (2018). The effectiveness of direct instruction curricula: A meta-analysis of a half century of research. Review of Educational Research, 88(4), 479-507.

Szadokierski, I., & Burns, M. K. (2008). Analogue evaluation of the effects of opportunities to respond and ratios of known items within drill rehearsal of Esperanto words. Journal of School Psychology, 46(5), 593-609.

The Reading League. (2023). Curriculum evaluation guidelines. https://www.thereadingleague.org/curriculum-evaluation-guidelines/

Tilly, W. D., III. (2008). The evolution of school psychology to science-based practice: Problem solving and the three-tiered model. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (5th ed., pp. 17-36). National Association of School Psychologists.

Tucker, J. A., & Burns, M. K. (2016). Helping students remember what they learn: An intervention for teachers and school psychologists. Communiqué, 44(6), 23.

Wisniewski, B., Zierer, K., & Hattie, J. (2020). The power of feedback revisited: A meta-analysis of educational feedback research. Frontiers in Psychology, 10, 3087.

Zaslofsky, A. F., Scholin, S. E., Burns, M. K., & Varma, S. (2016). Comparison of opportunities to respond and generation effect as potential causal mechanisms for incremental rehearsal with multiplication combinations. Journal of School Psychology, 55, 71-78.
Matthew Burns
Dr. Matthew K. Burns is the Fien Professor of Special Education at the University of Florida and an
assistant director of the University of Florida Literacy Institute. Dr. Burns has published over 200
articles and book chapters in national publications and has co-authored or co-edited 15 books.
Dr. Burns is one of the leading researchers regarding the use of assessment data to determine
individual or small-group interventions and has published extensively on response to intervention,
academic interventions, and facilitating problem-solving teams. He received the 2020 Senior
Scientist Award from Division 16 (School Psychology) of the American Psychological Association.
Valentina Contesse
Dr. Valentina Contesse is a clinical assistant professor of special education at the University of
Florida. Dr. Contesse is a University of Florida Literacy Institute (UFLI) faculty member and the
co-author ofUFLI Foundations: An Explicit and Systematic Phonics Program. Dr. Contesse teaches
courses in various UF teacher preparation programs and coordinates professional development for
practicing teachers focused on evidence-based reading instruction. Her research focuses on early
literacy instruction and intervention, teacher preparation and training in reading, and the effects
of performance feedback on the implementation of evidence-based instructional practices.